Integrating Profilers with Claude Code

Profiling is essential for understanding your application’s performance characteristics and identifying bottlenecks. Integrating profilers with Claude Code creates a powerful workflow for analyzing performance data and implementing optimizations. This guide covers how to set up and use various profiling tools alongside Claude Code for performance analysis.

Understanding Profiling Fundamentals

Before integrating profilers, understand the main types of profiling available:

  1. CPU profiling measures where execution time goes. Deterministic tracers such as cProfile record every function call, while sampling profilers such as py-spy take periodic stack snapshots with low overhead.
  2. Memory profiling tracks allocations and peak usage, as memory_profiler does line by line.

Claude Code can help interpret profiler output, suggest optimizations based on the data, and implement changes iteratively.

Setting Up Python Profilers

Python offers several profiling tools that integrate well with Claude Code workflows.

Using cProfile

The built-in cProfile module provides detailed execution profiling:

python -m cProfile -o profile.stats your_script.py

This generates a .stats file that Claude Code can analyze. You can then use the pstats module to examine results:

import pstats
p = pstats.Stats('profile.stats')
p.sort_stats('cumulative').print_stats(20)

Installing py-spy for Real-time Profiling

py-spy profiles Python programs without modifying code:

pip install py-spy

Get a live, top-like view while a script runs (or attach to an already running process with py-spy top --pid <PID>):

py-spy top -- python your_script.py

Generate a flame graph:

py-spy record -o flamegraph.svg -- python your_script.py

Memory Profiling with memory_profiler

Track memory usage line-by-line:

pip install memory_profiler

Add profiling decorators to functions:

from memory_profiler import profile

@profile
def memory_intensive_function():
    data = [i ** 2 for i in range(1000000)]
    return sum(data)

Run with memory tracking:

python -m memory_profiler your_script.py

Integrating Profilers with Claude Code Workflow

Automated Profile Analysis

Create a helper script that Claude Code can interpret:

import pstats
import sys

def analyze_profile(stats_file, top_n=20):
    p = pstats.Stats(stats_file)
    p.sort_stats('cumulative')

    print("=== Top Functions by Cumulative Time ===")
    p.print_stats(top_n)

    print("\n=== Most Frequently Called Functions ===")
    p.sort_stats('ncalls')
    p.print_stats(top_n)

    print("\n=== Slowest Individual Functions ===")
    p.sort_stats('time')
    p.print_stats(top_n)

if __name__ == '__main__':
    analyze_profile(sys.argv[1] if len(sys.argv) > 1 else 'profile.stats')

Run this script and share the output with Claude Code to get optimization suggestions.

Continuous Profiling in Development

Set up a profiling wrapper for repeated testing:

import cProfile
import pstats
import io
from functools import wraps

def profile_function(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        profiler = cProfile.Profile()
        profiler.enable()
        try:
            result = func(*args, **kwargs)
        finally:
            # Stop the profiler even if the wrapped function raises.
            profiler.disable()

        s = io.StringIO()
        ps = pstats.Stats(profiler, stream=s)
        ps.sort_stats('cumulative')
        ps.print_stats(15)

        print(s.getvalue())

        return result
    return wrapper

@profile_function
def your_function():
    # Your code here
    pass

JavaScript and TypeScript Profiling

Using Chrome DevTools Protocol

For Node.js applications:

npm install v8-profiler-next

Profile CPU and memory:

const fs = require('fs');
const profiler = require('v8-profiler-next');

profiler.startProfiling('profile', true);

// Run your code

const profile = profiler.stopProfiling('profile');
profile.export()
  .pipe(fs.createWriteStream('profile.cpuprofile'))
  .on('finish', () => profiler.deleteAllProfiles());

Bun Profiler Integration

Bun has built-in profiling support:

bun --profile your_script.ts

This generates a Chrome-compatible profile that you can load from the Performance panel in Chrome DevTools.

Analyzing Profile Results with Claude Code

When you have profiling data, share it with Claude Code using these strategies:

Paste Summary Statistics

Copy the top functions from your profile output and ask Claude Code to explain the bottlenecks and suggest optimizations.

Share Flame Graphs

Upload flame graph SVGs and ask Claude Code to identify the hottest code paths.

Iterative Optimization Workflow

  1. Run profiler on your code
  2. Identify top bottleneck
  3. Ask Claude Code to optimize that specific function
  4. Re-profile to verify improvement
  5. Repeat until performance goals are met
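Steps 1 and 2 of this loop can be scripted so each iteration starts from fresh numbers. The sketch below is a minimal example (top_bottleneck and example_workload are illustrative names, not part of any library) that profiles one call and reports the single worst function by cumulative time:

```python
import cProfile
import pstats

def top_bottleneck(func, *args, **kwargs):
    """Profile one call of func and return the worst function by cumulative time."""
    profiler = cProfile.Profile()
    profiler.enable()
    try:
        func(*args, **kwargs)
    finally:
        profiler.disable()

    stats = pstats.Stats(profiler)
    # stats.stats maps (filename, line, name) -> (cc, ncalls, tottime, cumtime, callers)
    (filename, lineno, name), entry = max(
        stats.stats.items(), key=lambda item: item[1][3]
    )
    return f"{name} ({filename}:{lineno})", entry[3]

def example_workload():
    return sum(i * i for i in range(100_000))

desc, seconds = top_bottleneck(example_workload)
print(f"Top bottleneck: {desc} at {seconds:.3f}s cumulative")
```

Re-run the script after each optimization and compare the reported function and time against the previous run.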

How to Frame Profiler Output for Claude Code

The quality of Claude Code’s optimization suggestions depends heavily on how you present your profiling data. Vague prompts produce vague advice.

Less effective prompt: “My app is slow. Here is the profiler output.”

More effective prompt: “Here is my cProfile output for a Django view that handles file uploads. The function process_image appears at the top with 4.2 seconds cumulative time out of 5 seconds total. It is called 1 time. The function calls PIL.Image.open 847 times. Help me understand why and suggest optimizations.”

This level of specificity lets Claude Code zero in on the actual bottleneck — in this case, opening the same image file repeatedly in a loop — rather than offering general Python optimization advice.

Providing Context Alongside Profile Data

For the best results, share three things with Claude Code:

  1. The relevant profile output (top 20-30 lines by cumulative time)
  2. The source code of the hot functions
  3. A brief description of what the code is supposed to do

Claude Code can then reason about the relationship between the profile data and the code structure rather than guessing from statistics alone.
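One way to package all three at once is a small helper script. This is a sketch (report_hot_functions and the demo workload are illustrative, not from any library) that prints the top profile entries followed by the source of each suspected hot function:

```python
import cProfile
import inspect
import pstats

def report_hot_functions(stats, functions, top_n=20):
    """Print the top profile entries plus the source of each suspected hot function."""
    stats.sort_stats('cumulative')
    print(f"=== Profile (top {top_n} by cumulative time) ===")
    stats.print_stats(top_n)
    for func in functions:
        print(f"=== Source of {func.__qualname__} ===")
        try:
            print(inspect.getsource(func))
        except OSError:
            # Builtins and interactively defined functions have no source file.
            print("(source unavailable)")

# Demo: profile a small workload, then build the combined report.
def hot_function():
    return [i ** 2 for i in range(50_000)]

profiler = cProfile.Profile()
profiler.enable()
hot_function()
profiler.disable()

report_hot_functions(pstats.Stats(profiler), [hot_function], top_n=10)
```

Paste the combined output into the conversation along with a sentence describing what the code is supposed to do.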

Working with Go and Rust Profilers

Claude Code handles profiling data from multiple languages, not just Python and JavaScript.

Go pprof Integration

Go’s built-in pprof package generates profiling data that Claude Code can interpret:

import (
    "net/http"
    _ "net/http/pprof"
    "log"
)

func main() {
    go func() {
        log.Println(http.ListenAndServe("localhost:6060", nil))
    }()
    // rest of your application
}

Capture a 30-second CPU profile:

go tool pprof 'http://localhost:6060/debug/pprof/profile?seconds=30'

Export the profile as text and share with Claude Code:

go tool pprof -text profile.pb.gz > profile_output.txt

Rust Flamegraph

For Rust, cargo-flamegraph produces visual profiling data:

cargo install flamegraph
cargo flamegraph --bin your_binary

Share the generated flamegraph.svg with Claude Code by describing the visible hot paths.

Best Practices for Profiler Integration

Common Profiler Interpretations

Claude Code can help interpret common profiling patterns:

Pattern                                  | Likely Cause         | Solution
High cumulative time in a small function | I/O or a system call | Batch operations or use async
Many function calls                      | Loop inefficiency    | Vectorize or reduce call overhead
Growing memory usage                     | Memory leak          | Review object lifecycles
High CPU time in GC                      | Too many allocations | Reduce object creation
Long-tail latency                        | Lock contention      | Profile concurrency separately
Repeated DB calls in a hot path          | N+1 query problem    | Add caching or batch the queries
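For the last pattern in the table, the N+1 problem and its batched fix can be sketched in plain Python. The fetch helpers below are hypothetical stand-ins for real database calls; the counter simulates round-trips:

```python
# N+1 pattern: one query per id inside the loop.
def load_users_n_plus_one(ids, fetch_user):
    return [fetch_user(i) for i in ids]      # len(ids) round-trips

# Batched fix: a single query for all ids.
def load_users_batched(ids, fetch_users_bulk):
    by_id = fetch_users_bulk(ids)            # one round-trip
    return [by_id[i] for i in ids]

# Simulated backend that counts round-trips.
round_trips = {"count": 0}

def fetch_user(user_id):
    round_trips["count"] += 1
    return {"id": user_id}

def fetch_users_bulk(ids):
    round_trips["count"] += 1
    return {i: {"id": i} for i in ids}

load_users_n_plus_one(range(100), fetch_user)      # 100 round-trips
load_users_batched(range(100), fetch_users_bulk)   # 1 round-trip
print(round_trips["count"])                        # prints 101
```

In a profile, the N+1 version shows up as a database call with a very high call count in a hot path; the batched version collapses it to one call.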

Frequently Asked Questions

Can Claude Code read raw .stats files directly?

Claude Code cannot open binary files directly. Convert your .stats file to readable text first using the pstats module’s print_stats() method, then paste the output into the conversation.
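A quick conversion looks like this (a minimal sketch; the stats_to_text name and output filename are arbitrary):

```python
import cProfile
import pstats

def stats_to_text(stats_file='profile.stats', out_file='profile.txt', top_n=30):
    """Dump the top functions from a binary .stats file into a readable text file."""
    with open(out_file, 'w') as fh:
        stats = pstats.Stats(stats_file, stream=fh)
        stats.sort_stats('cumulative')
        stats.print_stats(top_n)

# Demo: create a small profile, then convert it to text.
cProfile.run('sum(i * i for i in range(10000))', 'profile.stats')
stats_to_text('profile.stats', 'profile.txt', top_n=10)
```

The resulting profile.txt can be pasted directly into the conversation.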

How much profiler output should I share?

The top 20-30 functions by cumulative time is usually enough. Sharing hundreds of lines makes it harder for Claude Code to focus on the actual bottlenecks. If a specific function deep in the list is causing concern, include just that function’s section with context.

Does Claude Code understand flamegraph SVG files?

Claude Code can analyze flamegraph SVGs when you describe the visible stack patterns. Describe which bars are widest and which function names appear most prominently — Claude Code can reason from that description even without directly reading the SVG structure.

Should I profile with or without debug symbols?

Always profile with debug symbols enabled. Release builds with optimizations can obscure which source code corresponds to which profiled function, making Claude Code’s suggestions harder to apply.

Built by theluckystrike — More at zovo.one