Claude Skills Guide

How Do I See a Breakdown of Claude Skill Usage and Token Costs?

Understanding how Claude Code consumes tokens when you invoke skills helps you optimize workflows, manage costs, and make informed decisions about which skills to use for specific tasks. This guide covers the built-in tools and techniques for tracking skill usage and breaking down token costs.

Token Usage Basics in Claude Code

Claude Code tracks token consumption at two levels: session-level and skill-level. Session-level tracking covers all tokens processed during a conversation, while skill-level tracking attributes usage to specific skill invocations. When you invoke a skill such as /pdf or /tdd, the skill's loaded instructions and any documents it processes contribute to your overall token count.

The token consumption comes from three main sources: the skill definition itself (the instructions in the skill file), context from your workspace that the skill accesses, and the output generated by the skill’s execution. Understanding these components helps you estimate costs before running intensive operations.
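Once you have token counts from the API's usage fields, converting them to a dollar estimate is simple arithmetic. The sketch below uses placeholder per-million-token prices, not current Anthropic rates; substitute the actual prices for your model from Anthropic's pricing page.

```python
def estimate_cost_usd(input_tokens, output_tokens,
                      input_price_per_mtok=3.00,
                      output_price_per_mtok=15.00):
    """Rough cost estimate from token counts.

    The default prices are illustrative placeholders, not current
    Anthropic rates -- check the official pricing page for your model.
    """
    return (input_tokens * input_price_per_mtok
            + output_tokens * output_price_per_mtok) / 1_000_000

# Example: a session with 15,000 input and 4,000 output tokens
print(f"${estimate_cost_usd(15_000, 4_000):.3f}")
# → $0.105
```

Input and output tokens are priced differently, which is why the function takes them separately; skills that generate a lot of output (test suites, financial models) skew toward the higher output rate.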

Viewing Session Token Usage

Claude Code does not expose a standalone CLI command for token stats. Token usage and costs are tracked at the API level and visible in the Anthropic Console under Usage. During a session, you can ask Claude directly:

How many tokens have we used in this conversation so far?

Claude will report its context window usage. This is particularly useful when you’re running long operations with skills like xlsx that process large spreadsheets.

Skill-Specific Usage Tracking

There is no built-in per-skill token attribution command. To estimate which skills consume the most tokens in a session, track session lengths manually. Invoke a skill in a fresh session, note the reported context usage, then compare across skills:

Rough skill context estimates:
  pdf (10-page doc):   ~4,200 tokens
  tdd (module):        ~3,100 tokens
  xlsx (spreadsheet):  ~2,800 tokens
  conversation:        varies

This breakdown helps you identify which skills are most resource-intensive for your use cases.
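With baseline figures like those above in hand, you can pre-estimate a session's starting context before invoking a skill. A minimal sketch using the estimates from the list (these are rough observed figures, not guarantees):

```python
# Rough per-skill context baselines (hypothetical estimates,
# measured by invoking each skill in a fresh session)
SKILL_BASELINES = {
    "pdf": 4_200,
    "tdd": 3_100,
    "xlsx": 2_800,
}

def estimated_session_context(skill, conversation_tokens):
    """Baseline skill context plus expected conversation tokens."""
    return SKILL_BASELINES[skill] + conversation_tokens

print(estimated_session_context("pdf", 6_000))
# → 10200
```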

Tracking Usage Over Time

For projects where you want historical tracking, maintain a manual log file and append entries after each session:

# Token Log - Project Alpha

## 2026-03-14
- Session 1 (pdf skill): 15,730 tokens - Extracted tables from financial report
- Session 2 (tdd skill): 8,200 tokens - Generated tests for auth module
- Session 3 (xlsx skill): 22,100 tokens - Analyzed Q1 sales data

## 2026-03-13
- Session 4 (frontend-design skill): 12,400 tokens - Generated React components
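A short script can total tokens per skill from a log in this format. This sketch assumes the exact line format shown above (skill name in parentheses, comma-separated token count):

```python
import re
from collections import defaultdict

# Matches lines like: "- Session 1 (pdf skill): 15,730 tokens - ..."
LINE = re.compile(r"\((?P<skill>[\w-]+) skill\): (?P<tokens>[\d,]+) tokens")

def totals_per_skill(log_text):
    """Sum token counts per skill from a manual session log."""
    totals = defaultdict(int)
    for match in LINE.finditer(log_text):
        totals[match["skill"]] += int(match["tokens"].replace(",", ""))
    return dict(totals)

log = """
- Session 1 (pdf skill): 15,730 tokens - Extracted tables
- Session 2 (tdd skill): 8,200 tokens - Generated tests
- Session 3 (xlsx skill): 22,100 tokens - Analyzed Q1 sales
"""
print(totals_per_skill(log))
# → {'pdf': 15730, 'tdd': 8200, 'xlsx': 22100}
```

Run it over the whole log file periodically to see which skills dominate your spend over time.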

Practical Examples with Common Skills

PDF Skill Usage Tracking

When processing documents with the pdf skill, token usage scales with document complexity:

# Process a 10-page document
/pdf
Summarize the key points from contract.pdf
# Typical usage: 3,000-5,000 tokens

# Process a 100-page document with table extraction
/pdf
Extract all tables and create markdown
# Typical usage: 15,000-25,000 tokens

The pdf skill loads document content into context, so larger documents significantly increase token consumption. For batch processing, break large documents into smaller chunks to maintain predictable costs.
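One way to plan chunked processing is to precompute page ranges and run the skill on each range in turn. A minimal sketch (the 25-page default is an arbitrary choice for illustration, not a pdf-skill requirement):

```python
def page_chunks(total_pages, chunk_size=25):
    """Yield inclusive (start, end) page ranges covering the document."""
    for start in range(1, total_pages + 1, chunk_size):
        yield start, min(start + chunk_size - 1, total_pages)

print(list(page_chunks(100)))
# → [(1, 25), (26, 50), (51, 75), (76, 100)]
```

You can then prompt the skill once per range, e.g. "Extract all tables from pages 1-25 of the document", keeping each invocation's context within a predictable band.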

TDD Skill Usage Tracking

The tdd skill generates test code based on your source files:

# Generate tests for a single function
/tdd
Write pytest tests for this authentication function
# Typical usage: 2,000-4,000 tokens

# Generate comprehensive test suite for a module
/tdd
Create full test suite with unit tests, integration tests, and edge cases
# Typical usage: 8,000-15,000 tokens

The skill consumes tokens both from reading your source code and generating test output. Providing focused code snippets rather than entire files keeps usage manageable.

XLSX Skill Usage Tracking

The xlsx skill processes spreadsheet data:

# Analyze a small spreadsheet
/xlsx
Calculate total revenue from sales.xlsx
# Typical usage: 1,500-3,000 tokens

# Process large datasets with complex formulas
/xlsx
Build financial model with projections and sensitivity analysis
# Typical usage: 10,000-30,000 tokens

For large spreadsheets, target specific sheets to reduce the context loaded:

/xlsx
Analyze Q1 sheet only - calculate growth metrics

Supermemory and Frontend-Design Skills

The supermemory skill maintains context across sessions, which affects baseline token usage:

# With supermemory enabled, each session starts with context
claude "continue working on the API integration"
# Baseline context: 500-2,000 tokens per session

The frontend-design skill generates UI components and layouts:

# Generate a component based on description
/frontend-design
Create a responsive card component
# Typical usage: 3,000-6,000 tokens

# Generate full page layouts
/frontend-design
Design a dashboard with charts and data tables
# Typical usage: 10,000-20,000 tokens

Cost Optimization Strategies

Once you understand your usage patterns, apply these optimization techniques:

Load only necessary context. Before invoking a skill, specify exactly what Claude should focus on:

# Instead of loading entire file
/tdd
Write tests for auth.py

# Load specific function only
/tdd
Write tests for this authenticate_user function only

Use streaming for long outputs. When skills generate large amounts of content, stream responses to avoid timeout-related retries that consume additional tokens.

Set token budgets. For predictable cost management, specify scope in your prompt to cap output length:

/pdf
Summarize this document in 500 words or fewer
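To sanity-check a budget before sending a prompt, a common rule of thumb is roughly four characters per token for English text. This is a heuristic, not the model's actual tokenizer; exact counts come from the API's usage fields after the fact.

```python
def rough_token_estimate(text):
    """Heuristic: ~4 characters per token for English prose.

    Actual counts depend on the model's tokenizer; use this only
    for ballpark budgeting before a request.
    """
    return max(1, len(text) // 4)

prompt = "Summarize this document in 500 words or fewer"
print(rough_token_estimate(prompt))
# → 11
```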

Conclusion

Tracking Claude skill usage requires monitoring via the Anthropic Console for session-level costs, maintaining manual logs for historical analysis, and being intentional about what you load into context. The pdf, tdd, xlsx, supermemory, and frontend-design skills each have distinct usage patterns based on their function. Track context usage by asking Claude directly during sessions, and log sessions manually for long-term trend analysis. With these approaches, you can monitor costs, optimize workflows, and get maximum value from Claude Code skills.

Built by theluckystrike — More at zovo.one