The /insights command in Claude Code generates a comprehensive HTML report analyzing usage patterns across all of your Claude Code sessions. It’s designed to help you understand how you interact with Claude, what’s working well, where friction occurs, and how to improve your workflows.

Its output is really cool and I encourage you to try it and read it through!

Command: /insights

Description: “Generate a report analyzing your Claude Code sessions”

Output: An interactive HTML report saved to ~/.claude/usage-data/report.html

But what’s really happening under the hood? Let’s trace through the entire pipeline.


The Analysis Pipeline

The insights generation is a multi-stage process:

  1. Collect all your session logs from ~/.claude/projects/
  2. Filter out agent sub-sessions and internal operations
  3. Extract metadata from each session (tokens, tools used, duration, etc.)
  4. Run LLM analysis to extract “facets” (qualitative assessments) from session transcripts
  5. Aggregate all the data across sessions
  6. Generate insights using multiple specialized prompts
  7. Render an interactive HTML report

The facets are cached in ~/.claude/usage-data/facets/ so subsequent runs are faster.


Stage 1: Session Filtering & Metadata Extraction

Before any LLM calls, Claude Code processes your session logs to extract structured metadata.

Sessions are filtered to exclude (a sketch of the predicate follows this list):

  • Agent sub-sessions (files starting with agent-)
  • Internal facet-extraction sessions
  • Sessions with fewer than 2 user messages
  • Sessions shorter than 1 minute
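
In code, this is presumably a simple predicate over the session file and a couple of counters. A minimal TypeScript sketch, where isInternalSession is a hypothetical stand-in for the internal facet-session check (not a real API):

import { basename } from "node:path";

// Sketch only: the "agent-" prefix and the thresholds come from the list above.
function keepSession(
  filePath: string,
  userMessageCount: number,
  durationMinutes: number
): boolean {
  return !basename(filePath).startsWith("agent-") // agent sub-sessions
      && !isInternalSession(filePath)             // assumed helper for internal facet runs
      && userMessageCount >= 2
      && durationMinutes >= 1;
}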

Metadata extracted per session (collected into a shape like the sketch after this list):

  • session_id - Unique identifier
  • start_time - When the session began
  • duration_minutes - How long the session lasted
  • user_message_count - Number of user messages
  • input_tokens / output_tokens - Token usage
  • tool_counts - Which tools were used and how often
  • languages - Programming languages detected from file extensions
  • git_commits / git_pushes - Git activity
  • user_interruptions - How often you interrupted Claude
  • tool_errors - Tool failures and their categories
  • lines_added / lines_removed / files_modified - Code changes
  • uses_task_agent / uses_mcp / uses_web_search / uses_web_fetch - Feature usage
  • first_prompt - Your initial message
  • summary - Brief session summary
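
Put together, one session’s metadata plausibly looks like this TypeScript shape. The field names are taken from the list above; the concrete types are my guesses:

interface SessionMetadata {
  session_id: string;
  start_time: string;                   // ISO timestamp (assumed)
  duration_minutes: number;
  user_message_count: number;
  input_tokens: number;
  output_tokens: number;
  tool_counts: Record<string, number>;  // e.g. { Edit: 12, Bash: 5 }
  languages: string[];                  // detected from file extensions
  git_commits: number;
  git_pushes: number;
  user_interruptions: number;
  tool_errors: Record<string, number>;  // failures per category (assumed shape)
  lines_added: number;
  lines_removed: number;
  files_modified: number;
  uses_task_agent: boolean;
  uses_mcp: boolean;
  uses_web_search: boolean;
  uses_web_fetch: boolean;
  first_prompt: string;
  summary: string;
}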

Stage 2: Transcript Summarization (For Long Sessions)

If a session transcript exceeds 30,000 characters, it’s chunked into 25,000-character segments and each chunk is summarized before facet extraction.
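
The mechanics are simple enough to sketch. Here callLLM is an assumed helper, SUMMARIZATION_PROMPT is the prompt shown below, and the thresholds are the ones from the text:

const SUMMARIZE_THRESHOLD = 30_000; // chars
const CHUNK_SIZE = 25_000;          // chars

async function summarizeInChunks(transcript: string): Promise<string> {
  if (transcript.length <= SUMMARIZE_THRESHOLD) return transcript;
  const chunks: string[] = [];
  for (let i = 0; i < transcript.length; i += CHUNK_SIZE) {
    chunks.push(transcript.slice(i, i + CHUNK_SIZE));
  }
  // Summarize each chunk independently, then stitch the summaries together.
  const summaries = await Promise.all(
    chunks.map((c) => callLLM(SUMMARIZATION_PROMPT + "\n\n" + c))
  );
  return summaries.join("\n\n");
}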

Transcript Summarization Prompt

Summarize this portion of a Claude Code session transcript. Focus on:
1. What the user asked for
2. What Claude did (tools used, files modified)
3. Any friction or issues
4. The outcome

Keep it concise - 3-5 sentences. Preserve specific details like file names,
error messages, and user feedback.

TRANSCRIPT CHUNK:

Stage 3: Facet Extraction

This is the core qualitative analysis. For each session (up to 50 new sessions per run), Claude analyzes the transcript to extract structured “facets” - qualitative assessments of what happened.

Model: Haiku (fast, cost-effective)
Max output tokens: 4096

Facet Extraction Prompt

Analyze this Claude Code session and extract structured facets.

CRITICAL GUIDELINES:

1. **goal_categories**: Count ONLY what the USER explicitly asked for.
   - DO NOT count Claude's autonomous codebase exploration
   - DO NOT count work Claude decided to do on its own
   - ONLY count when user says "can you...", "please...", "I need...", "let's..."

2. **user_satisfaction_counts**: Base ONLY on explicit user signals.
   - "Yay!", "great!", "perfect!" → happy
   - "thanks", "looks good", "that works" → satisfied
   - "ok, now let's..." (continuing without complaint) → likely_satisfied
   - "that's not right", "try again" → dissatisfied
   - "this is broken", "I give up" → frustrated

3. **friction_counts**: Be specific about what went wrong.
   - misunderstood_request: Claude interpreted incorrectly
   - wrong_approach: Right goal, wrong solution method
   - buggy_code: Code didn't work correctly
   - user_rejected_action: User said no/stop to a tool call
   - excessive_changes: Over-engineered or changed too much

4. If very short or just warmup, use warmup_minimal for goal_category

SESSION:
<session transcript is inserted here>

RESPOND WITH ONLY A VALID JSON OBJECT matching this schema:
{
  "underlying_goal": "What the user fundamentally wanted to achieve",
  "goal_categories": {"category_name": count, ...},
  "outcome": "fully_achieved|mostly_achieved|partially_achieved|not_achieved|unclear_from_transcript",
  "user_satisfaction_counts": {"level": count, ...},
  "claude_helpfulness": "unhelpful|slightly_helpful|moderately_helpful|very_helpful|essential",
  "session_type": "single_task|multi_task|iterative_refinement|exploration|quick_question",
  "friction_counts": {"friction_type": count, ...},
  "friction_detail": "One sentence describing friction or empty",
  "primary_success": "none|fast_accurate_search|correct_code_edits|good_explanations|proactive_help|multi_file_changes|good_debugging",
  "brief_summary": "One sentence: what user wanted and whether they got it"
}
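
Since the model is told to respond with only JSON, the caller presumably parses defensively. Here is a sketch of the facet shape (fields from the schema above) and a tolerant parse; the fence-stripping is my assumption, not confirmed behavior:

interface SessionFacets {
  underlying_goal: string;
  goal_categories: Record<string, number>;
  outcome: string;
  user_satisfaction_counts: Record<string, number>;
  claude_helpfulness: string;
  session_type: string;
  friction_counts: Record<string, number>;
  friction_detail: string;
  primary_success: string;
  brief_summary: string;
}

function parseFacets(raw: string): SessionFacets | null {
  // Strip accidental Markdown code fences before parsing.
  const cleaned = raw.trim().replace(/^```(?:json)?\s*/, "").replace(/\s*```$/, "");
  try {
    return JSON.parse(cleaned) as SessionFacets;
  } catch {
    return null; // treat unparseable output as a failed extraction
  }
}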

Goal Categories

Category             Description
debug_investigate    Debug/Investigate
implement_feature    Implement Feature
fix_bug              Fix Bug
write_script_tool    Write Script/Tool
refactor_code        Refactor Code
configure_system     Configure System
create_pr_commit     Create PR/Commit
analyze_data         Analyze Data
understand_codebase  Understand Codebase
write_tests          Write Tests
write_docs           Write Docs
deploy_infra         Deploy/Infra
warmup_minimal       Cache Warmup (minimal sessions)

Satisfaction Levels:

frustrated, dissatisfied, likely_satisfied, satisfied, happy, unsure

Outcome Categories:

not_achieved, partially_achieved, mostly_achieved, fully_achieved, unclear_from_transcript

Friction Categories

Category                Description
misunderstood_request   Claude interpreted incorrectly
wrong_approach          Right goal, wrong solution method
buggy_code              Code didn’t work correctly
user_rejected_action    User said no/stop to a tool call
claude_got_blocked      Claude got stuck
user_stopped_early      User stopped before completion
wrong_file_or_location  Edited wrong file/location
excessive_changes       Over-engineered or changed too much
slow_or_verbose         Too slow or verbose
tool_failed             Tool failure
user_unclear            User’s request was unclear
external_issue          External/environmental issue

Claude Helpfulness Levels:

unhelpful, slightly_helpful, moderately_helpful, very_helpful, essential

Session Types

Type                  Description
single_task           One focused task
multi_task            Multiple tasks in one session
iterative_refinement  Back-and-forth refinement
exploration           Exploring/understanding codebase
quick_question        Brief Q&A

Primary Success Categories

Category              Description
none                  No notable success
fast_accurate_search  Quick, accurate code search
correct_code_edits    Accurate code modifications
good_explanations     Clear explanations
proactive_help        Helpful suggestions beyond the ask
multi_file_changes    Successfully coordinated multi-file edits
good_debugging        Effective debugging

Stage 4: Aggregated Analysis

Once all session data and facets are collected, they’re aggregated and processed through multiple specialized analysis prompts.

Model: Haiku
Max output tokens: 8192 per prompt

Data Passed to Analysis Prompts

Each analysis prompt receives aggregated statistics, built by straightforward counting over the metadata and facets (sketched after the lists below):

{
  sessions: <total sessions>,
  analyzed: <sessions with facets>,
  date_range: { start, end },
  messages: <total messages>,
  hours: <total duration in hours>,
  commits: <git commits>,
  top_tools: [top 8 tools by usage],
  top_goals: [top 8 goal categories],
  outcomes: { outcome distribution },
  satisfaction: { satisfaction distribution },
  friction: { friction type counts },
  success: { success category counts },
  languages: { language usage counts }
}

Plus text summaries:

  • SESSION SUMMARIES: Up to 50 brief summaries
  • FRICTION DETAILS: Up to 20 friction details from facets
  • USER INSTRUCTIONS TO CLAUDE: Up to 15 repeated instructions users gave Claude
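
Building those statistics is plain counting; here is a sketch of how the distributions and top-N lists might be computed. The mechanics are assumed, and SessionFacets is the shape sketched in Stage 3:

// Merge one session's facet counts into a running total.
function mergeCounts(
  into: Record<string, number>,
  from: Record<string, number>
): void {
  for (const [key, n] of Object.entries(from)) {
    into[key] = (into[key] ?? 0) + n;
  }
}

// Pick the N most frequent entries, e.g. the top 8 tools or goals.
function topN(counts: Record<string, number>, n: number): [string, number][] {
  return Object.entries(counts)
    .sort(([, a], [, b]) => b - a)
    .slice(0, n);
}

declare const facets: Record<string, SessionFacets>; // from Stage 3

const friction: Record<string, number> = {};
const goals: Record<string, number> = {};
for (const f of Object.values(facets)) {
  mergeCounts(friction, f.friction_counts);
  mergeCounts(goals, f.goal_categories);
}
const top_goals = topN(goals, 8);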

4.1 Project Areas Analysis

Analyze this Claude Code usage data and identify project areas.

RESPOND WITH ONLY A VALID JSON OBJECT:
{
  "areas": [
    {
      "name": "Area name",
      "session_count": N,
      "description": "2-3 sentences about what was worked on and how Claude Code was used."
    }
  ]
}

Include 4-5 areas. Skip internal CC operations.

4.2 Interaction Style Analysis

Analyze this Claude Code usage data and describe the user's interaction style.

RESPOND WITH ONLY A VALID JSON OBJECT:
{
  "narrative": "2-3 paragraphs analyzing HOW the user interacts with Claude Code.
               Use second person 'you'. Describe patterns: iterate quickly vs
               detailed upfront specs? Interrupt often or let Claude run?
               Include specific examples. Use **bold** for key insights.",
  "key_pattern": "One sentence summary of most distinctive interaction style"
}

4.3 What Works Well

Analyze this Claude Code usage data and identify what's working well for this user.
Use second person ("you").

RESPOND WITH ONLY A VALID JSON OBJECT:
{
  "intro": "1 sentence of context",
  "impressive_workflows": [
    {
      "title": "Short title (3-6 words)",
      "description": "2-3 sentences describing the impressive workflow or approach.
                      Use 'you' not 'the user'."
    }
  ]
}

Include 3 impressive workflows.

4.4 Friction Analysis

Analyze this Claude Code usage data and identify friction points for this user.
Use second person ("you").

RESPOND WITH ONLY A VALID JSON OBJECT:
{
  "intro": "1 sentence summarizing friction patterns",
  "categories": [
    {
      "category": "Concrete category name",
      "description": "1-2 sentences explaining this category and what could be
                      done differently. Use 'you' not 'the user'.",
      "examples": ["Specific example with consequence", "Another example"]
    }
  ]
}

Include 3 friction categories with 2 examples each.

4.5 Suggestions & Improvements

This is the longest prompt, providing actionable recommendations:

Analyze this Claude Code usage data and suggest improvements.

## CC FEATURES REFERENCE (pick from these for features_to_try):

1. **MCP Servers**: Connect Claude to external tools, databases, and APIs via
   Model Context Protocol.
   - How to use: Run `claude mcp add <server-name> -- <command>`
   - Good for: database queries, Slack integration, GitHub issue lookup,
     connecting to internal APIs

2. **Custom Skills**: Reusable prompts you define as markdown files that run
   with a single /command.
   - How to use: Create `.claude/skills/commit/SKILL.md` with instructions.
     Then type `/commit` to run it.
   - Good for: repetitive workflows - /commit, /review, /test, /deploy, /pr,
     or complex multi-step workflows

3. **Hooks**: Shell commands that auto-run at specific lifecycle events.
   - How to use: Add to `.claude/settings.json` under "hooks" key.
   - Good for: auto-formatting code, running type checks, enforcing conventions

4. **Headless Mode**: Run Claude non-interactively from scripts and CI/CD.
   - How to use: `claude -p "fix lint errors" --allowedTools "Edit,Read,Bash"`
   - Good for: CI/CD integration, batch code fixes, automated reviews

5. **Task Agents**: Claude spawns focused sub-agents for complex exploration
   or parallel work.
   - How to use: Claude auto-invokes when helpful, or ask "use an agent to explore X"
   - Good for: codebase exploration, understanding complex systems

RESPOND WITH ONLY A VALID JSON OBJECT:
{
  "claude_md_additions": [
    {
      "addition": "A specific line or block to add to CLAUDE.md based on workflow
                   patterns. E.g., 'Always run tests after modifying auth-related files'",
      "why": "1 sentence explaining why this would help based on actual sessions",
      "prompt_scaffold": "Instructions for where to add this in CLAUDE.md.
                          E.g., 'Add under ## Testing section'"
    }
  ],
  "features_to_try": [
    {
      "feature": "Feature name from CC FEATURES REFERENCE above",
      "one_liner": "What it does",
      "why_for_you": "Why this would help YOU based on your sessions",
      "example_code": "Actual command or config to copy"
    }
  ],
  "usage_patterns": [
    {
      "title": "Short title",
      "suggestion": "1-2 sentence summary",
      "detail": "3-4 sentences explaining how this applies to YOUR work",
      "copyable_prompt": "A specific prompt to copy and try"
    }
  ]
}

IMPORTANT for claude_md_additions: PRIORITIZE instructions that appear MULTIPLE TIMES
in the user data. If user told Claude the same thing in 2+ sessions (e.g.,
'always run tests', 'use TypeScript'), that's a PRIME candidate - they shouldn't
have to repeat themselves.

IMPORTANT for features_to_try: Pick 2-3 from the CC FEATURES REFERENCE above.
Include 2-3 items for each category.

4.6 On The Horizon (Future Opportunities)

Analyze this Claude Code usage data and identify future opportunities.

RESPOND WITH ONLY A VALID JSON OBJECT:
{
  "intro": "1 sentence about evolving AI-assisted development",
  "opportunities": [
    {
      "title": "Short title (4-8 words)",
      "whats_possible": "2-3 ambitious sentences about autonomous workflows",
      "how_to_try": "1-2 sentences mentioning relevant tooling",
      "copyable_prompt": "Detailed prompt to try"
    }
  ]
}

Include 3 opportunities. Think BIG - autonomous workflows, parallel agents,
iterating against tests.

4.7 Fun Ending (Memorable Moment)

Analyze this Claude Code usage data and find a memorable moment.

RESPOND WITH ONLY A VALID JSON OBJECT:
{
  "headline": "A memorable QUALITATIVE moment from the transcripts - not a statistic.
               Something human, funny, or surprising.",
  "detail": "Brief context about when/where this happened"
}

Find something genuinely interesting or amusing from the session summaries.

Stage 5: At a Glance Summary

The final LLM call generates an executive summary that ties everything together. This prompt receives all the previously generated insights as context.

At a Glance Prompt

You're writing an "At a Glance" summary for a Claude Code usage insights report
for Claude Code users. The goal is to help them understand their usage and
improve how they can use Claude better, especially as models improve.

Use this 4-part structure:

1. **What's working** - What is the user's unique style of interacting with Claude
   and what are some impactful things they've done? You can include one or two
   details, but keep it high level since things might not be fresh in the user's
   memory. Don't be fluffy or overly complimentary. Also, don't focus on the
   tool calls they use.

2. **What's hindering you** - Split into (a) Claude's fault (misunderstandings,
   wrong approaches, bugs) and (b) user-side friction (not providing enough
   context, environment issues -- ideally more general than just one project).
   Be honest but constructive.

3. **Quick wins to try** - Specific Claude Code features they could try from the
   examples below, or a workflow technique if you think it's really compelling.
   (Avoid stuff like "Ask Claude to confirm before taking actions" or "Type out
   more context up front" which are less compelling.)

4. **Ambitious workflows for better models** - As we move to much more capable
   models over the next 3-6 months, what should they prepare for? What workflows
   that seem impossible now will become possible? Draw from the appropriate
   section below.

Keep each section to 2-3 not-too-long sentences. Don't overwhelm the user.
Don't mention specific numerical stats or underlined_categories from the
session data below. Use a coaching tone.

RESPOND WITH ONLY A VALID JSON OBJECT:
{
  "whats_working": "(refer to instructions above)",
  "whats_hindering": "(refer to instructions above)",
  "quick_wins": "(refer to instructions above)",
  "ambitious_workflows": "(refer to instructions above)"
}

SESSION DATA:
<aggregated statistics JSON>

## Project Areas (what user works on)
<project_areas results>

## Big Wins (impressive accomplishments)
<what_works results>

## Friction Categories (where things go wrong)
<friction_analysis results>

## Features to Try
<suggestions.features_to_try results>

## Usage Patterns to Adopt
<suggestions.usage_patterns results>

## On the Horizon (ambitious workflows for better models)
<on_the_horizon results>
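
Mechanically, this final prompt is presumably just the instruction text concatenated with the earlier results, following the section labels in the template above. A sketch, with AT_A_GLANCE_INSTRUCTIONS standing in for the instruction text:

function buildAtAGlancePrompt(stats: unknown, insights: any): string {
  const section = (title: string, body: unknown) =>
    `## ${title}\n${JSON.stringify(body, null, 2)}`;
  return [
    AT_A_GLANCE_INSTRUCTIONS,
    `SESSION DATA:\n${JSON.stringify(stats)}`,
    section("Project Areas (what user works on)", insights.project_areas),
    section("Big Wins (impressive accomplishments)", insights.what_works),
    section("Friction Categories (where things go wrong)", insights.friction),
    section("Features to Try", insights.suggestions.features_to_try),
    section("Usage Patterns to Adopt", insights.suggestions.usage_patterns),
    section("On the Horizon (ambitious workflows for better models)", insights.on_the_horizon),
  ].join("\n\n");
}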

Stage 6: Report Generation

All the collected data and LLM-generated insights are rendered into an interactive HTML report; a sketch of the rendering idea follows the section lists below.

Statistics Dashboard:

  • Total sessions, messages, duration, tokens
  • Git commits and pushes
  • Active days and streaks
  • Peak activity hours

Visualizations:

  • Daily activity charts
  • Tool usage distribution
  • Language breakdown
  • Satisfaction distribution
  • Outcome tracking

Narrative Sections:

  • Project areas with descriptions
  • Interaction style analysis
  • What’s working well (impressive workflows)
  • Friction analysis with specific examples
  • CLAUDE.md additions to try
  • Features to explore
  • On the horizon opportunities
  • Fun memorable moment
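
Since the report is a single HTML file, the rendering step is presumably a template fill that embeds the data for client-side charts. A minimal sketch of the idea, not the actual implementation:

function renderReport(aggregated: unknown, insights: unknown): string {
  // A real implementation would escape "</script>" inside the payload.
  const payload = JSON.stringify({ aggregated, insights });
  return `<!DOCTYPE html>
<html>
  <head><meta charset="utf-8"><title>Claude Code Insights</title></head>
  <body>
    <div id="root"></div>
    <script>window.__DATA__ = ${payload};</script>
    <script>/* chart + narrative-section rendering reads window.__DATA__ */</script>
  </body>
</html>`;
}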

Pipeline Pseudocode

Here’s how the stages connect:

function generateInsights() {
    // Stage 1: Load session logs, then drop agent/internal/trivial sessions
    let sessions = loadSessionLogs("~/.claude/projects/")
    sessions = sessions.filter(s =>
        !isAgentSession(s) &&
        !isInternalSession(s) &&
        s.userMessageCount >= 2 &&
        s.durationMinutes >= 1
    )

    // Extract structured metadata from each session
    const metadata = sessions.map(extractMetadata)

    // Stages 2 & 3: Extract facets, reusing the per-session cache
    const facets = {}
    for (const session of sessions) {
        const cached = loadCachedFacet(session.id)
        if (cached) {
            facets[session.id] = cached
            continue
        }
        let transcript = session.transcript
        if (transcript.length > 30000) {
            transcript = summarizeInChunks(transcript)
        }
        facets[session.id] = callLLM(FACET_EXTRACTION_PROMPT + transcript)
        saveFacetToCache(session.id, facets[session.id])
    }

    // Stage 4: Aggregate, then run the specialized analysis prompts
    const aggregated = aggregateAllData(metadata, facets)

    const insights = {}
    insights.project_areas     = callLLM(PROJECT_AREAS_PROMPT, aggregated)
    insights.interaction_style = callLLM(INTERACTION_STYLE_PROMPT, aggregated)
    insights.what_works        = callLLM(WHAT_WORKS_PROMPT, aggregated)
    insights.friction          = callLLM(FRICTION_PROMPT, aggregated)
    insights.suggestions       = callLLM(SUGGESTIONS_PROMPT, aggregated)
    insights.on_the_horizon    = callLLM(ON_THE_HORIZON_PROMPT, aggregated)
    insights.fun_ending        = callLLM(FUN_ENDING_PROMPT, aggregated)

    // Stage 5: Executive summary, conditioned on everything generated above
    insights.at_a_glance = callLLM(AT_A_GLANCE_PROMPT, { aggregated, ...insights })

    // Stage 6: Render and save the HTML report
    const html = renderReport(aggregated, insights)
    saveFile("~/.claude/usage-data/report.html", html)

    return insights
}

Data Storage

Path                                           Purpose
~/.claude/projects/<hash>/                     Session logs
~/.claude/usage-data/facets/<session-id>.json  Cached facets
~/.claude/usage-data/report.html               Generated report

Facets are cached per-session, so running /insights multiple times only analyzes new sessions.
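
Given the layout in the table above, the cache helpers are probably just one JSON file per session ID; a sketch using Node’s standard library:

import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

const FACET_DIR = join(homedir(), ".claude", "usage-data", "facets");

function loadCachedFacet(sessionId: string): SessionFacets | null {
  const path = join(FACET_DIR, `${sessionId}.json`);
  return existsSync(path) ? JSON.parse(readFileSync(path, "utf8")) : null;
}

function saveFacetToCache(sessionId: string, facets: SessionFacets): void {
  mkdirSync(FACET_DIR, { recursive: true });
  writeFileSync(join(FACET_DIR, `${sessionId}.json`), JSON.stringify(facets, null, 2));
}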


Technical Details

Setting                       Value
Model                         Haiku
Max output tokens             8192 per analysis prompt (4096 for facet extraction)
Sessions analyzed per run     Up to 50 new
Transcript size limit         30,000 chars
Chunk size for summarization  25,000 chars

Privacy Considerations

All analysis runs from your machine, with the LLM calls made through the Anthropic API. Your session data stays on your machine - the HTML report is generated locally and can be shared at your discretion.

The facet extraction focuses on patterns in your interactions, not the content of your code:

  • What types of tasks you ask for
  • How you respond to Claude’s output
  • Where friction occurs in the workflow
  • Which tools and features you use

Tips for Better Insights

  1. Use Claude Code regularly - More sessions = richer analysis
  2. Give feedback - Say “thanks” or “that’s not right” so satisfaction can be tracked
  3. Don’t filter yourself - Natural usage patterns reveal the most useful insights
  4. Run periodically - Check in monthly to see how your patterns evolve