Posts tagged "claude-code"

February 4, 2026

Deep Dive: How Claude Code's /insights Command Works - The /insights command in Claude Code generates a comprehensive HTML report analyzing your usage patterns across all your Claude Code sessions. It’s designed to help you understand how you interact with Claude, what’s working well, where friction occurs, and how to improve your workflows.

Its output is really cool, and I encourage you to try it and read it through!

Command: /insights

Description: “Generate a report analyzing your Claude Code sessions”

Output: An interactive HTML report saved to ~/.claude/usage-data/report.html
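
Since the report is just a static HTML file at that path, you can reopen a previously generated one without re-running the command (shown here with macOS's open; use xdg-open or a browser on other platforms):

```sh
# View the most recently generated insights report
open ~/.claude/usage-data/report.html
```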

But what’s really happening under the hood? Let’s trace through the entire pipeline.

[read more...]


November 26, 2025

A Mermaid Validation Skill for Claude Code - AI coding agents generate significantly more markdown documentation than we used to write manually. This creates opportunities to explain concepts visually with mermaid diagrams - flowcharts, sequence diagrams, and other visualizations defined in text. When Claude generates these diagrams, the syntax can be invalid even though the code looks correct. Claude Code skills provide a way to teach Claude domain-specific workflows - in this case, validating diagrams before marking the work complete.
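
The post covers the skill itself, but for a sense of what "validating" can mean here: mermaid's official CLI fails with a parse error on invalid diagram syntax, so a render attempt doubles as a syntax check. This invocation is my illustration of the idea, not necessarily what the skill does:

```sh
# Render the diagram; a non-zero exit code means the syntax is invalid
npx -p @mermaid-js/mermaid-cli mmdc -i diagram.mmd -o /tmp/diagram.svg
```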

[read more...]


August 4, 2025

Two Days. Two Models. One Surprise: Claude Code Under Limits - The upcoming weekly usage limits announced by Anthropic for their Claude Code Max plans could put a dent in the workflows of many developers - especially those who’ve grown dependent on Opus-level output.

I’ve been using Claude Code for the last couple of months, though nowhere near the levels I’ve seen from the top 5% of users (some of whom rack up thousands of dollars in usage per day). I don’t run multiple jobs concurrently (though I’ve experimented), and I don’t run it on GitHub itself. I don’t use git worktrees (as Anthropic recommends). I just focus on one task at a time and stay available to guide and assist my AI agent throughout the day.

On Friday, I decided to spend the full day using Opus exclusively across my usual two or three work projects. Nothing unusual - a typical 8-hour day, bouncing between tasks. At the end of the day, I measured my token usage using the excellent ccusage utility, which calculated what it would have cost via the API.
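
If you want to take the same measurement, ccusage can be run directly with npx (assuming Node is installed; subcommands and flags may vary by version):

```sh
# Summarise Claude Code token usage and API-equivalent cost by day
npx ccusage@latest daily
```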

[Image: Claude Code Opus usage]

Then today (Monday), I repeated the experiment - this time using Sonnet exclusively. Different tasks of course, but the same projects, similar complexity, and the same eight-hour block. Again I recorded the token usage.

[Image: Claude Code Sonnet usage]

Here’s what I found:

  • Token usage was comparable.
  • Sonnet’s cost was significantly lower (no surprise).
  • And the quality? Honestly, surprisingly good.

Sonnet held up well for all of my coding tasks. I even used it for some light planning work and it got the job done (not as well as Opus would have, but still very, very good).

Anthropic’s new limits suggest we’ll get 240-480 hours/week of Sonnet and 24-40 hours/week of Opus. Considering a full-time work week is 40 hours and there are 168 hours in a week in total, I think the following setup might actually be sustainable for most developers:

  1. Sonnet for hands-on coding tasks
  2. Sonnet + GitHub for code review and analysis
  3. Opus for high-level planning, design, or complex architectural thinking

I highly recommend being explicit about which model you want each of your custom slash commands and sub-agents to use. For slash commands, there is a model attribute you can put in the command front matter, and release 1.0.63 also allows setting the model in sub-agents.
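
For example, a hypothetical command file pinned to a specific model might look like this (the model attribute is per the above; the exact model string your version accepts may differ, so check the release notes):

```markdown
---
description: Summarise the changes made in this session
model: opus
---
Summarise the changes we have made in this session, calling out anything risky.
```

Sub-agent definition files take a model field in their front matter in the same way.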

I would love to see more transparency in the Claude Code UI about where we sit in real time against our session and weekly limits. If developers could see this data, I think they would adjust their usage to suit. We shouldn’t need third-party tools to track and report this information.

Based on this pattern, I don’t think I’ll hit the new weekly limits. But we’ll see - I’ll report back in September. And of course, there’s nothing stopping you from trialing other providers, other models, and even other agentic coding tools, really diving deep into using the best model for the job.


August 3, 2025

Claude Code's Feedback Survey - It seems that this weekend a bunch of people reported seeing Claude Code ask them for feedback on how well it’s doing in their current session. It looks like this:

[Image: Claude Code feedback survey]

And everyone finds it annoying. I feel things like this are akin to advertising. For a paid product, feedback surveys like this should be opt-in. Ask me at the start of the session if I’m OK with providing feedback. Give me the parameters of the feedback and let me opt in. Don’t pester me when I’m doing work.

I went digging in the code to see if maybe there was an undocumented setting I could slam into settings.json to hide this annoyance. What I found instead was an environment variable that switches it on more often!

`CLAUDE_FORCE_DISPLAY_SURVEY=1` will show that sucker lots!

These are the conditions that will show the survey:

  1. A minimum time before first feedback (600 seconds / 10 minutes)
  2. A minimum time between feedback requests (1800 seconds / 30 minutes)
  3. A minimum number of user turns before showing feedback
  4. Some probability settings
  5. Some model restrictions (only shows for certain models) - I’ve only had it come up with Opus.
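
Pieced together, the gating presumably looks something like the following sketch. This is a hypothetical reconstruction, not the actual source: only the 600/1800-second timings and the environment variable come from what I found, and every name and remaining threshold here is a guess.

```typescript
// Hypothetical reconstruction of the survey gating - not Claude Code's source.
const FIRST_FEEDBACK_DELAY_MS = 600 * 1000;  // 10 min before the first ask
const FEEDBACK_INTERVAL_MS = 1800 * 1000;    // 30 min between asks
const MIN_USER_TURNS = 5;                    // guess - exact value unknown
const SHOW_PROBABILITY = 0.1;                // guess - exact value unknown

interface SessionState {
  startedAt: number;          // epoch ms when the session began
  lastSurveyAt: number | null;
  userTurns: number;
  model: string;
}

function shouldShowSurvey(s: SessionState, now: number): boolean {
  // The env var I found bypasses all the throttling
  if (process.env.CLAUDE_FORCE_DISPLAY_SURVEY === "1") return true;
  if (now - s.startedAt < FIRST_FEEDBACK_DELAY_MS) return false;
  if (s.lastSurveyAt !== null && now - s.lastSurveyAt < FEEDBACK_INTERVAL_MS) return false;
  if (s.userTurns < MIN_USER_TURNS) return false;
  if (!s.model.includes("opus")) return false; // model restriction - a guess
  return Math.random() < SHOW_PROBABILITY;
}
```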

Asking for feedback is totally OK. But don’t interrupt my work session to do it. I hope this goes away, or that a setting is added to opt out completely.


July 4, 2025

After a coding session, I run a custom Claude Code slash command /quiz that finds a couple of interesting things in the work we just did and quizzes me on them. A bit of fun, and it keeps the learning happening.

We've done some substantial work in this session and I would like you to quiz me to cement learning.

You are an expert Ruby on Rails instructor.

1. Read the code we have changed in this session.
2. Pick **2 non-obvious or interesting techniques** (e.g. `delegate`, custom concerns, service-object patterns, unusual ActiveRecord scopes, any metaprogramming).
3. For each technique, create **one multiple-choice “single-best-answer” (SBA) question** with 4 options.
4. Ask me the first question only.
5. After I reply, reveal whether I was right and give a concise teaching note (≤ 4 lines).
6. Then ask the next question, and so on.

When all questions are done, end with:
`Quiz complete – let me know where you’d like a deeper dive.`
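
If you want the same setup, the prompt above just needs to be saved as a markdown file in your commands directory - the filename becomes the command name (a sketch, assuming the standard ~/.claude/commands location):

```sh
mkdir -p ~/.claude/commands
# Paste the quiz prompt above into this file, then run /quiz in Claude Code
$EDITOR ~/.claude/commands/quiz.md
```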

I share my Claude Commands here


July 2, 2025

The Ground Your Agent Walks On - Every codebase is terrain. Some are smooth highways, where AI agents can move fast and confidently. Others are more like an obstacle course - still functional, but harder to navigate, even for experienced developers. For AI agents, the difference matters more than you might think.

Matt Pocock recently tweeted, “Know the ground your agent will walk on.” It’s a great metaphor. AI coding assistants aren’t just tools - they’re travelers trying to make sense of your landscape. The easier that terrain is to read, the better they perform.

The Terrain Metaphor

Think of your AI agent as a sharp, capable junior developer. Fast, tireless, and helpful - but very literal. They don’t infer intent. They follow cues.

When your codebase has clear structure - focused models, controllers that follow consistent patterns, logic that lives in obvious places - AI agents can hit the ground running. They know where to go and what to do. But when logic is scattered across models, helpers, and controller actions - when responsibilities blur and patterns break - it’s harder. The AI has to guess, and that’s when bugs, duplication, or missed edge cases creep in.

You’ve likely seen it: in a clean, readable codebase, the AI knows where to add password reset logic. In a tangled one, it might reinvent validation from scratch, or break something that silently relied on old behavior.

The Productivity Multiplier

Well-structured code doesn’t just help AI agents a little. It can make them drastically more useful.

Clean abstractions give the model leverage. Instead of spitting out code you need to carefully review or fix, it can offer changes that fit right into your architecture. The AI stops being just a helpful autocomplete and starts being a real multiplier.

[read more...]


June 18, 2025

That Weird AI Workflow Might Just Work - Kieran Klaassen and teammates shared their Claude Code workflow a few days ago. They broke down their process, showed what they built, and yesterday Kieran posted about the (potential) API costs a workflow like this has (or would have if not for Anthropic’s Max plan). The response? While some were curious, the critical voices dominated - calling it too expensive and claiming ‘these AI folks’ aren’t building anything real (check out Kieran’s X feed to see how absurd that is).

Here’s what bothers me: Kieran wasn’t bragging. They were sharing data. They were excited about their productivity gains and wanted to show others what worked for them. And instead of curiosity or questions, they got dismissal.

The Real Problem

We all know AI is transformative. Nobody’s arguing that anymore. But there’s this weird gatekeeping happening around how people use these tools.

Someone posts about using Claude to write tests? “That’s not real testing.” Someone shares their Cursor workflow? “You’re just racking up API bills.” Someone shows how they built an app in a weekend with AI assistance? “But is it production quality?”

The critics are missing the point entirely. These developers aren’t saying their way is the only way. They’re experimenting. They’re pushing boundaries. They’re figuring out what works.

Why This Matters

Every breakthrough in development workflows started with someone trying something different and sharing it. Remember when people mocked developers for using Rails? “It doesn’t scale.” “It’s just a toy framework.” “Real developers use Java.”

Those early Rails developers weren’t wrong for sharing their excitement. They were pioneering new ways of building web apps. Some of their approaches failed. Others became industry standard.

Sure, not every experiment will pan out, and healthy skepticism has its place. But there’s a difference between thoughtful critique and reflexive dismissal.

The same thing is happening now with AI workflows.

[read more...]