May 2, 2026
Anthropic just doubled their own Claude Code cost estimate. Here is what that tells you.
Anthropic quietly updated their Claude Code docs to bump the average daily developer spend from $6 to $13, and the 90th percentile from $12 to $30. The number is interesting. The reason is more interesting.
On April 28, 2026, Anthropic stealth-edited their Claude Code documentation: the estimated daily token spend for an average developer went from $6 to $13, and the 90th percentile estimate from $12 to $30. Same product, same pricing, no changelog. Just a quiet number swap.
Simon Willison flagged it. Business Insider wrote it up. Anyone using Claude Code on a metered plan should care about why this happened.
What changed and what did not
Per-token prices are the same. Claude Opus 4.7 sits at $5 per million input tokens and $25 per million output tokens. Sonnet is unchanged. Haiku is unchanged. The official price sheet has not moved.
What did change is Anthropic's own internal estimate of how many tokens a real developer actually consumes per day. They more than doubled it.
Why estimates went up
Three things are happening at once.
Agents got more agentic. The model now does more work per request. Each Claude Code session is doing more file reads, more tool calls, more multi-step planning. That means more input tokens loaded per turn and more output tokens generated. Same prompt, more compute.
Context windows are bigger by default. Claude Code now defaults to longer context windows for multi-file edits. The headline price per million tokens did not change, but the bill grew because the bill is tokens times price.
Reasoning costs are now table stakes. Models think more before they answer. Those thinking tokens count as output, and output tokens cost five times what input tokens cost on Opus 4.7. A model that thinks for 2,000 tokens before producing a 500-token answer just billed you for 2,500 output tokens, not 500.
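The arithmetic above is worth making concrete. A minimal sketch, using the Opus 4.7 rates quoted earlier ($5/M input, $25/M output) and treating thinking tokens as output, which is how the article describes the billing. The function name and the 10,000-token input figure are illustrative, not from any official SDK:

```python
# Illustrative rates matching the figures quoted above (Opus 4.7).
INPUT_PER_MTOK = 5.00    # $ per million input tokens
OUTPUT_PER_MTOK = 25.00  # $ per million output tokens

def turn_cost(input_tokens: int, thinking_tokens: int, answer_tokens: int) -> float:
    """Thinking tokens bill at the output rate, not the input rate."""
    output_tokens = thinking_tokens + answer_tokens
    return (input_tokens * INPUT_PER_MTOK
            + output_tokens * OUTPUT_PER_MTOK) / 1_000_000

# The example from the text: 2,000 thinking tokens before a 500-token answer,
# on a hypothetical 10,000-token input context.
with_thinking = turn_cost(10_000, 2_000, 500)  # bills 2,500 output tokens: $0.1125
no_thinking = turn_cost(10_000, 0, 500)        # bills 500 output tokens:   $0.0625
```

Same answer, same prices, 80% more per turn. That gap is pure reasoning overhead.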
The token count for the same Claude Code session in late April 2026 is meaningfully higher than it was in February. Anthropic updated their docs to reflect that. They did not announce it loudly because it does not look great in a slide.
What this means for your AI spend
If you have a team of ten developers and you used Anthropic's old number to budget, you were planning for $1,800 per month. The new number puts you at $3,900 per month for average usage and $9,000 per month for the 90th percentile. That is a budget conversation.
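To check those figures yourself: the monthly numbers above imply 30 billed days per month (an assumption the docs don't spell out, but it's the only multiplier that makes $6/day become $1,800/month for ten developers):

```python
DEVS = 10
DAYS_PER_MONTH = 30  # implied by the article's monthly figures

def monthly_budget(daily_spend_per_dev: float) -> float:
    """Team-wide monthly spend from a per-developer daily estimate."""
    return DEVS * DAYS_PER_MONTH * daily_spend_per_dev

old_avg = monthly_budget(6)    # old average estimate:  $1,800
new_avg = monthly_budget(13)   # new average estimate:  $3,900
new_p90 = monthly_budget(30)   # new 90th percentile:   $9,000
```

Swap in your own head count and the gap between old and new budgets falls straight out.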
It is not Anthropic gouging anyone. The model is genuinely doing more per call. But "the model does more" is the same thing as "the model costs more" when you pay per token.
This is the entire reason cost observability matters. Per-token prices are stable; per-task costs are not. The variable that moves is how much the model decides to do on your behalf, and that variable lives in your traces, not on the pricing page.
What to actually measure
If you are running Claude Code or any other agentic dev tool on a metered plan, the questions worth answering this week:
- Tokens per session. Not per day, per month, or per developer. Per session. This is the unit that maps to a single piece of work and the only one that lets you compare last month to this month fairly.
- Output token ratio. Output is 5x input on Opus. If your output ratio creeps up, your bill creeps up faster than your input usage suggests.
- Reasoning vs final-answer split. If the model is now spending 80% of its output tokens on internal reasoning and 20% on the answer you actually keep, that is interesting and possibly tunable.
- Tool-call density. More tool calls means more round trips and more input tokens reloaded each round.
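The four metrics above can be computed from any session-level token log. A sketch under stated assumptions: the record shape here (`input_tokens`, `output_tokens`, `reasoning_tokens`, `tool_calls`) is hypothetical, and real Claude Code session logs will differ, but the ratios are the point:

```python
# Hypothetical session records; real session logs will have a different shape.
sessions = [
    {"id": "s1", "input_tokens": 180_000, "output_tokens": 22_000,
     "reasoning_tokens": 15_000, "tool_calls": 14},
    {"id": "s2", "input_tokens": 95_000, "output_tokens": 8_000,
     "reasoning_tokens": 3_000, "tool_calls": 5},
]

def session_metrics(s: dict) -> dict:
    """The four numbers worth tracking per session."""
    return {
        "tokens_per_session": s["input_tokens"] + s["output_tokens"],
        "output_ratio": s["output_tokens"] / s["input_tokens"],
        "reasoning_share": s["reasoning_tokens"] / s["output_tokens"],
        "tool_call_density": s["tool_calls"],
    }

for s in sessions:
    m = session_metrics(s)
    print(s["id"], {k: round(v, 3) if isinstance(v, float) else v
                    for k, v in m.items()})
```

Track these week over week and a doubling in docs estimates stops being a surprise: you'll see it in your own output ratio and reasoning share first.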
The broader pattern across every frontier provider in 2026: posted prices are stable, real costs are quietly creeping up because the work each call does is creeping up. The teams who notice this early adjust budgets calmly. The teams who notice it on the invoice have an awkward Slack thread.
The Clawback take
We built Clawback for exactly this scenario. You connect it to your OpenClaw or Claude Code session logs and we show you token usage by session, by output ratio, by tool-call density, and by reasoning split. When Anthropic doubles its estimate next time (and there will be a next time), you will already know whether your number doubled too.
If your team uses Claude Code on a metered plan, run the numbers this week. Same workload, same model, this month versus three months ago. The cost line on your dashboard will tell you more than any docs page Anthropic edits.
See your actual numbers
The calculator runs in your browser. No account needed.