Stack Check, April '26

April 18, 2026 · 877 words · 4 min read

Claude got expensive in the way that matters, so the stack moved again.

I wrote Stack Check, March '26 six weeks ago and said Claude Code was the center of gravity in my setup.

Six weeks! Apparently that is how long it takes for an entire tooling stack to change these days. I would love to blame my own lack of discipline, but no, this one is mostly Anthropic's fault. Up until recently, Claude made the rest of the stack feel calm and competent. It sat in the terminal, did actual work, and mostly respected the projects I gave it. I liked the split: Claude for bigger async passes, VS Code and Copilot for close editing, me reviewing everything before it shipped.

But there we go, the stack has moved again, mostly because Claude has gotten really dumb. I feel like I am overpaying by a lot, even with the thousands of dollars of subsidized inference Anthropic throws at subscription users. Anthropic seems so terrified of Claude being used for malware that they half-lobotomized it with the system prompt, and normal coding work gets caught in the blast radius.

Claude Opus 4.7 in particular made the annoyance concrete. Anthropic says it is a direct upgrade: better at hard coding work, better at long-running tasks, better at following instructions, and priced the same as 4.6. Nice on the page! In practice, the sticker price stayed still while the effective cost moved underneath me. Exactly the kind of "clever" bullshit that makes me want to go outside and touch grass instead of trying to build things.

The tokenizer analysis that made the rounds on HN measured the new tokenizer against real Claude Code content and found English and code getting meaningfully more expensive in token terms. The rough impact is a 20-30% worse session budget for the same work. When you use these tools as much as I do, that matters! I pay a little over 200 EUR a month for Claude, and around the same for ChatGPT. Value-wise, one of them is not like the other. At this point I probably will not even renew Claude, which would have sounded absurd a month ago.
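To make the budget math concrete, here is a back-of-the-envelope sketch. Every number in it is made up except the inflation figure, which is the 20-30% range from the analysis:

```python
# Hypothetical numbers: a fixed session token budget and a task that
# used to cost 10k tokens, now costing 25% more under the new tokenizer.
budget = 1_000_000   # tokens per session window (made up)
old_cost = 10_000    # tokens per task, old tokenizer (made up)
inflation = 1.25     # mid-range of the reported 20-30% token inflation

new_cost = old_cost * inflation
old_tasks = budget / old_cost   # tasks per session before
new_tasks = budget / new_cost   # tasks per session after

print(f"effective budget shrink: {1 - new_tasks / old_tasks:.0%}")
# prints: effective budget shrink: 20%
```

Same subscription, same sticker price, a fifth fewer tasks per session. That is the "effective cost moved underneath me" part in one division.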

The HN threads around 4.7 are full of the same frustrations. Is the model better, worse, more expensive, more "thoughtful" (it's not), or just spending more tokens while feeling less predictable? That's the part that wears me down. A compiler does not wake up with a new personality when the vendor likes a new set of evals. My coding agent should not either!

That is also why VS Code and Copilot slid out of the default path. In March I wrote about Supermaven going away and how much I missed it, and that feeling has only sharpened. Cursor's completion spoiled me, because it was fast, quiet, and really good at the small continuation work that makes an editor feel helpful. Copilot in VS Code just doesn't hit that mark. It is fine, but fine is the problem! VS Code used to win me back on performance, but lately it feels sloppier, and Copilot shows up everywhere.

So I moved to Zed.

No AI in the editor. It can be disabled entirely! No Copilot chat sidebar. No ghost text. No tab autocomplete either, which I do miss quite a bit. I really had not realised how much of my typing rhythm had become "write half the boring thing, hit tab, keep moving". Losing that is a bit annoying, but at least the editor is just the editor again. I'm not sure if Zed's tab completion will do the trick, as I haven't tested it yet, but I'm thinking about it. Until then, the split feels healthier, because Zed is where I touch the code and Codex is where I ask for agentic engineering work. It handles bigger passes, repo spelunking, tests, diffs, the stuff where I want an agent to understand the project and come back with something I can review.
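For anyone who wants the same setup: when I flipped the switch, the blanket off-toggle lived in Zed's settings.json. I believe it is the setting below, but key names move, so verify against Zed's current settings docs rather than trusting my memory:

```jsonc
{
  // Turn off all AI features in Zed in one go
  // (key name as I remember it; check the Zed settings reference).
  "disable_ai": true
}
```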

And Codex has been a pleasant surprise! It reads the repo, uses the tools, explains what it is doing, and generally behaves more like a pragmatic coding agent than a personality attached to a token furnace. That is the part of AI coding I still believe in.

Claude still has moments where it feels brilliant, but I no longer trust the product around the brilliance. The quota math, the model churn, and the vibes-based "better reasoning" story all stink. Something I have learned the hard way with my own clients at work is that trust is cumulative and fragile. Anthropic has spent too much of mine.

And still, annoyingly, I am optimistic!

The direction is too useful to ignore. When the tool reads the code, makes a sensible plan, runs the checks, and hands back a diff I can review, it feels like the future I wanted. I just do not want every surface in my working life to become a token furnace. And for now the stack is still simple. Zed for editing, deliberately boring and dumb. Codex for agentic engineering. Claude on the edge, mainly to burn through what's left of the subscription: still useful, but no longer the trusted default.

The boring standard wins again! The best tool is the one that helps me think clearly, then gets out of the way before I start thinking about the tool instead.