I don't get it. The title says "What makes Claude Code so damn good", which implies that they will show how Claude Code is better than other tools, or just better in general. But they go about repeating the Claude Code documentation using different wording.
Am I missing something here? Or is this just Anthropic shilling?
(blogpost author here)
Haha, that's totally fair. I've read a whole bunch of posts comparing CC to other tools, or just dumping the architecture. This post was mainly for people who've used CC extensively, know for a fact that it is better, and wonder how to ship such an experience in their own apps.
I've used Claude Code, Cursor, and Copilot in VS Code, and I don't "know" that Claude Code is better, apart from the fact that it runs in the terminal, which makes it a little faster but less ergonomic than tools running inside the editor. All of the context tricks can be done with Copilot instructions as well, so I simply can't see how Claude Code is superior.
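For what it's worth, Copilot's repository custom instructions are just a markdown file checked into the repo. A minimal sketch, assuming the documented .github/copilot-instructions.md location; the rule text itself is made up for illustration:

```markdown
<!-- .github/copilot-instructions.md -->
We use TypeScript with strict mode enabled; prefer named exports.
Run tests with `npm test` before declaring a task done.
Keep functions small and document public APIs with TSDoc comments.
```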
I’ve been so into Claude code that I haven’t used cursor or copilot in vs code in a while.
Do they also allow you to view the thinking process and planning, and hit ESC to correct it if it’s going down a wrong path? I’ve found that to be one of my favorite features of Claude Code. If it says “ah, the implementation isn’t complete, I’ll update the test to use mocks”, I can interrupt it and say no, it’s fine for the test to fail until the implementation is finished, so don’t mock anything. Etc.
It may be that I just discovered this after switching, but I don’t recall that being an interaction pattern on cursor or copilot. I was always having to revert after the fact (which might have been me not seeing the option).
Cursor does show the “thinking” in smaller greyer text, then hides it behind a small grey “thought for 30 seconds” note. If it’s off track, you just hit the stop button and correct the agent, or scroll up and restart from an earlier interaction (same thing as double-ESC in Claude Code).
For code generation, nothing so far beats Opus. More often than not, it has generated working code and fixed bugs that neither Gemini 2.5 Pro nor even Gemini Code Assist could solve. Gemini Code Assist is better than 2.5 Pro, but has much stricter per-prompt limits and often truncates output.
I found Anthropic’s models untrustworthy with SQL (e.g. confused AND and OR operator precedence - or simply forgot to add parens, multiple times), Gemini 2.5 pro has no such issues and identified Claude’s mistakes correctly.
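That precedence trap is easy to reproduce: AND binds tighter than OR in SQL, exactly like and/or in Python. A minimal sketch with made-up rows, showing how a missing pair of parens silently changes the filter:

```python
# Intended filter: active rows in the US or EU regions.
rows = [
    {"status": "active",   "region": "US"},
    {"status": "inactive", "region": "EU"},  # should be excluded
]

# Parsed as (status == 'active' AND region == 'US') OR region == 'EU',
# because AND binds tighter than OR; the inactive EU row slips through.
wrong = [r for r in rows
         if r["status"] == "active" and r["region"] == "US"
         or r["region"] == "EU"]

# Parens restore the intended meaning.
right = [r for r in rows
         if r["status"] == "active"
         and (r["region"] == "US" or r["region"] == "EU")]

print(len(wrong), len(right))  # 2 1: the unparenthesized version is wrong
```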
Don’t sleep on Codex-CLI + gpt-5. While the Codex-CLI scaffolding is far behind CC, the gpt-5 code seems solid from what I’ve seen (you can adjust thinking level using /model).
Not in the title, but one of the opening sentences is this:
> I find Claude Code objectively less annoying to use compared to Cursor, or Github Copilot agents even with the same underlying model! What makes it so damn good?
The difference between Claude Code and Cursor is that one is a command line tool and the other an IDE. You can use Claude models in both and all these techniques can be applied with Cursor and its rules, too.
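For instance, Cursor's project rules live under .cursor/rules/ as markdown files with a small frontmatter header. A minimal sketch, assuming the MDC rule format (description/globs/alwaysApply); the rule content is illustrative only:

```markdown
---
description: Conventions for the API layer
globs: src/api/**/*.ts
alwaysApply: false
---
- Validate request bodies before use; return typed errors instead of throwing strings.
- Keep handlers thin; put business logic in src/services/.
```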
Not even close, quite the opposite. An agentic tool can be fully autonomous; an IDE like Cursor is, well, "just" an editor. Sure, it does some heavy lifting too, but the user still writes the code. They're starting to implement fully agentic tools and models, but those are nowhere near working as well as Claude Code does.
There is also Cursor Agent CLI, which is a TUI exactly like CC. I switched to it because I don't like GUI AI assistants, but I also couldn't stand CC always being overloaded and having many bugs that affected me. I'm now on Cursor Agent CLI with GPT-5 and happy to have an alternative to CC.
For my personal projects it has completely replaced the need for CC with Anthropic models for me. At work, I am waiting for native Windows support. I don't like using AI assistants via WSL. Since both CA and CC are Node apps and CC has since shipped native Windows support, I don't foresee it taking CA long either. Especially since it can be hacked to work that way today as I've experimented with here: github.com/TomasHubelbauer/cursor-agent-windows
Not at all; it's not just a "Claude model". All these companies add their own prompt hints on top, so it's a totally different experience. Try using Kiro, which is also "a Claude model", and tell me it's the same.