In all of these posts there's someone claiming Claude is the best, somebody else saying they've tried a bunch of times and for them Gemini is the best, while others find GPT-5 supreme. Obviously, all of these are subjective, narrow experiences. My conclusion is that all frontier models are both good and bad, with no clear winner, and that making good evals is really hard.
* Gemini has the highest ceiling of all the models, but it has consistently struggled with token-level accuracy. In other words, its conceptual thinking is well beyond other models, but it sometimes makes stupid errors when talking. This makes it hard to reliably use for tool calling or structured output. Gemini is also very hard to steer, so when it's wrong, it's really hard to correct.
* Claude is extremely consistent and reliable. It's very, very good at the details - but it will start to forget things as the task gets more complex. The good news is Claude is very steerable and will remember those details if you remind it.
* GPT-5 seems to be completely random for me. It's so inconsistent that it's extremely hard to use.
I tend to use Claude because I'm the most familiar with it and I'm confident that I can get good results out of it.
I'd say GPT-5 is the best at following and remembering instructions. After an initial plan it can easily continue with said plan for the next 30-60 minutes without human intervention, and come back with a complete, working feature/product.
It's honestly crazy how good it is, coming from Claude. I never thought I could already hand it a design doc and have it one-shot the entire thing with that level of accuracy. Even with Opus, I always need to either steer it, or fix the stuff it forgot by hand / have another phase afterwards to get it from 90% to 100%.
Yes, the Codex TUI sucks, but the model on high reasoning is an absolute beast, and it convinced me to switch from Claude Max to ChatGPT Pro.
Claude can do large code bases too; you just need to make it focus on the parts that matter. Most coding tasks shouldn't involve all parts of the code anyway, right?
Personally I prefer Gemini because I still use AI via chat windows, and it can do a good ~90k tokens before it starts getting stupid. I've yet to find an agent that's actually useful and doesn't constantly fuck up everywhere while burning money.
The answer is a classic programming one - it depends. There are definitely differences in strengths and weaknesses among them.
I run the claude CLI as my primary and just ask it nicely to consult the gemini CLI (but not let it do any coding). It works surprisingly well. OpenAI just fell out of my view; I even cancelled my ChatGPT subscription. Gemini is leaping forward, and it _feels like_ GPT-5 is a regression... I can't put my finger on it, tbh.
In my experience, Gemini is good at writing specs, hit or miss at reviewing code, and not really usable for iterating on code. Codex is slow but can crack issues that Claude Code struggles with. So my workflow has been to use all three to iterate on specs, have Claude Code work on the implementation, and have Codex review Claude Code's work (sometimes having Gemini double-check it).
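For anyone curious what that hand-off looks like mechanically, here is a minimal, hypothetical sketch that drives the three CLIs non-interactively from Python. It assumes the non-interactive entry points (`gemini -p`, `claude -p`, `codex exec`) as documented at the time of writing; flags and behavior may differ by version, and in practice this loop is usually done by hand rather than scripted.

```python
# Hypothetical sketch of the spec -> implement -> review loop described above.
# The CLI invocations (-p / exec) are the non-interactive modes documented at
# the time of writing; adjust to whatever your installed versions support.
import subprocess

def run(cmd: list[str]) -> str:
    """Run a CLI command in the current repo and return its stdout."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

# 1. Iterate on the spec with Gemini (good at specs, per the comment above).
spec = run(["gemini", "-p", "Draft a spec for adding rate limiting to the API gateway."])

# 2. Have Claude Code implement from that spec.
run(["claude", "-p", f"Implement this spec, editing files as needed:\n{spec}"])

# 3. Have Codex review the resulting changes.
review = run(["codex", "exec", "Review the uncommitted changes in this repo for bugs and spec gaps."])
print(review)
```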
Yeah, my take is that it's sort of up to the person using the LLM and maybe how well they match that LLM. That's my hunch as to why we hear wildly different takes on how these LLMs work for people. Gemini can be the most productive model for some, while others find it entirely unworkable.
Not just personalities and preferences, but the purpose for which the AI is being used also affects the results. I primarily use AIs for complex troubleshooting along the lines of: "Here's a megabyte of logs, an IaC template, and a gibberish error code. What's the reason?" Right now, only Gemini 2.5 Pro has any chance of providing a useful output given those inputs, because its long-context attention is better than any other model's.
Capability wise, they seem close enough that I don’t bother re-evaluating them against each other all the time.
One advantage Gemini had (or still has; I'm not sure about the other providers) was its large context window combined with the ability to use PDF documents. It probably saved me weeks of work on an integration with a government system: I uploaded hundreds of pages of documentation and could immediately start asking questions, generating rules, and troubleshooting payloads that were leading to generic, computer-says-no errors.
No need to go through RAG shenanigans, and all of it fit within the free token allowance.
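For reference, that long-context PDF workflow is also available programmatically. Below is a minimal sketch using the google-generativeai Python SDK's File API; the file name, prompt, and model string are illustrative, and the SDK surface may have changed since this thread.

```python
# Minimal sketch: upload a large PDF and ask questions against it directly,
# relying on the long context window instead of a RAG pipeline.
# Assumes the google-generativeai SDK; file name and prompt are illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # or read from an environment variable

# Upload the documentation once via the File API.
doc = genai.upload_file("integration_manual.pdf")  # hypothetical file name

model = genai.GenerativeModel("gemini-2.5-pro")
response = model.generate_content([
    doc,
    "Which request fields does the spec say are mandatory for payload validation, "
    "and what error codes are returned when they are missing?",
])
print(response.text)
```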
Because how good a model is mostly comes down to its training data at this point.
It's like the personality of a person. Employee A is better at talking to customers than Employee B, but Employee B is better at writing code than Employee A. Is one better than the other? Is one smarter than the other? Nope. Different training data.