I have found both Perplexity and Claude.ai good enough. Since I already pay for Claude for development, why not use it as a search engine as well? So maybe the future is multi-provider?
I really wanted to use Aider. But it's impossible. How do people actually use it?
Like, I gave it access to our code base and wanted to try a very simple bug fix. I only told it to look at the one service I knew needed changes, because it says it works better on smaller code bases. It wanted to send so many tokens to Sonnet that I hit the limits before it even started doing any actual coding.
Instant fail.
Then I just ran Claude Code, gave it the same instructions, and had a mostly working fix in a few minutes (never mind the other failures I've had with Claude; see my other comment). Aider was a huge disappointment for me.
I don't know about Aider; I am not using it because of its lack of MCP support and poor GitHub Copilot support (both are important to me). Maybe that will get better in the future, if it is still relevant by then. I usually use opencode.ai with Claude Sonnet 4. Sometimes I try switching to different models, e.g. Gemini 2.5 Pro, but Sonnet is more consistent for me.
It would be good to define what a "smaller code base" is. Here is what I am working on: a 10-year-old project full of legacy code, consisting of about 10 services and 10 front-end projects. I have also tried it on a project similar to MUI or Mantine UI, and naturally on many smaller projects. I also tried it on a TypeScript codebase where it failed for me (though it is hard to judge from one attempt). Overall, the question is more about the task than about code base size: if the task does not involve loading too much context, code base size might be irrelevant.
Well, apparently we're "big" then. About a 15-year-old code base, with about 100 services or libraries. The service I first tried it on (because of the "only use it on small code bases" advice) was only about 300 files, not even counting tests. But I guess Aider got overwhelmed by the entire code base's repo map (or rather, overwhelmed the LLM with it): that's what it said it was updating before it proclaimed that there were too many tokens.
Monorepo.
And that was with me already knowing what I wanted and just telling it to do it. I never added any files beyond the one service, but it still knows the entire repo.
Claude Code, on the other hand, I can tell "I need to do X across the code base" and it just goes and does it (sometimes badly, of course); it has no issue just "doing its agent thing" on it.
Aider wouldn't even be able to do both code and tests in one go? The whole idea of "only add the files you need" makes no sense to me for agentic use.
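For what it's worth, aider's repo map can apparently be capped: per its docs there is a `--map-tokens` setting, plus `--subtree-only` to ignore everything outside the current directory. A sketch of a `.aider.conf.yml` using those settings (the values here are just illustrative):

```yaml
# .aider.conf.yml — cap the repo map so a monorepo doesn't blow the context
map-tokens: 1024    # suggested token budget for the repo map (0 disables it)
subtree-only: true  # only consider files under the current working directory
```

Running aider from inside the one service's directory with `subtree-only` should keep the map from covering the whole monorepo, though I haven't verified how well that works on a 100-service repo.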
How and why would one script things like this? I thought the process was basically just talking to the agent and telling it what you want to do and reviewing the changes before they are committed.
Well, agents use context, and the context window has a limit. If you add too much to the context, you will get poor results. If you split your task into smaller ones, you might get a much better result, e.g. when migrating tests to a new version of a library.
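One way to script that kind of split (a minimal sketch; the file list, batch size, and the agent command in the comment are all hypothetical placeholders):

```python
# Sketch: split a big migration into per-batch agent runs so each
# invocation keeps its context small.

def batches(items, size):
    """Yield consecutive chunks of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Placeholder file list; in practice you'd glob the real test directory.
test_files = [f"tests/test_{n}.py" for n in range(10)]

for batch in batches(test_files, 3):
    # Each run only sees a handful of files, e.g. something like:
    #   subprocess.run(["claude", "-p", prompt, *batch])
    prompt = "Migrate these tests to the new library version: " + ", ".join(batch)
    print(prompt)
```

Each batch becomes one small, reviewable agent run instead of one giant prompt that overflows the context window.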
Do you mind sharing your anecdote about MCP being useful? I haven't had time to experiment and I'm trying to learn from others' time at the grindstone.
ML ecosystem tends to be very open source, true, LLMs a bit less so.
So far there are only a few useful open source models (mostly courtesy of Chinese companies); otherwise, most models are either hidden behind paid APIs or falsely marketed as open source while coming attached with terms and conditions, use policies, forbidden uses, agreements that must be signed upfront, and so on.
There is Gemma from Google, and various niche models as well. I have not tried to measure the exact share of Chinese vs. other models, but I guess it is not mainly Chinese.
Not even Google calls Gemma "open source", nor would anyone with knowledge of "open source" call those models that. If something requires signing an agreement before downloading and/or has a list of prohibited use cases, it's most likely not open source.
"AI" bs generator is not just the code, it is not reproducible without the training set.
Also, the "AI" software cannot be run on a machine with a 100% open-source environment: the newest CPU supporting an open-source BIOS is 3rd-generation Intel, and such ancient hardware is not able to run it.
So those kinds of "artificial intelligence" disservices being forced on people are very much not open source. Indeed, they are heavily commercialized.