Hacker News | daliusd's comments

No MCP support (https://github.com/Aider-AI/aider/pull/3937) makes it less useful than other tools (opencode.ai, Claude Code, etc.).

I have found both Perplexity and Claude.ai good enough. Since I already pay for Claude for development, why not use it as a search engine as well? So maybe the future is multi-provider?


So you want Aider, Claude Code or opencode.ai it seems. I use opencode.ai a lot nowadays and am really happy and productive.


I really wanted to use Aider. But it's impossible. How do people actually use it?

Like, I gave it access to our code base and wanted to try a very simple bug fix. I only told it to look at the one service I knew needed changes, because it says it works better in smaller code bases. It wanted to send so many tokens to Sonnet that I hit the limits before it even started actually doing any coding.

Instant fail.

Then I just ran Claude Code, gave it the same instructions and I had a mostly working fix in a few minutes (never mind the other fails with Claude I've had - see other comment), but Aider was a huge disappointment for me.


I don't know about Aider; I am not using it because it lacks MCP and has poor GitHub Copilot support (both are important to me). Maybe that will get better in the future, if it is still relevant by then. I usually use opencode.ai with Claude Sonnet 4. Sometimes I try switching to different models, e.g. Gemini 2.5 Pro, but Sonnet is more consistent for me.

It would be good to define what a "smaller code base" is. Here is what I am working on: a 10-year-old project full of legacy code, consisting of about 10 services and 10 front-end projects. I have also tried it on a project similar to MUI or Mantine UI, and naturally on many smaller projects. I tried it on a TypeScript codebase where it failed for me (but it is hard to judge from one attempt), and lastly I am using it on smaller projects. Overall, the question is more about the task than about code base size: if the task does not involve loading too much context, then code base size might be irrelevant.


Well, apparently we're "big" then. About a 15-year-old code base, with about 100 services or libraries. The service I first tried to use it on, because of the "only use it on small code bases" advice, was only about 300 files though (not even including tests). But I guess Aider got overwhelmed by the entire code base's repo map (or rather overwhelmed the LLM with it); that's what it said it was updating before it proclaimed that there were too many tokens.

Monorepo.

And that was me already knowing what I wanted and just telling it to do it. I never added any files except the one service, but it knows the entire repo.

Claude Code, on the other hand, I can tell "I need to do X across the code base" and it just goes and does it (sometimes badly, of course), but it has no issue just "doing its agent thing" on it.

Aider wouldn't even be able to do both code and tests in one go? Like the whole idea of "only add the files you need" makes no sense to me for agentic use.


At the end of the day I want what my job is willing to pay for, which is a few different flavors of AI tools


Because you can script a terminal app (at least you can do that with opencode.ai), and that opens up some amazing opportunities.


How and why would one script things like this? I thought the process was basically just talking to the agent and telling it what you want to do and reviewing the changes before they are committed.


Well, agents use context, and the context window has a limit. If you add too much to the context, you will get poor results. If you split your task into smaller ones, you might get much better results, e.g. when migrating tests to a new version of a library.
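
To make that concrete, here is a minimal sketch of what scripting the agent can look like, assuming opencode exposes a non-interactive "run" subcommand that accepts a prompt (check your installed CLI's help; the command name, file glob, and migration prompt here are illustrative, not from the original comment):

    # Hypothetical sketch: drive the agent file-by-file so each run stays
    # well within the model's context window.
    import pathlib
    import subprocess

    for test_file in sorted(pathlib.Path("src").rglob("*.test.ts")):
        prompt = (
            f"Migrate {test_file} from Enzyme to React Testing Library. "
            "Only touch this file and keep the existing test cases."
        )
        print(f"Migrating {test_file}...")
        # Assumes `opencode run <prompt>` starts one non-interactive session;
        # adjust to whatever your agent CLI actually provides.
        subprocess.run(["opencode", "run", prompt], check=True)

Each invocation gets one small, self-contained task, so the agent never has to hold the whole migration in context at once.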


E.g. Aider does not support MCP: https://github.com/Aider-AI/aider/issues/3314

I personally found MCP a must for some scenarios


Do you mind sharing your anecdote about MCP being useful? I haven't had time to experiment, and I'm trying to learn from others' time on the grindstone.


Yes, some ideas. All of them are especially useful if you have to work with non-public stuff (see the sketch after the list):

* Search GitHub for ideas on how to use an API, best practices, etc.

* Convert Figma designs to code

* In Slack-driven development companies you can even search Slack for ideas
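
To give a feel for what MCP support actually buys you, here is a minimal sketch using the official mcp Python SDK: it starts a GitHub MCP server over stdio and lists the tools it exposes, the same tools an agent like opencode could then call. The server package name and environment variable are illustrative; agent CLIs normally do this wiring for you through their config.

    # Hypothetical sketch: connect to a GitHub MCP server and list its tools.
    # Assumes `npx` is available and a GitHub token is set in the environment
    # for the server to use.
    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    server = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-github"],
    )

    async def main() -> None:
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()
                for tool in tools.tools:
                    print(tool.name, "-", tool.description)

    asyncio.run(main())

The point is that the agent does not need bespoke integrations for GitHub, Figma, or Slack; any MCP server it is connected to simply shows up as extra tools it can call.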


https://docs.github.com/en/copilot/get-started/plans

You can get some, but the model selection on the free tier is smaller.


Thanks, might be interesting to mess around with if opencode can auth to the Copilot free tier.


I have tried it and it does.


It does not violate the guidelines: https://news.ycombinator.com/newsguidelines.html

Is it valuable? Maybe for someone it is.


Is it illegal in Texas?


AI is very open source and very commercialized at the same time. I don't understand your comment.


> AI is very open source

The ML ecosystem tends to be very open source, true; LLMs a bit less so.

So far there are only a few useful open source models (mostly courtesy of Chinese companies); otherwise, most models are either hidden behind paid APIs or falsely marketed as open source but come with terms and conditions, use policies, forbidden uses, agreements that must be signed upfront, and so on.


There is Gemma from Google, and various niche models as well. I have not tried to measure the exact share of Chinese vs. other models, but I guess it is not mainly Chinese.


> There is Gemma from Google

Not even Google calls Gemma "open source", nor would anyone with knowledge of "open source" call those models that. If something requires signing an agreement before downloading and/or has a list of prohibited use cases, it's most likely not open source.


Well, at least it is not spreading propaganda. Do we have a truly open source model from China in that regard? But really, there are niche OSS models.


So like any other commercial cloud based system.


The comment you're replying to did not say open source, it said "FOSS"

Completely different thing.


"AI" bs generator is not just the code, it is not reproducible without the training set.

Also the "AI" software is not something that is not possible to use on machine with 100% open-sourced environment: the newest CPU supporting open-source BIOS is 3rd generation of Intel and such an ancient hardware is not able to run it.

So, that kind of LGPB+/Climate Justice/with using artificial intelligence disservices being forced to people are very not open-source. Indeed they are very commercialized.


> People are paying for LLMs, consumers are no longer a commodity.

Ask your LLM: "What percentage of the world's population is paying for LLMs? Any estimate of how many will never pay for them?"

