Hacker News | rtfeldman's comments

Anecdotally, we use Opus 4.5 constantly on Zed's code base, which is almost a million lines of Rust code and has over 150K active users, and we use it for basically every task you can think of - new features, bug fixes, refactors, prototypes, you name it. The code base is a complex native GUI with no Web tech anywhere in it.

I'm not talking about "write this function" but rather implementing whole features by writing only English to the agent, over the course of numerous back-and-forth interactions and exhausting multiple 200K-token context windows.

For me personally, definitely at least 99% of the Rust code I've committed at work since Opus 4.5 came out has been from an agent running that model. I'm reading lots of Rust code (that Opus generated), but I'm essentially no longer writing any of it. If dot-autocomplete (and LLM autocomplete) disappeared from IDEs entirely, I would not notice.


Woah, that's a very interesting claim. I was shying away from writing Rust since I'm not a Rust developer, but from your experience it sounds like Claude has gotten very good at writing Rust.

Honestly, I think the more you can give Claude a type system and effective tests, the more effective it can be. Rust is quite high up on the test-strictness front (though I think more could be done...), so it's a great candidate. I also like its performance on Haskell and Go; both get you pretty great code out of the box.

Have you ever worried that by programming in this way, you are methodically giving Anthropic all the information it needs to copy your product? If there is any real value in what you are doing, what is to stop Anthropic or OpenAI or whomever from essentially one-shotting Zed? What happens when the model providers 10x their costs and also use the information you've so enthusiastically given them to clone your product and use the money that you paid them to squash you?

Zed's entire code base is already open source, so Anthropic has a much more straightforward way to see our code:

https://github.com/zed-industries/zed


That's what things like AWS Bedrock are for.

Are you worried about Microsoft stealing your codebase from GitHub?


Isn’t it widely assumed Microsoft used private repos for LLM training?

And even with a narrower definition of stealing, Microsoft’s ability to share your code with US government agencies is a common and very legitimate worry in plenty of threat model scenarios.


I just uninstalled Zed today when I realized the reason I couldn't delete a file on Windows was that it was open in Zed. So I wouldn't speak too highly of the LLM's ability to write code. I have never seen another editor on Windows make the mistake of opening files without enabling all 3 share modes.
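For reference, here's a minimal Rust sketch of what opening a file with all three Windows share modes looks like; the path is just a placeholder, and the flag values are the standard Win32 constants:

    use std::fs::OpenOptions;
    use std::os::windows::fs::OpenOptionsExt;

    // Standard Win32 share-mode flags.
    const FILE_SHARE_READ: u32 = 0x1;
    const FILE_SHARE_WRITE: u32 = 0x2;
    const FILE_SHARE_DELETE: u32 = 0x4;

    fn main() -> std::io::Result<()> {
        // Opening with all three share modes lets other programs read, write,
        // and delete/rename the file while this handle is still open.
        let _file = OpenOptions::new()
            .read(true)
            .share_mode(FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE)
            .open(r"C:\example\some-file.txt")?; // placeholder path
        Ok(())
    }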

Just based on timing, I am almost 100% sure whatever code is responsible was handwritten before anyone working on Windows was using LLMs...but anyway, thank you for the bug report - I'll pass it along!

The article is arguing that it will basically replace devs. Do you think it can replace you basically one-shotting features/bugs in Zed?

And also - doesn’t that make Zed (and other editors) pointless?


Trying to one-shot large codebases is an exercise in futility. You need to let Claude figure out and document the architecture first, then set up agents for each major part of the project. Doing this keeps the context clean for the main agent, since it doesn't have to go read the code each time. One agent can fill its entire context understanding part of the code, and then the main agent asks it how to do something and gets a shorter response.

It takes more work than one-shotting, but not a lot, and it pays dividends.
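For illustration only, here's a rough sketch of what one such sub-agent definition might look like, assuming Claude Code's project sub-agent format (a Markdown file with YAML frontmatter under .claude/agents/); the name, tools, and wording are made up:

    ---
    name: gui-expert
    description: Knows the GUI layer of this repo. Consult it before touching rendering code.
    tools: Read, Grep, Glob
    ---
    You are an expert on this repository's GUI layer.
    Read ARCHITECTURE.md and the relevant modules, then answer the main agent's
    questions with short, specific summaries (file paths, key types, invariants)
    rather than pasting large chunks of code back into its context.

Whatever the exact format, the point is the same: the sub-agent burns its own context reading code, and the main agent only sees the distilled answer.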


Is there a guide for doing that successfully somewhere? I would love to play with this on a large codebase. I would also love to not reinvent the wheel on getting Claude working effectively on a large code base. I don’t even know where to start with, e.g., setting up agents for each part.

> Do you think it can replace you basically one-shotting features/bugs in Zed?

Nobody is one-shotting anything nontrivial in Zed's code base, with Opus 4.5 or any other model.

What about a future model? Literally nobody knows. Forecasts about AI capabilities have had horrendously low accuracy in both directions - e.g. most people underestimated what LLMs would be capable of today, and almost everyone who thought AI would at least be where it is today...instead overestimated and predicted we'd have AGI or even superintelligence by now. I see zero signs of that forecasting accuracy improving. In aggregate, we are atrocious at it.

The only safe bet is that hardware will be faster and cheaper (because the most reliable trend in the history of computing has been that hardware gets faster and cheaper), which will naturally affect the software running on it.

> And also - doesn’t that make Zed (and other editors) pointless?

It means there's now demand for supporting use cases that didn't exist until recently, which comes with the territory of building a product for technologists! :)


Thanx. More of a "faster keyboard" so far then?

And yeah - if I had a crystal ball, I would be on my private island instead of hanging on HN :)


Definitely more than a faster keyboard (e.g. I also ask the model to track down the source of a bug, or questions about the state of the code base after others have changed it, bounce architectural ideas off the model, research, etc.) but also definitely not a replacement for thinking or programming expertise.

If you're feeling adventurous and would like to try Roc's new compiler, I put together a quick tutorial for it!

https://gist.github.com/rtfeldman/f46bcbfe5132d62c4095dfa687...


My understanding is that the reasoning behind all this is:

- In 1985 there were a ton of different hardware floating-point implementations with incompatible instructions, making it a nightmare to write floating-point code once that worked on multiple machines

- To address the compatibility problem, IEEE came up with a hardware standard that could do error handling using only CPU registers (no software involved, since it's a hardware standard)

- With that design constraint, they (reasonably, imo) chose to handle errors by making them "poisonous": once you have a NaN, arithmetic on it produces another NaN and comparisons with it (including equality, even with itself) are false, so the error state propagates rather than accidentally "un-erroring" if you do another operation and leading you into undefined-behavior territory

- The standard solved the problem when hardware manufacturers adopted it

- The downstream consequence for software is that if your programming language does anything other than these exact floating-point semantics, the cost is losing hardware acceleration, which makes your floating-point operations way slower
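A quick Rust illustration of that "poisonous" behavior (this is standard IEEE 754 semantics, not anything Rust-specific):

    fn main() {
        let nan = f64::NAN;

        // NaN compares unequal to everything, including itself.
        assert!(nan != nan);
        assert!(!(nan < 1.0) && !(nan > 1.0) && !(nan == 1.0));

        // Arithmetic on a NaN keeps producing NaN, so the error state propagates.
        assert!((nan + 1.0).is_nan());
        assert!((nan * 0.0).is_nan());

        // The only reliable check is is_nan(), not ==.
        assert!(nan.is_nan());
    }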


> I’ve tried all the popular alternatives (Copilot, Claude Code, Codex, Gemini CLI, Cline)

Can't help but notice you haven't tried Zed!


As of August 2025, Zed had 150K monthly active users. That was before it supported Windows; the number is much higher now (although not publicly reported).

I'd be very surprised to learn that any other Rust UI crate has more real-world usage than GPUI!

Source:

https://sequoiacap.com/article/partnering-with-zed-the-ai-po...


"Users" is a finicky metric. When Palia came out, a game you most likely have never heard of, I wrote a desktop installer for it with Druid, which a few million people downloaded and used to install and run Palia. Only a handful of people worked on this codebase, maybe three or four while I was there, but principally me and one other engineer.

The more salient metrics would be things like how many people know how to use the framework, the variety of use cases it's good at solving, how easy it is to hire or get help with it, etc. As for Druid, it's already officially unmaintained, its core developer having moved on to work on Xilem instead. (My experience, for the record, was positive; I very much enjoyed working with Druid.)


IIRC, Cosmic Desktop uses Iced.


Kraken seems to have a desktop application for trading made in Iced as well.

I wonder if there are more Cosmic Desktop + Kraken desktop users than Zed Editor users?


You don't have to use Codex in its terminal UI - e.g. you can use it in the Zed IDE out of the box:

https://zed.dev/blog/codex-is-live-in-zed


And also in emacs or neovim

https://xenodium.com/introducing-acpel


When I wrote Elm in Action for Manning, I talked with them explicitly about the language being pre-1.0 and what would happen if there were breaking changes. The short answer was that it was something they dealt with all the time; if there were small changes, we could issue errata online, and if there were sufficiently large changes, we could do a second edition.

I did get an advance but I don't remember a clause about having to return any of it if I didn't earn it out. (I did earn it out, but I don't remember any expectation that I'd have to give them money back if I hadn't.) Also my memory was that it was more than $2k but it was about 10 years ago so I might be misremembering!


> the language being pre-1.0 and what would happen if there were breaking changes.

Fortunately, Elm has had no breaking changes for about six years!


It's actively being worked on - there have been 8 font rendering PRs in the past 3 weeks, most recently yesterday: https://github.com/zed-industries/zed/pulls?q=is%3Apr+is%3Ac...

A downside that comes with the territory of building the rendering pipeline from scratch is needing to work through the long tail of complex and tradeoff-heavy font rendering issues on different displays, operating systems, drivers, etc.

I know it's taking a while to get through, but I agree it's important!


A quick glance at Zed's changelog is all it takes to see that AI has been a minority of what Zed has shipped since it was rolled out over a year ago. :)

https://zed.dev/releases/stable

(And that's even with almost none of the work on the massive Windows project being included in the changelog!)


The roadmap, on the other hand, seems to have 4 things tagged AI on it: https://zed.dev/roadmap

The blog used to be really personal and nice, but now it's all AI: https://zed.dev/blog

I get it, pausing some work to ship AI integration plumbing is a good strategy to keep momentum up with competition.

I think after their Series B, looking at the roadmap, and now with this pricing featured it's pretty clear what their priorities have become.

I'm not against AI, I just feel like they can do better and their original mission is still a better long-term angle.


> I get it, pausing some work to ship AI integration plumbing is a good strategy to keep momentum up with competition.

By every metric - lines of code shipped, hours per week spent on it, number of people assigned to it, etc. - AI is a minority part of what Zed does. It's a priority, but it's not the priority.

I know there's a disproportionate amount of blogging about AI, but that's a decision about what prose gets written, not what code gets written!


Appreciate your reassurance on this, and I believe this is indeed _currently_ the case.

My worry is on the direction and positioning as it stands _now_ when looking _out_.

I guess time will tell, and I have no problems being wrong here. In fact I even desire to be wrong.

Excited to see what DeltaDB is all about and how it affects the company direction too, especially now with Sequoia pulling some strings.


We just pushed a fix! Here's how to get it:

- Start a new Claude Code Thread, which will kick off the background download of the new Claude Code ACP adapter.

- Wait a few seconds, then start another Claude Code Thread. The new thread will use the updated adapter.

We're working on a nicer UX for getting updated versions, which we'll definitely ship before Claude Code support leaves beta!


Thank you for the quick fix! Your steps worked perfectly.

In any case, I'd like to add that I'm hoping an ACP adapter for OpenAI Codex is in the works; I've grown pretty fond of GPT-5, and would like to be able to tap into my existing ChatGPT Plus subscription; I'd rather not use API pricing at the moment. I prefer using Claude Code vs directly hitting the Anthropic API for the same reason.

Heck, an ACP adapter for Cursor CLI (itself based on Gemini CLI, right?) would even be useful, as that would also let me pick GPT-5.


Thanks! It successfully installed this time, but now I have a new error:

Internal error: { "details": "Failed to intialize Claude Code.\n\nThis may be caused by incorrect MCP server configuration, try disabling them." }

I logged in via Claude CLI using my subscription.

