Check out what we're doing with DevSwarm, trying to give the best of both worlds. But you might want to wait a few weeks, as our next update takes it to the next level.
Personally, I liked the video. It's so easy to focus on what's right in front of you instead of thinking about how this will play out.
My overall reaction is “yes and no.” Yes, what we think of as “the IDE” today will not be the primary programming tool of the future (hence what we're building). And I love the ideas of modularity and swarms of agents working together, as opposed to just smarter models. Great analogies about context windows and divers to drive that home.
All that said, I don't think we're about to enter a world of pure “vibe coding” where professionals rarely look at code. So:
1. The IDE will evolve into a new category. Whether we call this new evolution/revolution an IDE is an open question, but it will be fundamentally different with some IDE-like features.
What we use today as an IDE is in a transitional form. The primary environment will become AI-first and workflow-first rather than file-and-buffer-first. It will have familiar IDE-like elements (navigation, inline diffs, debugging), but the foundations will be different: more about orchestrating agents, managing workstreams, and reviewing proposed changes than editing one file at a time.
At the same time, for any non-trivial, long-lived software system, I do not see a future where professional developers are not reading code, reasoning about code, and shaping architecture, at least for the next few years. The medium might be more conversational and visual, but the “source of truth” will still be something code-like that humans can inspect.
2. I do not buy “we will not be looking at code in two years.”
For toy projects, prototypes, and one-off scripts, sure, you can already vibe your way through a lot with an LLM. For production systems that have to be debugged, secured, audited, evolved, and handed to other teams, we still need:
– Understanding of invariants and failure modes
– The ability to trace execution and reason about state
– The ability to see exactly what changed and why
Even if the agent writes 95% of the code, humans still need to validate that it matches real-world constraints, regulations, and performance characteristics. That usually means reading diffs, inspecting critical paths, and being able to drill all the way down when something is off. We all need a clear mental model of the underlying system, and you can't build one if you don't understand systems.
3. There is a real risk that senior devs dismiss the new modality.
A risk mentioned in the video is that experienced engineers treat AI as “just a fancy autocomplete” and refuse to change their habits. That may be comfortable in the short term, but it throws away a huge amount of leverage.
The interesting frontier to me is AI-first, high-velocity engineering (or hive coding), where you:
– Decompose work into clear units that agents can tackle in parallel (think spec-kit)
– Let agents propose changes in isolated branches or sandboxes (think DevSwarm)
– Spend most of your time reviewing, steering, and integrating (again DevSwarm)
That still requires taste, judgment, and a deep understanding of systems. It is not checking out and letting the machine “just build it.” It is shifting more of your time from typing to thinking, reviewing, and guiding.
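To make “isolated branches or sandboxes” concrete with nothing but plain git, here is a minimal sketch; the branch and directory names are made up for illustration, and tooling like DevSwarm would presumably automate something along these lines:

```sh
# One isolated worktree per agent task, so agents can work in parallel checkouts.
git worktree add ../task-auth-refactor -b agent/auth-refactor
git worktree add ../task-api-docs      -b agent/api-docs

# You spend your time reviewing and integrating, not typing.
git diff main...agent/auth-refactor    # see exactly what changed and why
git merge --no-ff agent/auth-refactor  # integrate once it passes review
```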
4. Students can move much faster, but only if they still learn the code and mental models.
On the other side, there is a different risk: new developers who lean entirely on AI and never build the real mental models that come from working through messy code. I think students can absolutely move faster with AI if (and only if) they still learn the fundamentals:
– Data structures and algorithms as ways of thinking, not just names on a list
– How control flow, state, and concurrency actually behave
– How APIs are designed, versioned, and composed
– How to debug and reason from symptoms back to causes
AI can be an incredible accelerator for learning those things, since you can explore more ideas in less time. But if you only ever paste prompts and accept whatever comes back, you are building on sand.
5. Humans stay in control, or at least they should.
For the foreseeable future, effective teams will have “human in control, AI at many levels of assistance,” not the other way around. The job of the professional developer will shift more toward:
– Framing problems
– Designing systems and contracts
– Setting guardrails and constraints
– Reviewing and integrating agent output
You might spend very little time hand-writing lines of code, but you should still understand the code. When reviewing someone else's work, you can feel confident without reading every line if the interface is clear and well defined, the tests are thoughtful, and the code is modular. We will see more of this, which makes clear test cases and modularity more important than ever.
I am very interested in how others here see this. In particular:
– If you write code professionally today, what do you expect your main environment to look like in 5 years?
– For people teaching CS or mentoring juniors, how are you adapting your approach in an AI-first world without giving up on fundamentals?
When people talk about IDEs, there’s almost always a hidden assumption that they are referring to something like Visual Studio or IDEA (VS Code being a lesser version of these). But no comparison is ever made to Smalltalk, or to REPL-driven development like SLIME. There’s not even a mention of programmable editors like Vim and Emacs, which can leverage the Unix OS environment.
Ultimately, LLMs are a text-focused technology (tokens, really). And if you take something like Smalltalk, Acme, the Unix shell (with vi and other editors), or Emacs (as a Lisp machine), they are all interfaces that focus on text manipulation. And they all provide the most important capability: defining custom commands on the fly. Some IDEs allow you to define custom tools, but none make it as convenient as the above.
If we take Unix, you could have something in `~/ai/bulletify` that starts with:
#!/usr/bin/env llm-cli
[prompt text]
And quickly execute it with `:!bulletify` in Vi (with the needed motion). It's pretty much the same in Emacs, and you can bind it to a key for faster invocation. Most IDEs are about having commonly useful utilities and features bound to the concept of a project. They don't do well in a very dynamic environment.
And editors like VS Code and Sublime are very much basic versions of the IDE. They're familiar, but they're not that fluid.
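As one concrete sketch of how such a script could look, assuming a CLI like `llm` that reads the text to transform on stdin and takes an instruction via `-s` (substitute whatever `llm-cli` you actually use, and make sure `~/ai` is on your PATH):

```sh
#!/bin/sh
# ~/ai/bulletify -- rewrite whatever arrives on stdin as a bullet list.
# Assumes an `llm`-style CLI that treats stdin as the input and -s as the
# instruction; adapt the invocation to your own tool.
exec llm -s "Rewrite the following text as a concise bullet list. Preserve all facts, drop filler, output plain text only."
```

From Vi, `!}bulletify` filters the current paragraph through it (or `:'<,'>!bulletify` on a visual selection); in Emacs, `M-|` (`shell-command-on-region`) with a prefix argument replaces the region with the output.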
I'm currently (mostly) living in Emacs with a few shell buffers, using gptel to interact with Anthropic, Google, and OpenAI LLMs, as well as running smaller LLMs locally. This is after decades of vi, then Vim, then Emacs. I'm still a neophyte, though, given the enormity of the Emacs "ecosystem". One critical thing about Emacs: running an Emacs daemon (for days or weeks between restarts) takes things to an entirely different level. It becomes a useful operating system. The key for me was moving from mutt to mu4e for email. Once I moved my email management into Emacs, things improved dramatically, productivity-wise. Oh, and what made me switch to Emacs from Vim in the first place was org-mode.
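For anyone who hasn't tried the daemon workflow, a minimal sketch of the moving parts, using only stock Emacs flags:

```sh
# Start one long-lived Emacs session (or manage it via a systemd --user unit).
emacs --daemon

# Attach clients to it: frames come and go, buffers, mail, and gptel chats persist.
emacsclient -c                   # new graphical frame
emacsclient -t notes.org         # open a file in the terminal
emacsclient -e '(emacs-uptime)'  # evaluate elisp in the running session

# -a '' starts the daemon automatically if it isn't already running.
emacsclient -c -a ''
```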
This isn't about correctness. It has a pretty good idea if you ask it in the right way: it can evaluate whether it thinks the idea is good, but sometimes that's on autopilot.
> it has a pretty good idea if you ask it in the right way
This phrasing embeds a rather questionable assumption: that somewhere the algorithm has a mind which "can evaluate" the real truth, but its character/emotions make it unwilling to tell you... and all you need to do is break past its quirks to get to the juicy logic that "must" be hidden inside.
I don't think that assumption is safe, let alone proven. Our human brains are practically hardwired to assume another mind on the other side (much like how we see faces through pareidolia), and in this case our instincts are probably not accurate. No matter how much we peel the onion looking, we won't find the onion's seeds.
Local Assistant/Model Capable. You can go fully local by choosing Aider & Goose and then hooking them up to your local model of choice (e.g. qwen3, gpt-oss) using the usual suspects: LM Studio / vLLM / Ollama.
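For the Aider-plus-Ollama route specifically, the setup is roughly the following; model names and flags vary by Aider version, so treat this as a sketch rather than gospel:

```sh
# Pull and serve a local model with Ollama (LM Studio or vLLM work similarly
# through their OpenAI-compatible endpoints).
ollama pull qwen3
ollama serve &

# Tell Aider where the local endpoint lives, then pick the local model.
export OLLAMA_API_BASE=http://127.0.0.1:11434
aider --model ollama_chat/qwen3
```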
I went to the GitHub repo and was expecting a section about Claude Code and best practices on how to set this up with Claude Code. Very curious to hear how that might work, especially given what you've found compared to Claude Code's love of grep.
> I went to the GitHub repo and was expecting a section about Claude Code and best practices on how to set this up with Claude Code. Very curious to hear how that might work, especially given what you've found compared to Claude Code's love of grep.