> What's the matter, you don't like cheap assistants?
I think the main reason I'm not personally excited about AI is that... no, I don't, actually.
I'm in my late 40s. I have had many opportunities to move into management. I haven't because while I enjoy working with others, I derive the most satisfaction from feeling like I'm getting my hands dirty and doing work myself.
Spending the entire day doing code reviews of my army of minions might be strictly more productive, but it's not a job I would enjoy having. I have never, for a second, felt some sort of ambitious impulse to move up the org chart and become some sort of executive giving marching orders.
The world that AI boosters are driving towards seems to me to be one where the only human jobs left are effectively middle management, where the leaf nodes of the org chart are all machines. It may be the case that such a world has greater net productivity and the stock prices will go up.
But it's not a world that feels meaningful, dignified, or desirable to me.
There are three kinds of developers:

1. Those that are motivated by "building things". The actual programming is just a means to an end.
2. Those that are motivated by the salary alone and either hate the work or are indifferent to it.
3. Those that are motivated by the art of programming itself. Hands on keyboard, thinking through a problem and solving it with code.
Developers that fall into categories 1 and 2 love AI. It's basically a dream come true for them ("I knocked out 3 side projects in a month" for #1 and "You're telling me that all I have to do is supervise the AI and I still get paid?" for #2).
It's basically a living nightmare for developers in category 3.
I've noticed that founders seem to be way higher on AI than non-founders. I think a lot of founders fit into category 1.
I'm definitely not at retirement age yet, but I do have to admit that I'm hopeful I can make it to retirement while still mostly working in a way that I enjoy.
At the same time, I've realized that "let me just try to squeeze out the last of my career" is a really unhealthy mindset for me to hold. It sort of locks me into a feeling like my best days are behind me or something.
So I am trying to dabble in using AI for coding and trying to make sure I stay open-minded and open to learning new things. I don't want to feel like a dinosaur.
I've used all of the popular coding agents, including Jules. The reality to me is that they can and should be used for certain kinds of low severity and low complexity tasks (documentation, writing tests, etc.). They should not be used for the opposite end of the spectrum.
There are many perspectives on coding agents because there are many different types of engineers, with different levels of experience.
In my interactions I've found that junior engineers tend to overestimate the capabilities of these agents and overuse them, while more senior engineers are better calibrated.
The biggest challenge I see is what to do in 5 years once a generation of fresh engineers never learned how compilers, operating systems, hardware, memory, etc. actually work. Innovation almost always requires deep understanding of the fundamentals, and AI may erode our interest in learning these critical bits of knowledge.
What I see as a hiring manager is senior (perhaps older) engineers commanding higher comp, while junior engineers become increasingly less in demand.
Agents are here to stay, but I'd estimate your best engineering days are still ahead.
You're missing a third option, which is actually closer to the role of managing coding agents: being a "senior engineer / architect / what-have-you". IME the more senior engineering roles (staff, principal, fellow, etc.) in most companies, especially Big Tech companies, involve coordinating large projects across multiple teams of engineers. It is essentially a necessity to achieve the scale of impact required at those levels.
At that level, you almost never get to be hands-on with code; the closest you get is code reviews. Instead you "deliver value" through identifying large-scale opportunities, proposing projects for them, writing design and architecture docs, and conducting "alignment meetings" where you convince peers and other teams to build the parts needed to achieve your vision. The actual coding grunt work is done by a bunch of other, typically more junior engineers.
That is also the role that often gets derided as "architecture astronauts." But it is still an extremely technical role! You need to understand all the high-level quirks of the underlying systems (and their owners!) to ensure they can deliver what you envision. But your primary skills become communication and people skills. When I was in that role, I liked to joke that my favorite IDEs are "IntelliJ, Google Docs, and other engineers."
You'll note that is a very different role from management, where your primary responsibilities are more people-management and owning increasingly large divisions of the business. As a senior engineer you're still a leaf node in the org-chart, but as a manager you have a sub-tree that you are trying to grow. That is where org-chart climbing (and uncharitably, "empire-building") become the primary skillset.
As such, the current Coding Agent paradigm seems very well-suited for senior engineers. A lot of the skillsets are the same, only instead of having to persuade other teams you just write a design doc and fire off a bunch of agents, review their work, and if you don't like their outputs, you can try again or drop down to manual coding.
Currently, I'm still at the "pair-program with AI" stage, but I think I'll enjoy having agents. These days I find that coding is just a means to an end that is personally more satisfying: solving problems.
> As such, the current Coding Agent paradigm seems very well-suited for senior engineers. A lot of the skillsets are the same, only instead of having to persuade other teams you just write a design doc and fire off a bunch of agents, review their work, and if you don't like their outputs, you can try again or drop down to manual coding.
I have tried this a few times; it's not there yet. The failures are consistently-shaped enough to make me wonder about the whole LLM approach.
Compared to handing off to other engineers there are a few problems:
- other engineers learn the codebase much better over time, vs relying on either a third party figuring out the right magic sauce to make it understand/memoize/context-ize your codebase or a bunch of obnoxious prompt engineering
- other engineers take feedback and don't make the same types of mistakes over and over. I've had limited luck with things like "rules" for more complex types of screwups - e.g. "don't hack a solution for one particular edge case three-levels deep in a six-level call tree, find a better abstraction to hoist out the issue and leave the codebase better than you found it"
- while LLMs are great at writing exhaustive coverage tests of simple functionality, they aren't users of the project and generally struggle to really get into that mindset to anticipate cross-cutting interactions that need to be tested; instead you get a bunch of local maxima "this set of hacks passes all the current testing" candidate solutions
- the "review" process starts to become silly and demoralizing when your coworker is debating with you about code neither of you wrote in a PR (I sure hope you're still requiring a second set of human eyes on things, anyway!)
If you have a huge backlog of trivial simple small-context bugs, go nuts! It'll help you blow through that faster! But be prepared to do a lot of QA ;)
Generally I'd call most of the issues "context rot" in that even after all the RL that's been done on these things to deal better with out-of-distribution scenarios, they still struggle with the huge amount of external context that is necessary for good engineering decision making in a large established codebase. And throwing more snippets, more tickets, more previous PRs, etc, at it seems to rapidly hit a point of diminishing returns as far as its "judgement" in picking and following the right bits from that pile at the right time.
It's like being a senior engineer with a team of interns who aren't progressing: you're stuck as a senior engineer constantly cleaning up crappy PRs, without being able to grow into the role of an architect who has mentored, and is now guiding, a bunch of other staff and senior engineers who themselves do more of the nitty gritty.
Maybe the models get better, maybe they don't. But for now, I find it's best to go for the escape hatch quickly once things start going sideways. Because me getting better at using today's models won't cause any generational leap forward. That feels like it will only come from lower level model advances, and so I don't want to get better at herding interns-who-can't-learn-from-me. Better for now to stick to mentoring the other people instead.
I've never seen someone who can do good architecture, API, or product design that doesn't deeply relish getting their hands dirty all the way down in the guts of the thing. (To be clear, I have seen plenty of people who like getting their hands dirty who also suck at design. It's a necessary but not sufficient condition.)
How can you do good design work if the only "people" who have experience with what you're designing are the AI agents you order around? I guess if you're designing an API that you only intend to be used by other AI agents, that's probably fine.
At some point, though, it's gotta feel like working at a pet food company coming up with new cat food recipes. You can be very successful by focus testing on cats, but you'll never really know the thing you're making. (No judgement if you do want to eat cat food, I guess.)
And all that with an energy requirement a lot higher than a single human just doing it right in the first place, and learning something in the process. It all seems so incredibly weird and futile to me.
An assistant is an intelligent human being who understands basic concepts, they are not a slot machine like AI is.
My experience using these is that it takes more time to reverse engineer the bloat they spill out than to write the thing myself.
God help you if you attempt to teach them anything, they will say "You're absolutely right!" and then continue churning out the same broken code.
Then you have to restart with a "fresh" context and give them the requirements from scratch and hope that this time they come up with something better.