There’s something ironic here. For decades, we dreamed of semi-automating software development. CASE tools, UML, and IDEs all promised higher-level abstractions that would "let us focus on the real logic."
Now that LLMs have actually fulfilled that dream — albeit by totally different means — many devs feel anxious, even threatened. Why? Because LLMs don’t just autocomplete. They generate. And in doing so, they challenge our identity, not just our workflows.
I think Colton’s article nails the emotional side of this: imposter syndrome isn’t about the actual 10x productivity (which mostly isn't real), it’s about the perception that you’re falling behind. Meanwhile, this perception is fueled by a shift in what “software engineering” looks like.
LLMs are effectively the ultimate CASE tools — but they arrived faster, messier, and more disruptively than expected. They don’t require formal models or diagrams. They leap straight from natural language to executable code. That’s exciting and unnerving. It collapses the old rites of passage. It gives power to people who don’t speak the “sacred language” of software. And it forces a lot of engineers to ask: What am I actually doing now?
I now understand what artists felt when they saw Stable Diffusion images: AI code is often just wrong. Not in the moral sense, but it contains tons of bugs, weirdness, excess, and peculiarities you'd never be happy to see in a real code base.
Often, getting rid of all of this takes a comparable amount of time to doing the job in the first place.
Now, I can always switch to a different model, increase the context, prompt better, etc., but I still feel that genuinely good AI code is just out of arm's reach; or, when something does click and the AI magically starts producing exactly what I want, the magic doesn't last.
Like with Stable Diffusion, people who don't care as much, or aren't knowledgeable enough to know better, just don't get what's wrong with this.
A week ago, I received a bug ticket claiming one of the internal libs I wrote didn't work. I checked out the reporter's code, which was full of weird issues (like the debugger not working and the TypeScript being full of red squiggles), and my lib crashed somewhere in the middle, in some esoteric minified JS.
When I asked the guy who wrote it what's going on, he admitted he vibe coded the entire project.
The comparison to art is apt. Generated art gets the job done for most people. It's good enough. Maybe it's derivative, maybe there are small inaccuracies, but it is available instantly for free and that's what matters most. Same with code, to many people.
And the knock-on effect is that there is less menial work. Artists are commissioned less for the local fair, their friend's D&D character portrait, etc. Programmers find less work building websites for small businesses, fixing broken widgets, etc.
I wonder if this will result in fewer experts, or less capable ones. As we lose the jobs that were previously used to hone our skills, will people go out of their way to train themselves for free, or will we just regress?
Artistic paintings are not technical artwork like computer programs or circuit boards. Nothing falls down if something is out of place.
A schematic of a useless amplifier that oscillates looks just as pretty as one of a correct amplifier. If we just want to use it as a repeated print for the wallpaper of an electronics lab, it doesn't matter.
> When I asked the guy who wrote it what's going on, he admitted he vibe coded the entire project.
This really irritates me. I’ve had the same experience with teammates’ pull requests they ask me to review. They can’t be bothered to understand the thing, but then expect you to do it for them. Really disrespectful.
At the same time, there's also a huge number of annoying tech bros constantly shouting at artists things like, 'Your work was never valuable to begin with; why can't I copy your style? You're nothing but another matrix.'
You miss the fundamental constraint. The bottleneck in software development was never typing speed or generation, but verification and understanding.
Even if LLMs worked perfectly without hallucinations (they don't and might never), a conscientious developer must still comprehend every line before shipping it. You can't review and understand code 10x faster just because an LLM generated it.
In fact, reviewing generated code often takes longer because you're reverse-engineering implicit assumptions rather than implementing explicit intentions.
The "10x productivity" narrative only works if you either:
- Are not actually reviewing the output properly
or
- Are working on trivial code where correctness doesn't matter.
Real software engineering, where bugs have consequences, remains bottlenecked by human cognitive bandwidth, not code generation speed. LLMs shifted the work from writing to reviewing, and that's often a net negative for productivity.
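To make the "reverse-engineering implicit assumptions" point concrete, here is a minimal, hypothetical C sketch (the function and names are invented for illustration, not taken from any real review): code that looks clean and passes a happy-path test, but quietly bakes in a precondition nobody stated.

    #include <stdio.h>

    /* Looks like a tidy, generated lookup helper. It is a binary search,
     * so it silently assumes `values` is sorted ascending; nothing in the
     * code or its comments states that precondition. A reviewer has to
     * reverse-engineer it, or ship a function that can report "not found"
     * for an element that is actually present. */
    static int find_index(const int *values, int count, int target) {
        int lo = 0;
        int hi = count - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (values[mid] == target) return mid;
            if (values[mid] < target)
                lo = mid + 1;
            else
                hi = mid - 1;
        }
        return -1;
    }

    int main(void) {
        int sorted[]   = {1, 3, 5, 7, 9};   /* happy-path test: passes */
        int unsorted[] = {9, 1, 7, 3, 5};   /* same elements, unsorted */
        printf("%d\n", find_index(sorted, 5, 1));    /* 0, as expected */
        printf("%d\n", find_index(unsorted, 5, 1));  /* -1, even though 1 is present */
        return 0;
    }

Spotting that kind of unstated precondition is exactly the review work that doesn't get 10x faster just because the code was generated quickly.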
> Even if LLMs worked perfectly without hallucinations (they don't and might never), a conscientious developer must still comprehend every line before shipping it.
This seems excessive to me. Do you comprehend the machine code output of a compiler?
I must comprehend code at the abstraction level I am working at. If I write Python, I am responsible for understanding the Python code. If I write Assembly, I must understand the Assembly.
The difference is that compilers are deterministic with formal specs. I can trust their translation. LLMs are probabilistic generators with no guarantees. When an LLM generates Python code, that becomes my Python code that I must fully comprehend, because I am shipping it.
That is why productivity is capped at review speed: you can't ship what you don't understand, regardless of who or what wrote it.
Compilers definitely don't have formal specs. Even CompCert has them mostly, but not entirely.
It can actually be worse when they do. Formalizing behavior means leaving out behavior that can't be formalized, which basically means that if your language has undefined behavior, the handling of it will be maximally confusing, because your compiler can no longer have hacks for handling it in a way that "makes sense".
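A minimal C sketch of that point (the standard textbook example, not something from this thread): signed integer overflow is undefined behavior, so a spec-faithful optimizer may assume it never happens and quietly delete a check that "makes sense" on real hardware.

    #include <limits.h>
    #include <stdio.h>

    /* Overflow check that relies on wrap-around. Signed overflow is
     * undefined behavior in C, so an optimizer is allowed to assume
     * x + 1 > x always holds and fold this expression to 0. */
    static int will_overflow(int x) {
        return x + 1 < x;
    }

    int main(void) {
        /* Commonly prints 1 at -O0 and 0 at -O2 with mainstream compilers;
         * neither result is guaranteed by the standard. */
        printf("%d\n", will_overflow(INT_MAX));
        return 0;
    }

The unoptimized build behaves the way most programmers expect on two's-complement machines, while the optimized, formally defensible build is the one that looks maximally confusing.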
There are many jobs that could be eliminated with software but haven't been, because managers don't want to hire SWEs without proven value. I don't think HN realizes how big that market is.
With AI, the managers will replace their employees with a bunch of code they don't understand, watch that code fail in 3 years, and have to hire SWEs to fix it.
I'd bet those jobs will outnumber the ones initially eliminated by having non-technical people deliver the first iteration.
Many of those jobs will be high-skill/impact because they are necessarily focused on fixing stuff AI can't understand.
I try using an LLM for coding now and then, and tried again today by giving a model dedicated to coding a rather straightforward prompt and task.
The names all looked right, the comments were descriptive, and it had test cases demonstrating that the code worked. It looked like something I'd expect a skilled junior or a senior to write.
The thing is, the code didn't work right, and the reasons it didn't work were quite subtle. Nobody would have fixed it without knowing how to do it in the first place, and it took me nearly as long to figure out why as it would have to just write it myself.
I could see it being useful to a junior who hasn't solved a particular problem before and wanted to get a starting point, but I can't imagine using it as-is.
Nor do they produce those (do they?). That is what I would like to see. Formal models and diagrams are not needed to produce code. Their point is that they allow us to understand code and to formalize what we want it to do. That's what I'm hoping AI could do for me.
Not who you replied to but also someone who found the comment ChatGPT-ey - it’s also the sentence phrasing and tone. “It’s not just x, it’s a whole different paradigm y” is a classic ChatGPT method.
The use of en dashes and short staccato sentences for rhetorical flourish is a giveaway. AI writes like a LinkedIn post.
> Why? Because LLMs don’t just autocomplete. They generate. And in doing so, they challenge our identity, not just our workflows.
is what raised flags in my head. Rather than explain the difference between glorified autocompletion and generation, the post assumes there is a difference, then uses florid prose to hammer in a point it never proved.
I've heard the paragraph "why? Because X. Which is not Y. And abcdefg" a hundred times. DeepSeek uses it on me every time I ask a question.
Here’s the thing though…if you read enough of it, you’re gonna start using it a lot more often. It’s not just AI slop, it’s fundamentally rewiring how we as a society think in real time! It’s the classic copycat AI mannerisms cried wolf
Problem!
I definitely didn't "randomly" suggest it, unless you're suggesting all human actions are the result of randomness. I also just re-read the guidelines and didn't see anything about it in the letter of the law, but I agree it probably goes against the spirit. I'll take the downvotes and keep it to myself next time.
And while I don't categorically object to AI tools, I think you're selling objections to them short.
It's completely legitimate to want an explainable/comprehensible/limited-and-defined tool rather than an "it just works" tool. Ideally, this puts one in an "I know it's right" position rather than an "I scanned it and it looks generally right and seems to work" position.
I think if you're paying any attention to the state of the world, you can see labor is getting destroyed by capital - bad wages, worse working conditions including more surveillance, metrics everywhere, immoral companies, short contracts and unstable companies/career paths, increasing monopolization and consolidation of power. We were so insulated from this for so long that it's easy to not really grasp how bad things are for most workers. Now the precarity of our situation is dawning on us.
It kills the magic of coding for sure. The thing is, now that everyone is doing it, you get a ton of slop. Computing's become saturated as hell. We don't even need more code as it is. Before LLMs you could pretty much find what you needed on GitHub… Now it's even worse.