I recently wrote a blog post about exactly this, and I agree with your perspective. Vibe coding helps with showing other people your idea, getting them to understand it, try it, and, most importantly, helping you fail fast. But as the product matures, the gains from using LLMs and agentic engineering will drop from something like a 10000% efficiency boost to maybe a 30(?)% productivity gain. Which is still awesome, of course.
"The real test of Vibe coding is whether people will finally realize the cost of software development is in the maintenance, not in the creation."
It's not awesome, not for us. A 30% productivity gain would be enormous. Just imagine 30% of developers losing their jobs, on top of outsourcing and all the new graduates flooding out of colleges after CS has been hyped so much in recent years.
I really doubt that a 30% productivity gain would result in 30% of developers losing their jobs. Believing this requires assuming that businesses and economies will never grow.
It also doesn't make mathematical sense. If you now have 130% developer capacity, then the fraction of developers you need to keep is `x` defined by 130% * x = 100%, so x ≈ 76.9%, implying you'd lay off about 23.1% of developers, not 30%.
Percentage increases are not the same as percentage losses.
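The arithmetic above can be checked in a couple of lines (a sketch; the 30% figure is just the hypothetical from this thread):

```python
# Hypothetical: each remaining developer becomes 30% more productive.
productivity_gain = 0.30

# Per-developer capacity rises to 1.3x, so covering the same 100% of
# work needs only 1 / 1.3 of the original headcount.
kept = 1 / (1 + productivity_gain)
laid_off = 1 - kept

print(f"keep {kept:.1%}, lay off {laid_off:.1%}")  # keep 76.9%, lay off 23.1%
```

The asymmetry is general: a gain of g lets you shed g / (1 + g) of the headcount, which is always less than g.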
Good tooling, high-level languages, faster computers, and sane standards also enabled enormous productivity gains. I predict very few positions lost to LLMs; rather, as with any technical "revolution", we'll just set a new baseline for productivity, get rid of some bottlenecks, and end up in a new situation where we need even more engineers to keep everything running.
Most jobs lost to AI are lost because companies want or need to lay people off, and shareholders like "replaced 30% of our workforce with AI" more than any other conceivable reason.
For me personally, both at work and in my free time, I spend _more_ time on writing things _that matter_ since I've freed up time by using LLMs for boilerplate tasks.
My motto is - If it wasn’t worth writing, it won’t be worth reading.
A good example of writing where I'd recommend using LLMs is product documentation. You pass the diff, the description of the task, and the context (the existing documentation) with a prompt like "Update the documentation…".
Documentation is important, but it's not prose. Writing a comment on Hacker News, however, is.
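The "diff + task + existing docs" workflow described above can be sketched in a few lines. Everything here is illustrative: the function name, the prompt wording, and the inputs are assumptions, and the resulting string would be handed to whatever LLM client you actually use.

```python
def build_doc_update_prompt(diff: str, task: str, existing_docs: str) -> str:
    """Bundle a code diff, the task description, and the current docs
    into a single documentation-update prompt (hypothetical format)."""
    return (
        "Update the documentation to reflect the change below.\n\n"
        f"Task description:\n{task}\n\n"
        f"Code diff:\n{diff}\n\n"
        f"Existing documentation:\n{existing_docs}\n"
    )

# Example usage with made-up inputs:
prompt = build_doc_update_prompt(
    diff="- retries = 3\n+ retries = 5",
    task="Increase the default retry count",
    existing_docs="The client retries failed requests 3 times by default.",
)
```

The point is that the model gets all three pieces of context in one shot, so it can rewrite only the sentences the diff actually invalidates.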
If you look at the beginning of the 20th century, university education was much less accessible and had far fewer participants, and the results were much more impressive than today's across all disciplines.
There was also a lot of relatively low-hanging fruit in most fields, because we basically didn't have the technology before, or simply didn't bother to look.
There's a ton of low hanging fruit now - more than ever. Every question answered raises multiple new questions. If there is a lack of opportunity to answer interesting questions, it is certainly not because the questions aren't there.
But someone needed to realise it was low-hanging fruit in the first place. By the end of the 19th century, the general agreement was that physics was pretty much done and we were just polishing the details.
Instead of 1 reviewer, have 10. Also, don't we benefit as a society when everyone is more highly educated? Sure, we have a ways to go before we get there, namely with regard to training resistance to disinformation and including more resistance to populism/fascism in the curriculum, so that we have a chance to build better and more equal societies.
I guess one big component is perceived value. If the market thinks it's valuable, it's valuable. It's when the market loses value and/or interest that the true value will be evaluated. Companies that have grown organically don't have that issue, since they have proven themselves over time. It's different for VCs, for good and bad, I guess.
"IBM Bob is IBM's new coding agent, currently in Closed Beta."
PromptArmor did a similar attack (1) on Google's Antigravity, which is also a beta version. Since then, they've added a secure mode (2).
These are still beta tools. When the tools are ready, I'd argue that they will probably be safer out of the box than a whole lot of users who just blindly copy-paste stuff from the internet, add random dependencies without proper due diligence, etc. These tools might actually help users act more securely.
I'm honestly more worried about all the other problems these tools create. Vibe coded problems scale fast. And businesses have still not understood that code is not an asset, it's a liability. Ideally, you solve your business problems with zero lines of code. Code is not expensive to write, it's expensive to maintain.
While they have found some solvable issues (e.g. "the defense system fails to identify separate sub-commands when they are chained using a redirect operator"), the main issue is unsolvable. If you allow an LLM to edit your code and also give it access to untrusted data (like the Internet), you have a security problem.
A problem, yes, but I think GP is correct in comparing the problem to that of human workers. The solution there has historically been RBAC and risk management. I don't see any conceptual difference between a human and an automated system on this front.
> I don’t see any conceptual difference between a human and an automated system on this front
If an employee of a third party contractor did something like that, I think you’d have better chances of recovering damages from them as opposed to from OpenAI for something one of its LLMs does on your behalf.
A human worker can be coached, fired, or sued; any number of things can be done to a human worker over such a mistake or willful attack. But AI companies, as we have seen with almost every issue so far, will be given a pass while Sam Altman sycophants cheer and talk about how it'll "get better" in the future, just trust them.
Yeah, if I hung a sign on my door saying "Answers generated by this person may be incorrect" my boss and HR would quickly put me on a PIP, or worse. If a physical product didn't do what it claimed to do, it would be recalled and the maker would get sued. Why does AI get a pass just pooping out plausible but incorrect, and sometimes very dangerous, answers?
If anything, the limit of RBAC is ultimately the human attention required to provision, maintain and monitor the systems. Endpoint security monitoring is only as sophisticated as the algorithm that does the monitoring.
I'm actually most worried about the ease of deploying RBAC with more sophisticated monitoring to control humans but for goals that I would not agree with. Imagine every single thing you do on your computer being checked by a model to make sure it is "safe" or "allowed".
>If you allow a human to edit your code and also give them access to untrusted data (like the Internet), you have a security problem.
Security shouldn't be viewed in absolutes (either you are secure or you aren't) but in degrees.
LLMs can be used securely, just the same as everything else; nothing is ever perfectly secure.
They can be reasoned about from a mathematical perspective, yes. But an LLM will happily shim out your code to make a test pass. Most people would consider that "unreasonable".
I have an issue with the "code is a liability" framing. Complexity and lack of maintainability are the ultimate liabilities behind it. Code is often the least-bad alternative for solving a given problem compared to unstructured data in spreadsheets, no-code tools without version history, webs of Zapier hooks, opaque business processes that differ in every office, or whatever other alternatives exist.
It's a good message for software engineers, who have the context to understand when to take on that liability anyway, but it can lead other job functions into being too trigger-happy on solutions that cause all the same problems with none of the mitigating factors of code.
> When the tools are ready, I'd argue that they will probably be safer out of the box than a whole lot of users who just blindly copy-paste stuff from the internet, add random dependencies without proper due diligence, etc. These tools might actually help users act more securely.
That speculative statement is doing way too much work in the "they're just beta tools" argument.
Yes. But the exploitable vector in this case is still humans. AI is just a tool.
The non-deterministic nature of an LLM can also be used to catch a lot of attacks. I often use LLMs to look through code, libraries, etc. for security issues, vulnerabilities, and other problems, as a second pair of eyes.
With that said, I agree with you. Anything can be exploited, and LLMs are no exception.
As long as a human has control over a system AI can drive, it will be as exploitable as the human.
Sure, this is the same as positing P ≠ NP, but the confidence that a language model will somehow become a secure, deterministic system fundamentally lacks language-comprehension skills.
Thanks! I did not spoof! I thought that since it was my local Tailnet, only devices on that net could connect. I just rebuilt the network as a precaution.
Very true! That was Rumsfeld, right? Unknown unknowns?
And thank you! I'm glad you appreciated the humor. I'm still a novice builder, so the thought of ssh-ing to my home computer from a plane geeks me out. I'm about 20 years late but I'm here now!
It’s interesting how you end up being a part of the current trends whether you know it or not.
I actually set up a blog on the 15th. No real content yet, but I've almost finished a first real post. Seeing this made me chuckle; I thought _I_ had an original thought about missing blogs, but I'm obviously just part of the hive mind. I truly hope this trend is here to stay.
I also want to share this video on the topic ”The reason no one has hobbies anymore”, it was shared by a podcast I was listening to the other day and I think it’s well worth watching. https://youtu.be/IUhGoNTF3FI
https://blog.oak.ninja/shower-thoughts/2026/02/12/business-i...