
It's all great until

>> I thought I would see a pretty drastic change in terms of Pull Requests, Commits and Line of Code merged in the last 6 weeks. I don’t think that holds water though

The chart basically shows the same output with Claude as before, which kinda matches what I felt when using LLMs.

You "feel" more productive and you definitely feel "better" because you don't do the work now, you babysit the model and feel productive.

But at the end of the day the output is the same, because every advantage of LLMs is nerfed by the time you have to spend reviewing it all, fixing it, re-prompting it, etc.

And because you offload the "hard" part - and don't flex that thinking muscle - your skills decline pretty fast.

Try using Claude or another LLM for a month and then try building a tiny little app without it. It's not only the code part that will seem hard, but the general architecture/structuring too.

And in the end the whole codebase slowly (but not that slowly) degrades, and in the longer term it results in a net negative. At least with current LLMs.



I've been exploring vibe coding lately and by far the biggest benefit is the lack of mental strain.

You don't have to hold your code in your head as a conceptual whole, or plan the technical implementation of your next hour of code, all while a stubborn bug is taunting you.

You just ask Mr. Smartybots and it delivers anything from proofreading to documentation and whatnot, with some minor fuckups occasionally.


"mental strain" in a way of remembering/thinking hard is like muscle strain. You need it to be in shape otherwise it starts atrophying.


My friend, there’s no solid evidence that this is the case. So far, there are a bunch of studies, mostly preprints, that make vague implications, but none that can show clear causal links between a lack of mental strain and atrophying brain function from LLMs.


You're right, we only have centuries of humans doing hard things that require ongoing practice to stay sharp. Ask anyone who does something you can't fake, like playing the piano, what taking months off does to their abilities. To be fair, you can get them back much faster than someone who never had the skills to begin with, but skills absolutely atrophy if you are not actively engaged with them.


My assembly skills have atrophied terribly, and that's ok.


Using LLMs is not moving up a level of abstraction, it is removing your own brain from the abstraction altogether


I wish, but as it stands right now LLMs have to be driven and caged ruthlessly. Conventions, architecture, interfaces, testing, integration. Yes, you can YOLO it and just let it cook up _something_, but that something will be an unmaintainable mess. So I'm removing my brain from the abstraction level of code (as much as I dare), but most definitely not from everything else.


We will all become project managers. That's not removing your brain from the problem.


We know that learning and building mental capabilities require effort over time. We know that when people have not been applying/practicing programming for years, their skills have atrophied. I think a good default expectation is that unused skills will fade over time. Of course the questions are: is the engagement we have with LLMs enough to sustain the majority of those skills? Or are there new skills one builds that can compensate for those lost (even when the LLM is no longer used)? How quickly do the changes happen? Are there wider effects, positive and/or negative?


I mostly referred to skills, not brain function itself.


But the mental strain is how you build skills and get better at your job over time. If it's too much mental strain, maybe your code's architecture or implementation can be improved.

A lot of this sounds like "this bot does my homework for me, and now I get good grades and don't have to study so hard!"


It's alright until you have a bug the LLM can't solve; then you have to go into the code yourself and you realize what a mess it has made.


I haven't found such a bug yet. If it fails to debug on its second attempt I usually switch to a different model or tell it to carpet bomb the code with console logs, write test scripts and do a web search, etc.

The strength (and weakness) of these models is their patience is infinite.
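
To be concrete, "carpet bomb the code with console logs" just means instrumenting every input and branch so a failing run explains itself. A minimal sketch of what that tends to look like, in Python with invented Order/Coupon names and made-up discount logic:

    import logging
    from dataclasses import dataclass

    logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
    log = logging.getLogger(__name__)

    @dataclass
    class Coupon:
        rate: float    # e.g. 0.10 for 10% off
        expired: bool

    @dataclass
    class Order:
        total: float

    def apply_discount(order: Order, coupon: Coupon) -> float:
        # log the inputs and every branch taken
        log.debug("apply_discount: order=%r coupon=%r", order, coupon)
        if coupon.expired:
            log.debug("coupon expired, returning undiscounted total")
            return order.total
        discounted = order.total * (1 - coupon.rate)
        log.debug("total %.2f -> %.2f (rate=%.2f)", order.total, discounted, coupon.rate)
        return discounted

    print(apply_discount(Order(total=100.0), Coupon(rate=0.1, expired=False)))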


Well, I don't have the patience to wait for it to find the right solution ahah


Perhaps you set a very high quality bar, but I don't see the LLMs creating messy code. If anything, they are far more diligent in structuring it well and making it logically sequenced and clear than I would be. For example, very often I name a variable slightly incorrectly at the start, realise it should be slightly different by the end, and only occasionally bother to go rename it everywhere. Even with automated refactoring tools to do it, it's just more work than I have time for. I might just add a comment above it somewhere explaining that the meaning is slightly different from how it is named. This sort of thing x 100, though.


> they are far more diligent in structuring it well and making it logically sequenced and clear than I would be

Yes, with the caveat: only on the first/zeroth shot. But even when they keep most/all of the code in context, if you vibe code without incredibly strict structuring/guardrails, by the time you are 3-4 shots in the model has "forgotten" the original arch, is duplicating data structures for whatever it needs _this_ shot, and will gleefully end up with amnesiac-level repetition: duplicate code that does "mostly the same" thing, all of which acts as further poison for progress. The deeper you go without human intervention, the worse this gets.

You can go the other way, and it really does work. Set up strict types, clear patterns, clear structures. And intervene to explain and direct. The type of thing senior engineers push back on in junior PRs: "Why didn't you just extend this existing data structure and factor that call into the trivially obvious extension of XYZ??"

"You're absolutely right!" etc.


> Perhaps you set a very high quality bar

Yes, of course. Do you not? Aren't you embarrassed to admit that you use AI because you don't care about quality?

I would be ashamed to think "you set a high quality bar" is some kind of critique


I know what you're writing is the whole point of vibe coding, but I'd strongly urge you not to do this. If you don't review the code an LLM is producing, you're taking on technical debt. That's fine for small projects and scripts, but not for things you want to maintain for longer. Code you don't understand is essentially legacy code. LLM output should be bent to our style and taste, and ideally look like our own code.

If that helps, call it agentic engineering instead of vibe coding, to switch to a more involved mindset.


Not for me. I just reverse engineered a Bluetooth protocol for a device, which would have taken me at least a few days capturing streams of data in Wireshark. Instead I dumped the entire captures into an LLM and it gave me much more control finding the right offsets etc. It took me only a day.
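
For context, the grunt work being offloaded is roughly this kind of offset-hunting across captured packets. A minimal sketch, with invented packets and an invented "brightness" setting:

    # Captures of the same command sent with three known settings.
    packets = [
        bytes.fromhex("55aa010a1e"),  # brightness 30 (0x1e)
        bytes.fromhex("55aa010a3c"),  # brightness 60 (0x3c)
        bytes.fromhex("55aa010a5a"),  # brightness 90 (0x5a)
    ]
    settings = [30, 60, 90]

    # Find byte offsets that vary in lockstep with the known setting.
    for offset in range(min(len(p) for p in packets)):
        if [p[offset] for p in packets] == settings:
            print(f"offset {offset} carries the setting")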


That's not really coding though, is it?

No point comparing apples with oranges; most of us don't program by reverse engineering with Wireshark.



