
> Does that work as well with non-strangers who are your coworker?

Yeah, OK, I guess you have to be a bit less blunt than Linux kernel maintainers in this case, but I think you can still shift the culture towards more careful PRs.

> why are you even organizationally using LLMs

Many people believe LLMs make coders more productive, and given the rapid progress of generative AI it's probably unwise to dismiss that view out of hand. But there need to be guardrails to ensure the productivity is real and not just creating liability. We could live with weaker guardrails if we could trust that the code was in a trusted colleague's head before appearing in the repo. But if we can't, stronger guardrails are the only way, aren't they?
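
As one hypothetical example of such a guardrail, a pre-commit hook could refuse commits that touch application code without touching any tests. This is just a sketch; the src/ and tests/ layout and the hook itself are my own assumptions, not something from this thread:

    import subprocess
    import sys

    def staged_files() -> list[str]:
        # Ask git which paths are currently staged in the index.
        out = subprocess.run(
            ["git", "diff", "--cached", "--name-only"],
            capture_output=True, text=True, check=True,
        )
        return [line for line in out.stdout.splitlines() if line]

    def main() -> int:
        files = staged_files()
        touches_code = any(f.startswith("src/") and f.endswith(".py") for f in files)
        touches_tests = any(f.startswith("tests/") for f in files)
        if touches_code and not touches_tests:
            print("Guardrail: application code changed but no tests changed.")
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())

A gate like this doesn't prove the tests are good, of course; it only makes "I never even ran it" commits harder to land quietly.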



I don’t want to just dismiss the productivity increase. I feel 100% more productive on throwaway POCs and maybe 20% more productive on large important codebases.

But when I actually sit down and think it through, I’ve wasted multiple days chasing down subtle bugs that I never would have introduced myself. It could very well be that there’s no productivity gain for me at all. I wouldn’t be at all surprised if the numbers showed that was the case.

But let’s say I am actually getting 20%. If this technology dramatically increases the output of juniors and mid-level technical tornadoes, that’s going to easily erase that 20% gain.

I’ve seen codebases that were dominated by mid-level technical tornadoes and juniors; no amount of guardrails could ever fix them.

Until we are at the point where no human has to interact with code (and I’m skeptical we will ever get there short of AGI), we need automated objective guardrails for “this code is readable and maintainable”, and I’m 99.999% certain that is just impossible.
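
To be fair, proxies for this exist today; they just measure complexity, not readability. A rough sketch of that kind of check (the AST-walking approach and the threshold of 10 are my own arbitrary assumptions):

    import ast
    import sys

    BRANCHES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp, ast.IfExp)

    def complexity(func: ast.AST) -> int:
        # 1 for the function itself, plus 1 per branching construct inside it.
        return 1 + sum(isinstance(n, BRANCHES) for n in ast.walk(func))

    def check(path: str, threshold: int = 10) -> int:
        # Flag every function whose rough branch count exceeds the threshold.
        tree = ast.parse(open(path).read(), filename=path)
        failures = 0
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                score = complexity(node)
                if score > threshold:
                    print(f"{path}:{node.lineno} {node.name} complexity={score}")
                    failures += 1
        return failures

    if __name__ == "__main__":
        sys.exit(1 if sum(check(p) for p in sys.argv[1:]) else 0)

And that's exactly the problem: a function can pass this gate and still be unreadable. The objectively checkable part is a proxy; the "readable and maintainable" part isn't in it.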


My point in that second question was: is the human challenge of getting a lot of inexperienced engineers to fully understand LLM output actually worth the time, effort, and money to solve, versus sticking to the technical problems you're trying to make the LLM solve?

Usually organizational changes are massive efforts. But I guess hype is a hell of an inertia buster.


The change is already happening. People graduating now are largely "AI-first", and it's going to be even worse if you listen to what teachers say. And management often welcomes it too. So you need to deal with it one way or another.



