
I've been saying this for a while. The issue is that if you don't intimately know your code, you can't truly maintain it. What happens when the LLM can't figure out some obscure bug that's costing you $$$,$$$ per minute? You think being unable to have the AI figure it out is an acceptable answer? Of course not. LLMs are good for figuring out bugs and paths forward, but don't bet your entire infrastructure on them. Use them as an assistant, not a hammer.


"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."

- Brian Kernighan

So if there's a bug in code that an LLM wrote, simply wait 6 months until the LLMs are twice as smart?


> > What happens when the LLM can't figure out some obscure bug that's costing you $$$,$$$ per minute?

> simply wait [about a quarter of a million minutes] until the LLMs are twice as smart?

...




