
We may be underestimating the effort that goes into cleaning up LLM messes. LLMs learn to program from code bases written by humans. Not just written by humans, maintained by humans. So the bugs that humans spot and remove are under-represented in the training data. Meanwhile, the bugs that evade human skill at debugging lurk indefinitely and are over-represented in the training data.

We have created tools that write code with bugs humans have difficulty spotting. Worse, we judge the quality of the code these new tools produce as if they had no special skill at writing bugs we cannot spot, despite the nature of their training data.
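
A toy Bayesian calculation makes the selection effect concrete. Every number below (the share of bugs that are hard to spot when written, and the removal rates under maintenance) is a purely hypothetical assumption, chosen only to illustrate the mechanism, not a measurement:

    # Toy model of the survivorship bias described above.
    # All rates are hypothetical assumptions for illustration.

    # Assumed share of newly written bugs that are hard to spot.
    p_hard_written = 0.10

    # Assumed probability that maintenance eventually removes a bug.
    p_removed_given_easy = 0.95
    p_removed_given_hard = 0.30

    # Probability a bug survives into a mature, maintained codebase.
    p_survive_easy = (1 - p_removed_given_easy) * (1 - p_hard_written)
    p_survive_hard = (1 - p_removed_given_hard) * p_hard_written

    # Bayes: share of *surviving* bugs that are hard to spot.
    p_hard_surviving = p_survive_hard / (p_survive_easy + p_survive_hard)

    print(f"hard-to-spot share of bugs as written:   {p_hard_written:.0%}")
    print(f"hard-to-spot share of bugs that survive: {p_hard_surviving:.0%}")

Under those made-up rates, hard-to-spot bugs are about 10% of bugs as written but about 61% of the bugs that survive maintenance, and it is the surviving code that ends up in the training corpus.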
