
I think that, sure, there's no reason to believe AI will stop improving.

But everyone is losing trust not over whether LLMs could potentially write good code; it's a loss of trust in the users who use LLMs to generate those patches uncontrollably, without any knowledge, fact-checking, or verification (many of them may not even know how to test the code).

In other words, while LLMs are potentially capable of being good SWEs, the humans behind them right now are spamming, producing nonsense work, and leaving unpaid open source maintainers to review it and give feedback (most of the time, manually).


