
The problem is that it's not quite code. It's almost code, but without the precision, which puts it into a sort of Uncanny Valley of code-ness. It's detailed instructions for someone to write code, but the someone in this case is an alien or insane or on drugs so they might interpret it the way you meant it or they might go off on some weird tangent. You never know, and that means you'll need to check it with almost as much care as you'd take writing it.

Also, having it write its own tests doesn't mean those tests will themselves be correct, let alone complete. This is a problem we already have with humans: any blind spot they had while writing the code will still be present when they write the tests. Who hasn't found a bug in tests, leading to acceptance of broken code and/or rejection of correct alternatives? There's no reason to believe this problem won't also exist with an AI, and they have more blind spots to begin with.
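A minimal sketch of that blind-spot failure mode (the function, names, and values here are hypothetical, not from any commenter's code): when the test encodes the same wrong assumption as the implementation, the broken code passes.

    # Hypothetical example: the author forgets that February sometimes
    # has 29 days, so code and test share the same blind spot.

    def days_in_month(month: int, year: int) -> int:
        lengths = {1: 31, 2: 28, 3: 31, 4: 30, 5: 31, 6: 30,
                   7: 31, 8: 31, 9: 30, 10: 31, 11: 30, 12: 31}
        return lengths[month]  # bug: ignores leap years

    def test_days_in_month():
        # The test repeats the wrong assumption, so it passes anyway.
        assert days_in_month(2, 2024) == 28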




I think often of the adage "it's harder to read code than to write it". GPT gives you a lot to read. Definitely a better consultant than a coder, imo. I've also had GPT write entirely false things; then I say "isn't that false?" and it says, "yes, sorry about that". Very uncanny.


And the code that GPT does write, if it is even close to correct, must be code that already exists in many places, and usually (as is the case with so much React code) doesn't need to exist at all.



