Hacker News

I would've expected an answer involving "an exhaustive suite of test cases still passed" - "it looks right" is a low bar for any complex software project these days.

It's the long, long, long tail of edge cases - not just porting them, but even identifying them to test - that slows or dooms most real-world human rewrites, after all.




True - but you can ask the chatbot to write a test suite too.


This doesn’t really make sense? If I can’t trust the code it writes, why should I trust that it can write a comprehensive test suite?


Because you can read the test suite to check what it's testing, then break the implementation and run the tests and check they fail, then break a test and run them and check that fails too.

You have to review the code these things write for you, just like code from any other collaborator.


Because the bugs in its code and the bugs in its test suites usually don't line up and cancel each other out.





