
Well, no, a good rule of thumb is to expect people to write good code, no matter how they do it. Why would you mandate what tool they can use to do it?


Because it pertains to the quality of the output - I can't validate every line of code or test every edge case. So if I need a certain level of quality, I have to verify the process of producing it.

This is standard for any activity where accuracy / safety is paramount - you validate the process. Hence things like maintenance logs for airplanes.


> So if I need a certain level of quality, I have to verify the process of producing it

Precisely this, and it is hardly a requirement unique to software. Process audits are everywhere in engineering. Previously you could infer the process behind some code simply by reading the patch, and that generally told you quite a bit about the author. Using advanced and niche concepts would imply a solid process with experience backing it, which would in turn imply that certain contextual bugs are unlikely, so you could skip looking for them.

My premise in the blog is basically that "Well now I have to do a full review no matter what the code itself tells me about the author."


> My premise in the blog is basically that "Well now I have to do a full review no matter what the code itself tells me about the author."

Which IMO is the correct approach - or alternatively, if you actually trust the author, you shouldn't care whether they used LLMs, because you'd trust them to check the LLM output too.


The false assumption here is that humans will always write better code than LLMs, which is certainly not the case for all humans or all LLMs.





