Hacker News

AI doesn’t understand; that’s the problem.

If the training data contains mistakes, it is more likely to reproduce them.

Unless there are pre-programmed rules to prevent them.

I’ve had really good results, but of course YMMV.

As a side note, most good coding models now are also reasoning models, and spend a few seconds “thinking” before giving a reply.

That’s by no means infallible, but they’ve come a long way even just in the last 12 months.
