
It really shows how LLMs work. It's all about probabilities, not understanding. When a problem looks very similar to a well-known one, the LLM has a hard time "seeing" the contradiction, even when it's easy for humans to notice.
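You can make that concrete by inspecting next-token probabilities directly. A minimal sketch using GPT-2 via the Hugging Face transformers library (the model and prompt are my own illustrative choices, not anything from the parent comment): feed the model a premise that contradicts a very familiar pattern, then look at where the probability mass for the next token goes.

    # Sketch: does a contradictory premise move the next-token distribution,
    # or does the familiar pattern win? Assumes torch + transformers installed.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    # A prompt that resembles a well-known pattern but contradicts it.
    prompt = "Two plus two is five. Therefore, two plus two equals"

    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the next token
    probs = torch.softmax(logits, dim=-1)

    # Print the five most probable next tokens and their probabilities.
    top = torch.topk(probs, 5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(idx.item())!r}  {p.item():.3f}")

In my experience, small models like this tend to put most of the mass on the familiar completion (" four") regardless of the stated premise, which is exactly the "looks similar, so the pattern wins" behavior described above; the exact numbers will vary by model.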



