Hacker News

There's as much variability among LLMs as there is in human intelligence. My point is that if that guy wrote a better prompt, his "failing LLM" would be much more likely to stop failing, unless the model is just completely incompetent.

What I also find hilarious is when the AI skeptics try to parlay these kinds of "failures" into evidence that LLMs cannot reason. Of course they can reason.


