
> LLMs fall over miserably at even very simple pure math questions

They are language models, not calculators, logic languages like Prolog, or proof assistants like Coq. If you go in with that understanding, their capabilities make a lot more sense. I read the parent poster to mean that they can ask questions and rapidly synthesize information from what the LLM tells them, as a starting point rather than as something guaranteed to be 100% correct on everything.



That's fair advice in itself, but the parent specifically equated them to a "college professor."


Maybe that should be "college art professor" then? :)



