
Yeah, the helpful LLM always has an answer to your questions - if it can't find one, it'll simply hallucinate one.


Sure, but so will SO. On most questions it seems that at least a third of the answers are just wrong in one way or another. At least with an LLM you can just tell it it's wrong and have a new, hopefully better, answer 30 seconds later, all without hurting its feelings or getting into a massive online flame war.


But it won't hallucinate a vote to close as a duplicate (of a question about another language entirely, last updated in 2013).


At least it won't tell you to stfu and get better and close the chat like SO does.



