
Comparing human hallucinations with model “hallucinations” does not make sense to me.

"Model hallucinations" seems to me like a fancy way of referring to model results that make no sense (i.e. blatant errors). Plus it makes the model sound more human.



Most hallucinations make sense. In fact, that is precisely the problem: they make so much sense that it's often difficult to distinguish them from correct output. Most people use "hallucination" to mean wrong, and often confidently wrong, details in a generated reply.

Humans are certainly better, but we don't have an absolute sense of what we do or don't know either.


Humans also very often produce results that don't make sense.


Humans who produce output the way LLMs do would most likely be diagnosed as schizophrenic, which I don't believe is the goal.


Confident human bullshitters seem to thrive in business environments, in media and entertainment, in politics ... in fact in any profession where the output is just language rather than doing things. They might be more on the dark triad spectrum, but I would not call them all "schizophrenic".

The problem is that we are so used to yielding to confidence that we don't apply the necessary checks, even when we know the confidence is projected by a machine.


I really haven't seen much of that in my ChatGPT usage; it's more like someone lying with a lot of confidence, hardly a mental disorder.


Untuned LLMs are most like people with Korsakoff's syndrome. "Hallucination" is a misleading term.



