Comparing human hallucinations with model “hallucinations” does not make sense to me.
Model hallucinations seem to me like a fancy way to describe model results that make no sense (i.e. blatant errors). Plus it makes the model seem more humanoid.
Most hallucinations make sense. In fact, that is precisely the problem: they make so much sense that it's often difficult to distinguish them from correct answers. Most people use "hallucinations" to mean wrong, and often confidently wrong, details in a generated reply.
Humans are certainly better, but we don't have an absolute sense of what we do or don't know either.
Confident human bullshitters seem to thrive in business environments, in media and entertainment, in politics ... in fact in any profession where the output is just language instead of doing things. They might be more on the dark triad spectrum, but I would not call them all "schizophrenic".
The problem is that we are so used to yielding to confidence that we don't apply the necessary checks even when we know it is projected by a machine.