My point isn't about how well or badly this is done. Humans, at least some of the time, attempt to assess the truth and accuracy of things. LLMs do not attempt to do this at all.
That's why I think it's incorrect to say they're bad at it. Even attempting it isn't part of their behavior.