
I don’t think it’s a stretch to say humans aren’t great at assessing the truth or accuracy of anything either.


My point isn't about how well or badly it's done. Humans, at least some of the time, attempt to assess the truth and accuracy of things. LLMs don't attempt this at all.

That's why I think it's incorrect to say they're bad at it: attempting it isn't even in their behavior set.
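
To make that concrete, here's a minimal sketch of the autoregressive sampling loop an LLM runs at inference time. The `next_token_distribution` function is a hypothetical stand-in for a trained network, not any real library's API. The point is structural: nothing in the loop scores candidate tokens for truth; the only criterion is the probability the model assigns to each continuation.

    import numpy as np

    VOCAB = ["the", "sky", "is", "blue", "green", "<eos>"]

    def next_token_distribution(context: list[str]) -> np.ndarray:
        """Hypothetical stand-in for a trained LM: returns P(token | context).
        A real model computes this with a neural network; either way, the
        output is just a probability distribution over the vocabulary."""
        logits = np.random.randn(len(VOCAB))  # placeholder scores
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()

    def generate(prompt: list[str], max_len: int = 10) -> list[str]:
        tokens = list(prompt)
        rng = np.random.default_rng(0)
        for _ in range(max_len):
            probs = next_token_distribution(tokens)
            # Pick the next token by likelihood alone. There is no step
            # here (or anywhere in the loop) that checks whether the
            # resulting sentence is true -- only whether it is probable.
            token = rng.choice(VOCAB, p=probs)
            if token == "<eos>":
                break
            tokens.append(token)
        return tokens

    print(generate(["the", "sky", "is"]))

Whether training nudges the distribution toward true statements is a separate question; the generation procedure itself contains no truth-assessment step.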


Where is the organ that does that? My impression is that everything the brain does is homomorphic to what LLMs do.



