
I believe LLMs are entirely analogous to the speech areas of the brain. They have a certain capacity for speaking automatically and reflexively, without involving (other) memory, for example. That is how you are able to deliver quippy answers; that is where idioms "live". You can see this in people with certain kinds of brain damage: if they are unable to recall certain memories (or sometimes if you press somebody to recall memories they don't have), they will construct elaborate stories on the spot. They won't even notice that they are making it up. This is called confabulation, and I think it is a much better term than hallucination for what LLMs do when they make up facts.

I feel this analogy is confirmed by the fact that chain of thought works so well. That is what (most?) people do when they actively "think" about a problem: they have a kind of inner monologue.
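To make that concrete, here is a minimal sketch of zero-shot chain-of-thought prompting. The "Let's think step by step" trigger is the standard one from the literature; the question and variable names are just for illustration, and you would feed these prompts to whatever LLM API you use:

    # Minimal sketch of zero-shot chain-of-thought prompting.
    # The same question, with and without an "inner monologue" trigger.
    question = ("A bat and a ball cost $1.10 in total. The bat costs "
                "$1.00 more than the ball. How much does the ball cost?")

    direct_prompt = question                                 # reflexive, "speech-area" style answer
    cot_prompt = question + "\n\nLet's think step by step."  # forces an inner monologue

    print(direct_prompt)
    print(cot_prompt)

Models typically do much better on this kind of trick question with the second prompt, which fits the inner-monologue picture.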

Now, we have already reached the point where LLMs are much smarter than the language areas of humans, but not always smarter than the whole human. I think the next step towards AGI would be to add other "brain areas": a limbic system that remembers the current emotion and feeds it as input into the other parts. We already have dedicated vision and audio AIs. Maybe we also need a system for logical reasoning.
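If you squint, that wiring might look something like the sketch below. Everything here is hypothetical (the module names, the toy emotion rules), and the LLM is reduced to a stub; the point is only that emotional state persists in one module and is fed as an input to the language module:

    # Speculative sketch of the "multiple brain areas" idea above.
    # All names are made up for illustration.
    from dataclasses import dataclass

    @dataclass
    class LimbicSystem:
        """Keeps a running emotional state across turns."""
        emotion: str = "neutral"

        def update(self, event: str) -> None:
            if "fail" in event:
                self.emotion = "frustrated"
            elif "success" in event:
                self.emotion = "pleased"

    class LanguageArea:
        """Stand-in for the LLM: speaks conditioned on context plus emotion."""
        def respond(self, prompt: str, emotion: str) -> str:
            return f"[feeling {emotion}] reply to: {prompt}"

    class Agent:
        """Wires the modules together; emotion is an input, not an afterthought."""
        def __init__(self) -> None:
            self.limbic = LimbicSystem()
            self.language = LanguageArea()

        def handle(self, event: str) -> str:
            self.limbic.update(event)
            return self.language.respond(event, self.limbic.emotion)

    agent = Agent()
    print(agent.handle("the build did fail again"))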


