I don’t know if you’re correct. I don’t think we actually know that our brains are that different. We too need to train ourselves on massive amounts of data. I feel like the kinds of reasoning and understanding I’ve seen ChatGPT do are soooo far beyond something like just processing language.
When I talk to 8B models, it's often painfully clear that they are operating mostly (entirely?) on the level of language. They often say things that make no sense except from a "word association" perspective.
With bigger models (400B), that's not so clear.
It would be silly to say that a fruit fly has the same thoughts as me, only a million times smaller quantitatively.
I imagine the same thing is true (genuine qualitative leaps) in the 8B -> 400B direction.
We do represent much of our cognition in language.
Sometimes I feel like LLMs might be “dancing skeletons”: pulleys and wires giving motion to the bones of cognition.