
I don’t know if you’re correct. I don’t think you can know that our brains are that different. We too need to train ourselves on massive amounts of data. I feel like the kinds of reasoning and understanding I’ve seen ChatGPT do are far beyond just processing language.


When I talk to 8B models, it's often painfully clear that they are operating mostly (entirely?) on the level of language. They often say things that make no sense except from a "word association" perspective.

With bigger models (400B), that's not so clear.

It would be silly to say that a fruit fly has the same thoughts as me, only a million times smaller quantitatively.

I imagine the same thing is true (genuine qualitative leaps) in the 8B -> 400B direction.


We do represent much of our cognition in language. Sometimes I feel like LLMs might be “dancing skeletons” - pulleys and wires giving motion to the bones of cognition.


But why stop there? All matter and all life is just increasingly fancy machines.

https://en.wikipedia.org/wiki/Philosophical_zombie

Editor's note: I do not promote such a worldview -- my intention is precisely the opposite.


Our brains have capacities that preceded language. Look at lions, for example.

We are much more (and a little less) than lions in terms of mind.


What reasoning have you seen coming from ChatGPT?



