
No, my claim isn't that out there.

You can't explain how an LLM does what it does, and you can't explain how humans do what we do either. With no mechanistic explanation available on either side, but *clear* similarities between human responses and LLM responses that pass Turing tests, my hypothesis is actually reasonable.

In theory, with enough data and enough neurons we could conceivably construct an LLM that performs better than humans. Neural nets are supposed to be able to compute anything anyway; that's the gist of the universal approximation theorem. So none of what I said is unreasonable.
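To make "compute anything" concrete, here is a toy sketch (my own illustration, not anyone's production code): a one-hidden-layer network in plain numpy, fit by gradient descent to sin(x). This is the universal approximation theorem in miniature; the width H, the learning rate, and the step count are arbitrary choices.

  # Toy demo: one hidden layer of tanh units approximating sin(x).
  # All hyperparameters (H, lr, steps) are arbitrary illustration values.
  import numpy as np

  rng = np.random.default_rng(0)
  x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
  y = np.sin(x)

  H = 32                              # hidden width; more units, closer fit
  W1 = rng.normal(0.0, 1.0, (1, H))
  b1 = np.zeros(H)
  W2 = rng.normal(0.0, 0.1, (H, 1))
  b2 = np.zeros(1)

  lr = 0.01
  for step in range(20000):
      h = np.tanh(x @ W1 + b1)        # hidden activations
      pred = h @ W2 + b2              # network output
      err = pred - y                  # gradient of 0.5*MSE w.r.t. pred (x N)
      gW2 = h.T @ err / len(x)        # backprop: output layer
      gb2 = err.mean(axis=0)
      dh = (err @ W2.T) * (1 - h**2)  # tanh'(z) = 1 - tanh(z)**2
      gW1 = x.T @ dh / len(x)         # backprop: hidden layer
      gb1 = dh.mean(axis=0)
      W2 -= lr * gW2; b2 -= lr * gb2
      W1 -= lr * gW1; b1 -= lr * gb1

  # MSE starts near 0.5 (the mean of sin^2) and should drop by orders of magnitude.
  print("final mean squared error:", float(((pred - y) ** 2).mean()))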



The problem I have with your claim is that it assumes humans use language the way an LLM does. Humans don’t live in a world of language; they live in the world. When you teach kids vocabulary, you point to objects in the environment. Our minds, as a consequence, don’t bottom out at language; we draw on language as a pointer into mental concepts built on sensory experience. LLMs don’t reference anything outside language; they’re a crystallization of language’s approximate structure.

How do they implement this structure? I don’t know, but I do know they aren’t going to do much more than that, because nothing more is rewarded during training. We almost certainly possess something like an LLM in our heads to help structure language, but we also have so, so much more going on up there.


You made a bunch of claims here, but you can’t prove any of them to be true.

Also, you are categorically wrong about language: LLMs, despite the name, go well beyond it. They can generate images and sound and analyze them too, because they are trained on images and sound. Try ChatGPT.
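For concreteness, here is roughly what a multimodal request looks like (a sketch assuming the openai Python SDK v1+ and a vision-capable model such as gpt-4o; the image URL is a placeholder, and OPENAI_API_KEY must be set in the environment):

  # Sketch of a single text+image request against a vision-capable chat model.
  # The model name and image URL are illustrative placeholders.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  resp = client.chat.completions.create(
      model="gpt-4o",
      messages=[{
          "role": "user",
          "content": [
              {"type": "text", "text": "Describe what is in this image."},
              {"type": "image_url",
               "image_url": {"url": "https://example.com/photo.jpg"}},
          ],
      }],
  )
  print(resp.choices[0].message.content)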


LLMs don't imitate human intelligence. They imitate machine intelligence, a form of linear symbolic reasoning.



