
You didn't give any answer to the question. I'm sorry you find the idea that human cognition is just an emergent property of billions of connected weights nihilistic.

Even though we know that, physically, that's all that's going on. Sure, the brain is many orders of magnitude more dense and connected than current LLMs, but it's only a matter of time and bits before they catch up.

Grab a book on neurology.



The irony of this post. Brains are sparser than transformers, not denser. That sparsity is what lets you learn symbolic concepts instead of generalising from billions of spurious correlations. Sure, that works when you've memorised the internet, but it falls over quickly out of domain. Humans, by contrast, don't fall over when the domain shifts, despite far less training data. We generalise using symbolic concepts precisely because our architecture and training procedure look nothing like a transformer's. If your brain were a scaled-up transformer, you'd be dead. Don't take this the wrong way, but it's you who needs to read some neurology instead of pretending to have an understanding you haven't earned. "Just an emergent property of billions of connected weights" is such an outdated view. Embodied cognition, extended minds, collective intelligence - a few places to start for you.
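
For what it's worth, the sparsity claim is easy to sanity-check with a back-of-envelope calculation. The figures below are rough, commonly cited estimates (roughly 86 billion neurons, on the order of 10^3-10^4 synapses per neuron), not measurements; a minimal sketch:

    # Ballpark sparsity comparison: biological brain vs. a dense transformer layer.
    # All figures are rough estimates, not measurements.

    NEURONS = 86e9              # ~86 billion neurons in a human brain (estimate)
    SYNAPSES_PER_NEURON = 1e4   # ~10^3-10^4 synapses per neuron (estimate)

    # Fraction of all possible neuron-to-neuron connections actually realised.
    brain_density = SYNAPSES_PER_NEURON / NEURONS
    print(f"brain connection density: ~{brain_density:.1e}")   # ~1e-7

    # A dense feed-forward layer connects every input unit to every output
    # unit, so its connection density is 1.0 by construction.
    transformer_density = 1.0
    print(f"dense layer density:      {transformer_density:.1f}")

    print(f"the brain is ~{transformer_density / brain_density:.0e}x sparser")

Under those assumptions a neuron connects to roughly one in ten million of its peers, while a dense layer connects everything to everything.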


I'm not saying the brain IS just an LLM.

I'm saying that despite the brain's different structure, mechanism, physics, and so on ... we can clearly build other mechanisms with enough parallels to say with some confidence that _we_ can make intelligence of different but comparable types emerge from small components at a scale of billions.

At whatever scale you look, everything boils down to interconnected, discrete, simple units, even the brain, with complexity emerging from the interconnections.
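
A toy illustration of that kind of emergence, offered purely as a sketch: three trivially simple threshold units, wired together, compute XOR, which no single linear threshold unit can compute (the classic Minsky-Papert result). The weights are one standard hand-picked solution, nothing brain-specific:

    def unit(inputs, weights, bias):
        """A simple threshold unit: fires (1) iff its weighted input exceeds 0."""
        return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

    def xor(x1, x2):
        # Hidden layer: an OR-like unit and a NAND-like unit.
        h1 = unit([x1, x2], [1, 1], -0.5)    # fires if x1 OR x2
        h2 = unit([x1, x2], [-1, -1], 1.5)   # fires unless x1 AND x2
        # Output unit: fires only when both hidden units fire (AND).
        return unit([h1, h2], [1, 1], -1.5)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor(a, b))   # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0

No individual unit "knows" XOR; the capability exists only in the interconnection.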



