Hacker News

We understand the meaning we wish to convey and then intelligently choose the best means at our disposal to communicate it.

LLMs find the most likely next word based on billions of previously scanned word combinations and contexts. It's an entirely different process.
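As a loose illustration of "most likely next word": the sketch below is a toy bigram model that just counts word successors in a tiny corpus, nothing like a real transformer, but it shows the basic idea of picking the statistically most likely continuation.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny
# corpus, then always pick the most frequent successor (greedy).
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Greedy choice: the single most common successor seen in training.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" twice; "mat" and "fish" once each
```

Real LLMs replace the raw counts with a learned probability distribution over tokens, and usually sample from it rather than always taking the top choice, but the "predict the next token" framing is the same.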



How does this intelligence work? Can you explain how 'meaning' is expressed in neurons, or whatever it is that makes up consciousness?

I don't think we know. Or if we have theories, the error bars are massive.

>LLMs find the most likely next word based on billions of previously scanned word combinations and contexts. It's an entirely different process.

How is that different from using one's learned vocabulary?


How do you know we understand and LLMs don't? To an outsider they look the same. Indeed, that is the point of solipsism.


Because unlike a human brain, we can actually read the whitepaper on how the process works.

They do not "think", they "language", i.e. large language model.


What is thinking, and why do you think an LLM ingesting content is not also reading? Clearly they're absorbing some sort of information from text content, a.k.a. reading.


I think you don't understand how LLMs work. They run on math; the only parallel between an LLM and a human is the output.


Are you saying we don't run on math? How much do you know of how the brain functions?

This sort of Socratic questioning shows that no one can truly answer these questions, because no one actually knows how the human mind works, or how to distinguish, or even define, intelligence.


So do neurons.



