
Neural networks do not think


You're not giving information anybody on this forum doesn't already know.

Obviously they don't "speak" either. Both "think" and "speak" are used as shorthands here for what the language models actually do.


What are you upset with me for? The authors are using the misleading language, not me. Take it up with them.


Could you give a definition of "think" that NNs fail to live up to?


Abstracting immaterial concepts from physical reality and deliberately using them in analytical or deductive processes to discover truths.


Might be relevant: https://www.nature.com/articles/s41586-023-06924-6 Mathematical discoveries from program search with large language models


So basically finding ways to compress your observational history?


No, it's not "basically" that at all.


That's pretty much what it is, as you stated it. Finding abstractions that let you encode your observational history more efficiently than you previously could, or "discovering truths", if you want to be all mystical about it.


That's not what it is, and it's not what I stated.


Then could you go into more detail? Because what I just described was the progress of scientific theory.

If an AI can do that, it's not going to matter whether or not it meets your arcane definition of "thinking".


Your definition of thinking is designed to fit AI. You set the bar low, and then get giddy when it jumps over. "Progress of scientific theory" is just a meaningless phrase that makes your claim sound authoritative when it isn't.


I'm still not hearing your definition of thinking, but given how hallowed you seem to find it, it must be truly brilliant.

Progress of scientific theory is plain to see. At each step, e.g. Kepler's laws -> Newtonian mechanics -> general relativity, we encode our observations of the physical world ever more efficiently.
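
To make the compression point concrete, here's a minimal sketch in Python (a toy illustration, not a rigorous minimum-description-length computation; the planetary values are real, in AU and years):

    # Kepler's third law (T^2 = a^3, with a in AU and T in years)
    # compresses the table of orbital periods into one short rule
    # plus tiny residuals, instead of storing every period outright.
    planets = {
        "Mercury": (0.387, 0.241),
        "Venus":   (0.723, 0.615),
        "Earth":   (1.000, 1.000),
        "Mars":    (1.524, 1.881),
        "Jupiter": (5.203, 11.862),
        "Saturn":  (9.537, 29.457),
    }

    for name, (a, t_obs) in planets.items():
        t_pred = a ** 1.5  # the "theory": T = a^(3/2)
        print(f"{name:8s} observed={t_obs:7.3f} "
              f"predicted={t_pred:7.3f} residual={t_obs - t_pred:+.3f}")

Each later theory (Newton, then general relativity) shrinks the residuals further, which is the "encode more efficiently" claim in miniature.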


I gave it above, you're just too dense to even begin to try to understand what I mean. If you have a specific question I'm more than happy to try and answer it.


>I gave it above

Yeah, something about "abstraction" and some hand-wavy magic, which computers are already doing.

Can you state, specifically, what part of thinking you presume computers can't do?


They cannot handle immaterial concepts such as goodness, truth, justice, etc. because such concepts are not reducible to material components. They cannot abstract at all, because to abstract something is to consider it in a universal way, leaving the material parts behind. Computers are fundamentally material, and so cannot handle any kind of immaterial concepts.


Do neurons think? Do a bunch of neurons?

Is this semantics?


basically this: https://en.wikipedia.org/wiki/Sorites_paradox

One neuron doesn't think. Three neurons don't think. Billions of neurons think. Somewhere between one neuron and billions of neurons, thinking starts happening. Probably also true for neural networks. The main problem is that people throw around terms like "Thought", "Intelligence", "Will", "Reasoning", "Knowledge", "Consciousness", etc. as if they were very well defined and well understood terms, and they very much are not.


My point precisely. Those are all vague terms. Saying that "neural networks do not think" is as meaningless as any equivalent (or opposite) statement about any other system comprising any number of neurons, a whole brain, or a person.

It's all semantics.


Your claim is that saying "people think" is meaningless? Maybe you don't think (seems to be the case), but I certainly do.


It is meaningless to me because the term is imprecise. Is it precise when I do something boneheaded and say "sorry, I wasn't thinking"? Do cats think? Do worms?

Depends on what you mean by thinking in that particular case.

In my opinion what LLMs do is close enough to thinking for some definition of the word.

And, by the way, I think there's no need to be unpleasant to state your point.


There's a difference between a term being imprecise and being equivocal. Yeah, we use the term "think" in different ways. But it primarily refers to the operations unique to the human intellect with which we're all familiar: deduction, analysis, synthesis, judgment, deliberation, abstraction, etc.

The thing that's special about these operations is that they deal with immaterial concepts. What's right? What's good? What's true? And so on. Computers have never achieved this and they never will, because they are only material arrangements, and so can only handle things on the material level. The human mind is clearly not reducible to that, because it can handle purely immaterial concepts.


I see where we differ in opinion. I do believe that the human mind is just a very particular material arrangement. Therefore, while different in scale from an LLM, and likely also in structure, I do think both systems are fundamentally in the same category.

Given that, we could never agree. But that's fine.

And by the way, I actually hope you're right.


Yes, that is the key contention. I don't think that our minds are material, because if they were then we couldn't handle any immaterial concepts. Although our brains are obviously involved in the operation of the mind, they cannot be an exhaustive explanation of what the mind is, given that the mind is perfectly comfortable leaving the material world behind.


Billions of neurons don't think, people do.


...with what?


With their minds


There are no real neurons in a neural network.


I don't understand the downvotes - you are correct.


It's fair to talk about thinking in a handwavey "you know what I mean" way. This is not a philosophy paper. It's a fine point if that's what you want to discuss, but doesn't change anything about the issue at hand and is needlessly pedantic. It's the "what you're referring to is actually GNU/Linux" of discussions about the tech side of AI.


It pretends to be a philosophy paper. If they wanted to talk about computation, they would use terms that communicate that clearly. But they're using words that confuse the two fields. I didn't do that, the author did.


I think people just get mad when they're reminded of this obvious fact. They want computers to prove that our minds are an illusion, the product of a "meat computer".


Read some Daniel Dennett!


Are you serious?


You're very grumpy. I think you need some food and a nap :-)


I think you need religion



