That's pretty much what it is, as you stated it. Finding abstractions that let you encode your observational history more efficiently than you previously could, or "discovering truths", if you want to be all mystical about it.
Your definition of thinking is designed to fit AI. You set the bar low, and then get giddy when it jumps over. "Progress of scientific theory" is just a meaningless phrase that makes your claim sound authoritative when it isn't.
I'm still not hearing your definition of thinking, but given how hallowed you seem to find it, it must be truly brilliant.
The progress of scientific theory is plain to see. At each step, e.g. Kepler's laws -> Newtonian mechanics -> general relativity, we encode our observations of the physical world ever more efficiently.
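To make "encode more efficiently" concrete, here's a toy sketch in the spirit of minimum description length. Everything in it, the planetary figures, the noise level, the bit-accounting scheme, the flat 64-bit charge for stating the model, is an assumption made up for illustration, not anything from the argument above. The only point is that stating a law (Kepler's third law here) plus the small residuals it leaves behind can cost fewer bits than writing the raw measurements down directly.

```python
import math
import random

random.seed(0)
RESOLUTION = 1e-6  # we only care about values down to this resolution (assumed)

def bits_to_encode(values, resolution):
    """Crude cost in bits to write each value down to the given resolution,
    using a uniform code over the observed range (an upper-bound estimate)."""
    lo, hi = min(values), max(values)
    levels = max(2, int((hi - lo) / resolution) + 1)
    return len(values) * math.log2(levels)

# "Observations": semi-major axes (AU) and measured orbital periods (years),
# generated from Kepler's third law T^2 = a^3 plus small measurement noise.
axes = [0.39, 0.72, 1.0, 1.52, 5.2, 9.58, 19.2, 30.05]
periods = [a ** 1.5 * (1 + random.gauss(0, 1e-5)) for a in axes]

# Encoding 1: no theory, store every period verbatim.
raw_bits = bits_to_encode(periods, RESOLUTION)

# Encoding 2: state Kepler's law once, then store only the residuals
# (what the law fails to predict). The 64-bit model charge is arbitrary.
residuals = [t - a ** 1.5 for t, a in zip(periods, axes)]
kepler_bits = 64 + bits_to_encode(residuals, RESOLUTION)

print(f"raw encoding:      {raw_bits:7.1f} bits")
print(f"with Kepler's law: {kepler_bits:7.1f} bits")
```

With only eight data points the saving is modest, but it grows with more observations, since the fixed cost of stating the law is amortized across everything the law predicts.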
I gave it above; you're just too dense to even begin to try to understand what I mean. If you have a specific question, I'm more than happy to try and answer it.
They cannot handle immaterial concepts such as goodness, truth, justice, etc. because such concepts are not reducible to material components. They cannot abstract at all, because to abstract something is to consider it in a universal way, leaving the material parts behind. Computers are fundamentally material, and so cannot handle any kind of immaterial concepts.
One neuron doesn't think. Three neurons don't think. Billions of neurons think. Somewhere between one neuron and billions of neurons, thinking starts happening. Probably also true for neural networks. The main problem is that people throw around terms like "thought", "intelligence", "will", "reasoning", "knowledge", "consciousness", etc. as if they were very well defined and well understood terms, and they very much are not.
My point precisely. Those are all vague terms. Saying that "neural networks do not think" is as meaningless as any equivalent (or opposite) statement about any other system, including any number of neurons, a whole brain, or a person.
It is meaningless to me because the term is imprecise. Is it precise when I do something boneheaded and say "sorry, I wasn't thinking"? Do cats think? Do worms?
Depends on what you mean by thinking in that particular case.
In my opinion what LLMs do is close enough to thinking for some definition of the word.
And, by the way, I think there's no need to be unpleasant to make your point.
There's a difference between a term being imprecise and being equivocal. Yes, we use the term "think" in different ways. But it primarily refers to the operations unique to the human intellect with which we're all familiar: deduction, analysis, synthesis, judgment, deliberation, abstraction, etc.
The thing that's special about these operations is that they deal with immaterial concepts. What's right? What's good? What's true? And so on. Computers have never achieved this and they never will, because they are only material arrangements, and so can only handle things on the material level. The human mind is clearly not reducible to that, because it can handle purely immaterial concepts.
I see where we differ in opinion. I do believe that the human mind is just a very particular material arrangement. Therefore, while it differs from an LLM in scale and likely also in structure, I do think both systems are fundamentally in the same category.
Given that, we could never agree. But that's fine.
Yes, that is the key contention. I don't think that our minds are material, because if they were then we couldn't handle any immaterial concepts. Although obviously our brains are involved in the operation of the mind, they cannot be an exhaustive explanation of what the mind is, given that the mind is perfectly comfortable leaving the material world behind.
It's fair to talk about thinking in a handwavey "you know what I mean" way. This is not a philosophy paper. It's a fine point if that's what you want to discuss, but doesn't change anything about the issue at hand and is needlessly pedantic. It's the "what you're referring to is actually GNU/Linux" of discussions about the tech side of AI.
It pretends to be a philosophy paper. If they wanted to talk about computation, they would use terms that communicate that clearly. But they're using words that confuse the two fields. I didn't do that; the author did.
I think people just get mad when they're reminded of this obvious fact. They want computers to prove that our minds are an illusion, the product of a "meat computer".