Cells are machines, (our) minds are running on cells, therefore minds run on machines. It doesn't have to be more complicated than that. Unless you are making an argument for supernatural influences. It's debatable whether minds are machines, at least in a sense that satisfies us, but that's another subject.
> "Therefore the machine cannot produce the corresponding formula as being true. But we can see that the Gödelian formula is true."
Essentially, the other side's argument boils down to the idea that minds can reason in a way that machines cannot - they can intuitively look behind the Gödelian curtain so to speak.
However, this contest between "mind" and "machine" only works as long as you tie the machine's hands behind its back while letting the mind roam freely: a mind forced to operate within the same formal system would come to the same conclusion as the machine, and conversely there is no reason to assume that a machine allowed to apply intuitive reasoning would not come to the same realization as the mind.
This entire argument is a philosophical parlor trick that only works as long as you don't pay too close attention to the unequal restrictions imposed on the two participants.
I definitely agree that much of this argument boils down to a set of poorly fleshed out definitions. Though I would argue that you’ve misrepresented the article’s definitions a bit. The article is asking whether the formal system in question can fully explain human intelligence. Which we all agree it can’t. It’s not arguing that no machine, in the sense you or I would understand that word, can explain human intelligence; the author just assumes implicitly that any machine will consist of a formal symbolic system of the type Gödel studied. Obviously, if you challenge that assumption, the argument loses its potency.
But in defense of the OP, if we discard the perhaps unhelpful term “machine,” the argument does raise an interesting question: what distinguishes the formal, mechanistic logic of pure mathematics from what you describe as “intuitive reasoning?”
I think a worse flaw in the argument is the assumption that formal symbolic completeness is somehow required for human intelligence. I’d say the opposite if anything is true. If modern psychological research has provided anything of value, it’s pretty convincing evidence that human brains don’t require any sort of logical completeness or even internal consistency to function.
> the author just assumes implicitly that any machine will consist of a formal symbolic system of the type Gödel studied. Obviously if you challenge that assumption, the argument loses its potency
Daniel Dennett, in his refutation of Lucas' argument (which I think is in his book Brainstorms; I have not been able to find a version of it online), makes a point related to yours here in a very stark manner: he shows that, in fact, a physical realization of a formal system can produce a proof of that system's own Gödel sentence! Dennett's argument makes use of a critical observation: in order to view any physical system as a realization of a formal system, we have to interpret the inputs and outputs of the physical system in a particular way. But there is nothing in the laws of physics that rules out the possibility of multiple interpretations of the same inputs and outputs. So it is perfectly possible for a given set of outputs of some physical system to have one interpretation as sentences of formal system A, but also another interpretation as sentences of formal system B, with the latter sentences including a proof of the Gödel sentence of formal system A.
Put abstractly like this, it may seem unlikely, but in fact we experience this sort of thing all the time with computers. As I am typing these words, my computer is taking a series of inputs and producing a series of outputs which have multiple interpretations: as sequences of bytes, as machine instructions, as typed characters from the keyboard and displayed characters on the screen, as changes in voltage across transistors, etc. And there is nothing in principle that prevents, for example, writing a program that runs on my computer's CPU and proves the Gödel sentence of that CPU, considered as a formal system. Or writing a program in, say, Python that proves the Gödel sentence of the gcc compiler used to compile that instance of the Python interpreter, considered as a formal system.
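To make the multiple-interpretation point concrete, here's a minimal sketch (the byte values are an arbitrary illustration, not anything from Dennett): the very same four bytes in memory are simultaneously an integer, a floating-point number, and a pair of characters, depending entirely on how we choose to read them.

```python
import struct

# One physical state: four bytes sitting in memory.
raw = b"\x42\x28\x00\x00"

# Interpretation A: a big-endian 32-bit signed integer.
as_int = struct.unpack(">i", raw)[0]

# Interpretation B: the same bits as a big-endian IEEE-754 float.
as_float = struct.unpack(">f", raw)[0]

# Interpretation C: the first two bytes as ASCII characters.
as_text = raw[:2].decode("ascii")

print(as_int)    # 1109917696
print(as_float)  # 42.0
print(as_text)   # B(
```

Nothing about the bytes themselves privileges any one of these readings; the "formal system" the hardware realizes is fixed only once an interpretation is chosen.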
So in fact the author of this article is not just assuming that any machine will consist of a formal symbolic system, he is assuming that there is one unique interpretation of any physical machine as a formal symbolic system. And we now have abundant evidence that that assumption is false.
Somewhat tangentially, from Dennett's review of Lucas' "The Freedom of the Will"
"He obtains some of his pet conclusions by patent non sequiturs, aided by a great deal of hand waving, and not a few elementary logical confusions and slips (e.g., pp. 39, 61, 72, MO). But he gives us a splendid display of his command of Latin, Greek, and Middle English, and, except for failing to give any reference for a crucial argument of Wolfgang Pauli's that he mentions (p. 112), his footnotes are erudite."
> The article is asking whether the formal system in question can fully explain human intelligence.
No, it's asking whether any formal system whatever can fully explain human intelligence. And it gives the wrong answer; see below.
> Which we all agree it can’t.
No, we don't, because the article misrepresents what "fully explain" means. The article assumes that, for a formal system to "fully explain" something, it must produce proofs of all sentences that that something could produce. But that condition is too strong. All the formal system needs to do is produce, by hook or by crook, the same outputs as the something (a human, or whatever) does. It doesn't have to produce proofs of all those outputs as theorems. Humans certainly don't produce formal proofs of all the things we claim we can intuitively see are true.
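The proof-versus-output distinction can be sketched with a toy example (the functions are made up for illustration): two "systems" with identical input/output behavior, only one of which derives its answers, the other simply emitting them.

```python
# Two systems with the same extensional behavior. Only one does
# anything resembling a derivation; the other just looks answers up.

def by_derivation(n: int) -> int:
    # "Derives" the n-th triangular number by actually summing 1..n.
    return sum(range(1, n + 1))

# A bare table of outputs, with no derivation behind it.
LOOKUP = {1: 1, 2: 3, 3: 6, 4: 10}

def by_lookup(n: int) -> int:
    # Produces the same outputs "by hook or by crook".
    return LOOKUP[n]

# Judged purely by outputs, the two are the same machine.
assert all(by_derivation(n) == by_lookup(n) for n in LOOKUP)
```

If "fully explain" only demands matching outputs, the lookup system explains the derivation system just fine, despite proving nothing.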
You just hand waved away the entire problem of whether materialism/physicalism is actually the correct position with regards to the mind. This is not the case; there are plenty of arguments against it.
> Cells are machines, (our) minds are running on cells, therefore minds run on machines. It doesn't have to be more complicated than that.
I'd like to propose a new universal human law, which states that the materialists of any age always posit that reality just happens to be created in a manner similar to the most widespread or powerful technological metaphor of the time. In the 17th century, the world just happened to be similar to a watch, designed by an all-powerful watchmaker. In the 20th and 21st centuries, the world just happens to be similar to a computational machine.
The truth is that reality and the mind are probably far more complex than we realize, and reductive approaches are almost by definition wrong and incomplete.
> just happens to be created in a manner similar to the most widespread or powerful technological metaphor of the time.
I don't see how this is supposed to lend support to an anti-reductionist program. That we historically identified the workings of nature with the most technically or computationally sophisticated machinery of the time just shows that we intuitively recognize the power of mechanical processes to "implement" the structure and dynamics of other systems. That our computational metaphors get more sophisticated doesn't somehow invalidate the prior metaphorical reduction. All it does is further flesh out and substantiate the reduction.
I agree that none of these metaphors, including the digital computer one, have come close to providing anything close to an explanation of consciousness, but if the premise that a sufficiently-detailed digital simulation of a brain would have a mind turns out to be correct, then, by Turing equivalence, there is a sense in which the medium does not matter. Of course, none of the pre-Turing materialist claims were predicated on this insight!
Thinking of these metaphors as independent attempts at explaining nature misses what was valuable about them. Their value isn't measured by how successful they were as explanations; they were successful in orienting our thinking towards using lower-level mechanisms to explain higher-level phenomena. And this effort isn't undermined as our understanding of mechanism becomes more sophisticated.
> You just hand waved away the entire problem of whether materialism/physicalism is actually the correct position
It's true that I don't lend credibility to supernatural hypotheses, which I'm sure isn't the most popular position to have, I understand that. However, given the fact that everything we can actually observe so far seems to follow natural laws, I posit that the burden of proof is on the dualists to show me something that cannot possibly work without supernatural intervention (or, preferably, directly demonstrate such intervention).
Which of course brings us to the reason why this article is written in this way and why it's being discussed here in the first place: because people use the Gödel argument as such a demonstration. However, the argument does not support the claims it generates.
Am I right or wrong to assume a naturalistic default position on this? Whichever way you answer on this will most likely determine your position on the meaning of the Gödel thought experiment.
> It's true that I don't lend credibility to supernatural hypotheses, which I'm sure isn't the most popular position to have, I understand that. However, given the fact that everything we can actually observe so far seems to follow natural laws, I posit that the burden of proof is on the dualists to show me something that cannot possibly work without supernatural intervention (or, preferably, directly demonstrate such intervention).
I'm not sure what you are defining as 'supernatural' but 'everything that isn't observable' seems like an exceptionally poor definition. Throughout history there have been plenty of unobservable forces which nonetheless turned out to exist. Even today, our scientific knowledge is such a minuscule fraction of the universe; the number of unknown-unknowns is perhaps infinite. Even something like dark matter is still a known-unknown. To make broad statements like "reality/cells are just machines" based on this tiny portion of information seems pretty foolish.
The flaws of your position are shown quite well in the "Blind men and the elephant" proverb.
> "The flaws of your position are shown quite well in the "Blind men and the elephant" proverb."
Once again, that's quite an assertion you make against me there. At no point did I argue that we understand everything about nature. My point is just that every single thing in nature has turned out to be eventually accessible to rational modeling and understanding. To carve out a niche for certain areas and argue that these will eventually turn out to not follow that pattern has been a losing strategy throughout the history of science.
> To make broad statements like "reality/cells are just machines" based on this tiny portion of information seems pretty foolish.
The onus is on you to show me how exactly that is foolish, given the fact that every single scientific observation we have made in this area supports the hypothesis that there is nothing more than physical law involved.
> At no point did I argue that we understand everything about nature. My point is just that every single thing in nature has turned out to be eventually accessible to rational modeling and understanding.
I'm having a hard time following this explanation / statement - it seems to contradict itself and yet is used to justify your subsequent point:
> To carve out a niche for certain areas and argue that these will eventually turn out to not follow that pattern has been a losing strategy throughout the history of science.
However, we have the obvious examples of pre-big bang ontology, as well as the origins of life and the evolutionary process, as "things in nature" that arose and yet do not appear accessible to our methods of understanding.
The observation "if you don't know how to do it, then you don't know how to do it with a computer" comes to mind here. I've yet to see any research indicating that we have or are on the verge of having a theory that describes a consciousness-generating mechanized process; assertions to the contrary seem to rely upon enlarging the definition of the word "machine" to a scale of abstraction that renders it unintelligible.
The onus is not on me, as I am not making broad unscientific claims about the nature of reality or the human mind, simply because we have a small portion of knowledge. It is precisely the Elephant problem that I linked to; you are making assumptions about the fundamental structure of the mind based on our current narrow understanding of it. This is almost by definition unscientific.
The justifiable position would be to say, "based on our current understanding of the mind, our metaphorical model X is the best model we have created, but it is merely the best answer up until now, not the ultimate solution."
> you are making assumptions about the fundamental structure of the mind based on our current narrow understanding of it
I'm trying to think of a way that would not have us go in circles ad infinitum. So I'll try one last time to convince you that this is a misrepresentation.
My assumptions are indeed, given the body of data we have gathered so far, that what lies ahead will fit into what we already know - as opposed to being something extraordinary that invalidates most of our current understanding.
Whenever there are unknowns in our scientific understanding (and I'm sure we'll have more than enough of those for aeons to come), it has so far not been fruitful to offer magical external influences as the explanation. It has so far always turned out that natural forces were at work, forces we can reason about rationally. From my perspective, there is nothing unusual going on with minds in particular that would warrant a different approach.
Our current understanding may be narrow by your standards, but it doesn't follow that people just get to make up whatever lies outside and then insist on those doubting them being unscientific.
> "based on our current understanding of the mind, our metaphorical model X is the best model we have created, but it is merely the best answer up until now, not the ultimate solution."
That we can agree on. What we don't seem to agree on is the amount of credibility we should assign to claims that lie far outside of our physical expectations. Just because there are unknowns doesn't make it reasonable to assign extraordinary values to those unknowns.
> The onus is not on me, as I am not making broad unscientific claims
If you make a claim, especially one that falls well outside of extrapolated scientific understanding, the burden of proof does indeed fall on you. By contrast, I am making the claim that whatever gaps we have in our theory about minds will eventually turn out to be an extension of what we already know, as opposed to some revolutionary but as of yet unseen aspect of the universe. My claim is small; yours is quite large and also seems vague to me.
> My assumptions are indeed, given the body of data we have gathered so far, that what lies ahead will fit into what we already know - as opposed to being something extraordinary that invalidates most of our current understanding.
Would this describe the world of physics prior to Einstein and Quantum Mechanics? I'm not a physicist, but my impression is that it absolutely was "something extraordinary that invalidates most of our current understanding."
> Whenever there are unknowns in our scientific understanding (and I'm sure we'll have more than enough of those for aeons to come), it has so far not been fruitful to offer magical external influences as the explanation.
Sure, offering magical solutions is not any better, and explanations based on current scientific understanding are likely more correct than magical explanations - but that doesn't mean the scientific explanation is the truth, only that it's a more useful theory.
This is also not mentioning the problem of induction, which is an entirely different and deeper conversation.
> By contrast I am making the claim that whatever gaps we have in our theory about minds will eventually turn out to be an extension of what we already know, as opposed to some revolutionary but as of yet unseen aspect of the universe. My claim is small, yours is quite large and also seems vague to me.
Again, the history of science would show otherwise. Did anyone in 1800 predict quantum mechanics? It seems naive to assume that science will never upend our fundamental view of the world.
My claim is simply that of the scientific method itself: observational knowledge is useful for constructing our current best view of the world, but it is simply that: our current best. Extrapolating that into statements about reality itself is a non-scientific act and is only common because agnosticism is such a nebulous difficult position for humans to hold.
> Would this describe the world of physics prior to Einstein and Quantum Mechanics? I'm not a physicist, but my impression is that it absolutely was "something extraordinary that invalidates most of our current understanding."
I am (or was) a physicist. Even scientific revolutions like Relativity and Quantum Mechanics don't invalidate 'most' of our prior understanding. Newtonian mechanics continues to be a very accurate model on the length scales that were testable at the time it was conceived (apples and trees). The Relativity and Quantum Mechanics revolutions explained why classical mechanics breaks down at the scale of planets and atoms respectively. All that is invalidated is the idea that the current model is universal, but that would be a naive thing to believe at any time. Scientific progress does not mean the prior results are wrong; it's an additive process, if done correctly.
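As a toy illustration of that additivity (a rough sketch, not an argument from the thread): the relativistic correction factor is indistinguishable from 1 at the speeds Newton could actually test, which is exactly why his model survives intact within its domain.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_gamma(v: float) -> float:
    """Relativistic correction factor; 1.0 means Newton is exact."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# An apple falling from a tree (~10 m/s): the correction is
# immeasurably small, so Newtonian mechanics remains valid there.
print(lorentz_gamma(10.0))      # ~1.0 to about 15 decimal places

# Half the speed of light: the Newtonian model visibly breaks down.
print(lorentz_gamma(0.5 * C))   # ~1.1547
```

Relativity didn't delete the old results; it bounded the domain in which they were ever accurate.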
Thanks for the input. I was just quoting the parent’s phrasing of ‘invalidates’. I’d agree that science is absolutely an additive process. However with regards to the mind, we may be in the position of physicists circa 1880. I suppose we simply can’t know.
Neither Lucas' argument here, nor any other anti-materialist argument, has offered any hint of what is missing from the materialist position, so the absence of adequate knowledge is at best symmetrical.
It is not unscientific to speculate on how things may be - without that, there would be no hypotheses.
Lucas's argument claims that materialism is false, so Udo is within his logical rights to assume otherwise -- it would be begging the question to insist he cannot do that.
None of this makes any progress on the question of whether Lucas' argument is valid and sound.
This argument is twisted out of shape. Basically, you're saying that keiferski is making the claim "we don't know whether cells are machines", which he should prove. When, in fact, we actually don't know and it is you who is proposing the added hypothesis that they are. I understand that it seems to you very likely that cells are machines, but that's just your feeling. You're the one making a claim and your arguments are not convincing to everybody, so that's that, there's not much more to add really.
> Basically, you're saying that keiferski is making the claim "we don't know whether cells are machines", which he should prove.
No. He/she challenged me, repeatedly, to prove my assumption that we're talking about machinery, which he/she called foolish and handwavy. I take the position that my assumption is in line with current understanding, and that the entirety of modern biochemistry is based on and working with that assumption. keiferski holds that I'm ignorant for taking that position, implying that wide and extraordinary counterclaims of something paranatural going on with minds should be considered reasonable until explicitly disproven. The basic difference of opinion is - as I understand it - what the default position should be.
> Maybe the reductionists are getting closer and closer to reality, the more powerful the metaphors become?
Assuming technology progresses toward gaining more knowledge of the universe, then this would be the case. However that doesn't mean they are actually accurate metaphors, merely that they are slightly better than ones before them. 1% is a hundred times more than 0.01%, but it's still just 1%.
You are confusing brains and minds. Our minds extend beyond our brains into our bodies: people reason and act because of hunger and thirst. Our minds extend beyond our bodies: we act on the basis of our friends and society.
That's a baseless assertion. In fact, I made it extra clear to distinguish between minds and the hardware they run on. By necessity, we need to talk about the "interface" between both though.
> Our minds extend beyond our bodies, we act on the basis of our friends and society.
How does that counter the argument that minds (already) run on machines?
An "interface" like Descartes' pineal gland? Or would you prefer something RESTful?
While technical metaphors applied to the good ol' philosophical mind-body problem look very modern, they are also totally inept. There is no architecture or VM to port your consciousness to, nor can you upload it to AWS.
I would say that the article is based on the flawed premise that the constraints of Gödel's theorems do not apply to the human mind.
The fact that some theorems undecidable in a formal system can be 'seen true' or informally proven is explained by the simple fact that formal systems are a 'step behind' intuition.
We have an intuitive grasp of mathematical structures and inference, and we created neat and comprehensible formal systems that try to reflect it, but they lag behind in some cases. We can iteratively improve them by adding more axioms and more general inference methods, but that is somewhat futile, as Gödel's second theorem shows it will never make them complete.
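That iterative improvement has a standard form in the logic literature (a sketch; taking PA as the base theory is my choice of example):

```latex
% Start from a base theory and keep adjoining consistency statements:
\begin{align*}
  T_0     &= \mathrm{PA} \\
  T_{n+1} &= T_n + \mathrm{Con}(T_n)
\end{align*}
% Each $T_{n+1}$ proves the G\"odel sentence of $T_n$, yet by the
% second incompleteness theorem $T_{n+1}$ still cannot prove
% $\mathrm{Con}(T_{n+1})$ (assuming it is consistent),
% so the hierarchy never closes.
```

Each extension repairs the previous system's blind spot while acquiring a fresh one of its own, which is exactly the futility described above.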
But there is no reason to assume that intuition is complete in that same sense. It seems likely that if we repeated that step enough times, we would end up in a situation where we couldn't progress, as intuition would fail on undecidable theorems (either by being silent, by being inconsistent between people, or by being incomprehensible).
> Nor could we make its inconsistency a reproach to it---are not men inconsistent too? Certainly women are, and politicians; and even male non-politicians contradict themselves sometimes, and a single inconsistency is enough to make a system inconsistent.
Not 'love' in an agreeing sense. Boy are we in different times now.
Agreed, I see no justification for assuming that human minds are consistent; in fact, that seems self-evidently not to be the case. Also, just because a system is inconsistent doesn't mean it's incapable of generating or 'simulating' consistent systems. So we can conceive of a consistent system, apply it to problems, and achieve all the advantages of consistency within some limited domain, while at the higher level being ourselves inconsistent. I see no reason why an automated system could not do the same. This seems to me to break the argument completely, though I confess I'm no great philosopher and it's highly likely I'm missing something.
A classic! Seems obviously wrong (see Udo's comment), but nobody can quite seem to agree exactly what goes wrong. Probably Lewis' responses are best – https://philpapers.org/rec/LEWLAM-2
I didn't realise people were making these arguments back in the sixties. I came across this in Penrose's much more recent books, "The Emperor's New Mind" and to a lesser extent "Shadows of the Mind", which make similar arguments to this paper, appealing to incompleteness (and, in the second book, quantum mechanics) to argue against strong AI. Both books are well worth reading, but since I understand Turing machines are able to simulate both quantum and classical machines, I found it hard to find them truly convincing.
It is a question of efficiency: our brains run on about 15 watts.
Quantum computers promise many orders of magnitude in efficiency via a (handwavy) 'searching the whole solution space' sort of ultimate parallelism.
The new field of Quantum Biology[1] shows that many biological processes exploit quantum effects, e.g. enzymes exploiting quantum tunnelling to unknot proteins, the robin's sense of direction, and photon pathfinding to the reaction centre in photosynthesis.
It is therefore not entirely unlikely that brains exploit various quantum effects to make thinking much more efficient.
Even though Gödel proved that no consistent formal system capable of arithmetic can be complete, you can probably get close enough for the practical purposes of being intelligent.
Good point. In any case, those arguments suffer from the same weakness.
Intelligence is not about formal systems proving theorems; it's about navigating the real world as an organism, attempting to survive, thrive, and procreate.