"Hubert Dreyfus, who argued that computers, who have no body, no childhood and no cultural practice, could not acquire intelligence at all. One of Dreyfus’ main arguments was that human knowledge is partly tacit, and therefore cannot be articulated and incorporated in a computer program. "
I have not read the rest of the article but in the introduction it's stated:
"The article further argues that this is in principle impossible, and it revives Hubert Dreyfus’ argument that computers are not in the world."
Wiktionary defines tacit as "Not derived from formal principles of reasoning" [1].
So the main argument is that humans have intelligence that is impossible to express through reason or codification. In other words, humans have a literal soul, divorced from the physical world, that cannot be expressed in our physical world, thus making any endeavour to create artificial intelligence impossible.
This is a dualist line of reasoning and, in my opinion, is nothing more than theology dressed up in philosophy.
I would much rather the author just flat out say they are a dualist or that they reject the Church-Turing thesis.
Tacit knowledge is knowledge that results from adapting to experience, like learning to tie shoelaces with practice, rather than something like finding the derivative of sin(x) by working through the usual rules of calculus, one formal step at a time.
Every deep learning system has tacit knowledge: it knows a chair when it sees one but can't explain how it knows. It just adapted its connections during training until it got things right most of the time.
So computers are provably capable of what, in humans, is defined as tacit knowledge and can be given sensors and actuators to learn from. A car can learn to parallel park with practice. It can't explain how it does it, but you can copy the trained system into a new car.
I don't see why you couldn't produce a combination of sensors and actuators that vastly exceeds what any human is capable of.
But AGI isn't that. It's a variety of information processing techniques (algorithms) deployed as a toolbox managed by meta-techniques (more algorithms) that know how to deploy the others in various combinations. We don't yet know much about the management algorithms, but I don't see any reason in principle why we couldn't eventually find some and invent others.
> A traditional computer program that can find the derivative of sin(x) also can not explain how it knows.
Oh but it could, and that's the point. Some computer differentiation techniques just follow the same rules you learned when you took calculus. They typically don't show you which rules they followed, but they easily could. Other differentiation techniques are more exotic but there's no reason they couldn't show you the chain of computations and/or deductions they went through to arrive at that derivative. Such programs can easily justify their results and even teach humans calculus if configured properly.
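To make the contrast concrete, here's a minimal sketch (mine, not anything from the article or from any real computer algebra system) of a toy rule-based differentiator that records which calculus rule it applies at each step; the expression encoding and names are purely illustrative:

    # Toy symbolic differentiator (illustrative sketch only): expressions are
    # nested tuples like ('sin', 'x') or ('*', e1, e2); diff returns the
    # derivative plus a trace of the rules it applied.
    def diff(expr):
        if expr == 'x':
            return 1, ['d/dx x = 1']
        if isinstance(expr, (int, float)):
            return 0, ['constant rule']
        op = expr[0]
        if op == 'sin':
            d_inner, steps = diff(expr[1])
            return ('*', ('cos', expr[1]), d_inner), steps + ['sin rule + chain rule']
        if op == '*':
            d1, s1 = diff(expr[1])
            d2, s2 = diff(expr[2])
            return ('+', ('*', d1, expr[2]), ('*', expr[1], d2)), s1 + s2 + ['product rule']
        raise ValueError('unknown operator: ' + str(op))

    derivative, trace = diff(('sin', 'x'))
    print(derivative)  # ('*', ('cos', 'x'), 1), i.e. cos(x) * 1
    print(trace)       # ['d/dx x = 1', 'sin rule + chain rule']

The trace is exactly the "chain of reasoning" a real system could expose if it were configured to.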
Contrast that with the chair example. It is impossible right now to write a program that can show a human the chain of reasoning it went through to decide some image is a chair, because no such chain exists. There's a giant iterated polynomial with nonlinear threshold functions and a million coefficients, but there's no chain of reasoning.
I'm not sure a human can explain how they know a chair is a chair, either. They can come up with a post-hoc rationalisation, but that's not guaranteed to really represent the decision-making process they went through.
At best you get an answer that describes one or more conscious decisions and leaves the unconscious decisions out, such as "it looks a lot like a stool because it's low to the ground and has three legs, but it has a back, so I think it's a chair"; when the real answer is that they have a bunch of pattern-matching visual neurons, and those neurons feed into other neurons that detect more complicated patterns, and the concept of a chair eventually emerges.
That lack of a chain of reasoning just doesn’t feel any more significant to me than the fact that a human can also not endlessly regress upwards explaining every bit of knowledge they have or every reason they made a decision. Likewise the computer algebra software can only answer “how did you know that?” so many times in a sequence.
People can't explain how they know something either. They know it has something to do with their brains, but they don't know how exactly the mechanism works.
At a certain level, "knowledge" is baked into the execution hardware.
Hubert Dreyfus in his 1986 book "Mind Over Machine":
> The digital computer, when programmed to operate by taking a problem apart into features and combining them step by step according to inference rules, operates as a machine—a logic machine. However, the computer is so versatile it can also be used to model a holistic system. Indeed, recently, as the problems confronting the AI approach remained unsolved for more than a decade, a new generation of researchers have actually begun using computers to simulate such systems. It is too early to say whether the first steps in the direction of holistic similarity recognition will eventually lead to devices that can discern the similarity between whole real-world situations. We discuss the development here for the simple reason that it is the only alternative to the information processing approach that computer science has devised. [...] Remarkably, such devices are the subject of active research. When used to realize a distributed associative memory, computers are no longer functioning as symbol-manipulating systems in which the symbols represent features of the world and computations express relationship among the symbols as in conventional AI. Instead, the computer simulates a holistic system.
Further down, this is quite a good summary of Dreyfus' general argument:
> Thanks to AI research, Plato's and Kant's speculation that the mind works according to rules has finally found its empirical test in the attempt to use logic machines to produce humanlike understanding. And, after two thousand years of refinement, the traditional view of mind has shown itself to be inadequate. Indeed, conventional AI as information processing looks like a perfect example of what Imre Lakatos would call a degenerating research program. [...] Current AI is based on the idea, prominent in philosophy since Descartes, that all understanding consists in forming and using appropriate representations. Given the nature of inference engines, AI's representations must be formal ones, and so commonsense understanding must be understood as some vast body of precise propositions, beliefs, rules, facts, and procedures. Thus formulated, the problem has so far resisted solution. We predict it will continue to do so.
I think that Dreyfus has unfortunately set back the cultural understanding of computers by decades, by confidently declaring certain tasks impossible for computers to do, because minds have "insight" or "tacit knowledge" or are "holistic", each of which functionally lets a mind be a ghost in the machine.
A lot of the rhetorical momentum comes from pointing at the progress of technology at various stages in human history, especially the fits and starts of AI/Language research in the mid 20th century, and remarking at how little progress has been made.
And the terms used to define what computers were are also vague.
>Given the nature of inference engines, AI's representations must be formal ones
When an AI trained on images of dog faces "dreams" on an image, and progressively twists flowers and purses into dog faces and noses, is the connection made between patterns and dog faces "formal"? Are the images generated by ThisPersonDoesNotExist informal? The ways computers work on data now deal with abstractions & fuzziness in a way that I think Dreyfus did not imagine to be possible. I think Dreyfus wanted to say that the higher-level methods that we now employ to generate images, human-like language, transpose art styles and create nearly photorealistic faces are on a foundation of principles that are new and distinct from the characteristic principles that he understood to be central to computing. But all of our new progress is implemented on a foundation of silicon and bits, too, which simulate neural networks, meaning those are just as computational as the desktop calculator app. I think Dreyfus just couldn't imagine that 'computing' could include all this extra stuff, and, to take a term from Dennett, Dreyfus mistook his failure of imagination for an insight into necessity.
He is talking about AI as presently conceived when writing. The quote I posted has him explicitly imagining what you say he could not imagine. His critique was in fact INFLUENTIAL for the currently successful approaches.
It's him imagining things that he thought couldn't be done on computers under one definition, based on vaguely defined terms. Dreyfus was open to another, more expansive definition that included things like 'holistic' and 'tacit' knowledge, which he believed were outside the scope of what computers of a certain sort could do. That distinction turns out to be moot because all the 'new' stuff (neural networks, GANs, GPT-3, etc.), while in some sense new and innovative, is ultimately running on the same old foundation of logic gates, zeros and ones, and really is computable in the classical Turing machine sense, which is exactly what he had spent his whole career denying. It was a limit of Dreyfus' imagination that he didn't understand that computation, even the kind he criticized, could model the higher order conceptual structures he thought were inaccessible to classical computers. He's not wrong to think that something called 'tacit' knowledge would be important, and would call for specialized approaches and new concepts. Where he went wrong was in veering to the insane, overconfident extreme of denying that these were computable.
Computers can’t heal themselves. Our bodies do that on their own without conscious prompting.
Humans evolve the complexity of a computer's electron states, not the computer's own inherent properties. My use doesn't force transistors to evolve into better transistors. It has no self-regeneration.
You don't see the issues in principle, but here you have an article by an expert pointing them out.
Perhaps be a better listener?
A computer has observable, literal limitations relative to a human's mechanical functionality.
You can’t scrape away literal reality to arrive at some reductionist idea of what a consciousness is.
Our only known good model for a machine that can create our consciousness is us. It took billions of years of the universe churning at random to accidentally generate us. We have no clue how to replicate that scale.
A computer literally lacks a whole lot of literal information that’s embedded in the hardware and software of a person.
Watch this and tell me the last time your computer reconfigured its literal shape when you altered its electric field properties: https://youtu.be/RjD1aLm4Thg
There’s something to “life” we’ll never be able to jam into silicon.
This is exactly the sort of conflation of completely unrelated concepts that I found in the article. What on Earth has healing got to do with reasoning? You even say our bodies do it without conscious prompting, in other words it's a completely irrelevant issue.
Yes computers aren’t biological, they aren’t life, but so what? Why does that constrain their ability to interact with and learn from the world?
AGI is defined as human-like intelligence, humans go to the toilet, computers don't go to the toilet, therefore AGI is impossible. See? It's easy to "prove" AGI is impossible. That's really all their argument boils down to. I kept reading the article expecting to hit some essential argument, only to find none. Very disappointing.
For all the pontificating about souls as a means to discredit OP's beliefs, you all seem to avoid this statement:
> Our only known good model for a machine that can create our consciousness is us. It took billions of years of the universe churning at random to accidentally generate us. We have no clue how to replicate that scale.
Would you please provide a counterargument? Do you understand how difficult this really is? We can't even comprehend the physical constraints.
That quoted sentence argues that we do not have AGI now. I have no counterargument against that. We do not have AGI now. That sentence on the other hand fails to argue it is impossible to develop AGI.
Someone before the invention of the aeroplane could have said:
Our only known good model for flying is birds and insects. It took billions of years of the universe churning at random to accidentally generate birds and insects. We have no clue how to replicate that scale.
And yet we know that it's not impossible to create flying machines.
Flight and consciousness are not in any way comparable concepts. We literally cannot perceive the constraints of the latter's underlying physical system.
Sometimes I feel computer scientists choose to misunderstand physicists because it would make them feel stupid if they did understand.
>Flight and consciousness are not in any way comparable concepts.
For pete's sake, it's an analogy, not a direct comparison, and it is perfectly valid as such when interpreted with due charity.
You can say "Brains are complicated. They took time and evolution. That sure is hard. See how hard it is?" The same can be said of flight at a certain level of abstraction as a valid analogy, which can be charitably interpreted as such without the need for claiming anyone is purposely choosing to misunderstand physics.
The fact that the examples of brains and of flight given to us by nature sure seem complicated doesn't establish as a matter of principle that their salient properties can't be modeled in machines, and that's the real thing that's at stake. Disputing that requires a different kind of argument than saying "gosh it sure is complicated", and that's what the analogy is pointing out.
This is interesting. Is aeroplane flight akin to bird or insect flight? Rolling down the tarmac, peering out the window, the planes look more like elongated fish bodies than soft bird bodies, or compact insect bodies. Our planes rather swim in the air than fly in it, I think.
Our flight is some other kind of thing (whatever we uncovered the model-able, salient properties of flight to be). Computer consciousness might similarly be some other kind of thing. And that’d be ok.
But can they be equated? Only at some abstraction level. A plane is obviously not a bird or an insect or a fish. Aeroplane flight is not bird or insect flight either, nor is it swimming. But it is safe travel through the air, from one earth-bound destination to another.
Technological progress is many orders of magnitude faster than evolutionary change; assuming the 'magic' of the brain is indeed in the neural circuitry, hardware is expected to be powerful enough to simulate that on a timescale of O(100) years in the future instead of O(10e9).
Of course, it's not a given that simulating connected neurons with action potentials or whatever is sufficient to capture the relevant features of the brain (perhaps long-distance em interactions are relevant? quantum magic? do we need to drop to the molecular level?) - but without proof to the contrary, we'll just have to wait and see.
This is a well-balanced take. Maybe we will see emergent properties at that scale. If that were the case, then we could catch enough of a glimpse of what is really happening...
But another part of me says that is silly. We had an expectation of what the Higgs was before we found it. Here, we are shooting in the dark.
> We had an expectation of what the Higgs was before we found it.
However, the standard model (which formed the basis for the prediction of the Higgs boson) was created to bring order to the chaos of unexpected experimentally discovered particles. In the words of I.I. Rabi on the discovery of the muon, "who ordered that?"
It's not entirely a given that consciousness is needed for intelligence.
It's also not clear what consciousness is. Plenty of animals are self-aware, but they don't have human level intelligence.
We only have a single example of general intelligence. It seems possible that there could be other kinds of general intelligence that don't require consciousness.
We don’t understand something. It’s a complicated phenomenon. So it’s impossible to replicate? I think the “argument” is so silly it doesn’t need to be disproved. If one wants to prove something impossible they’d better avoid logical fallacies. A priori we can’t say that it’s impossible, nor that it’s certainly doable.
Except that like flight it’s already been done - by evolution. I’m very confident that people will be ‘proving’ that it’s impossible like this article, right up to the day we actually do it.
I meant that it's dubious whether we can reproduce a brain-like machine until we understand the matter well enough to either prove it possible or impossible.
Anyway, I feel like you. If it's been done once, it can't be impossible in any meaningful sense.
Further, the argument from lack of understanding would work only if knowledge (and science) could not advance any further; but that's tough to prove -- not to say that it's used over and over in faith vs. reason debates to the point it's become annoying -- so the argument is quite weak.
You are taking huge liberties in forming equivocations, which leads you to a misleading conclusion.
The tacit knowledge description only talks about the acquisition process of the knowledge, not that the knowledge itself is outside the bounds of rationality or the physical world. By the same token, when you claim "humans have intelligence that is impossible to express through reason or codification", the entire argument hinges on the actual meaning of impossible. Is it impossible in itself, and forever, because human intelligence is not reason-based? Or are we using a meaning of impossible where, at this day and age, the process of doing that is still intractable for all intents and purposes? If you're claiming the former, that itself is such a strong claim it requires its own strong proof. If it's the latter, well, we have been developing psychotechnologies for several millennia to be able to express ourselves and our cognitive processes, and we're getting better at it; you just need to be patient.
If you found a working x86 chip in the wild, but all documentation and knowledgeable people were wiped out for some reason, I bet the process of finding out how that x86 worked would look quite similar. It wouldn't make x86 otherworldly or in need of a soul.
This is a good point, but I will add, from close discussions with Dreyfus, his position was that it's impossible because it's fundamentally impossible, not because it's intractable. (see my other comment for more)
Interesting to learn about Dreyfus; I was aware of this line of thought from Francois Chollet's article. He is the creator of Keras and the author of some pretty advanced research papers on the nature of intelligence.
He states that the environment and embodiment are crucial for the development of intelligence. Even for humans, 'our environment puts a hard limit on our individual intelligence'.
The implausibility of intelligence explosion (2017)
..
In essence we need simulators on par with reality to train human-like intelligence. BTW, take a look at ThreeDWorld, just came out: 'A High-Fidelity, Multi-Modal Platform for Interactive Physical Simulation'. We're getting closer, and AI scientists are aware of the environment problem.
I have been interested in this debate for perhaps a decade now, and to me one of the most important things to get clear is whether skeptics are just claiming X is really hard, or whether they are claiming it's impossible as a matter of principle, which are two very different things. I think this discussion is about the latter rather than the former, but that many people talk about the former as if it's relevant to the latter.
I don't think it has anything to do with souls. Most "knowledge" deep learning systems have seems tacit to me: it would be practically impossible for people to write programs that articulate and incorporate that knowledge without machine learning (people tried, for decades), and it certainly isn't "derived from formal principles of reasoning" in anything but a tangential mathematical sense.
(I too have not read the whole article; I'm just replying to this comment.)
I have read it and you're not missing anything. One of the examples of tacit knowledge they give is walking, claiming that it is therefore impossible to teach a computer to walk. They should watch one of the videos from Boston Dynamics.
That's not teaching a computer to walk, that's building a walking machine. Subtle difference, but it's easy to prove; no matter how well you build a Boston dynamics robot, it will never like walking in the same way your TV will never like entertaining people. There's no "I" there to learn.
Way to change the goal posts. We built a machine that learned how to walk. Mission accomplished. The assertion in the article was either wrong or irrelevant, pick one.
Then a bald assertion, computers will never X. Says you. Just because we’re not there yet is no proof it’s impossible.
My son learned how to stand up at 8 months, pretty soon he was walking and even running. No one had to teach him anything, he did this by observing the world around him and drawing his own conclusions.
This is not about definitions; since we don't even know what exactly we're chasing, there are no ways to express the difference unambiguously. What we get instead is one side trying to (unsuccessfully) define the difference and the other pretending it doesn't exist.
We're a long, long way from an AI learning how to walk by itself. Neural networks and machine learning are one piece of the puzzle, expert systems are probably in there somewhere as well. Perhaps one day we will identify all the pieces but we're definitely not even close.
Cows can walk minutes after they are born. I guess they're even smarter. You might argue that the walking calf is not intelligent because it came preprogrammed to walk, but human babies will reflexively start making stepping movements when you hold them upright and let their feet touch the ground. I think it's ridiculous to argue that your son somehow learned to walk through observation and reasoning alone.
"Liking things" is just one part of a biological reward system not a metaphysical event, it is possible to create it just not with our current tech (so saying "never" is a stretch imo)
>That's not teaching a computer to walk, that's building a walking machine.
I think that's redefining teaching so that teaching, whatever it is, includes a subjective human 'ghost' inside of it.
But Dreyfus wasn't just saying that machines can do those things, only without a soul. Dreyfus was arguing that things such as walking are clever and subtle in ways that depend on tacit knowledge to execute successfully, and things that depend on it simply aren't even achievable by machines at all, because the nature of those tasks is such that they require a special magical soul. Being able to do the task at all, with or without a special magical soul, stands as a counterpoint to the argument Dreyfus had been making for half of the 20th century.
> it would be practically impossible for people to write programs that articulate and incorporate that knowledge without machine learning
That could be true, but I think that's a different argument. That's more like claiming that it is impossible for a computer to become intelligent unless it can experience its environment and remember those experiences. That seems like a much more plausible and less dualist claim.
> Most "knowledge" deep learning systems have seems tacit to me
It's an interesting question whether they're actually tacit in the human sense or not. At the base level even deep learning systems certainly rely on digital manipulation. It's almost certain that the human brain, due to speed constraints of biochemical processes, doesn't run on tons of matrix multiplication or loss functions, so it's an open question I guess whether deep learning systems really just resemble the tacit capacity of humans or if there's something fundamentally different in the architecture of organic systems that cannot be replicated, at least in today's machines. Which I think is actually fairly likely to be honest, and I always wonder why it's disregarded.
It's funny that the top comment of this chain asserts dualism, but I think dualism is overwhelmingly common among CS folks, who almost seem to treat intelligence like some sort of platonic thing, completely ignoring the stuff it's made out of.
I'd argue that deep learning is derived from formal principles of mathematical reasoning in a very concrete sense. Deep learning learns a predictive function of the features that minimizes the loss function (with some caveats). If the loss function and training data are well chosen, that minimizes the probability of being wrong.
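As a toy illustration of that point (my sketch, not the commenter's code or any particular library): even a one-parameter model "learned" by gradient descent is just formal loss minimization underneath.

    # Fit y ~= w*x by gradient descent on mean squared error (illustrative only).
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x
    w, lr = 0.0, 0.01
    for _ in range(1000):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    print(w)  # close to 2.0, the value minimizing the squared-error loss

Deep learning does the same thing with millions of parameters and a much messier loss surface, but the underlying recipe is the same formal one.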
The Church-Turing thesis only states that the lambda calculus and Turing machines can compute the same functions. It has nothing to do with materialism or dualism and it certainly doesn’t state that the universe is a Turing machine.
"Every effectively calculable function is a computable function" [1]
The definition is a little terse so we have to expand on what "effectively calculable function" and "computable function" mean.
By a "computable function", we mean a function that can be computed by a Turing machine. The term "effectively calculable function" is a little unclear but one definition that I think is the closest to the intent is "it can be done by a human without any aids except writing materials." [2].
In other words, the Church-Turing thesis is saying:
"All physically computable functions are Turing computable"
That is, the physical world, including human cognition, can be realized by a Turing machine.
While one formulation might be cast in terms of lambda calculus, this hides the underlying assumption that lambda calculus is being used as a proxy for simulating the physical world and, as a subset, human cognition, effectively saying "if it can do lambda calculus, it can do the physical universe and can do human cognition".
"That is, the physical world, including human cognition, can be realized by a Turing machine."
How did you get to the conclusion that the whole physical world is computable? Rather a huge jump I would say. Sure, some things are, but "all of it" would be a BIG assumption. Physical theories of the world are limited to our current state of observations and knowledge of the world; they AREN'T the actual world. Who is to say this continuous search, observation and refinement of theories will ever end and we'll have a FINAL theory of everything that we can then plug into a computer and simulate?
Sure you can now say "I don't require a theory of everything, I just need a 'sufficient' amount of theory to simulate the part of the world from which I can have my intelligence & cognition emerge". Sure you can say that, but that would again hinge on the assumption that such cognition is reducible to these "sufficient" laws.
Likewise, saying that the whole physical world can be realized by a Turing machine is a bit rich when we don't even know if such a complete reduction of the physical world is possible, and when such reduction to physical laws is surely not yet complete.
> How did you get to the conclusion that the whole physical world is computable?
That’s not the conclusion, that’s the whole thesis. The whole point of it is that yes, it’s not provable, but so far we haven’t seen anything to suggest the contrary. All of our current physical theories are very much computable, for example.
The Church-Turing thesis is that for every algorithm you can compute, you can define a Turing machine that can compute it too. You still have to show that you can compute answers to your questions about the physical world or human cognition.
And we know that we can define an infinite number of problems that can never be computed (or else you could, for example, solve the halting problem). So there would be an infinite number of questions about the world that we can never answer.
And there would be more questions that we cannot answer (they are uncountable) than questions we can answer (they are countable). So if you have a question, chances are it can never be answered.
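For what it's worth, the counting step behind that last claim is standard textbook material (my summary, not the commenter's):

    % Turing machines are finite strings over a finite alphabet, so there are
    % only countably many of them; decision problems are functions
    % f : N -> {0,1}, and by Cantor's diagonal argument there are uncountably
    % many of those.
    \[
      |\{\text{Turing machines}\}| = \aleph_0
      \quad\text{while}\quad
      |\{\, f : \mathbb{N} \to \{0,1\} \,\}| = 2^{\aleph_0} > \aleph_0 ,
    \]
    % so all but countably many decision problems have no Turing machine that
    % decides them.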
My point is not whether all physical laws are computable, I have no doubt they are insofar as these laws are expressed mathematically. My point is rather that this search for laws might never finish and a complete ruleset never come about.
Like I said all current theories are based on current state of observations. Who is to say we don't observe something in the future for which these laws need to be revised? Who is to say this doesn't keep happening indefinitely? If such a "bottoming out" cannot even be conceived of, saying that the physical world, even a part of it, is exactly computable AS IT IS (all aspects of it) is utmost arrogance.
What do we know about the brain? How does it generate cognition? Why does the color 'red' look the way it does to you; does it look the same to me? Unless the nature of such cognition, and of the being which embodies it, is known, one cannot say it is reducible to or "emergent" from the CURRENT set, or even any FUTURE set, of physical laws we know or will know about.
The map is not the territory however minute in detail it becomes. Sure this map can become the territory itself by becoming it but then we cease to call it a "map". Say you want to understand how pendulums work, you make a pendulum, play with it. No one's going to call this actual pendulum a "simulation".
This belief is widespread, but mistaken. It tries to slip in a conjecture as an axiom in the unstated leg of the enthymeme. It’s the worst sort of hand waving, but the conclusion apparently flatters the biases of those who wish to believe it such that they happily overlook the sloppiness.
Also our physical theories are only computable with an arbitrary halt thrown in at some level of accuracy deemed good enough. And there too theoretically questionable trickery like renormalization is used to reduce the problem to a tractable size.
It might be computable in principle but not in practice for a long time, similar to how it is possible in principle to simulate all the molecules of air in one cubic meter (neglecting quantum effects), but not actually feasible in practice. Any argument for computability also needs an argument for why a "coarse graining" or "effective model" of the underlying physical system exists.
In the case of transistors that is because all that matters are binary stable states, which reliably abstract over the complicated device physics. In the case of biological cells and neurons in the brain it is much less obvious what the reliable abstraction is. Right now a lot points towards "it is just a bunch of linear algebra and lots of data", but especially when we come to things like memory, online and few shot learning, the answer becomes far less obvious.
It's funny, because I'd argue that the belief that "the universe is a Turing machine" is a kind of secular religion. Hardcore physicalists often betray themselves as not only being bad materialists but actually just idealists in denial.
An amusing retort, to be sure, but the "creator" could be an entirely automated process, akin to instantiating a VM or container.
In other words, God has been replaced with a very small shell script.
So, even if an intelligent creator is ultimately responsible for our plane of existence, there may not be much in the way of intent or even observation associated with that responsibility at any scale we would find meaningful.
Heck, who is to say that our particular simulated universe isn't just a honeypot of some sort?
>So the main argument is that humans have intelligence that is impossible to express through reason or codification. In other words, humans have a literal soul, divorced from physical world
No, in other words, humans have tactile, empirical, emotional, social, etc intelligence that is perfectly physical but not available to mere software in a computer.
It might be available to software running in humanoid robots, that can see, walk around, hang with other humans to learn, etc.
But even in that case, it won't be codified in any axiomatic way "through reason". Think more of neural networks and less of a 1960s AI program...
This seems like a weak argument to me. We don't really understand how human intelligence works yet so how can we claim that computers will never realize similar intelligence? We don't know for a fact that human intelligence depends on these things.
I'm personally skeptical that we'll see AGI any time soon but I don't think we know enough to say this definitively.
>This seems like a weak argument to me. We don't really understand how human intelligence works yet so how can we claim that computers will never realize similar intelligence?
My comment doesn't say that "computers will never realize similar intelligence".
It says that they will never realize it through reasoning and rule-based systems, 1960s-1990s AI style.
Which isn't the way we realize it either, even if we don't fully understand how we do realize it yet.
The complexity that makes human intelligence possible is already accessed by a brain-in-a-box through a limited set of interfaces; it's just a mushy organic brain and bone box rather than an electronic brain and steel box.
It's probably reasonable to argue that an AGI would require interfaces to all of the outside world's complexity to be self aware, but there's nothing stopping us from building it those interfaces.
There is a major league difference, and that is the closed loop of our body, the aspect of being, which is missing from everything we have made so far.
In a technical, not inclusive sense, I agree with you. Brain in a box is part of the story.
I do question "limited"
Again, in the technical sense, we do build interfaces that offer superior capability. But, they are nowhere near as robust and integrated.
I am not saying complexity itself makes us possible, though I do believe it is a part of the story.
Higher functioning animals display remarkable intelligence, yet they are simpler than we are in many ways, including the intelligence itself.
We feel, for example. Pain, touch, etc. And when we pay close attention to that, we can identify where, how, when, and map all that to US, what we are and know it is different from others, and the world overall.
Pain is quite remarkable. There are many kinds. Touch is equally remarkable as is pleasure.
Ever wonder why pain or pleasure is different depending on where we experience it? Why a cut on my leg feels different from one on my foot, or hand? Same for a tickle, or something erotic.
I submit these kinds of things are emergent, and happen when the whole machine has enough complexity to be self aware. Even simple creatures demonstrate this basic property.
Beings.
We have not made a being yet. We have made increasingly complex machines.
As we go down that road further, I suspect we will find emergent properties as we get closer to something that has the potential to be.
Not just exist.
I realize I am hand waving. That is due to simple ignorance. We all are sharing that ignorance.
Really, I am speaking to a basic difference that exists and how it may really matter.
Could be wrong too. Nobody is going to know for some time yet. Materials science, our ability to fabricate things, all are stones and chisels compared to mother nature's kitchen.
We are super good at electro mechanical. We are just starting to explore bio-mechanical, for example.
The latter contains intelligence that we can see, even if we do not yet understand.
The former does not. Period.
Could. Again, nobody knows.
There are things stopping us, and I just articulated them.
But not completely!
Scale may help. If we did build something more on par with a being, given our current tech, it would end up big.
And every year that passes lowers the bar too.
We can make things today that were science fiction not so long ago.
One other pesky idea out there too:
There may be one consciousness.
A rock, for example, literally is an expression. It has a simple nature, no agency due to low complexity. But its current state is what happened to it, how it formed, where it moved. And it is actually changing. The mere act of observing it changes it in ultra subtle ways.
Now, look at bees, ants. Bees know what zero is, appear to present far more complexity in how they respond to the world, what they do, than their limited, small nature might suggest.
Why is that?
What we call emergent may actually be an aggregation of some kind. Given something is a being, perhaps a part of that is a concentration of consciousness.
I am not a believer in any of that. I just expressed our ignorance.
But, I find the ideas compelling and suggestive.
They speak to potential research, areas where we could very significantly improve our ability to create.
Doing that may open doors we had no idea even existed.
We may find the first intelligence we end up responsible for is an artifact, not a deliberate construct.
In fact we may find a construct is not possible directly. We may find it just happens when something that can BE also happens.
Anyway, I hope I have been successful in my suggestion there remains a lot to this we flat out do not know.
> So the main argument is that humans have intelligence that is impossible to express through reason or codification. In other words, humans have a literal soul, divorced from physical world, that cannot be expressed in our physical world thus making any endeavour to create artificial intelligence impossible.
No, it just means there is no simple symbolic path that is human understandable towards human level intelligence. We need systems like neural nets and simulators to 'learn' things that can't be directly formalised. And we have tried for 50 years to formalise intelligence in symbolic representations.
>there is no simple symbolic path that is human understandable towards human level intelligence
I imagine that, piece by piece, if we really wanted to, we could look at the 175 billion parameters in GPT-3, test which are 'active' when this poem is written, which are active when that imitation of copypasta is active, and through a torturous interrogation, find that one particular parameter, say, is for weighting the 0.001% likelihood that you would use an accented é during a particular rhetorical flourish in certain contexts. Perhaps another parameter codes a meta-meta-meta abstraction about how a meta-meta rule governs a meta-rule for how to use a sometimes-used linguistic rule.
And the totality of those 175 billion parameters could in principle be uncovered and described in ways that are satisfactory to humans. It would be tedious and unproductive, and akin to the project of archeologists patiently, tediously uncovering a dinosaur.
But the point is it would be practically difficult, not something forbidden as a matter of principle.
More importantly though, I don't think the supposed incomprehensibility to humans has relevance to anything. What's the argument supposed to be? Humans don't depend in any explicit, conscious way on having conscious grasp of our own tacit knowledge. I don't know why I unconsciously shift my weight a certain way when going up stairs. This doesn't stop me from walking up stairs. And it doesn't stop us from making machines that could walk up stairs.
We need neural nets? Okay, sure, we need them. But we can run those on machines that, at the end of the day, are silicon and 0s and 1s, which are every bit the brute, formal systems that supposedly can't model intelligent things. Weren't we supposed to have encountered a barrier to what computers can do at some point in this thought exercise? Because it appears that what began as an extremely bold claim, that computers can't do X, Y, and Z, ends in a whimper, as a vague exhortation to appreciate that neural nets are in some sense structurally different from logic design. Nothing about that latter claim is making any bold statements about the limits of what computers can or can't do, which makes me feel like the argument forgot what it was supposed to be about halfway through.
That's a bit of a straw-man. It does not follow that the existence of knowledge that isn't derived from formal principles of reasoning proves the existence of a soul.
Replace the word soul with some essential quality of reason that humans have that isn't teachable, and the article is arguing that whatever that is can never be acquired by a computer.
You don't need a soul to justify the existence of knowledge that can't be reasoned. Qualia, emotion, feeling and knowledge solely from sensory observation fill that gap too. All of these things are perceived as a meaning by our brains well before they are rationalized. Feelings and emotions in particular break down under direct reasoned examination.
The mission to simulate a human mind seems more like a cultural precept for programmers. The physical task of creating a human mind simulation is a fool's errand. The human body is incredibly complex and I doubt we'll stumble over the ability to synthesize that system any time this century or next.
Why does general AI have to be like human intelligence?
Free will might have something to do with it as well. Self-awareness is the basis of our desire to find our place in the world and the meaning of reality relative to self. This desire might be a core driver of human intelligence; the initial lack of purpose, self-awareness, and the need to survive by solving arbitrary problems are important as well.
Computers may not live in the real world, but a virtual world of games, coupled with a self-aware program that trains subprograms to adapt and find solutions in order to survive, might be an interesting approach.
If by "soul" you mean an emotional core, then you'd be correct. Emotions allow us to short-circuit the processing required to ascribe value to a thing or situation.
But that has nothing to do with dualism or theology.
I don't think it is dualist, or at least I didn't get that impression reading Dreyfus. It is more that knowledge is part of a system as a whole and cannot be captured piecemeal, which is how we'd have to do it if we created AI by hand.
But, I don't think that argument is especially strong. Knowledge doesn't seem to be tied up in the system, otherwise we wouldn't have abstract subjects like math. Additionally, if knowledge is a function of a certain process of development, we can at least in theory reproduce any physical process computationally.
Yes, completely agree and was thinking along these lines as I read. I'm really interested in a good argument against AGI. Turing had the right intuition in thinking that if our brains are processing information then they can be simulated by a Turing machine. Everything points to our brains doing information processing at all levels.
The article still has a point about general intelligence without a body, but I think this can be solved by developing robotics and AI together, and I've seen some research in this area (I think Japan).
> So the main argument is that humans have intelligence that is impossible to express through reason or codification
I think what it means is not about codification, but that to achieve GI machines would need to go through human experiences that are not possible for a machine.
I don't agree, but I think that's the point the paper makes.
I'm not sure. If we think about it mechanically, and the output of a computer system is a function of its input, then perhaps we are woefully underestimating the role of the body as both a monitor and input system.
I don't think an artificial intelligence has to resemble anything we would recognize as human intelligence, while still being "intelligent". The motivations of humans and the motivations of computers are very different. If we can give a computer motivation, as well as a way to act and react within an environment, and a framework for learning (neural networks?) then what it "thinks" will be determined by the experience of its own existence. I'm not sure we're there yet.
At this stage at least, the recreation of the interaction between that artificial body and the environment will be impressionistic at best...
Like we don't even have a full conscious understanding of what we are made from, or what we need to survive.
How do we 'install' those ideas and that imperative in an agent, when we don't fully understand it ourselves?
I'm not entirely supporting one side or another, but I think it's reasonable to bet against the imminent arrival of AGI at this stage, unless some radical discovery comes to light soon.
And if (when?) that discovery eventually comes, I'd suspect it to be a biological one. But then, who knows...
What is "a body" other than inputs and outputs to interface with the environment, and perhaps a mechanism of perceiving the environment and remembering what was perceived?
The capital T-truth is that no one really knows since we're all observing the system subjectively from the inside.
Many who practice meditation and/or experiment with psychedelic drugs will tell you that there's something in there that's not a computer.
We could assume, as many do, they're fooling themselves; since we can't measure it. Or, we could trust our authentic experience of the real world even when it can't (yet) be measured.
The author of the article specifically says this: "I have earlier said that neural networks need not be programmed, and therefore can handle tacit knowledge."
Also you are misunderstanding what tacit means. It doesn't mean anything mystical - merely that it is gained from observations rather than logically reasoning about something.
You might be right but I'm not inclined to give the author the benefit of the doubt. From the abstract, the author clearly says:
"The article further argues that ... computers are not in the world."
The specific quote you mention is in a larger paragraph which says:
"Computers are not in our world. I have earlier said that neural networks need not be programmed, and therefore can handle tacit knowledge. However, it is simply not true, as some of the advocates of Big Data argue, that the data “speak for themselves”. Normally, the data used are related to one or more models, they are selected by humans, and in the end they consist of numbers."
My reading of this is that "tacit" is used as a kind of dog whistle to dualists. It's ambiguous enough so that the author can claim they meant "learned" while still suggesting an underlying dualism.
Regardless of the meaning of "tacit" in this context, the author pretty much flat out says they're a dualist by repeatedly claiming "computers are not in our world" and "in the end they consist of numbers".
I think you're giving the author too much credibility that they aren't making a plea to mysticism.
I don't think the author makes a strong argument, and I disagree with their conclusion.
But the author's argument actually appears to be mostly that science itself is insufficient to understand the world. This argument is outlined in the section starting "But the replacement of our everyday world by the world of science is based on a fundamental misunderstanding. Edmund Husserl was one of the first who pointed this out, and attributed this misunderstanding to Galileo."
I think it's a pretty weak argument and I'm surprised Nature published it - the author wouldn't last 2 minutes trying to defend it on HN.
I do think that a better articulated version of his argument would be something like this (which attempts to capture what he means by "not in the world"): "despite all the advances in neural networks encoding tacit knowledge, it still takes a deeper human-level set of tacit knowledge in a wider context to make these neural networks useful. While we have surpassed human skills in-the-small, science based benchmarks, we seem no closer to achieving embodied human-level intelligence from machines in-the-large."
Again, I think you're giving the author the benefit of the doubt when it's not warranted. Your paraphrasing "science itself is insufficient to understand the world" is code for dualism.
I forgot to add the reference in the comment above but tacit means what I said it meant. I quoted directly from Wiktionary [1]. I'll do so again here:
Adjective
tacit (comparative more tacit, superlative most tacit)
1. Expressed in silence; implied, but not made explicit; silent.
tacit consent : consent by silence, or by not raising an objection
2. (logic) Not derived from formal principles of reasoning; based on induction rather than deduction.
I chose the "logic" interpretation as it seemed the most appropriate given the context.
I don't have any strong opinion about if the author is a proponent of dualism. I'd note that Quantum Bayesianism[1][2] (discussed the other day on HN) seems much more mystical than this, and yet is usually considered within the realms of science.
I build neural networks in my day job. They encode tacit information because they are "based on induction rather than deduction". But that's not anything mystical - it's just learning from data, and it's not a dog whistle towards mysticism either.
I have a perspective on this, since I took a rather engaging philosophy class from the late Prof Dreyfus at Berkeley.
As the article hints at, his line of argument was a completely valid criticism of AI based on a set of rules written with symbolic logic. He arrived at this conclusion after studying Heidegger's concept of dasein. The best way to describe dasein is through a classic example [1]:
"...the hammer is involved in an act of hammering; that hammering is involved in making something fast; and that making something fast is involved in protecting the human agent against bad weather. Such totalities of involvements are the contexts of everyday equipmental practice. As such, they define equipmental entities, so the hammer is intelligible as what it is only with respect to the shelter and, indeed, all the other items of equipment to which it meaningfully relates in Dasein's everyday practices. "
In Heidegger's mind, meaning is distributed across the web of interrelationships between objects and their various uses and ideas to humans. Intelligence was thus a process of knowing those relationships, and being a part of them. Being-in-the-world (dasein is a German word related to 'being') is a result of us as humans being 'thrown' into the web of meaning, in fact we are born already finding ourselves in it.
Dreyfus' objections to AI stemmed from the idea that computers are thrown into the world differently from us. They are the hammer in the anecdote, and not the human. During the class, we argued a lot with him about whether humans are the only creatures that have 'being-in-the-world'. We asked about the idea of the soul, and why humans are unique in this paradigm. My feeling after the end was that his entire philosophy comes directly from Heidegger. And since Heidegger didn't mention animals, or the lack of souls, his conclusion was "dunno".
Related to this, Dreyfus had an understanding of physics that was quite antiquated and classical. One example was the concept of time. According to Dreyfus, the physicists' notion of time was a series of discrete observations along a timeline, whereas humans experience time in stretches and long segments. I remember telling him that the notion of a single instant of time is poorly defined in quantum physics, and that we do recognize that every event has some time uncertainty. I remember asking him why we couldn't model human experience with some complicated function. For him, everything came back to Heidegger. To him, we physicists were being too reductionist, and there is indeed something about humans that cannot be described using physics. He stopped shy of calling it a soul, but it was essentially that.
[ETA: I now remember talking to him about Conway's game of life, and how a simple set of rules could result in complex, often hard to model behavior. My point was emergent systems exist within physics, and the way we describe them is different from single particles. His reply suggested that he was certain that human experience wasn't just 'emergent' - it was fundamentally different from anything in physics, no matter what.]
The class was on physics vs philosophy, and we disagreed. I don't think he really understood what I was trying to tell him. This was in 2013, before most of the deep learning revolution, but I think he would have the same objections with what we have now. Here are two possible directions we can go from here:
1) Argue with continental philosophers about reductionism and whether humans have a unique essence that cannot be modeled.
2) Understand Heidegger's and Dreyfus' thoughts about being and time, and drive AI research in better directions.
I prefer (2), because I already tried (1) with Dreyfus and it wasn't successful or productive.
I think understanding and modeling the graph of inter-relationships between objects and humans is exactly where we need to improve when it comes to AGI. It's probably going to need a good degree of embodiedness, an idea of a computer being thrown-into-the-world. Dreyfus would tell us that it's fundamentally impossible, and that we should all just give up on it. I think he came to the wrong conclusion. What I get from Heidegger is not "AGI is impossible" but rather "hey, this is what we should be worrying about. This is how humans see the world".
TLDR: Dreyfus had some good ideas about AI. I think it's extremely insightful to pay attention to his thoughts. Don't waste time worrying about his arguments against physicalism.
Thanks for the comment and confirmation that Dreyfus is essentially a dualist/believes in souls/etc.
I've idly had similar thoughts about the web of interconnections of concepts. Our idea of "cat" is not a single unit but an array of different ideas of body shapes, fur colors, sensory touch experiences, sounds, and other concepts that are rolled into one word.
I've often argued that general artificial intelligence will probably look like a complex series of individual 'utilitarian' components working in concert to achieve what looks to us like consciousness. A kind of "unix philosophy" of AI: small, targeted tools working in concert to create a larger "operating system" of consciousness.
I've taken a few undergraduate philosophy classes myself in addition to talking to many philosophy grad students. It pains me to see people waste so much time on this subject. Philosophy itself is wonderful. Academic philosophy is a hollow shell where all the significant subjects have branched off into their own disciplines, leaving the husk of theology parading as science. At least theologians have the honesty to admit they study theology.
I would rather learn from engineering lore and draw from their wisdom on how to build complex systems than listen to academic philosophers.
Maybe Dreyfus's course already did this, but it's worthwhile reading other perspectives into Heidegger. Rorty kind of shoehorns him into a pragmatist, while Graham Harman does, well, Graham Harman philosophy. His book "Tool-being" cannot be recommended enough.
I was actually introduced to Heidegger via "Tool-being" and read Dreyfus's "Being in the world" later. I think honestly I didn't pay much attention to Dreyfus's arguments since I had just read Harman's book, which is extremely thorough in its argumentation and in anticipating objections.
I'm not sure how you can say computers have no body and no childhood. I think someone who doesn't know computers very well just doesn't see it, but they actually have both.
Their body is their hardware, sensors, interfaces and means of changing the world through controlled devices which can include robot arms and legs or vehicles. Computers can learn and be trained to do things and get better at them, such as using genetic algorithms and machine learning. They’re rudimentary, but real.
Sociopaths can pretend to be like normal people. An AI with non-human intelligence can do its own thing and just mimic human behavior with sufficient fidelity to fool us. Dumb machines can already employ usable conversational interfaces. In 100 years, less dumb machines will be far more convincing.
This is possibly the worst article I have ever seen in Nature.
It is barely at the level of an undergraduate essay on AI (or AGI or ANI, as the author prefers). A hand-waving argument about "computers not being in the world" that relies on not bothering to define "computers", "being in" or "the world". Convenient avoidance of almost every more subtle anti-Searle/anti-Dreyfus critic (notably Dennett). Almost wilful narrowing of a much broader argument about the nature of reality and the connection between causality and conventional parametric science, when these things lie at the heart of the author's argument (such as it is).
Note that this is not the flagship journal; it's an affiliated journal under the Nature imprint. There are about 40 others like it: https://www.nature.com/siteindex (see under "N").
It's a Springer joint. Note that the counterpart journal, Science, is published by the AAAS and is non-profit, unlike Nature. If you're interested in science, Science is a great one-stop shop and it's about $120 per year.
Sometimes one wonders whether the chase for exciting stories affects some Nature publications.
I would be interested in keeping up with the latest cutting-edge science developments, but I expect a Nature subscription is too heavy-duty for that. Not necessarily because of the price, but I don't think I would end up using it.
Instead, what would fit my interests better would be a science newsletter that, say, once a month summarizes the most interesting recent developments and sends them to my inbox. I would then use that as a jumping-off point and even read the full articles I care about in Nature.
How about one of these e-letters [0] backed up by a Science digital subscription to actually read the content?
Referring to a recent TOC [1], the "Research Articles" would likely be more in-depth than you want, but there are also summarizing "Perspectives", "Features", and "Reviews" that I generally find OK for medium-difficulty reading.
Sure...the bar for Nature is really high...friends at my work have been through the editorial/revision process there and it's generally very demanding.
You're demanding that they exactly define what can't be exactly defined, and it's not going to happen ever. We're probably going to get closer and closer, but in the end we're still observing ourselves and reality from the inside.
I believe the goal of the article was to describe the differences between computers and humans, from the viewpoint of humans. What seems to be more popular these days is to start from the viewpoint of a computer and pretend humans are the same thing.
I'm not demanding that they are successful at "defining what can't be exactly defined". I would just like them to make an effort to do so.
The author's whole claims about "being in the world" and "tacit knowledge" are quite amenable to further, useful definition and expansion. For example, do they mean by "tacit" "not subject to an implementation in a way that could include an electronic digital computing device?" Or do they mean "not subject to explication by the human that carries said knowledge"? By "being in the world" do they mean being a causal object, or is merely being subject to the behavior of other causal objects sufficient? There are so many dimensions to both of these central concepts, and yet the author barely explores them at all.
The author also makes no effort to differentiate between computers and robots, even when such a distinction seems quite important if you're going to make claims about "being in the world".
The stated goal of the article was to show "Why General Artificial Intelligence will not be realized".
Show me a human who learned how to ride a bike by watching others.
My impression is that the Boston Dynamics robots are already on a path that isn't hard to imagine becoming quite animal-like in its learning in the reasonably near-term future. Their body (as with any animal) is mostly a given, as are the available control systems. Couple this to a NN-style learning process that takes place inside the robot rather than over there in the programmer's development system, and I'm not really sure I see an important distinction given the parameters of your question.
The fundamental problem this argument has is that computers are in the world. Probably not to the degree required and certainly not up to AGI. But, self driving cars and Boston dynamics exist. The gap between "in the world" and not isn't some insurmountable barrier. It's an engineering problem that's been solved to varying degrees by engineers and hobbyists.
There are so many great books on theory of mind, consciousness, embodied intelligence, and general/strong AI that it's hard to know where to begin.
For me, the place to start, even though it is somewhat old at this point (e.g. definitely no big-data/ML approaches even considered), would be the anthology "The Mind's I" edited and annotated by Dennett and Hofstadter. It's not going to directly rebut this author's point, but it definitely gives a deep and rounded overview of many of the issues involved in thinking about computers and minds as somehow related to each other.
But there are many, many others, almost any of which will be better than this article.
> In the book [Dreyfus] argued that an important part of human knowledge is tacit. Therefore, it cannot be articulated and implemented in a computer program. [...] [...] [...] These skills cannot just be learned from textbooks. They are acquired by instruction from someone who knows the trade.
This is a classic example of the Mind Projection Fallacy [1], where a property of how you think is assumed to be a property of reality.
It's true, for humans, that it is simply not possible to be told how to ride a bike and then be good at riding a bike. No matter how carefully and completely you explain to a human what they will have to do, when you put them on a bike for the first time they will struggle.
The mistake is assuming that this "have-to-really-do-it" effect is a limitation intrinsic to bike-riding-knowledge instead of a limitation in human learning and communication mechanisms. The mistake is assuming this property will generalize to all bike riding systems.
In a computer system, what would be tacit knowledge for a human is no longer tacit. If you create a computer program that can successfully control one bike riding robot, that program can be copied to a freshly built bike riding robot of the same make. The new robot will then successfully ride a bike the first time it is placed on it, without any hint of the human "have-to-really-do-it" struggling phase.
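To make the copying point concrete, here's a minimal sketch (Python/NumPy; the controller, its input size, and the file name are all made up, and the actual training loop is omitted). The only point is that the learned skill ends up as plain data a fresh machine can load:

```python
import numpy as np

class BalanceController:
    """Hypothetical bike-balancing policy: steering = tanh(W . sensors + b)."""
    def __init__(self, n_sensors=6):
        self.W = np.zeros(n_sensors)
        self.b = 0.0

    def act(self, sensors):
        # Map sensor readings to a steering command in [-1, 1].
        return np.tanh(self.W @ sensors + self.b)

# Robot A acquires its skill through trial and error (training loop omitted)...
robot_a = BalanceController()
robot_a.W = np.random.default_rng(0).normal(size=6)  # stand-in for trained weights

# ...and that skill is saved as ordinary numbers.
np.savez("bike_policy.npz", W=robot_a.W, b=robot_a.b)

# Robot B loads them and rides on its first attempt, with no practice phase.
robot_b = BalanceController()
data = np.load("bike_policy.npz")
robot_b.W, robot_b.b = data["W"], float(data["b"])
```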
It can be intuitively useful to imagine computers as having the ability to "super communicate" in a way that humans simply can't. That has its advantages, and its disadvantages. If you had super communication you could super-explain to a blind person what it was like to see and if they ever did gain their sight there would be no "Oh so that's what you meant" moment. On the other hand, a heroin addict could super-explain being addicted to heroin to you.
The point isn't bike riding. Bike riding is an example of a class of problems, and your solution begs the question.
The point is that humans and computers operate differently. The human approach is based on adaptive experiential heuristics.
The computer approach is based on explicit formalism. (Even in neural networks, there's still a formal model. It's just made of weightings instead of logic paths.)
The epistemology of these approaches is completely different. The problem isn't getting a computer to ride a bike, it's getting a computer to learn to ride a bike how a human learns.
Why would anyone do this? Because adaptive experiential heuristics are far more flexible and generalisable than explicit formalisms. And - it suggests here - you can't have real AGI without them.
So the problem then becomes unpicking what "adaptive" and "experiential" really mean. Both rely on huge accumulations of tacit knowledge and tacit motivations.
If this isn't obvious, consider that a human child will learn how to ride a bike and then go and have a lot of fun with it. An ideal bike-riding computer doesn't even have a concept of fun.
The human experience of fun is a complex system of experimentation, exploration, reward, and challenge, combined with physical, emotional, and mental correlates.
This matters because play in childhood helps develop the heuristics that adults use for problem solving, and for personal motivation and satisfaction.
Even more simply, the problem is the difference between building a workable but dumb bike riding machine and building a machine that will improvise bike riding as a goal for itself, will "enjoy" the experience, and will generalise from that to mastery of other domains.
> The problem isn't getting a computer to ride a bike, it's getting a computer to learn to ride a bike how a human learns.
This is just a more sophisticated way of assuming that there's something human-intrinsic about bike riding. Human-like is not the only way to approach doing or learning. Whatever works, works.
I could not agree more. From the perspective of neuroscience and computer science there is no evidence of any kind that some process is intrinsically human. Furthermore, learning is just a subset of intelligence; it seems to me most arguments are about the concept of intelligence, where that concept has a different meaning for each participant.
>Human-like is not the only way to approach doing or learning.
I would also say that we shouldn't agree that there's an irreducibly human-like way to do things that only belongs to humans. The things we think 'belong' to humans, such as our intrinsic bike-riding ability, may well turn out to be not intrinsic at all, and able to be modeled in all salient ways by a machine.
I think if we allow that distinction to be made, and proceed to argue that machines can learn in different ways, it allows a very strange human-essentialism to go unchallenged.
But humans learn by themselves, from observing the world around them and drawing conclusions. Which is a major advantage and a core problem in strong AI.
The only reason it works is that a lot of humans put a lot of effort into more or less figuring out exactly how to ride a bike. It doesn't generalize at all, teaching the same robot to ride a skateboard would mean doing it all over again.
Yes, and not just hypothetically; more generally, transfer learning is definitely a thing, and has been shown to reduce training requirements for unrelated tasks.
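For what it's worth, here is roughly what that looks like in practice. A minimal sketch assuming PyTorch/torchvision; the 5-class target task is made up, and newer torchvision versions spell the pretrained flag as `weights=...` instead:

```python
import torch.nn as nn
from torchvision import models

# Start from features learned on ImageNet rather than from scratch.
model = models.resnet18(pretrained=True)

# Freeze the pretrained feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and train only a small new head for the new task (here, 5 classes).
# This typically needs far less data and compute than training from scratch.
model.fc = nn.Linear(model.fc.in_features, 5)
```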
Very true. It seems closely related to the problem with qualia, which I've always thought of similarly.
Instinctively it seems impossible to describe the color green to someone who has never seen it. Seeing it seems to add "something" that is impossible to acquire in any other way. But this is because of our physical limitations, rather than the existence of some ephemeral quality. If we would have total knowledge and full ability of introspection, I don't see why we wouldn't be able to accurately predict the subjective experience of any input, including colors.
A friend of mine was the project manager for a project to develop one of the first color matching systems for paint stores. My friend is colorblind. At least for him, the project was very much a matter of programming knowledge that he could not possess.
I've gotten color matching done at Home Depot, to get paint for repairing my house. It's uncanny.
True. For instance there's knowledge that can't be directly perceived by any one individual, given variations in our sensory apparatus. Even among so called "normal" sighted people, the sensation of red varies from person to person. I have a normal sighted colleague, and there are "red" LED wavelengths that seem very bright to me, but he can barely see them. The color matching equipment is partly based on an accommodation of those variations.
>Being unable to perceive color and color relationships does not equate to being unable to have knowledge of color and color relationships.
Yeah, I think that was their point. Whatever 'knowledge' we think we have with qualia is actually something that, functionally, we seem to be able to get along without and do just fine. Which makes you wonder what functional purpose 'qualia' serve in the first place.
> It's true, for humans, that it is simply not possible to be told how to ride a bike and then be good at riding a bike. No matter how carefully and completely you explain to a human what they will have to do, when you put them on a bike for the first time they will struggle.
I'm not sure that's automatically true. The way we learn to ride bikes, as young children, yes of course. But someone who was already an expert skateboarder, inline skater, equestrian, fighter pilot etc., somehow without ever having ridden a bike, likely wouldn't struggle. Balance, lack of fear and general trust in your instruments definitely transfer.
I posted the article because it is thought provoking, although I disagree with it. The reason is statements like this: "The real problem is that computers are not in the world, because they are not embodied."
"And I shall argue that they cannot pass the full Turing test because they are not in the world, and, therefore, they have no understanding."
First, I think that whether something is embodied or not doesn't matter. Our senses could in the end likely be approximated by arrays of numbers fed to a computer, so I don't think lack of body is such an issue.
Regarding understanding by machines, that is clearly the issue for current AI, but at least based on what I know about modern machine translation, there is already something that works with concepts/abstract terms and their relations, which looks to me like a beginning for abstract reasoning...
As others have said, I think this is a better argument against machines emulating humanness than it is against the broader category of "general intelligence".
For me it always comes down to: If we assume the human mind to be a physical, deterministic phenomenon, then an informational, deterministic system could simulate it. It's as simple as that. To take the brute-force route, every atom in a human body could be simulated, which would include its intelligence.
Now, that says nothing about time-scale or practicality. It doesn't even say anything about whether electronic computers as we know them can achieve the physical density necessary to simulate something so complex, given the natural resources we have available. But in principle, unless you believe in the metaphysical, there's no denying that AGI is something that can exist.
Why do we need to assume a deterministic system? As long as it is physical and we believe physics can be described with computation, we can simulate it. Some randomness doesn't matter.
Thanks for posting; I also disagree with it. The author's high-level stance seems to be that only the human physical experience can lead to general intelligence. By this logic a blind person or quadriplegic might not be generally intelligent.
Not quite. The unconscious management of the internal systems that keep us alive is as much a part of that physical experience as our motor-sensory apparatus. In fact, some neurologists suggest that these unconscious urges are the most likely governors of 'conscious' action. That's a gross oversimplification... Dr Antonio Damasio explores the idea in fine detail in his books, particularly 'Self Comes to Mind'.
Absolutely not. People with disabilities still experience the world. See Helen Keller. Your reasoning would only apply to people with no sense and no way to communicate at all.
I'd recommend Daniel Dennett's talk If Brains are Computers, Who Designs the Software? as a counterpoint to this article [1]. Dennett's talk is a bit long but I mostly enjoyed it, especially when he points out the computerphobia of many of his colleagues.
Of course, it's easier for me to agree with Dennett than for many other people, because I'm a computationalist in the philosophy of mind and suspect that consciousness is an emergent property of certain computations. It might in theory even be possible to realize that by analytical insight - though probably not in practice. I'm generally not a fan of the philosophy of mind, though, because it mostly consists of speculation and intuition pumping.
> I'm a computationalist in the philosophy of mind and suspect that consciousness is an emergent property of certain computations.
I'm not a philosopher of any variety and this always seemed like a much more plausible scenario to me than some kind of consciousness magic sauce. It is pretty amazing to watch the intellectual contortions people will twist themselves into to avoid reaching this conclusion. Like deep down they are certain their lived experience can't possibly be an effect of purely physical processes, but they need to couch that belief in sufficiently convoluted words that they can convince themselves they aren't just talking about souls.
It doesn't seem to be required that an animal have sensory information. [1] I'm not convinced by the argument that a computer can't think just because it doesn't experience the world. If anything, experiments like [1] show that we may be able to better understand how brains work if we get rid of all the "noise" from external stimulation so we can better understand the signals.
My hunch agrees with the paper. Because computers operate on the rational numbers instead of the real numbers, they can never be embodied. They are always in a simulation...
Finally, saying that digital computers can't compute functions of the form Real -> Real is like saying that a computer can't compute functions of the form (Int -> Int) -> (Int -> Int). In other words, it's like saying that higher order functions are impossible.
Maybe I don’t understand your point, but our senses are also compressed to some extent. Your synapses fire or don’t fire, providing digitized input for some of your senses like hearing. At any point there is a discrete amount of hormones in your body. etc etc. That doesn’t seem categorically different from any man made sensor.
Our sensorium could be indistinguishable from how it'd be in a simulation without impacting my point. Software can hop from host to host (all Turing machines can simulate each other) but living intelligence is embodied.
Because of this, no matter how "smart" computers get, we can't trust them not to do stupid stuff. They are smart like that kid in high school who comes up with a formula to turn the world into silly putty, and is stupid enough to use it. There is more than one kind of intelligence, here the argument is that some kinds of intelligence require having skin in the game.
This is like saying CD players will never be as good as vinyl record players because the first use digital signals (rational numbers) and the second analogue (real numbers).
In that case, since we are here and it's Friday and everyone needs to take it easy from time to time, here's my proof that we don't live in a simulation: according to [1], ray tracing is computationally undecidable. For example, say you are in a museum at night, and a light bulb turns on in a room on the third floor, is it still dark in some room on the first floor? This problem is undecidable using the axioms of geometric optics.
Of course, geometric optics is like real numbers, and the actual world is discrete: light is a finite collection of photons. So, you can do the simulation by simply emitting a number of photons from each light source and tracing them all the way until they are all absorbed. We can still live in a simulation, but one without algorithmic shortcuts, a simulation where you simulate every single photon.
Now, I'm sure you can come up with your own version of proof that we can't live in a simulation using Nyquist's theorem. If you do, and your Friday is not too busy, please share.
People who vehemently insist that vinyl is better than digital audio are a big part of the reason "audiophile" is basically derogatory nowadays. Reasons being:
- Vinyl gets dust and scratches that degrade signal.
- Dragging a needle along a surface is an inconsistent and limited mechanism for signal transfer.
- Vinyls are always derived from digital files these days anyway! Except for pre-digital prints, studio rate digital is as "pure" as it gets.
P.S. Tube amps are not better than chip amps either, it's a complete myth. Every measurement a tube amp does well at is beatable with a cheaper chip amp design. People think we can do 10 gigabit Ethernet and 4g LTE radio bandwidth with chips, but not accurately control a speaker producing audible frequencies? Please.
Audiophile products have a weird way of capturing people's hearts despite having no merit. I think it's partly because people hear their favorite music through a device and misplace their feelings for the music onto the device, and partly because they are biased towards rationalizing purchases even when they've been conned by some pseudoscience or empty marketing.
They can rather easily discern between the two in double blind tests. The vinyl recording has a noticeable amount of noise in it. Vinyl record players can't exceed the kinds of signal to noise ratios that human hearing can detect. The more ridiculous claims are the audiophiles who pretend to distinguish between ultra high sample rate audio and 44100Hz or 48000Hz sampled audio.
You are comparing the wrong things in the test. Play the vinyl record, recording it in cd quality, then compare that recording with the original vinyl. They won’t be able to tell the difference.
> Because computers operate on the rational numbers instead of the real numbers,
Computers are just devices that perform computations. It's perfectly possible to construct computers that operate on values that are continuous and not discrete. In fact, such computers have been constructed before. They are known as analog computers and were used for things like numerically solving differential equations.
Another application is the Android calculator, which computes over exact real numbers in the sense described by Computable Analysis.
Finally, it's debatable whether analog computation achieves true computation over the real numbers. The presence of physical noise, and the fact that many physical quantities (like electrical charge) are ultimately discrete at the quantum level, implies that it doesn't really work AFAICT.
It seems likely that absolutely nothing is analog. The best theories about physics are all quantized. The only things that we don't have strong quantum theories about are things we barely understand, like time and gravity. And that's just because we haven't found the quanta yet.
It has been a minute since I picked up my information theory book.
However, if I remember correctly, a band-limited continuous signal (a function of the reals with no frequency content above some bound) can be reconstructed without loss from discrete samples, provided they are taken often enough.
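The relevant result is the Nyquist-Shannon sampling theorem: a signal with no frequency content above B can be reconstructed exactly from samples spaced T apart, as long as the sampling rate exceeds 2B (strictly speaking it uses the whole, countably infinite sequence of samples, not a finite number of them):

```latex
x(t) = \sum_{n=-\infty}^{\infty} x(nT)\,
       \operatorname{sinc}\!\left(\frac{t - nT}{T}\right),
\qquad \frac{1}{T} > 2B
```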
Of course! I meant my comment to be tongue in cheek, but saying that some numbers are uncomputable is not incompatible with symbolic math on a computer. Note that for example Pi is computable, despite having an infinite decimal expansion.
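For instance, a short Python sketch of Gibbons' unbounded spigot algorithm generates any finite prefix of pi's decimal expansion on demand, which is exactly what "computable" means here:

```python
def pi_digits():
    """Yield the decimal digits of pi one at a time (Gibbons' spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

digits = pi_digits()
print([next(digits) for _ in range(10)])  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```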
"Real numbers" have some problems. Information density, for one. Imagine a real number, say 0.27777..., represented as a point on a piece of string, with zero being one end and 1.0 being the other. That point represents infinite information density. It also would seem to run afoul of some of quantum mechanics.
Embodiment being a killer feature for AGI seems weird to me. Not only is "what is a body" vague, but it is entirely possible to build a body. So why would it prevent AGI?
We do know that embodiment is a sufficient condition for general intelligence: if we assume humans have general intelligence then it is clearly a sufficient condition. But the question of necessary is more interesting because we have to actually ask what embodiment means.
Is a computer an embodied machine? What about a robot that can explore its environment? What about a simulation with an environment? If no, what makes us distinct? If yes, does embodiment even matter?
To me it is clear that feature space is the more important issue. It is also clear that embodiment helps with creating a more complex and rich feature space. The ability to move around and interact with your environment greatly expands the complexity of the environment.
I think the bigger question is about our ability to create rich enough environments to generate intelligence. Even if we can get machines in bodies, can we get them into the complex and evolving environmental pressures that we experienced over millions of years (without robots living for similar timescales)? It is reasonable to think that at some point in time we'd be able to have that kind of computational power. It is also possible that the learning function is incredibly difficult. With a large and complex feature space there are many local extrema and it may be possible that general intelligence is only possible with a few of these (essentially we can have an estimation similar to the Drake Equation). But overall, I'm not sure there really is any issue that means AGI is impossible. Maybe at current knowledge and computational limits, maybe for all of the foreseeable future! But I don't see any limitations in physics that are killers.
> We do know that embodiment is a sufficient condition for general intelligence: if we assume humans have general intelligence then it is clearly a sufficient condition.
I'm not sure what you mean by "sufficient condition" here. Consider:
"We do know that having a moustache is a sufficient condition for general intelligence: if we assume moustachioed humans have general intelligence then it is clearly a sufficient condition."
You’re rightly confused because the GP formulated a non-sequitur. Embodiment is if anything a necessary, but not sufficient condition for the human brain to develop intelligence. It’s not a sufficient condition on its own for general intelligence; cats are embodied too.
I would not agree that embodiment is a necessary condition for human level intelligence.
The reason I use sufficient is more broad. A cat does have intelligence. Human level? No. Intelligence? Yes. As I explained in the post, embodiment enables a rich feature space, which is what makes it a sufficient condition. It isn't just the simple act of having a body, but the ability to interact with the environment creating a more rich environment. I cannot think of any creature (by definition all having bodies) that doesn't have some form of intelligence. But we need to distinguish "human level" intelligence from "intelligence" and "human level" from "human like." These are different things.
> Embodiment being a killer feature for AGI seems weird to me.
For the record, I have just argued in a different thread, that this stuff is not needed for "AGI", at least not the way I perceive it as being defined. And I believe that way is consistent with others in the field.
But... if you'll allow my use of the distinction between "human level" intelligence and "human like" intelligence, then I will say that I think embodiment is important to the latter. Why? Because I believe a lot of our learning is experiential, and especially the learning that yields a lot of our very basic "model of the world" ideas. Take our "intuitive metaphysics" - there are objects in the world. Objects can't be in two different places at the same time. Two objects can't be in the same place at the same time. Etc. etc. And likewise our "intuitive epistemology" which we use to decide what things are true, and so on. I believe that it will be very difficult (although perhaps not impossible) to give an AI very human like equivalents to these things, as well as "intuitive physics" (things fall over when they're off balance, you can't stand a pencil up on its sharpened point, etc.) without having it "experience" a lot of these things.
Now the really interesting spin on that is whether or not a virtual body in a simulation would suffice to a degree. If you built a really hyper-realistic "fake world" using a really advanced game engine with somewhat realistic physics and what-not, and "put" the AI "in" that world... maybe it would learn some, or most, or even all, of what we learn. I doubt it would be "all", but who knows?
> the distinction between "human level" intelligence and "human like" intelligence
I 100% agree with this and think it is an important distinction. I'm glad you brought it up. One of my hobbies has been reading a lot of linguistics and about languages. There's the whole linguistic relativism topic at hand. When you learn just a little about linguistics you find that embodiment is embedded into our language, as the last sentence gave an example of (the use of "hand," which was likely unnoticed). There are lots of cultural references (especially with Americans) that make things more difficult too. Much of our language is dependent upon this multi-agent factor (I'd argue that language itself was born because we are social creatures). There are general language patterns that arise because of embodiment, environment, and culture. This affects the way we think. So I think this distinction between "human level" and "human like" is an important one. If AGI is not trained in a similar fashion to human growth and history it would have very different thinking styles, wants, and needs. But that wouldn't prevent it from being hyper-intelligent. I'll leave with an overly simplified saying:
> If a lion could speak, I would not understand it.
I think if not necessary it's probably the fastest way to AGI, given that AGI includes "common sense" in order to interact with other humans, ala Turing Test.
In order for a machine to have this "human interface", we will have to share the same environments in which we learn, and simulating the real world is more expensive than building a robot with some kind of software AI that together simulate a human. In other words, it's easier to actually use the world than to simulate it to the same degree of detail (at least until simulation becomes cheap enough).
Given the epic title I was pretty unimpressed by this line of thinking. The argument seems to rely on (among other things) certain presumed implications of “Computers not inhabiting the world” - a comical line of reasoning that cites a lack of a childhood among other absurd reasons for why computers can’t be intelligent. The author assumes that intelligence can only be derived from experiencing the world in full... which seems to imply that paraplegics are inhibited from intelligent thought. Taking that further - are we not capable of intelligent thought during dreams? Fjelland goes on to argue that with no way to fully interact with the world computers are barred from intelligence. Firstly there’s no reason for us to believe that computers will stop broadening the methods by which they connect to, measure, and actuate the world around them. Secondly it seems plausible that a person locked in a room with a radio to communicate with the outside world would eventually acquire intelligence without ever fully experiencing the world.
“Secondly it seems plausible that a person locked in a room with a radio to communicate with the outside world would eventually acquire intelligence without ever fully experiencing the world.”
Strongly disagree. If that was all he experienced in his life, the brain would not host a mind we would recognize as a healthy human.
A human is not a computer. A human life is not decomposable to purely language. This is not a 'spiritual' statement; it's merely an observation that the mind takes in input and processes it in ways that go far beyond what we are able to describe in terms of language. Any modern terminology, at least. The mind needs experiences.
On lack of human contact: they’ve tried this in orphanages. Babies that don’t get human attention generally wither and die.
Are simulated experiences not experiences? What if they are indistinguishable from the real thing? Also see my response below - the claim is not that this guy locked in a room will surely be well adjusted and normal, just that it’s possible (though maybe not probable) for him to acquire intelligence.
> Secondly it seems plausible that a person locked in a room with a radio to communicate with the outside world would eventually acquire intelligence without ever fully experiencing the world.
I'm not sure about that. A person locked in a room with a radio would probably go crazy or recess into some primitive mental state. Also, the article specifically claims that General Intelligence has this requirement of embodiment, not just intelligence as you imply.
However I agree with your first conclusion that AI will eventually include bodies and it won't take long to integrate software intelligence and embodied intelligence to get something greater that could resemble AGI.
Even if I agree that he would “probably” go crazy, I just need more than 0 of my millions of AGI candidates to successfully navigate the perils of their relative sensory deprivation in order to have achieved AGI.
This is an interesting remark, but she still had the sense of touch and was also able to perceive vibrations. She was therefore able to _interact_ with other people. More importantly, other people were willing to interact with her. Also, I believe the fact she was not born deaf and blind but was able to experience the world normally at least for the first 19 months of her life is of extreme importance.
> that AI will eventually include bodies and it won't take long to integrate software intelligence and embodied intelligence to get something greater that could resemble AGI.
Probably but there will be at least two really difficult challenges:
- the hardware part (building the actual body) is far from easy, and still out of reach as of today.
- the “intelligence” needed to control a “body” is super super hard too. Having children gives you some insight into the relative difficulty of problems. Facial/object recognition is learned in months; body control requires years to learn. And we're talking about fundamental problems for living animals, so you'd guess it's been pretty well optimized by natural selection.
It is moving the goalposts: "Turing test", "the real Turing test", "the super real real Turing test"... In the end there will be a test for the presence of a "soul" :)
> Secondly it seems plausible that a person locked in a room with a radio to communicate with the outside world would eventually acquire intelligence without ever fully experiencing the world.
Without fully experiencing the outside world, such an agent will still have gaps in intelligence. Knowing the wavelength range of red light (roughly 620–750 nm), and how the optic nerve processes color, is different than the experience of seeing a red flower in the world for the first time.
Any article making such a claim needs to start with explaining what evidence they have that human brains are not computing devices that can be replicated or simulated.
Because absent evidence that there is something fundamental preventing us from one day copying the structure of a human brain and ending up with a working device, whatever claims they make are hand-waving.
There is quite a bit of evidence that your brain is analog, not digital. There is lots of evidence that it is both chemical and electrical in nature. And while it is capable of logic, it does not rely strictly on any specific form of logic.
In other words, the only working models we have for "intelligence" are nothing like the silicon binary switches we are attempting to use to replicate it.
Analog can be represented in digital: we can mathematically prove this. It's not at all reasonable to say that AGI cannot be achieved because it doesn't use a certain type of material.
So if the brain is an analog computer, then it seems reasonable to believe that we might be able to someday construct an equivalent analog computer (or a digital equivalent thereof).
You make the assumption that any computer we'd try to use to replicate it will inherently be digital and electronic. If we prove unable to replicate a brain this way, nothing is stopping us from using biochemical systems instead.
Analog computers exist, and we have used bio-mechanical systems for computation... Heck, the first "computers" were humans.
Getting hung up on the current preferred paradigm of computation as the only possible one is one of the biggest flaws of the article.
>Any article making such a claim need to start with explaining what evidence they have that human brains are not computing devices that can be replicated or simulated.
If X and Y appear dissimilar, the burden of proof is on whoever would argue they are similar.
If one contends that a brain and a computer, and the functions of each, appear similar, then one is being disingenuous.
Surely the burden of proof is on someone claiming something is impossible? I skimmed the article but I didn't see any support for his argument beyond pointing to the limitations of existing technology, and asserting that these limits were insurmountable.
If I said I can fly by flapping my arms quickly I would assume the burden of proof would be on me to prove it as opposed to others proving that it's impossible
The more analogous claim would be "it's impossible to flap two arm-like appendages quickly and achieve flight," and the burden of proof would indeed be on you for making that claim. (Of course, in that case it would be easy to disprove by pointing to birds or even with computer simulations and toys if birds did not exist).
You can give arguments though. It is conceivable that it is possible to prove things like: a Turing machine (which is an abstract mathematical model, about which we absolutely can prove negatives; see for example Rice's theorem) can never achieve "human intelligence" (if you come up with a concrete definition of human intelligence). From there you can make the statement: any physically realized device that is faithfully modelled, in terms of computational power, by a Turing machine cannot have "human intelligence".
Sure, you can't prove that silicon devices behave like Turing machines, or even that they really exist, but for the sake of this discussion, what does that matter?
We can and absolutely have proven things to be impossible.
What would you consider to be an example of that? And how does that square with the Problem of Induction which is the hole in our entire system of empirical knowledge generation?
You can’t make a Turing machine which can solve the halting problem or an algorithm which can determine whether any given mathematical statement is true.
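The classic diagonal argument behind that fits in a few lines of Python; here `halts` stands for any candidate halting-oracle someone claims to have written, and the concrete oracle below is just an illustration:

```python
def make_counterexample(halts):
    """Given any claimed oracle halts(prog) -> bool, build a program it misjudges."""
    def paradox():
        if halts(paradox):
            while True:   # the oracle said "halts", so loop forever
                pass
        # the oracle said "loops forever", so halt immediately
    return paradox

# Try it with a (necessarily wrong) oracle that claims everything halts:
always_yes = lambda prog: True
p = make_counterexample(always_yes)
# always_yes(p) is True, yet p() would loop forever. Whatever oracle you
# plug in, the constructed program does the opposite of the oracle's verdict.
```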
You can prove that 2+2!=5. You could even say that, given the rules governing math, it is ‘impossible’ that 2+2=5. The domain, however, is synthetic and composed of a system of axioms and rules.
If I change the underlying axioms and rules, I could certainly prove that 2+2=5, just as I can prove that the sum of a triangle's interior angles exceeds 180 degrees, or that a number multiplied by itself can equal -1. (Redefining what a straight line means for the former, and inventing an imaginary number system for the latter.)
Proving what can and cannot follow given a set of rules, however, is not what philosophers mean when they speak of impossibility in the real world.
Yes, to apply that to the real world, we have to use some assumptions, like: the universe isn't a giant trick, the sun will rise tomorrow, we don't live in the Matrix, etc. However, given the context of this discussion those are fairly reasonable assumptions.
I'm not arguing that it can't. I was merely pointing out the proper 'burden of proof.' The article was criticized for failing to demonstrate that something can't be done... that's not fair. The burden would be on the proponent of the proposition that a machine can attain general AI. That's all.
Perhaps there could be general AI... I'm not saying it can't be done. I would point out, though, that IF it is to be done, it certainly won't be by copying a brain. Nobody even knows how the hell the brain works...
Maybe you are stuck on the notion of a computer as a silicon chip. Biological entities are just a special case of machine ergo it is already proved that a machine can attain general AI.
I contend the brain by definition is a computer, and so that any claim that we can not produce an artificial one implies that we'll forever be unable to replicate a process that is repeated over and over by nature through simple biochemical processes.
A duck is an entire organism with a known genetic code, suspected lineage in the tree of life and defined characteristics.
A brain is an organ within another known organism in a differing position in the tree of life with different characteristics.
A computer is a device which takes in input via some means and uses that input, together with its prior internal state, to transform the elements that represent its internal state, producing output that depends on both; it is arranged in such a fashion that an actor with sufficient knowledge of its functioning can manipulate the input in order to achieve desired output.
A programmer is such an actor.
In such a context it appears that a brain is merely programmer and computer and AI is merely the achievement of a sufficiently complex and capable system as to represent the same thing in silicon or whatever medium you prefer.
Arguing that such is impossible seems to be merely a failure of imagination.
You can imagine a 'thinking' machine all you want. It doesn't change the reality of the situation: no digital device comes close to thinking.
Some find this frustrating. So, they reduce the idea of a brain to a digital computer.
It's a confusion of the Model with that which is being Modelled.
A photo of you is a representation -- or model -- of you. But it would be silly to become so enamoured with the photo that you begin to think YOU are a representation of the photograph.
> You can imagine a 'thinking' machine all you want. It doesn't change the reality of the situation: no digital device comes close to thinking.
Doesn't it though. Someone from the 1800s would probably absolutely think it does. We have computers that can identify what is in a photo, computers that can do complex math problems (pretty sure that historically, the ability to do logic was considered first and foremost what made humans intelligent and not simple "animals"; it's only recently, with the rise of computers, that this has taken a back seat), computers that can translate between languages, etc. That's not the same as being human - there is no sense of self or independence of action (nor are we anywhere close to having that) - but we've made amazing, almost inconceivable strides in only 100 years. So I think it's unfair to say computers don't think at all.
>Doesn't it though. Someone from the 1800s would probably absolutely think it does.
Convolutional networks and neural nets in general are cool, but hardly magical. It’s just glorified curve-fitting. A person from the 1800’s would not be all that impressed with the idea. (What’s impressive is the bread-and-butter of it... namely, the technology that enables the millions of simple calculations per millisecond.)
I'm not redefining anything. I'm applying reasonable definitions of a "computer" to the brain.
Indeed, the term was originally used about people - our electronic computers today have the name because they were taking over functions carried out by human brains.
The difference is that a brain is provably capable of computation, while a brain is not provably capable of flapping its wings and migrating between Canada and Mexico every year (without the help of the meatsuit it's driving, at least).
The word "computer" was a profession before it was a machine.
Programmers effectively emulate a compiler in their minds when they are programming. I don't see why it's so hard to accept that the brain has many "computer" capabilities, even if it's implemented with different materials.
Appearance is vague to the point of being an argument without merit. If someone can SHOW something to be dissimilar in kind, not degree, then it would fall upon the recipient of that argument to refute it. It is not enough to point and not explicate. One could point at our pre-civilized ancestors and point out that hunter-gatherers can't fly to the moon or build a computer and would never do so. One could go further back yet, to the prior species that would someday become humans, and point out their present limitations and claim that one could never produce the other. Both assertions would be obviously wrong only because of the advantage of hindsight.
I do not accept that there is a difference between computation and thought without some meaningful definition of thought.
> do not accept that there is a difference between computation and thought without some meaningful definition of thought.
If I know everything about the nature of X, yet I know very little about the nature of Y, it would be illogical to say "I will assume Y is like X until proven otherwise."
I think computation as a model of cognition is a perfectly reasonable hypothesis that will bear fruit and at least it IS a hypothesis instead of hand waving.
It is worse to imagine that a physical process that exists wholly in the physical world, and that happens in the world within reasonable parameters, cannot possibly be engineered to happen in a controlled fashion.
Such a claim would require a major shift in physics and math, and yet we are expected to accept it purely based on intuition, without even a compelling theory of how it does work.
OP should have said "counter-evidence". There is plenty of evidence the brain is basically an information processing organ. It's integrating sensory data that you can interrupt and hack (i.e. visual illusions, ghost limbs), also damaging certain parts reliably affects us in the same ways like going blind or losing motor control.
It reduces to the following:
Given X, if I could create a Y such that Y=X, then X would equal Y!
That adds nothing.
It isn't a given that it is even possible to 'copy' a brain. Why would one think it is? Does not quantum mechanics preclude the possibility of copying something perfectly at the atomic scale? Indeed, even if you could perfectly copy the 'material' aspect, you will still need to copy the configuration of electrical charges that existed at the moment of copying. This too is, from what I gather from QM, inherently impossible.
Does this preclude general AI? No. But it does demonstrate the absurdity of arguments that begin, "if I could copy the brain atom by atom, then..." as such a thing is impossible.
I think you missed the main point: replicated economically. Given an unlimited budget, I think simulating the human brain might be possible if one could use a powerful enough computer for every neuron.
The article title makes an unqualified claim about realisation, and so it was what I responded to, but the article itself also makes the same strong claim:
> A closer look reveals that although development of artificial intelligence for specific purposes (ANI) has been impressive, we have not come much closer to developing artificial general intelligence (AGI). The article further argues that this is in principle impossible, and it revives Hubert Dreyfus’ argument that computers are not in the world.
This is not an argument about cost - the article argues that it is "in principle impossible".
Any argument about cost I think is also irrelevant: We know from the existence of the brain and how a brain is produced that it is possible to produce one relatively cheaply via biological processes.
It seems highly unlikely that our ability to produce an artificial brain will not eventually approach the cost of growing one, because the "worst case" scenario is for us to find ethical ways of growing brain matter via biological processes and hook them up to computers, and it would seem unlikely that we will not eventually find cheaper ways of doing so than growing full mammalian bodies with it, and that we can not find any ways of optimising the process.
How the brain grows and develops in humans is not very well understood. We mostly use animal models to understand the nervous system, not humans. As an example of why this is problematic: mouse embryos pretty much turn inside out at day 3, and are the only known mammal to do this. All others, including humans, don't. Why and how are unknown, as are the effects on long-term development. Basically, mice are better subjects than zebrafish, but not very good stand-ins for humans.
The limiting factor is not budget or time, but 'stomach'. How elastic are your ethics?
It's not very well understood, but the process is known to exist and work. To assume it can't even in principle be replicated would be an extraordinary claim. Yet the article suggests artificial intelligence can not exist even in principle.
I mean, from a biochem perspective, it's flabbergasting that intelligence exists at all. The brain is so noisy.
To me, it's not far fetched to say that you can't make it happen again. Though I think it's technically possible.
We're missing something big with intelligence. We know neurons and synapses a little bit. We know a few dozen neurons a little, though its complexity is crazy big.
But that gap between a normal brain down to a few dozen neurons is just boggling right now. There are just so many questions that need to be studied ethically.
It may not be possible to answer them all in a reasonable time window.
We know it happens again millions of times a year just for human brains. As such the complexity of the brain is irrelevant - what is relevant is the complexity and reliability of the machinery that constructs it.
We know the volume of the machinery that constructs it, which allows us to compute an upper bound on the informational complexity of a system capable of constructing human brains.
To me it is totally unreasonable to suggest we will never be able to at the bare minimum mimic that process and grow whole brains.
To me most of the opposition to the idea of artificial intelligence seems to come down to people assuming we're bound to only try to do this with software on a digital computer. But if that proves fruitless, there's no reason to assume we won't try analog systems, or if all else fails try biochemical systems, all the way down to genetic manipulation and tricking cells into growing into brains for us to hook up to computers.
Counter perspective: The process by which the brain produces intelligence might not be complex at all. Maybe it is just a simple algorithm which, when applied en masse, produces it. See for example how the neocortex is made up of a vast number of cortical columns which all share basically the same architecture.
A few generations of biologists have tried to reason it out. I think it's safe to say that whatever is going on is either very complex or we just do not have the right tool to study it right now.
Something really new, like radios were to communication, or steel hulls to shipping, is needed in bio to get things moving apace. The tools we have are really great, but they are a bit slow it seems.
Also, the neocortex is somewhat stereotypical, but those long range projections that come in/out at every layer are the things that make the brain so hard to understand. Everything is flying everywhere all at once.
Oh yes! Developmental biology is just bonkers tough.
Take all the hard parts of bio normally, and now add in a ton of mitosis, long range movement of cells, hard to detect chemical gradients, cell-to-cell junctions, and pure random chance. All just going bananas fast for 'normal' bio processes.
It is a crazy tough field to get work done in. No wonder no one uses mammals to do anything.
Ah, but this point has yet to be proved to even a small degree. Neural models of very simple organisms do tend to show that adequate connectome models appear to be enough to produce similar behaviors, but we know for a fact that there's a ton of biochemistry going on in the brain that we will not be able to replicate with just the connectome.
If I were a betting person, I would wager that the connectome is enough to get something like intelligence, but that without the biochemistry the entire system is unstable in some way. The chemistry that we see in the brain is far more complicated than what would be required to minimally sustain the cells. There's a reason all of this chemistry is going on, and unfortunately I think we're going to find that intelligence just cannot exist as we know it without the chemistry. If that's true, then we're talking hundreds of years before we have computers powerful enough to model the chemistry at the requisite level.
The author's main point is bunk if his only defense of that point is that humans are smart.
The "learning" problem is solved. We've developed numerous algorithms to take a neural network and have it learn a task. We've solved specific problems, including image recognition, and text synthesis. Companies have figured out how to make sensory equipment including cameras, microphones, touch sensors...
Just because no one has managed to put together all the pieces doesn't mean it will never happen. People for thousands of years did not think it was possible to build a flying machine, until someone did.
The Halting Problem definitely puts bounds on what an algorithm can know about another one. At the very least this should back up the empirical evidence for why analytical approaches haven't gotten very far yet.
I don't know why you think the halting problem is relevant here. The halting problem equally applies to human computation, and it does not preclude our existence.
I came to this article expecting a logical proof of why AI was impossible, but instead found the following tautological argument: hard AI, which I define as intelligence with tacit knowledge, which in turn I define as knowledge that cannot be run in algorithms, is impossible to run in algorithms.
Disappointing.
My counterpoint: is there anything that cannot be run in algorithms? The way I see it, the only thing stopping me from simulating every atom in a brain is computational power.
> is there anything that cannot be run in algorithms? The way I see it, the only thing stopping me from simulating every atom in a brain is computational power.
I am not sure I understand the specifics of what you mean by "is there anything that cannot be run in algorithms?", but there are plenty of things that are not computable or decidable. These are theoretical constraints. There are also practical constraints that effectively lengthen the list of such things.
I think the neighboring thread nailed it. Tacit knowledge boils down to Cartesian dualism, and like the article claims there is something special and unreachable by computation. I agree this is not the case, everything points to our brains being information processing organs.
The primary thing stopping you from simulating every atom in a brain is a lack of knowledge about atoms and the brain. AFAIK we can't even simulate hydrogen and oxygen atoms with enough detail to get the emergent properties of water.
If you accept Dreyfus’ argument that a computer can’t acquire some kinds of knowledge without a body, there’s a fairly obvious workaround. Give it a robot body. And in fact, many research projects are doing this.
The Dreyfusians may retreat to their motte and say, well, a robot body can’t give it the experience of walking barefoot through the grass, so it can’t be fully general because there’s something people know that it can’t.
But it’ll surely know things we can’t, and we’ll have to admit that humans don’t have fully general intelligence either. We just think we do because we can’t think of the ideas we can’t think.
Every pronouncement that AGI will NEVER be achieved is equivalent to saying that AGI does not exist in the universe. So, I guess this biological AGI entity writing this note does not exist.
Right. Because if a carbon based / biological machine like a human can be an AGI, what reason - in principle - is there to say that the same can't be done with a different (perhaps silicon centric) machine? To suggest otherwise feels, to me, like indulging in a form of mysticism which holds up human life as something being "beyond" science.
Or, at the very least, that we can't artificially create a carbon-based machine capable of general intelligence?
Like, even if there's something magical about neurons that can't be replicated through conventional electronics, we could certainly put a bunch of actual neurons in chips and use those instead.
I'm wondering if that's just the philosopher's form of thinking "there is no way a machine could ever do my job", which exists in virtually all professions.
I think human intelligence can't be recreated, but this is not necessary. It's good enough if artificial intelligence is general enough to solve problems autonomously. I imagine AI as a different sentient species. A species which doesn't have instincts about food, excretion, reproduction and avoiding being eaten by predators. AI will always be different from humans because it is not useful to recreate all human instincts.
What is "religious" is this inclination to treat human intelligence as this magical thing that works outside the laws of nature, as if our neurons and our brain were something other than just chemistry doing very complex things.
That would be quite an extraordinary claim that requires equally extraordinary evidence.
Life in general and your brain in particular may very well be more than "just chemistry". We really don't know.
What we do know at this point is that no amount of "just chemistry" has ever managed to create even the most rudimentary form of "life" from inert chemicals.
And the only working models we have for "intelligence" at this point are inextricably bound to life.
Don’t beat around the bush. Are you implying life is supernaturally sustained? That would be quite the claim.
If not, then the history of science strongly suggests we can eventually learn the mechanisms of life and intelligence and then find a way to imperfectly yet sufficiently reimplement them.
Are you suggesting life is supernaturally sustained?
No. I am simply suggesting that creating life (and intelligence) from inert materials may involve more than "just chemistry", in the same way that lead cannot be turned into gold by "just chemistry".
I welcome knowledge and proof to the contrary but thus far, there is none.
Gold can be made from lead (and other inputs) using knowledge that is generally classified as nuclear physics. It’s not very practical or economical, so no one does it — but it can be done. It’s true that this isn’t “chemistry”. Not sure if you were making a deeper point.
Perhaps to truly understand life we need to look deeply at physics, geology, fluid dynamics, and many other specialized fields, in addition to chemistry. And maybe we will need some new discoveries in these areas that have not yet been made. Is that all?
I don’t think there is really much contention between that and the idea that life is “just chemistry”. I would bet most people saying that are totally fine with what I wrote above and are likely just trying to be concise with their words.
Unless you mean more than you are plainly saying, this seems like splitting hairs about the taxonomy of scientific understanding. Understanding is understanding, regardless of how you categorize it.
Ok, fair enough. I was pushing back so strongly as it seemed you might have been implying that there is a certain "magic" we can never investigate or understand, which I find to be a very unjustified and regressive position.
Much of our progress in medicine, science, and technology has been made due to the idea that the world around us operates in an orderly fashion and can be investigated and understood.
We have lots of evidence of how the processes of life are chemically driven: Photosynthesis captures energy from sunlight, metabolism of sugars produces ATP which is then used in a variety of ways by the cell's proteins, the DNA->RNA->Protein pipeline builds proteins from the sequences of base pairs stored in a cell's DNA, and DNA copying enzymes pass that information from generation to generation. Since you probably already know about that stuff, I guess I must be misunderstanding what kind of proof you are looking for. What sort of discovery would count as "proof" to you?
What sort of discovery would count as "proof" to you?
The ultimate proof is actually doing it. Start with nothing but inert materials and produce a new, unique form of living organism using "just chemistry".
I'm not saying it can't be done, I am saying that thus far, it hasn't been done. And the reason it hasn't been done might be because "just chemistry" is an overly simplistic description of what is needed to actually achieve it.
Let's say he believes that. Even then, his belief would be as understandable and justifiable as yours. Wagging a finger and saying the history of science suggests this or that is just that: belief. Just because something happened before, you think it'll happen again and again. I do sympathize with your POV and even agree with it, but let's not beat around the bush, as you say, and let's call our POVs what they are: faith in science yet to come. Ours is no more righteous than the other.
I very strongly disagree that we can't label some beliefs as more compelling and reflective of reality than others.
I am happy to admit that doing a good job of differentiating is tricky business indeed, and that we all make many blatant errors and are prone to developing distorted ways of seeing things.
Nonetheless, I do believe that humanity has acquired some real understanding of the world and will continue to do so.
No amount of "just chemistry" has yet managed to create...
Keep in mind that no amount of rocketeering has ever put a man on Mars, but that hardly constitutes a proof, or even serious evidence, that it is impossible.
I haven't yet managed to fit inside a toothpaste tube. But you believe in the impossibility of this, right? There is no absolute belief or disbelief; it is all a continuous gradation. Some things we can bring ourselves to believe in if there is sufficient cause. I'm not arguing for or against the original point, just against your argument and condescension.
Stating that it is "just chemistry" has no implications on whether it can be done, which was the claim being made.
The fact that it has not been done with "just chemistry" is no proof but it does tend to skew the probabilities toward the view that substantially more may be required.
> Life in general and your brain in particular may very well be more than "just chemistry". We really don't know.
That claim has roughly the same epistemological status as Last Thursdayism ("the world was created last Thursday in a state where we have memories and see evidence of past events but those events aren't actually real.").
> What we do know at this point is that no amount of "just chemistry" has ever managed to create even the most rudimentary form of "life" from inert chemicals.
How do we know that? Are you just talking about humans practicing chemistry?
The emerging field of quantum biology is fairly new; there are numerous processes in neural networks for which our current understanding of chemistry and mathematics is not sufficient.
For example, how the transfer of functions emerges after hemispherectomy (removal of half the brain) is, to the best of my understanding, unknown.
If you agree that the universe is governed by the laws of quantum mechanics, that these laws are local, and that any given region of the universe can be described with a finite dimensional Hilbert space (informally speaking, it contains a finite amount of information, which is implied by the Bekenstein bound), then yes. And of course there are some theories where these assumptions don't hold where the answer is still yes. See this paper for more details: https://arxiv.org/pdf/1102.1612.pdf
(There are a few more technicalities which amount to forcing the simulator to solve an uncomputable puzzle before we tell them the laws. Let M be a Turing machine whose halting behaviour is undecidable. Even a spin-1/2 system might be "uncomputable" if all I tell you is that its Hamiltonian is (1 0; 0 1) if M halts and (1 0; 0 2) otherwise. Since a spin-1/2 system is one of the most blatantly computable things out there, this objection doesn't have much force.)
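For reference, the Bekenstein bound mentioned above caps the entropy (and hence the information content) of a region by its size and energy; in LaTeX notation it reads

    S \le \frac{2 \pi k_B R E}{\hbar c}

where R is the radius of a sphere enclosing the system and E is its total mass-energy. Finite R and E therefore imply finitely many distinguishable states, which is the "finite dimensional Hilbert space" assumption in the comment above.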
I'm not exactly sure what you mean by "binary logic," but there is the Church-Turing-Deutsch principle which states that a "universal computer" (essentially a working quantum computer) can simulate all physical processes. But I suspect it's not a "proof" that would satisfy skeptics.
It's the strong Church-Turing thesis, so you're correct that sceptics just say they don't believe it.
Actually it's not uncommon for AI "sceptics" (the sort of people who write these articles not just people who suspect we'll have another AI Winter) to just not accept Church-Turing at all, or to insist upon squinting at it in a peculiar way that renders it tautological.
I'm consistently disappointed that people who feel they're quite sure general AI isn't possible mostly have bad intuitions about Computer Science. Twice so far I've been recommended books by sceptics "which will show you why you're wrong" and been disappointed with the poor quality of argument deployed which sometimes is little more than a show of incredulity or a resort to mysterious dualism.
A more robust attempt at this sort of argument was put up by Stevan Harnad, who taught an undergraduate class I sat in on many years ago now (one of the privileges of a post-graduate is that they're entitled to attend relevant undergraduate lectures and so I did). Harnad thinks† you need to build a robot because the intelligence will need some means to experience the universe for itself. But Harnad doesn't disbelieve that general AI is in principle possible, he just thinks our present methods can't get there.
† In general, people who've made a career of such things are very careful with words, and so I have doubtless mischaracterized (or even misunderstood) the details; you should blame me for that, not Stevan.
Is your problem digital logic? Analog computers predate digital computers. I doubt that's the sticking point but if it is, it's not an impossible hurdle.
Someone won an award for mathematically proving that a simple general purpose computer could be built from a handful of color coded wooden switches powered by a hand crank and programmed by punched tape. This simple analog device could run any possible binary logic program that could ever be created --- albeit much slower than a modern digital computer.
In other words, this rudimentary analog logic playback device is every bit as "smart" as any other binary logic computer will ever be --- which is to say, not at all.
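As a toy sketch of how little machinery that kind of programmable "playback" needs (my own illustration, not the award-winning construction described above), here is a Turing-machine-style interpreter whose entire engine is a table lookup in a loop; the example rule table just flips a string of bits:

    # The whole "execution engine": look up (state, symbol), write, move, repeat.
    def run(tape, rules, state="s", blank="_", max_steps=1000):
        tape = list(tape)
        pos = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape[pos] if pos < len(tape) else blank
            write, move, state = rules[(state, symbol)]
            if pos < len(tape):
                tape[pos] = write
            else:
                tape.append(write)
            pos += 1 if move == "R" else -1
        return "".join(tape)

    # Example program: flip every bit, halt at the first blank.
    rules = {
        ("s", "0"): ("1", "R", "s"),
        ("s", "1"): ("0", "R", "s"),
        ("s", "_"): ("_", "R", "halt"),
    }
    print(run("10110", rules))  # -> 01001_

The point being: the interpreter itself is dumb; whatever "smarts" there are live entirely in the rule table it plays back.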
A computer is not just a playback machine like a mechanical loom.
What is it then? Magic?
The fact that you don't understand how it works doesn't make it magical.
Every program ever written for your desktop computer was ultimately "compiled" into a long series of simple binary logic operations that your computer blindly repeats at very high speed. For every set of inputs, it recreates the same output --- kinda like a loom mechanically weaves a particular pattern at high speed once it has been "programmed" using a set of punch cards not unlike those once used to program computers.
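As a toy illustration of that reduction (not the output of any real compiler), here is integer addition for non-negative numbers expressed purely in terms of AND, XOR and shifts, the kind of primitive logic operations the hardware actually executes:

    # Addition built only from bitwise logic: XOR gives the sum bits,
    # AND + shift gives the carries; repeat until no carries remain.
    def add(a, b):
        while b != 0:
            carry = (a & b) << 1
            a = a ^ b
            b = carry
        return a

    print(add(21, 34))  # -> 55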
Extraordinary claims require extraordinary evidence. In light of our limited understanding of how the brain works, claiming it is not possible to recreate its functions inside a computer program is a much stronger claim than assuming that it’s possible and we just don’t know how. Unless we see a mathematical proof that it’s impossible, I think people who claim that it’s impossible are the same as people who claimed that only birds can fly and that it is impossible for machines to roam the skies.
This is just flipping the burden of proof around. What is your proof that it can be done? "But planes" is not proof.
I'm of the opinion that we don't even know what questions we are trying to answer.
What is intelligence? What is consciousness?
Can you explain them in concrete terms that are acceptable to everyone and don't have any notable exceptions? That's going to be the first step to being anywhere close to realizing general AI.
The strong indication that it can be done is that brains exist and grow guided by machinery that is computationally limited by the fact that it takes up exceedingly little space, and by extension we can put an upper bound on the computational capacity of that machinery.
That there is a tough engineering job involved in reproducing such machinery so we can grow brain matter at will, and that we'd prefer our computers to be less squishy, does not mean it is likely to be impossible to reproduce.
"It exists, therefore we can make it." I don't find that line of thought compelling.
Although you can argue that we already do it because we, as a species, reproduce.
It could be quite possible that our level of intelligence is not reproducible by the means we have chosen. It could be that there is something inherent in our entire composition that allows for our particular expression of intelligence and consciousness.
I'm not saying it's impossible. I am saying that first, we don't even know what we're looking for. And when pressed as to what that is, the answer comes down to basically, "You know..." while gesturing broadly.
It's indeed quite possible it is not reproducible by building digital computers out of silicon, sure, but that does not prevent us from picking another way of building them.
I'm very much sympathetic to the idea that there could be aspects of the specific structure of a brain that are necessary to produce intelligence, or at least necessary for sentience. But that does not inherently make it impossible to reproduce. For it to be impossible to reproduce a brain, there would need to be something in the cellular machinery used to construct a brain that we cannot pick apart and replicate.
As a reminder, the argument made in the article is that it is impossible even in principle, not that it is impossible to do it using a digital computer of typical current architecture.
We know that machines with AGI are possible - humans are such machines.
Maybe you could discuss whether classical computers could achieve AGI, but I think overall the quest is to build machines with AGI, not necessarily in the form of classical computers.
Humans are animals. Part of the paper defends the notion that a lot of human intelligence is tacit based on being embodied in the world as a living organism. The idea that humans are biological robots is only one that came about as metaphor when we created machines and some similarities were noted.
From what I understand, being "embodied" doesn't necessarily imply movement, but I'm afraid even I don't understand it fully enough to say why a computer isn't embodied and an animal is.
What does "embodied in the world as a living organism" even mean? Wouldn't being embodied as a machine be sufficient? Living organisms are "just" machines. I also don't see any reason why embodiment should be a prerequisite for thinking.
> We know that machines with AGI are possible - humans are such machines.
Are we? How do you know this?
Define "general intelligence". Then prove we have it. Then demonstrate that this definition that includes us doesn't include something commonly accepted to not have the same qualities.
Also, if we don't have "general intelligence", what does that mean?
To claim otherwise is to claim that there is some barrier beyond which scientific inquiry is not allowed to cross or is incapable of crossing. The epistemological status of such a claim is roughly equivalent to explaining something about the Universe by saying "it's because God did it."
No. It is not necessarily a claim that it can't be crossed, it is a claim that we currently lack the knowledge and resources needed to cross it.
We haven't figured out how to travel faster than the speed of light. That doesn't necessarily mean it can't be done. But we currently lack the knowledge and resources needed to do it.
"Machine" comes from "mechanic", and there is doubt that general artificial intelligence can be achieved on a mechanical base.
But if you define any complex system as a machine, then yeah, sure, we humans are machines; therefore GAI can be achieved with machines. A tautological proof.
> Most of us are experts on walking. However, if we try to articulate how we walk, we certainly give a description that does not capture the skills involved in walking.
Because it's largely automatic for us: we need to know where to go, but not how.
With AI this means we can automate more than we need to genuinely understand; oftentimes an approximation gives a good enough solution.
We humans often fool ourselves about how capable we actually are, and yet, without training, in a dangerous situation we simply act on instinct and impulse.
The author of the linked article asserts that computers cannot achieve general intelligence by redefining general intelligence as precisely and only the specific experience and behavior of human beings.
The author has incorporated the argument's conclusion into its premise: a tautology.
I’ve always been quite skeptical of philosophers arguing that a given potential scientific achievement is impossible; logic alone cannot be used to answer practical questions like these, much though philosophers would like to claim this field.
On to concrete criticisms, most of this article is irrelevant; you can skip straight to the last few paragraphs as the arguments there are largely unsupported and stand on their own:
> Conclusion: computers are not in the world
The main thesis of this paper is that we will not be able to realize AGI because computers are not in the world.
I think it’s a valid criticism of the current breed of ANI algorithms, and is a problem that we will need to address (though it might turn out to be less of a problem than the author thinks). But to claim that computers will _never_ inhabit the world, and are logically incapable of doing so, seems trivially refutable to me.
Why can’t an AGI have a childhood wired up to a robotic body, where it interacts with people and the world, thereby learning a tacit model of physical and social causality? Currently this might be science fiction, but to say it cannot happen in a hundred, or a thousand years seems arrogantly certain to me, and to claim it is logically impossible is to be epistemologically confused.
I disagree with Dreyfus's view about strong AI. It's like saying that all lifeforms in the universe are carbon-based because we are carbon-based lifeforms, but that's not necessarily true. I am a huge fan of bio-inspired engineering, however, and I think imitating/simulating "growing up" could be a reasonable approach to human-like intelligence. This phase of life is key for our understanding of the world. We define and refine our values and evolve from a blob of cells to a conscious being by observing our surroundings and acquiring huge amounts of knowledge. If this concept (refined by evolution over the last million years) is good enough for humankind, it is certainly good enough for human-made intelligence. The biggest problem here is efficiency and complexity. We still don't know all the mechanisms of our brain, and we are not capable (yet) of simulating 100 billion neurons, each connected to up to 30,000 other neurons. That's the main problem here!
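To put rough numbers on that last point (back-of-the-envelope only; the neuron and connection counts are the figures quoted above, and the 4 bytes per connection is just an assumption):

    neurons = 100e9              # ~100 billion neurons
    synapses_per_neuron = 30e3   # "up to 30,000" connections each
    bytes_per_weight = 4         # assume one 32-bit number per connection

    synapses = neurons * synapses_per_neuron            # 3e15 connections
    memory_pb = synapses * bytes_per_weight / 1e15      # ~12 petabytes
    print(synapses, memory_pb)

Roughly twelve petabytes just to store one static number per connection, before simulating any dynamics at all, which is a decent illustration of why efficiency is the sticking point.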
Why do we think computers can achieve intelligence? Where did this idea first come from? It sounds almost childish if one traces the history and the wild misconception it must have emerged from. The idea maybe dates to Turing? Maybe even Leibniz? What was the idea based on? On seeing that some machines can "behave", i.e. do things a human does? And then thinking that ultimately a machine might be able to do everything a human does? That a human is nothing but a "complicated machine", in the sense that we can formulate complex but finite rules for its behavior? Maybe even simple rules which, when applied on appropriate substrates, would lead to the "emergence" of such behaviors? That is a BIG assumption to make, really. Intelligence and cognition here are treated as lists of rules which can be applied to any substrate. An electronic computer is just one substrate; I should also be able to do this on a sufficiently giant mechanical computer.
Even physics seems to have this rule-obsessed assumption. Can we really simulate the universe AS IT IS? Sure, we can simulate some parts of it. But is there really a theory of everything? After such a theory we would need to know nothing more; physics would be pretty much done with. This theory would explain all observations we make in the universe, past, present, and future.
Even in our search for the smallest particles can such a "bottoming out" ever take place?
If I get Level-5 driving, do I really care how it was achieved? In order to get Level-5 driving, the car must be able to identify a construction worker directing traffic, versus a random homeless guy, drunk, directing traffic into the current water-main construction pit. Does that require general AI? Given enough examples of drunk homeless people directing traffic into water-main construction, maybe we don’t need it. It’ll be 100 years into the future before we’ve amassed that level of knowledge, but maybe it can happen.
There is a better explanation: there's no practical point to GAI.
Humans are automatons - so we see a 'single unit' of intelligence.
The Internet is a vastly connected system.
A 'basic robot' in a factory in China can have access to 'all the world's information'.
The power of 'masses of data, services, systems' all combined, means that 'the Internet' itself, in the broadest sense, will be much more intelligent than any GAI anyhow.
Example: Is Siri 'AI'? We don't think of 'Siri' as a thing, rather a service, a front end to a lot of things.
Well - 'Siri' is going to get really, really smart and be able to do a ginormous number of things in the future, including have 'human level' conversations with you, predict your needs and moods. She'll be talking to a billion people at once! Isn't that even 'beyond' GAI?
The factory will be able to take a design, command robots to prepare, place orders for parts, design work schedules for humans, prepare shipping, anticipate problems. The factory is waay smarter than a human, is it 'GAI'?
Siri, the Factory, your car, the grid of traffic, the financial system, distribution networks - it's all working together to do things utterly beyond any individual 'GAI'.
And as these things develop, there really never is a real economic driver for a true, atomic-style 'GAI' like you see in the movies. There's no reason for a company to spend $500B building a 'Data' from Star Trek - because everything he could do would otherwise be performed much more efficiently, cheaply and intelligently by a system or group of systems oriented towards those tasks.
A more useful question is "what can't machine learning do"? Each generation of AI technology has solved some problems, then hit a wall. (I had the unfortunate experience of going through Stanford CS in the mid-80s, when expert systems had hit a wall but the faculty was in denial about that.)
Is there some way to get "common sense", defined as "knowing if something bad is going to happen before you do it", from machine learning? So far, no. DARPA is funding work on it, though.[1]
The Allen Institute even has a competition.[2] Both are verbal, though; they work on text statements.
It's a very unconvincing argument that culture, bodies or childhood are required for intelligence. Moreover, that computers could not have any of them (or functional equivalents, if in fact they are of any functional use at all).
I don't buy that the single data point that is Human intelligence has much to say about the possible bounds or limits of general intelligence.
Computers inhabit a world. They have interactions. Seems sufficient to me. As for culture - they have enough learning material of human culture, and it's not impossible to train multiple AI intelligences at the same time to create a culture of their own. Surely at some point in history Humans developed their culture from nothing? As for childhood - attempts at AI already have training periods where they are given training data, made more plastic and develop against simple situations before moving on to more complex situations.
The (also terrible and wrong) Chinese room argument is more convincing.
I wonder what GPT-3 [1] has to say about this. While it can't exhibit reasoning, I wouldn't bet that it is completely off the path to getting there. Call me crazy, but I think we are about a year away from the first general reasoning algorithms.
Whatever the criticisms of this paper may be, I get what it and Dreyfus are getting at. Even Hofstadter had a lot to say about this, but the guy was largely ignored after his first book; I guess he has lost faith in the entire enterprise.
One problem with computer scientists and AI researchers is the level of arrogance that they display about their state of knowledge.
It would have been good for us to engage with the larger body of philosophical work (which has a long tradition of thinking about how the mind works, how meaning and cognition emerge, and how such embodied cognition interacts with the world) and then conduct empirical research trying to verify a philosophical paradigm, rather than just going at it blindly by sometimes assuming the mind can be reduced to logical systems and sometimes trying to simulate a reductionist and incomplete model of neural networks.
Imagine you tried to replicate your PC by creating a simulation of the CPU. Sure, you could probably simulate some of the functions without too much difficulty, but the value of the PC is not in its CPU alone, it's in the interaction of all the components: the screen, the RAM, the disk, the keyboard, the GPU. Isolated, these components provide a fraction of the value they do when they are combined, and simulating the whole PC is at least an order of magnitude harder than simulating only the CPU, because you also need to simulate the environment it's in.
I think this is what the author of the article is trying to say, not that it’s totally impossible just that we are nowhere near doing what needs to be done to effectively recreate the highly emergent properties (strong AI) of the human body.
> the value of the PC is not in its CPU alone, it's in the interaction of all the components: the screen, the RAM, the disk, the keyboard, the GPU
That's fine. We can (and do) do those too. Pretending otherwise is weird. Authors of these kinds of articles always seem to start from positions that woefully misunderstand or misrepresent present scientific theory and capability. Like there's this weird information barrier where historians and philosophers of science regularly skip the part that involves understanding what's going on even though they're still supposed to talk about it.
> you also need to simulate the environment it’s in
You literally don't for the same reason that we don't have to simulate our own environment. The simulated me can just be in the same environment that I am. The environment isn't going anywhere.
Well, of course. The line is the body. The human body is a line of ergodic behavior stemming back to the origin of not merely the species, but of life itself: a chemical that said "I am that I am" and became life by continuing form or not continuing form. You and I will do the same. The form is ultimately shaped by the environment. What made the environment? Now enter Human. What of this form made it so unique? The recognition of ergodic behavior itself! Now the beast could see the result of its behavior and so birthed prophesy, nightmares, & dreams. So that it might continue. Or not. I think ultimately, we're coming at the problem of AI from the wrong end, if we're attempting to mimic biological intelligence. I think the direction AI is taking has tremendous applications that will reshape humankind, but I do not think it will be comparable; it will be something new: the son of Man, not alive, not dead, all-seeing, judging and impartial to emotion. Either that, or we will place upon it's throne a sword and all shall perish.
Irrespective of the main topic, this citation rings a good warning:
“Jaron Lanier has argued that the belief in scientific immortality, the development of computers with super-intelligence, etc., are expressions of a new religion, “expressed through an engineering culture” (Lanier, 2013, p. 186).”
I would have thought it's possible to pass the Turing test without being "in the world". But you'd have to be much more intelligent than an average human to do that, because you'd be successfully imitating something that is nothing like you.
Is there a strong argument against the fact that every approach to AGI (Artificial General Intelligence) eventually becomes ANI (Artificial Narrow Intelligence)?
In other words, as soon as a facet of AI (NLP etc) gets a specific name and context, it is no longer considered part of AGI.
Interesting idea! Future Me: A computer became better at my job, but I might still claim that it is not generally intelligent because it does not have the same kind of relationships to humans I have.
We can't create artificial intelligence because we have absolutely no idea what intelligence and consciousness really are. You can't recreate what you don't understand and we are not even close to understanding what makes us, us.
An analogy I've thought of is this. A modern jetliner is a miracle of engineering, aerodynamics, electronics, and programming. If the average person on the street, such as myself, was tasked with attempting to recreate one the best that would be achieved would be an extremely crude model that could not even achieve the most basic task of an aircraft: taking off and flying. Thus it is with researchers trying to recreate human intelligence.
>>> We know this phenomenon from everyday life. Most of us are experts on walking. However, if we try to articulate how we walk, we certainly give a description that does not capture the skills involved in walking.
The whole article is based on statements like the one above. Scientifically, it is possible to nullify it with a counterargument.
Unlike most of the comments here, I'd say I am at the very least sympathetic to the views in this article. But even I have to say this is a badly written paper: a whole lot of talking, various anecdotes and quotes, but nothing to show for it. This is just finger-wagging... no light is ever thrown upon this mysterious "being embodied in the world", and somehow, out of the blue, a discussion of causality's importance in cognition comes about and fizzles out without having said anything of substance.
When we talk about AGI, everyone always takes it for granted that an AGI would be human-like. But if you look at the complexity of the brain, and how poor our attempts to emulate it have been so far, I think it is a virtual certainty that the first successful attempts at AGI will create non-human intelligence. In many ways, I expect that our creations will find us as unrelatable as we find dolphins or other highly intelligent animals.
I often see AGI being discounted based on machines lacking expressiveness. Of course, text isn't as complex as human expression (I assume?), but isn't that how we're communicating right now? I think we're expected to assume each other has general intelligence. If so, wouldn't it be unfair to hold machines to a higher standard?
In 80 years we have gone from nothing to deep learning.
In the next 80 years, we will have AGI, and in the next 8000 years humans will be androids/cyborgs.
That we haven't managed to create true AGI so far doesn't mean anything. It's like the case of the aeroplane: there were even mathematicians who "proved" that things heavier than air cannot fly.
I know nothing about AI but my ignorant impression is that there are essentially two ways to achieve AGI:
1) We somehow model GI artificially, which seems like an impossible task.
2) Or instead we just model the fundamental neural mechanisms (dendrites, axons, etc.) and find a way to copy the state of a brain into a sufficiently powerful computer (a rough sketch of what I mean is below).
Is this correct or are there different approaches?
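For approach 2 at its very crudest, here is a minimal leaky integrate-and-fire neuron model; all the constants are illustrative placeholders, not biologically fitted values:

    # Membrane potential leaks toward rest, integrates input current,
    # and emits a spike (then resets) whenever it crosses threshold.
    def simulate_lif(input_current, dt=1.0, tau=20.0,
                     v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
        v = v_rest
        spikes = []
        for t, i_in in enumerate(input_current):
            v += dt * (-(v - v_rest) + i_in) / tau
            if v >= v_thresh:
                spikes.append(t)
                v = v_reset
        return spikes

    print(simulate_lif([20.0] * 200))  # spike times under constant drive

Copying a real brain's state would of course require far richer models than this, plus some way to read that state out of living tissue, which is the part nobody knows how to do.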
I guess the author is not aware of Rodney Brooks or any number of other long existing proponents of having AI be embodied. And that’s not even taking into account that the network, its data, and sensors attached to it, and their data all fill the role of a kind of embodiment in an environment if that’s something the author thinks is missing.
To better understand the brain, you have to not only talk about its cognitive functions, but all of it. Then you will understand what it is really for.
Your body stays alive by keeping levels within certain range. There are many functions in your body that have as an objective controlling those levels and your brain controls them at different levels of automation. Like your heart rate, sweating, urine output, etc.
But not everything can be automated, because some situations are more complex: finding a place with the right temperature, access to nutrients, and breathable air, etc. That is where decision making comes in... survival-oriented decision making. And that was what resulted in us developing what we today call intelligence.
When you disembody learning, and put it in an abstract evolutionary environment where survival only depends on solving an abstract problem, the result will not necessarily be an "AGI", a self-preserving, self-aware intelligence that is adapted to survive a wide range of scenarios. But if you changed that environment, made it very hostile and constantly changing, it could eventually lead to the evolution of an AGI.
>But not everything can be automated, because some situations are more complex
This is where you lost me. You are very right that brains do all kinds of things to regulate the body that we don't usually think of as influencing thought. But we can't model those because they're too complex? I don't think environmental variables are too complex.
But (1) if those are important, we can model them too and (2) it may be that we don't need to model computer 'thinking' after the structure of human brains anyway to solve problems intelligently. And (3) the totality of things an AGI might be 'aware of', even without simulating biology, could very well mean that the 'intelligence' of a system is nestled in a complex web of variables that give it the ability to have the equivalent of our tacit knowledge. That's probably an informational question rather than a question of needing to simulate biology.
If you are in the jungle and a lion comes at you, you will not have time to sit down and think what to do, and your brain is prepared to act in those situations as well.
I'd say that's an extremely crude understanding of what computers can actually model. There are ways that humans stream through thoughts that depend on tacit knowledge and unconscious connections, and there's no reason why those can't be modeled by a computer. And the equivalence between "rational" (in some informal, human sense of the term, as in talking out loud like Spock) and "computable" is just a misunderstanding. Computable contains much more than this naive conception of what is rational.
I'm no AGI expert, but I am a big fan of Robert Miles, who, coincidentally, made a kind of video rebuttal to this article ~3 years ago: https://www.youtube.com/watch?v=B6Oigy1i3W4
Achieving Strong AI makes no sense because we don't understand the human brain and how it works. Even if a non-biological AGI passes the hardest imaginable Turing test, there will be arguments against it because "it is still not like us".
My theory on this is that embodiment will be necessary (but not sufficient) as the physical world provides a consistent data model where high order learning can be built on. In effect this becomes the model to bootstrap off of.
This article seems to be yet another long-winded attempt to say that human beings have some sort of "soul" or "vital essence" that computers don't, but since those ideas are out of vogue, it uses obfuscated language to make the same point without explicitly saying it.
See:
> which have allegedly shown that our decisions are not the result of “some mysterious free will”, but the result of “millions of neurons calculating probabilities within a split second”
> the quotations are “nothing but” the result of chemical algorithms and “no more than” the behavior of a vast assembly of nerve cells. How can they then be true?
The article is suggesting that human beings cannot say true things about the world or themselves if human intelligence is no more than chemical algorithms and nerve cells, and that proponents of physicalism are therefore contradicting themselves. This is a fairly bizarre argument.
The use of "allegedly" with regards to explaining human decisions as a result of neurons further reinforces the claim that therefore, there must be some mysterious free will, a human soul, or here a "social context", to explain human consciousness.
The trouble with vitalism or claims of a human soul should be fairly self-evident in the modern age, and claiming that denying it is "scientism" is utter nonsense.
They then mix it up by saying that computers are not in the physical world and do not have a body, therefore cannot be generally intelligent like humans. This is obviously false, what is a robot if not a computer with a body? Computers can interact with the world using sensors and actuators, there's no theoretical reason that they could not match or exceed human physical capabilities (they already do in narrow instances).
It always mystifies me to what extent people insist on trying to hold on to this fantasy of free will, while remaining unable to define free will in a way that does not devolve to "magic" or some attempt to obscure a deterministic description behind smoke and mirrors.
At least some of the latter (e.g. a portion of compatibilists etc.) will if pressed admit that their "free will" is an effect or illusion of mind layered on top of determinism, but for a lot of people the very idea that they don't have actual agency seems entirely impossible to accept.
The offender I hear most often nowadays is trying to use Quantum Mechanics to bring free-will back into play. The idea being that if any part of nature is unpredictable, then that could be a mechanism for where free-will might come from.
Except that would be like saying your phone is conscious just because it gets bombarded by cosmic waves that sometimes cause bitflips. Just because the machine isn't 100% predictable, doesn't mean that those cosmic waves have any specific goal in mind.
> The idea being that if any part of nature is unpredictable, then that could be a mechanism for where free-will might come from.
Trying to use QM to introduce free will is just another manifestation of the same "mistaking the model for the reality" problem.
We have no reason to believe that quantum probability is the end of the road just because we can't see what causes the probabilistic results. For all anyone knows, Einstein was still correct and there is still no god playing dice, just the same old predictable billiard balls at a plane that we can't readily observe.
Perhaps. There's been some interesting work in showing that any hidden variable theories can't give any better predictions than QM probabilities. It remains to be seen whether their hypotheses are valid, but the inference seems sound.
That's well and good, but "prediction" is an attribute of a representative model, not of reality. Reality doesn't represent or predict itself; it just occurs. The problem of comparing against "hidden variable theories" in this context is that the variables wouldn't be hidden to reality; they'd only be hidden to us. So can we predict better than QM? Maybe not. Does QM describe the truth behind experience? Probably not, and it isn't meant to.
It always mystifies me that philosophical debates can even get that far when it's impossible to prove that a mind can know a truth because it begs the question.
It's hard to even follow that previous sentence with my own view because it presupposes its own conclusion: Some things you just have to take on faith. For me, the definition of free will is action outside of the constraint of fate. And I'm not sure if it's a fantasy or not, but if fate does not exist, I'd be much more likely to take it on faith that free will does exist. But again, the whole playing field of this discussion is denied because it presupposes that a mind can know a truth, and I've yet to see any reasoning that that can be proven, let alone an actual proof.
There is no requirement to "know a truth" to address the issue of free will.
The starting point would be: Come up with a definition that does not devolve into infinite regression ("god did it", which just moves the question "up one plane" and so does not provide a solution) or smoke and mirrors (and illusion over determinism).
If someone were to be able to form a cohesive definition that does not fall down one of those two holes, it'd be possible to at least try to determine whether or not the definition has holes.
But without even such an attempt at a definition, any argument in favour of free will falls on its face from the start.
> There is no requirement to "know a truth" to address the issue of free will.
How can I believe this to be true if you haven't proven to me that a mind can know a truth yet?
> The starting point would be: Come up with a definition that does not devolve into infinite regression ("god did it", which just moves the question "up one plane" and so does not provide a solution) or smoke and mirrors (and illusion over determinism).
How can I believe that it moves the question "up one plane" when you haven't proven to me that a mind can know a truth yet?
Etc. Most philosophers will cede to this line of reasoning and at best call it boring ala Russell and then continue from there. But they're doing the same thing that people that believe in free will do. They're either denying the battlefield or they're moving the goal posts or they're not coming up with a cohesive definition of a mind or a truth or what have you.
My point is that you can mock the free will folks all you want, but coming down on the other side of the argument is just as rooted in belief sans evidence. There is no proof or hope of a proof that a mind can know a truth that is not circularly reasoned so any firm belief that there is no such thing as free will is a delusion, short of maybe divine intercession.
"Coming down on the other side of the argument" is to acknowledge that until someone has provided a definition of free will, there is no argument to come down on.
Talking about free will without first defining it is as meaningless as just waving your arms in the air.
That is why "knowing a truth" is not relevant to this question yet. Without even a hypothesis from "the other side" to address, whether or not we can "know a truth" makes no difference.
In my original comment I defined free will from my perspective.
> For me, the definition of free will is action outside of the constraint of fate.
Either every action of an individual is predetermined or it isn't. I'm not talking about the causal mechanisms of non-predetermined action (i.e., I'm not saying quantum probability / wave form collapse means someone has free will) I'm simply giving a definition that I think suffices for me.
Though even if this definition seems nonsensical to you, I still wouldn't budge from my core argument because it's impossible to resolve any philosophical debate without first resolving epistemological gaps.
What is determinism? Something just like a single play in a game of pool? Or could it include all of the history of a person and all of the biology, chemistry, and physics that keeps them upright and existing? How much of that information is amenable to "deterministic description"?
Take care that your "determinism" isn't just smoke and mirrors, too.
There's no trouble; we don't know everything. We default to assuming there is nothing we can't explain, but that doesn't mean the assumption is the truth. Our job is to keep pounding the universe for truths until it gives them up.
If natural processes can only be artfully and artificially arranged to perform computation and not thinking, how is it that we are having this conversation?
"The modern project of creating human-like artificial intelligence (AI) started after World War II, when it was discovered that electronic computers are not just number-crunching machines, but can also manipulate symbols. It is possible to pursue this goal without assuming that machine intelligence is identical to human intelligence. This is known as weak AI."
Eeee, errr, well, no, it's not. Strong AI is the pursuit of human-level (not human-like), general purpose intelligence. Weak AI is the pursuit of solutions to difficult individual problems or classes of problems. Source: my AI classes, ca. 1989.
This article isn't starting well.
Edit: And I'm back after reading it all.
There are two important points when evaluating philosophical arguments about artificial intelligence.
1. Have they introduced dualism? (Usually, it's "how have they snuck in dualism without admitting it." Sometimes it's easy to see, as in Searle's Chinese Room thing. Othertimes not so much.)
2. What happens if you continue with their own questions? Does it lead to an unpalatable (or simply wrong) conclusion?
"The main thesis of this paper is that we will not be able to realize AGI because computers are not in the world."
To start with number 2, who exactly is "in the world"? A blind person? A deaf person? A quadriplegic person? A deaf-blind-mute-quadriplegic-from-birth person? That poor bastard from Johnny Got His Gun? How about this laptop? A robot? A robot that is indistinguishable from a human being without physiological tests? The author started out with an elaborate discussion of human-like and non-human-like general intelligence, but now has unrolled that (Without suitable caution signs. Health and Safety are going to be pissed.) and is now only interested in human-like artificial general intelligence, only in something that resembles a human being: "As Hubert Dreyfus pointed out, we are bodily and social beings, living in a material and social world. To understand another person is not to look into the chemistry of that person’s brain, not even into that person’s “soul”, but is rather to be in that person’s “shoes”. It is to understand the person’s lifeworld." (And I'm going to go out on another limb and ask how well anyone, anywhere, understands some other persons' lifeworld. I don't think that is possible.)
"However, there is a problem with both these quotations. If Harari and Crick are right, then the quotations are “nothing but” the result of chemical algorithms and “no more than” the behavior of a vast assembly of nerve cells. How can they then be true?"
And here we have number 1. If materialism is right, how can those quotations be true? How can anything said or done by humans be true? They can't be; they're just biochemical algorithms, neurons, and molecules. Truth requires a soul, and a soul could completely grok some other person's lifeworld. Obviously.
So clearly, nothing even remotely like Watson or AlphaGo would be intelligent. (Even if they're not intended to be general intelligences. Yes, he spent the last part of the article complaining that two systems weren't something that they were never intended to be, in spite of spending a large time at the beginning delineating that very difference.)
Personally, it feels to me that maybe AGI is possible, but a computer capable of it would look nothing like the computers we have today. And, most critically, I'm aware of no highly-progressed research into major new physical hardware systems which more closely resemble our neurons.
There are billions of neurons in a brain. There are billions of transistors in a CPU or graphics card. Somehow, somewhere along the line, we convinced ourselves that brain neurons and transistors are fungible.
At some layer of abstraction they probably are. But it seems to me that the sheer number of transistors necessary to emulate a neuron would lead to knock-on negative impacts on the overall system. Imagine, for a second, that it takes a billion transistors to emulate one neuron; given that we're quickly approaching the physical size limits of transistor production, this means you'd need many chips, and actually many computers, to emulate many neurons. Introduce many computers, and you have to introduce network latency and communication problems, both problems that the brain really does not have. And while you could argue "ok, the simulation will be slower, but it would still work", maybe, just maybe, the latency of communication between neurons is actually a critical component of cognition. In fact, that seems likely to me.
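Running with that hypothetical billion-transistors-per-neuron figure (purely an assumption for illustration), the scale problem looks roughly like this:

    transistors_per_neuron = 1e9   # the hypothetical figure above
    neurons_in_brain = 86e9        # commonly cited estimate
    transistors_per_chip = 50e9    # ballpark for a large modern die

    neurons_per_chip = transistors_per_chip / transistors_per_neuron
    chips_needed = neurons_in_brain / neurons_per_chip
    print(neurons_per_chip, chips_needed)  # ~50 neurons/chip, ~1.7e9 chips

Billions of chips means most "synapses" would have to cross a network link, which is exactly where the latency worry comes from.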
Many people are trying to build a brain on TensorFlow with an Nvidia graphics card originally designed to make Unreal Tournament run 20% faster. Google was among the first groups with the insight that custom silicon would make training and running these intelligences faster. But what we're talking about here isn't "running faster"; it's running fundamentally differently. We buy Supermicro motherboards with PCI buses, plug in silicon that's just a little different from the silicon I use to play Doom Eternal; is it really any surprise that very little progress on AGI has been made?
I don't know what chips to truly, more accurately emulate a neuron would look like. I suspect no one knows. I suspect that, if anyone figures it out, it won't be Google, or Microsoft, or Apple, or China, or Russia; organizations with so many processes, procedures, and immediate-term outcome expectations that selling an idea as wild as "we can't use any of the pre-existing computing theory out there, we need to start from scratch" would be impossible, in favor of "can't you just make the Tensorcore V2 20% faster?" If it will be invented, it will be invented by one person, in their garage, with a unique insight and decades of work.
But I also suspect that it will never be invented. If we can't even solve Alzheimer's, or psychosis, or even depression, brain disorders which impact hundreds of millions of people every year, what level of hubris is necessary to think we have even 0.1% understanding of what goes on in our heads? We live in a society which refuses to even address, let alone help alleviate, mental illness, and you think we're going to be able to build, let alone maintain and debug, a simulated brain?
This feels a lot like Douglas Hofstadter's elaborate explanation in Gödel, Escher, Bach: An Eternal Golden Braid of how computers will never beat the best human chess players.
Every couple of months we get hit with the same wave of AI stories, and every couple of months I post these two links, inspired by ideas from that book.
Godel's Incompleteness Theorem and Turing's Halting Problem.
You can't build a perfect machine, because that would imply understanding reality perfectly.
Ugly reality is going to break your perfect machine, eventually. With long enough time horizons, the probability approaches 1.
When your machine breaks, you are going to need something else, either another, newer machine which can fix or replace it (in Godel's example, the new book of logic/truth), or something dumb like a human wetware, just flexible enough to know the right answer is "unplug the machine and plug it back in"
> If you ask me in principle if it’s possible for a computing hardware to do something like thinking, I would say absolutely it’s possible. Computing hardware can do anything that a brain could do, but I don’t think at this point we’re doing what brains do. We’re simulating the surface level of it, and many people are falling for the illusion. Sometimes the performance of these machines are spectacular.
If we're successfully simulating the surface level of it, the underlying mechanism is (imho) totally irrelevant to the user. If general intelligence is happening, does it really matter if the underlying mechanism is neurons, or transistors, or vacuum tubes?
Here's the fun philosophical question: Do you have a below-surface-level understanding of anything? I mean, sure, you know that if you do this, something else does that, but do you really understand?
And this is too bad coming out of DH, because in so many ways I think he's one of the best evangelists out there for understanding what computers can do, and for arguing against stuff such as Hubert Dreyfus' claims that computers can't do X.
That's unfair. Hofstadter said he thought it wouldn't happen until AI programs were our equals in general; he emphasized he was guessing and that his colleagues would disagree; and the explanation was of his worldview that led to the guess.
This article is a muddled, ridiculous mess. I read the first few paragraphs and couldn't motivate myself to do any more than skim the rest. As far as I can tell, there's nothing new here, and the author's argument that "AGI will not be realized" might be true if you stick to his ad-hoc definition of AGI, which seems to conflate "human level" intelligence and "human like" intelligence.
Yes, it's probably true that AIs will not have "human like" intelligence, for some of the reasons cited. Lack of embodiment and the associated experiential learning is the chief reason that I would personally cite for why this is true. However, that line of reasoning is completely irrelevant unless A. you make the mistake of conflating "human like" and "human level" OR B. you very specifically demand that your AI must be "human like."
Everybody else realizes that the goal is to build an AI that is as general as human intelligence, not necessarily to build an artificial human.
Edit:
To go back to the embodiment issue for a moment... I think embodiment is important. I've been playing around with building a trivial little shell to pack some AI research in, that can be carried around (initially), and "experience" the world via a variety of different sensors. And I do think, again, that embodiment will probably be necessary to get an AGI that can "act human". I just don't see that as being the goal. Yeah, yeah, Turing Test, blah, blah, I know. As much respect as I have for Turing (and it's a lot, obviously) I don't actually consider the Turing Test to be very interesting, vis-a-vis evaluating an AI. In fact, I think focusing on it could be harmful, because it seems that getting an AI to pass it amounts to teaching the AI to lie well. This seems counter-productive to me.
As for why I think embodiment would matter to making a "human like" (as opposed to "human level") AGI: it mainly comes down to experiential learning. Imagine, if you will, what you know about the meaning of terms like "fall", or "fall down". How much of your knowledge of this is rooted in that fact that you, in your body, have fallen down? And how does that play into your ability to construct metaphors involving other things "falling"? And so on.
But I don't think any of this stuff is necessary to make an AGI that can operate at a human level of generality and solve useful problems on our behalf. And by "operate at a human level of generality" I mean something approximately like "the same AI software, with appropriate training, can do anything from playing chess, to driving a car, to coming up with new theories in physics and chemistry", and so on.
I think in order for AGI to be realized, the machine must have algorithms that can dynamically work with new data received from its sensors, interpret it, and classify it.
It must generate its own data, and classify it, organize it, compartmentalize it, and regularly subdivide it. And most importantly, it needs to be able to invalidate it. It needs to operate on a “most likely” scenario, based on its own gathered evidence, where the scenario holds true until it doesn’t, and then it needs to find the new scenario.
All of the data and information in the world cannot be encoded for the AGI and manually spoon-fed to it. This is the fallacy. It doesn’t scale. It cannot work, because it doesn’t operate on the “most likely” scenario. The whole concept would fall apart and crumble in on itself.
This data fidelity issue was the problem that ultimately killed the symbolic AI attempts in the 70s, with all the Lisp programming, where they attempted to manually give knowledge to a robot. And then, after running out of money, they realized that they just couldn’t sustain creating all of the data manually.
And the problem with today’s Deep Learning AI, is that it’s just a very fancy pattern matcher. It’s like a very fancy regular expression searcher, to give an analogy.
The problem with today’s AI attempt, is the same problem that doomed the 1970s attempt: namely, the lack of data fidelity. At some point, you can’t have a human go around classifying everything for you.
Given that, I still think AGI is possible, but a major rethinking is necessary to achieve it. The Deep Learning neural net ideas of today, will not achieve it.
The difference is, people in the Stone Age had no idea what nuclear fission meant. They might have believed you, albeit superstitiously, if you told them that there would be food everywhere one day.
Interestingly, the examples you cite seem to be in decreasing order of difficulty, with several orders of magnitude more effort required for each. AGI is going to be much easier than living forever, which in turn seems like it should be easier than travelling faster than light.
ISTM that AGI would swiftly result in vastly extended lifespans, at least for those fortunate enough to be at the control knobs. In that sense, they are at the same difficulty level.
"Hubert Dreyfus, who argued that computers, who have no body, no childhood and no cultural practice, could not acquire intelligence at all. One of Dreyfus’ main arguments was that human knowledge is partly tacit, and therefore cannot be articulated and incorporated in a computer program. "
I have not read the rest of the article but in the introduction it's stated:
"The article further argues that this is in principle impossible, and it revives Hubert Dreyfus’ argument that computers are not in the world."
Wiktionary defines tacit as "Not derived from formal principles of reasoning" [1].
So the main argument is that humans have intelligence that is impossible to express through reason or codification. In other words, humans have a literal soul, divorced from physical world, that cannot be expressed in our physical world thus making any endeavour to create artificial intelligence impossible.
This is a dualist line of reasoning and, in my opinion, is nothing more than theology dressed up in philosophy.
I would much rather the author just flat out say they are a dualist or that they reject the Church-Turing thesis.