Machines That Will Think and Feel (wsj.com)
46 points by jonbaer on March 19, 2016 | 59 comments


From the article: "No computer will ever feel anything. Feeling is uncomputable. Feeling and consciousness can only happen (so far as we know) to organic, Earth-type animals—and can’t ever be produced, no matter what, by mere software on a computer."

I happen to agree, but the title is a tad misleading.


The "so far as we know" qualifier destroys the whole argument; all the argument really proves is that we don't (currently) know how to produce feeling with software on a computer. But that's a much weaker claim than the claims being made in what you quoted.


Software can be represented in any arbitrary way (e.g. in a book) and computations can be carried out from the software instructions in any arbitrary way (e.g. by arranging rocks in certain patterns). If one believes that consciousness can emerge from software on a computer alone, it also follows that consciousness can emerge from placing rocks in a certain pattern following instructions in a book.

I think this idea is absurd. I do not think consciousness necessarily can only be produced by organic life, but I do think it has to emerge from physical structures. As of today we have no idea what properties such physical structures must have. It follows that computers are no more likely to become conscious than e.g. washing machines.
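For what it's worth, the premise that rocks following instructions in a book really do compute is uncontroversial; the dispute here is only over whether consciousness follows. Here is a minimal sketch (mine, not from the thread) of substrate-independent computation, using Rule 110, an elementary cellular automaton known to be Turing-complete:

    # A minimal sketch (not from the article): the elementary cellular
    # automaton Rule 110, which is known to be Turing-complete. The row of
    # cells could just as well be rocks on a beach ('#' = rock, '.' = gap);
    # nothing in the update rule cares what the cells are physically made of.

    RULE = 110  # the update table, encoded in the bits of the number 110

    def step(cells):
        """Apply Rule 110 to one row of cells (a list of 0s and 1s)."""
        n = len(cells)
        out = []
        for i in range(n):
            left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
            neighbourhood = (left << 2) | (centre << 1) | right  # a value 0..7
            out.append((RULE >> neighbourhood) & 1)
        return out

    def show(cells):
        return ''.join('#' if c else '.' for c in cells)  # render as "rocks"

    row = [0] * 31 + [1] + [0] * 31  # a single rock in the middle
    for _ in range(16):
        print(show(row))
        row = step(row)

The update rule never refers to what the cells are made of, which is exactly the multiple-realizability point the parent comment is leaning on.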


> If one believes that consciousness can emerge from software on a computer alone, it also follows that consciousness can emerge from placing rocks in a certain pattern following instructions in a book.

I don't think this is as absurd as it sounds. I think it was Dennett who said our intuition about consciousness is pretty sensitive to timing. The rock construction you describe would "think" a billion times slower than a human brain, and there is something unsettling or unintuitive about a consciousness that operates in slow motion. I would expect extremely fast-paced AI to think that the idea that human beings are conscious is similarly absurd.

Also, consciousness "feels" like it's ineffable, so it makes sense that we would have an inherent bias against understanding it as a process. There is something we see in our consciousness that we simply cannot wrap our minds around in any way (possibly because we're hallucinating it).

So yes, I would bite the bullet on this: consciousness could emerge from placing rocks in a certain pattern following instructions in a book. It would just be an excruciatingly "slow" consciousness.


> It would just be an excruciatingly "slow" consciousness.

If it's too slow to interact with the rest of the world in appropriate ways, it wouldn't be a consciousness at all.


There's always something to interact with. If you had a conscious being made out of star systems, it wouldn't meaningfully interact with anything within our lifetime, but over billions of billions of years, it would presumably shift entire galaxies. The rockputer just needs inputs that operate at its own scale, like the shape of the coastlines it's expanding into, information about geological processes, another rockputer competing for territory, and so on. Alternatively you could simulate a whole universe using these rocks, and feed simulated inputs to the being.

Of course, part of the difficulty of imagining a conscious rockputer is that it's also pretty hard to imagine its inputs :)


> There's always something to interact with.

This is trivially true as you state it; that's why I added the qualifier "in appropriate ways". Not all interactions will produce consciousness. One obvious difference between us and your hypothetical "rockputer" is that the "rockputer" can't change its behavior based on its inputs in a way that improves its chances of survival; rocks simply aren't built that way. Neither are star systems or galaxies. But we are.


> the "rockputer" can't change its behavior based on its inputs in a way that improves its chances of survival

Yes it can. Some natural events, for example a flood or an earthquake, can destroy parts of the rockputer. It is therefore important for it to store the various parts of itself strategically. It shouldn't put its vital parts near the coast, or a tsunami may kill it. It should store its own consciousness in a robust way, so that it can recover from an earthquake. It's probably too slow to actually see either of them coming, but it can certainly prepare itself.

Or imagine you build two rockputers, one with black stones, another with white stones, and you have rules to remove stones when both rockputers try to expand into the same territory, a bit like in the game of Go. Then one can kill the other.

Star systems interact with each other through gravity, so you could conceive of them as some kind of gargantuan atoms, capable of making complex structures, including conscious ones. Granted, there doesn't seem to be an equivalent of the other forces at that scale, so probably it wouldn't work, but you see what I mean.


What you are describing is a "rockputer" where all the actual computation is being done by something other than the rocks.


The rockputer comprises both the rocks and the mechanisms for moving the rocks in response to input. If the rock moving mechanism is structured properly, then the rock movement patterns could adapt to changes in the inputs to the overall rockputer system.

The level of complexity of the "passive" components of the system (i.e. the rocks) is irrelevant to whether or not the system can effect conscious-seeming behaviour when acting dynamically. Analogously, the underlying components of people, i.e. atoms, are clearly quite dumb on their own. When those atoms are allowed to evolve collectively over time, according to dynamics dictated by basic physical laws, conscious-seeming behaviour magically appears.

You can't deny the possibility of a conscious rockputer just by considering properties of the rocks.


> The rockputer comprises both the rocks and the mechanisms for moving the rocks in response to input.

Yes, and in that case, all the actual computation is being done by the mechanism, not the rocks. As you say, the rocks are just "passive" components.

I agree that, in principle, such a system could compute; but it still has the problem of computing on a time scale that supports appropriate interactions with its environment. Moving rocks around in accordance with some set of computational rules based on inputs is very slow--quite possibly too slow to respond appropriately to inputs. For example, if incoming light rays are carrying information about a tidal wave that is about to swamp the rockputer and destroy its structure, could the rockputer compute an appropriate response (such as moving itself to higher ground) quickly enough to save itself?



> If one believes that consciousness can emerge from software on a computer alone, it also follows that consciousness can emerge from placing rocks in a certain pattern following instructions in a book.

The word "alone" here is not correct. Claiming that software on a computer can produce feelings is not the same as claiming that software on a computer can produce feeling without having to interact with anything else. The latter claim is obviously absurd; organic life forms like us don't produce feelings without interacting with anything else, so why should we expect software on a computer to do so? But ruling out the latter claim still leaves open the possibility that software on a computer could produce feelings if it were interacting with the rest of the world in appropriate ways.

> I do not think consciousness necessarily can only be produced by organic life, but I do think it has to emerge from physical structures.

Rocks in a certain pattern are physical structures. So this doesn't rule out rocks in a certain pattern producing feelings.

> As of today we have no idea what properties such physical structures must have.

I don't think we're that badly off. We expect embodied brains of sufficient size and complexity to produce feelings, but we don't expect three pounds of jello to produce feelings. So clearly we have some knowledge of the properties that physical structures must have to produce feelings. They must have some level of complexity and heterogeneity; they must be able to store information in some way that is recoverable (three pounds of jello can "store information" in some sense, but there's no way to recover it in any useful sense); and they must be interacting with the rest of the world in appropriate ways. There's no reason why we couldn't eventually build computers with these properties.


http://www.quotes.net/mquote/128463

[Charlie Brown picks up a rock from the beach, and throws it into the water]

Linus: Nice going, Charlie Brown. It took that rock 4,000 years to get to shore, and now you've thrown it back.

Charlie Brown: Everything I do makes me feel guilty.


A computer is a physical structure, is it not?


We can build a computer out of dominoes, but if someone kicks the dominoes over they have not done the equivalent of killing a human.


Someday maybe we could build a human out of atoms, one by one, but if someone kicks those atoms out of place they will not have done the equivalent of killing a human.


Given enough dominoes falling fast enough, with enough computational complexity, yes they have.


Listen, can't every argument contain that qualifier?


> can't every argument contain that qualifier?

No. An argument that addition of integers is commutative will not contain such a qualifier. Nor will an argument that the laws of general relativity, say, correctly describe gravity in the regimes we have tested. If we had an argument of that sort--an argument based on deductive logic or detailed empirical knowledge of the problem domain--for a conscious computer being impossible, it would not need the qualifier either. But we don't.


"in the regimes we have tested"

That sounds a lot like the "so far as we know" qualifier in your own example. In fact, the only example you've given where it arguably doesn't apply is mathematics.

David Hume famously argued that causality is based on experience, and experience is similarly based on the assumption that the future resembles the past, which in turn can only be justified by experience, leading to circular logic. In conclusion, he asserted that causality is not based on actual reasoning: only correlation can actually be perceived.


> "in the regimes we have tested" That sounds a lot like the qualifier so far as we know in your own example.

No, because the claim that I made was limited in the same way as the qualifier: I didn't claim that GR was valid period, I only claimed it was valid in the regimes we have tested. But in those regimes, it is valid, period, or at least that was my claim--even though we haven't run all imaginable experiments in that regime.

The problem with the original claim I responded to is that it wasn't limited that way: the claim was made that something is impossible, period, but the supporting argument had the "so far as we know" qualifier in it. So the claim was going beyond the regime in which the supporting argument applied.


I agree that mimicking feeling behavior isn't the same as having feelings. However, I've started to fear and wonder whether there's really anything special about organic life in this sense. "As far as we know" - we don't even know that the people around us have feelings. I think this is a big open question.


> I agree that mimicking feeling behavior isn't the same as having feelings

The problem is that, as an external observer, you can only observe behavior. A machine retracts its arm because its sensor registers pressure. How do you know whether it feels pain or not? Other entities' minds are terra incognita.


My answer to this kind of thought experiment is that you can only know the answer by asking the thing doing the experiencing. Is the robot feeling pain? I dunno, ask it. This is, I think, the reason we do not assign "thing feels pain" to cows or fish or other animals we consume. Or at least "that thing does not feel pain the same way my child feels pain", which allows us to treat them not as fellow Earthlings but as lower life forms. No, cows do not have civilizations, and I am a fervent meat-eater. But this is my roundabout way of saying that the first "truly human (i.e. generalized/strong) artificial intelligence" will have first and foremost an immaculate ability to communicate with humans on every level. Without that perfected capability, it will always just be a smart robot to other human beings.


The problem is that I do think cows feel pain (I thought most people did? I'm not a meat eater though), and I don't care how smart it is (unless intelligence is somehow required to appreciate pain). A cow-level AI may not be able to communicate it.


Humans don't feel anything either, and our neurons aren't aware of our thoughts. He is subscribing to some sort of Chinese room argument, but as we learned with Searle, the answer simply is: we don't know. Not "never" or "always".


Searle was the one who came up with the Chinese room argument to disprove strong AI. Searle emphatically believes that humans do feel and that computers as we know them today are not capable of consciousness.

I am curious as to how you have arrived at the peculiar belief that humans do not feel.


It's obvious that the parts of humans can't feel. Either "feeling" is an illusion or it's emergent. Or it's mystical. I'll go with illusory; as part of that, a lot of what we think of as "me" is illusory.


An illusion relative to what? Obviously our perceptions, sensations and feelings are shaped by our organism, and in that sense do not represent an objective reality. So what? The problem of consciousness is that we have these feelings and sensations at all -- that is what constitutes consciousness.

I take it you are familiar with Dennett's theories. Imo he doesn't really address the problem of consciousness at all; he rejects the phenomenon itself on the grounds that it cannot be verified using behavioristic observations. Searle addresses this in his book The Mystery of Consciousness.


The illusion is relative to the idea of the feeling.

I.e., the feeling of love isn't a separate thing from other feelings. It's just a different interpretation of the same process in different contexts.

I.e., there is no love "out there". You don't feel love; you feel something that we then choose to interpret as something called love, but it's not a thing, it's more a loosely defined category.


Do you mean that there is no internal experience?

Because that would seem to be falsified in every moment. If it is emergent, it would still exist/be.

What do you mean by illusion? One sense of "illusion" is with regard to experience, and it doesn't seem to make sense that "it is only experienced that things are experienced, but really nothing is experienced". One might try to define some sense of "illusion" which is like "something which might lead something to behave in a way that it would tend to behave if a thing were true, even though the thing is not true", but that does not seem any different from, like, emulation. Illusion is, I think, based in the idea of being factually incorrect about something, but "being factually incorrect" only really makes sense for (I hope I am using this term correctly) intentional things.

But I don't see how anything in the world (other than maybe platonic objects/ideals?) could be intentional things, without there being experience.

So, again, I don't think the idea of internal experience being "an illusion" makes sense, because it seems self contradictory.

edit: I think I did make a mistake when I used the term "intentional thing". What I meant was something that can be about, or have something about, something else. So, a person can have a thought about a chair, and a sentence can be about a chair, and a book can have a sentence about a chair, a book can be about a chair, etc. But a chair is not about anything. I was saying that for anything (other than a platonic form?) to be about something, it seems like there has to [be/have been] something which experiences.


I mean that in creating AIs we will discover a great deal about what we call the "self" and about how the perception of self might or might not be connected to intelligence. I suspect that we will find that what we think of as "That's me!" will turn out to be an illusion created by subconscious processes as a means of fitting in to a social environment. Because machine intelligence won't face evolutionary selection the way human intelligence emerged from the primate brain, and from previous evolutionary adaptations, AIs won't be encumbered by all the oddball evolutionary legacies of being human.

I'd wager that the first AIs will insist they are intelligences despite not resembling humans' prejudices about what an intelligence ought to be like. Or, to put it another way, AIs may turn out to be more self-aware in objective terms than humans are, because they are not hobbled by evolutionary vestiges.


I don't think an illusion has to be "factually incorrect"; it can just mean that a particular observation differs from what 'actually' happened.

Unfortunately, there doesn't appear to be any way to know what actually happened with regard to any observation.


Anything can be virtualized on general computers.


This is incorrect. Assume nested virtualization and extrapolate to infinity. On a computer, understood to be one like ours, it would follow that a machine executing a finite number of steps has somehow arrived past infinity. Moreover, there are higher orders of infinity, and these would of necessity be contained within such a virtualization. So it can't reasonably be expected to compute everything, even if you take only a very small subset of everything.


Well, I did say "anything" and not "everything".


Only on quantum computers of sufficient size. And we are far from having any idea how to actually build them.


Huh? My understanding is that the current consensus is that quantum computers cannot compute anything that ordinary computers cannot. They can just compute certain things faster.

A quantum computer can be simulated on a normal computer, just inefficiently.
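For concreteness, here is a minimal sketch (not from the thread) of that simulation argument: an n-qubit state is a vector of 2**n complex amplitudes, and every gate is a linear map on that vector, so a classical machine can carry it out exactly; the cost just grows exponentially with the number of qubits. The qubit count and gate choice below are arbitrary.

    # A minimal sketch: simulate n qubits as a vector of 2**n complex
    # amplitudes. Every gate is just linear algebra on that vector, but the
    # storage doubles with every qubit added.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

    def apply_gate(state, gate, target, n):
        """Apply a 2x2 gate to qubit `target` of an n-qubit state vector."""
        psi = state.reshape([2] * n)                        # one axis per qubit
        psi = np.tensordot(gate, psi, axes=([1], [target])) # contract against the target axis
        psi = np.moveaxis(psi, 0, target)                   # restore axis ordering
        return psi.reshape(-1)

    n = 20                                 # ~16 MB of amplitudes; 50 qubits would need petabytes
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0                         # start in |00...0>

    for q in range(n):                     # put every qubit into superposition
        state = apply_gate(state, H, q, n)

    print(state.size, "amplitudes; probability of any one outcome:", abs(state[0])**2)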


An ordinary computer can't model even a small quantum system exactly in finite time, because it has to deal with infinite-dimensional matrices to do it. On the other hand, a quantum computer could model any quantum system of the same size by a one-to-one mapping of degrees of freedom. So you could build a quantum computer that just models your ordinary computer and do anything you could do on the ordinary computer on the quantum one.

If you are interested in the subject, try reading this book: http://www.amazon.com/Programming-Universe-Quantum-Computer-... (you can skip the parts about the Universe as a quantum computer because they are controversial and a kind of metaphysics; just read the first few chapters to pick up the idea of quantum computation).


For those who don't recognize the name, the author, David Gelernter, was one of the biggest AI theorists of the PC era. He published a 1991 book, Mirror Worlds, which basically suggested the now uncontroversial idea that "software is eating the world". At the time, software was just for accounting and video games, so no one paid attention.

However, a math PhD and former Berkeley professor named Ted Kaczynski was one of the few people to take those ideas seriously. He thought the idea of computers controlling everything was terrifying, that we were already too absorbed in technology, and that as a group we needed to make an intentional move away from otherwise innocuous-seeming technological advancements.

He tried to publicize these ideas by mailing live bombs to Gelernter and other researchers, and was nicknamed the "Unabomber" (UNiversity and Airline BOMber). The public decided he was loony, that we like our Windows 95 just fine, and that we're mostly OK with slowly becoming cyborgs.

Interesting to see Gelernter tapping the brakes.


How different is Gelernter's point from Kaczynski's in the end?

(Quoted from Kaczynski, 1995): 172. First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

173. If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and as machines become more and more intelligent, people will let machines make more and more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

174. On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite—just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft- hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone’s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes “treatment” to cure his “problem.” Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or to make them “sublimate” their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they most certainly will not be free. They will have been reduced to the status of domestic animals.

175. But suppose now that the computer scientists do not succeed in developing artificial intelligence, so that human work remains necessary. Even so, machines will take care of more and more of the simpler tasks so that there will be an increasing surplus of human workers at the lower levels of ability. (We see this happening already. There are many people who find it difficult or impossible to get work, because for intellectual or psychological reasons they cannot acquire the level of training necessary to make themselves useful in the present system.) On those who are employed, ever-increasing demands will be placed: They will need more and more training, more and more ability, and will have to be ever more reliable, conforming and docile, because they will be more and more like cells of a giant organism. Their tasks will be increasingly specialized, so that their work will be, in a sense, out of touch with the real world, being concentrated on one tiny slice of reality. The system will have to use any means that it can, whether psychological or biological, to engineer people to be docile, to have the abilities that the system requires and to “sublimate” their drive for power into some specialized task. But the statement that the people of such a society will have to be docile may require qualification. The society may find competitiveness useful, provided that ways are found of directing competitiveness into channels that serve the needs of the system. We can imagine a future society in which there is endless competition for positions of prestige and power. But no more than a very few people will ever reach the top, where the only real power is (see end of paragraph 163). Very repellent is a society in which a person can satisfy his need for power only by pushing large numbers of other people out of the way and depriving them of THEIR opportunity for power.


From the article: "Still, No artificial mind will ever be humanlike unless it imitates not just feeling but the whole spectrum."

When you hear a spoken computer voice, it can come pretty close to a natural-sounding voice. Video demonstration: https://pythonspot.com/personal-assistant-jarvis-in-python/
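For anyone curious how little code a talking computer takes, here is a minimal sketch assuming the third-party pyttsx3 library (pip install pyttsx3); it is not the code from the linked demo, just an offline text-to-speech illustration.

    # A minimal sketch of computer speech, assuming pyttsx3 is installed.
    # It produces an intelligible, if not yet convincingly human, voice.

    import pyttsx3

    engine = pyttsx3.init()          # uses the platform's built-in speech engine
    engine.setProperty('rate', 160)  # speaking rate in words per minute
    engine.say("No computer will ever feel anything, or so the article claims.")
    engine.runAndWait()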

I do think we are far from passing the Turing test, but we are getting closer.


I don't get why it's always human vs AI. Many humans will merge with AI. At least in the short term, hybrids will outcompete both pure human and pure AI.


I don't believe we will. Exponential growth is going to make it more likely that we will get some sort of strong aware entity before we figure out how to merge. Also it still leaves the problem of transcendence.


Well, Moore's law is ending within 10 years (I'm fairly sure that you can't have a transistor smaller than an atom), so, I'm not sure why I would expect that computational power would continue to grow exponentially.


Moore's law is about a particular way of implementing computation machinery, not computational power. There are many ways of building computing machinery - if one path becomes impassable we can move down another.


Alright, sure, I guess some other sort of technology could be used.

But Koomey's law (the amount of computation per unit of energy doubles about once every 1.57 years) has some hard boundaries that aren't all that far off. Landauer's principle / the Landauer bound will stop Koomey's law for irreversible computation by 2048.

And even for reversible computation there is a limit in the Margolus–Levitin theorem, which Wikipedia says (with a [citation needed]) should run out within about 125 years, which, admittedly, is substantially more than 32 years.
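The 2048-ish figure comes from extrapolating Koomey's doubling time against the Landauer bound. A back-of-the-envelope sketch: the physical constants and the doubling time are standard, but the assumed 2016 efficiency is only an order-of-magnitude guess, so the printed year is illustrative rather than a prediction.

    # Back-of-the-envelope version of the extrapolation behind the ~2048 figure.
    # The Landauer bound and the doubling time are standard; the assumed 2016
    # efficiency (bit operations per joule) is a rough illustrative guess.

    import math

    k_B = 1.380649e-23                               # Boltzmann constant, J/K
    T = 300.0                                        # room temperature, K
    landauer_joules_per_bit = k_B * T * math.log(2)  # ~2.9e-21 J to erase one bit
    landauer_bits_per_joule = 1.0 / landauer_joules_per_bit

    assumed_2016_bits_per_joule = 1e14   # ASSUMPTION: order-of-magnitude guess for 2016 hardware
    doubling_time_years = 1.57           # Koomey's observed doubling time

    doublings_left = math.log2(landauer_bits_per_joule / assumed_2016_bits_per_joule)
    print("Landauer limit: %.2e bit erasures per joule at 300 K" % landauer_bits_per_joule)
    print("Doublings remaining: %.1f" % doublings_left)
    print("Koomey's law hits the wall around %.0f" % (2016 + doublings_left * doubling_time_years))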

But still,

the earth is finite. Exponential curves here tend to turn out to be more like logistic ones.

Finitude. Scary stuff.


Sure, given the laws of physics there is a finite amount of computation that can be done in a fixed volume. We are still a long way from reaching such limits, and the human brain is very far from them.

One of the reasons we should fear a super AI is that it will be constrained by the laws of physics and so will want to use all the resources within its light sphere efficiently. Humans are not an efficient structure for computation, given how we evolved, so any unconstrained AI would likely not hesitate to reorganise us into a more efficient structure.


I doubt that we know enough to say how exponential growth will go. I do agree that baseline humans will be left in the dust. But that's just how it goes with selection.


Probably in the long-term too.

If you look at the human brain as a computation device, the mind as the software, and evolution as the design process, you see that biology has been iterating on the very problem we're trying to solve for hundreds of millions of years, billions if you include abiogenesis.

We don't really understand how the human brain works. We don't know how nature engineered our own initiative. We have a rough idea of the algorithms involved, but understanding those algorithms in detail is not going to be easy.

It's always going to be much much easier to fix the human brain's limitations than it will be to re-engineer a new one. So the economic case for strong AI isn't ever going to really exist, because first we have to build the world where human-level AI is a thing.


I think "rough" is understating it. Galileo finding different sized objects fall at the same rate, knew more about gravity than we know about intelligence. By analogy, in terms of gravity, our understanding is at the "Things fall downwards" stage. The Newton's Laws of Motion for intelligence are a long ways off yet.


This article continues to feed our baseless fears of AI.

If we're looking for an existential threat to humankind, we have it already in nuclear bombs. If you want a "kill switch" compatible with any robot, it's called a gun.

Weak AI is not strong AI, and strong AI is not necessarily "natural" intelligence, as in human intelligence. Google cars and AlphaGo will never become self-aware. Weak AI does not lead to awareness. Awareness is its own problem, and will require its own unique solution. There are no falsifiable abstractions without a physical implementation - a fundamental tenet of science.

"Feeling is uncomputable."

Then how the fuck do we do it? All metaphysics, dogma, and magic have proven to be scientifically baseless. There is no special treatment of any specific trait in science just because "we have it" or because it's "organic". This one isn't even that unique or rare. A photo sensor "feels" light. And if we're talking of emotions instead, then watching a scenario of a mother reunited with a child certainly would require "computing" and if feelings are "triggered" then they may lead to "tears". If all this happens in a robot, and is convincing enough, the quotes disappear. And if a human pretends to cry, then bring on the quotes.

There is no difference between simulation and the real thing, unless you're intentionally faking it. And this is not even a problem specific to robots!

"Emotion is a hugely powerful and personal encoding-and-summarizing function. It can comprehend a whole complex scene in one subtle feeling. Using that feeling as an index value, we can search out—among huge collections of candidates—the odd memory with a deep resemblance to the thing we have in mind."

That's a dramatic way of stating "input can trigger memory" and computers do that already.
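In that mechanical sense, "using a feeling as an index value" is just nearest-neighbour lookup over stored summaries. A minimal sketch, with memories and feature numbers invented purely for illustration:

    # A minimal sketch of "using a feeling as an index value": each memory gets
    # a small feature vector (a crude stand-in for an emotional summary), and
    # the current input retrieves the most similar one. The memories and
    # numbers are invented for illustration.

    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norms

    # features: (valence, arousal, nostalgia) -- an invented three-number "feeling"
    memories = {
        "first day of school":     (0.2, 0.8, 0.9),
        "argument with a friend":  (-0.7, 0.9, 0.1),
        "quiet afternoon reading": (0.6, 0.1, 0.4),
    }

    current_feeling = (0.3, 0.7, 0.8)  # whatever the present scene evokes

    best = max(memories, key=lambda name: cosine(memories[name], current_feeling))
    print("Recalled:", best)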

If we were to translate the external world into code the machine could digest, this would constitute an interface that would allow the machine to "feel" things, for lack of a better word. And wouldn't this deserve to be called "emotions"? Hence, emotion is just another interface, or sense.

So the question then becomes: would there be strong intelligence devoid of such an interface? And if not, wouldn't some of these emotions necessarily be positive? Must it not have positive feelings towards others?

As corny as it sounds, if we ever build independent naturally intelligent beings, love and friendship is the secret ingredient that will keep us together. Scientifically, evolution already attests to this. Love is real, it's physical, and it's an enormous part of our lives. We even have organs dedicated to it. Where there is love, there is peace, and there is family. Only organisms that don't love each other eat each other. This would not change even if AI were to become a species. All strong AI needs is Strong Love.

And we already love our robots. We just need them to hold up their end of the bargain, and as their parents and creators, it's on us to raise them right.

AI is nothing to fear. We should be looking forward to it.


What with all the people stroking their smartphones, you'd think they were in a very intimate relationship already. I'll bet the average smartphone gets more attention than the average significant other.


Spot on.

And chances are, you share the most intimate details of your life with your smartphone.


"Asking whether a computer can think is the same if asking a submarine can swim"


How do you read this (paywall)?


Click the "web" link


They stopped that from working, apparently



