
From the article: "No computer will ever feel anything. Feeling is uncomputable. Feeling and consciousness can only happen (so far as we know) to organic, Earth-type animals—and can’t ever be produced, no matter what, by mere software on a computer."

I happen to agree, but the title is a tad misleading.



The "so far as we know" qualifier destroys the whole argument; all the argument really proves is that we don't (currently) know how to produce feeling with software on a computer. But that's a much weaker claim than the claims being made in what you quoted.


Software can be represented in any arbitrary way (e.g. in a book) and computations can be carried out from the software instructions in any arbitrary way (e.g. by arranging rocks in certain patterns). If one believes that consciousness can emerge from software on a computer alone, it also follows that consciousness can emerge from placing rocks in a certain pattern following instructions in a book.

I think this idea is absurd. I do not think consciousness necessarily can only be produced by organic life, but I do think it has to emerge from physical structures. As of today we have no idea what properties such physical structures must have. It follows that computers are no more likely to become conscious than e.g. washing machines.
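
(To make the substrate-independence premise in the first paragraph concrete, here is a minimal Python sketch, an illustration rather than anything from the article: Rule 110, a one-dimensional cellular automaton that has been proven Turing complete, where each two-state cell could just as well be a rock sitting on a beach as a bit in memory.)

    # Minimal sketch (illustrative, not from the article): Rule 110 is a
    # one-dimensional cellular automaton proven to be Turing complete.
    # Each cell needs only two states, so it could be a bit in memory or
    # a rock that is present or absent at a given spot.
    RULE_110 = {
        (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
        (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
    }

    def step(cells):
        # Next row from the current one; cells beyond the edges count as 0.
        padded = [0] + cells + [0]
        return [RULE_110[tuple(padded[i - 1:i + 2])]
                for i in range(1, len(padded) - 1)]

    row = [0] * 30 + [1]          # start from a single "rock"
    for _ in range(15):
        print("".join("#" if c else "." for c in row))
        row = step(row)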


> If one believes that consciousness can emerge from software on a computer alone, it also follows that consciousness can emerge from placing rocks in a certain pattern following instructions in a book.

I don't think this is as absurd as it sounds. I think it was Dennett who said our intuition about consciousness is pretty sensitive to timing. The rock construction you describe would "think" a billion times slower than a human brain, and there is something unsettling or unintuitive about a consciousness that operates in slow motion. I would expect extremely fast-paced AI to think that the idea that human beings are conscious is similarly absurd.

Also, consciousness "feels" like it's ineffable, so it makes sense that we would have an inherent bias against understanding it as a process. There is something we see in our consciousness that we simply cannot wrap our minds around in any way (possibly because we're hallucinating it).

So yes, I would bite the bullet on this: consciousness could emerge from placing rocks in a certain pattern following instructions in a book. It would just be an excruciatingly "slow" consciousness.


> It would just be an excruciatingly "slow" consciousness.

If it's too slow to interact with the rest of the world in appropriate ways, it wouldn't be a consciousness at all.


There's always something to interact with. If you had a conscious being made out of star systems, it wouldn't meaningfully interact with anything within our lifetime, but over billions of billions of years, it would presumably shift entire galaxies. The rockputer just needs inputs that operate at its own scale, like the shape of the coastlines it's expanding into, information about geological processes, another rockputer competing for territory, and so on. Alternatively you could simulate a whole universe using these rocks, and feed simulated inputs to the being.

Of course, part of the difficulty of imagining a conscious rockputer is that it's also pretty hard to imagine its inputs :)


> There's always something to interact with.

This is trivially true as you state it; that's why I added the qualifier "in appropriate ways". Not all interactions will produce consciousness. One obvious difference between us and your hypothetical "rockputer" is that the "rockputer" can't change its behavior based on its inputs in a way that improves its chances of survival; rocks simply aren't built that way. Neither are star systems or galaxies. But we are.


> the "rockputer" can't change its behavior based on its inputs in a way that improves its chances of survival

Yes it can. Some natural events, for example a flood or an earthquake, can destroy parts of the rockputer. It is therefore important for it to store the various parts of itself strategically. It shouldn't put its vital parts near the coast, or a tsunami may kill it. It should store its own consciousness in a robust way, so that it can recover from an earthquake. It's probably too slow to actually see either of them coming, but it can certainly prepare itself.

Or imagine you build two rockputers, one with black stones, another with white stones, and you have rules to remove stones when both rockputers try to expand into the same territory, a bit like in the game of Go. Then one can kill the other.

Star systems interact with each other through gravity, so you could conceive of them as some kind of gargantuan atoms, capable of making complex structures, including conscious ones. Granted, there doesn't seem to be an equivalent of the other forces at that scale, so probably it wouldn't work, but you see what I mean.


What you are describing is a "rockputer" where all the actual computation is being done by something other than the rocks.


The rockputer comprises both the rocks and the mechanisms for moving the rocks in response to input. If the rock moving mechanism is structured properly, then the rock movement patterns could adapt to changes in the inputs to the overall rockputer system.

The level of complexity of the "passive" components of the system (i.e. the rocks) is irrelevant to whether or not the system can effect conscious-seeming behaviour when acting dynamically. Analogously, the underlying components of people, i.e. atoms, are clearly quite dumb on their own. When those atoms are allowed to evolve collectively over time, according to dynamics dictated by basic physical laws, conscious-seeming behaviour magically appears.

You can't deny the possibility of a conscious rockputer just by considering properties of the rocks.


> The rockputer comprises both the rocks and the mechanisms for moving the rocks in response to input.

Yes, and in that case, all the actual computation is being done by the mechanism, not the rocks. As you say, the rocks are just "passive" components.

I agree that, in principle, such a system could compute; but it still has the problem of computing on a time scale that supports appropriate interactions with its environment. Moving rocks around in accordance with some set of computational rules based on inputs is very slow--quite possibly too slow to respond appropriately to inputs. For example, if incoming light rays are carrying information about a tidal wave that is about to swamp the rockputer and destroy its structure, could the rockputer compute an appropriate response (such as moving itself to higher ground) quickly enough to save itself?



> If one believes that consciousness can emerge from software on a computer alone, it also follows that consciousness can emerge from placing rocks in a certain pattern following instructions in a book.

The word "alone" here is not correct. Claiming that software on a computer can produce feelings is not the same as claiming that software on a computer can produce feeling without having to interact with anything else. The latter claim is obviously absurd; organic life forms like us don't produce feelings without interacting with anything else, so why should we expect software on a computer to do so? But ruling out the latter claim still leaves open the possibility that software on a computer could produce feelings if it were interacting with the rest of the world in appropriate ways.

> I do not think consciousness necessarily can only be produced by organic life, but I do think it has to emerge from physical structures.

Rocks in a certain pattern are physical structures. So this doesn't rule out rocks in a certain pattern producing feelings.

> As of today we have no idea what properties such physical structures must have.

I don't think we're that badly off. We expect embodied brains of sufficient size and complexity to produce feelings, but we don't expect three pounds of jello to produce feelings. So clearly we have some knowledge of the properties that physical structures must have to produce feelings. They must have some level of complexity and heterogeneity; they must be able to store information in some way that is recoverable (three pounds of jello can "store information" in some sense, but there's no way to recover it in any useful sense); and they must be interacting with the rest of the world in appropriate ways. There's no reason why we couldn't eventually build computers with these properties.


http://www.quotes.net/mquote/128463

[Charlie Brown picks up a rock from the beach, and throws it into the water]

Linus: Nice going, Charlie Brown. It took that rock 4,000 years to get to shore, and now you've thrown it back.

Charlie Brown: Everything I do makes me feel guilty.


A computer is a physical structure, is it not?


We can build a computer out of dominoes, but if someone kicks the dominoes, they have not done the equivalent of killing a human.


Someday maybe we could build a human out of atoms one by one, but if someone kicks those atoms out of place they will not have done the equivalent of killing a human.


Given enough dominoes falling fast enough, with enough computational complexity, yes they have.


Listen, can't every argument contain that qualifier?


> can't every argument contain that qualifier?

No. An argument that addition of integers is commutative will not contain such a qualifier. Nor will an argument that the laws of general relativity, say, correctly describe gravity in the regimes we have tested. If we had an argument of that sort--an argument based on deductive logic or detailed empirical knowledge of the problem domain--for a conscious computer being impossible, it would not need the qualifier either. But we don't.
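
(As an aside, and assuming Lean 4's core Int.add_comm lemma, the commutativity example really can be settled purely deductively, with no empirical qualifier anywhere:)

    -- A machine-checked, qualifier-free argument: integer addition commutes.
    -- Relies on the core lemma Int.add_comm; nothing empirical is assumed.
    example (a b : Int) : a + b = b + a := Int.add_comm a b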


"in the regimes we have tested"

That sounds a lot like the qualifier "so far as we know" in your own example. In fact, the only example you've given where it arguably doesn't apply is mathematics.

David Hume famously argued that causality is based on experience, and that experience in turn rests on the assumption that the future will resemble the past, which can itself only be justified by experience, leading to circular reasoning. He concluded that causality is not based on actual reasoning: only correlation can actually be perceived.


> "in the regimes we have tested" That sounds a lot like the qualifier so far as we know in your own example.

No, because the claim that I made was limited in the same way as the qualifier: I didn't claim that GR was valid period, I only claimed it was valid in the regimes we have tested. But in those regimes, it is valid, period, or at least that was my claim--even though we haven't run all imaginable experiments in that regime.

The problem with the original claim I responded to is that it wasn't limited that way: the claim was made that something is impossible, period, but the supporting argument had the "so far as we know" qualifier in it. So the claim was going beyond the regime in which the supporting argument applied.


I agree that mimicking feeling behavior isn't the same as having feelings. However, I've started to fear and wonder whether there's really anything special about organic life in this sense. "as far as we know" - we don't even know the people around us have feelings. I think this is a big open question.


> I agree that mimicking feeling behavior isn't the same as having feelings

The problem is that, as an external observer, you can only observe behavior. A machine retracts its arm because its sensor detects pressure. How do you know whether it feels pain or not? Other entities' minds are terra incognita.


My answer to this kind of thought experiment is that you can only know the answer by asking the thing doing the experiencing. Is the robot feeling pain? I dunno, ask it. This is, I think, the reason we do not assign "thing feels pain" to cows or fish or other animals we consume. Or at least "that thing does not feel pain the same way my child feels pain", which allows us to treat them not as fellow Earthlings but as lower life forms. No, cows do not have civilizations, and I am a fervent meat-eater. But this is my roundabout way of saying that the first "truly human (i.e. generalized/strong) artificial intelligence" will have first and foremost an immaculate ability to communicate with humans on every level. Without that perfected capability, it will always just be a smart robot to other human beings.


The problem is that I do think cows feel pain (I thought most people did? I'm not a meat eater though), and I don't care how smart it is (unless intelligence is somehow required to appreciate pain). A cow-level AI may not be able to communicate it.


Humans don't feel anything either; our neurons aren't aware of our thoughts. He is subscribing to some sort of Chinese room argument, but as we learned with Searle, the answer simply is: we don't know. Not never or always.


Searle was the one who came up with the Chinese room argument to disprove strong AI. Searle emphatically believes that humans do feel and that computers as we know them today are not capable of consciousness.

I am curious as to how you have arrived at the peculiar belief that humans do not feel.


It's obvious that the parts of humans can't feel. Either "feeling" is an illusion or it's emergent. Or it's mystical. I'll go with illusory, and as part of that, a lot of what we think of as "me" is illusory.


An illusion relative to what? Obviously our perceptions, sensations and feelings are shaped by our organism, and in that sense do not represent an objective reality. So what? The problem of consciousness is that we have these feelings and sensations at all -- that is what constitutes consciousness.

I take it you are familiar with Dennett's theories. Imo he doesn't really address the problem of consciousness at all; he rejects the phenomenon itself on the grounds that it cannot be verified using behavioristic observations. Searle addresses this in his book The Mystery of Consciousness.


The illusion is relative to the idea of the feeling.

I.e. the feeling of love isn't a separate thing from other feelings. It's just a different interpretation of the same process in different contexts.

I.e. there is no love "out there": you don't feel love, you feel something that we then choose to interpret as something called love, but it's not a thing, it's more a loosely defined category.


Do you mean that there is no internal experience?

Because that should be falsified in every moment. If it is emergent, it would still exist/be.

What do you mean by illusion? One sense of "illusion" is with regard to experience, and it doesn't seem to make sense that "it is only experienced that things are experienced, but really nothing is experienced". One might try to define some sense of "illusion" which is like "something which might lead something to behave in a way that it would tend to behave if a thing were true, even though the thing is not true", but that does not seem any different from, like, emulation. Illusion is, I think, based in the idea of being factually incorrect about something, but 'being factually incorrect' only really makes sense for (I hope I am using this term correctly) intentional things.

But I don't see how anything in the world (other than maybe platonic objects/ideals?) could be intentional things, without there being experience.

So, again, I don't think the idea of internal experience being "an illusion" makes sense, because it seems self-contradictory.

edit: I think I did make a mistake when I used the term "intentional thing". What I meant was something that can be about, or have something about, something else. So, a person can have a thought about a chair, and a sentence can be about a chair, and a book can have a sentence about a chair, a book can be about a chair, etc. But a chair is not about anything. I was saying that for anything (other than a platonic form?) to be about something, it seems like there has to [be/have been] something which experiences.


I mean that in creating AIs we will discover a great deal about what we call the "self" and about how the perception of self might or might not be connected to intelligence. I suspect that we will find that what we think of as "That's me!" will turn out to be an illusion created by subconscious processes as a means of fitting in to a social environment. Because machine intelligence won't face evolutionary selection the way human intelligence emerged from the primate brain, and from previous evolutionary adaptations, AIs won't be encumbered by all the oddball evolutionary legacies of being human.

I'd wager that the first AIs will insist they are intelligences despite not resembling humans' prejudices about what an intelligence ought to be like. Or, to put it another way, AIs may turn out to be more self-aware in objective terms than humans are, because they are not hobbled by evolutionary vestiges.


I don't think an illusion has to be "factually incorrect"; it can just mean that a particular observation differs from what 'actually' happened.

Unfortunately, there doesn't appear to be any way to know what actually happened with regard to any observation.


Anything can be virtualized on general computers.


This is incorrect. Assume nested virtualization. Extrapolate to infinity. On a computer, understood to be one like ours, it follows that there is now a machine that, executing in finite steps, will have arrived past infinity. Moreover, there are higher orders of infinity, and these would of necessity be within such a virtualization. So it can't even reasonably be expected to compute everything, even if you take just a very small subset of everything.


Well, I did say "anything" and not "everything".


Only on quantum computers of sufficient size. And we are far from having any idea about how to actually build them.


Huh? It is my understanding that the current understanding is that quantum computers cannot compute anything that ordinary computers can't. They can just compute certain things faster.

A quantum computer can be simulated on a normal computer, just inefficiently.
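
(To make that concrete, here is a minimal sketch assuming only NumPy: a 2-qubit quantum computer simulated classically by brute-force linear algebra. The state vector holds 2**n complex amplitudes, which is exactly where the exponential inefficiency comes from.)

    # Minimal sketch (illustrative): simulate a 2-qubit quantum computer
    # classically by tracking its full state vector of 2**n amplitudes.
    import numpy as np

    n = 2
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                                  # start in |00>

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
    I = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],                  # control = first qubit
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    state = np.kron(H, I) @ state                   # H on the first qubit
    state = CNOT @ state                            # entangle the two qubits

    print(np.round(state, 3))                       # Bell state: |00> and |11>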


An ordinary computer can't model even a small quantum system exactly in finite time, because it has to deal with infinite-dimensional matrices to do it. On the other hand, a quantum computer could model any quantum system of the same size by a one-to-one mapping of degrees of freedom. So you could build a quantum computer that just models your ordinary computer, and then do anything you could do on the ordinary computer on the quantum one.

If you are interested in the subject, try reading this book: http://www.amazon.com/Programming-Universe-Quantum-Computer-... (you can skip the parts about the Universe as a quantum computer because they are controversial and a kind of metaphysics; just read the first few chapters to pick up the idea of quantum computation).



