Yeah, sure. What's so hard about that? XKCD made a comic about something similar: https://xkcd.com/505/.
It's unintuitive, sure, but that's just because of the enormous size of the cardboard you'd need to do it. Human intuition is pretty bad outside of its human-sized comfort zone.
The hard part is coming to grips with consciousness merely being an abstract or concrete system changing states and nothing more. If rocks on the ground can be (slowly) self-aware, then doesn't that mean everything in the universe is self-aware?
For example, say I throw a bucket full of sand on the ground. Then, later, I define an incredibly complex abstract computer, such that the sand falling out of the bucket and shifting along the ground exactly represents a self-aware AI within my defined computer going through its states. Does that mean that as I was pouring the sand on the ground, the sand was momentarily self-aware?
Are you aware of your cells (as in, the individual cells, not "my cells" as a concept)?
Are your cells aware of you (as in, the you that is contemplating these questions, not the adjacent cells that make up part of you)?
For me, the answer to the first is no, and I strongly suspect the answer to the second is also no.
So I don't believe that the grains of sand would be aware. But it is possible that "entity comprised of sand granules" is aware, just as "entity comprised of carbon-based cells" is aware.
Basically it's all physics: rules for how particles interact and the energy flowing through them. Now, am "I" in control of making the decision to do one thing and not the other? Or was it inevitable that the decision was made, as that was the only possible outcome of particles interacting in my brain at that moment? I am stuck on that thought.
Whether or not it was the only possible outcome is a question of determinism, which at the quantum level looks like it should be answered with no. That doesn't mean, however, that the outcome is random; it rather depends strongly on the sum of all of the life experiences that have influenced your brain's wiring, all the way since the second trimester of your mother's pregnancy. In that sense, your decisions are uniquely yours.
Whether or not that was actually "you" is a different question, but since "you" don't actually exist as an independent agent (despite your feelings to the contrary), it's a moot point to discuss.
> If weird assemblages of water, carbon, calcium and trace minerals can be self-aware, why not?
Because humans (and indeed all life) have "spirits" which can think independently of the body, and that's the true reason we are conscious/self-aware. The body is merely a glove for the spirit. Before birth and after death your spirit still exists and is self-aware (don't take my word for it; try it out - when you die, think to yourself "wow, I'm still conscious without a body!"). Spirits are created by God beings from intelligences (which is a spiritual resource that can be neither created nor destroyed). Gods are created by granting humans more knowledge and powers. The progression is this: intelligence -> spirit -> human (body + spirit) -> God. However, not everyone makes it to God status since Gods choose which humans are to be granted further knowledge and power. And people that have been really bad can be stripped of their body and spirit to have their intelligence recycled back into the immutable pool.
Hence Strong AI is impossible, since we aren't able to (and don't know how to) mine/utilize intelligences (the immutable spiritual resource) like God does.
Which means that the brain must be able to transmit information to and from the spirit. Please, by all means, point us to a plausible mechanism for such a thing.
You can ask this same question about transmissions between one's subjective experience and one's neurons, and the only serious difference between sam's answer and modern answers is that sam makes an unwarranted number of assumptions, and as such must be cut away with Occam's razor.
If you really want to feel that you're qualitatively better than sam, you have to bite the bullet and go for full Dennett-style functionalism, effectively coming out as a p-zombie, which I personally find quite unpalatable. In the other direction lies lots of other equally weird stuff, like modal realism and panpsychism.
> sam makes an unwarranted number of assumptions, and as such must be cut away with Occam's razor.
I agree with you that the logical thing to do with my answer is to cut it away with Occam's razor, and I don't blame you for it.
However, it should be noted that Occam's razor is a general rule of thumb, not some infallible law.
If you were handed a piece of paper with a high level description of how the universe truly works, would you recognize it as the truth or dismiss it as the ramblings of a lunatic?
If you went back in time even a few hundred years and handed the most brilliant thinkers of the day a piece of paper containing just one paragraph about general relativity, how many of those slips of paper would survive Occam's razor?
You're missing the fundamental part about empirical testing.
In the slightly paraphrased words of Dara O'Briain: "Science knows it doesn't know everything, but that doesn't mean you can just 'fill in the gaps' with whatever comes to mind". Claiming that we couldn't recognise an advanced description of our world doesn't mean you can then jump in and say "... and so it's this!".
Likewise, if you were to hand Isaac Newton a paragraph explaining general relativity, something beyond his ability to measure even if he could conceptualise it, there would be no point applying Occam's Razor, because it's not describing something he can detect in the first place.
> Likewise, if you were to hand Isaac Newton a paragraph explaining general relativity, something beyond his ability to measure even if he could conceptualise it, there would be no point applying Occam's Razor, because it's not describing something he can detect in the first place.
Right, so either way, the slip of paper doesn't survive.
I'm not claiming my worldview is empirically testable or even verifiable. I just thought it would be fun to write it down in the off chance that it is true, so that people can later amuse themselves in hindsight.
If you were a scientist in the 1600s, and you read in the newspaper in the anonymous comments section that someone thought that "Time slows down the faster you travel. Time and space are two orthogonal dimensions of the same invisible fabric.", wouldn't it be funny for you to later find out that the random stranger who gave you that forgettable, idiotically insane theory in the paper actually somehow had astonishing insight?
But before anyone took relativity seriously, two things happened:
1. A constant speed of light independent of any reference frame turned out to be a logical consequence of the empirically-verified laws of electromagnetism, and
2. Measurements of the speed of light confirmed that it was constant in all reference frames we were capable of measuring.
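(For reference, point 1 is just the textbook vacuum result: Maxwell's equations combine into a wave equation whose speed is fixed by the constants of free space, with no reference to any observer's motion. A minimal sketch, using nothing beyond the standard vacuum equations:)

    \nabla \times (\nabla \times \mathbf{E}) = -\frac{\partial}{\partial t}(\nabla \times \mathbf{B})
    \;\Rightarrow\; \nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \, \frac{\partial^2 \mathbf{E}}{\partial t^2},
    \qquad c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3 \times 10^8 \ \mathrm{m/s}

Since \mu_0 and \varepsilon_0 are properties of the vacuum rather than of any particular frame, the predicted wave speed carries no reference to the observer, which is exactly the oddity that the measurements in point 2 then confirmed.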
Where is the corresponding theory and empirical observations to support dualism?
> If you went back in time even a few hundred years and handed the most brilliant thinkers of the day a piece of paper containing just one paragraph about general relativity, how many of those slips of paper would survive Occam's razor?
I'm pretty confident Newton would get it. He could've verified the speed of light against observations of the moons of Jupiter. He knew about some puzzling aspects of magnets, and was theorising about magnetism while working on optics.
Of course, much better than telling people the answer is telling them which threads to pull, which of the many confusing phenomena in the world bears more investigation.
Your perspective is all wrong here. You can't have a high-level description of how the universe works until it is interpreted as such. Thus if someone gave you a piece of paper, there are far more available interpretations than the two you propose. Most likely, you would simply recognize it as something you don't understand. Like, say, exactly what most people do when you hand them a book on QFT. They will neither admit it to be true nor lunacy, since they do not have the capacity to judge it as either.
If you had a book that explained how the universe worked, it would either have to explain it in a step-by-step fashion in unambiguous language for your specific consumption, or you're inappropriately putting that spin on it entirely by your own volition.
> If you were handed a piece of paper with a high level description of how the universe truly works, would you recognize it as the truth or dismiss it as the ramblings of a lunatic?
Using false equivalence to convince people that your religious beliefs are correct is unscrupulous to say the least. Your arguments do not prove your claims. It would be better to simply claim up front that it takes a certain amount of faith...then proceed with your story.
This. It's the ugly truth, but it's best to swallow hard, man up, and just straight up admit it. I know nothing, you know nothing, nobody knows anything. Talking about it, pardon me, out of your ass is pointless and just shows how much we have the instinct to control and understand things, even things that are out of our control and understanding. Explains religions, belief in afterlife, belief in a soul, and similar things.
That, and call bullshit when someone tries to convince you of a position shielded by meaningless phrases, obfuscated by scientific words, that just happen to fit exactly in the hole left by our current understanding and that you'll have to wait for death to confirm but they have, somehow, the ability to know.
Bonus points for telling them to f off if they try to coerce you (or anyone really) by scaring you into believing their nonsense with punishments, in this life and the next one(s).
Isn't that a category mistake? One's subjective experience *is* the action of one's neurons. The alternative is, with sam, to grant the existence of an immaterial soul.
That isn't really explanatory to me though. It's like identifying your running operating system with the patterns of electron spins in your hard disk/RAM: while they may be identifiable with each other, that doesn't mean you can derive useful knowledge about one from the other without a huge amount of other information which has little to do with the hard disk itself, and isn't really encoded in it.
The amount of information we need to derive consciousness from neurons could be enormous. My (utterly uninformed) guess is that there are huge frontiers of information theory which need to be crossed first. Similarly to how the discovery of entropy revealed new kinds of knowledge about physical systems, and new ways of obtaining it.
While physics definitely does not have any hiding place for the soul (the relevant energy scales have all been explored), the interface between biology and information theory is largely unknown, and potentially limitless.
There is something called Integrated Information Theory which attempts to do that kind of thing, but it doesn't seem to be considered very credible by the experts.
I understand the downvotes for retina_sam's comment, given the number of unproven ideas it asserts, but as unlikely as it seems in our materialistic view of the universe, I think it's important to not lose sight of the possibility that maybe there really is something special, something non-physical, about awareness. It is really poorly understood, and I still have not seen an adequate explanation for why I can perceive things and be aware of them.
"Awareness is an illusion", as proposed by some thinkers, certainly doesn't do it for me, because there's still something that has to perceive that illusion. It's got to be an emergent property of sufficiently complex logical processing, but how does it emerge? What is it really? I feel like I'm still missing a vital step.
The lack of an adequate explanation does not demand the need for a non-physical explanation.
I think that because consciousness is part of the core understanding of ourselves, we're drawn to a non-physical and spiritual explanation.
But as a phenomenon, the conscious mind is no different from other observed phenomena.
The rational approach would therefore be to expect an explanation that can be provided by our physical environment just as we do with any other unexplained phenomenon we observe.
I have never understood the argument that "observation" is something special and that it must be taken into consideration. In fact, is it really the brain that is observing a phenomenon or is it something else?
In the double slit experiment, it really isn't the brain, is it? It's the screen you place some distance away from the slits, or the electron detector if you place it in one slit only. The fact that "an outside observer" only sees the pattern made by these detectors later and may reason about them is entirely unnecessary to the experiment itself and to the collapse of the wave function. If you leave the results of a double slit experiment lying overnight, has the wave function not collapsed for longer? As such, the "observers" in these experiments are devices, just like in most other experiments.
Just as the infamous cat is an observer of a quantum system, for whom the wave function has collapsed potentially much earlier than for an outside observer. A wave function collapse is in that case much more a statement about information state than about the state of reality, which is the exact point of quantum mechanics, in that there must be a distinction now between "observers" (which may be single atoms, mind!) inside the sphere of influence of an event (whether that event is bounded by physical barriers or a light cone is irrelevant, by the way). [Note that Schrödinger brought up the cat example exactly to point out that the world does not exist in a blurry double-state way even if unobserved.]
The same is true for most psychological observations, which get recorded by a computer, by a pen or by an undergrad. The fact that someone else observes these observations doesn't change the phenomenon in any reasonable way.
One more analogy: we now have computers that are entirely capable of recording/observing phenomena by themselves, some of them internal to themselves even, where we have no other way of recording such phenomena. If you run an A/B test on a web site, let it run for long enough, then look at your analytics page and it says to accept one hypothesis with 99.99% probability, then you have exactly the same situation as if you had done the experiment through other people and published it as a paper. Except, of course, no human mind has "observed" the phenomenon, neither directly nor indirectly, until you opened the results page. So what it comes down to is this: did the result exist before you read the results page? I would answer yes.
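To make that concrete, here is a minimal sketch of the kind of calculation the analytics backend would run entirely on its own (the numbers and variable names are made up for illustration; a real system would have its own machinery):

    # Two-proportion z-test on hypothetical A/B conversion counts.
    # All figures below are illustrative assumptions, not real data.
    from math import sqrt
    from statistics import NormalDist

    conversions_a, visitors_a = 1200, 10000   # variant A (made up)
    conversions_b, visitors_b = 1320, 10000   # variant B (made up)

    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)

    # Standard error under the null hypothesis that both variants convert equally.
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se

    # One-sided p-value: chance of a difference at least this large under the null.
    p_value = 1 - NormalDist().cdf(z)
    print(f"z = {z:.2f}, one-sided p = {p_value:.4f}")

The z and p values are fixed the moment the data are in; whether a human ever loads the results page adds nothing to the computation.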
All of this is a long way of saying that IMHO the observation of a phenomenon does not change the phenomenon itself.
> "Awareness is an illusion", as proposed by some thinkers, certainly doesn't do it for me, because there's still something that has to perceive that illusion.
Maybe the illusion is just perceiving a past/previous instance of itself -- which further sustains the illusion? The recursion obviously bottoms out at the point of emergence, if we assume that consciousness is an emergent phenomenon.
(Don't take this too seriously -- I just think it's fun to think about these things.)
Roger Penrose wrote an interesting (if unavoidably controversial) book about these questions, "The Emperor's New Mind: Concerning Computers, Minds and the Laws of Physics".
I would say no. Even if awareness is of divine origin, you could still simulate an effect that, from an outside perspective, is indistinguishable from the real thing. The computer doesn't have to be truly aware in the way we are in order to present itself to us as if it is.
> But... Strong AI can be achieved, if nothing else by emulating a brain synapse by synapse. Therefore god doesn't exist?
Isn't this type of thinking a form of cargo cult science, though? According to Facebook's AI director [1]:
> The equivalent [of cargo cult science] in AI is to try to copy every detail that we know of about how neurons and synapses work, and then turn on a gigantic simulation of a large neural network inside a supercomputer, and hope that AI will emerge. That’s cargo cult AI. There are very serious people who get a huge amount of money who basically—and of course I’m sort of simplifying here—are pretty close to believing this.
At any rate, that's a theory you have there (that simulating neurons results in emergent AI); now you have to prove your theory before I will admit God doesn't exist. I contend that without a spirit, no such AI will emerge.
If that's a theory he has there, then you imply god and the spirit are not a theory. If that is the case, please provide proof. Quite literally the entire human race would be interested.
> If that's a theory he has there, then you imply god and the spirit are not a theory
That's just lazy. You can't shift the burden of proof onto me when OP was the one making the extraordinary claim (that sufficiently sophisticated synapse simulations are guaranteed to result in emergent strong AI). If you want to demand proof of my claims, you'll need to post your demands to the comment in which I made the extraordinary claim.
The existence of god is an extraordinary claim. The claim that replicating a brain at the molecular level will reproduce the behaviour of a brain is not an extraordinary claim. It's not even a claim, it's a fact.
Not really a fair comparison - the islanders of the 'cargo cult' were trying to entice aircraft full of cargo down by recreating what the explorers did - building towers and runways out of palm leaves, and waving coconut 'radios' around.
But they didn't know aircraft were built by humans, what radios did, or that there was a huge society far away building aircraft and training people to fly them.
And how many aircraft did they ever see, and how long did they spend trying before they were invaded/educated/gave up?
If they had 7 billion aircraft, had seen them all built from scratch, and had a thousand years to investigate, would they have progressed further?
We know brains work, we believe they are completely grown and contained within a skull (no remote factories building them), and we keep exploring further and building better tools and gathering more data.
The roundworm C. elegans has one of the simplest nervous systems of any organism, with its hermaphrodite form having only 302 neurons. Furthermore, the structural connectome of these neurons is fully worked out. There are fewer than one thousand cells in the whole body of a C. elegans worm, each with a unique identifier and comprehensive supporting literature, because C. elegans is a model organism. Being a model organism, its genome is fully known, along with many well-characterized mutants readily available, a comprehensive literature of behavioural studies, etc. With so few neurons and new two-photon calcium imaging techniques, it should soon be possible to record the complete neural activity of a living organism. By manipulating the neurons through optogenetic techniques, combined with the above recording capacities, such a project is in an unprecedented position to be able to fully characterize the neural dynamics of an entire organism.
In trying to build an "in silico" model of a relatively simple organism like C. elegans, new tools are being developed which will make it easier to model more complex organisms.
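For a sense of scale, here is a toy sketch of what "in silico" means at its crudest (this is emphatically not the real project's model: the weight matrix, time constant and rate dynamics below are all stand-in assumptions, with a random matrix in place of the actual connectome):

    # Toy rate-model dynamics on a random stand-in for the 302-neuron connectome.
    # Every parameter here is an illustrative assumption, not measured C. elegans data.
    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons = 302                                   # hermaphrodite neuron count
    W = rng.normal(0.0, 0.1, (n_neurons, n_neurons))  # placeholder synaptic weights
    W *= rng.random((n_neurons, n_neurons)) < 0.05    # make the wiring sparse

    dt, tau = 0.01, 0.1                               # arbitrary step size and time constant
    state = rng.normal(0.0, 0.01, n_neurons)
    for _ in range(1000):
        # Leaky integration of weighted input from connected model neurons.
        state += (dt / tau) * (-state + np.tanh(W @ state))

    print(state[:5])  # activity of the first five model neurons

Replacing the random matrix with the measured connectome, and the toy dynamics with biophysically grounded ones, is exactly where the hard tool-building work lies.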
> I contend that without a spirit, no such AI will emerge.
A couple of questions seem to fall out of this:
1. If an AI would not emerge from a high-fidelity whole-brain simulation, then what, in your theory, would emerge? How would you characterise it?
2. How would you determine that what emerges is or is not intelligent? In other words, how would you determine whether or not your theory had been falsified?
3. If an AI actually had a spirit (adopting your definition), would you have a way to recognise this?
> Because humans (and indeed all life) have "spirits" which can think independently of the body, and that's the true reason we are conscious/self-aware.
Apart from the obvious "extraordinary claims require extraordinary evidence. You claim this, now prove it" angle, this has so many faults it can only be handwavium and wishful thinking.
Why would the spirit need a glove? Why would a glove need a spirit? Do roundworms and algae and cockroaches have spirits? Why couldn't a 'God' create a thinking being without using a spirit? Why is there conveniently a spirit-force that can supply anything from 10k humans to 7 billion humans with spirits? If it can't be created or destroyed, then what happens when it runs out? Where is it stored? How does it get from there to here?
What does it mean that the spirit can 'think' independently of the body? As far as we know, thinking is a matter of survival in the physical world - interpreting sensory data, outwitting predators and prey, remembering sources of food and water, outsmarting and outfighting competitors for social standing and mating rights. What use would a spirit have for any of this ability?
Nothing is perfectly efficient, so there will be waste heat in the process of information transfer from body to spirit, where is that measurable heat? What protocol does it use to transmit information? What forces and particles and transmitters and receivers? What power source? Where in your body are sandwiches turned into ectoplasm? Where in the spirit force are memories stored? What use are physical world memories after body death, why would a spirit want or need them? And if they're no use, what's the point in describing it as still 'you' after death?
If the brain can build spirit receiver mechanisms, then they're made out of physical things in the physical world. So where's the chunk of physics (or the gap) which covers them? Where's the Faraday cage that can block them? If brains can grow them, then lab-grown brain tissue can, and human machines can, so we could plausibly build AI / thinking machines and electromechanical/cyborg Gods one day. That's completely the opposite of you saying it's "impossible", since it's still just manipulation of matter, which we are getting ever better at. Take over the communication mechanism and pump adverts and malware into it, right? False enticing memories.
If you were going to invent anything which is not proven, and claim it's the truth, isn't it just so convenient that it's a) based on faith, b) fixes (or nullifies) all earthly suffering, c) addresses all fear of death and illness, d) includes an untestable but scary justification for why you should do as you're told (and not be 'bad') and e) has /literally no compromises or downsides/?
If that's not too good to be true, I don't know what is.
"Spirits are created by God beings from intelligences (which is a spiritual resource that can be neither created nor destroyed)."
"Woo-woo is created by WOW from handwavium (which is a woo-woo resource that {science words to try and sound authentic}"
> The hard part is coming to grips with consciousness merely being an abstract or concrete system changing states and nothing more.
But that's no different whether it's a computer, marbles on cardboard, or stones on a beach. If one of them can become aware at sufficient complexity, then logically, so could the others. Though it's easier to believe with computers, because they already are so complex that they're mostly black boxes to us, whereas stones on a beach just aren't feasible in reality.
> If rocks on the ground can be (slowly) self aware, then doesn't that mean everything in the universe is self-aware?
Maybe it does. It's an interesting thought. This also goes back to the idea of the self-aware anthill in Hofstadter's _Gödel, Escher, Bach_ (where the ants are not aware, but the anthill is). Or maybe awareness is an illusion, as some people claim (but then what is perceiving that illusion, is what I'd like to know). Or maybe computers can't be aware and we can.
> Then, later, I define an incredibly complex abstract computer, such that the sand falling out of the bucket and shifting along the ground exactly represents a self-aware AI within my defined computer going through its states.
Is it possible though? There is likely no equivalence between all arbitrary dynamical systems, including those with very high entropy such as a bucket of sand, and systems capable of performing what we call computation.
Otherwise building a computer would be much easier than it currently is.
"I define". There's your problem right there. "Exactly represents" is your other problem.
Your description paints a picture of high entropy (and wonders how that could be the basis for computational consciousness), but once you have defined an exact representation, you no longer have any entropy in your system. You just have a circuit operating or, if it's truly alive, a metabolism at work.
You should read "Gödel, Escher, Bach". It's an incredibly enjoyable, though quite dense, exploration of how intelligence can possibly arise from non-intelligent things.
...and of how that intelligence/consciousness/whatever would exist on a level "above" the machine, so to speak. And that's where the confusion and arguments usually pop up: it's not that the machine (or slab of meat) is self-aware as much as there is a self-awareness running on the machine, and there's a reasonable chance that the consciousness is not understandable, even in principle, from an examination of the parts that enable it (whether physical or process).
> say I throw a bucket full of sand on the ground...
What happens if I homomorphically encrypt the AI and delete the key? What happens if I run the AI in a quantum computer such that the algorithm runs only if a particular atom decays, which I then choose not to observe? etc etc. Scott Aaronson gives talks about these things if you're interested.
If you buy into materialism, the idea that the mind arises from the brain through physics alone and nothing else, then your mind is able to be simulated on a big enough computer, which means that consciousness can be implemented by a machine shuffling around stones on an infinite desert.
(To be fair, materialism is not certain, though there's a lot of circumstantial evidence.)
In this view, consciousness is a class of computations. If your thrown bucket of sand does the same computations, then it would be conscious. Objects that don't aren't.
Are you trying to argue that for any way sand can fall out of your bucket, there's a way of mapping those falls to any computation you please so that any physical action can represent any consciousness?
> If you buy into materialism, the idea that the mind arises from the brain through physics alone and nothing else, then your mind is able to be simulated on a big enough computer
It's not a joke. It's literally someone on the verge of a psychotic break from reading too many LessWrong blogposts. Those folks aren't only into game-theoretic polyamory and shitty Harry Potter fanfiction, far from it.
Just because it isn't a joke doesn't mean it isn't hilarious though. Look into Roko's basilisk, if you dare!
> If rocks on the ground can be (slowly) self aware, then doesn't that mean everything in the universe is self-aware?
This is a formal logical fallacy. If we accept the antecedent "rock does not imply not-conscious", your consequent of "rock implies conscious" does not follow.
> For example, say I throw a bucket full of sand on the ground. Then, later, I define an incredibly complex abstract computer, such that the sand falling out of the bucket and shifting along the ground exactly represents a self-aware AI within my defined computer going through its states. Does that mean that as I was pouring the sand on the ground, the sand was momentarily self-aware?
You're assuming that it is possible to define a mapping between the mechanics of the falling-sand dynamic system and a dynamic system that satisfies a definition of "self-aware". It is difficult to argue this in any meaningful sense because there is no satisfactory formal definition of consciousness in existence. I strongly suspect that if such a definition did exist, the falling-sand dynamic system would not satisfy it, but if it did, then yes, the falling sand is conscious ;)
Drifting off topic, but something about that comic is confusing. Surely by putting down rocks to represent the computation he is not simulating the Universe, only describing a simulation of it. In the case of the cardboard and marbles there is a physical process by which the program is executed; in this case the only way it is executed is in the mind of the guy. Or is that the point?
One row represents one instant of "the machine". By adding another row below it, the human is doing the execution very very very slowly.
Actually, depending on the rules, there is no need for more than one row to be present at any time. When the human starts working on the next row, (s)he can remove the rocks from the previous row after using up its information.
"Depending on the rules" means the future state (next row) is completely defined by one previous state (one previous row). This still allows one to make a turing machine. An example where one previous state is not enough is Newtonian mechanics, where you need, not just previous positions, but also previous velocities, or, equivalently, the second to last row as well. (note: the physics in the comic is being simulated on top of the turing machine, so it's not a problem for this comic).
Minor nitpick: Someone who is smart enough to crack quantum mechanics, and build a "manual" computer when alone, should be smart enough to figure out a way to find a source of energy so the computation happens automatically.
> Minor nitpick: Someone who is smart enough to crack quantum mechanics, and build a "manual" computer when alone, should be smart enough to figure out a way to find a source of energy so the computation happens automatically.
Not to take this xkcd too seriously but: if he were truly there for infinity (that is, the duration of his existence is infinite), then he wouldn't be in any hurry. Another way to put it is that, in the steady-state sense of things, no means of computation is "noticeably" faster to him than any other (only relatively faster).
Consider that any time difference between events (e.g. the termination of the same calculation performed by different mechanisms) would be infinitesimal compared to the duration of his infinite existence. Also, interestingly, since he is there for eternity then his existence does not really have a start time or end time, as far as he's concerned.
If he wanted to have some real fun with infinity, he might kick off a couple of harmonics in the sand and watch them interact, as anything else - finite - cannot last.
I got the impression that he was simulating _our_ universe, not _the_ universe. In other words, the laws of physics, etc., in our universe might only exist as a rock simulation in some other universe with very different properties (unlimited space and time, etc).
The question you're asking is essentially the same as the Chinese room thought experiment. My interpretation is that the system of the stones plus the mind of the guy together make up the program, when neither alone can.
I think the simulation is as much in the rocks as it is in the head of the guy doing it. The rock layout would be infinite in size, but it doesn't all need to fit in the guy's head for him to compute it.