Thought experiment: if sentient AI is possible with nothing more than software, does that mean if you "load" the sentient AI program into a "computer" made of cardboard and marbles, that the cardboard and marbles will be self-aware?
Yeah, sure. What's so hard about that? XKCD made a comic about something similar: https://xkcd.com/505/.
It's unintuitive, sure, but that's just because of the enormous size of the cardboard you'd need to do it. Human intuition is pretty bad outside of its human-sized comfort zone.
The hard part is coming to grips with consciousness merely being an abstract or concrete system changing states and nothing more. If rocks on the ground can be (slowly) self-aware, then doesn't that mean everything in the universe is self-aware?
For example, say I throw a bucket full of sand on the ground. Then, later, I define an incredibly complex abstract computer, such that the sand falling out of the bucket and shifting along the ground exactly represents a self-aware AI within my defined computer going through its states. Does that mean as I was pouring the sand on the ground, the sand was momentarily self-aware?
Are you aware of your cells (as in, the individual cells, not "my cells" as a concept)?
Are your cells aware of you (as in, the you that is contemplating these questions, not the adjacent cells that make up part of you)?
For me, the answer to the first is no, and I strongly suspect the answer to the second is also no.
So I don't believe that the grains of sand would be aware. But it is possible that "entity comprised of sand granules" is aware, just as "entity comprised of carbon-based cells" is aware.
Basically it's all physics rules: how particles interact and the energy flowing through. Now, am "I" in control of making the decision to do one thing and not the other? Or was it inevitable that the decision was made, as that was the only possible outcome of particles interacting in my brain at that moment? I am stuck with that thought.
Whether or not it was the only possible outcome is a question of determinism, which at the quantum level looks like it should be answered with no. That doesn't mean, however, that the outcome is random; it rather depends strongly on the sum of all of the life experiences that have influenced your brain's wiring, all the way since the second trimester of your mother's pregnancy. In that sense, your decisions are uniquely yours.
Whether or not that was actually "you" is a different question, but since "you" don't actually exist as an independent agent (despite your feelings to the contrary), it's a moot point to discuss.
> If weird assemblages of water, carbon, calcium and trace minerals can be self-aware, why not?
Because humans (and indeed all life) have "spirits" which can think independently of the body, and that's the true reason we are conscious/self-aware. The body is merely a glove for the spirit. Before birth and after death your spirit still exists and is self-aware (don't take my word for it; try it out - when you die, think to yourself "wow, I'm still conscious without a body!"). Spirits are created by God beings from intelligences (which is a spiritual resource that can be neither created nor destroyed). Gods are created by granting humans more knowledge and powers. The progression is this: intelligence -> spirit -> human (body + spirit) -> God. However, not everyone makes it to God status since Gods choose which humans are to be granted further knowledge and power. And people that have been really bad can be stripped of their body and spirit to have their intelligence recycled back into the immutable pool.
Hence Strong AI is impossible, since we aren't able to (and don't know how to) mine/utilize intelligences (the immutable spiritual resource) like God does.
Which means that the brain must be able to transmit information to and from the spirit. Please, by all means, point us to a plausible mechanism for such a thing.
You can ask this same question about transmissions between one's subjective experience and one's neurons, and the only serious difference between sam's answer and modern answers is that sam makes an unwarranted number of assumptions, and as such must be cut away with Occam's razor.
If you really want to feel that you're qualitatively better than sam, you have to bite the bullet and go for full Dennett-style functionalism, effectively coming out as a p-zombie, which I personally find quite unpalatable. In the other direction lies lots of other equally weird stuff, like modal realism and panpsychism.
> sam makes an unwarranted number of assumptions, and as such must be cut away with Occam's razor.
I agree with you that the logical thing to do with my answer is to cut it away with Occam's razor, and I don't blame you for it.
However, it should be noted that Occam's razor is a general rule of thumb, not some infallible law.
If you were handed a piece of paper with a high level description of how the universe truly works, would you recognize it as the truth or dismiss it as the ramblings of a lunatic?
If you went back in time even a few hundred years and handed the most brilliant thinkers of the day a piece of paper containing just one paragraph about general relativity, how many of those slips of paper would survive Occam's razor?
You're missing the fundamental part about empirical testing.
In the slightly paraphrased words of Dara O'Briain: "Science knows it doesn't know everything, but that doesn't mean you can just 'fill in the gaps' with whatever comes to mind". Claiming that we couldn't recognise an advanced description of our world doesn't mean you can then jump in and say "... and so it's this!".
Likewise, if you were to hand Isaac Newton a paragraph explaining general relativity, something beyond his ability to measure even if he could conceptualise it, there would be no point applying Occam's Razor, because it's not describing something he can detect in the first place.
> Likewise, if you were to hand Isaac Newton a paragraph explaining general relativity, something beyond his ability to measure even if he could conceptualise it, there would be no point applying Occam's Razor, because it's not describing something he can detect in the first place.
Right, so either way, the slip of paper doesn't survive.
I'm not claiming my worldview is empirically testable or even verifiable. I just thought it would be fun to write it down in the off chance that it is true, so that people can later amuse themselves in hindsight.
If you were a scientist in the 1600s, and you read in the newspaper in the anonymous comments section that someone thought that "Time slows down the faster you travel. Time and space are two orthogonal dimensions of the same invisible fabric.", wouldn't it be funny for you to later find out that the random stranger that gave you that forgettable, idiotically insane theory in the paper actually somehow had astonishing insight?
But before anyone took relativity seriously, two things happened:
1. A constant speed of light independent of any reference frame turned out to be a logical consequence of the empirically-verified laws of electromagnetism, and
2. Measurements of the speed of light confirmed that it was constant in all reference frames we were capable of measuring.
Where is the corresponding theory and empirical observations to support dualism?
> If you went back in time even a few hundred years and handed the most brilliant thinkers of the day a piece of paper containing just one paragraph about general relativity, how many of those slips of paper would survive Occam's razor?
I'm pretty confident Newton would get it. He could've verified the speed of light against observations of the moons of Jupiter. He knew about some puzzling aspects of magnets, and was theorising about magnetism while working on optics.
Of course, much better than telling people the answer is telling them which threads to pull, which of the many confusing phenomena in the world bears more investigation.
Your perspective is all wrong here. You can't have a high-level description of how the universe works until it is interpreted as such. Thus if someone gave you a piece of paper, there are far more available interpretations than the two you propose. Most likely, you would simply recognize it as something you don't understand. Like, say, exactly what most people do when you hand them a book on QFT. They will neither admit it to be true nor lunacy, since they do not have the capacity to judge it as either.
If you had a book that explained how the universe worked, it would either have to explain it in a step-by-step fashion in unambiguous language for your specific consumption, or you're inappropriately putting that spin on it entirely by your own volition.
>If you were handed a piece of paper with a high level description of how the universe truly works, would you recognize it as the truth or dismiss it as the ramblings of a lunatic?
Using false equivalence to convince people that your religious beliefs are correct is unscrupulous to say the least. Your arguments do not prove your claims. It would be better to simply claim up front that it takes a certain amount of faith...then proceed with your story.
This. It's the ugly truth, but it's best to swallow hard, man up, and just straight up admit it. I know nothing, you know nothing, nobody knows anything. Talking about it, pardon me, out of your ass is pointless and just shows how much we have the instinct to control and understand things, even things that are out of our control and understanding. Explains religions, belief in afterlife, belief in a soul, and similar things.
That, and call bullshit when someone tries to convince you of a position shielded by meaningless phrases, obfuscated by scientific words, that just happens to fit exactly in the hole left by our current understanding, that you'll have to wait for death to confirm, but that they somehow claim the ability to know.
Bonus points for telling them to f off if they try to coerce you (or anyone really) by scaring you into believing their nonsense with punishments, in this life and the next one(s).
Isn't that a category mistake? One's subjective experience *is* the action of one's neurons. The alternative is, with sam, to grant the existence of an immaterial soul.
That isn't really explanatory to me though. It's like identifying your running operating system with the patterns of electron spins in your hard disk/RAM: while they may be identifiable with each other, that doesn't mean you can derive useful knowledge about one from the other without a huge amount of other information which has little to do with the hard disk itself, and isn't really encoded in it.
The amount of information we need to derive consciousness from neurons could be enormous. My (utterly uninformed) guess is that there are huge frontiers of information theory which need to be crossed first. Similarly to how the discovery of entropy revealed new kinds of knowledge about physical systems, and new ways of obtaining it.
While physics definitely does not have any hiding place for the soul (the relevant energy scales have all been explored), the interface between biology and information theory is largely unknown, and potentially limitless.
There is something called Integrated Information Theory which attempts to do that kind of thing, but it doesn't seem to be considered very credible by the experts.
I understand the downvotes for retina_sam's comment for the number of unproven ideas it asserts, but as unlikely as it seems in our materialistic view of the universe, I think it's important to not lose sight of the possibility that maybe there really is something special, something non-physical, about awareness. It is really poorly understood, and I still have not seen an adequate explanation for why I can perceive things and be aware of them.
"Awareness is an illusion", as proposed by some thinkers, certainly doesn't do it for me, because there's still something that has to perceive that illusion. It's got to be an emergent property of sufficiently complex logical processing, but how does it emerge? What is it really? I feel like I'm still missing a vital step.
The lack of an adequate explanation does not demand the need for a non-physical explanation.
I think that because consciousness is part of the core understanding of ourselves, we're drawn to a non-physical and spiritual explanation.
But as a phenomenon, the conscious mind is no different from other observed phenomena.
The rational approach would therefore be to expect an explanation that can be provided by our physical environment just as we do with any other unexplained phenomenon we observe.
I have never understood the argument that "observation" is something special and that it must be taken into consideration. In fact, is it really the brain that is observing a phenomenon or is it something else?
In the double slit experiment, it really isn't the brain, is it? It's the screen you place some distance away from the slits, or the electron detector if you place it in one slit only. The fact that "an outside observer" only sees the pattern made by these detectors later and may reason about them is entirely unnecessary to the experiment itself and to the collapse of the wave function. If you leave the results of a double slit experiment lying overnight, has the wave function not collapsed for longer? As such, the "observers" in these experiments are devices, just like in most other experiments.
Just as the infamous cat is an observer of a quantum system, for whom the wave function has collapsed potentially much earlier than for an outside observer. A wave function collapse is in that case much more a statement about information state than about the state of reality, which is the exact point of quantum mechanics: there must now be a distinction between "observers" (which may be single atoms, mind!) inside the sphere of influence of an event and those outside it (whether that sphere is bounded by physical barriers or a light cone is irrelevant, by the way). [Note that Schrödinger brought up the cat example exactly to point out that the world does not exist in a blurry double-state way even if unobserved.]
The same is true for most psychological observations, which get recorded by a computer, by a pen or by an undergrad. The fact that someone else observes these observations doesn't change the phenomenon in any reasonable way.
One more analogy: we now have computers that are entirely capable of recording/observing phenomena by themselves, some of them internal to themselves even, where we have no other way of recording such phenomena. If you run an A/B test on a web site, let it run for long enough, then look at your analytics page and it says to accept one hypothesis with 99.99% probability, then you have exactly the same situation as if you had done the experiment through other people and published it as a paper. Except, of course, no human mind has "observed" the phenomenon, neither directly nor indirectly, until you opened the results page. So what it comes down to is this: did the result exist before you read the results page? I would answer yes.
All of this is a long way of saying that IMHO the observation of a phenomenon does not change the phenomenon itself.
> "Awareness is an illusion", as proposed by some thinkers, certainly doesn't do it for me, because there's still something that has to perceive that illusion.
Maybe the illusion is just perceiving a past/previous instance of itself -- which further sustains the illusion? The recursion obviously bottoms out at the point of emergence, if we assume that consciousness is an emergent phenomenon.
(Don't take this too seriously -- I just think it's fun to think about these things.)
Roger Penrose wrote an interesting (if unavoidably controversial) book about these questions, "The Emperor's New Mind: Concerning Computers, Minds and the Laws of Physics".
I would say no. Even if awareness is of divine origin, you could still simulate an effect that, from an outside perspective, is indistinguishable from the real thing. The computer doesn't have to be truly aware in the way we are in order to present itself to us as if it is.
> But... Strong AI can be achieved, if nothing else by emulating a brain synapse by synapse. Therefore god doesn't exist?
Isn't this type of thinking a form of cargo cult science, though? According to Facebook's AI director [1]:
> The equivalent [of cargo cult science] in AI is to try to copy every detail that we know of about how neurons and synapses work, and then turn on a gigantic simulation of a large neural network inside a supercomputer, and hope that AI will emerge. That’s cargo cult AI. There are very serious people who get a huge amount of money who basically—and of course I’m sort of simplifying here—are pretty close to believing this.
At any rate, that's a theory you have there (that simulating neurons results in emergent AI), now you have to prove your theory before I will admit God doesn't exist. I contend that without a spirit, no such AI will emerge.
If that's a theory he has there, then you imply god and the spirit are not a theory. If that is the case, please provide proof. Quite literally the entire human race would be interested.
> If that's a theory he has there, then you imply god and the spirit are not a theory
That's just lazy. You can't shift the burden of proof on me when OP was the one making the extraordinary claim (sufficiently sophisticated synapse simulations are guaranteed to result in emergent strong AI). If you want to demand proof of my claims, you'll need to post your demands to the comment in which I made the extraordinary claim.
The existence of god is an extraordinary claim. The claim that replicating a brain at the molecular level will reproduce the behaviour of a brain is not an extraordinary claim. It's not even a claim, it's a fact.
Not really a fair comparison - the islanders of the 'cargo cult' were trying to entice aircraft full of cargo down by recreating what the explorers did - build towers, runways out of palm leaves, and wave coconut 'radios' around.
But they didn't know aircraft were built by humans, what radios did, or that there was a huge society far away building aircraft and training people to fly them.
And how many aircraft did they ever see, and how long did they spend trying before they were invaded/educated/gave up?
If they had 7 billion aircraft, seeing them all built from scratch, and a thousand years to investigate, would they have progressed further?
We know brains work, we believe they are completely grown and contained within a skull (no remote factories building them), and we keep exploring further and building better tools and gathering more data.
The roundworm C. elegans has one of the simplest nervous systems of any organism, with its hermaphrodite form having only 302 neurons. Furthermore, the structural connectome of these neurons is fully worked out. There are fewer than one thousand cells in the whole body of a C. elegans worm, each with a unique identifier and comprehensive supporting literature, because C. elegans is a model organism. Being a model organism, its genome is fully known, along with many well-characterized mutants readily available, a comprehensive literature of behavioural studies, etc. With so few neurons and new two-photon calcium imaging techniques, it should soon be possible to record the complete neural activity of a living organism. By manipulating the neurons through optogenetic techniques, combined with the above recording capacities, the project is in an unprecedented position to be able to fully characterize the neural dynamics of an entire organism.
In trying to build an "in silico" model of a relatively simple organism like C. elegans, new tools are being developed which will make it easier to model more complex organisms.
> I contend that without a spirit, no such AI will emerge.
A couple of questions seem to fall out of this:
1. If an AI would not emerge from a high-fidelity whole-brain simulation, then what, in your theory, would emerge? How would you characterise it?
2. How would you determine that what emerges is or is not intelligent? In other words, how would you determine whether or not your theory had been falsified?
3. If an AI actually had a spirit (adopting your definition), would you have a way to recognise this?
> Because humans (and indeed all life) have "spirits" which can think independently of the body, and that's the true reason we are conscious/self-aware.
Apart from the obvious "extraordinary claims require extraordinary evidence. You claim this, now prove it" angle, this has so many faults it can only be handwavium and wishful thinking.
Why would the spirit need a glove? Why would a glove need a spirit? Do roundworms and algae and cockroaches have spirits? Why couldn't a 'God' create a thinking being without using a spirit? Why is there conveniently a spirit-force that can supply anything from 10k humans to 7 billion humans with spirits? It can't be created or destroyed, then what happens when it runs out? Where is it stored? How does it get from there to here?
What does it mean that the spirit can 'think' independently of the body? As far as we know, thinking is a matter of survival in the physical world - interpreting sensory data, outwitting predators and prey, remembering sources of food and water, outsmarting and outfighting competitors for social standing and mating rights. What use would a spirit have for any of this ability?
Nothing is perfectly efficient, so there will be waste heat in the process of information transfer from body to spirit, where is that measurable heat? What protocol does it use to transmit information? What forces and particles and transmitters and receivers? What power source? Where in your body are sandwiches turned into ectoplasm? Where in the spirit force are memories stored? What use are physical world memories after body death, why would a spirit want or need them? And if they're no use, what's the point in describing it as still 'you' after death?
If the brain can build spirit receiver mechanisms, then they're made out of physical things in the physical world. So where's the chunk of physics (or the gap) which covers them? Where's the Faraday cage that can block them? If brains can grow them, then lab-grown brain tissue can, and human machines can, so we could plausibly build AI / thinking machines and electromechanical/cyborg Gods one day. That's completely the opposite of you saying it's "impossible", since it's still just manipulation of matter, which we are getting ever better at. Take over the communication mechanism and pump adverts and malware into it, right? False enticing memories.
If you were going to invent anything which is not proven, and claim it's the truth, isn't it just so convenient that it's a) based on faith, b) fixes (or nullifies) all earthly suffering, c) addresses all fear of death and illness, d) includes an untestable but scary justification for why you should do as you're told (and not be 'bad') and e) has /literally no compromises or downsides/?
If that's not too good to be true, I don't know what is.
"Spirits are created by God beings from intelligences (which is a spiritual resource that can be neither created nor destroyed)."
"Woo-woo is created by WOW from handwavium (which is a woo-woo resource that {science words to try and sound authentic}"
> The hard part is coming to grips with consciousness merely being an abstract or concrete system changing states and nothing more.
But that's no different whether it's a computer, marbles on cardboard, or stones on a beach. If one of them can become aware at sufficient complexity, then logically, so could the others. Though it's easier to believe with computers, because they already are so complex that they're mostly black boxes to us, whereas stones on the beach are just not feasible in reality.
> If rocks on the ground can be (slowly) self aware, then doesn't that mean everything in the universe is self-aware?
Maybe it does. It's an interesting thought. This also goes back to the idea of the self-aware anthill in Hofstadter's _Gödel, Escher, Bach_ (where the ants are not aware, but the anthill is). Or maybe awareness is an illusion, as some people claim (but then what is perceiving that illusion, is what I'd like to know). Or maybe computers can't be aware and we can.
> Then, later, I define an incredibly complex abstract computer, such that the sand falling out of the bucket and shifting along the ground exactly represents a self-aware AI within my defined computer going through its states.
Is it possible though? There is likely no equivalence between all arbitrary dynamical systems, including those with very high entropy such as a bucket of sand, and systems capable of performing what we call computation.
Otherwise building a computer would be much easier than it currently is.
"I define". There's your problem right there. "Exactly represents" is your other problem.
Your description paints a picture of high entropy (and wonders how that could be the basis for computational consciousness), but once you have defined an exact representation, you no longer have any entropy in your system. You just have a circuit operating or, if it's truly alive, a metabolism at work.
You should read "Gödel, Escher, Bach". It's an incredibly enjoyable, though quite dense, exploration of how intelligence can possibly arise from non-intelligent things.
...and of how that intelligence/consciousness/whatever would exist on a level "above" the machine, so to speak. And that's where the confusion and arguments usually pop up: it's not that the machine (or slab of meat) is self-aware as much as there is a self-awareness running on the machine, and there's a reasonable chance that the consciousness is not understandable, even in principle, from an examination of the parts that enable it (whether physical or process).
>say I throw a bucket full of sand on the ground...
What happens if I homomorphically encrypt the AI and delete the key? What happens if I run the AI in a quantum computer such that the algorithm runs only if a particular atom decays, which I then choose not to observe? etc etc. Scott Aaronson gives talks about these things if you're interested.
If you buy into materialism, the idea that the mind arises from the brain through physics alone and nothing else, then your mind is able to be simulated on a big enough computer, which means that consciousness can be implemented by a machine shuffling around stones on an infinite desert.
(To be fair, materialism is not certain, though there's a lot of circumstantial evidence.)
In this view, consciousness is a class of computations. If your thrown bucket of sand does the same computations, then it would be conscious. Objects that don't aren't.
Are you trying to argue that for any way sand can fall out of your bucket, there's a way of mapping those falls to any computation you please so that any physical action can represent any consciousness?
> If you buy into materialism, the idea that the mind arises from the brain through physics alone and nothing else, then your mind is able to be simulated on a big enough computer
It's not a joke. It's literally someone on the verge of a psychotic break from reading too many LessWrong blogposts. Those folks aren't only into game-theoretic polyamory and shitty Harry Potter fanfiction, far from it.
Just because it isn't a joke doesn't mean it isn't hilarious though. Look into Roko's basilisk, if you dare!
> If rocks on the ground can be (slowly) self aware, then doesn't that mean everything in the universe is self-aware?
This is a formal logical fallacy. If we accept the antecedent "rock does not imply not-conscious", your consequent of "rock implies conscious" does not follow.
> For example, say I throw a bucket full of sand on the ground. Then, later, I define an incredibly complex abstract computer, such that the sand falling out of the bucket and shifting along the ground exactly represents a self-aware AI within my defined computer going through its states. Does that mean as I was pouring the sand on the ground, the sand was momentarily self-aware?
You're assuming that it is possible to define a mapping between the mechanics of the falling-sand dynamic system and a dynamic system that satisfies a definition of "self-aware". It is difficult to argue this in any meaningful sense because there is no satisfactory formal definition of consciousness in existence. I strongly suspect that if such a definition did exist, the falling-sand dynamic system would not satisfy it, but if it did, then yes, the falling sand is conscious ;)
Drifting off topic, but something about that comic is confusing. Surely by putting down rocks to represent the computation he is not simulating the Universe, only describing a simulation of it. In the case of the cardboard and marbles there is a physical process by which the program is executed; in this case the only way it is executed is in the mind of the guy. Or is that the point?
One row represents one instant of "the machine". By adding another row below it, the human is doing the execution very very very slowly.
Actually, depending on the rules, there is no need for more than one row to be present at any time. When the human starts working on the next row, (s)he can remove the rocks from the previous row after using up its information.
"Depending on the rules" means the future state (next row) is completely defined by one previous state (one previous row). This still allows one to make a turing machine. An example where one previous state is not enough is Newtonian mechanics, where you need, not just previous positions, but also previous velocities, or, equivalently, the second to last row as well. (note: the physics in the comic is being simulated on top of the turing machine, so it's not a problem for this comic).
Minor nitpick: Someone who is smart enough to crack quantum mechanics, and build a "manual" computer when alone, should be smart enough to figure out a way to find a source of energy so the computation happens automatically.
> Minor nitpick: Someone who is smart enough to crack quantum mechanics, and build a "manual" computer when alone, should be smart enough to figure out a way to find a source of energy so the computation happens automatically.
Not to take this xkcd too seriously but: if he were truly there for infinity (that is, the duration of his existence is infinite), then he wouldn't be in any hurry. Another way to put it is that, in the steady-state sense of things, no means of computation is "noticeably" faster to him than any other (only relatively faster).
Consider that any time difference between events (e.g. the termination of the same calculation performed by different mechanisms) would be infinitesimal compared to the duration of his infinite existence. Also, interestingly, since he is there for eternity then his existence does not really have a start time or end time, as far as he's concerned.
If he wanted to have some real fun with infinity, he might kick off a couple of interacting harmonics in the sand, maybe watch them interact, as anything else - finite - cannot last.
I got the impression that he was simulating _our_ universe, not _the_ universe. In other words, the laws of physics, etc, in our universe might only exist in a rock simulation in some other universe with very different properties (unlimited space and time, etc).
The question you're asking is essentially the same as the Chinese room thought experiment. My interpretation is that the system of the stones plus the mind of the guy together make up the program, when neither alone can.
I think the simulation is as much in the rocks as it is in the head of the guy doing it. The rock layout would be effectively infinite in size, but it doesn't need to be in the head of the guy for him to compute it.
The cardboard and marbles would be the engine which drives self-awareness. The individual parts are not self-aware.
But considering the size and complexity that would be required for such a feat of engineering, a very liberal estimate would probably require more mass than is available on the entire planet.
Edit: the sheer gravity of which would destroy the system.
That's semantically difficult to say. I've seen it in other comments, too: "self" is not used in the correct frame of reference. The individual parts' self is irrelevant; yet you claim they are not self-aware, and when you imply the parts weren't aware of the whole (or of its self), you switch frames of reference. The self of the sum of the parts is just the parts; indeed, the parts are the awareness. The whole might be aware of the self of each individual part, but for that to be true, contrary to your statement, each individual part doesn't need to be aware of itself (it might as well code for some other part or whatever).
"Yes," except that the scale of the assemblage would have to be unimaginably enormous. Then, in order to continue working at that scale, it would have to incorporate copious error correcting and fail over. It would need mechanisms to reset and reload marbles, and a source of energy to raise them. The system for distribution of that energy would itself be complex...
My point is that the Chinese Room argument is disingenuous. It oversimplifies the problem and asks for your incredulity that the oversimplified solution could answer the unsimplified problem.
The specific bug the Chinese Room suffers from is that it's old, and back then we'd have conversational AI "next week", so at the time conversational Chinese obviously seemed like a good example of a non-trivial task that only conscious beings can perform.
A restating of it in modern terms, using modern skills, would be that surely only conscious beings can invent and operate under calculus rules. In fact it's pretty hard to do and somewhat elitist, in that only a small fraction of primates can learn calculus, larger than the fraction who know it now, but it'll never be 100%.
So... Mathematica and Maple are decent computer algebra systems. Better than most conscious beings, even. Well, OK, someone programmed them. So fine, how about Coq, the theorem prover?
Now extend the ancient Chinese Room argument to the modern computer: assume we have no access to the engineering department that created Mathematica, because it's 500 years from now, or just for the sake of argument, or whatever. In fact, assume the philosopher barely understands the concept of Boolean logic, much less floating point or algorithms. Then ask that philosopher which specific transistor in the CPU is the one that "understands" the calculus chain rule. What part of the hardware is the Mathematica "for loop" construct, as opposed to merely doing the "for loop"? How much silicon and plastic does it take to think about calculus? Observationally, a lot less than it takes to do natural-language conversational Chinese. But why, and how much? Apparently it no longer takes a soul or disembodied spirit thingie to do calculus; when did that change, and when will it change for natural-language conversational Chinese?
Or here's some fun: when will technology advance enough that self-awareness and self-authored demands for political autonomy no longer require soul/spirit thingies and are just great piles of fast logic gates?
The cardboard and marbles aren't self-aware, they just encode a configuration that sees itself as self-aware when left alone to execute. Just like the atoms of your body. Self-awareness isn't a static property, since it's hard to call someone under anesthetic self-aware, but just another thing an agent in motion perceives.
The trouble I've always had with this argument is that the number of books needed to convincingly converse in Chinese would be staggeringly huge. You essentially have an entire human personality as a lookup table. It's not just that you can ask, in Chinese, What's your favorite food, and somewhere in the pages is an answer. You can ask, Tell me about your first love, and somewhere in the pages is an entire conversation. And not only would it be an entire conversation, it would be every possible conversation, depending on how the outside person replied. If it helps visualize the problem - the outside person could say, How about a game of chess?, and the inside person cannot rely on their knowledge of chess, so the books must encode every possible chess game, convincingly enough to hold up a game. Imagine how big a lookup table for chess would be. And that's just a tiny part of what a person knows.
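For a sense of just how staggering, a back-of-the-envelope sketch using the usual Shannon-style assumptions (roughly 30 legal moves per position, roughly 80 plies per game) is already enough:

    -- Back-of-the-envelope only: with ~30 legal moves per position and ~80
    -- plies per game, the books would have to cover on the order of 30^80
    -- distinct games, i.e. roughly 10^118 of them.
    chessGames :: Integer
    chessGames = 30 ^ 80

And that's the easy, fully formalized case.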
And then despite having you accept that without question, and picture it as a small room, Searle turns around and says it would not be "at all plausible" to imagine the system as being conscious.
The system is almost entirely made of book. The human in the middle is almost invisible if you look at the system. That's the implausible part - that a human could even operate this Borges library. It's so unreasonable that intuition past that point doesn't give you any useful conclusions.
But, if you suspend disbelief enough to accept this galaxy-sized Chinese megalibrary, it's not implausible that "conscious" is a fine adjective for an operable printout of an entire consciousness.
You do not need a pre-existing, galaxy-sized library. The program operator has the ability to write into blank books and move them from one shelf in the library to another. The program could, in theory, assemble a book of knowledge on something that exists entirely outside the Room, solely from knowledge gleaned from the inputs passed in from outside.
Your initial set of books might simply be a script to follow to elicit additional information and index all new information such that it can be retrieved later. You would then end up with a set of core rulebooks, a whole heap of raw conversations, and several layers of indexes into them.
Searle's problem is that he decided the computer was incapable of understanding. But the computer is not the program. The library is the thing that might comprise a strong AI, not the human operator.
The scary thing is that the human operator, having no understanding of the Chinese characters, does not actually know what the program is doing. It could smuggle instructions through the output window, through him, and the human operator would then be surprised when burly men drag him out of the Room one day, to replace him with a dozen monkeys who have all been trained to process the book instructions faster and more efficiently. Customers presenting inputs might then be surprised to learn that there is no longer a human inside the Room, as the conversations had become so much more interesting.
This thought experiment seems to say: "the human following the instructions doesn't understand the Chinese, therefore strong AI is impossible".
Seems like a fallacy to draw that conclusion, it doesn't prove anything, this is like saying: this tomato is not yellow, so bananas are not yellow.
In addition, how can he say strong AI (which he defines as "understanding") isn't possible, doesn't he have at least one example of it, which is his own mind?
Taken literally, it even suggests human intelligence is impossible. After all, our thoughts originate from a bunch of neurons following instructions and passing information between each other. And each individual neuron isn't any more intelligent than any other cell and can't be said to "understand" our thoughts any more than the man in the Chinese Room understands Chinese.
Searle is not, as you might at first anticipate when you see that he's on the "computers can't be conscious" side of the argument, some sort of woo-woo psychic-spiritualism philosopher; he's actually really grounded and relatively materialistic, though he despises the latter term. (He thinks that you should not respond to the problems of dualism by choosing one or the other side of the duality to commit to, and argue the other into nonexistence -- but rather by saying "oh I guess it was stupid to draw a line here in the first place.") Even the earlier simplification I gave, he "thinks computers can't be conscious", he'd object to directly. He thinks that you're a computer, and he thinks that you're conscious. He just doesn't think that you're conscious by virtue of being the computer that you are. So that's who we're talking about here.
Now, Searle says: we know, or think we know, that the brain is conscious, and it's conscious due to some aspect of the dance of molecules that's going on under the hood. And we happen to have this model of dances, called computation, which is defined in terms of abstract symbol-shuffling: the great strength of computation is that the 0s and 1s can be voltages and electric currents (as in transistors) or magnetic alignments of spins (as in hard drive) or how a gear is turned (as in the difference engine) or whether a resistance is infinite or low (as in your keyboard). They're just abstract symbols.
At its most abstract the Turing test suggests that any human language can be boiled down to these abstract intrinsically-meaningless symbols (which is almost certainly true; that's the premise of the invention of writing), and then that anything which can output symbols in a way that's indistinguishable from a human being, deserves to be given the title of "understanding" that language.
Searle steps in and says, no, that's crazy. And he says it's crazy because he's a philosopher and philosophers are very concerned with what words mean exactly, and the word understanding means exactly that these things you're talking about are not abstract symbols, but a computer necessarily treats them that way; it's in the definition of computation that the symbols are abstract.
Now that you understand who this is and where he's coming from, maybe you can understand the argument better. He's trying to give a more formal proof of this above statement. So, he steps in with this Chinese room argument.
The Chinese Room argument says: Your brain is conscious, because of the dance that its neurons do. Now according to this nice computer-reductionalist view, anything which does any dance is understanding a language as long as it produces results which are indistinguishable from a human's responses. So Searle says: let's step inside some Turing machine's dance so our brains are doing two dances at once. The very asset which makes computers awesome allows us to take a Turing-test-passing-algorithm and perform it ourselves: we are the computer, we get this stream of symbols in, we look up rules in some complicated rulebooks, we stream some symbols out, and we now pass the Turing test for understanding Chinese. But is the test correct: do we actually understand Chinese? No. There is no reasonable definition by which we understand Chinese just by performing this complicated computational dance. And this applies even if we move all of the rulebooks and whatever into your skull so that the neurons do those things.
Now here's where we come to the conclusion that strong AI is impossible: the human has everything which is part of the definition of computation. I'll repeat that twice more. The human embodies a whole, complete, computer. There's nothing which a computer can be (by virtue of being a computer) which the human is not. So if we go this far and we say "that brain does not understand Chinese" we have to carry that over; "that silicon does not understand Chinese, either." And no computer will understand Chinese merely by virtue of running this computer program. Clearly our understanding of Chinese is not a property of arbitrary dances of neurons which can solve a Turing test, but there is something special about the right dance of neurons which solves a Turing test. The mere fact that the right dance exists points to some physics which is not reducible to computation.
In other words Searle thinks that you can simulate people as much as you please, but that simulations are not reality, thank you very much. Just like we humans are very good at simulating pain-behavior without actually being in pain, so too can a computer simulate understanding-behavior without actually understanding a thing. If you want to understand how understanding works, computers will doubtless be a useful tool, but they cannot offer a complete and final answer precisely because they do not pin down the material dance firmly enough to get to something causal.
Searle was never able to explain what exactly makes a human brain conscious, other than repeating that it is based on biological processes, rather than mechanical or electrical processes (which he says cannot be conscious). He just explains a mystery (consciousness) with another mystery (why biology is superior - in his view - to mechanics as a way of "doing" consciousness).
That's mostly true and he's even admitted it on several occasions. (His TED talk for example.) He thinks that you can ultimately get some sort of (possibly quantum) electromechanical microscopic description of consciousness by poking the brain with finer and finer instruments, but he doesn't know what we're going to find when we do that; so he's pretty consistent about "we don't have a microscopic definition of consciousness because the science isn't done yet--but we can at least define macroscopically what we mean."
Remember, Searle wants to stop before the scientific domain and he is very optimistic about what science can conclude. I myself am more skeptical that we can get to consciousness without some sort of philosophical innovation. Like, I got my degree in physics, so I'm predisposed to think that it's going to work like "here are the building blocks of qualia, each one needs to be identified with some worldline of some particle moving at speed c, so you get timeless processes with qualia at the bottom of your physics: then you can build consciousness by intertwining these basic qualia into bigger and bigger experiences and fractally including their history within themselves to serve as memory and so forth." I want to start with building blocks and laws of combination and at the end come out with the solution. Searle doesn't want to make any assumptions and seems to think "it's like with QM, we basically invented the finer points of philosophy we needed once we got a chance to poke with finer and finer instruments. We just need to get the big points now, that consciousness is a system feature of the brain like the liquidity of water, and stop trying to reduce it, and we'll get there soon enough."
I think we can build emergent non-biological consciousness (if we keep going long enough) before we understand what consciousness is (if we ever understand that). But you can never prove anything or anybody is conscious except yourself, so attempting to prove it would be futile.
Whether or not we can prove it will not stop it from doing what it does, just like not being able to prove some other human is conscious doesn't stop them from being so (or not ;)).
If a human in the room is following simple rules, he's just using part of his body to play a _simple_ computer, not using the full brain capacity for understanding. That doesn't make it impossible to build a computer as complex as the brain that does everything the same way as the brain does it internally.
Saying humans can do something that could never be built artificially is, imho, magical thinking.
If you want to prove that computers can never "understand" like a human can, you would have to prove that nothing that we can ever build with our hands and doesn't involve biological cells can ever do the same things the human brain can.
> Now according to this nice computer-reductionalist view, anything which does any dance is understanding a language as long as it produces results which are indistinguishable from a human's responses
Who says that? More stuff inside is needed for understanding it than just input and output. That doesn't stop a complex enough computer from doing that. The brain can do it (at least I experience mine can), so some other (even non-biological) network could too.
No, you don't understand: the conclusion of the Chinese room argument is precisely your statement "more stuff inside is needed for understanding it, than just input and output." That's precisely why consciousness is not a reduction to executing the proper computer program.
Maybe you're not clear on what computers are? I strongly encourage you to look up, for example, the definition of Turing machines. They do not have any "stuff inside"; they consist of a subset of mathematical functions from bitstrings to bitstrings:

    -- A transition either writes a symbol and moves the head left or right
    -- into a new state, or writes a final symbol and halts.
    data NextMove s v = GoLeft s v | GoRight s v | Halt v

    -- A machine is nothing but its transition function from (state, symbol)
    -- to the next move.
    newtype TuringMachine s v = TuringMachine (s -> v -> NextMove s v)
A Turing machine is a mathematical function, a subset of (s, [v]) -> [v] that can be generated by the above functions.
Mathematical functions are defined as pairs of inputs and outputs. There literally is no "stuff inside" them. They're just sets of (input, output) pairs satisfying left-uniqueness: if (i1, o1) and (i1, o2) are both in the function then o1 = o2 and they were the same pair all along.
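To make that concrete, here is a rough sketch (purely illustrative, assuming the types above plus a blank symbol for the unbounded tape) of what "running" such a machine amounts to: stepping through the transitions is just bookkeeping, and the whole thing collapses into a plain function from an input tape to an output tape.

    run :: v -> TuringMachine s v -> s -> [v] -> [v]
    run blank (TuringMachine step) = go []
      where
        -- 'left' holds the cells to the left of the head (nearest first);
        -- 'tape' is the cell under the head plus everything to its right.
        go left s tape = case step s c of
            Halt w       -> reverse left ++ w : rest   -- final tape contents
            GoRight s' w -> go (w : left) s' rest
            GoLeft s' w  -> case left of
                              (l:ls) -> go ls s' (l : w : rest)
                              []     -> go [] s' (blank : w : rest)
          where
            (c, rest) = case tape of
                          (x:xs) -> (x, xs)
                          []     -> (blank, [])        -- pad with blanks off the right end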
Again, Searle is happy that our brains are computers and happy that our brains are conscious. But they are not conscious by virtue of running the right computer program with their computational aspects. In fact, getting you to compute something is relatively hard and unnatural, which is why computers are so great as calculators whereas it takes many years to teach humans the same.
Is the argument only about Turing machines, or about any non-biological machine we could build? Turing machines don't include, for example, random number generators or parallelism, but we can build both of those. I'm talking about anything we could build.
I do not believe in something magical (unbuildable) that biological cells or human beings have, so with enough technology we could build something that understands just like the brain does. If the brain can do it, nothing stands in the way of something else doing it. Maybe the brain uses for example some quantum mechanics we don't yet fully understand, but we'll still be able to technically use it once we know how, it won't be magically limited to only human brains.
EDIT: OK, if the argument is really only about Turing machines and that is what is meant by "program", that should be indicated a bit more clearly IMHO, because computers are not Turing machines (nor is the brain); they're less powerful in one way, yet have more features in other ways :)
I have no clue what Searle would say, and my degree is in physics not philosophy. But my best guess is that it's probably a borderline case? The person inside the Chinese room still of course has access to qualia, and those qualia do line up in a one-to-one way with the outside world: this suggests clearly "not a p-zombie." But it does seem that in an important sense they're not the right qualia; to use that language, "there's something that it feels like to talk about a waterfall, and the person in the Chinese room does not feel that when they talk about a waterfall."
If you want to up the geekery factor to eleven, you could probably think of the Chinese room as a sort of homomorphic encryption.
Thank you for both these comments, this is the best explanation I've ever read about Searle (Searle is usually a punchline in my conversations with my friends[1]). I'm not completely convinced, and I think an effective rebuttal is along the lines of "there's no such thing as a privileged qualia", but I'm not smart enough to make it properly, and have much to think about for a few nights! :)
[1] For instance, most recent search in my Slack for Searle returns: "There are days when all i do is purely syntactic copy-pasting into stack overflow and copy-pasting back into my terminal, with absolutely no understanding of the underlying semantics of my actions, and have such surprisingly impressive results, that i've become more and more convinced by Searle."
On the other hand, Google Translate is (was?) a clear example that word-by-word translations aren't intelligent or even decidable. So he's got a point, but it's far from a general proof.
That doesn't at all address the point of the argument. Given the following
- an assumption that you can generate an algorithm to express behavior indistinguishable from a human's at a given task, and
- an implementation of the algorithm at a macroscopic scale, carried out by individual humans, each executing a small part of the algorithm
then, where in this system would you say that an actual understanding of the task exists? Google Translate doesn't pass the first requirement.
That assumption is too simplistic. If the algorithmic behavior was indistinguishably human behavior, and carried out by humans, it would just be human behavior. Of course machine translation doesn't pass the requirement for human behavior. Nothing does, except humans. And if newer machine translation does pass, I'd say that's humanly accomplished, by use of tools.
Nobody could learn a distinct language from nothing but a dictionary, enough to fool a native speaker. The assumption is ridiculous. The dictionary isn't conscious, either way.
If you have a challenge to the Chinese Room argument, go read and respond to Searle's original essay. There's no point in responding to my TL/DR version -- offered only to correct your initial misapprehensions -- with further misapprehensions.
The Chinese room is a silly example, in which a highly simplified physical system that doesn't appear to have conscious thought is extended to the idea that no merely physical system has conscious thought. If the structure of the Chinese room was not a simple catalog of questions and answers, but hundreds of billions of complicated cardboard-and-marble apparatus, each processing some amount of data in a way that individually seemed meaningless, but together produced cogent speech... would you still be so sure it wasn't conscious?
This is a good point. The Chinese Room seems to stem from a more antiquated system of serial processing, and the analogy doesn't quite hold when we look at the massive parallelism that occurs in the human mind that gives us consciousness. Furthermore, humans can still think in the absence of understanding. In fact I'd argue a majority of thoughts people have are formed without full understanding of the subject of the thought.
Well, not only is it true that (if a sentient AI is possible with nothing more than software...), but also: if you assume that we don't lose our enthusiasm for building simulations of ourselves, and for helping our little sisters with their projects, then you must conclude that in all probability we are actually living in such a school project simulation made of cardboard and marbles!
It can't be proved that "it's marbles all the way down", but the odds are overwhelming that one of the layers is. Then once you accept that you have to accept that in all probability it's at least a layer of marbles every so often all the way down...
Thought experiment: in how many of those layered universes is there a cardboard-and-marbles Peter Thiel funding quirky ideas? Is it Peter Thiels all the way down?!
This is essentially Searle's Chinese Room. I studied it during undergraduate years and I've never been satisfied with his argument against strong AI. https://en.wikipedia.org/wiki/Chinese_room
No. The cardboard and marbles are the substrate on which the program runs; one might as well argue for the self-awareness of the individual carbon atoms in your brain. The program can't run without some sort of substrate, but the substrate is inert without the appropriate program; it is the system as a whole which is self-aware.
Gödel, Escher, Bach by Douglas Hofstadter is a colorful, whimsical, yet rigorous treatment of this argument in book format.
The act of doing the computation doesn't change the outcome of the computation. Still doesn't answer the question completely, but in my mind at least the idea that humans would act the same even if they weren't conscious is a non starter.
My point is that if a machine can "create" consciousness through computation, but computation itself doesn't make the result any more real, then didn't the created consciousness exist all along?
Right, which is what I'm saying. It wouldn't make sense otherwise. It would mean that the result of the computation didn't depend on consciousness in the first place.
> if sentient AI is possible with nothing more than software, does that mean if
> you "load" the sentient AI program into a "computer" made of cardboard and
> marbles, that the cardboard and marbles will be self-aware?
fwiw, quote from moon-is-a-harsh-mistress:
> Am not going to argue whether a machine can "really" be alive,
> "really" be self-aware. Is a virus self-aware? Nyet. How about oyster? I
> doubt it. A cat? Almost certainly. A human? Don't know about you,
> tovarishch, but I am. Somewhere along evolutionary chain from
> macromolecule to human brain self-awareness crept in. Psychologists
> assert it happens automatically whenever a brain acquires certain very
> high number of associational paths. Can't see it matters whether paths
> are protein or platinum.
The particularities of cardboard and marbles would likely prevent it from being "self-aware", insofar as "self-awareness" requires that the machine be fed a large amount of information regarding itself, through itself. How many kinds of sensors can you build out of cardboard and marbles? I don't know. But probably not many, and not as good or abundant as the receptors present in mammals, like those which constantly feed data into our nervous system.
But yeah, if this isn't an actual problem, you'd probably have something self-aware.
I think the limitation here is that the amount of cardboard required would lead to some sort of gravitational collapse. Other than that, there isn't a limit to the number of sensors you can build out of cardboard and marbles.
Not any of the components, any more than the individual letters of the alphabet could be a tragic hero or a Shakespeare sonnet, but they could be arranged such that the system as a whole would be.
Another example: the planet Whisper from Orion's Arm. The planet is covered in grass, and the grass and the wind combine to form a sound-based computer running a virtual world that the original inhabitants have uploaded their minds into.
> One definition of intelligence is the ability to skip deductive steps. To jump to a conclusion from the shadow of a ghost of a set of questions. It's preposterous that such a thing could be possible in an uncompromisingly digital reality, but if you make a computer wet enough, or big enough, or abstract enough, it will start to happen. And it has, now.
I have thought of this as framing a paradox for the impossibility of strong AI. Turing machines can never really be embodied. They can always move to a different substrate. So intuitively, they don't have skin in the game in the same way that living things do. Mathematically, they are unable to represent the real numbers, only the rational numbers.
Maybe this intuition is what you're getting at here.
Aside from the xkcd example, there's also the Machina Babbagenseii from Orion's Arm: sophont robots (vecs in OA-speak) made of walking Analytical Engines.
I think that self-awareness is more than a matter of (classical) computation. The immediacy of subjective experience implies an entanglement of states, as when one observes one's own self. The marbles would have to be coherent in a cardboard XOR gate for that to happen.
The reason this is hard is that you also need to give the cardboard and marbles the ability to act in the world, which is a bit trickier. As long as you can manage the I/O, though, there shouldn't be an issue.
In effect, to a first approximation you can say something like "if there's a nonillionth prime, then it is self-aware and has just reported feeling nauseous."
The only change needed to make that sentence correct is that rather than "nonillionth prime" you need quite a bit more state and a lot more rules. Still, though, in effect numbers can be self-aware. I'm quite rigorous in my thinking and there is no room for any disagreement here; there's no ambiguity or open question around it.
Well, of course there's an open question. Nobody has yet shown that a simulated neuron is exactly equivalent to a biological one; it's generally assumed this is the case, but even simulating the C. elegans brain won't show this 100%. I would also suggest that it is more correct to say your number S, as a representation of state, has felt (past tense) pain or whatever; to suggest it is currently feeling anything would be to privilege some part of the number over another - although I'm not sure exactly how to be more rigorous about that.
Suppose we replace 1 of your neurons with a black box that still chemically interfaces with your other neurons, but comes up with the decision about what to do in silicon. If you had a switch to flip back and forth between that black box and the biological version it replaced, your consciousness would not change as you flipped it back and forth - the same is true for replacing a few hundred, thousand, million or billion neurons: if the parts behave the same, the whole behaves the same.
So imagine flipping that switch back and forth -- or better, imagine that I'm in front of you flipping it back and forth, but you don't see the state of the switch, and your neurons are sometimes biological and sometimes simulated (it's just a hypothetical; this isn't an actually possible machine). You can't even tell the position of the switch. You feel and report pain the same way, and consciousness too. Would you really say that when the switch is flipped to simulation, you don't really feel what you report feeling? Of course not.
I think it's illusory to talk about the direction of time; the reason we do it this way is causal. The reason I say S "is feeling" pain is that if you continue to simulate the next few quadrillion states, it ends up expressing/feeling those subjective feelings. In another sense, the tense we use doesn't matter at all.
I'm not sure what you're saying about privileging numbers - what does it mean to privilege a number?
This is one of the themes in "I am a strange loop" by Hofstadter. If I remember correctly, the relevant chapter is titled "who pushes whom in the cranium"
It's actually an essay of his: "Who Shoves Whom Around Inside the Careenium" [1], discussing a marble-based consciousness (with the marbles 'careening' around the 'cranium', generating the awful pun...)
On what basis? If there were an "aware" AI program, it would be just as "aware" in a silicon computer as it would be in a marble computer (albeit much slower).
Probably on the same basis that your blood cells, skin cells, etc. aren't sentient. So the cardboard and marbles would not be sentient, but the system might be. It's the difference between an intrinsic property of the system and an emergent one.
People build these, and they are fun, and I'm surprised that they don't still make and sell the Digicomp[1]. (yes I know some people have been doing limited runs but it seems like there should be a persistent market for it)
As one of the comments on the bottom of the page points out, Canadian woodworker Mathias Wandel built a similar machine some time back: https://www.youtube.com/watch?v=GcDshWmhF4A
This calculator nicely illustrates an insight that I feel gets lost in the way most people learn and reteach some simple CS concepts:
"two's complement" is not a different system for arithmetic that includes a "sign bit", it's just a different encoding or labelling of states which happens to have a bit that reflects the sign. So, inputs to this calculator can be said to go from 0-15, but more interestingly it can also add numbers in the range -8 to +7 (and therefore, it can also subtract, though it can't negate so you'd have to manually do that to your input by performing a different encoding table lookup).
(edit: now I'm realizing you could negate by doing a two's complement multiplication by -1, performed on this calculator as a sequence of three shift-and-adds of your input number to itself... that's correct at least up to some fencepost)
And then by extension, you could test "what about treating the range as -10 to +5?" - would that encoding succeed or break down? For starters, you would no longer have a sign bit...
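Here's a minimal sketch of that relabelling experiment (Python, with hypothetical names add4/enc/dec/neg - nothing from the actual cardboard calculator): the adder is modeled as plain addition mod 16, and we check which labellings stay consistent.

    # Hypothetical model: the 4-bit marble adder is just addition mod 16.
    def add4(x, y):
        return (x + y) & 0b1111

    def enc(value, lo):
        # Label the 16 machine states with the integers lo .. lo+15.
        assert lo <= value < lo + 16
        return value & 0b1111            # state = value mod 16

    def dec(state, lo):
        # Read a state back as the unique integer in lo .. lo+15.
        return lo + ((state - lo) % 16)

    def neg(x):
        # Negate by multiplying by -1 (= 15 mod 16): three shift-and-adds.
        return add4(add4(add4(x, (x << 1) & 0b1111), (x << 2) & 0b1111), (x << 3) & 0b1111)

    for lo in (-8, -10):   # standard two's complement vs. the -10..+5 relabelling
        consistent = all(
            dec(add4(enc(a, lo), enc(b, lo)), lo) == lo + ((a + b - lo) % 16)
            for a in range(lo, lo + 16) for b in range(lo, lo + 16)
        )
        print(lo, "wraps consistently:", consistent)   # True for both ranges

    print(dec(neg(enc(3, -8)), -8))   # -3; the fencepost: neg(enc(-8, -8)) wraps back to -8
    # What -10..+5 loses is the sign bit: with lo = -8 the top bit equals (value < 0),
    # but with lo = -10 it doesn't (e.g. -9 encodes to 0b0111).

Addition still wraps consistently for any run of 16 consecutive labels; what changes is which wraps count as overflow and whether the top bit tracks the sign.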
This is a cool project. But when I was in school, if a classmate of mine came in with this, it would have really rustled my jimmies. Some kids have much better resources than other kids.
> Some kids have much better resources than other kids
Do you mean in the sense of "engineer parents who encourage this kind of thing", or in the sense of "their own computer to research this on and time to build it", or in some other sense?
I'm assuming you wouldn't have been envious of some kids' access to cardboard and glue.
He didn't make the context he was referring to very clear, which is this: "Then my sisters had a science activity where they had to present a science project and I was helping them to choose a subject."
Science Olympiad is perhaps one of the best examples of differing levels of parental involvement and the impact they make. Personally, though, I always tried to remove the competition from learning despite the curve. Ultimately, what the Joneses are doing has little impact on your own ability to learn, and that's what it's all about.
Yes, her project would have been much cooler than your less-resourced one, but at least you'd get to see and learn about more diverse and advanced work than from looking at 30 crappy ant farms.
Maybe seeing this would have kicked off my interest in programming earlier and I'd be a Mark Zuckerberg now :)
Reminds me of the time I was working as a clerk and doing CS part-time. It was the early days of computers in the office, and I was asked how these magic machines work.
I replied that there was nothing special about the electronics; computers could be constructed out of many things: water, pipes and valves, or ropes and pulleys [1][2].
I think I may have used the wrong term. What I mean is that if you drop two marbles into the contraption at the same time, they need to be serialized before reaching this AND gate. A typical logic gate allows the signals to arrive simultaneously, but this one doesn’t, so one of the signals (marbles) needs to be held back while the other one goes first.
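A toy sketch of that serialization (just an illustrative Python model, not the actual cardboard mechanism): the first marble to arrive is parked, and only the second arrival releases an output.

    class MarbleAndGate:
        # Stateful AND gate: it has to see one marble on each input,
        # one at a time, before anything comes out the bottom.
        def __init__(self):
            self.held = set()            # inputs that currently have a marble parked

        def drop(self, input_id):
            # A marble arrives on input "a" or "b"; returns True if the output fires.
            self.held.add(input_id)
            if self.held == {"a", "b"}:
                self.held.clear()        # both marbles released downstream
                return True
            return False                 # this marble waits for its partner

    gate = MarbleAndGate()
    print(gate.drop("a"))   # False: first marble is held back
    print(gate.drop("b"))   # True: the second arrival triggers the output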
Here's a guy who made a 4-bit adder out of water[0]. Building a computer out of water seems really doable. I'd be interested to watch more stuff like this.
Back in the late 80s or early 90s there was a group working on this as a backup for military planes. The thought was that the "water" (they used some oil-like fluid and machined metal blocks for the gates) wouldn't be susceptible to EM pulses and could keep a plane in the air if hit with a pulse weapon.
I believe it was abandoned because solid state ICs grew way too quickly and outpaced the ability of the liquid ICs to keep up.
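Whatever the substrate - water valves, oil and metal blocks, or marbles - the gate-level structure of such a 4-bit adder is the same. A generic sketch (not taken from the video), using only AND/OR/XOR:

    def full_adder(a, b, cin):
        # One column of the adder, built from the three basic gates.
        s = a ^ b ^ cin
        cout = (a & b) | (cin & (a ^ b))
        return s, cout

    def ripple_add4(a_bits, b_bits):
        # Add two 4-bit numbers given as lists of bits, least significant first.
        carry, out = 0, []
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return out, carry

    # 5 + 3 = 8 -> ([0, 0, 0, 1], carry 0)
    print(ripple_add4([1, 0, 1, 0], [1, 1, 0, 0]))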
This is awesome! It would be really cool if something like this existed as a 2D animation, so we could see what happens with different kinds of input.