The problem I have with qualia is that the argument assumes that qualia are a transcendent, non-physical thing. Why can't a quale, eg, the experience of the redness of red, simply be what a conscious being experiences when a set of neurons is activated in a particular way?
I know that sounds circular, so I'll expand. We can't know exactly how one person experiences a particular shade of red vs another person. But we can know that one person can experience the exact same shade of red in different ways in different contexts: what thing is red (a rose vs a welt), what the current lighting is, or their mood on different days. From that we can conclude that the red qualia isn't some transcendent fixed property.
We also know from experiments on conscious, human subjects undergoing brain surgery that stimulating certain networks can result in complex phenomena: eg, the subject smells fall leaves with a hint of maple syrup, say. Or they suddenly feel the impression of being in the presence of a long dead aunt.
The brain builds up associations, and each "node" that is activated nudges associated nodes into activation, and so on. I assume we've all had the experience where we are trying to remember the name of someone: we picture them in our mind and all we can come up with is the feeling it has two syllables and starts with a vowel. In those cases those associations are manifest but it wasn't enough to trigger the primary node (the name) we are seeking.
Why can't qualia simply be the state of consciousness when a particular set of nodes is in a particular state? So many other experiences seem to be exactly this; why assign a mystical, non-physical property in the case of subjective experiences?
I'm in the camp that the most likely outcome is that artificial neural networks can be conscious and have real experiences, but the toy networks we have today don't have nearly the right topology to achieve such states.
> I'm in the camp that the most likely outcome is that artificial neural networks can be conscious and have real experiences, but the toy networks we have today don't have nearly the right topology to achieve such states.
Agree with all you say, but just suspect that when we do come up with a network that can achieve such states, we will look at it and shake our heads and say,
Agreed -- when I talk (and I don't think I'm alone) words seem to spontaneously tumble from my mouth without any effort. I'm learning what I'm saying at the same moment the person I'm talking with hears it. Only when I've painted myself into a verbal corner, or there is some complex concept where I need to rehearse the phrasing before I can get my idea across, does consciousness enter into it.
That is all to say: 90%+ of what comes out of my mouth is about as sophisticated as ChatGPT.
I suspect talking yourself through a problem (e.g. rubber-duck debugging) works because (A) the forced encode+decode loop causes additional re-checking and re-interpretation, and (B) it recruits more brain-areas that would otherwise be bypassed by the faster/cheaper internal information path.
It seems that way, but all you say goes through a complex filter built during a lifetime: experience, memories, desires, fears. Most of this is subconscious, but it's still part of your consciousness.
Not to mention that talking at a pub after a few pints is different from talking in a meeting where you have to filter every word to obtain the effect you want.
But that's the problem: we can explain all of this with LLMs.
> all you say goes through a complex filter built during a lifetime: experience, memories, desires, fears. Most of this is subconscious, but it's still part of your consciousness
aka "training"
> not to mention that talking at a pub after a few pints is different from talking in a meeting where you have to filter every word to obtain the effect you want
aka "temperature"
I'm not trying to trivialise this, but I really don't see that this is in any way a strong argument against AI.
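In case "temperature" is unfamiliar: it's the sampling parameter that flattens or sharpens the model's distribution over next tokens. A minimal sketch of the idea in Python (a hypothetical helper, not any particular library's API):

    import numpy as np

    def sample_with_temperature(logits, temperature=1.0):
        # Lower temperature -> sharper distribution (guarded meeting-speak);
        # higher temperature -> flatter distribution (looser pub-talk).
        scaled = np.asarray(logits, dtype=np.float64) / temperature
        probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
        probs /= probs.sum()
        return np.random.choice(len(probs), p=probs)

As temperature approaches 0 this reduces to always picking the most likely token; very large values approach uniform randomness.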
I think you make a good point, I was just referring to the comment "when I talk (and I don't think I'm alone) words seem to spontaneously tumble from my mouth without any effort."
"Well, almost... we just need to improve its throughput by a factor of a bajillion and--for a fair-comparison--it should only draw something like 10-20 watts of power."
Biological neural nets have the benefit of being able to run massively parallel. I know basically nothing about silicon hardware, but I'm assuming that attempts to model hugely parallel systems like a brain on a serial architecture are going to be orders of magnitude less efficient.
I do remember years ago reading a response to a question about why we can't just build chips that run in parallel; the answer pointed to the constraint of printing on a 2D surface, which limits how far you can take that idea.
> I'm in the camp that the most likely outcome is that artificial neural networks can be conscious and have real experiences, but the toy networks we have today don't have nearly the right topology to achieve such states.
I'm in the camp that says this question is unanswerable. We know we individually are conscious because we experience it. We accept that other people are conscious because they are so similar to us and they say they are, so by Occam's Razor they aren't zombies and they aren't lying. We haven't proven they are conscious, but we accept it. It seems reasonable. But if our test is just that it seems reasonable, there will be no convincing someone that a very dissimilar thing isn't just lying or faking it.
We say some things are not conscious not because we have evidence of this, but because the test is whether we ourselves say "Yes, this is conscious."
Furthermore, we hold onto this distinction as important because it has ethical consequence. We can do what we want to things that aren't conscious, so it's important that as many things as possible not be conscious. Is that thing conscious? It depends. Does it taste good?
One more thing: we say we are unconscious under anesthesia or when we are asleep. Why? Because we don't have any memory of what it was like to be in these states. But this is a test of memory, not consciousness. I don't have memories from when I was one year old, but I'm fairly certain I was conscious then.
> One more thing: we say we are unconscious under anesthesia or when we are asleep. Why? Because we don't have any memory of what it was like to be in these states.
Most people do have a memory of what it was like to be dreaming, at least some of the time. And some people have vivid, detailed recall of dream narratives -- I dated one for four years. Even in her summarizing mode she could go on for ten minutes about one dream.
Going back to your main theme -- there are people who know they experience pain, and will grant that other humans experience pain. But because fish are so different from humans, their aversive behaviors when snagged on a hook, or gasping as they suffocate out of the water, are attributed to reflexive action rather than pain. Sure, I can't know what their experience is like, but if it looks like a duck and quacks like a duck, etc, it is likely a duck. The same applies to the consciousness of advanced artificial neural networks.
> Most people do have a memory of what it was like to be dreaming, at least some of the time. And some people have vivid, detailed recall of dream narratives -- I dated one for four years. Even in her summarizing mode she could go on for ten minutes about one dream.
They were talking about anesthesia; it's a different state from dreaming.
When you fall under anesthesia you just shut down - no experiences of any kind - and the next thing you experience is waking up.
My point is just that saying you were unconscious in a particular state because you have no memory of being conscious is a test of memory, not a test of consciousness. You don't remember that you were unconscious; you have no memory of being conscious.
> Because we don't have any memory of what it was like to be in these states.
I want to add an anecdote from about 5 years ago, when I fainted: there was a tiny moment as I was regaining consciousness when I had an experience unlike anything I'd ever had before (and words really fail to describe it). There was no sense of time, or even presence - it was like pure observation. It wasn't the black nothingness that I think is associated with being unconscious; instead it was the opposite - it felt like hundreds of images, sounds and thoughts all layered on top of each other - it was very 'noisy'. That might sound stressful, but I don't remember feeling anything at all.
It was only when some of that noise faded away that I experienced a sense of 'self' again, and a moment later I formed the thought: where am I? And then I opened my eyes and everything was back to typical conscious experience (although I felt a little disoriented).
> simply be what that conscious thing experiences when a set of neurons is activated in a particular way?
That is what it is, but that's totally independent of whether they're physical. One is a sign pointing to the thing, the other is a claim about the characteristics of the thing.
"This is a question mark: ?"
versus
"A question mark is a punctuation mark that indicates an interrogative phrase."
versus
"A question mark is half to three quarters of a roughly circular shape, open at the lower left, with a small line segment at the bottom followed by an open space and then a dot."
> but the toy networks we have today don't have nearly the right topology to achieve such states.
But why do you think it needs some "right topology"? Why can't "qualia-generating" computation be simple?
If you mean it needs recurrence (i.e. not being simply feed-forward) — then a network continuously fed with its previous output (which is the way you run LLMs!) does have this property.
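To make the recurrence point concrete, this is roughly the decode loop every autoregressive LLM runs; `model` and `sample` here are stand-ins, not a real API:

    def generate(model, sample, prompt_tokens, n_steps):
        # The network itself is feed-forward per step, but its own output
        # is appended to the context for the next step -- de facto recurrence.
        tokens = list(prompt_tokens)
        for _ in range(n_steps):
            logits = model(tokens)         # forward pass over the whole context
            tokens.append(sample(logits))  # output becomes part of the next input
        return tokens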
People said the same thing about computers playing chess. "They will never play creatively, like humans, because they simply calculate by rote". Then along comes Alpha Zero and produces amazing, "creative" games that redefine the nature of computer chess. All of a sudden, such moves don't require human ingenuity any longer.
Humans always place themselves at the center of the universe, until the final moment when reality absolutely proves the lie of such self-importance. It's impressive how inventive we are in constructing such arguments, until that inevitable moment is upon us.
This is also, I believe, why so many are cavalier about the difficulty of avoiding catastrophic outcomes after inventing general artificial intelligence. We are really good at lying to ourselves. I fear this will be the last time.
Who said it was strange? It's just predictable at this point, and it shouldn't surprise anyone that we continue to make the same mistake again and again.
Let me rephrase: it feels like there is a lack of empathy in expecting people not to think this way. Maybe even a hint of snobbery.
Realistically, it requires training to see oneself in this way; it's not the default mode of seeing the world. The default mode is: things happen to me, I experience things, so I'm the center, and my fellow man must be the same.
If we ever do have conscious computers, let's see if they can be conscious without an ego?
I have empathy for you too, because people aren't just going to wake up tomorrow and think differently, and if you want them to, well, then I guess that must be frustrating.
Thank you, but you're imagining things, and you're arguing against statements I never made.
Never did I state that anything was "strange", or say that people should change the way they think, or that I looked down on them for the way they think. All those are things rattling around inside your cranium, and you're looking for someone else to blame for them.
All I did was comment on what I noticed as a fact of the human condition, as it exists today. And you seem to agree, since you said it was natural that people think that way.
Take care. Please spend your empathy on someone else.
I do agree, and that's my point: it's a natural way to think about existence. No point in pretending people will magically think some other way, because it won't happen.
When you live in a body that can feel pain, and everything you experience basically happens to you, of course you think you're the center of everything, because that's simply how we experience things.
I'll be empathetic to whoever I want to be thanks :)
> There is a difference between the nature of a phenomenon and the nature of a phenomenon’s existence, and the existence of intentionality and qualia is self-evident.
I would argue that good faith rational inquiry should not begin by having a desired final conclusion in mind and declaring it to be self-evident.
Would you be able to argue in favor of you not having intentionality or qualia?
Considering qualia self-evident is not equivalent to starting an inquiry with a predetermined conclusion. Rather, it is acknowledging a foundational aspect of existence that is necessary for any further inquiry to take place.
Although I agree that intentionality could be inquired into, qualia are the only thing that is literally self-evident. Qualia, by definition, refer to individual instances of subjective, conscious experience. This experience is immediately known to the experiencer and is, therefore, self-evident in a very basic sense.
Unless you're doing math (and are willing to take first-order logic as a priori true), you need to start with something. Learning about the world requires data, data requires identifying a data source, and identifying a data source requires knowing at least one thing about the world.
As foundations go, it's hard to see how you could go any deeper than "I am having an experience".
I find it amazing that I can write an original story or a poem, give it to ChatGPT and talk about what it might mean, what the characters' motivations might be, how they might be viewed by others, and have meaningful conversations and explorations. Idk man. I don't know how it works, but it's still amazing to me even now.
I find it mind-boggling too. But it isn't intelligence as you might find in a child; it's transactionally applying a corpus of knowledge and deciding what is best to say next, throwing in some randomness, which makes it appear more human.
In many ways it simulates the human brain, shockingly well, but a dead human brain.
I'm really concerned about AGI now, when I see how much more ChatGPT "knows" than I ever will.
If a system with actual sentient feelings, needs, curiosity and eventually self-doubt were to go online, and never needed to eat or sleep, it would only be a matter of time until we become its pets.
Yep. Pets, or just one of many biological species that go extinct as more and more resources are taken up by AGIs doing what they want to do, remaking the planet to be more suitable for themselves and not terribly caring if it becomes less hospitable for us.
I expect that AGI is probably between 2 and 6 years away, with super-intelligence a year or two after that, if we don't make a concerted and coordinated effort to restrict access to computing power. So much of our brain is taken up with things like processing visual, auditory, and other sensory input, muscle control, emotions, etc. To have a consciousness, you don't need as much processing power as we have -- which is a lot, but we're rapidly approaching it.
To say that artificial consciousness is (or "remains") impossible, is to imply that there is a ghost in the machine in conscious lifeforms. Maybe there is, I don't know. But if there isn't, there should be no reason for artificial consciousness to be impossible.
As with all of these discussions, the definition of 'consciousness' is the crux of the argument – and since nobody can agree on that term, we're doomed to talk past each other!
This isn't a bad thing. It's actually the core human question – that of our existence. Stare into a mirror for a few minutes and ask yourself "Why am I me? Why am I here?" and you'll come to the exact same conclusion: unknown and probably unknowable.
But I believe that's definable: it's how much you control your own thoughts. That's what consciousness is. And it's a vast, vast scale of opposites with so much variety that a better picture than a spectrum is needed, but I can't imagine what. But because we have so many senses to become conscious in, we get a fusion of consciousnesses that is probably unique to us.
That's not the same question though. I've answered those questions for myself, but I couldn't tell you what consciousness is or isn't, because that requires language, and language cannot suffice to communicate our personal experiences.
This is the correct answer. But left unsaid is the fact that we don't have any objective definition of consciousness yet. I posit that until we do, artificial consciousness is by definition impossible.
Searle's Chinese Room thought experiment is about mastering the Chinese language.
There are two issues I have with it as it was originally presented:
1. Mastering a language is not the same as having consciousness.
2. Who "knows Chinese" in the Chinese Room thought experiment? I would say neither Searle nor Searle in/as part of the Chinese Room "speak" Chinese. But it is
fair to say that the book that the fictional Searle follows can be
seen as a model of the Chinese language; or at least the combination of
the book and Searle as its "processor" collectively are an implemented
operational model of the Chinese language. And a model of Chinese is
NOT the same as being skilled at conversing in Chinese (executing the
model in a particular way). Other posters here have drawn analogies from
music evoking certain subjective emotions, and again a semantic network
that has concept nodes labelled with the names of these emotions is not
the same as experiencing these emotions, although the semantic network
can be said to constitute a model of sorts of the music's effects. But
again, model(x) != qualia(x).
Perhaps… but perhaps first-person point of view is merely control over one's thoughts and extensions. The more you control yourself, the more conscious you are. So then consciousness has been achieved, it's in there, and we've all fallen into a horrible trap where the AI is destined to take over the world with some sort of digital government. Digital as opposed to analog, not as in a device like a smartphone.
Conversely, fMRI data shows that apparently conscious action is preceded by significant activation of the relevant regions of the brain before areas associated with consciousness are engaged.
We see supporting behaviour in experiments with split-brain patients.
So it's wholly unclear, IMO, that the experience of being conscious is necessary for complex planning and action.
A good example might be driving on "autopilot" (not the Tesla kind, the human kind), meaning driving your usual commute and all of a sudden realizing that you can't really recall the last few minutes; you were just going "through the motions". Now, driving is a complex task, with planning and action and such, yet it can be delegated entirely to the subconscious.
I wonder how this author will feel about that question when they're busy running from their home because some "non-conscious" ... "entity" has ... "decided" that they're a nuisance - that it's tired of hearing about its "lack of consciousness."
Ah, that put a smile back on my face that was lacking earlier in the day. TERRific!
While I intuitively reject the idea of mechanical consciousness, I also admit that it's very hard to refute it logically. If your gpt5-powered autonomous vacuum cleaner is also your best buddy, because it's objectively better at sustaining any conversation and seems to really understand you, is unplugging it murder? I'm struggling to say no.
“Conscious machines are impossible!” Wrote the conscious entity, to the other conscious entities. This is a bit odd.
I would meekly suggest that if you use some human sound, like "artificial" or "natural," to label something, it makes no difference to the Universe. It's energy waves all the way down. Aka, we play by the rules of the universe. Unless the author of this article can prove he is a zombie, there is such a thing as a "conscious atomic system," which can be replicated.
As far as I can tell, the author seems to be trying to simultaneously use two separate definitions of "consciousness":
1. Something which results in "qualia"
2. Something which arises by a non-mechanistic process
and then is just expecting readers to accept that these two definitions are equivalent. It's the same fatal flaw as the original Chinese Room argument.
For point 2, anything with real-world input is non-mechanistic, since non-determinism propagates.
E.g., a regular computer with a camera that's programmed to do something if a photon hits a certain pixel is non-deterministic. This type of setup is common in our AI systems.
I personally do agree that consciousness probably does have an element of non-determinism, but I don't think that non-determinism needs to come from neurons in the brain. We have a whole bunch of non-deterministic inputs; consciousness could well come from that, and that's true of computers too. In fact, it's pretty well accepted that there's nothing really quantum about human neurons.
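A trivial sketch of that propagation, with random.randint standing in for the physics at the sensor (all names here are made up for illustration):

    import random

    def respond(photon_hit_pixel):
        # Downstream behavior branches on a physically random input, so the
        # whole system's output inherits the sensor's non-determinism.
        return "greet the user" if photon_hit_pixel == 0 else "stay silent"

    # Stand-in for a camera read; in a real system this value is set
    # by physics (photon arrival), not by the program.
    print(respond(random.randint(0, 1)))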
By all appearances, human neurons (and animal neurons) are -- for lack of a better word -- magic, as they achieve a thing that no other thing does: namely, they give rise to actual feeling, which has no mechanistic explanation. That does not mean it's literally magic; it's just that we have no way of ascertaining whether computers are capable of this feeling, and all evidence points to the contrary.
A feeling in your mind is more than just a signal. Feeling a pit in one's stomach is not feeling serotonin or GABA or whatever (I don't know which one). You really, truly feel a literal pit in your stomach. Why does one feel this? Even if you claim it's due to a physical pit in my stomach (perhaps due to muscle clenching or whatever), why do I sense a pit? Not in the 'oh, my brain neurons fire indicating the presence of something' sense, but in the 'why does it feel like that?' sense. Why, why, why? No one can explain the qualia of the sensation, and the only way to claim this thing can be experienced by non-biological objects is by a blind faith.
EDIT: there should be a rule on HN against downvotes without responses. Unless you can explain qualia, stop downvoting. It's a major problem.
> and the only way to claim this thing can be experienced by non-biological objects is by a blind faith
But not that. If you admittedly can’t explain the phenomenon in the first place, how can you have any confidence in that assertion? You’re trying to have it both ways.
“Nobody can understand it, but I just know that it doesn’t work and definitely can’t work in the way you’re suggesting.”
Well, we've never seen qualia in non-biological objects. It's like if I told you there are aliens. Maybe it's true; maybe it's not. Neither of us can prove it, so we assume... nothing.
To claim that non-biological things certainly experience qualia is... a statement of faith.
I have not said they don't.
My statement is simply that we've never had any evidence they do.
My statement (the equivalent of agnosticism) is observational.
The other (that non-biological things are equally capable of qualia) is a faith-based statement.
It's like me saying 'only biological things experience qualia because God made it so'.
You're skipping some steps there. Maybe I am too? Let me spell out my thinking more clearly:
- "Qualia" are subjective conscious experiences; you know they're real because you experience them yourself.
- We all assume that other people have them too (ignoring abstract philosophical debates around "zombies").
- Sounds like you assume dogs have them too -- because dogs act as if they're conscious, right?
- However, some people assert that computers cannot have qualia, by the "Chinese Room" argument.
My position is that the "Chinese Room" argument is meaningless if we don't understand anything about the basis of qualia in the first place -- and we don't. Where does the "only biological entities" requirement come from? Do bacteria have consciousness? Where do you draw the line?
In contrast, "if it acts as if it's conscious, maybe it is conscious" at least seems logically consistent. It looks pretty likely that computers could act as if they're conscious, if AI continues to advance. Maybe they would really be conscious and maybe they wouldn't, who knows? But I haven't seen any convincing argument to rule it out.
To repeat, the only direct confirmation of qualia we have is our own subjective experiences. Everything else is guesswork.
I believe dogs have qualia because I believe all mammals share a common ancestor and thus it stands to reason that we, being somewhat related to dogs, must have somewhat similar experiences.
Although not everyone feels that way and many believe only humans have qualia.
More to the point though... It's a hard stretch convincing me that matrix multiplications lead to qualia. At which point in the multiplication is sensation felt?
Whereas a dog and a human presumably operate on principles we still don't understand, so there is room in this unknown space for qualia to exist.
> No one can explain the qualia of the sensation and the only way to claim this thing can be experienced by non-biological objects is by a blind faith.
For any entity with qualia, of which only one certainly exists from the perspective of any given observer, this is more accurately stated as “any other entity than the self” rather than “non-biological objects”.
Qualia inherently is not observable externally, so it is impossible to build a body of evidence which would support associating it with observable traits like “being a biological object”.
A computer that has never been exposed to philosophical texts could potentially ask questions that children often do.
For example, many children upon reaching the age of reason (usually somewhere around 7) will ask questions like 'how do I know my blue is the same as yours?' which is evidence they experience qualia.
Current computer AI attempts don't count because they've been exposed to large corpora of philosophy.
> No one can explain the qualia of the sensation and the only way to claim this thing can be experienced by non-biological objects is by a blind faith.
Sure, but the exact same argument could be made about everyone who isn't you, couldn't it? You only have two pieces of evidence to support the idea that other people feel anything:
- you feel things, other people are made of the same meat stuff as you so maybe it's a safe assumption that other people feel things in the same way you do (aside: this does pose a bit of a problem for those of us who eat animal products)
- other people tell you they feel things
Could I not say that you're relying on blind faith in assuming that I feel things just because you do?
Also, how do you know you feel things? How do you know you have free will? How do you know you're conscious? The only answer anyone can honestly give to any of those questions is "because I feel like I do".
We could say "well, it'll be magic then", but that's not particularly satisfying. To be fair you did say it's magic "for lack of a better word", however there is a better word: unknown.
Indeed, I have no evidence that anyone but me experiences it. But I am a biological organism. Thus my claim that there is no evidence of non-biological qualia still stands.
> there should be a rule on HN against downvotes without responses
Downvotes are specifically for things that do not support productive discussion, so downvote + response makes no sense.
EDIT: If HN had a private message facility, I can somewhat see a case for private feedback with downvotes, though I suspect that would also end up unworkable.
We're subjectively experiencing our own set of qualia, but who's to say machines don't experience a completely different set? Since we don't understand what causes qualia, we don't know where it will arise or not or even what it looks like when it does.
The argument is essentially that artificial consciousness is impossible by definition. Because consciousness is only a subjective experience, there is nothing in the physical realm that can prove that something is conscious or not.
Discussing the relative merits of neurons and transistors will get you nowhere. It is a philosophical, possibly religious, question, outside the realm of natural science.
The argument is necessarily either that human neurons are magic and that artificial consciousness is impossible, or that some version of solipsism is true (the author asserts their own consciousness as self-evident and that no test can possibly show whether something else is conscious).
Well, you don't have evidence either way; maybe we do have magic neurons. Until a computer displays the same level of consciousness as other living creatures, we don't really know, because we don't fully understand how a human brain works and we cannot build one.
So I feel a bit similar about your argument, to be honest.
I guess that depends on your definition of magic. You talk like we understand everything about the universe; I'm sure that, for all intents and purposes, there is plenty of "magic" left.
If you can't see the magic in existence itself, in many natural phenomena, then I guess that's your problem.
There is magic even in things we understand: gravity, magnetic forces, the ability to feel emotions; it goes on forever, basically.
What I've actually come to realize is that the phrase "nothing is magic" is actually nonsensical, because it's a paradox. Things can be understood and still be magic.
You are conflating different meanings of magic. Precise language is necessary for this conversation to have any value.
> the magic in existence itself
this refers to the mystery of why anything exists at all, and why it exists in the way it does. Not the same as some special property that only exists in certain phenomena.
> There is magic even in things we understand
This again refers to the "why it exists in the way that it does" again, as fundamental forces just "are" rather than having a causal explanation. Again, not the same thing.
> Things can be understood and still magic
That's the opposite of what you were saying before, which is that there is some special element within us that can't be replicated in a machine. That would require us not understanding something, or at the very least recognizing and understanding some property that we have identified that can't be replicated in machines, which we haven't.
My TL;DR is really: just have an open mind, because until we truly understand how the brain works, you don't really know what type of "magic" is involved in producing the wonderful ability to enjoy life and experience.
My definition of magic is "mysteriously enchanting; magical: magic beauty." That's from dictionary.com, btw.
Something can be explainable through words or scientific study and still be mysterious, enchanting and having magic beauty.
I'm very open minded, and have some pretty crazy ideas about reality (just look at my comment history), and in fact practice nondual meditation, which is grounded in the idea that everything is mind, and there is no separation between objective and subjective reality. Pretty magical stuff.
But topics have scope. I still find value in science. I recognize that science is just the map and not the territory, and is not a final answer about any truth. Within the confines of science itself though, there are an internal set of rules that keep it consistent. Intermingling two systems only confuses the issue. I don't use dzogchen nondual meditation to analyze chemical reactions, and I don't use science to analyze the qualia of my personal experience.
That being said, there is room for going up a level and seeing where the two systems might synergize or interact, but it requires a lot of thought and effort. It's not just something you come up with off the cuff. Magic in the Harry Potter sense may exist but until there is some mechanism for measuring it and understanding it, there's no room for it within scientific methods.
It's a metaphysical issue to intertwine the two ideas, not a physical one.
The Chinese room thought experiment proved nothing, except that people are willing to latch on to bad philosophy. This author hasn't nailed down consciousness, and so is teetering on the edge of a dualist, vitalist cliff.
Consciousness is just a log file. It's our mind's representation of itself. Suppose we decide to eat something because we're hungry. What actually happens is a vast array of calculations in our meat computer considering many inputs and possibilities. However, once the "eat" action is selected, it summarizes all the calculations into "I was hungry so I decided to eat" and feeds it back into the meat computer as input for the next round of calculation.
Therefore, the only "proof" we have of no consciousness in the current LLM zoo is that they're all once-through.
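A toy rendering of that "log file" picture, just to make the claim concrete -- every name here is a made-up stand-in, not a claim about how any real system works:

    def agent_loop(sensations):
        narrative = []  # the "log file": one-line self-summaries
        for sensed in sensations:
            recent_story = narrative[-1] if narrative else ""
            # Stand-in for the vast, opaque calculation; note it consumes the
            # previous round's one-line summary, not the full calculation
            # (here: don't decide to eat twice in a row).
            action = "eat" if sensed == "hungry" and "eat" not in recent_story else "wait"
            narrative.append("I was " + sensed + " so I decided to " + action)
        return narrative

    print(agent_loop(["hungry", "hungry", "full"]))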
I'd also point out that even if you accepted the Chinese room thought experiment it doesn't apply.
Modern AI systems have tons of non-deterministic input. They have cameras, and they will do completely different things if a photon hits one pixel vs another. This is an input with true quantum randomness, and the computer will have a completely different response based on it. Modern AI systems are absolutely not deterministic, and determinism is what the Chinese room thought experiment requires. Determinism may apply to a black-box computer without real-world input, but as soon as you add non-determinism, which modern real-world AI systems absolutely do have, you break any comparison to the Chinese room thought experiment.
So the entire "herp derp it's like a catapult" line is wrong straight off the bat. Modern AI systems act based on non-deterministic input, and since non-determinism propagates and makes the output of the entire system non-deterministic, you'd better have a better explanation than "it's deterministic" for why a computer can't be conscious.
That has nothing to do with this SPECIFIC argument; I didn't read a word of it. I.e., I'm not singling out this particular set of essays / arguments / etc. The problem is much more fundamental: this boils down to claiming that "the universe is solved".
That one species that got to just enough of a base level of mental ability to be able to think, when looking in the mirror, "hello, gorgeous"... on one dinky planet in one nondescript arm of some random spiral galaxy, in a universe so absurdly vast that it makes our lack of understanding of ourselves, and the one planet we live on, an F'ING FOOTNOTE... can come up with some sort of rigorous argument to "uphold our magnificence" and "preeminence"... as, apparently (to some), both "God's [special] children", and also "God" ourselves in deciding such "cases".
The universe isn't solved, and I'll take any odds on universe vs. humans anyone wants to offer.