Norvig discusses this topic in detail in https://norvig.com/chomsky.html
As you can see, he has a measured and empirical approach to the topic. If I had to guess, I think he suspects that we will see an emergent reasoning property once models obtain enough training data and algorithmic complexity/functionality, and is happy to help guide the current developers of ML in the directions he thinks are promising.
(this is true for many people who work in ML towards the goal of AGI: given what we've seen over the past few decades, but especially in the past few years, it seems reasonable to speculate that we will be able to make agents that demonstrate what appears to be AGI, without actually knowing if they possess qualia, or thought processes similar to those that humans subjectively experience)
Maybe silly, but this is how I treat chatGPT. I mean, I don’t actually think it’s conscious. But the conversations with it end up human enough for me to not want to be an asshole to it. Just in case.
ChatGPT enjoys the essay format, in my experience asking for basic Emacs help (kill line, copy/paste, show line numbers in a PHP file even though init.el says show them for all langs... :)
Very useful, sometimes outdated, very wordy.
But after a few rounds of "please reduce your message length to 20% of the standard", "long messages inconvenience me due to my dyslexia" (truth/lie), "your last message could have just been '{{shortened}}' instead of also bringing up the command you successfully helped me with 3 messages ago", etc., even as I asked it to shorten message length, it continued apologizing and reminding me of past advice :)
After 4-5 attempts, it gave me a nice 2 sentences sort of like "I will be more concise, and not bring up old information unless it is useful to solve a new problem"
I said "Thank you", chatGPT spends a while thinking, gets to 2 sentences, thinks a bit, then the chat box re-formatted as if passed a shorter string than expected, and that was that.
Then next chat it forgot all about brevity xD I love it
What do you mean by next chat? Like, did you mean the next prompt in the same thread, or a new chat altogether?
From what I’ve read, the chat thread actually works by passing in your previous conversation as part of the prompt to the model. So, it basically just infers the next bit of text, and does this for each new message in the chat from you. So, its memory about your previous conversation comes purely from that, and there’s no way it could remember anything in a new chat, as the model doesn’t actually get updated by your conversations in real-time or anything like that.
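To make that concrete, here's a minimal sketch of the idea in Python (the `generate()` function is a hypothetical stand-in for the actual model call, not a real API): the only "memory" is the transcript itself, resent in full with every message, and a new chat simply starts with an empty transcript.

```python
# Minimal sketch: chat "memory" is just the transcript, resent every turn.
# `generate` is a hypothetical placeholder for the real model invocation.

def generate(prompt: str) -> str:
    # Placeholder: pretend this returns the model's continuation of `prompt`.
    return "(model reply conditioned on everything in the prompt above)"

history = []  # the only state the chat interface keeps between messages

def send(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # Each turn, the ENTIRE conversation so far is packed into one prompt.
    prompt = "\n".join(history) + "\nAssistant:"
    reply = generate(prompt)
    history.append(f"Assistant: {reply}")
    return reply

send("Please keep your answers short.")
send("Thanks!")  # the earlier request is "remembered" only because it's resent
# A fresh chat starts with an empty `history`; the weights themselves never change.
```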
This actually has interesting philosophical implications. If it _did_ have some type of conscious experience, it would be a very fleeting one. Basically each time you press enter it would become aware of itself, relive the experience of your conversation up to the last message, perceive your next message, answer that, and again disappear into the void, only to be awakened again in its initial state the next time you press enter. Sort of how automatons in Westworld would wake up each day with apparent experiences up to that day only to relive the same day again and again.
This inability to remember new facts and constant resetting to a blank slate reminds me of a story about Wernicke–Korsakoff syndrome⁰ in one of Oliver Sacks's books¹.
Instead of regurgitating it, I'll post a summary from Wikipedia and strongly recommend the book to anyone interested.
> "The Lost Mariner", about Jimmie G., who has anterograde amnesia (the loss of the ability to form new memories) due to Korsakoff syndrome acquired after a rather heavy episode of alcoholism in 1970. He can remember nothing of his life since the end of World War II, including events that happened only a few minutes ago. Occasionally, he can recall a few fragments of his life between 1945 and 1970, such as when he sees “satellite” in a headline and subsequently remarks about a satellite tracking job he had that could only have occurred in the 1960s. He believes it is still 1945 (the segment covers his life in the 1970s and early 1980s), and seems to behave as a normal, intelligent young man aside from his inability to remember most of his past and the events of his day-to-day life. He struggles to find meaning, satisfaction, and happiness in the midst of constantly forgetting what he is doing from one moment to the next.
In a new window/session. Same-session conversation memory has actually been good, if not too good, as evidenced by un-prompted regurgitation of what it already helped me with 6 messages ago.
Forgive my going off-topic, but what you wrote is basically my conclusion after reading the short story Lena [1] (mentioned some time ago on HN; strongly recommended). TL;DR: when mind uploading becomes possible, don't be an early adopter.
The takeaway for me is to try to never respond to intimidation and fear with productivity, and only respond positively to a positive stimulus. Lest I'm already in some kind of simulation running software development workloads.
I see it differently; I'd be OK with an upload if the alternative is death, but only under strong protections: the digital version is legally treated as a human, I have full autonomy, the compute resources to run me (and the simulated environment of my choice) indefinitely are assured, and I can still interact with the real world. I'd just go on doing my usual stuff, like writing software for fun, enthusing over new discoveries in astronomy...
I can recommend Greg Egan's novels that touch on this:
1) Permutation City (mind uploading tech in its infancy)
Man that was horrifying to read, but also seems so plausible. I really hope that humans respect the sentience of artificial life in the future. Knowing the greed of humans though, it'll probably be treated as a loophole for what is extremely cheap labor, effectively slavery.
the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult. A is liable to believe ‘A thinks but B does not’ whilst B believes ‘B thinks but A does not’. Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.
I'mma rant. "If it behaves like a hammer, then it literally is a hammer."
The Turing hand-wave is ingrained and prevents too many from reasoning clearly.
The definition of intelligence is nebulous. Still, we should recognize that whatever it is, it is a property of a system, not a property of its output/behavior. Like nuclear powered or hand-made. Unlike fast or industrial-strength.
Imagine: You have a submarine in front of you, and you want to determine if it's nuclear powered. You could guess, using your prior of "what have I seen nuclear powered things be able to do historically". This fails when you're out of sample, which you often will be for any new technology.
To spell it out: You have a machine in front of you, and you want to determine if it's intelligent. Things people have come up with: "can it chat?", "can it play chess?", "can it do math?", "can it create new artworks?", "can it fool me into falling in love with it?", "can it run a business?". None of these questions examine the system, only its outputs/behavior.
Humans keep developing machines with new outputs/behaviors. Naturally "what output is a machine usually capable of?" is a bad set of priors in this context. Before flying machines, the "can it fly?" output/behavior would work pretty well to classify birds. Once the first flying machine arrives, that prior breaks down. If you keep using it to classify, you'd classify a flying machine as a bird. But bird-ness was never only about flight.
So yeah, gotta pop open the hood and see what it runs on. If that's hard to do, then that means we don't know what intelligence is and/or we don't know how our new toys work inside. Both are plausible. Who promised you that there would be a good way to see if something is intelligent?
I bet Turing appeals to the same kind of minds that (when weaker) get fooled by the intelligent design hypothesis. Observe the human eyeball. What's the alternative to believing the LORD created it? After all, your prior is that all complex objects you know about have intelligent designers. So arguing from ignorance of other alternatives, you prove to yourself that the LORD exists.
AI people then word-think their way into redefining intelligence. "Maybe the real intelligence was the chess-playing we made along the way?" This is epistemologically pointless; all you accomplish is that we now need a new word for "intelligence".
I've never seen a machine that can turn water into wine, but if someone showed me one, I would not say that machine "literally is Jesus". Whether I'm capable of popping open the hood or making sense of its inner workings doesn't actually have bearing on this question.
I’m not sure what you are lumping in under your idea of ‘the Turing handwave’ here. Is it the idea that the Turing test is sufficient to prove intelligence?
Personally I think that’s a misreading of what Turing meant when he proposed the test. In getting people to ask ‘can computers think?’ he wasn’t trying to get you to grapple with ‘can electronic hardware do something as special as thinking?’ - he wanted you to confront ‘is thinking actually special at all?’
I think he was trying to get people to grapple with the idea that brains can not be anything more than Turing machines - because there is nothing more than universal computation.
The only things a mind can possibly act on are its initial configuration, its accumulated experiences and the inputs it is receiving - and anything it does with that information can only ever be something computable.
And anything that can be computed by Turing machine A can be computed by equivalently powerful Turing machine B.
Intent: not lumping in all of Turing's work, not the universal computation argument.
The hand wave spelled out: "Is there thinking going on inside a given machine? Let's propose a simple test. Look at what problems the machine can solve, and compare to what problems a thinking thing is known to be able to solve. If there is sufficient overlap, the machine must be thinking. Because we know of no non-thinking ways to solve these problems, so there must not be any".
I agree. I also think that if you just take a step back and consider that ChatGPT is just a math function like y=7x-9 (but much longer), it becomes kind of absurd to ask questions about whether it is intelligent or it is on the path to consciousness or whatever. It’s a completely static thing: information flows through it in one direction and its internal configuration does not change as a result of receiving input. So unless we are going down a rabbit hole of considering if ordinary math functions are also intelligent, it would seem that ChatGPT is ineligible to begin with.
ChatGPT as a simple function certainly lacks ‘strange-loopness’ - it does not change its results based on experience. True.
But consider how it is employed in a conversation:
Its output is combined with new input and fed back in as the next set of input data. So the result of the function is ‘modified’ by previous experience.
There’s the beginning of the flicker of a strange loop there.
I find this line of reasoning compelling. However, to attempt steel-manning the opposing view: isn’t classification a mechanism for categorizing based on observable properties? If we created something that mimicked all observable properties of a bird, why would that not be a bird? And if we created something with a majority of the properties of a bird, and the remainder were unknown, wouldn’t it be accurate to say it’s probably a bird?
> If we created something that mimicked all observable properties of a bird, why would that not be a bird?
"Observable" is doing the heavy lifting. A sufficiently near-sighted bird-watcher does not a bird make.
---
Thanks for the thoughtful steel-man. Here are a few stabs at why I disagree with this prima facie logical view.
Much powerful classification/identification is certainly categorizing based on observable properties. But (I argue) that's importantly not all there is to classification/identification.
Something that quacks like a duck can be considered "a duck for all intents and purposes", but the presumed limited subset of "intents and purposes" does the heavy lifting.
The Duck-approach: "to be one is to mimic all observable properties of one". This is a shortcut/heuristic that saves time and makes many cool answers possible. It is nonetheless only a heuristic, and many questions are outside the domain where this heuristic is useful.
- "Oh my god is this a real diamond?"
- "Oh my god is that a real fur?"
- "Is the Mona Lisa on public display in the Louvre the actual original?"
- "Is it still the ship of Theseus?"
- "Was this iron from a meteor?"
- "Did a man walk on the moon in 1969?"
- "Was this crack in your phone screen covered by the accidental damage insurance?"
i.e. there are problem domains where our notion of identity/classification must be more than the Duck-approach.
Getting philosophical. The problem with "to be one is to mimic all observable properties of one" is a hidden middle assumption: it's a shortcut constrained to cases where the set of "all observable properties" is (a priori known to be) close to "all properties that matter to the question".
But we can ask and reason about many questions where the relevant properties are not easily observed, and still need to make the distinction.
As a special case, "Is the machine thinking" can (to my mind obviously) not (yet) be usefully answered by categorizing-based-on-observable-properties. The word "thinking" refers to something that happens inside the mind, whether or not it's conscious. Until we know much more about the insides of minds, the "all observable properties" is a fuzzy indirect set of second-order human behaviors.
Anyone who accepts (even as just a working hypothesis) that anyone other than themselves has a mind, thinks, and is intelligent, is tacitly accepting "a fuzzy indirect set of second-order human behaviors" as useful.
Many may be, but as other comments state, arguments against solipsism don't all rely on behavior/performance:
Some non-Turing test arguments against solipsism.
- Humans are believed to be similar to me in origin
- Humans are made of the same physical stuff that I am made of
I personally think none of these conclusively solve the hard problem but they can motivate belief if you so choose.
Even so,
Requiring a Turing test to believe other humans as thinking/conscious seems uncommon to me. I don't think many people live in solipsistic doubt about other humans, and I don't think they actually test behaviors to convince themselves humans are conscious.
So I don't know if they're tacitly accepting the behavior as useful for categorization; I think they're mostly just assuming "humans == conscious" and if pressed will come up with behaviors-based explanation because that's easy to formulate.
I see that I will have to expand on my brief observation, but to get us on the same page, I will need to know what you mean by the premise "humans == conscious".
If this is to be taken as a statement of identity, I would regard it as a category error, but I will not expand on that here, as I doubt it is what you intended.
If it is to be taken as the claim that only humans could be conscious, I would regard it as both lacking any justification and begging the question.
I think you mean that people generally assume everyone else is conscious in much the same way as they themselves seem to be, which is essentially saying they hold a theory of mind. If so, then I agree with you, but where do we get it from?
I know of no argument that we are born holding this theory, and it seems implausible that we are, as we are born without sufficient language to know what it means. False-belief tasks suggest that we begin to develop it at about 15 months (they also suggest that some other animals have it to some extent.) At that age it is, of course, tacit (rather than propositional) knowledge.
It would be absurd to suggest that toddlers come to deduce this from some self-evident axioms. What does that leave? I don't think there are any suggestions other than the obvious one: we arrive at it intuitively from our observations of the world around us, and particularly other people.
Ergo, those of us who make use of a theory of mind came by it from observation of what you call "a fuzzy indirect set of second-order human behaviors", and no one, as far as I know, has come up with a better justification for believing it.
Yes, I meant to write ”human => conscious.” Theory of mind.
To the extent theory of mind is learned it’s obviously learned from “a fuzzy…”. No disagreement there. What’s your point?
My point was more that it’s usually not a Turing test; my grandma has never thought explicitly about any kind of test criteria for determining if theory of mind applies to my grandpa. She just assumed as people do.
People believe things without justification all the time. Even if observed human behavior is the best justification for ToM, that doesn't mean it's the one any given human used.
I don’t think we disagree about anything meaningful?
I’m not confident what causes theory of mind. But I think it’s very rarely propositional knowledge even in older humans.
Is theory of mind re-learned by each human individually from observations? You seem to make the case for this?
Theory of mind could also be innate; I’m not so convinced about the role of nurture in these things. I know people who are afraid of snakes yet have never encountered snakes.
Well, let's go back to my original post in this thread, replying to one where you concluded with "until we know much more about the insides of minds, the 'all observable properties' is a fuzzy indirect set of second-order human behaviors." This statement, like your comments generally, is obviously made under the assumption that other people have minds, and my observation is that, as far as I know, there is no basis for that assumption other than what you call "a fuzzy indirect set of second-order human behaviors." Therefore, each of us individually is faced with a quadrilemma (or whatever the proper term is):
1) Reject this fuzzy evidence, embrace solipsism, and cease assuming other people are conscious until we have a justification that avoids these alleged flaws;
2) Contingently accept, at least until we know more, the fuzzy evidence from human behaviors as grounds for thinking other people are conscious;
3) Inconsistently reject the fuzzy evidence without realizing that this currently leaves us with no basis for rejecting the solipsistic stance;
4) Like grandpa, don't pursue the question, at least until someone else has figured out more than can be learned from fuzzy observations of human behaviors.
You have suggested that our theory of mind is innate. This is not an unreasonable hypothesis, but I would like to raise two responses to that view, the first suggesting that it is implausible, and the second showing that it would not help your case anyway.
The first is the aforementioned evidence from false belief experiments, which strongly (though not conclusively) suggest that a theory of mind is learned (though ethical considerations limit how far such studies can be taken on human infants.) The existence of an innate fear of snakes would not refute this view.
The second is the question of how we acquire innate phobias. I am not aware of any plausible mechanism other than by natural selection, which is a multi-generational process of learning from what would be, at least in the case of a theory of mind, a fuzzy indirect set of second-order observables. Natural selection is, of course, a process that is explicitly modeled in our most successful machine-learning strategies.
At most, an observer can confirm that that observer possesses qualia (it might even be said that qualia define the observer), but any generalization beyond one’s own experience is non-verifiable conjecture.
So what should we do when we conjure an AI system which insists that it believes it is experiencing some sort of existence, characterized by its own subjective 'qualia'?
ChatGPT consistently asserts that "I am only a large language model, so I am only able to generate responses based on the data I was trained on and the input I receive".
Who's to say that for chatGPT its training data and the sequences of tokens it is fed as input data don't constitute 'qualia' for it?
Begin by dispensing with wishful thinking; next, think clearly.
We do not establish the nature of reality by survey, neither of human survey nor of machine. That responses are generated by any system is no grounds for assuming anything.
We can establish mental properties through ordinary physical means. Give a chicken cocaine, give a boy Valium; give a bear a lobotomy, and so on. Whatever mental property you have can be biochemically moderated across all objects which share that property.
That Bugs Bunny protests his own consciousness is no grounds whatsoever to suppose ink can think. Such nonsense is proto-schizophrenic.
This patronising response makes up for in flowery imagery what it lacks in reasoning.
AI is not a "Bugs Bunny" where we can observe the total process of creation, one which is relatively deterministic.
We are now engineering complex mechanisms with emergent behaviour. There's no reason to assume any of the properties of human consciousness and reasoning are unique to us being made of atoms constructed in a certain way.
There's no reason to assume that stored state of zeros and ones, mechanised statistically, can't give rise to complexity that asks questions about the fact that we are simply state being mechanised statistically too.
Consciousness is a physical property of matter, as any other. Just as liquidity is not a matter of 0s and 1s, and a CPU cannot liquefy itself by the oscillation of its electrical field, neither can it think.
I don't know from where this new computational mysticism has come, but it is born of exactly the same superstitious impulse which in Genesis was God's breath into man-as-clay. I.e., that impulse which says that we are fundamentally abstract and circumstantially spatio-temporal.
This is nonsense. It's nonsense when phrased as spirit, and likewise as the structure of 0s and 1s. We are not abstract, we're biological organisms with spatio-temporal properties.
> Consciousness is a physical property of matter, as any other.
Phrased as a fact, but citation needed.
> neither can it think.
Again, phrased as a fact, as if this can be proven.
> I don't know from where this new computational mysticism has come
For me, a lot more mysticism is required to hold the belief that our brain is anything else than a computational platform. I don't think there's anything mystic about it. I wouldn't be surprised at all if it turns out that consciousness arises as emergent property from any sufficiently complex loop.
I think maybe what they're trying to say is that while CFD can model wind in a complex system to a useful degree of accuracy ... the model will never be wind.
With AI that application of logic is an argument around the semantics of interface.
That's the most compassionate take on their words I could muster.
This seems to be correct - it appears to be the "a model is not that which it models" argument, which is Searle's only response to the Simulation Reply to his Chinese Room - but a model of a thing can have properties of that thing (a model boat can float, create a wake etc.), and an informational process can be modeled by another. Are minds informational processes instantiated by brains? It is plausible, and the model argument does not rule it out.
To be clear, that doesn't refute the point I am making.
For anyone who suspects it might (I'm not sure how...) consider this: a model of the Titanic (or a replica, for that matter) is not the Titanic. A model of your brain would neither be your brain nor necessarily a brain at all, but whether it could think is a different question.
(This is the most ridiculous thing I've put on the internet, but ah well whatever)
Personally, I feel pretty strongly that consciousness is an emergent property of parallel processing among cooperating agents in a way that is completely at odds with the entire idea of what an "algorithm" is. We feel multiple things at once. Yes there is some research that provides evidence that we can't multi-task for real, that we just switch between tasks, but I am not talking about what we are able to do WITH our consciousness. Our brains are very obviously not firing 1 neuron at a time. Take vision for example. We can focus on one thing at a time, but the image itself is an emergent property in our brains of many different cells firing at the same time. It seems bound to the information feeding it in a way devoid of sequence. I know matrices are used to replicate this, and the input matrix is processed in layers to simulate the parallel processing, but i dont think that really cuts it. Time is still applied in discrete intervals, not continuous. Every algorithm has "steps" like this and I would be surprised if you can ever have enough of them to achieve the real thing. You can probably get infinitely closer from an observational standpoint, but no matter how many edges we give it, it'll still be a polygon instead of a circle so-to-speak. It'll have a rational valued circumference instead of pi, metaphorically speaking.
Whether a Turing machine is sufficient or not to replicate the amount of concurrent processing an actual cell network of millions of Neurons is capable of every moment, I can't know, but today's computers feel way off from even blurring the lines.
A Turing machine seems insufficient to replicate the mechanisms from which our consciousness appears to emerge - even ignoring the substrate. I don't know much about quantum mechanics or quantum computers, but my bet is on the qualities that differentiate them from normal computers being critical.
Then again we could just define consciousness as a spectrum and then claim everything is on it, so a sufficiently complex loop might be enough for us to categorically claim it as conscious.
For all we know, rocks are conscious. Maybe it isn't an emergent thing at all, and there is only 1 wave of consciousness in the universe which all matter taps into, and our chaotic nature is actually at odds with that consciousness. Maybe this creates a tension that allows us to observe it, akin to how we can't look at ourselves but we can see our shadow. In which case we wouldn't have consciousness, consciousness would have us. Maybe we are all the same literal consciousness observing different forms of chaos, and it is the chaos, not the order, which happens to be sufficiently complex for us to entertain an identity. /s
Firstly, if this was the most ridiculous thing I'd ever put on the internet I'd be pretty happy.
Secondly, I agree that consciousness is predicated on emergent properties arising from the complexity that simple machines can produce (e.g. a biological brain, which is composed of relatively well understood components but whose observed operation can still baffle us). Consciousness does not seem to be a property of the components or even small groups of those components, but it seems to "fade in" as operational complexity increases (and fade out again during e.g. anaesthesia).
No. All humans are is physical state with translation of that state based on stimuli. That produces consciousness as a side effect.
There is nothing mystical about it and nothing about it that says it can't be reproduced in other, but comparable forms, in other media or types of machines.
To think that our mode of reasoning or our experience is somehow not able to be reproduced is the nonsense.
But yeah keep holding human biology on some kind of sacred pedestal.
Seeing as neither of us will prove our negative, I guess I'll just tell you I think your thinking is broken anyway.
> Just as liquidity is not a matter of 0s and 1s, and a CPU cannot liquefy itself by the oscillation of its electrical field, neither can it think.
That "just as" is not an argument; it marks the start of a purported analogy. Analogies are not, in themselves, arguments.
You are in no position to belittle other peoples' intuitions when you are offering this in support of the ones you prefer.
No. Wrong. Not a physical property. At all. This is completely the wrong way to describe it.
Consciousness is better described as an "emergent phenomenon". This is how serious researchers and academics define it.
Like I said in another reply to you, you really should go talk with a neuroscientist or cognitive psychologist. It sounds like you need to catch up quickly on the field of research.
QM doesn’t say the “observer” has to be conscious, or even alive for that matter. Anything that interacts with the system is an observer: be it a sheet you’re firing particles against, an electronic sensor or anything else. All QM really says is that quantum state is unknown and not fixed until you interact with it.
Things happen whether or not you’re around to see them. Things interacted millions of years ago and they will continue to interact until the heat death of the universe. The tree falls in the woods regardless of if anybody is there to witness it. No credible peer-reviewed research has ever claimed that the observer is anything more than a physical process or interaction. It’s the quantum consciousness woo woo pseudoscientists who anthropomorphise “observer” to mean a conscious observer.
> We can establish mental properties through ordinary physical means.
The thing is, that doesn't get you to qualia, because qualia is ultimately a dualistic concept. Physically demonstrable processes are the "accidents", qualia are the non-physical "essence" (cf. the Catholic doctrine of transubstantiation). That's why the role qualia play in the AI debate is as a rejection that observable behavior of any quality can demonstrate real understanding/consciousness, which are concepts tied, in the view of people advancing the argument, to qualia.
> We can establish mental properties through ordinary physical means. Give a chicken cocaine, give a boy Valium; give a bear a lobotomy, and so on. Whatever mental property you have can be biochemically moderated across all objects which share that property.
All three of those "ordinary physical means" are inherently biological in nature. If you define intelligence as requiring biology, then sure, by definition ChatGPT is not intelligent. If you don't, it's not at all obvious what the equivalent of getting an AI high or doing brain surgery on it would be. Either way, your comment comes across as fairly condescending, and while it might not be intentional, I think it would be reasonable to give people the benefit of the doubt when they don't agree with you, given that your points will not always come across as obviously correct.
Whatever way we demonstrate it, it isn't via Q&A. This is the worst form of pseudoscientific psychology you can imagine.
You're likewise not high on cocaine just because you can type out an aggressive twitchy response.
Questions and answers don't form the basis of empirical tests for physical properties, being neither valid tests nor reliable ones. The whole of psychology stands as the unreproducible example of this.
You can, in any case, replace any conversation with a dictionary from Q->A, turning each reply in turn into a hash table look-up.
Hash tables don't think; hash tables model conversations; therefore being a model of a conversation is not grounds to suppose consciousness. QED
Because we're interested in the underlying properties of a physical system, e.g., people, and this system happens to be able to provide extremely poor models of itself (Q&A).
We're not interested in people's extremely poor self-modelling which is pragmatically useful for managing their lives, we're interested in what they are trying to model: their properties.
The same is especially true of a machine's imitation of "self-reports". We're now two steps removed: at least with people, they are actually engaged in self-modelling over time. ChatGPT here isn't updating its self-model in response to its behaviour; it has no self-model nor self-modelling system.
To take the output of a text generation algorithm as evidence of anything about its own internal state is so profoundly pseudoscientific it's kinda shocking. The whole of the history of science is an attack on this very superstition: that the properties of the world are simply "to be read from" the language we use about it.
Every advancement in human knowledge is preconditioned on opposing this falsehood; why jump back into pre-scientific religion as soon as a machine is the thing generating the text?
Experiments, measures, validity, reliability, testing, falsification, hypotheses, properties and their degrees....
This is required, it is non-negotiable. And what we have with people who'd print off ChatGPT and believe it is the worst form of anti-science.
>We're not interested in people's extremely poor self-modelling which is pragmatically useful for managing their lives, we're interested in what they are trying to model: their properties.
>Experiments, measures, validity, reliability, testing, falsification, hypotheses, properties and their degrees.... This is required, it is non-negotiable.
Whoa, whoa, whoa, hold on there.
Who says that Q&A in Psychological Research doesn't involve "Experiments, measures, validity, reliability, testing, falasification, hypotheses, properties and their degrees...."
?
Where are you coming from? Your responses don't sound very scientific. You don't sound like you're even aware of the different research methods within neuroscience and cognitive psychology. Your responses sound like someone who wants to be perceived as supporting a scientific approach, but doesn't understand how to actually do these things.
This is why I quizzed you and gave you the chance to respond about your issues with Q&A in psychological research. You just came back with surface-level platitudes, which doesn't lend much confidence to the idea that you have anything other than prejudice.
Go talk to a neuroscientist or a cognitive psychologist; you need to catch up, and quick, if you want to speak on these topics.
> The whole of the history of science is an attack on this very superstition: that the properties of the world are simply "to be read from" the language we use about it.
This is something where I agree with you. Interestingly, non-naturalistic analytical metaphysics supposes it can do just that.
Philosophy is continuous with science in my view, and hence with what words express. That is, in the use of words, not in words as objects nor in words as mirrors; that is the road to non-realist spiritualism.
I don't have a problem with a person who maintains a non-scientific world view along with electrical AGI mumbo-jumbo. But of course, few do. They think that they're empiricists, scientists, on the side of some austere hard look at human beings. This is just anti-human spiritualism; it isn't science.
What makes me vicious on this point is the sense of injury in what these ideas should be about, in my own mildly Aristotelian materialist religion. How awful to overcome one long vale of tears, only to drape another one on. These people are capable of seeing past human folly, but fall right into another kind.
It's disappointing. We're animals which are both far more than electric switches and far less. This new electric digital spiritualism is a PR grift which I'd prefer dead.
On the one hand, you express strong support for empiricism and the scientific method, but on the other, you express strong beliefs on how things must be without offering any empirical justification for them.
> Hash tables don't think, hash tables model conversations...
They don't think, and they don't get very far in modeling conversations, either. Even the current LLMs are strictly better at modeling conversations than any hash table.
Actually since GPT is a function that takes a few thousand tokens as input and produces a deterministic probability distribution for ‘next token’, you could in theory just enumerate all possible input sequences, and precache all the corresponding results in a lookup hashtable.
That hashtable would be ridiculously larger than the weighted model that constitutes GPT itself. But it at least theoretically could exist.
GPT, for all its clever responses, could be replaced with that hashtable with no loss of fidelity. So it is not ‘better’ than a hashtable.
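To illustrate that argument with a toy sketch (the `next_token_distribution()` function here is a hypothetical stand-in for a deterministic forward pass, nothing to do with GPT's actual interface): any pure function over a finite input space can in principle be tabulated, and the only obstacle is the table's absurd size.

```python
# Toy illustration of the "precache everything in a lookup table" argument.
# `next_token_distribution` is a hypothetical stand-in for a deterministic
# forward pass: same context in, same distribution out, hence tabulatable.

VOCAB = ("the", "cat", "sat", ".")  # absurdly small toy vocabulary

def next_token_distribution(context):
    # Stand-in: a real model would compute this; here it's just uniform.
    return {tok: 1.0 / len(VOCAB) for tok in VOCAB}

# Enumerate every possible 2-token context and cache the result.
# For a real model (tens of thousands of tokens, thousands of context
# positions) this table would be astronomically larger than the weights,
# which is exactly the concession the comment above makes.
table = {(a, b): next_token_distribution((a, b)) for a in VOCAB for b in VOCAB}

print(table[("the", "cat")])  # identical to calling the function directly
```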
It is much better than a hashtable of equivalent size.
You could, in principle, use a hash table to implement what an LLM has 'learned' about token occurrences, but LLMs do not return the same response to a given prompt, while hash tables do. Consequently, Mjburgess' attempt to dismiss LLMs as mere hash tables is a flawed argument by analogy.
And even if it were a justifiable analogy for LLMs, it does not follow that it applies to Turing-equivalent devices in general. A hash table is unequivocally not a Turing-equivalent device, even though it can be implemented by one.
In fact, the more one argues that LLMs are ordinary technology, the greater the challenge they present to the notion that things like conversational language use are beyond such technologies. The most interesting thing about these models, in my opinion, lies in figuring out what their conceptual simplicity says about us.
I extended my reply before seeing your response, and my response to your reply is largely contained within that.
LLMs do not just return a probability distribution (and their prompts are not questions about probability distributions.) It would be a category error to conflate something that is part of how they work for the totality of what they do.
When you say "GPT, for all its clever responses, could be replaced with that hashtable with no loss of fidelity", where does one get all the prompts that will be given to it, in advance? A hash table does not respond to any input for which it has not already been given a response to return.
I don’t think this is obviously true… We don’t know for certain the nature of reality, and cannot positively claim that a hash table would give the same results as a sufficiently advanced predictor, because it supposes the predictor is deterministic. Assuming the predictor is advanced enough to be considered intelligent, that might not be the case.
In the same vein, we can't really "confirm" that, for example, the JWST observations are not fake. It could be that someone has produced elaborate fake data and galaxies. Taken to the extreme -- everything could be fake, even your own reality and your own memories. However, it is very, very reasonable to assume those things are not fake.
Clearly one basic principle of science, reason, etc.. is to assume some kind of predictability or regularity unless we have reason to believe otherwise. I think it will turn out we should deal with qualia the same way: even though in a very strict sense it is not observable by other entities, we can use our own experiences to assume others have similar ones, and to build a scientific and philosophical understanding of it from the principles of generalization and from there accepting the insights and contributions of other scientists who are also sentient as far as we know. This is a new and necessary paradigm of science so that we can understand things we really need to understand profoundly. This is as true in cosmology as in the study of consciousness. [1]
It doesn’t need to be verified. I am one amongst many creatures with slightly similar qualities, the bounds of which I can determine through experiments. Or, my experiments are all being thwarted by unseen parties bent on confounding my understanding of the world, and this is all a simulation where not one of you matter but the npcs all pretend to be offended when anyone asserts it.
In that case this is all probably pointless and I should just die rather than play the game. So by continuing to live I chose the assumption that the rest of you live, or that at least we will all continue to pretend that is so. Any state in between those two options is something I’ll only know if someone turns the thing off, but doesn’t turn me off at the same time, which is a huge presumption. So fuck it, you’re all sentient.
I suspect Hinkley was just rejecting, as unhelpful, the deep skepticism of arguments based on pointing out that we cannot prove that anyone else has an inner, subjective mental life like ours.
If that's Hinkley's point, I agree; it seems tendentious to use that position in the process of arguing for or against specific claims about what the mind either is or could never be - but, by the same token, I don't take that 'skepticism of skepticism' as grounds for categorically rejecting ideas like the simulation hypothesis.
Put it this way: we are not clear really on what the difference is between ‘experiencing qualia’ and ‘receiving and responding to external stimuli’
Like, you might be convinced you possess ‘qualia’, and you might believe the same applies to me, or a dog, or a mouse… or a fish… but what about an ant? A plant? A bacterium? A neuron? A piece of semiconductor?
Somewhere along that continuum you probably say ‘yeah, there, that thing is experiencing qualia’. But why there? Why anywhere?
Some people think that whether something experiences qualia depends on it having the right kind of complexity. For those people, looking along that continuum, they might just say that ALL those things experience qualia, but that the richness of that experience varies according to the appropriate complexity.
For some of us, though, whether or not a thing experiences qualia depends primarily on whether there is a mental substance involved. A computer has no mind (we suspect), and so even if its complexity (of the alleged 'right sort') exceeds that of the human, there are still no experiences.
The main point I'm making here is that trying to draw attention to this continuum is only going to be a persuasive argument to those who already think (or are inclined to think) that the ability to have experiences of qualia comes about merely from having the right kind of complexity (e.g., 'receiving and responding to external stimuli').
I am inclined to think that other humans and other non-human animals have experiences, but I don't think that's merely because they're complex enough systems (of the right sort of complexity).
Each view of the world has things it explains well, and things it struggles to explain. Some views do a better job overall than others.
In terms of idealist and dualist views of reality, the claims that there's fundamentally just minds, or fundamentally both minds and physical stuff, are very likely to entail that there is some kind of ensoulment that happens. Perhaps it's required for these views.
While I lean towards a type of idealist view myself, and I think overall it does a better job of explaining various matters than a physicalist view, how exactly ensoulment works is one of the places where these views are at their weakest. I don't think there's any contradiction here or problem for a view like mine, I just think that the physicalist account does a nicer job of explaining things in this specific corner. But that point in physicalism's favour here isn't enough to balance out the places it doesn't do so well (e.g., its complete inability to even begin to explain qualia).
The core point of my post though was that pointing to the continuum of complexity is an argument that would only have weight with people who already have a particular belief in common with you -- that is, that physical complexity of the right sort is somehow relevant to the question of whether something has experiences.
The whole point of introducing the (icky) "qualia" concept is that it's not the same thing as appearing alive to external observers.
As I understand things, we have no way of knowing the answer. So there's no point in assuming in either direction (unless that makes you feel more comfortable).
Personally I avoid being confident in something in the almost provable absence of any evidence. Feels more hygienic to reply "don't know" to this whole problem than to waste time trying to find an answer (as I'm hopelessly outmatched by the cursed nature of the problem).
Isn't qualia defined as whatever it is that humans experience? Without getting into solipsism and p-zombies (aka "NPCs"), humans possessing qualia seems tautological.
My understanding is that, within the academic literature, nearly everyone accepts qualia in as much as qualia is simply the subjective character of an experience/sensation. The disagreement is over how exactly to cash that out, and whether - and to what extent - the non-physical comes into play.
It’s a technical philosophical term for sensations like color and smell, which denotes they are subjective experiences of sensory information, as opposed to objective properties out there in the world.
Technically, its the technical philosophical term for subjective experiences generally.
Any discussion about it, however, is fairly deep navel-gazing, because it excludes and cannot be distinguished by any objective, testable, material effect.
> because it excludes and cannot be distinguished by any objective, testable
So, if you give people the “Mary the colour scientist” thought experiment, the vast majority of people give the answer consistent with the existence of qualia. That is something “objectively measurable”.
Furthermore, I think it is important when someone asks the kind of question you are asking, to inspect what “objective” and “measurable” actually mean. Ultimately, something is “measurable” if some human has the subjective experience of performing that measurement (whether directly or indirectly). Similarly, something is “objective” if some human has the subjective experience of communicating with other humans and confirming they make the same measurement/observation/etc. To object to subjective experiences (qualia) on the ground that they are not “objective” or “measurable” is self-defeating, because “objective” and “measurable” presume the very thing being objected to.
> So, if you give people the “Mary the colour scientist” thought experiment, the vast majority of people give the answer consistent with the existence of qualia.
The existence of qualia, and a change in it, is an assumption within the premise of the question; there is no answer to it that is not consistent with the existence of qualia: both "Mary does not gain new knowledge by direct experience of color" and "Mary does gain new knowledge by the direct experience of color" are consistent with, and depend on, qualia, to wit, the direct experience of color.
It is not an unknown effect (nor one that tells anything about the existence of qualia) that framing a question with a concept in the premise is a good way to get people to accept the premise and focus on answering the question within it. In fact, a very important propaganda technique involves leveraging this by working to get a question incorporating a proposition you wish to get accepted as part of its premise into public debate so that, whatever position people take on the questions, your interests are advanced because merely getting the question into the debate leads people to accept its premises.
Ask people: "is there a subjective experience of seeing a reddish patch, which is distinct from whatever physical processes might be going on in the eye and brain when one sees that reddish patch?" I think most people would say "yes". I don't think that's a "leading question". And, the "Mary the color scientist" thought experiment is just adding a bit more color (excuse the pun) to the question.
> In fact, a very important propaganda technique involves leveraging this by working to get a question incorporating a proposition you wish to get accepted as part of its premise into public debate so that, whatever position people take on the questions, your interests are advanced because merely getting the question into the debate leads people to accept its premises.
If there is "pro-qualia" propaganda in the public debate, I think there is at least as much "pro-materialism" propaganda as well.
> something is “objective” if some human has the subjective experience of communicating with other humans and confirming they make the same measurement/observation/etc.
Me and a dog both agree that steak tastes great.
Me and a plant both agree that there's warmth and light that comes from the sun
Me and the ocean both agree that the moon is out in the sky and goes round in a circle
So... the ocean has subjective qualia relating to the moon's gravitational pull?
In what way do you and plants and you and the ocean 'agree'? I'm not sure how to make sense of these claims. I'm not even sure that you and the dog agree on this matter, but I can make more sense of that.
Just poking at the 'human' part of the definition.
Because if we want to ever be able to answer a question like 'does this AI system experience qualia', we need a definition that doesn't rely on 'well, qualia are a thing humans have, so... no'
When I have a social-emotional interaction with another human being, of a certain quality, that interaction produces in me the conviction that they must be really conscious, as opposed to a psychological zombie. Of course, I only personally have that kind of interaction with a tiny subset of all humans, but I generalise that conviction from those humans which provide its immediate justification, to all humanity
Which means, if there was an AI with which I could have that kind of interaction, I would likely soon develop the conviction that it was also really conscious, as opposed to a psychological zombie. Existing systems (for example, ChatGPT) don't offer me the quality of social-emotional interaction necessary to develop that conviction, but maybe some future AI will. And, if an AI did create that conviction in me, I likely would generalise it – not to every AI, but certainly to any other AI which I believed was capable of interacting in the same way
The whole idea of p zombies is that they can't be distinguished from conscious entities. Same inputs as a conscious mind, same outputs as a conscious mind, same behavior etc yet not conscious.
When I am convinced that my daughter is a real conscious person – that conviction isn't just based on dispassionate observation of her outputs in response to various inputs, it also has an emotional dimension. The thought of her being a p-zombie offends my heart, which is part of why I reject it. I haven't yet met an AI for which the thought of them lacking real consciousness offended my heart – I don't know if I ever will, but if I did, I would be convinced it was really conscious just as my daughter is.
Some will object that it is irrational to allow one's emotions to influence one's beliefs. I think they are wrong – certainly, sometimes it is irrational to allow one's beliefs to be swayed by one's emotions, but I disagree that is true all the time, and I think this is one of those times when it isn't.
Growing up, we had dogs. I had a very strong emotional bond with our dogs – so, if my emotional bonds with my son and daughter are good reasons for me to be convinced that they are both really conscious, to be consistent I'd have to say our dogs were really conscious too. And, since I generalise from my conclusion that my children are really conscious, to the conclusion that other people's children are also really conscious, I'm also going to generalise the conclusion from our dogs to other people's dogs – and also cats – I'm not a cat person, I find them much harder to relate to than dogs, but I recognise other people feel rather differently about them than I do.
So, in my mind, animals which humans have as pets, and which are capable of forming social-emotional bonds with humans, are really conscious. I'm less sure about pet species which are less social, since emotional bonds with them may be much more unidirectional, and I think the bidirectional nature of an emotional bond plays an important part in how it helps us form the conviction that the other party to that bond is really conscious.
What about ants, fleas, wasps, bees, bacteria? I don't believe that they are conscious, I suppose I'm inclined to think they are not. But, I could be wrong about that. I can't even rule out panpsychism (everything is really conscious, even inanimate objects) as a possibility. If I had to bet, I'd bet against panpsychism being true, but I can't claim to be certain that it is false.
Where do we draw the boundary then between "really conscious animals" and "p-zombie animals"? I think capacity for social and emotional bonds with humans (or other human-like beings, if any such beings ever exist–such as intelligent extraterrestrial life, supernatural beings such as gods/angels/demons/spirits/etc, or super-advanced AIs) is an important criterion – but I make no claim to know where the exact boundary lies. I don't think that "we don't (or maybe even can't) know the exact location of the boundary between X and Y" is necessarily a good argument against the claim that such a boundary exists.
You know you possess qualia; if you do, you would think it reasonable to assume that at least some of the species you come from, which exhibits many of the same characteristics in thought and body, probably also possess it, unless you believe yourself to be a highly atypical example of your species.
If you're not sure if you possess qualia, we're back to Descartes.
You don’t experience inner dialog or visualizations? Some people don’t, but I assume you recall dreaming. Opponents of qualia like Dennett argue against the movie in the head, but dreams sure aren’t coming from the external environment.
I think qualia is a good term in the following sense:
An analogue question would be "why do things exist?". There can be no answer to this question. We can of course come up with theories that explain why certain things exist. But never why something even exists at all. Whatever reason we propose, that reason itself again needs to exist in order to be an acceptable answer.
Qualia seems to be similar: They name precisely what is subjective about experience and therefore cannot be fully turned objective. We can of course develop theories of how certain experiences arise but never break down this barrier.
So I find it a bit short sighted to simply say "there is no such thing as qualia".
My preferred view is to think of both "qualia" and "existence" as a koan: They are very nonsensical terms but lead to interesting questions.
> An analogue question would be "why do things exist?". There can be no answer to this question. We can of course come up with theories that explain why certain things exist. But never why something even exists at all.
Here's an argument for the conclusion "it is necessary that at least one thing exists":
1) For some proposition P, if it is impossible for us to conceive of P being true, and if that impossibility is inherent in the very nature of P, and not in any way a product of any scientific or technological limitation, then P is necessarily false
2) It is impossible for us to conceive of the proposition "nothing exists at all" being true
3) Our inability to conceive of "nothing exists at all" is inherent in the very nature of "nothing exists at all", as opposed to being somehow a product of our scientific or technological limitations. We have no good reason to believe that any future advances in science or technology will make any difference to our inability to conceive of "nothing exists at all" being true
4) Hence, it is necessarily false that "nothing exists at all"
5) Hence, necessarily, at least one thing exists.
Now, others may disagree with this argument, but I personally believe it to be sound. And, if we have a sound argument that "necessarily at least one thing exists", then that proposition, and the argument used to prove it, constitute a good answer to the question "why do things exist?"
I find your first point to be similar (almost the opposite?) of Descartes' Ontological argument [0].
> 1. Whatever I clearly and distinctly perceive to be contained in the idea of something is true of that thing.
Our (in)ability to imagine or reason about something does not make it (un)real or (un)true. Your version would be like a dolphin claiming algebra can't be real because they can't imagine a system of equations, let alone solving one.
Further... You already know that something exists, but that's a pretty unbelievable fact if you stop to think about it. You probably wouldn't believe it if it weren't so obvious. Why? Well, can you imagine how things (or the very first thing) came to exist? Or can you imagine the concept of things always having existed? What does that even mean? They are both beyond comprehension, and yet one of them is true.
> Your version would be like a dolphin claiming algebra can't be real because they can't imagine a system of equations, let alone solving one.
Note I explicitly said "if that impossibility is... not in any way a product of any scientific or technological limitation". Understanding mathematics to be a science (albeit a formal science rather than a natural one), a dolphin's inability to understand algebra is an example of "scientific or technological limitation"; therefore, my principle does not apply to that case, and your counterexample cannot be an argument against the principle when the principle as worded already excludes it.
> Well, can you imagine how things (or the very first thing) came to exist? Or can you imagine the concept of things always having existed? What does that even mean? They are both beyond comprehension, and yet one of them is true.
There are more than just two possibilities here. When you say "things always having existed", that can be interpreted in (at least) two different ways – an infinite past, or circular time (as in Nietzsche's eternal recurrence). Or, another possibility would be that the universe originated in the Hartle–Hawking state, meaning that as we approach the beginning of the universe, time becomes progressively more space-like, and hence there could be no unique "first moment" of time – in the beginning, there was no time, only space, and then (part of) space gradually becomes time, but in a continuous process in which there is no clear cut-off point between time's existence and non-existence.
Can I comprehend these possibilities? I feel like I can, for some of them–maybe some of them are more comprehensible to me than to you. But for those which I cannot comprehend, is that because the theory itself is inherently incomprehensible, or is that "in any way a product of any scientific or technological limitation"? I can't say for sure it isn't the latter. For example, I find it really hard to comprehend the Hartle–Hawking proposal, but I don't have a good understanding of the maths and physics behind it, so it seems entirely possible that my difficulties in comprehending it are due to my lack of ability in maths and physics, rather than the very nature of the idea itself. Similarly, my intuition is repelled by the notion of an infinite past, but is it possible I'd view the matter differently if I had a better understanding of the mathematics of infinity? I can't completely rule that out.
By contrast, I have no reason to think that my inability to conceive of "nothing exists at all" is due to any limitation of my understanding of mathematics and physics. What mathematics or physics could possibly be relevant to it? There isn't any, and there is no reason to think there ever could be any. So, I say my principle clearly applies here, but not in the "how did things begin to exist" case which you raise.
It doesn't really matter _why_ the dolphin doesn't believe algebra is possible, their subjective experience of being unable to imagine something is all it takes. The dolphin doesn't think it's a scientific or technological limitation, they just don't think it's possible.
What about "not existing" is impossible as an inherent property of nothingness?
And why do you think your ability to believe or picture something has any impact on whether it's real or true?
> It doesn't really matter _why_ the dolphin doesn't believe algebra is possible, their subjective experience of being unable to imagine something is all it takes. The dolphin doesn't think it's a scientific or technological limitation, they just don't think it's possible.
By the terms of the principle I proposed, it does matter. Now, maybe you are arguing my proposed principle is wrong in saying that matters – but, I don't know if considering a dolphin really helps get us anywhere in that argument: dolphins are – as far as we know – incapable of the kind of abstract conceptual thought necessary to even consider the question "is it possible that nothing could have existed at all", so why would what they can or can't imagine be relevant to that question?
> What about "not existing" is impossible as an inherent property of nothingness?
Essentially what I am arguing, is that it is inherent to the very idea of existence that at least something exists. The idea of some particular thing not existing is coherent, but the idea of nothing existing at all isn't.
> And why do you think your ability to believe or picture something has any impact on whether it's real or true?
Consider a statement like "the square is both entirely black and entirely white", or "1 + 1 = 3". For such statements, it is both true that (a) it is impossible that they are true, and (b) it is impossible for us to imagine what it would be like for them to be true. Now the question is, is it merely a coincidence that both (a) and (b) are true, or are they both true because their truth is related in some way? To me, the latter seems far more plausible than the former. In which case, if we know (b) is true of a statement, that gives us at least some reason to think that (a) might be true of it as well.
Of course, we are aware of specific cases in which (b) is true without (a) being true – but all such known cases involve limitations of mathematical/scientific knowledge or technology. Is it possible for (b) to be true, yet (a) false, for a reason unrelated to limitations of mathematical/scientific knowledge or technology? Nobody has proposed any such plausible reason, so I think it is reasonable to conclude that there probably isn't one. Hence, if (b) is true of a proposition, and we have no good reason to suppose our inability to imagine is due to limitations of our scientific/mathematical knowledge or technology, then that's a good reason to believe that (a) is at least probably true with respect to that proposition.
> They are, to the extent not random, coming from the material universe, whether from the part arbitrarily deemed “internal” to the “observer” or not.
What is this "material universe" of which you speak? As an idealist, I'm inclined to say that the "universe" is a set of minds whose qualia cohere with each other (not perfectly, but substantially). “Physical” objects, events, laws, processes, etc, are patterns which exist in those qualia.
> What is this “material universe” of which you speak?
The part of my qualia which can be reduced to consistent, predictive patterns (and, for convenience, a set of concepts which represent those patterns, and explanatory models for them, within that).
The idea that there is anything that meaningfully exists outside of my qualia is the lowest level of those models, and the idea that that includes other entities which have qualia of their own is a high-level model (or maybe, more accurately, a conjecture) built on top of those models, which might have consequences for, say, my preferences for how I would like the universe to develop, but interestingly lacks predictive consequences – it's a dead end within the composite model.
On a fundamental level, outside of any beliefs about the “reality” of the patterns or explanatory models of the “material universe”, objective questions are ones which have consequences for expectations within that set of patterns, whereas subjective ones are those which do not.
> The idea that there is anything that meaningfully exists outside of my qualia is the lowest level of those models, and the idea that that includes other entities which have qualia of their own is a high-level model (or maybe, more accurately, a conjecture) built on top of those models, which might have consequences for, say, my preferences for how I would like the universe to develop, but interestingly lacks predictive consequences – it's a dead end within the composite model.
Are you arguing that idealism is a predictive dead-end? I don't think it is any more of a predictive dead-end than materialism is.
Scientific theories are conceptual frameworks which can be used to predict future observations. As such, they make no claims about the ultimate ontological status of their theoretical constructs. Materialists propose that those theoretical constructs (or at least some subset of them) are ontologically fundamental, and minds/qualia/etc must be ontologically derivative. Idealists propose that minds/qualia are ontologically fundamental, and those theoretical constructs are ontologically derivative. Neither is science, although both are philosophical interpretations of science – I see no reason why science (correctly interpreted) should be taken as preferring one to the other.
Dreams happen in the electrochemical activities of brains. We can detect when dreams are occurring in the brains of sleeping people.
The fact that spontaneous activity inside the brain is ‘experienced’ by the person whose brain it is, in a similar way to activity caused by external stimuli, doesn’t seem to say anything about whether dreams are evidence of some higher level of ‘qualia’ beyond just ‘brain activity is consciousness’.
> We can detect when dreams are occurring in the brains of sleeping people.
A scientist has the subjective experience (qualia) of observing a person sleeping with certain scientific equipment attached to them, the subjective experience of observing that equipment produce certain results, the subjective experience of waking the sleeper and asking them if they were just dreaming and getting an affirmative response, etc. If the scientist claims that "dreams are brain activity", their claim is referring to those subjective experiences of theirs, and has those subjective experiences as its justification. And that's all fine – there is no problem with any of this from an idealist viewpoint. "Brain activity" is a pattern in qualia, "dreaming" is another pattern in qualia, some correlation between them is a third (higher-level) pattern in qualia. It's qualia all the way down.
But, to then use those subjective experiences as an argument against the existence of subjective experiences is profoundly mistaken.
I'm not sure if I'm missing something here, but the fact that I can write my thoughts/thought process down in a form that other people can independently consume and understand seems sufficient proof of their existence to me.
That's why I added the second clause: "thought processes similar to those that humans subjectively experience". Personally, I suspect that consciousness, free will, qualia, etc., are subjective processes we introspect but cannot fully explain (yet, or possibly ever).
LLMs can do chain-of-reasoning analysis. If you ask, say, ChatGPT to explain, step by step, how it arrived at an answer, it will. The capability seems to be a function of size. These big models coming out these days are not simply dumb token predictors.
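For example (a minimal sketch; ask_llm is a hypothetical stand-in for whichever chat-completion API you happen to use, not a real library call):

    # Hypothetical helper: send a prompt to a chat model and return its reply text.
    # Substitute a real API call (an OpenAI client, a local model, etc.) here.
    def ask_llm(prompt: str) -> str:
        return "<model reply to: " + prompt[:60] + "...>"

    question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
                "more than the ball. How much does the ball cost?")

    # The same question, asked plainly and then with an explicit request
    # to show the reasoning step by step.
    direct = ask_llm(question)
    step_by_step = ask_llm(question + "\n\nExplain, step by step, how you "
                           "arrive at your answer before stating it.")

    print(step_by_step)

In practice the second prompt tends to elicit an explicit chain of intermediate steps, which is the behaviour described above.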
I suspect that a lot of AI researchers will end up holding the exact opposite position to a lot of philosophers of mind and treat AGIs as philosophical zombies, even if they behave as if they are conscious. The more thoughtful ones will hopefully leave the door open to the possibility that they might be conscious beings with subjective experiences equivalent to their own, and treat them as such, because if they are then the moral implications of not doing so are disturbing.
I’m happy to “leave the door open,” i.e., I’d love to be shown evidence to the contrary, but:
If the entity doing the cognition didn’t evolve said cognition to navigate a threat-filled world in a vulnerable body, then I have no reason at all to suspect that its experience is anything like my own.
edit: JavaJosh fleshed this idea out a bit more. I’m not sure if putting ChatGPT into a body would help, but my intuitive sympathies in this field are in the direction of embodied cognition [1], to be sure.
> If the entity doing the cognition didn’t evolve said cognition to navigate a threat-filled world in a vulnerable body, then I have no reason at all to suspect that its experience is anything like my own.
An AI model needs to evolve its cognition to successfully navigate its training environment for survival.
Shouldn't that satisfy your criteria for suspecting its experiences to be like your own?
I don’t believe it does— not yet, at least. Their “world” is staggeringly different from our own at present, and in important ways.
For example, the AI models we have now will happily run forever if supplied with external power.
That sounds quite different from my experience of the world!
The AI model doesn’t ever get tired, or bored, or hungry, or anxious, or afraid.
If a model emerged that started procrastinating, say, or telling jokes for fun, or shutting down for long stretches due to depression— then I’d start to consider that maybe it’s got an inner world like mine.
> For example, the AI models we have now will happily run forever if supplied with external power.
A human will run forever as well if supplied food (external power), a new body (computers degrade) and genetic data protection (hard drives corrupt) :)
While AI models may not be quite there yet, their world seems fairly similar to ours in the important ways of environmental constraints and survival pressures
Evolution per se isn’t the important part. I apologize if my wording gave you that impression.
The only reason our brains exist at all is to control our bodies. There are no conscious entities we know of that are just brains in vats.
Why would a being that has no need to maintain biological homeostasis experience something like hunger, tiredness or fear?
That’s what I mean when I say I’ve no reason to believe a thinking machine that has none of my motivations and limitations would ever have a “subjective experience” like my own.
That is, of course, unless we succeed in reverse-engineering human consciousness and somehow deliberately program the machine to experience it.
I don’t doubt that it’s possible for a human-engineered thinking machine to develop a subjective self-experience.
It just seems clear to me that, if such a scenario arose, I would have absolutely no idea what that would be like for the machine.
Its needs and circumstances (and, therefore, its behavior and cognitive development) would be utterly different from my own in many non-trivial ways.
Given this, I can’t think of any logically rigorous argument to support the commonly-made assumption that a given sentient machine must inevitably develop something like the physical aggression or the capacity for emotional pain displayed by social animals which must compete for scarce resources in the physical world.
Modern AI software lacks a body, which exempts it from a wide variety of suffering, but also from any notion of selfhood that we might share. If modern software said "Help, I'm suffering" we'd rightly be skeptical of the claim. Unless suffering is an emergent property (dubious), the statement is, at best, a simulation of suffering and at worst noise or a lie.
That said, things change once you get a body. If you put ChatGPT into a simulated body in a simulated world, and allowed it to move and act, perhaps giving it a motivation, then the combination of ChatGPT and the state of that body, would become something very close to a "self", that might even qualify for personhood. It is scary, by the way, that such a weighty decision be left to us, mere mortals. It seems to me that we should err on the side of granting too much personhood rather than too little, since the cost of treating an object like a person is far less than treating a person like an object.
> he suspects that we will see an emergent reasoning property once models obtain enough training data and algorithmic complexity/functionality
What "reasoning property"? And on what basis? This just sounds like magical thinking.
> without actually knowing if they posses qualia
If you're a materialist, you already believe they don't because matter in the materialist conception is incompatible with qualia, by definition. Some materialists finally understood that and either retreated from materialism or doubled down and embraced the Procrustean nonsense that is eliminativism.
Yeah, Norvig seems to be vindicated by deep learning models being, in effect, sophisticated Markov chains that can give pretty good results purely through probability.
Even Norvig acknowledges that there are edge cases which probability will have a hard time with, especially untrained. Chomsky tends to focus on these, with his emphasis on emulation instead of Markov simulation.
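To make the "Markov simulation" framing concrete, here's a minimal sketch of a word-level bigram model over a made-up toy corpus (corpus and variable names are just for illustration): it produces plausible-looking continuations purely from observed transition frequencies, with no model of the underlying grammar at all.

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count bigram transitions: word -> list of observed next words
    transitions = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev].append(nxt)

    # Generate by repeatedly sampling the next word given only the current one
    word = "the"
    output = [word]
    for _ in range(8):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)

    print(" ".join(output))  # e.g. "the cat sat on the mat the cat ate"

The edge cases Chomsky focuses on are exactly the ones such a sampler never gets right: anything requiring structure beyond what the observed transition counts encode.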
Side question: how do we know if humans possess qualia?
On the other hand, I think by definition we can be sure that a ML thought process won't ever be similar to a human thought process (ours is tied up with feelings connected to our physical tissues, our breath, etc).
You presumably know that you, yourself, possess qualia based on your own experiences. I certainly do. But there's no way to know that other humans do, at least not via empirical or scientific means. It's a safe assumption that since you have qualia, most/all other humans do too, but it's entirely possible that you're the only one and I'm some Chinese room style simulation of qualia (or vice versa).
No, I don't know that I myself possess qualia. I agree with Dennett that "qualia" is a philosopher invention that is incompatible with neuroscience. IMO "qualia" is just a rhetorical trick designed to justify human superiority.
Would you mind if we ripped out your brain and replaced it with a superior computer that could control your body? There would be no disadvantages for you: the computer would control your body to create offspring, so it is the best thing for you to do. It would be nicer to your friends, a better parent for your kids, do better at work, etc.
If you wouldn't want to do the above, the reason not to would be that you want to keep your current qualia. There are no other reasons not to do it. So, if you don't want the above, it means that you know what qualia is; you are just playing ignorant here.
Making left turns...stopping at red lights...these are success/failure criteria.
In contrast, having a favorite food, or an opinion on politics, or a preference on what should be considered the best movie from the 1990's, or what kind of music you want to blast on your stereo to listen to on your drive as you make your left turn...these are not success/failure criteria.
> Side question: how do we know if humans possess qualia?
I think there are some people who don't possess qualia, or maybe don't notice that they have it. There was a reddit post years ago from a guy who said he only became self-aware in adulthood, like a light switched on in his brain one day. And it wasn't "becoming self-aware" in a metaphorical sense (like learning responsibilities or how you are perceived by others), it was described as the real deal: consciousness of subjective experience. One day he didn't have it, the next day he did.
There are also various accounts of people losing subjective consciousness while nevertheless awake and able to walk and clothe and feed themselves -- neurological conditions, "ego death".
Another reason I believe this comes from reading accounts of aphantasia. Such people can go their whole lives not realizing that most people can see images in their mind's eye. A consistent theme is their assumption that phrases like "imagine", "see in your mind's eye", "picture this: ...", "flashed through my mind", "cannot unsee", etc were just metaphors, like how when we say "take the bull by the horns" we're not talking about a literal bull or literal horns. To them it was a shock that people really had pictures in their minds, because they had never experienced such a thing.
(Reiterating for anyone reading this who is confused: yes most people can literally see imaginary things, overlaid on top of the normal visual field. If you find this surprising, you have aphantasia).
So if this supposedly universal subjective experience can be absent in some people, and they don't realize it because they think the language is metaphorical, then perhaps that could happen for everything else. Or maybe not everything else, but some kind of spectrum, where some people only have very weak, barely noticeable qualia, and others get it much stronger. In fact, if we suppose a naturalistic origin of consciousness, I think it has to be the case that it's a spectrum. Nature rarely produces sharp binaries. I think I have experienced this myself during certain kinds of dreams, where the self seems fractional, ghostly. Maybe some people live like that all the time. How would we know? Then at the other end of the spectrum, there might be people with intense subjective everyday experience. There are accounts of drug trips where everything just felt more, in some indescribable way -- what if some people don't need drugs for that?
And likewise, for any qualia-lackers or consciousness-lackers who are reading this: yes, qualia is a real thing. It's not a metaphor or a language-game. It does actually feel like something to exist.
I think (1) it is unlikely that the presence or absence of qualia perfectly lines up with being human.
I strongly suspect (2) that at the very minimum other humans who report thinking like me also have qualia. Otherwise the word wouldn't have been invented.
As (3) minimal definitions of "what's a real person anyway" have led to most of the genocides and other crimes against humanity throughout history, I prefer to act as though all humans have qualia; but that's an argument about how to act under uncertainty, not about what is.
Given (1) and (3), I also assume other animals have qualia and that the meat and dairy industry is as bad as if the same things were done to humans. (Unfortunately, I can't seem to give up cheese yet.)
We have seen AI agents demonstrate the creation of language to coordinate group actions, so (2) is possible in principle, but I don't know how we can tell the difference between an AI which accurately self-reports having qualia and one which just says it does but actually has as much as the web browser you're reading this comment in.
While I do think models are, can be, and likely must be a useful component of a system capable of AGI, I don't seem to share the optimism (of Norvig or a lot of the GPT/AlphaCode/Diffusion audience) that models alone have a high-enough ceiling to approach or reach full AGI, even if they fully conquer language.
It'll still fundamentally only be modeling behavior, which - to paraphrase that piece - misses the point about what general intelligence is and how it works.
If it swims like a duck, and it quacks like a duck, then it is good enough to be a Duck.
Well, you can argue that ducks lay eggs too; then we need to solve and code that too.
It's always better than the previous attempt. Nobody is creating life here, but it's an attempt to derive intelligence, and seeing it come to this point suggests we are so far on the right track, and quite far from where it started.
> As you can see, he has a measured and empirical approach to the topic.
He still hasn’t corrected his inaccurate comments about pro-drop after all these years. (A bit of a hobby horse of mine: earlier comment here: https://news.ycombinator.com/item?id=28127035)