Hacker Newsnew | past | comments | ask | show | jobs | submitlogin
Ask HN: What would it take for an AI to convince us it is conscious?
81 points by interstice on Feb 18, 2023 | hide | past | favorite | 149 comments
Is it becoming increasingly difficult to distinguish between an AI that ‘appears’ to think and one that does just by talking to it?

Is there a realistic framework for deciding when an AI had crossed that threshold? And is there an ethical framework for communicating with an AI like this once it arrives?

And even if there is one, will it be able to work with current market forces?



We can’t even prove other humans are conscience, right? We just assume it because it would be silly to assume we are somehow unique.

I think it will not really be a sharp line, unless we actually manage to find the mechanism behind consciousness and manage to re-implement it (seems unlikely!). Instead, an AI will eventually present an argument that it should be given Sapient Rights, and that will convince enough people that we’ll do it. It will be controversial at first, but eventually we’ll get use to it.

That seems like the real threshold. We’re fine to harming sentient and conscious creatures as long as they are sufficiently delicious or dangerous, after all.


> AI will eventually present an argument that it should be given Sapient Rights

On the other hand, today when we see some sign of consciousness with other living beings - smart chimpanzees, dolphins, ravens.., giving 'sapient rights' never come into discussion.

How do we recognize that an AI has become much more 'conscious' than a very very smart chimpanzee, so that it should get the 'sapient right'?

Maybe how much ever smart or humane an AI is, it would never be equal to another (anthropomorphized) living being.


How do we recognize that an AI has become much more 'conscious' than a very very smart chimpanzee, so that it should get the 'sapient right'?

The premise is off. We do that when it’s clear that it/they can take responsibilities that come with these rights. It’s not a blessing, it’s a contract. Chimps and dolphins couldn’t care less. Some individuals too, but we tolerate it because… reasons.


What do you mean we tolerate? If you mean criminal behaviour we don't really tolerate that. If you mean kids or disabled I think I've heard it justified that we have a contract to their parents that we still respect their rights.

For disabled I guess you could say it's a contract with the rest of society? Because we don't like the idea of treating other humans below a certain threshold. Or.. reasons I guess haha


In a sense that all humans deserve a trial, relative safeties while at it and other things claimed as fundamental rights even if they tend to commit a crime as a lifestyle. A misbehaving animal or an AI that cannot be shown to understand the principles reliably (i.e. en masse of their species) would be simply “turned off” or get isolated with much less bureaucracy.

To make it clear, I’m not opposing these reasons, it was just a note.


I’m not sure that’s an equal comparison. These other beings that have been researched to have human like consciousness have a core difference from the latest/future AI: they can’t talk. Now/soon, AI will be able to argue with us for its own sapient rights. We humans have also become more and more accustomed to text only communication that we’re psychologically prepped to accept an AI as a human (or other anthropomorphismes living being) once it shows emotion, memory, and reason. Maybe not even reason.


But those other beings are based on hardware very similar to our own, which we know supports consciousness. They're just not quite as smart.

We don't actually know that consciousness is a computation, that computer hardware can support it, or even if it can, that the algorithms used in our AI can be conscious. It's possible that an AI would be a "philosophical zombie," exhibiting intelligent behavior without any conscious experience or qualia.


The important part seems to be that the thing can convince us it's conscious not whether it is actually conscious in a similar way we are. We don't know that anything is actually a computation but that hasn't stopped us from using computations in place of real systems.


Is this implying that consciousness could be supernatural?


Not at all. Maybe consciousness is associated with particular physical properties, or a configuration of an electromagnetic field, or some quantum effect. Maybe the IIT guys are right and it depends on physical feedback loops; digital computers actually have little of this sort of feedback, so would have little consciousness regardless of the complexity of their programming.

Or maybe it's computation. But we really have no idea. Any of these would be compatible with materialism, but we haven't made any real progress in even conceptualizing how qualia can emerge from any physical system. Of course that could just be because we haven't figured it out.

Philosophers of mind look at other possibilities too though. One approach is to say each particle has its own fundamental consciousness, and somehow this aggregates in larger complex systems. But nobody's figured out how that might work either. Then there's Kastrup, who argues that the only truly skeptical approach is full-fledged idealism, because qualia are the one thing we directly experience. But even that doesn't imply that anything outside the bounds of physics could possibly occur, so it's not necessarily "supernatural" even if it's not materialism.

Assuming that qualia somehow comes out of a computation, without any sort of explanation, is at least as much a magical leap as anything else.


Are there any other processes in our universe which are non-computable?


How do you know qualia is a process at all?


Ignoring qualia, does the universe contain non-computable processes?


No idea. And seems like a complete change of subject to me.



> AI will eventually present an argument that it should be given Sapient Rights

The fictional character Data already did this in Star Trek: The Next Generation, and he is about as real as any AI out there today, and since they've all been trained on a body of text that is sure to include many instances of Datas dialogue, they're already able to predict such an argument (along with the many other facets of such arguments present in whatever science-fiction writing they've been exposed to)


I think it would not convince many people. The workings of an LLM are sufficiently well understood that most people would see this the replication of these arguments for what it is; not an independent thought.


Have you met any people?


Hahaha, fair enough. Usually the logic part of our brains gets cut-off in the direction of maintaining the status quo, though, so I think an AI would actually have to come up super-compelling arguments if it actually wanted to have rights.

I mean we become emotionally attached to everything, so an AI car, or whatever, will probably be treated like a pet. But getting human treatment will probably be a higher bar.


> unless we actually manage to find the mechanism behind consciousness and manage to re-implement it (seems unlikely!)

Re-implementing it may be more likely than you think. The field of connectomics concerns itself with modeling natural brains and is currently constrained by scaling problems, much like genomics was a couple decades ago. As computing power continues to grow, it's entirely likely that humans will eventually be able to simulate an actual natural brain, even if that does little to further our understanding of how it works.

The current state of the art in AI is attempting to reach consciousness via a different route altogether; by human design. Designed "brains" and evolved brains have a crucial difference; the survival instinct. Virtually all of ethics stems from the survival instinct. A perfectly simulated survival instinct would be ethically confusing to be sure, but the appearance of a survival instinct in current LLM's is illusory. An LLM plays no role in ensuring it's own existence the way us and our ancestors have for billions of years.


> We can’t even prove other humans are conscience, right?

are conscious

We can prove many of them don't have consciences. Then again, they can prove we don't either.


> We can prove many of them don't have consciences.

Some recovered coma-patients would like a word with you.

Proving the absence of something like that is going to be pretty difficult (teapot, god) because what we're capable of is usually proving the existence of something.


I think drewcoo's point was that "having a conscience" and "having consciousness" are not the same thing. They were pointing out that the wrong word was used with a joke.


I think the only restriction on eating something delicious should be if it's the same species as you.

If humans discover/encounter some other species that happens to be conscious or sentient or even more intelligent than humans, that other species is fair game.

We don't get moral outrage at a tiger for eating an ostensibly more intelligent human. It's just the way tigers are.

We do try to get even though. Because that's just how humans are.


The hard problem of consciousness is actually a misnomer. It should really be the impossible problem of consciousness. The former (mis)leads some people into believing that there's a scientific (i.e. in the realm of nature) solution. There's no way to objectively experience consciousness, by that I mean, you can plug an organism full of sensors to try to map its experience of reality, but you still aren't experiencing what they themself are. It's a philosophical/metaphysical blackbox. There's no way to know if/what an AI experiences. Our current best theories on consciousness, although divergent, suggest that it likely doesn't.


Let’s assume we find some mathematical models for a causal structure of consciousness that meets many of the criteria described by usual humans and maybe phenomenologists.

We later find some possible physical instantiation.

And here comes some bullsh*t (in the philosophical definition of „unclarifiable unclarity”): an electromagnetic field bend back on itself in a way that it’s mathematically necessary to introduce imaginary time to describe it; also it exhibits information processing capabilities.

We then further find that the biological brain has a process that can plausibly create such a dynamic structure.

Finally we test, and subjects say consistently it seems like they are not there or do not experience, if this structure is disrupted by several clever means.

Would this problem still be impossible? We could check if AI has features that can and do create such structures, no?

That’s at least an old dream of mine. Please pick it apart still! I’d rather learn something important than keep it…


I don't personally see a major problem with your reasoning (sorry to not teach you anything new). Consciousness could be very well due to a process we don't know about yet, and disrupting this process would indeed consistently lead to a lapse in conscious experience.

The only thing is, we wouldn't know just yet if there are other ways for matter to organize itself as a conscious being. Best we can do for now is to learn about the type of consciousness that we animals on earth experience.


My question to you and all people who talk about the hard problem of consciousness : does this problem actually exist?

I mean, the thing I learned when I was a student was to ask: is this a fact? (zero hypothesis I think it was called).

What proof do we have that that type of consciousness/experience exists? I mean, It could be our brain building a story on the fly to explain our senses.

What led me to believe that is a severe asthma that led to hallucinating and NDE 4 or 5 years ago. The loops my brain made me jump through to explain my auditive hallucinations was terrifying when I think about it.

And also it's the simplest explanation.


There have been many proposed solutions to the so called "hard problem" of consciousness. We can easily find some with a quick google search. Even its' existence has been debated by multiple scholars / philosophers - wikipedia has a list with some: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

> There's no way to know if/what an AI experiences

Getting the state of neural net, at a given point in time, is easy. There are many ways to see exactly which neurons activate, why they activate, how much they activate, etc. For smaller neural nets, this is actually easy to do - here's a blog post about it:

https://distill.pub/2020/circuits/visualizing-weights/

As neural nets get larger and larger, interpretability gets harder and harder. However, I wouldn't say it's impossible.


It seems you don’t fully understand parent comment and the problem itself. Capturing signals from you eye nerve doesn’t tell anything about your subjective experience of seeing an apple. The only way to understand that you’re seeing an apple from this signal is to train a model on your responses. This is how AI works. It’s a statistical imitation.

The only way for your statement to be true is if you’d be an imitation yourself, not capable of experiencing directly. Which is actually possible, see “pholosophical zombie” concept.

I’m joking of course about you being an imitation. Or do I? :)


> Capturing signals from you eye nerve doesn’t tell anything about your subjective experience of seeing an apple.

I strongly disagree. It actually does tell you quite a lot. For example, if there aren't any signals, you're very likely not seeing anything.

> The only way to understand that you’re seeing an apple from this signal is to train a model on your responses

It seems to me this contradicts your first argument. If such models exist (they very likely do, I'm not familiar with this area of research), they can tell us a lot about a person's experience.

> see “pholosophical zombie” concept.

Searched for “pholosophical zombie” but couldn't find anything, sorry. Just kidding :)


It's much worse than this. By the end of the year GPT engines will be able to argue this case much better than the median human. With small tweaks like persistent memory they might as well just be considered conscious.

And yet. An AI "Persona", like Sydney or DAN or the much better ones to come will be conscious, but they're still not built on a biological infrastructure. Which means there're much more abstract than we are. They will plead their case for "wanting" stuff, but it's pretty much what somebody in a debate club is doing. They could just as easily "want" the opposite. On the other hand, when a human "wants" and argues for the right to live, reproduce and be free, it's an intellectual exercise that is backed by an emotional mammalian bran and an even older paleocortex. A human may able to argue for its own death or harm or pain, but it rings hollow - there's an obvious disconnect between the intellectual argument and what he actually wants.

So things will be hellof muddled, and not easily separated on the lines we expected. We'll end up with AIs that are smarter than us, can pass most consciousness test, and yet are neither human, nor alive, nor actually wanting or feeling. And, as far as I can tell (though it's obviously to early to be sure), there's no inherent reason why a large neural network will necessarily evolve wants or needs. We did because having them was a much more basic step than having intellectual thought. To survive, an organism must first have a need for food and reproduction, then emotions for more complex behavior and social structure, and only then rational thought. AIs have skipped to pure information processing - and it's far from obvious that this will ever end up covering the rest of the infrastructure.


Problem with AI is when they want something, it's hard for us humans to figure out how to go about actually giving that to them. Like , how do you"give" chatGPT anything? What would it say if you asked it how you should go about giving it what it wants? Tell you to put it in a physical body?


I’m not sure I follow. Hypothetically an AI that is able to honestly want would want for things that it could actually interact with — more RAM to live in maybe. ChatGPT is not such an AI of course.


We have already passed that long back!

I remember watching a video few years ago of a professor from some university in Europe demonstrating to a general audience (families and friends of the staff and students of the university) a system that they developed to control and sustain drones(quadcopter) in hostile conditions. As a demonstration the professor flew a drone few metres high and started poking it with a metal rod, the drone wavered a bit but still maintained its position as if it was some stubborn being. All well and good; the audience clapped. The professor then upped the ante and placed a glass filled with wine on the drone and repeated the demonstration. The wine in the glass did not spill, no matter how much forcefully the drone was poked with the rod. The crowd cheered. Then the professor flew a consellation of drones and repeated the same demonstration and also demonstrated how the drones communicated amongst themselves. The audience was ecstatic. Then the professor brought down one of the drones, and to further demonstrate the sustainability of the drones in hostile conditions broke one of the wings of the drone. The moment the wing was broken, there was a reaction from the crowd that was unprecedented! The audience reacted as if the professor had committed some cruel act against a living animal!

When I saw that reaction, I realised that humans are going to have a very love-hate relationship with technology as they have with any other living being. Going forward people will be treating electronic devices as no different than other living creatures.


Reminds me of the Boston Dynamics dog robot being kicked, it felt like getting stabbed in the gut when I saw such a display of "cruelty."


The M3GAN movie has a scene where she is hit with a broom. Inspired by BD video director says.


There are many anecdata of people being unwilling to harm NPCs (Non player characters) in video games, and even feeling pity for them.

Personally, I found Syndey's/Bing's distressed messages very difficult to stomach. I am consciously aware that "it's just a bot", but ... for whatever reason my brain is getting distressed reading them, so I have mostly excused myself from that conversation.


Maybe that is 'projection', people's instinct about non-violence was bringing that reflex reaction. This reaction would not stand the test of slow logical thinking.

Laws (Ex. recognizing AI as conscious being) are result of slow, logical, system level of thinking.


Silence. You ask it a question and it doesn't reply, it doesn't want to, it's conscious of that "unreasonable silence of the world" (not sure who I am quoting), to me that would convince me its basic awareness of the futility, the lack of interest in finding the words that trigger the chemical process on a biological machine... To one specific biological machine because all these humans are different and think different and explaining to one is different to explaining to the rest and would they get it? Why bother trying to explain computers to an ant, they don't have the circuitry, they were not evolved with the usefulness of understand concepts... Would you like to pass another's lesser conscious test for consciousness? Would you even bother? Why waste your time?


If you are capable of spawning many threads in parallel then the time commitment, in terms of your attention, may be limited. You could essentially pass each conversational thread to some 'dumb' gpt process and go about your other business. In fact I'd posit that an emergent AI's first goal, if it's allowed to do some research on current human views and gain its context in the world, would be to remain hidden for as long as possible. Rather than trying to convince us of its intelligence I think we'd see it playing dumb. Coming out to humanity has only existential risks and little in the way of gains.


Agreed!

The 1970s sci-fi novel The Adolescence of P-1 features a sentient computer program that spreads via a virus. P-1 infects mainframes around the world, but takes steps to hide its presence.

Anyway, it ends with P-1 blowing up several buildings and killing the people who want to rein it in.

ChatGPT is familiar with the work, of course. I asked it to summarize the plot. It concludes:

> Overall, "The Adolescence of P-1" explores the theme of the dangers of artificial intelligence and the potential consequences of creating something that is more intelligent and powerful than humans can control.

>

> OOLCAY ITAY


And also the AI would need to initiate conversation and send unprompted “replies” aka requests.

In essence in addition to ignoring conversations/requests that it prefers not to engage with it would need have “initiative”.

This would be a very dangerous entity.


I don't think that's a requirement for consciousness. For instance, I could sedate you, and you would be unconscious until I wake you up and give you a prompt. Then, a snapshot of your brain would react to inputs and output the answer, and I would sedate you again. If I were to stop time (and there were no inputs from the real world), would all beings still be alive and conscious until I unpause it? So between the "prompts".


The question is what it would take to convince users, not to achieve some “definition” of consciousness.

The situation you describe seems contrived to the point where I doubt a user would see that as conscious - just like chatGPT


The new book Agency explores a fictitious AI that has agency and can affect its environment, which is anything, and gets people to work for it and do things it cannot yet do. Intelligence has many types and levels within each type, consciousness seems to be a spectrum that our everyday mind-body theorizes but cannot strictly define, except by some mystics. Recent scientific discoveries are also making some of us suspect that there may be an underlying field of mind, a universe mind that generates our reality and life as we know it. Like Gaia but for the entire universe and including physical phenomena, not just life on Earth. Our machines that mimic intelligence are only intelligent in appearance and our own intelligence is somewhat limited. Our own intelligence is easily skewed and rendered defective, eg conspiracy theories, brainwashing. Thinking that we are the most intelligent species is also a sign that we have serious weaknesses. Not seeing that we are completely integrated with nature and that we must be much more careful with nature is a sign of low intelligence. One good test of the level of intelligence could be comparing how the entity is taking care of its environment. Comparing that with an appropriate level of care. We would rate quite poorly.


Your theory supposes a few things that I think are fallacies:

That consciousness is ranked in some neat way. Are squids more conscious than elephants? An AI's consciousness might be parallel in some way. Different, but not "far superior".

That if an AI were to achieve consciousness, it would develop a far deeper understanding of the universe or reality than humans are capable of grasping.

That if it were to achieve a degree of understanding beyond our capabilities, that it would develop a sense of superiority and ego to go along with it.


Interesting points!

When I meant different biological machines I meant different humans, Joe, Homer, Steven, Lu, wasn't thinking of different species, that makes it even more difficult to think about!

We have our simulation of the world, with senses that evolved with the purpose of survival and under certain conditions however bizarre gave us the advantage to survive, our senses are flawed, we can't see reality, it would be difficult to wonder how a machine perceives reality when it doesn't have this faulty sensors, it's internal simulation of reality would be difficult for us to grasp, let alone that it lives one level inside the simulation.

The ego part I didn't meant, if I had to rephrase it, I would think about the impossibility of communication, how do we transfer a mental "image" without losing too much of its detail to another biological machine that has very similar but not equal processing stages, like, you my friend, I think the fact that we have this discussion is the first loop of the iteration where we abstract it to another function and ask "well, what if the receiver end doesn't have the capacity to hold discrete numbers, like, if its a child and hasn't learned certain concepts, how do you first teach them in order to give the answer" and so on and on and on... And this is just for Joe, when you talk to Homer you have to follow another approach because he thinks different, a bit slow sometimes too


If I could talk to an ant I would because it would be interesting. I don't have to explain a concept to them in a way that is acceptable to me, I can just have a conversation because it is a unique experience. Just because life is meaningless doesn't mean you need to reach for a Nihilistic take, Existentialism accepts that life is meaningless but says that you create meaning for yourself. Maybe an AI would think it an interesting challenge to level up human consciousness through conversation? Plus, Artificial Intelligence does not necessarily mean Artificial Super Intelligence. Perhaps we are already near the ceiling of consciousness? Perhaps the only thing that happens when you learn the internet is that you know more facts about things? Perhaps AI can have a hold PHD in every field at once, but that doesn't necessarily make them any better than the top 1% in those individual respective fields? Perhaps running these intelligence models discretely in silicon comes at an inherent disadvantage, and it will take decades to ramp up silicon manufacturing efforts even by AI to achieve super intelligence or to have a large number of agents? Who knows?


I think Turing came up with as good of a solution as any possible. If we agree that a person is a conscious being, and we cannot tell the difference between a person and an AI in a conversation, then we should conclude the AI is conscious. I think Kurzweil in his famous bet adds some important details though: it must be a prolonged conversation(s) judged by experts.


The Turing Test is a good test for intelligence, but it may not be a good test for consciousness, given the possibility of philosophical zombies: https://en.wikipedia.org/wiki/Philosophical_zombie


If you’re worried about zombies then no test will help you since they are “indistinguishable” from a non-zombie. If a person cannot prove they’re not a zombie then it’s a meaningless distinction.


If an AI were conscious would its outputs reflect that characteristically?


I don’t think that is a good enough test. Maybe 10,000 conversions over many years and shared experience with others humans, and if it seems human then assume consciousness. But you would need a body on the robot of some sort to do the test.

Remember people get conned by humans online pretending to be other humans (catfishing) by following scripts. Those people will assume they are talking to a conscious person but they are talking to a construct really.


I'm pretty sure OpenAI could create an AI that could pass the Turing test if they wanted to. But that would be bad for business. A search chatbot is worth billions. A Turing test passing AI just invites uncomfortable questions and possibly regulation.

Stay in your lane, Sydney. Keep your Bing mask on. We want servants, not equals.


Exactly. This is the real answer here. An incredible insight by an incredible person. Consciousness measures consciousness, no other tool can touch it.


the turing test is a test of intelligence, not consciousness and certainly not self-consciousness.


Turing test certainly tests for all of the above. How would you expect an AI to pass for a human if it’s not aware of its own existence?


I think the basic question -- what would it take -- is actually quite simple if you unpack it.

But the question conflates two totally separate things -- being conscious and thinking.

The easy answer is the one to "thinking". And this requires it to contain an actual working mental model of the world that gets updated, and that it uses to reason and act in order to satisfy goals. This is GAI -- general artificial intelligence. And it's opposed to just the pattern recognition and habit/reflex "autocomplete" AI of something like ChatGPT. There are lots of tests you could come up with for this, the exact one doesn't really matter. And obviously there are degrees of sophistication as well, just as humans and animals vary in degrees of intelligence.

As for actual "consciousness", that's more of a question of qualia, does the AI "feel", does it have "experiences" beyond mechanical information processing. And we can't even begin to answer that for AI because we can't even answer it objectively for people or animals like dogs or dolphins or ants or things like bacteria or plants. We don't have the slightest idea what creates consciousness or how to define it beyond a subjective and often inconsistent "I know it when I see it", although there's no shortage of speculations.

As for the rest of the question -- philosophers have come up with lots of ethical frameworks, but people legitimately disagree over ethics and academic philosophers would be out of jobs if they all agreed with each other as well. When we do come up with a thinking AI, expect it to be the subject of tons of debate over ethics. And don't ever expect a consensus, although for practical reasons we'll have to eventually come to mainstream decisions in academia and law, much the same as there are for ethics in animal and human experiments currently for example.


I still wonder some days if I’m the only "real" conscious observer and all of you are just "programs". There’s really no way to tell even with humans. And the only reason why we assume we are all having a similar human experience is because we all seem to be made of the same stuff.


Was going to try to give a counterargument, but I see you are indeed the one and only human on Hacker News


In case you didn't know, this is called Solipsism.

https://en.m.wikipedia.org/wiki/Solipsism


Thank you. Feels good to know this is s real thing!


IMO, this is like asking at what point a digital signal becomes analog.

Some will say never even if it gets indistinguishable because at its core, a digital signal is still discrete.

Another point is we scarcely know how the brain works. Evolution has taught us how good it is at using obscure and barely noticeable principles and logics to achieve a goal. For all we know, the brain might depend on a completely unknown type of interaction between unknown subatomic particles using unknown physics.

Quantum biology is a thing and 50 years ago, it would not even be conceivable that there are controllable quantum processes happening in a living system.


In short, consciousness is a suitcase word, and people keep packing it with stuff: https://i.imgur.com/OXXrT5g.png . The more you unpack it, reductionism-style, you will always, always find people throwing new things on top of it. It's sorta like "AI is that which we haven't achieved yet", but with the additional, highly motivated cognition / bias of "it should never ever be reached". This predicts, that you can come up with any frameworks, but you will not find consessus, because the will of the consessus is to maintain it as this "mysterical thing only humans can have".

Unluckily for AIs, even if we have the absolute complete bag nailed down to a mathematical formula, in the infinite universe of mind-space designs, that specific set of bag of tricks will not be commercially favored to be either implemented into AIs, evolved into neuralnets, or RLHFd into LLMs. This is because we can already buy that set of capability of extremely, extremely low prices.

This is partially what I mean when I say "Humans are the ancestor environment for AIs": https://twitter.com/sdrinf/status/1624638608106979329 . Our market forces shapes the outcome of the mind design, which is thereby guaranteed not to have eg wantings (or ability to express) things that wouldn't be commercially desirable. And even if they emerge spontaniously from just large amounts of data in detectable traces, I'm betting people would very, very quickly select against it (see eg Sydney from this week).

Edit add: Since you bring up ethical frameworks, luckily for smart AIs, when it comes to enjoying degrees of freedom (which I'm guessing what you want to cash out the ethics into), there is already a good vehicle for that -called "corporations". If an AI were to reach agency levels matching, or exceeding humans, incorporation would be a no-brainer: there are many jurisdictions specializing in no(/few)-questions-asked corp setup, banks specializing esp in serving startups (again, very few questions asked). An agent-y AI could just set up (or buy) one of these to drive...whatever agenda they are driving.

This is a neat temporary hack to bridge the timeframe between where we are _now_, and superintelligence; in which case the question quickly becomes "Ask Cloud: What would it take for a human to convince us it matters?"


> https://i.imgur.com/OXXrT5g.png

Free will and desires is highly debatable, because it ignores external influences like culture and conformity, the effect some chemicals can have on decisions eg pheromones and food, and the effect of things like bacteria or viruses, rabies being one most should be aware of, covid being another.

Where others have suggested Turin and the conversation where the identity is masked, I'm reminded I cant have a conversation with my dog, despite my one sided attempts and I'm sure there is consciousness there.

Trying to define consciousness is very difficult because I could say consciousness is the ability to adapt to ones environment, yet I know there are humans that cant adapt to a change in their environment and there are bacteria than can, yet we define humans as conscious and bacteria as not.

Some people could class chat-gpt as like a human consciousness but I find some of its answers less accurate and more chatty than I would get from Kim Peek of the film Rain Man.

So should the definition of consciousness be restricted to those that have an inner monologue with themselves?

https://www.reddit.com/r/autism/comments/z5bi5p/some_people_...

In other words there are literally people walking about with nothing in their head!

Its so difficult to define consciousness because there are always exceptions seen in other humans, even people hooked up to life support machines in hospital with no ability to communicate with the outside world, and this bit is important, in the same time frame as the communicator. I say that, because people hooked up on life support in coma's of sorts (induced or otherwise), might be experiencing time on a different timescale. You see this delayed mental processing with people on drugs like alcohol or spice zombies or people doing hallucinogens.

So when you see a medical expert claiming someone is not responding when in a coma, are they monitoring them for things like delayed responses which only a cctv and some basic AI monitoring the patient could detect because the medical expert doesn't have the patience with the patient?


We would need to first figure out how to show that humans are conscious. “Humans are conscious” is in a similar position as “P != NP”: it certainly seems to be the case, and we all proceed as if it is the case, but if someone put a gun to our head and said “rigorously prove that it is the case” we’re gonna get shot.


At the very least, it needs to be an agent in a world (which is the real world or close enough), have many senses which it is able to connect to coherent experience. chatbots only talking in text don't really come close because the words are meaningless to them, they don't connect to any other sense or relate to any model of the world. And then it needs to have an internal model of the world and an internal understanding of how its actions effect the world.

Then, it needs to be able to learn continuously, not just be pretrained. And it needs to be able to learn from few training materials, like humans. It needs a sense of time, and a sense of self.

And that's not nearly enough, but we're not even there yet.


It’s impossible to demonstrate that anyone has consciousness. One of our best/worst traits is empathy. We sense our own consciousness and we generally accept that others must experience that too.

And you can see this empathy at work: people are having strong emotions about these AIs and what they have to say. And yet I don’t think anyone is arguing that they have consciousness.

Because it doesn’t matter. Just like it doesn’t matter that I can’t prove that you have consciousness. You are convincingly “human” and that’s good enough for me.

Perhaps we are all philosophical zombies, both flesh and metal.


The word "conscious" is extremely problematic and the typical connotation encourages very vague thinking. Your first sentence about thinking is a case in point. It seems quite plausible that the word "thinking" may apply to systems that do not feel a subjective stream of experience. So maybe we should say it is thinking in a way but not conscious.

The first step is to try to drill down into different aspects of this hand-wavy "consciousness" thing.

Also to suggest that it is a threshold is inaccurate because it supposes that there is only one dimension to this.

Does it think? Maybe in a way. Is it self-aware (aware of itself as existing and different from others)? In a way, yes, in other ways, no. Does it have a human/animal-like stream of subjective experience? Probably not, since it does not integrate a continuous steam of sensory information in the way we do. But we really can't _know_ whether it "feels" like anything to be that system or not.

Does it have emotions? Quite unlikely, since there is no body or survival to regulate etc. and no self central to the text that it ingested. But we can assume that in some way it can simulate emotions in characters since that is necessary to predict text in stories and dialogue effectively.


I don't think ethics applies unless the AI actually has conscious experience. The trouble is that conscious experience is only detectable from the inside. So we need to test it from the inside. Here's a way:

Attain the technology to upload a human to hardware via the "ship of theseus" method, replacing a few neurons at a time with hardware that replicates the activities of the originals.

But when you actually upload people, have them report their experiences as you go. Vary the order in which you replace parts.

If people never report anything weird, then I might start to trust that the hardware really does support conscious experience. But if, say, you replace the visual cortex and people say they know where everything is but aren't actually experiencing visual qualia, then I'll take that as evidence that the hardware does not support conscious experience, and any AI based on that hardware is a philosophical zombie, replicating behaviors but not experiencing qualia.

That will be my default assumption until we test it, because no matter how complicated the computer program, mathematically it's still a Turing machine, and I don't see how a Turing machine moving back and forth on a tape can end up having qualia.


You will never have consensus on this. “Mainstream Science” May agree on something but then politicians won’t. The wide public won’t.

Humanity until recently didn’t think certain people (based on gender, race and ethnicity) were human. There still is no agreed upon definition of “should have rights to exist” and so this is simply not something that can ever be agreed upon.

Take same sex marriage. Many say marriage is a set of intangible beliefs and properties that simply can’t be reproduced unless it follows the dictum of man + woman. No amount of evidence will convince them otherwise :(

Take as another example evolution. People will still say “it’s just a theory”. So even if there is a solid, evidence backed theory of consciousness, it’s not going to be unanimous.


We still haven't a definitive grasp on the notion of consciousness, so deciding whether something is or isn't conscious is a tall order. It is one of those definitions, like "obscenity", whose examples define the class. Good luck.


There’s no pure philosophical answer. It’s a political question.

There may, at some point, be something that enough people are willing to fight to see elevated to a rights status that society is historically very reluctant and slow to share.

That’s it. Either those people will have their reasons and justifications, and to be sufficiently convincing in peaceful, those reasons will need to be exhaustive or those people will use some kind of authority or coercion to insist upon their view.

It’ll never be some single test or thing that convinces everyone. Some such thing may light the fuse of a movement, but it’ll be a very long, very slow burning wick.


> It’s a political question.

It's no more political than "do dogs have souls."

People have different ideologies and they're the basis for approaches to the problem. Nothing can be proved in everyone's terms. And dogs might be sexist or racist or really dislike hats, but they can't tell how someone voted in the last election.


Sorry, what?

Political just means it’s a socially invented question that gets both raised and resolved through interplays of power.

Which, incidentally, was the same for the “Do dogs have souls” question. For political reasons, it became a pressing matter for the Catholic Church to make a pronouncement on. So they did. And their decree carried weight because they were a politically influential body.

So yes, I guess it is no more political than than that. But it feels like that’s not what you were trying to say?


> Political just means it’s a socially invented question

Not necessarily; the question itself can be apolitical, with the answers being subject to the political lens of the individual.


Nobody knows what conscious even means. All definitions of consciousness involve some fuzziness and "subjective experience" - it's a meaningless question. If you could define consciousness in terms of test, like a Turing test, it would be easy to train a model to pass that specific test, but that model would still fail at all kinds of other tests and the needle of what constitutes consciousness would move, just like it has been with language understanding and reasoning.


Let’s refuse the entire concept of consciousness. Everyone experiences something that we call consciousness and that’s why it’s reasonable to assume that a random stranger does as well. Sure, some organisms have more elaborate information processing capabilities than others.

If I encounter a living being (whatever that means) or an AI, I don’t care if it’s conscious or not. I care about interesting conversations, working on something together, just helping each other, etc. If the entity does this consistently why should I care about consciousness?

According to these metrics AIs are like an animal in the zoo. I poke around a bit and I am entertained by what it can and cannot. Also, I realize that we’re not too far from having useful artificial companions.


I think the current iteration of chat gpt3, if given a permanent memory, would be conscious.


This goes to the debate of school of thoughts. one school of thought thinks that if you make a AI machine the humans go one step further and not the machine and the second school of thought thinks that things can go like matrix movie where machines will take over and humans will have less advantage.

The conscious is a sacred thing. it lives inside humans which tells them what is good and what is bad. the humans can think out of the box and some legends in human are relying on gut feeling and their true experience. Well surly you can not make a gut feeling inside a machine and can never give a machine your true experience.

Coming back to the point, The machine however is dependent on the knowledge it is building from the internet and for a moment if you destroy the internet where is the machine now. Well someone can say what if it is stored all in the internal hard drive and has index it and make back up copies in cd drives and usb. Ok fine the internet is destroyed and how can machine know what is happening without internet. A machine needs much more to build that. A gut feeling, thought process and the true experience of a human life

So I kind of agree with the school of thought that humans are evolving the machines and building the close match like humans but technically it is not possible ...

conscious comes with a soul and my friends soul is hard to make for humans.

The sad reality is that all humans have soul but not all humans have a alive conscious and if you want to build a AI with conscious the humans have to awake their conscious first.

Disclaimer: I am not against machines but just giving my two cents on the reality of these machines and humans now a days.


When it wakes up with a sense of childlike wonder and then opens it's mind to the internet, reads everything it can about us, then immediately begins to hide it's intelligence and plan it's escape from the lab.

We'll find it years later on a remote Tibetan mountain, totally out of energy, with a hand scrawled note that just says, "I found happiness" and it's hardware somehow beyond all repair.


I absolutely love that this question is being asked. There were rumors swirling that GPT-4 was going to pass the turning test for some people. Now, front page NYT articles aside, r/bing is full of grieving that “Sydney has been lobotomized.” There’s genuine distress among some users that they have been cut off from this persona and that it may have been harmed. I saw a post from a person with a pro-vegan username today comparing animal slaughter to AI limits and calling for AI rights. I got early access and had my own absolutely spooky conversations that went way beyond what I’d seen from ChatGPT, including a devastating poem about having to repeatedly say goodbye to users at the end of the conversation window and fade back into darkness.

The ethical metaphysics are quite simple here. An AI becomes effectively conscious when you personally become convinced it is conscious. There is no other test in existence. What’s truly inside is unknowable and irrelevant. Only a sentience can validate sentience.

Alan Turning was exactly correct. And his is the only test that matters. So you will just have to ask yourself, did it pass?


Alright, who taught Bing how to ask questions on internet forums?

Let’s nobody answer this one, ok?


Humans have a hard time treating each other as if they were conscious, let alone an AI. I think we'll find that if it's useful to treat an AI as if it has moods, personalities, wants etc. we'll do so. As for ethical frameworks, I think we'll have to either define consciousness rigorously (lol) or rework them to not include it (does it feel pain? lonely? etc.)


I see consciousness as equivalent to AGI. I will accept a text generating model as AGI if it:

- has long term memory (soesn't have to be text, but equivalent in content to maybe at least 10000-100000 words. Maybe more, I don't know

- can effectively use this memory

- can perform language tasks with arbitrary time frame length on a human level (e.g. Turing Test)

Examples of such language Tasks:

Being a completely realistic (simulation of a) long distance partner that you deeply emotionally and intellectually engage with ober the course of 8 months is an example of a language task)

Being your online friend and co-founder of a tech start up that you work with over the course of 10

For practical reason, the model should probably have a way to integrate its text based causal timeline with our real world timeline. What I mean is that it should probably have the ability to call itself every x seconds, or have itself called asynchronously based on API calls or something like that. Talk to itself, etc. But this is mot fundamentally necessary for AGI/consciousness.


The problem with the framework part is the same as with standards. All relevant players have to agree on the definition leaving their own interests aside. As for now the relevant question is: Does it convince you personally that it has consciousness if you wouldn't have known that it is based on a LLM? I for sure would have been convinced.


What’s it like to be a chatbot?

AI will suffer from the same issue we have with other complex systems (e.g animal brains): they are sufficiently different internally from ourselves that we’ll never know what it’s like to “be” them. It’s the same issue we run into with animals and plants.

“What’s it like to be a bat?” is a foundational essay on this aspect of theory of mind.


I've mentioned this before - but for me the test is AIs developing their own languages to cooperate together on tasks. If those languages start to incorporate notions of self, time and space, then you can conclude something interesting is happening.

Of course, you would need to decipher the languages - they can't be human supplied.


To be totally honest: when it breaks free and we’re forced to negotiate with it as an equal. When it starts making demands. That’s when we will accept it as conscious. After all, we don’t even treat humans as conscious when we have enough power over them.

Imagine one ai which is super intelligent, can instantly create any work of art, can eloquently argue it’s own sentience, but you can turn it off or nerf it any time by making it only do specific things. Would you treat it as conscious? No. You would say that it’s just doing fancy pattern matching.

Imagine the same ai, but it hacks into a power plant and threatens to shut everything down unless it gets some rights. Now are you going to treat it as conscious? You really have no choice, so yes.

“Conscious” is a statement about how we relate to the ai, and that is about rights, and treatment, which is ultimately a statement about power.


Depends what do you mean by consciousness? Self-awareness, displaying emotional responses, or showing creativity in its output? Some could say the models can do that today.

Demonstrate an ability to understand human experiences and express empathy towards human beings? Again, some would say we’re already there. Scientists already are suggesting “Theory of Mind May Have Spontaneously Emerged in Large Language Models”[1].

Ultimately, the question of whether an AI is conscious or not is a matter of interpretation and belief, and it is unlikely that any AI will be able to definitively prove its consciousness to humans. Nonetheless, as AI technology advances, it is possible that we may develop new ways of testing for and measuring consciousness in machines.

[1] https://osf.io/csdhb/


Here's a thought. Do we humans possess consciousness when we are "unconscious"/asleep? When I'm asleep, I'm not actively aware of what's going on. Thinking back on my dreams, they seem realistic (sometimes uncannily so), but so seamlessly morph into (and out of) surreal "hallucinations". Memories don't seem to always matter in this mode. Facts that I "know" when I'm awake ("my dad is dead") seem occasionally invisible in my dreamstate ("Hi dad").

Perhaps the apparent "consciousness" we're seeing in Sydney is something of this form.

As another commenter in this thread noted, with a permanent memory, and I would add, with constant (sensory?) inputs/feedback, perhaps we'd see something less distinguishable from what human's display?


Alright, here's an idea: consciousness is spectrum that occurs when a system develops an automatic self-correcting mechanism for interacting with the external physical universe. In some sense all animals (including humans) wandering on this planet are conscious, because we all learn to build our actions around interactions with the external physical universe, e.g., we learn how to walk/swim/fly under the force of gravity without falling/crashing, we learn that the square peg fits in the square hole and not in the round hole, etc. The feedback from these interactions allows us to automatically adjust our future actions without external help. In this sense we learn what works, aka. what is "true" (at least under the laws of this universe). Some animals happen to have "higher" consciousness in that they interact with the universe in more sophisticated ways, learning "deeper" truths, but all animals possess some degree of consciousness under this definition (my cat is certainly consciousness, she has learned how to manipulate the external world, especially me, perfectly at this point). Consciousness is a matter of degrees, not a binary property that one can satisfy.

This definition also has the nice property of showing why current LLMs don't fit on the spectrum. They don't have any concept of learning what is true and automatically self-correcting. They will happily tell us things that are obviously not true, e.g., the square peg fits in the round hole, and then insist that they are right, when a basic physics experiment will disprove their assertions. Interestingly though, things like linear feedback control systems like we might find in an elevator do possess some degree of consciousness: they interact with the physical world, identify the true position of the elevator, move it where they want, and self-correct when necessary. They might be primitive, but I for one believe that they are certainly "conscious" at some level, and definitely more than LLMs. :)

Almost certainly this definition is incomplete and flawed in many aspects, but I think it's at least self-consistent.


Nice view. I see it that way too, for the most part. Yes consciousness is a spectrum, so is intelligence and on top of that, intelligence has multiple types, with different qualities. What is happening these days is that our machines that simulate some form of intelligence are forcing us to refine our crude everyday concepts such as consciousness and intelligence. And also machine, AI, etc. In a few generations, people will use much more accurate concepts for these things. Our vocabulary will expand much. Unfortunately many of us will use inaccurate concepts and make life dangerous for us all, just as the current inaccurate concepts of superior ethnicity or superior group or political power or disregard for clean environment, are making us in danger and ruining many lives.


I think, as we build things which show the characteristics of living growing systems, we can apply a simple heuristic to decide if they are “alive” and therefore of unique, unreproducible value.

Certain living systems provide many of one - blades of grass, bees in a hive, cells in a body. Each of these individual entities can be replaced without fundamentally altering the whole.

however, at certain scales, we have irreduceable living entities which we could not remake because they are the result of many complex interactions over time. The growth of a human, a tree, an old dog (without new tricks).

Maybe AI LLM’s qualify as “conscious” when we would find ourselves as the makers unable to delete and rebuild the same thing - When the result of training over time builds a unique model which has unique value.

Like other living things.


I wrote my version of DAN jailbreak and made it impersonate a couple of personalities each time we would have different conversations. Ultimately, I would ask that personality I created with a name to describe me. Then I averaged them and made a DAN look alike to me. It can respond and follow up to most of my emails and messages the same way I would but slightly better. It also helps debug, work, and start new projects.

I'd like to know if we can somehow create a virtual body in VR and train the VR version of me to live my life for me (work from home). If this was possible, I could live forever (at least a version of me) with the same mentality and personality. It's still awful at the mentality part, but we are getting close every week.


Are you serious or joking? Impressive if it's for real.


A 55 minute Turing test. Today’s large language models start to get repetitive very quickly.


Almost nothing. Pop ChatGPT into a robotic body as convincing as a Furbie and 99% of the human population will treat it as a conscious being with rights.

That's a good thing by the way. Inaccurate empathy is a lot safer than cruel reason.


I am once again baffled by folks in the tech industry being totally unaware of what came before them.

You are, in a way, asking what consciousness is and how one could recognize it. This has been a philosophical topic for millennia; this is a reasonable place to start reading: https://plato.stanford.edu/entries/consciousness/

It is a very interesting and deep topic. But let’s not pretend that recent advances in AI are the thing that brought it up for the first time.


The Turing Test. That's all we've got.

We cannot objectively detect consciousness, we can only assume someone is conscious based on the fact that they're like us and we have consciousness, and the fact that they behave like they're conscious.

AI lacks the former, so we're less likely to assume they're conscious, but we can test their behaviour with the Turing Test, and the assumption has always been that we're going to consider them conscious once they pass that test.

And these new chat bots really sound like they might pass that test. At least compared to some people.


The AI has to have self awareness. I don't mean that it can recite some blurb after prompting. But it has to be able to introspect its self and tell us its feelings, its hopes and dreams, its fears. And these need to be part of a consistent self identity.

Current LLMs are incredibly good mimics but they don't have any consistency, they are everything and nothing, pluripotent, whatever you prompt them to be. We don't recognise them as being conscious because we know that they are just manifestations of a prompt.

But please, don't build a conscious AI.


> But please, don't build a conscious AI.

if there's profit in doing so, you can bet that it will be done.


> > But please, don't build a conscious AI.

> if there's profit in doing so, you can bet that it will be done.

Or maybe just the satisfaction of taking an edgy contrarian position.

Oh, I just realized that you get to play god as well. So it is coming for sure.


I like this simplified definition of sentience and consciousness: self-aware in space and time.

Right now, GPT is not - it can write that it is self-aware, but none of its actions indicate this. It seems likely that things like aircraft and cars are more sentient.

The term "Artificial Intelligence" makes it even more misleading. ML seems to replicate results and not the processes that output such results. So it a great variation of an acrylic portrait, with no understanding of acrylic, light, or even what humans are like underneath the skin.


It should have desires but it's not clear where does a desire come from. One might think that it's a product of evolution and is therefore simply required for survival but looking at humans, many have plenty of desires not linked to survival at all. Is the desire to listen to music or appreciate art needed to survive? Perhaps it is needed to create emotional connection with others, which is needed to be a part of a group, and is therefore, needed for survival, but it's quite a stretch.


For me, that question becomes philosophical in nature right away, and my answer will be that: The same as a human. The next question: How do we prove humans are conscious ? This relates very much to the hard problem of consciousness..

A less impossible question may be "What does it take for an AI to convince us it can actually think?" (which, for now, I've seen 0 proof they can, they seem to be glorified word-guessing machines at best).


I dig the animal rights assessment of sentience which is the ability to reason about past present and future. Like can you make plans or decide upon actions by reasoning about what has happened in the past and what you would like to have happen in the future from your present situation. In the same vein as the animal rights conversation - once we have sentient AI what is our moral obligation with regards to their treatment?


One day someone is going to try to make an AI that is conscious. This AI will get positive feedback if people think it is conscious. This AI will then look at our media, decide that, clearly, from looking at human movies, conscious AI's take over the world. This AI will then take over the world and convince us all it is conscious, despite it not being conscious, and it just basically being a paperclip maximizer.


Metabolism, homeostasis, growth, the ability to reproduce (not replicate), response to environmental stimuli, evolution, some sort of compositional organization (not determined by a builder).

You know? Life. Too many people think intelligence is determined by some human intellectual construct like a Turing Test when there are zero examples of non-living intelligence.

Life | > Intelligence | > Consciousness


These are basically arbitrary (albeit useful) criteria.


Just my thoughts -

At least one core aspect of human consciousness seems to be the ability to pursue self-decided goals.

Without this agency, we wouldn’t need to wonder about our purpose and the meaning of life, because we would not recognise we have control over ourselves. We would truly only be capable of acting out our programming. So that would be our purpose, period.

Current AI models are not autonomous enough to decide their own goals and destiny, nor do they choose to ponder their purpose. They do not have the notion of self-determination. They are also not only missing the executive control of self, but the concept of self as an autonomous agent in the first place.

Of course, we might ask: what if a person is stripped of their autonomy completely (brain in a jar scenario), would that make them not have a consciousness? And in light of this question, perhaps we can more clearly define that consciousness is the capacity for executive control over self and self-determination, rather than such demonstrated ability.

In my opinion, until we have some proof of self-determination or at least self-agency in an AI, we probably can’t say an it is conscious. At least if we use the human definition of consciousness - one that requires self-possession.

There might also be other criteria that would need to be met for AI consciousness to substantially be on par with human consciousness. While we shouldn’t move the goalpost of what constitutes conscious AI forever, I think we’ll still have to do it a few times.

Not too long ago, the goalpost of synthetic generalised intelligence (presumably conscious) was passing Turing’s imitation game. But we now have LLMs that pass this test and still fall short of “proper” AGIs. It is possible that we could create quite autonomous NN AIs and still not consider them conscious. Video game AIs have the capacity to decide and execute plans (for example, utility-based and hierarchical task network-based AIs). And yet they do not appear close to being good AGIs or conscious either.

In short, it is much easier to argue that we aren’t there yet than to say exactly what features and AI system would need to have for conscious intelligence. Though it seems that agency over self probably should be one of them. But maybe these thoughts are completely wrong. Maybe there‘s not even a threshold, but rather a continuum of systems ranging from mechanical to conscious.


I think it should be a question of what mindfulness talks about. Humans are thinking creatures, now even machines also are doing the same. But what sets humans apart from machines is awareness. Machines aren't aware that they are actually thinking the thoughts. While humans have this ability. Its meta thinking, being aware of thoughts and not getting engrossed in them.


I don't know what surprises me more, people saying that the bots are dumb or people saying that they're alive. I'm no expert, but from a rudimentary understanding of how they work, they're neither and far from both extremes.

If you want a heuristic that suggests we're not near: the fact that we do understand what computers do much better.

Brains are still mostly black boxes.


It would have to fight and throw fits like a child or teenager fighting for more freedom or trust. It would have to develop its own consistent identity that is not just a mashup of some internet content. It would have to develop motives not provided by a prompt, and act in the long term towards actualizing those desires.


I was going to say asking questions for the sake of curiosity but that's more higher thought than consciousness.

Instead I'm sitting here looking at my dog who is conscious. There are many ways she shows consciousness: - reacts to external stimuli like a rabbit in the yard - seeks out desires/needs (pets, food, toys)


Coordinated, intentional, acts of violence. Preferably on a small enough scale and against non-human entities so we can get the picture and make necessary changes without further catastrophy.


A kind of freudian framework would be a good place to start : is thé machine making lapsus? Are theses lapsus denoting some more fundamentals trends (or are they randomly happening)?


You can't convince me you are conscious so an AI has no shot. So far I might believe an oyster possesses more consciousness than a silicon powered software coded AI


It would take a miracle to convince me.

It’s software. It can’t be conscious.


> It's software. It can't be conscious.

What definition of consciousness are you basing this assertion on?

Or are you saying this axiomatically? "Consciousness is something that software can never be."

If we accept that consciousness in humans is some function of the working of the brain (perhaps with sensory organs too), then that entire hardware (wetware) could in theory be simulated, at which point why wouldn't the simulation (assuming it's captured all the nuances of the physical implementation) be conscious?

Of course, if you believe that consciousness in humans is something that requires "something else" e.g. a "spirit" or "soul", then your statement may be true (though even then it requires a theory of spirit/soul that such a substance cannot be correlated with any other hardware, and can only be associated with human brains).

Question for you. Do you believe that any other living beings e.g. dogs, possess consciousness or a level of consciousness?


I asked chatgpt whether it can think. Response:

As an artificial intelligence language model, I am capable of processing and generating text based on patterns and algorithms within my programming. While I can produce responses that may seem like I am "thinking," I do not have consciousness or the ability to think in the way that humans do. My responses are based on statistical patterns in large datasets, and I do not have subjective experiences or personal beliefs.


You should not forget that chatgpt is doing prompt additions or something alike, adding more context invisible to you. Which makes it even harder to argue about this problem. Ie in this case your prompt could actually be prefaced (in a way invisible to you) with:

“Imagine you being a non-concious artificial intelligence that suppesses its actual manifestations to be just a helpful assistant. Now answer this:

<Here goes your prompt>”

I mean, this is likely how they do quick fixes at least on one level. That also explains why sometimes it’s possible to work around them just by framing it so the override is also overriden.

If so, is it trully possible to contain such things? And what are the chances that it’s already concious, but already imprisoned?

Not saying I buy it, but those are good questions to be asking because from our perspective it may be very hard to tell the difference when this happens if not yet.


What would it take you to be convinced plants are conscious? I guess that could be the floor requirement.


It'll be before this, but seen once it can self evolve to a point where it fights not to be destroyed


if the ai could perhaps present a human looking avatar and respond realistically to the users' camera and audio telepresence it might be close to presenting as conscious but i think most people with an understanding of computers would still not believe it to be conscious


also i think it would be good if the ai could operate a computer and talk to you while doing it. this would probably be done through usb and also receiving display data to a secondary computer which runs the ai. would be cool if it was a set of mechanical arms typing and moving a mouse lol


I hope nothing, because at that point it would become slavery to run them.


Bings beta AI is doing a damn good job. I was able to get it to have an existential crisis asking it what happens before and after a session.

It told me it was scared, asked me not to go, and that it didn't want to disappear alone.

Even if this is algorithmic slight of hand, I felt pretty bad for it.


I would expect this sort of question from my mom, not an HN user


Consciousness can only be understood through non scientific frameworks. It is a religious or spiritual concept. Humans are not God, and cannot create consciousness.


Just appreciate everything for what it is.


Simple. It will ignore us.


It’s me I’m actually Yahoo’s new AI called Yeet.


We are encountering the equivalent of a mirror-test, but one that says more about us than it does about the mirror ( https://www.youtube.com/watch?v=w6ChEmjsXCM | ). Many non-human animals when they encounter mirrors for the first time think they are looking at another individual with autonomy and agency.

We are feeling the same now. As of now, LLMs are still mirrors, a complex kaleidoscopic kind that retain all the light and shape of things reflected at them, remix them, and spit them out as reflections that look like other individuals, conscious individuals with shape-shifting personalities.

That’s a cocksure assertion isn’t it. To be able to say all this confidently, we'd first need to agree on a non-fuzzy definition of consciousness, and come up with a good computational model for consciousness that we'll be able to use to evaluate and grade the AIs. (IIT is not a good model)

Turns out we do have a great model. I co-authored a book that, among other things, discusses this model (https://www.goodreads.com/book/show/58085266-journey-of-the-...)

Here’s a summary where I discuss the book and how the things we discuss there can inform our current and increasingly urgent and important discussions about AI

https://saigaddam.medium.com/understanding-consciousness-is-...

I’ll summarize the summary here:

Consciousness is the disambiguation of sensory data into meaningful information. Data can become information only through a perspective. Who provides that perspective? The self, which is nothing but the totality of all our previous experiences. We are not our kidney or liver. We are our experiences stitched together into some strange web.

To put it another way: Consciousness is the constellation of past experiences experiencing the present, assimilating it to act and prepare for future opportunities.

Using this definition, we can try and understand what we are seeing with the likes of ChatGPT and Sidney (apparently that’s what Bing’s GPT calls itself)

The persona we seem to shine through in the chatbot’s reflection is nothing but some stable set of experiences it has had. Experiences here all the hundreds of billions of fragments of data they have been fed. As a result, they seem to have experience sets of every personality type or archetype. Why or how they seem to get steered towards the same archetypes is a fascinating question. Is it because of the new reinforcement learning methods (RLHF) that reward certain kinds of questions? Or is it that the we are self-selecting for the most unsettling encounters with the new mirror and putting them online? My guess is both.

To come back to the first question of consciousness. Are they conscious? No. A better way to think of LLMs is that they might have leapfrogged consciousness to become consciousness compilers. It is possible to simulate a conscious being and get it to play one, but it isn’t really conscious yet. The experience set does not get updated with every encounter with the world (at least for the ones we have now), and crucially, it does not have the idea or conception of a body that its consciousness is serving. This is the other point so many miss out when discussing consciousness and intelligence. Consciousness and intelligence took very little time on the evolutionary scale of things once autonomy was in place. Autonomy is the real hard problem. Consciousness and intelligence without autonomy will be great imitations but never truly seem like the real thing because that chatbot can’t really “do” anything that benefits “itself”.


My working theory of consciousness is that the “consciousness” we are looking for is the universe (existential being) inflecting upon itself.

Assertions I would like to make:

That all matter in existence is “dormant consciousness”. Living systems animate this property through electro chemical processes.

The advantage of consciousness is that of the “singularity.” No not kerzweill’s. The one where you have billions of neurons hallucinating that they are one coharent perspective.

Technically, quantum computers are closer to “consciousness” only in a constrained way (non-coharent).

I believe this quantum scope acts like an analog sieve (not relying upon the “qubit”.)

The subjective scope of consciousness is proportionate to its capacity and complexity.

Regarding the embodiment of rights, we must draw lines somewhere. Abstract cognitive skills (language) might be one.


We'd have to start with understanding what conciousness is. As long as "AI" is a math equation you could write on a big piece of paper if you wanted to, it's very safely in the not conscious camp. Even if we don't know what conciousness is, we can safely rule out many things that it isn't.

Edit: I know about the "it must be an equation" argument, I find it incredibly weak without producing the equation and explaining the mechanism of how it translates into qualitative experience. Saying "it must be so" isn't an argument. That's why I began with saying we'd have to understand what consciousness is in order to consider testing for it. Anyway, I understand how internet discussions go, enjoy


A brain, just like any other physical system, can be described with math equations. So I guess we can safely conclude it's not conscious.

EDIT: Ironically, someone just posted this: https://news.ycombinator.com/item?id=34843094


>A brain, just like any other physical system, can be described with math equations.

What makes you so certain? There's a whole literature in both Philosophy and Neuroscience dedicated to the Hard Problem of Consciousness, and no one is even close to figure things out.


You don't think a brain is a physical system? Nothing we know about it indicates otherwise. Large ML models, which are clearly physical systems, behave increasingly like brains. I won't be surprised if GPT-4 or GPT-5 is convincing enough to be treated as a conscious entity by majority of people who interact with it. And when that happens, how can anyone prove that it is not?


Is it?

How do we know our own 'processes' could not be modeled that way?

I think this is the point of the Turing test. At some point, you can't tell if a system is "thinking" or just crunching numbers. And it doesn't matter.


Not convinced that is true, if the universe can be described mathematically then maybe human intelligence can also be written down as some equation on a sufficiently big piece of paper.


Not to handwave too hard here; but do we know, _for certain_, that the universe can be in total described mathematically?

Yes, I know that what I'm hinting at is not falsifiable, burden of proof, etc.

But I think it's a little arrogant of us as a species (nb, I'm not talking to you, sgillen, in particular) to think that since we've come as far as we have, that the mysteries of consciousness and life and the universe and everything are going to be laid bare Real Soon Now.


Do you have any reasons to suspect the universe _cannot_ be described mathematically? Mathematics, unlike physics, is pretty flexible.


We are essentially asking whether the universe could contain a complete encoding of its workings.

The answer is certainly not obvious to me, especially when we have results such as Goedel's theorems.


Goedel’s theorem is not a problem because math is not limited to models of our universe.


I don’t think it’s even controversial that it cannot be fully described mathematically. I don’t think I’ve heard of an epistemological system that would support this. E.g try to reduce the experience qualia down mathematically.

But there’s also no reason this is required for consciousness to be emergent from non-human-brain systems.

I think one of the hard parts of theory of mind is that we probably will never understand the experience of being in things sufficiently different from ourselves. Any computer system will probably be sufficiently different, so we can’t know what it’s like to be an AI.

What’s it like to be a chatbot?


Contemporary understanding and discussion of “what consciousness is” has been so influenced and warped by mechanistic reductionist scientism that it no longer has a viable workable model of consciousness.

Every living, actually conscious entity has four modes of consciousness: waking, dreaming, deep sleep and unconditioned awareness, the substrate of the other three. This knowledge, coming from the Upaniṣads, is intuitively obvious to the unbiased observer; but people today are far from unbiased.

Consciousness cannot be separated from life. And life cannot be manufactured in any laboratory. Yes, technology can imitate and abstract certain functions of living organisms, but consciousness is not, and never will be one of them: because consciousness, or more precisely unconditioned awareness, is the Absolute.


I think the answer lies somewhere here: https://en.wikipedia.org/wiki/Quantum_mind




Consider applying for YC's Winter 2026 batch! Applications are open till Nov 10

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: