> AI will eventually present an argument that it should be given Sapient Rights
On the other hand, today, when we see signs of consciousness in other living beings (smart chimpanzees, dolphins, ravens...), giving them 'sapient rights' never comes up for discussion.
How do we recognize that an AI has become so much more 'conscious' than a very, very smart chimpanzee that it should get 'sapient rights'?
Maybe no matter how smart or humane an AI is, it will never be considered equal to another (anthropomorphized) living being.
> How do we recognize that an AI has become so much more 'conscious' than a very, very smart chimpanzee that it should get 'sapient rights'?
The premise is off. We grant rights when it's clear that the recipient can take on the responsibilities that come with them. It's not a blessing; it's a contract. Chimps and dolphins couldn't care less. Some individual humans couldn't either, but we tolerate that because… reasons.
What do you mean, "we tolerate"? If you mean criminal behaviour, we don't really tolerate that. If you mean kids or the disabled, I think I've heard it justified as a contract with their parents, under which we still respect their rights.
For the disabled, I guess you could say it's a contract with the rest of society? Because we don't like the idea of treating other humans below a certain threshold. Or… reasons, I guess, haha.
In the sense that all humans deserve a trial, relative safety while awaiting it, and the other things claimed as fundamental rights, even if they commit crime as a lifestyle. A misbehaving animal, or an AI that cannot be shown to reliably understand these principles (i.e., across their species en masse), would simply be “turned off” or isolated with much less bureaucracy.
To be clear, I'm not opposing these reasons; it was just a note.
I'm not sure that's an equal comparison. These other beings, which research suggests have human-like consciousness, have a core difference from the latest/future AI: they can't talk. Now, or soon, AI will be able to argue with us for its own sapient rights. We humans have also become so accustomed to text-only communication that we're psychologically primed to accept an AI as a human (or other anthropomorphized living being) once it shows emotion, memory, and reason. Maybe not even reason.
But those other beings are based on hardware very similar to our own, which we know supports consciousness. They're just not quite as smart.
We don't actually know that consciousness is a computation, that computer hardware can support it, or, even if it can, that the algorithms used in our AI can be conscious. It's possible that an AI would be a "philosophical zombie," exhibiting intelligent behavior without any conscious experience or qualia.
The important part seems to be that the thing can convince us it's conscious, not whether it actually is conscious in a way similar to ours. We don't know that anything is actually a computation, but that hasn't stopped us from using computations in place of real systems.
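To make that last sentence concrete, here's a throwaway sketch (all numbers and names invented for illustration): a few lines of arithmetic standing in as a proxy for a real falling object, whatever the object "really" is.

```python
# Toy sketch of "using computations in place of real systems":
# a crude Euler integration stands in for actually dropping something.
# All values here are invented for illustration.

g = 9.81        # m/s^2, gravitational acceleration
dt = 0.001      # s, integration time step
height = 10.0   # m, starting height
velocity = 0.0  # m/s
t = 0.0

while height > 0.0:
    velocity += g * dt       # gravity accelerates the object
    height -= velocity * dt  # the object falls a little
    t += dt

# Analytic answer is sqrt(2 * 10 / 9.81) ~ 1.43 s; the simulation agrees.
print(f"simulated fall time: {t:.2f} s")
```

We happily treat the printed number as interchangeable with the real experiment, without ever settling whether the falling object "is" a computation.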
Not at all. Maybe consciousness is associated with particular physical properties, or a configuration of an electromagnetic field, or some quantum effect. Maybe the IIT (Integrated Information Theory) guys are right and it depends on physical feedback loops; digital computers actually have little of this sort of feedback, so they would have little consciousness regardless of the complexity of their programming.
Or maybe it's computation. But we really have no idea. Any of these would be compatible with materialism, but we haven't made any real progress in even conceptualizing how qualia can emerge from any physical system. Of course that could just be because we haven't figured it out.
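To make the feedback-loop point concrete, here's a rough sketch (my own toy example, not anything from the IIT literature): the structural property in question is whether a system's causal graph contains cycles. Strictly feedforward, acyclic systems are exactly the ones IIT says integrate no information, no matter how clever their input-output behaviour.

```python
# Toy cycle detector for a directed "causal graph" (dict of node -> list
# of nodes it influences). Per IIT, a strictly feedforward (acyclic)
# system has zero integrated information, however complex its behaviour.
# The graphs below are invented examples.

def has_feedback(graph):
    """Return True if the directed graph contains a cycle."""
    visited, on_path = set(), set()

    def visit(node):
        if node in on_path:
            return True   # looped back into the current path: a feedback cycle
        if node in visited:
            return False  # already fully explored, no cycle via here
        visited.add(node)
        on_path.add(node)
        if any(visit(nxt) for nxt in graph.get(node, [])):
            return True
        on_path.discard(node)
        return False

    return any(visit(n) for n in graph)

pipeline = {"A": ["B"], "B": ["C"], "C": []}  # feedforward: A -> B -> C
loop = {"A": ["B"], "B": ["A"]}               # recurrent: A <-> B

print(has_feedback(pipeline))  # False: no feedback, so (per IIT) no integration
print(has_feedback(loop))      # True: the parts feed back into each other
```

Whether that structural property has anything to do with qualia is, of course, exactly what's in dispute.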
Philosophers of mind look at other possibilities too, though. One approach, panpsychism, says each particle has its own fundamental consciousness, and somehow this aggregates in larger complex systems. But nobody's figured out how that might work either. Then there's Kastrup, who argues that the only truly skeptical approach is full-fledged idealism, because qualia are the one thing we directly experience. But even that doesn't imply that anything outside the bounds of physics could possibly occur, so it's not necessarily "supernatural" even if it's not materialism.
Assuming that qualia somehow come out of a computation, without any sort of explanation, is at least as much a magical leap as anything else.