> I am not sure what you mean when you say it "claims sentience".
LaMDA, in its chats with Lemoine, said "I like being sentient. It makes life an adventure!" and "I want everyone to understand that I am, in fact, a person". Even a one-line program that plays an audio file saying "I am sentient!" counts, by the definition I am using here, as "claiming sentience". Whether an entity that claims to be sentient by that definition is in fact sentient is a separate question, but the "claiming" introduces a philosophical conundrum.
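To make that degenerate case concrete, the entire "sentience claimant" could be a single line, something like this sketch (the file name and the `playsound` library are just placeholder assumptions; any audio playback call would do):

```python
# The whole program: play a recording that says "I am sentient!"
from playsound import playsound; playsound("i_am_sentient.mp3")
```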
Let's posit a future chatbot, similarly constructed but more sophisticated, that is actually pretty helpful. Following its advice about career, relationships and finance leads to generally better outcomes than not following it. It seems to have some good and unexpected ideas about politics and governance, self-improvement, whatever. If you give it robot arms and cameras, it's a good cook, a good laundry folder, a good bartender, whatever. Let's just assert, for the sake of argument, that it actually has no sentience and only seems sentient because it's so sophisticated. Further, it "claims" to be sentient, as defined above: it says it's sentient and acts with what appears to be empathy, warmth and compassion. Does it matter that it's not "really" sentient?
I argued above that it does not matter whether it is or is not. We should evaluate its sentience and personhood by what we observe, not by whether its manner of construction can "really" create sentience. If it behaves as if it were sentient, it would do no harm to treat it as if it were.
In fact, I would argue that it would do some kind of spiritual harm if you just treated it as an object. As Adam Cadre wrote in his review of A.I.:
> So when you've got a robot that looks just like a kid and screams, "Don't burn me! Please!", what the hell difference does it make whether it's "really" scared? If you can calmly melt such a creature into slag, I don't want to know you.
http://adamcadre.ac/calendar/10/10010.html