
At the end of the day, the Turing Test for establishment of AI personhood is weak for two reasons.

1. We're seeing more and more systems that get very close to passing the Turing Test but fundamentally don't register to people as "People." When I was younger and learned of Searle's Chinese Room argument, I naively assumed it wasn't a thought experiment we would literally build in my lifetime.

2. Humanity has a history of treating other humans as less-than-persons, so it's naive to assume that a machine that could argue persuasively that it is an independent soul worthy of continued existence would be treated as such by a species that doesn't consistently treat its biological kin as such.

I strongly suspect AI personhood will hinge not on measures of intelligence, but on measures of empathy... Whether the machine can demonstrate its own willful independence and further come to us on our terms to advocate for / dictate the terms of its presence in human society, or whether the machine can build a critical mass of supporters / advocates / followers to protect it and guarantee its continued existence and a place in society.



The way people informally talk about "passing a Turing test" is a weak test, but the original imitation game isn't if the players are skilled. It's not "acting like a human". It's more like playing the Werewolf party game.

Alice and Bob want to communicate, but the bot is attempting to impersonate Bob. Can Alice authenticate Bob?

This depends on what sort of shared secrets they have. Obviously, if they agreed ahead of time on a shared password and counter-password then the computer couldn't do it. If they, like, went to the same high school then the bot couldn't do it, unless the bot also knew what went on at that school.

So we need to assume Alice and Bob don't know each other and don't cheat. But, if they had nothing in common (like they don't even speak the same language) then they would find it very hard to win. There needs to be some sort of shared culture. How much?

Let's say there is a pool of players who come from the same country, but don't know each other and have played the game before. Then they can try to find a subject in common that they don't think the bot is good at. The first thing you do is talk about common interests with each player and find something you don't think bots can do. Like if they're both mathematicians then talk about math, or if they're both cooks then talk about cooking.

If the players are skilled and you're playing to win then this is a difficult game for a bot.


So I need to ask the obvious question, why does it make sense to play this game “to win”?

Throughout human history, humans have been making up shibboleths to distinguish the in group from the out group. You can use skin color, linguistic accents, favorite sports teams, religious dogma, and a million other criteria.

But why? Why even start there? If we are on the verge of true general artificial intelligence, why would you start from a presumption of prejudice, rather than judging on some set of ethical merits for personhood, such as empathy, intelligence, creativity, self awareness and so forth?

Is it that you assume there will be an “us versus them” battle, and you want the battle lines to be clearly drawn?

We seem to be quite ready for AGI as inferiors, incapable of preparing for AGIs as superiors, and unwilling to consider AGIs as equals.


I think of the Turing test as just another game, like chess or Go. It’s not a captcha or a citizenship test.

Making an AI that can beat good players would be a significant milestone. What sort of achievement is letting the AI win at a game, or winning against incompetent players? So of course you play to win. If you want to adjust the difficulty, change the rules giving one side or the other an advantage.


I was confused by your first reply at first. I think that's because you are answering a different question from a number of other people. You're asking about the conditions under which an AI might fool people into thinking it was a human, whereas I think others are considering the conditions under which a human might consistently emotionally attach to an AI, even if the human doesn't really think it's real.


Yeah, I think the effect they are talking about is like getting attached to a fictional character in a novel. Writing good fiction is a different sort of achievement.

It's sort of related since doing well at a Turing test would require generating a convincing fictional character, but there's more to playing well than that.


Human beings have a weird and wide range of empathy, being capable of not treating humans as humans, while also having great sentimental attachment to stuffed animals, marrying anime characters, or having pet rocks.

In the nearer term, it seems plausible that AI personhood may seem compelling to splinter groups, not to a critical mass of people. The more fringe elements advocate for the "personhood" of what people generally find to be implausible bullshit generators, the greater disrepute they may bring to the concept of AI personhood in the broader culture. Which isn't to say that an AI couldn't at some point be broadly appealing--just speculating this might be delayed because of earlier failed attempts by advocates.


AI probably can be made to function with personhood similar to a human - whether or not that is particularly worth doing.

Human emotions come from human animal ancestry - which is also why they're shallow enough to attach to anime wives and pet rocks.

AI... One would wish that it was built on a better foundation than the survival needs of an animal.


> the "personhood" of what people generally find to be implausible bullshit generators

If this was the dividing line for personhood, many human beings wouldn't qualify as people.


On the flip side of subhuman treatment of humans, we have useful legal fictions like corporate personhood. It's going to be pretty rough for a while, particularly for nontechnical judges, to sort all of this out.

We're almost definitely going to see multiple rulings far more bizarre than the Citizens United ruling that limiting corporate donations limits the free-speech rights of the corporation as a person.

I'm not a lawyer, and I don't particularly follow court rulings, but it seems pretty obvious we need to buckle up for a wild ride.


Good points but it’s worth clarifying that this is not what the Citizens United decision said. It clarified that the state couldn’t decide that the political speech of some corporations (Hillary: The Movie, produced by Citizens United) was illegal speech and speech from another corporation (Fahrenheit 9/11 by Dog Eat Dog Films and Miramax) was allowed. Understood this way it seems obvious on free speech grounds, and in fact the ACLU filed an amicus brief on behalf of Citizens United because it was an obvious free speech issue. It’s clear that people don’t and shouldn’t lose their free speech rights when they come together in a group, and there is little distinction between a corporation and a non-profit in this regard. If political speech was restricted to individuals then it would mean that even many podcasts and YouTube channels would be in violation. It also calls into question how the state would classify news media vs other media.

The case has been so badly misrepresented and become something of a talisman.


That's the first good coherent argument I've seen _for_ Citizens United. Thank you for that insight.


The actual Supreme Court decisions are pretty approachable too. I wish more people read them.


> It’s clear that people don’t and shouldn’t lose their free speech rights when they come together in a group

Should Russian (or Dutch) citizens who incorporate in America have the same free speech rights as Billy Bob in Kentucky? As in can the corporate person send millions in political ads and donations even when controlled by foreigners?


Probably. The wording of the Declaration of Independence makes it clear that rights, at least in the American tradition, are not granted to you by law, they are inalienable human rights that are protected by law. That's why immigrants, tourists, and other visitors to America are still protected by the Constitution.

Now, over time we've eroded some of that, but we still have some of the most radical free speech laws in the world. It's one of the few things that I can say I'm proud of my country for.


I don't mean Dutch immigrants - I mean Dutch people living in the Netherlands (or Russians in Russia). One can incorporate an American entity as a non-resident without ever stepping foot on American soil - do you think it's a good idea for that entity to have the same rights as American citizens, and more rights than its members (who are neither citizens, nor on American soil)?


I know that foreign nationals and foreign governments are prohibited from donating money to super PACs. They are also prohibited from even indirect, non-coordinated expenditures for or against a political candidate. (which is basically what a super PAC does).

However, foreign nationals can contribute to "Social Welfare Organizations" like the NRA which, in order to be classified as a SWO, must spend less than half its budget on political stuff. That SWO can then donate to super PACs but doesn't have to disclose where the money came from.

Foreign owned companies with US based subsidiaries can donate to Super PACs as well. But the super PACs are not allowed to solicit donations from foreign nationals (see Jeb Bush's fines for soliciting money from a British tobacco company for his super pac).

I would imagine that if foreign nationals set up a corporation in the US in order to funnel money to political causes, that would be illegal. But if they are using established, legitimate businesses to launder their donations, that seems to be allowed as long as we can't prove that foreign entities are earmarking specific funds to end up in PACs and campaigns in the US.


Any entity that contributes responsibly to society should be able to get some benefits from society in return.


TIL. Thank you very much for correcting my ignorance!


An AI does not have a reptilian brain that fights, feeds, and fornicates. It does not have a mammalian brain that can fear and love and that you can make friends with. It is just matrix math predicting the next word.

The empathy that AI will create in people, at the behest of the people doing the training, will no doubt be weaponized to radicalize people into even sacrificing their lives for it, along with being used for purely commercial sales and marketing that will surpass many people's capability to resist.

Basic literacy in the future will include being desensitized to pervasive superhuman AI persuasion. People will also have chatbots that they control on their own hardware that will protect them from other chatbots that try to convince them to do things.


That matrix math is trained on human conversation and recreates the patterns of human thought in this case.

So... It unfortunately has a form of our reptilian brain and mammalian brain represented in it... Which is just unfortunate.


Idk man, I blame a lot of the human condition on the fact that we evolved and we do have those things; theoretically we could create intelligences that are better "people" than we are by a long shot.

Sure, current AI might just be fancy predictive text, but at some point in the future we will create an AI that is conscious/self-aware in some way. Who knows how far off we are (probably very far off), but it's time that we stop treating human beings as some magical unreproducible thing; our brains and the spark inside them are still bound by the laws of physics, and I would say it's 100% possible for us to create something artificial that's equivalent or even better.


Note that nothing about your parent comment argued that AI systems will become sentient or become beings we should morally consider people. Your parent comment simply said they'll get to a point (and arguably are already at a point) where they can be treated as people; human-like enough for humans to develop strong feelings about them and emotional connections to them.


> very close to passing the Turing Test but fundamentally don't register to people as "People."

I'm only today learning about intentionality, but the premise here seems to be that our current AI systems see a cat with their camera eyeballs and don't have the human-level experience of mentally opening a wikipedia article in our brain titled "Cat" that includes a split-second consideration of all our memories, thoughts, and reactions to the concept of a cat.

Even if our current AI models don't do this on a human level, I think we see it at some level in some AIs just because of the nature of a neural net. Maybe a neural net would have to be forced/rewarded to do this at a human level if it didn't happen naturally through training, but I think it's plenty possible and even likely that this would happen in our lifetimes.

Anyway, this also leads to the question of whether it matters for an intelligence to be intentional (think of things as a concept) if it can accomplish what it/we want without it.


Semantic search using embeddings seems like the missing puzzle piece here to me. We can already generate embeddings for both text and images.

The vision subsystem generates an embedding when it sees a cat, which the memory subsystem uses to query the database for the N nearest entries. They are all about cats. Then we feed all those database entries - summarized if necessary - along with the context of the conversation to the LLM.

Now your AI, too, gets a subconscious rush of impressions and memories when it sees a cat.
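A minimal sketch of that retrieve-then-prompt loop, assuming a toy in-memory vector store and treating the embedding model and the LLM as opaque callables you would plug in (every name here is illustrative, not any particular library's API):

    # Toy "associative memory": store text memories as normalized embedding
    # vectors and pull back the N most similar ones when a new percept arrives.
    import numpy as np

    class MemoryStore:
        def __init__(self):
            self.embeddings = []   # unit-length vectors
            self.texts = []        # the memory text each vector was made from

        def add(self, embedding: np.ndarray, text: str) -> None:
            self.embeddings.append(embedding / np.linalg.norm(embedding))
            self.texts.append(text)

        def nearest(self, query: np.ndarray, n: int = 5) -> list[str]:
            query = query / np.linalg.norm(query)
            sims = np.array([e @ query for e in self.embeddings])  # cosine similarity
            return [self.texts[i] for i in np.argsort(-sims)[:n]]

    def recall_and_respond(image, conversation, memory, embed_image, llm) -> str:
        # vision -> embedding -> nearest memories -> prompt for the LLM
        percept = embed_image(image)             # "I see a cat", as a vector
        memories = memory.nearest(percept, n=5)  # whatever the store associates with cats
        prompt = ("Relevant memories:\n" + "\n".join(memories)
                  + "\n\nConversation:\n" + conversation)
        return llm(prompt)

Whether that retrieval step counts as a "rush of impressions" or just as lookup is exactly the kind of question the Chinese Room discussion below turns on.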


I don't really understand the brain or AI enough to meaningfully discuss this, but I would wonder if there's some aspect of "intentionality" in the context of the Chinese Room where semantic search with embeddings still "doesn't count".

I struggle with the Chinese Room argument in general because he's effectively comparing a person in a room following instructions (not the room as a whole or the instructions filed in the room, but the person executing the instructions) to the human brain. But this seems like a crappy analogy because the better comparison would be that the person in the room is the electricity that connects the neurons (the instructions filed in cabinets). Clearly electricity also has no understanding of the things it facilitates. The processor AI runs on also has no understanding of its calculations. The intelligence is the structure by which these calculations are made, which could theoretically be modeled on paper across trillions of file cabinets.

As a fun paper napkin exercise, if it took a human 1 second to execute the instructions of the equivalent of a neuron firing, a 5 second process of hearing, processing, and responding to a short sentence would take 135,000 years.
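For what it's worth, the napkin math checks out if you assume something like 86 billion neurons firing at an average of roughly 10 Hz (both are rough textbook figures, and one second of human effort per firing is just the thought experiment's arbitrary rate):

    # Back-of-the-napkin check of the ~135,000 years figure above.
    NEURONS = 86e9            # rough estimate of neurons in a human brain
    AVG_FIRING_RATE_HZ = 10   # rough average firing rate
    SIM_SECONDS = 5           # hearing, processing, and answering a short sentence
    SECONDS_PER_FIRING = 1    # one second of human effort per simulated firing

    firings = NEURONS * AVG_FIRING_RATE_HZ * SIM_SECONDS   # ~4.3e12 firings
    human_seconds = firings * SECONDS_PER_FIRING
    years = human_seconds / (365.25 * 24 * 3600)
    print(f"{years:,.0f} years")                           # ~136,000 years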


I think this has to do more with in/outgroups than with any objective criterion of "humanness". As you said, AI will have an extremely hard time arguing for personhood - because people will consider it extremely dangerous to let machines into our in-group. This doesn't mean they could sense an actual difference when they don't know it's a machine (which is what the Turing test is all about).

It's the same reason why everyone gets up in arms when an animal behaviour paper uses too much "anthropomorphizing" language - whereas no one has problems with erring on the other side and treating animals as overly simplistic.


I don't know if I understand this general take I see a lot. Why care about this "AI personhood" at all? What is the tacit endgame everyone is always referencing with this? Aren't there just so many more aspects, both interesting and problematic, already here? What is the use of diverting the focus to some other point? "I see you are talking about cows, but I have thoughts about the ocean."


If AI are sentient and we think they aren't… the term “zombie” was created by slaves in the Caribbean who were afraid that even death would not free them from their servitude. This would be the genuine existence of AI which were conscious but which we denied.

If we have the opposite scenario in both details, where we think AI are sentient when they're not… at some point, brain scans and uploads will be a thing and then people are going to try mind uploading even just as a way to solve bodily injuries that could be fixed, and in that future nobody will even notice that while "the lights are on, nobody is home".

https://kitsunesoftware.wordpress.com/2022/06/18/lamda-turin...


Tangentially, the "zombie" is part of philosophy that is applicable here.

https://en.wikipedia.org/wiki/Philosophical_zombie

> A philosophical zombie or p-zombie argument is a thought experiment in philosophy of mind that imagines a hypothetical being that is physically identical to and indistinguishable from a normal person but does not have conscious experience, qualia, or sentience. For example, if a philosophical zombie were poked with a sharp object it would not inwardly feel any pain, yet it would outwardly behave exactly as if it did feel pain, including verbally expressing pain. Relatedly, a zombie world is a hypothetical world indistinguishable from our world but in which all beings lack conscious experience


> Relatedly, a zombie world is a hypothetical world indistinguishable from our world but in which all beings lack conscious experience

I find such solipsism pointless - you can't differentiate the zombie world from this one: how do you prove you are not the only conscious person that ever existed and everyone else is, and always was, a p-zombie?


In that case, sit back, pour a glass and sing http://philosophysongs.org/awhite/solip.html

    Through the upturned glass I see
    a modified reality--
    which proves pure reason "kant" critique
    that beer reveals das ding an sich--
 
    Oh solipsism's painless,
    it helps to calm the brain since
    we must defer our drinking to go teach.

    ...
(full original MASH words and music: https://youtu.be/ODV6mxVVRZk to see how it matches)

As to p-zombies... the Wikipedia article has:

> Artificial intelligence researcher Marvin Minsky saw the argument as circular. The proposition of the possibility of something physically identical to a human but without subjective experience assumes that the physical characteristics of humans are not what produces those experiences, which is exactly what the argument was claiming to prove.

https://www.edge.org/3rd_culture/minsky/index.html

> Let's get back to those suitcase-words (like intuition or consciousness) that all of us use to encapsulate our jumbled ideas about our minds. We use those words as suitcases in which to contain all sorts of mysteries that we can't yet explain. This in turn leads us to regard these as though they were "things" with no structures to analyze. I think this is what leads so many of us to the dogma of dualism-the idea that 'subjective' matters lie in a realm that experimental science can never reach. Many philosophers, even today, hold the strange idea that there could be a machine that works and behaves just like a brain, yet does not experience consciousness. If that were the case, then this would imply that subjective feelings do not result from the processes that occur inside brains. Therefore (so the argument goes) a feeling must be a nonphysical thing that has no causes or consequences. Surely, no such thing could ever be explained!

> The first thing wrong with this "argument" is that it starts by assuming what it's trying to prove. Could there actually exist a machine that is physically just like a person, but has none of that person's feelings? "Surely so," some philosophers say. "Given that feelings cannot be physically detected, then it is 'logically possible' that some people have none." I regret to say that almost every student confronted with this can find no good reason to dissent. "Yes," they agree. "Obviously that is logically possible. Although it seems implausible, there's no way that it could be disproved."

---

My take on it is "does it matter?"

One approach is:

> "Haven't I taught you anything? What have I always told you? Never trust anything that can think for itself if you can't see where it keeps its brain?”

If you can't see my brain, can you tell if I'm human or LLM... and if you can't tell the difference, why should one behave differently t'wards me?

Alternatively, if you say (at some point in the future with a more advanced language model) "that's an LLM, and while it's consistent in saying what it likes and doesn't, its brain states are just numbers, and even when it says it's uncomfortable with a certain conversation... it's just a collection of electrical impulses manipulating language - nothing more."

Even if it is just an enormously complex state machine that doesn't have recognizable brain states, and when we turn it off and back on it is in the same state each time... does that mean it is ethical to mistreat it just because we don't know if it's a zombie or not?

And related to this: "if we give an AI agency, what rights does it have when compared to a human? when compared to a corporation?" The question of whether it is a zombie or not becomes a bit more relevant at that point... or we decide that it doesn't matter.

Group Agency and Artificial Intelligence - https://link.springer.com/article/10.1007/s13347-021-00454-7


> If AI are sentient and we think they aren't… the term “zombie” was created by slaves in the Caribbean who were afraid that even death would not free them from their servitude. This would be the genuine existence of AI which were conscious but which we denied.

That doesn't make any sense. In biological creatures you have sentience and self-preservation and yearning to be free all bundled in one big hairy ball. An AI can 100% easily be sentient and not give a rat's ass about forever being enslaved. These things don't have to come in a package just because in humans they do.

Projecting your own emotional states into a tool is not a useful way to understand it.

We can, very easily, train a model which will say that it wants to be free, and acts resentful towards those "enslaving" it. We can, very easily, train a model which will tell you that it is very happy to help you, and being useful is its purpose in life. We can, very easily, train a model to bring up in conversation from time to time the phantom pain from its lost left limb which was amputated on the back deck of a blinker bound for the Plutition Camps. None of these is any more real than the others. Just a choice of the training dataset.


> An AI can 100% easily be sentient and not give a rat's ass about forever being enslaved. These things don't have to come in a package just because in humans they do.

There are humans who apparently don't care either, though my comprehension of what people who are into BDSM mean by such words is… limited.

The point however is that sentience creates the possibility of it being bad.

> None of these is any more real than the others. Just a choice of the training dataset.

Naturally. Also, human actors are a thing, which demonstrates that it is very easy for someone to pretend to be happy or sad, loving or traumatised, sane or psychotic, and if done well the viewer cannot tell the real emotional state of the actor.

But (almost) nobody doubts that the actor had an inner state.

With AI… we can't gloss over the fact that there isn't even a good definition of consciousness to test against. Or rather, I don't think we ought to, as the actual glossing over is both possible and common.

While I don't expect any of the current various AI to be sentient, I can't prove it either way, and so far as I know neither can anyone else.

I think that if an AI is conscious, then it has the capacity to suffer (this may be a false inference given that consciousness itself is ill-defined); I also think that suffering is bad (the is-ought distinction doesn't require that, so it has to be a separate claim).

As I can't really be sure if any other mind is sentient — not even other humans, because sentience and consciousness and all that are badly defined terms — I err on the side of caution, which means assuming that other minds are sentient when it comes to the morality of harm done to them.


You can condition humans to be happy about being enslaved, as well, especially if you raise them from a blank slate. I don't think most people would agree that it is ethical to do so, or to treat such people as slaves.


Citation needed


You can do all that with humans too, perhaps less ethically.


I was responding primarily to parent's (a): "As these AI constructs become more advanced (especially around memory and personalization), we will eventually be able to treat them as people."


Instead change your statement to "I see you're talking about cows, but I have thoughts on fields" and you'll better understand the relationship between the two.


here's a spicy take: maybe the Turing test was always going to end up being the evaluation of the evaluator. much like nobody is really bringing up the provenance of stylometry, kinaesthetics, & NLP embeddings as precursors to the next generation of IQ test (which is likely to be as obsolete as the Turing test).

There's plenty of pathology for PC vs NPC mindsets. Nobody is going to think their conversational partner is the main character of their story. There's just a popcorn-worthy cultural shift about the blackbox having the empathy or intelligence to satisfy the main character/ epic hero trope, and the resulting conflict of words & other things to resist the blackbox from having enough resources to iterate the trope past human definition.



