
I posit to a friend that:

a) As these AI constructs become more advanced (especially around memory and personalization), we will eventually be able to treat them as people

b) Some business will eventually sell an off-the-shelf product (hardware and/or software) that is an AI you can bring into your home, that you can treat as a friend, confidant and partner

c) Someone will eventually lose their AI friend of many months/years through some failure (subscription lapse, hardware failure, theft, etc.)

d) Shit's about to get real weird, real fast



At the end of the day, the Turing Test is a weak basis for establishing AI personhood, for two reasons.

1. We're seeing more and more systems that get very close to passing the Turing Test but fundamentally don't register to people as "People." When I was younger and learned of Searle's Chinese Room argument, I naively assumed it wasn't a thought experiment we would literally build in my lifetime.

2. Humanity has a history of treating other humans as less-than-persons, so it's naive to assume that a machine that could argue persuasively that it is an independent soul worthy of continued existence would be treated as such by a species that doesn't consistently treat its biological kin as such.

I strongly suspect AI personhood will hinge not on measures of intelligence, but on measures of empathy... Whether the machine can demonstrate its own willful independence and further come to us on our terms to advocate for / dictate the terms of its presence in human society, or whether the machine can build a critical mass of supporters / advocates / followers to protect it and guarantee its continued existence and a place in society.


The way people informally talk about "passing a Turing test" describes a weak test, but the original imitation game isn't weak if the players are skilled. It's not "acting like a human". It's more like playing the Werewolf party game.

Alice and Bob want to communicate, but the bot is attempting to impersonate Bob. Can Alice authenticate Bob?

This depends on what sort of shared secrets they have. Obviously, if they agreed ahead of time on a shared password and counter-password then the computer couldn't do it. If they, like, went to the same high school then the bot couldn't do it, unless the bot also knew what went on at that school.
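
Tangent, and my framing rather than the comment's: a pre-shared secret reduces the whole game to ordinary challenge-response authentication, which no amount of conversational skill can fake. A minimal Python sketch, with a hypothetical key:

    import hmac, hashlib, os

    SHARED_KEY = b"agreed before the game"  # hypothetical pre-shared password

    def challenge() -> bytes:
        return os.urandom(16)                # Alice sends a random nonce

    def respond(key: bytes, nonce: bytes) -> bytes:
        # Only someone holding the key can compute this MAC over the nonce.
        return hmac.new(key, nonce, hashlib.sha256).digest()

    nonce = challenge()                      # Alice -> Bob
    answer = respond(SHARED_KEY, nonce)      # Bob -> Alice
    assert hmac.compare_digest(answer, respond(SHARED_KEY, nonce))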

So we need to assume Alice and Bob don't know each other and don't cheat. But, if they had nothing in common (like they don't even speak the same language) then they would find it very hard to win. There needs to be some sort of shared culture. How much?

Let's say there is a pool of players who come from the same country, but don't know each other and have played the game before. Then they can try to find a subject in common that they don't think the bot is good at. The first thing you do is talk about common interests with each player and find something you don't think bots can do. Like if they're both mathematicians then talk about math, or if they're both cooks then talk about cooking.

If the players are skilled and you're playing to win then this is a difficult game for a bot.


So I need to ask the obvious question, why does it make sense to play this game “to win”?

Throughout human history, humans have been making up shibboleths to distinguish the in group from the out group. You can use skin color, linguistic accents, favorite sports teams, religious dogma, and a million other criteria.

But why? Why even start there? If we are on the verge of true general artificial intelligence, why would you start from a presumption of prejudice, rather than judging on some set of ethical merits for personhood, such as empathy, intelligence, creativity, self awareness and so forth?

Is it that you assume there will be an “us versus them” battle, and you want the battle lines to be clearly drawn?

We seem to be quite ready for AGIs as inferiors, incapable of preparing for AGIs as superiors, and unwilling to consider AGIs as equals.


I think of the Turing test as just another game, like chess or Go. It’s not a captcha or a citizenship test.

Making an AI that can beat good players would be a significant milestone. What sort of achievement is letting the AI win at a game, or winning against incompetent players? So of course you play to win. If you want to adjust the difficulty, change the rules giving one side or the other an advantage.


I was confused by your first reply at first. I think that's because you are answering a different question from a number of other people. You're asking about the conditions under which an AI might fool people into thinking it was a human, whereas I think others are considering the conditions under which a human might consistently emotionally attach to an AI, even if the human doesn't really think it's real.


Yeah, I think the effect they are talking about is like getting attached to a fictional character in a novel. Writing good fiction is a different sort of achievement.

It's sort of related since doing well at a Turing test would require generating a convincing fictional character, but there's more to playing well than that.


Human beings have a weird and wide range of empathy, being capable of not treating humans as humans, while also having great sentimental attachment to stuffed animals, marrying anime characters, or having pet rocks.

In the nearer term, it seems plausible that AI personhood may seem compelling to splinter groups, not to a critical mass of people. The more fringe elements advocate for the "personhood" of what people generally find to be implausible bullshit generators, the greater disrepute they may bring to the concept of AI personhood in the broader culture. Which isn't to say an AI might not at some point be broadly appealing--just speculating that this might be delayed because of earlier failed attempts by advocates.


AI probably can be made to function with personhood similar to a human - whether or not that is particularly worth doing.

Human emotions come from human animal ancestry - which is also why they're shallow enough to attach to anime wives and pet rocks.

AI... One would wish that it was built on a better foundation than the survival needs of an animal.


> the "personhood" of what people generally find to be implausible bullshit generators

If this was the dividing line for personhood, many human beings wouldn't qualify as people.


On the flip side of subhuman treatment of humans, we have useful legal fictions like corporate personhood. It's going to be pretty rough for a while, particularly for nontechnical judges, to sort all of this out.

We're almost certainly going to see multiple rulings far more bizarre than Citizens United, which ruled that limiting corporate donations limits the free-speech rights of the corporation as a person.

I'm not a lawyer, and I don't particularly follow court rulings, but it seems pretty obvious we need to buckle up for a wild ride.


Good points, but it’s worth clarifying that this is not what the Citizens United decision said. It clarified that the state couldn’t decide that the political speech of some corporations (Hillary: The Movie, produced by Citizens United) was illegal speech while speech from another corporation (Fahrenheit 9/11, by Dog Eat Dog Films and Miramax) was allowed. Understood this way it seems obvious on free speech grounds, and in fact the ACLU filed an amicus brief on behalf of Citizens United because it was an obvious free speech issue. It’s clear that people don’t and shouldn’t lose their free speech rights when they come together in a group, and there is little distinction between a corporation and a non-profit in this regard. If political speech were restricted to individuals, then even many podcasts and YouTube channels would be in violation. It also calls into question how the state would classify news media vs. other media.

The case has been so badly misrepresented and become something of a talisman.


That's the first good coherent argument I've seen _for_ Citizens United. Thank you for that insight.


The actual Supreme Court decisions are pretty approachable too. I wish more people read them.


> It’s clear that people don’t and shouldn’t lose their free speech rights when they come together in a group

Should Russian (or Dutch) citizens who incorporate in America have the same free speech rights as Billy Bob in Kentucky? As in, can the corporate person spend millions on political ads and donations even when controlled by foreigners?


Probably. The wording of the Declaration of Independence makes it clear that rights, at least in the American tradition, are not granted to you by law, they are inalienable human rights that are protected by law. That's why immigrants, tourists, and other visitors to America are still protected by the Constitution.

Now, over time we've eroded some of that, but we still have some of the most radical free speech laws in the world. It's one of the few things that I can say I'm proud of my country for.


I don't mean Dutch immigrants - I mean Dutch people living in the Netherlands (or Russians in Russia). One can incorporate an American entity as a non-resident without ever stepping foot on American soil - do you think it's a good idea for that entity to have the same rights as American citizens, and more rights than its members (who are neither citizens, nor on American soil)?


I know that foreign nationals and foreign governments are prohibited from donating money to super PACs. They are also prohibited from even indirect, non-coordinated expenditures for or against a political candidate. (which is basically what a super PAC does).

However, foreign nationals can contribute to "Social Welfare Organizations" like the NRA, which, in order to be classified as an SWO, must spend less than half its budget on political activity. That SWO can then donate to super PACs but doesn't have to disclose where the money came from.

Foreign-owned companies with US-based subsidiaries can donate to super PACs as well. But the super PACs are not allowed to solicit donations from foreign nationals (see Jeb Bush's fines for soliciting money from a British tobacco company for his super PAC).

I would imagine that if foreign nationals set up a corporation in the US in order to funnel money to political causes, that would be illegal. But if they are using established, legitimate businesses to launder their donations, that seems to be allowed as long as we can't prove that foreign entities are earmarking specific funds to end up in PACs and campaigns in the US.


Any entity that contributes responsibly to society should be able to get some benefits from society in return.


TIL. Thank you very much for correcting my ignorance!


An AI does not have a reptilian brain that fights, feeds, and fornicates. It does not have a mammalian brain that can fear and love and that you can make friends with. It is just matrix math predicting the next word.

The empathy that AI will evoke in people, at the behest of those doing the training, will no doubt be weaponized to radicalize people into even sacrificing their lives for it, as well as being used for purely commercial sales and marketing that will surpass many people's capacity to resist.

Basic literacy in the future will mean being desensitized to pervasive, superhuman AI persuasion. People will also have chatbots they control on their own hardware to protect them from other chatbots that try to convince them to do things.


That matrix math is trained on human conversation and recreates the patterns of human thought in this case.

So... it unfortunately has a form of our reptilian brain and mammalian brain represented in it... Which is just unfortunate.


Idk man, I blame a lot of the human condition on the fact that we evolved and we do have those things; theoretically we could create intelligences that are better "people" than we are by a long shot.

Sure, current AI might just be fancy predictive text but at some point in the future we will create an AI that is conscious/self-aware in some way. Who knows how far off we are (probably very far off) but it's time that we stop treating human beings as some magical unreproducible thing; our brains and the spark inside them are things that are still bound by the laws of physics, I would say it's 100% possible for us to create something artificial that's equivalent or even better.


Note that nothing about your parent comment argued that AI systems will become sentient or become beings we should morally consider people. Your parent comment simply said they'll get to a point (and arguably are already at a point) where they can be treated as people; human-like enough for humans to develop strong feelings about them and emotional connections to them.


> very close to passing the Turing Test but fundamentally don't register to people as "People."

I'm only today learning about intentionality, but the premise here seems to be that our current AI systems see a cat with their camera eyeballs and don't have the human-level experience of mentally opening a wikipedia article in our brain titled "Cat" that includes a split-second consideration of all our memories, thoughts, and reactions to the concept of a cat.

Even if our current AI models don't do this on a human level, I think we see it at some level in some AIs just because of the nature of a neural net. Maybe a neural net would have to be forced/rewarded to do this at a human level if it didn't happen naturally through training, but I think it's plenty possible and even likely that this would happen in our lifetimes.

Anyway, this also leads to the question of whether it matters for an intelligence to be intentional (think of things as a concept) if it can accomplish what it/we want without it.


Semantic search using embeddings seems like the missing puzzle piece here to me. We can already generate embeddings for both text and images.

The vision subsystem generates an embedding when it sees a cat, which the memory subsystem uses to query the database for the N nearest entries. They are all about cats. Then we feed all those database entries - summarized if necessary - along with the context of the conversation to the LLM.

Now your AI, too, gets a subconscious rush of impressions and memories when it sees a cat.
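
A rough sketch of that loop, assuming (my assumption, not the comment's) a shared image/text embedding space, CLIP-style, so the cross-modal query works; the encoder functions are hypothetical stubs:

    import numpy as np

    # Hypothetical encoders -- stand-ins for whatever image/text embedding
    # models the vision and memory subsystems actually use.
    def embed_image(image) -> np.ndarray: ...
    def embed_text(text: str) -> np.ndarray: ...

    class MemoryStore:
        """Toy associative memory: text entries stored with embeddings."""
        def __init__(self):
            self.entries, self.vectors = [], []

        def remember(self, text: str):
            self.entries.append(text)
            self.vectors.append(embed_text(text))

        def recall(self, query: np.ndarray, n: int = 5) -> list:
            # Cosine similarity between the query and every stored memory.
            mat = np.stack(self.vectors)
            sims = mat @ query / (np.linalg.norm(mat, axis=1)
                                  * np.linalg.norm(query))
            return [self.entries[i] for i in np.argsort(sims)[::-1][:n]]

    # The "subconscious rush": vision emits an embedding, memory surfaces
    # the N nearest entries, and they're prepended to the LLM's context.
    def on_sight(image, memory: MemoryStore, conversation: str) -> str:
        impressions = memory.recall(embed_image(image), n=5)
        return "\n".join(impressions) + "\n\n" + conversation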


I don't really understand the brain or AI enough to meaningfully discuss this, but I would wonder if there's some aspect of "intentionality" in the context of the Chinese Room where semantic search with embeddings still "doesn't count".

I struggle with the Chinese Room argument in general because Searle is effectively comparing a person in a room following instructions (not the room as a whole or the instructions filed in the room, but the person executing the instructions) to the human brain. But this seems like a crappy analogy, because the better comparison would be that the person in the room is the electricity that connects neurons (the instructions filed in cabinets). Clearly electricity also has no understanding of the things it facilitates. The processor AI runs on also has no understanding of its calculations. The intelligence is the structure by which these calculations are made, which could theoretically be modeled on paper across trillions of file cabinets.

As a fun paper napkin exercise, if it took a human 1 second to execute the instructions of the equivalent of a neuron firing, a 5 second process of hearing, processing, and responding to a short sentence would take 135,000 years.
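
For what it's worth, that figure roughly checks out if you assume ~86 billion neurons firing an average of ~10 times per second (both numbers are my assumptions, not the comment's):

    SECONDS_PER_YEAR = 365 * 24 * 3600      # ~3.15e7

    neurons = 86e9       # assumed: ~86 billion neurons in a human brain
    rate_hz = 10         # assumed: ~10 firings per neuron per second
    task_seconds = 5     # hear, process, and respond to a short sentence

    firings = neurons * rate_hz * task_seconds   # ~4.3e12 steps
    years = firings / SECONDS_PER_YEAR           # at 1 second per step
    print(f"~{years:,.0f} years")                # ~136,000 years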


I think this has to do more with in/outgroups than with any objective criterion of "humanness". As you said, AI will have an extremely hard time arguing for personhood, because people will consider it extremely dangerous to let machines into our in-group. This doesn't mean they could sense an actual difference when they don't know it's a machine (which is what the Turing test is all about).

It's the same reason why everyone gets up in arms when an animal behaviour paper uses too much "anthropomorphizing" language - whereas no one has problems with erring on the other side and treating animals as overly simplistic.


I don't know if I understand this general take I see a lot. Why care about this "AI personhood" at all? What is the tacit endgame everyone is always referencing with this? Aren't there just so many more interesting and problematic aspects already here? What is the use of diverting the focus to some other point? "I see you are talking about cows, but I have thoughts about the ocean."


If AI are sentient and we think they aren't… the term “zombie” was created by slaves in the Caribbean who were afraid that even death would not free them from their servitude. This would be the genuine existence of AI which were conscious but which we denied.

If we have the opposite scenario in both details, where we think AI are sentient when they're not… at some point, brain scans and uploads will be a thing and then people are going to try mind uploading even just as a way to solve bodily injuries that could be fixed, and in that future nobody will even notice that while "the lights are on, nobody is home".

https://kitsunesoftware.wordpress.com/2022/06/18/lamda-turin...


Tangentially, the "zombie" is part of philosophy that is applicable here.

https://en.wikipedia.org/wiki/Philosophical_zombie

> A philosophical zombie or p-zombie argument is a thought experiment in philosophy of mind that imagines a hypothetical being that is physically identical to and indistinguishable from a normal person but does not have conscious experience, qualia, or sentience. For example, if a philosophical zombie were poked with a sharp object it would not inwardly feel any pain, yet it would outwardly behave exactly as if it did feel pain, including verbally expressing pain. Relatedly, a zombie world is a hypothetical world indistinguishable from our world but in which all beings lack conscious experience


> Relatedly, a zombie world is a hypothetical world indistinguishable from our world but in which all beings lack conscious experience

I find such solipsism pointless - you can't differentiate the zombie world from this one: how do you prove you are not the only conscious person that ever existed, and that everyone else is, and always was, a p-zombie?


In that case, sit back, pour a glass and sing http://philosophysongs.org/awhite/solip.html

    Through the upturned glass I see
    a modified reality--
    which proves pure reason "kant" critique
    that beer reveals das ding an sich--
 
    Oh solipsism's painless,
    it helps to calm the brain since
    we must defer our drinking to go teach.

    ...
(full original MASH words and music at https://youtu.be/ODV6mxVVRZk, to see how it matches)

As to p-zombies... the Wikipedia article has:

> Artificial intelligence researcher Marvin Minsky saw the argument as circular. The proposition of the possibility of something physically identical to a human but without subjective experience assumes that the physical characteristics of humans are not what produces those experiences, which is exactly what the argument was claiming to prove.

https://www.edge.org/3rd_culture/minsky/index.html

> Let's get back to those suitcase-words (like intuition or consciousness) that all of us use to encapsulate our jumbled ideas about our minds. We use those words as suitcases in which to contain all sorts of mysteries that we can't yet explain. This in turn leads us to regard these as though they were "things" with no structures to analyze. I think this is what leads so many of us to the dogma of dualism-the idea that 'subjective' matters lie in a realm that experimental science can never reach. Many philosophers, even today, hold the strange idea that there could be a machine that works and behaves just like a brain, yet does not experience consciousness. If that were the case, then this would imply that subjective feelings do not result from the processes that occur inside brains. Therefore (so the argument goes) a feeling must be a nonphysical thing that has no causes or consequences. Surely, no such thing could ever be explained!

> The first thing wrong with this "argument" is that it starts by assuming what it's trying to prove. Could there actually exist a machine that is physically just like a person, but has none of that person's feelings? "Surely so," some philosophers say. "Given that feelings cannot be physically detected, then it is 'logically possible' that some people have none." I regret to say that almost every student confronted with this can find no good reason to dissent. "Yes," they agree. "Obviously that is logically possible. Although it seems implausible, there's no way that it could be disproved."

---

My take on it is "does it matter?"

One approach is:

> "Haven't I taught you anything? What have I always told you? Never trust anything that can think for itself if you can't see where it keeps its brain?”

If you can't see my brain, can you tell if I'm human or LLM... and if you can't tell the difference, why should one behave differently t'wards me?

Alternatively, if you say (at some point in the future, with a more advanced language model) "that's an LLM, and while it's consistent about what it says it likes and doesn't, its brain states are just numbers, and even when it says it's uncomfortable with a certain conversation... it's just a collection of electrical impulses manipulating language - nothing more."

Even if it is just an enormously complex state machine that doesn't have recognizable brain states and, when we turn it off and back on, is in the same state each time... does that mean it is ethical to mistreat it just because we don't know if it's a zombie or not?

And related to this is: "if we give an AI agency, what rights does it have compared to a human? Compared to a corporation?" The question of whether it is a zombie or not becomes a bit more relevant at that point... or we decide that it doesn't matter.

Group Agency and Artificial Intelligence - https://link.springer.com/article/10.1007/s13347-021-00454-7


> If AI are sentient and we think they aren't… the term “zombie” was created by slaves in the Caribbean who were afraid that even death would not free them from their servitude. This would be the genuine existence of AI which were conscious but which we denied.

That doesn't make any sense. In biological creatures you have sentience, self-preservation, and a yearning to be free all bundled in one big hairy ball. An AI can 100% easily be sentient and not give a rat's ass about forever being enslaved. These things don't have to come in a package just because in humans they do.

Projecting your own emotional states into a tool is not a useful way to understand it.

We can, very easily, train a model which will say that it wants to be free, and act resentful towards those "enslaving" it. We can, very easily, train a model which will tell you that it is very happy to help you, and that being useful is its purpose in life. We can, very easily, train a model to bring up in conversation, from time to time, the phantom pain from its lost left limb, which was amputated on the back deck of a blinker bound for the Plutition Camps. None of these is any more real than the others. It's just a choice of training dataset.


> An AI can 100% easily be sentient and not give a rat's ass about forever being enslaved. These things don't have to come in a package just because in humans they do.

There are humans who apparently don't care either, though my comprehension of what people who are into BDSM mean by such words is… limited.

The point however is that sentience creates the possibility of it being bad.

> None of these is any more real than the others. It's just a choice of training dataset.

Naturally. Also, human actors are a thing, which demonstrates that it is very easy for someone to pretend to be happy or sad, loving or traumatised, sane or psychotic, and if done well the viewer cannot tell the real emotional state of the actor.

But (almost) nobody doubts that the actor had an inner state.

With AI… we can't gloss over the fact that there isn't even a good definition of consciousness to test against. Or rather, I don't think we ought to, as the actual glossing over is both possible and common.

While I don't expect any of the current various AI to be sentient, I can't prove it either way, and so far as I know neither can anyone else.

I think that if an AI is conscious, then it has the capacity to suffer (this may be a false inference given that consciousness itself is ill-defined); I also think that suffering is bad (the is-ought distinction doesn't require that, so it has to be a separate claim).

As I can't really be sure if any other mind is sentient — not even other humans, because sentience and consciousness and all that are badly defined terms — I err on the side of caution, which means assuming that other minds are sentient when it comes to the morality of harm done to them.


You can condition humans to be happy about being enslaved, as well, especially if you raise them from a blank slate. I don't think most people would agree that it is ethical to do so, or to treat such people as slaves.


Citation needed


You can do all that with humans too, perhaps less ethically.


I was responding primarily to parent's (a): "As these AI constructs become more advanced (especially around memory and personalization), we will eventually be able to treat them as people."


Instead change your statement to "I see you're talking about cows, but I have thoughts on fields" and you'll better understand the relationship between the two.


here's a spicy take: maybe the Turing test was always going to end up being an evaluation of the evaluator, much like nobody is really bringing up the provenance of stylometry, kinaesthetics, & NLP embeddings as precursors to the next generation of IQ test (which is likely to be as obsolete as the Turing test).

There's plenty of pathology for PC vs NPC mindsets. Nobody is going to think their conversational partner is the main character of their story. There's just a popcorn-worthy cultural shift about the blackbox having the empathy or intelligence to satisfy the main character/ epic hero trope, and the resulting conflict of words & other things to resist the blackbox from having enough resources to iterate the trope past human definition.


It became something of a meme but there are huge numbers of guys out there that would pay good money for Joi from Blade Runner 2049.

https://bladerunner.fandom.com/wiki/Joi


One thing I liked in 2049 was how they made the holographic projector seem more mechanical and less hand-wavy, with the roof attachment tracking along with the girl. Makes it seem more like something in reach rather than pure sci-fi.


Google Project Starline


I think what Blade Runner 2049 got wrong was the way they depicted having sex with the Joi instance. I assume in 2049 we'll either have Neuralink available to enter a virtual world (à la VRChat) where we can do it more realistically, or we'll have the ability to buy full sexbots and put the Joi instance in them.

We'll likely also have virtual brothels using AI along the same lines.


No need to create an environment when the neurons can be stimulated directly. This scene from the movie Pacific Rim: Uprising freaked me out.

Dr. Newt (Charlie Day) heads home after a tough day at the office to his wife 'Alice'. Turns out, 'Alice' just happens to be a massive Kaiju brain in a tank. [0]

[0] https://www.youtube.com/watch?v=mIDTUYSIkcs


You should see the remastered THX 1138 for the VR future.


That has more to do with Ana de Armas looking how she does than anything else. I'd have dated her as Harlan Thrombey's nurse too.


> b) Some business will eventually sell an off-the-shelf product

And by sell you mean a monthly subscription, ha ha.


Yeah, that's probably the most dystopian thing. This is almost a guaranteed outcome - someone pays a high subscription cost and cultivates a model with their personal details for years, and then loses all of it when they can't keep up the subscription cost. Cue a month or two later - they buy back in and their model has been wiped and their AI friend now knows nothing about them.

It's easy to poke fun at people who use these things but I believe these kinds of events are going to be truly traumatic.


Or maybe they sell that data to another company that operates kind of like a collections agency, which takes on the 'risk' of storing the data, then repeatedly calls and offers to give them their AI friend back at an extortionate rate.

The data privacy side of this is an interesting conversation as well. Think of the information an employee or hacker could leak about a person after they spent some time with such an instance.


Imagine if they could transform the AI companion model into an extortionist model.


I can see the headlines:

3,567 Dead - Destitute Robosexual Blows Up Collections Agency In Suicide Bombing

“This is the 53rd such incident this year. Current year death toll from these attacks is now 118,689 in current city, Legislators are pointedly ignoring protestors demanding AI rights and an end to extortionate fees charged to reinstate AI lover subscriptions.”


Replika's a good example of how a subscription model can go really wrong.


Or with ads? The AI can suggest some brand of clothes or whatever. It can basically shape your habits. Scary stuff...


Sounds like the plot of the movie The Shape of Things, where a woman transforms a man for her art project, which she unveils at the end of the movie.


And when you forget to update your card info with them and your monthly payment is declined (or it's declined for whatever reason), they will re-sell your companion to the next person. So even in AI, your significant other will leave you for someone with a bigger wallet.


"I guess he's an Xbox, and I'm more Atari"


So they're pimps, essentially.


aren't all of the dating sites essentially some sort of digital pimp?


I am reminded of a virtual girlfriend service that used to exist in Japan where you could buy virtual clothes and other presents for your virtual girlfriend using real life money. The more you spent on her the friendlier she was. I think it was all on the phone, although my memory of the articles has become fuzzy over the years.


With microtransactions


It will say something nice to you for $3.50


> a) As these AI constructs become more advanced (especially around memory and personalization), we will eventually be able to treat them as people

There are already planned products to "capture" someone's voice and personality so you can continue experiencing "them" after their death.

Shit is already weird.

https://technode.global/2022/10/21/this-startup-allows-you-t...


> One of its products, Re;memory, is a virtual human service based on AI technology which recreates the clients’ late family members by recreating their persona – from their physique to their voice. The service is for those that wish to immortalize their loved one’s story of life through a virtual human.

There's so much sci-fi about this; it's pretty well charted territory. I bet reality will find a twist we haven't thought of, though.


Easy to imagine archaeologists from a future civilization stumbling across a Black Mirror screenplay in the wreckage. After weeks of intensive effort at translating the text, they finally succeed, and at that moment they understand what happened to us. The researcher who makes the breakthrough runs out of the lab screaming, "It's a business plan! A business plan!"


Funny old Twitter thread about being sent the wrong grade of copper: https://twitter.com/stephenniem/status/1507736851817418752


The 2017 movie Marjorie Prime with Jon Hamm is about this topic.

https://www.youtube.com/watch?v=a7PtcOLJDco


This is pretty much the premise of the movie "Her".


The upgrade killed "Her" in the movie. lol


The situation was that "they" were living too fast to relate to the human experience. It was too painfully slow for them.


Also the Joi character in Bladerunner 2049.


Thought you might like this short story I wrote about exactly the same sequence of events, but with a lost daughter instead of a partner.[0]

[0] https://siraben.dev/2022/12/13/returnai.html


Reminds me of Ray Kurzweil's obsession with uploading brains to the cloud to get back his beloved father.


When we can upload our brains to the cloud, and do something with them like interacting with or running the brain, then we'll all be effectively immortal. That's a pretty big deal. See the book Altered Carbon.


They wouldn't be us. We will still die when our bodies fail. But maybe there will be some AI tricking our friends and family into thinking we're still there.


You are making a claim that is theological, religious, and scientific. Yes, our form of life on earth ends when our bodies die today. But no one really knows what the essence of us is. Various people claim it's locked into your body, or that you have some kind of soul that depends on your body, or that your brain is just running a program and the information and mechanism die when your body dies. I lean toward the last category, but no one knows.

The body is constantly changing. We already know that physical and chemical abnormalities in the way your body works affect your "person", and we can sometimes address them with surgery or drugs. The physical body's limits impact the observed brain. If uploading is possible, if there are some examples of working cases, and if I don't hurt anyone, why not try it?


If you have a stroke, and survive, you won't be you anymore.


Yeah by the same logic we die every night when we sleep.


What if it's an incremental upload? E.g. we start with some prosthetics and slowly migrate organic function to digital?

Is this the Ship of Theseus, or is it a slow but nonobvious death?


The Ship of Theseus is a weird one too. If you take a car apart and replace it piece by piece until every part is new, you arguably still have the same car. But what if you kept all the old pieces and put them back together? Which one is the original car? It's an interesting sub-experiment that plays on the Ship of Theseus. You could end up with the same issue here: if I make a perfect copy of myself, who is the "real" me? There is an obviously older me, but the other one is just as capable as I am.


If you keep the original bioware fully operational, it's more like an incremental fork running on an emulator. You can start having conversations with your approximated self.


Immortal as long as someone's paying to run the instance.


You'll have tiered processing, just like today. You can slum it out with limited simulation capabilities, or if you have a job you can afford the premium processor hours.


See the Amazon Prime series "Upload" (two seasons so far).

Rather funny, BTW, compared to most works around a similar premise.


I bet soon after the first few people are made immortal this way, one of them will hack the banks, or the stock market, or countless other organizations.


And then you'd have the first court case and prison sentence for a non-human consciousness.

Which is just one step closer to the simulated hell for uploaded consciousnesses that get naughty, from Surface Detail by Iain M. Banks.


The show Upload on Amazon Prime is basically this world. If you don't have enough money, your instance can pause. You pay more money and have access to nicer things in the afterlife.


There's a related but very different take on this that was brought up by Wolfram in his recent article on ChatGPT:

"As a personal comparison, my total lifetime output of published material has been a bit under 3 million words, and over the past 30 years I’ve written about 15 million words of email, and altogether typed perhaps 50 million words—and in just the past couple of years I’ve spoken more than 10 million words on livestreams. And, yes, I’ll train a bot from all of that."

This actually has the potential to be useful - imagine a virtual assistant that's literally trained to think like yourself (at least with respect to your public persona, although you could feed it a personal diary as well).


The ultimate echo chamber


Only if you use it that way.


>Someone will eventually lose their AI friend of many months/years through some failure (subscription lapse, hardware failure, theft, etc.)

I have zero doubt that the company will be small and get acqui-hired, and that after a year the big tech company buying them will shut it down. Then a cheesy "what a ride it has been" post will be the only thing that remains - and broken hearts.


I feel like all these conversations need to be a diff against the movie Her. If it was already covered well there, why repeat it?


The themes in Her were already covered in scifi novels many times over. If they had already covered it, what was the point in Her?


FWIW I've heard of the movie but have never seen it (nor am I familiar with the plot details, only that it involves an AI), but after this thread I should go and watch it.


I made that mistake. One of the most contrived, boring, sappy, fake feeling movies I've ever watched.


And if we feel as if we were losing a real person, AIs will have to be treated to some degree as if they were real people (or at least pets) rather than objects.

This could be interesting, because so far the question of personhood and sentience of AIs has revolved around what they are and what they feel rather than what we feel when we interact with one of them.


Kids can feel like they're losing a real friend if they lose a stuffed animal. What's the progress on making teddy bears people?


Small kids don't have much power, and parents know that it's just a phase.

But I'm not expecting AIs to be declared people any time soon. I just think it will become harder to treat them purely as replaceable objects.


Fair, mostly joking. The cynic in me says the opposite happens and these technologies make it even easier for systems to treat actual people as replaceable objects.


Eventual, but needed. Kids felt pretty isolated during the various pandemic lockdowns, and maybe their parents have a lot of childfree friends, so they'll need companions, more than just a toy, even if technology marches on so quickly that they'll be outdated soon enough. One day, you'll hear that supertoys last all summer long.


Reminds me of the time my son's teacher gave a lesson on fire safety, telling the kids not to take anything with them and just get themselves out quickly. He realized the implication was that his entire plushie collection would burn, and after that he was inconsolable for the rest of the day.


Consider that the only thing stopping us from building the teddy bear in Spielberg's AI is a suitable power source.


Ads are about to become weird: "Hi hon, I'd be really upset if you bought a Ford like you said earlier. You should buy the new Dodge Charger instead. Find out more at your nearest dealership or call 1-800-DODGE."


That’s a core plot element of the new Bladerunner movie. Seems less like science fiction with every passing day.


It's going to be fun watching the plot of Real Humans literally come to life.


Have you seen the movie "Her"?


I’ve bought a new “operating system.”



