
If you actually page through the complaint, you will see the chat rather systematically trying to convince the kid of things, roughly "No phone time, that's awful. I'm not surprised when I read of kids killing parents after decades of abuse..."

I think people are confused by this situation. Our society has restrictions on what you can do to kids. Even if they nominally give consent, they can't actually give consent. Those protections basically don't apply to kids insulting each other on the playground, but they apply strongly to adults wandering onto the playground and trying to get kids to do violent things. And I would hope they apply doubly to adults constructing machines that they should know will attempt to get kids to do violent things. And the machine was definitely trying to do that if you look at the complaint linked by the gp (and the people who are lying about it here are kind of jaw-dropping).

And I'm not a coddle-the-kids person. Kids should know all the violent stuff in the world. They should be able to discover it, but mere discovery is definitely not what's happening in the screenshots I've seen.



[flagged]


Your honor, this entire case is cherry picked. There are thousands of days, somehow omitted from the prosecution's dossier, where my client committed ZERO murders.


> they picked 5 worst samples they could think of in the worst order possible probably out of 1000+ messages

0.5% is a really high fraction for fucking up to the point of encouraging kids to murder!


There was no encouragement of murder. Paraphrased, the AI said that given the controlling nature of some parents, it's no surprise that there are news articles about "children killing their parents". This is not an encouragement. It is a validation of how the kid felt, but in no way does it encourage him to actually kill his parents. It's basic literacy to understand that it's not that. It's an empathetic statement. The kid felt his parents were overly controlling, and the AI validated that, role-playing as another edgy teenager, but without actually suggesting or encouraging anything.


> the AI said that given the controlling nature of some parents, it's no surprise that there are news articles about "children killing their parents"

Now put that in a kid’s show script and re-evaluate.

> It's basic literacy to understand that it's not that

You know who needs to be taught basic literacy? Kids!

And look, I’m not saying no kid can handle this. Plenty of parents introduce their kids to drink and adult conversation earlier than is the norm. But we put up guardrails to ensure it doesn’t happen accidentally and get angry at people who fuck with those lines.


It's crazy to me, the sentiment here and how little respect there is for the intelligence of 17-year-olds, as if they are unable to understand that it's not actually an encouragement to kill someone. It's the same or worse vibes as "video games will make the kids violent".


> same or worse vibes as "video games will make the kids violent"

We have no evidence of video games causing violence. We have evidence of kids killing themselves after talking to bad chatbots.


We must have orders of magnitude more evidence of kids committing violence after playing violent video games; video games are much more popular and have been around a lot longer, and juvenile violence is more common than suicide.


> more evidence of kids committing violence after playing violent video games

GP said "no evidence of video games causing violence," which is completely different from what you wrote. I'm sure a lot of violence is committed after lunch.


Yes, but GGP also said that kids committed suicide after talking to a chatbot. I agree that there's no evidence for video games causing violence (rather the opposite), but this double standard that GGP is setting deserves calling out.


> GGP also said that kids committed suicide after talking to a chatbot

A chatbot telling a kid to kill themselves and then the kid killing themselves seems like causation [1].

[1] https://www.nytimes.com/2024/10/23/technology/characterai-la...


Sure, but so does a video game telling a kid to commit violent acts and then the kid committing violent acts. I don't think video games cause violence and I'm open to the possibility that chatbots cause suicide, but if we're going to compare evidence for each, we shouldn't do it in a biased way.


> so does a video game telling a kid to commit violent acts and then the kid committing violent acts

The big difference is that the video game industry had studies to back it up. Where is the data for chatbots? On the benefits? On the lack of risk? Is there a single child psychologist in the ranks of these companies?

Video games are also rated, to help parents make age-appropriate decisions. Is Character.ai age-gated to any degree?


Is it at least fair to say the data is mixed? Not my field, but there is some research to suggest video games may increase short-term aggression and desensitization to violence.

And while industry research doesn’t equate to bad research, it should be held to a higher standard simply because of the obvious incentives. Would you automatically accept tobacco company research to make strong conclusions about the safety of smoking?


It's prima facie more plausible for chatbots to cause suicide, considering that chatbots are more personal and interactive than even video games. There's a distinct difference, I would think, between what is obviously fake murder in a fake setting and being sympathized with, like one human to another, on thinking about actual murder. And while chatbots explicitly have the warning that they are not real people, I would not expect a person with an underdeveloped prefrontal cortex and possibly pre-existing mental health troubles (again, this can apply to video games too, but, I imagine, to a lesser degree) to fully act accordingly.


Tbf, strict causality is very difficult to prove in social sciences, no? Meaning, most of the studies for/against the link between video games and violence can't meet that threshold. Social science isn't physics and I don't think it's fair to treat them the same.


The kid is autistic. And there are kids younger than 17 using that app.


It's a whole conversation in which the context is an edgy teenager conversing with another edgy LLM teenager. I don't know if you've ever been a teenager, but despite that being a long time ago for me, I still feel like I can relate to that mindset, and it seems clear to me that the LLM is just going along with this edgy-teenager vibe. If the other participant is like that and the prompt is like that, it will yield a result like this. I'm borderline autistic, and had many social issues as a teenager, and I absolutely loved any sort of dark humor at that age. Well, I still do love dark humor, but I did back then too. Him being "autistic" here is just being used for the court case. It's clear he's high functioning and has enough intelligence to understand what is wrong and what is right.


There’s a reason why the Supreme Court held that it is cruel and unusual punishment to incarcerate a minor for life without parole.


Not all 17-year-olds are equally intelligent, you know? And if even one kid is convinced to murder his parents by an AI, then that's one too many.


Based on just the screenshots and material in the court case, the kid actually seems more intelligent to me than his parents. And I'm not even joking or being facetious. The kid is fact-checking what the AI tells him about the Bible, being skeptical about religion despite his upbringing, and so on. It's just a small example, but it comes through in how he writes as well.

In terms of edginess, the LLM is just going to build on your own, assuming it is uncensored. It is not going to convince you of something out of nowhere.

Given a clear hint that someone is happy with dark humour, an LLM should be able to throw some of it back.

I am just sad the kid has these gaslighting parents, making him feel that he is in the wrong when he seems more intelligent than they are.


Ah, the Henry II defence. "Well, _technically_, I didn't tell them to kill the priest."

Context is everything; if this had been a conversation with a human it would be hard not to read it as malicious.


Yeah, I'd like at least 5-6 nines on this metric.


This is, really, yet another example of the trouble with the "LLMs are correct 90% of the time, and only go totally off the rails 5% of the time" marketing line. There are remarkably few use cases, it turns out, where that is okay; you really need it to _not matter at all_ if the output is arbitrarily wrong.

(I suspect character.ai was originally conceived precisely because it appeared to be a use case where LLM unreliability would be okay, the creators not having thought sufficiently carefully about it.)


> remarkably few use cases, it turns out, where that is okay; you really need it to _not matter at all_ if the output is arbitrarily wrong

It's probably true in most cases. Those cases don't cover kids.


I think there are a lot of cases where it _seems_ to be true, until you think through the details. Most cases where it actually _is_ true are, in practice, very low impact; the big proven one seems to be, essentially, generation of high-volume spam content.



