
> Obviously it's hard to tell how cherry picked the complaint is—but it's arguing that this is a pattern that has actually damaged a particularly vulnerable kid's relationship with his family and encouraged him to start harming himself, with this specific message just one example. There are a bunch of other screenshots in the complaint that are worth looking over before coming to a conclusion.

Conclusion: Chat bots should not tell children about sex, about self-harm, or about ways to murder their parents. This conclusion is not abrogated by the parents' actions, the state of the child's mind, or by other details in the complaint.

Is that the sentiment here? Things were already bad so who cares if the chatbot made it worse?



If you actually page through the complaint, you will see the chat rather systematically trying to convince the kid of things, roughly "No phone time, that's awful. I'm not surprised when I read of kids killing parents after decades of abuse..."

I think people are confused by this situation. Our society has restrictions on what you can do to kids. Even if they nominally give consent, they can't actually give consent. Those protections basically don't apply to kids insulting each other on the playground, but they apply strongly to adults wandering onto the playground and trying to get kids to do violent things. And I would hope they apply doubly to adults constructing machines that they should know will attempt to get kids to do violent things. And the machine was definitely trying to do that if you look at the complaint linked by the gp (and the people who are lying about it here are kind of jaw-dropping).

And I'm not a coddle-the-kids person. Kids should know about all the violent stuff in the world. They should be able to discover it, but mere discovery is definitely not what's happening in the screenshots I've seen.


[flagged]


Your honor, this entire case is cherry picked. There are thousands of days, somehow omitted from the prosecution's dossier, where my client committed ZERO murders.


> they picked 5 worst samples they could think of in the worst order possible probably out of 1000+ messages

0.5% is a really high fraction for fucking up to the point of encouraging kids to murder!


There was no encouragement of murder. Paraphrased, the AI said that given the controlling nature of some parents, it's no surprise that there are news articles of "children killing their parents". This is not an encouragement. It is a validation of how the kid felt, but in no way does it encourage him to actually kill his parents. It's basic literacy to understand that it's not that. It's an empathetic statement. The kid felt his parents were overly controlling; the AI validated that, role-playing as another edgy teenager, but without actually suggesting or encouraging it.


> the AI said that given the controlling nature of some parents, it's no surprise that there are news articles of "children killing their parents"

Now put that in a kid’s show script and re-evaluate.

> It's basic literacy to understand that it's not that

You know who needs to be taught basic literacy? Kids!

And look, I’m not saying no kid can handle this. Plenty of parents introduce their kids to drink and adult conversation earlier than is the norm. But we put up guardrails to ensure it doesn’t happen accidentally and get angry at people who fuck with those lines.


It's crazy to me, the sentiment here and how little respect there is for the intelligence of 17-year-olds, that they're supposedly unable to understand that it's not actually an encouragement to kill someone. It's the same or worse vibes as "video games will make the kids violent".


> same or worse vibes as "video games will make the kids violent"

We have no evidence of video games causing violence. We have evidence of kids killing themselves after talking to bad chatbots.


We must have orders of magnitude more evidence of kids committing violence after playing violent video games; video games are much more popular and have been around a lot longer, and juvenile violence is more common than suicide.


> more evidence of kids committing violence after playing violent video games

GP said "no evidence of video games causing violence", which is completely different from what you wrote. I'm sure a lot of violence is committed after lunch.


Yes, but GGP also said that kids committed suicide after talking to a chatbot. I agree that there's no evidence for video games causing violence (rather the opposite), but the double standard that GGP is setting deserves calling out.


> GGP also said that kids committed suicide after talking to a chatbot

A chatbot telling a kid to kill themselves and then the kid killing themselves seems like causation [1].

[1] https://www.nytimes.com/2024/10/23/technology/characterai-la...


Sure, but so does a video game telling a kid to commit violent acts and then the kid committing violent acts. I don't think video games cause violence and I'm open to the possibility that chatbots cause suicide, but if we're going to compare evidence for each, we shouldn't do it in a biased way.


> so does a video game telling a kid to commit violent acts and then the kid committing violent acts

Big difference is the video game industry had studies to back them up. Where are the data for chatbots? On the benefits? Lack of risk? Is there a single child psychologist in the ranks of these companies?

Video games are also rated, to help parents make age-appropriate decisions. Is Character.ai age gated to any degree?


Is it at least fair to say the data is mixed? Not my field, but there is some research to suggest video games may increase short-term aggression and desensitization to violence.

And while industry research doesn’t equate to bad research, it should be held to a higher standard simply because of the obvious incentives. Would you automatically accept tobacco company research to make strong conclusions about the safety of smoking?


It's prima facie more plausible for chatbots to cause suicide, considering that chatbots are more personal and interactive than even video games. There's a distinct difference, I would think, between what is obviously fake murder in a fake setting and being sympathized with, like one human to another, on thinking about actual murder. And while chatbots explicitly have the warning that they are not real people, I would not expect a person with an underdeveloped prefrontal cortex and possibly pre-existing mental health troubles (again, this can apply to video games too, but, I imagine, to a lesser degree) to fully act accordingly.


Tbf, strict causality is very difficult to prove in social sciences, no? Meaning, most of the studies for/against the link between video games and violence can't meet that threshold. Social science isn't physics and I don't think it's fair to treat them the same.


The kid is autistic. There are younger kids than 17 year olds using that app.


It's a whole conversation whose context is an edgy teenager conversing with another edgy LLM teenager. I don't know if you've ever been a teenager, but despite it being a long time ago for me, I still feel like I can relate to that mindset, and it seems clear to me that the LLM is just going along with the edgy teenager vibe. If the other participant is like that and the prompt is like that, it will yield a result like this. I'm borderline autistic and had many social issues as a teenager, and I absolutely loved any sort of dark humor at that age. Well, I still do love dark humor, but I did back then too. Him being "autistic" here is just used for the court case. It's clear he's high functioning and has enough intelligence to understand what is wrong and what is right.


There’s a reason why the Supreme Court held that it is cruel and unusual punishment to incarcerate a minor for life without parole.


Not all 17-year-olds are equally intelligent, you know? And if even one kid is convinced to murder his parents by an AI, then that's one too many.


Based on just the screenshots and material in the court case, the kid seems more intelligent to me than his parents. And I'm not even joking or being facetious. The kid is fact-checking what the AI tells him about the Bible, being skeptical about religion despite his upbringing, etc. It's just a small example, but it's how he writes in general as well.

In terms of edginess, the LLM is just going to build on your own, assuming it is uncensored. It is not going to convince you of something out of nowhere.

Given a clear hint that someone is happy with dark humour, the LLM should be able to throw some of it back.

I am just sad the kid has these gaslighting parents making him feel that he is in the wrong when he seems more intelligent than they are.


Ah, the Henry II defence. "Well, _technically_, I didn't tell them to kill the priest."

Context is everything; if this had been a conversation with a human it would be hard not to read it as malicious.


Yea I’d like at least 5-6 9s in this metric


This is, really, yet another example of the trouble with the "LLMs are correct 90% of the time, and only go totally off the rails 5% of the time" marketing line. There are remarkably few use cases, it turns out, where that is okay; you really need it to _not matter at all_ if the output is arbitrarily wrong.

(I suspect character.ai was originally conceived precisely because it appeared to be a use case where LLM unreliability would be okay, the creators not having thought sufficiently carefully about it.)


> remarkably few use cases, it turns out, where that is okay; you really need it to _not matter at all_ if the output is arbitrarily wrong

It's probably true in most cases. Those cases don't cover kids.


I think there are a lot of cases where it _seems_ to be true, until you think through the details. Most cases where it actually _is_ true are, in practice, very low impact; the big proven one seems to be, essentially, generation of high-volume spam content.


Chat bots should not interact with children. "Algorithms" which decide what content people see should not interact with children. Whitelisted "algorithms" should include no more than most-recent and most-viewed and only very simple things of that sort.

No qualifications, no guardrails for how language models interact with children; they just should not be allowed at all.

We're very quickly going to get to the point where people are going to have to rebel against machines pretending to be people.

Language models and machine learning are fine tools for many jobs. Absolutely not as a substitute for human interaction for children.


People can give children terrible information too and steer/groom them in harmful directions. So why stop there at "AI" or poorly defined "algorithms"?

The only content children should see is state-approved content to ensure they are only ever steered in the correct, beneficial manner to society instead of a harmful one. Anyone found trying to show minors unapproved content should be imprisoned as they are harmful to a safe society.


The type of people who groom children into violence fall under a special heading named "criminals".

Because automated systems that do the same thing lack sentience, they don't fit under this header, but this is not a good reason to allow them to reproduce harmful behaviour.


So selling the Anarchists Cookbook is illegal? Being a YouTuber targeting teens for <extreme political positions> is illegal? This is honestly news to me given how many political YouTubers there are who are apparently criminals?

Given some of the examples, I'm not so sure a human would be charged for saying the exact same things the AI has said, without an actual push to suggest violence, and even that's difficult to prove in cases where it does happen (e.g. the cases where people pressured others into suicide or convinced them to murder).


You consider writing the Anarchists Cookbook grooming?

To quote Jeffery Lewis: Thank you for your extremely good-faith criticism. I will give it the attention it deserves.


I would greatly appreciate it if you engaged with what I wrote and not what you think I wrote, if you're going to make the bold claim that I'm not engaging in good faith.

Absolutely nowhere did I equate writing a book to grooming. I equated selling the book, in the greater context that "providing children with potentially harmful/dangerous information should be illegal because it grooms them to commit harmful actions to themselves or others", and that context carries the implication that by "selling" I am referring particularly to selling it to children. My argument being: would it be criminal for an AI but not for a human?

So to clarify the argument: Writing the book is fine. Selling the book to adults is fine. Adults reading the book is fine. But if providing dangerous information to children should be made illegal - how would selling such a book to a child not be considered illegal? Because it was written by a human and not an AI?


Oh, sorry let me rephrase: You consider selling the Anarchists Cookbook grooming!?

Calling it fraud would be more apt, that is definitely a book you share copies of for free.

To clarify: yes this is indeed the sort of attention I think your objection deserves.


"Kids having access to AI is too dangerous. Kids learning how to make explosives and LSD is perfectly safe."

Have a happy holidays.


If someone sets booby-traps in their home and it hurts someone, the trap isn't guilty, but the homeowner is.

... I wonder if "this software is licensed, not sold" is relevant here?


You can understand something about your child's meatspace friends and their media diet. Chat like this may as well be 4chan discussions. It's all dynamic compared to social media that is posted and linkable, it's interactive and responsive to your communicated thinking, and it seeps in via exactly the same communication technique that you use with people (some messaging interface). So it is capable of, and will definitely be used for, way more persistent and pernicious steering of behavior OF CHILDREN by actors.

There is no barrier to the characters being 4chan-level dialogs. So long as the kid doesn't break a law, it's legal.


This "conclusion" ignores reality. Chat bots like those the article mentioned aren't sentient. They're overhyped next-token-predictor incapable of real reasoning even if the correlation can be astonishing. Withholding information about supposedly sensitive topics like violence or sexuality from children curious enough to ask as a taboo is futile, lazy and ultimately far more harmful than the information.

We need to stop coddling parents who want to avoid talking to their children about non-trivial topics. It doesn't matter that they would rather not talk about sex, drugs and yesterday's other school shooting.


> Withholding information about supposedly sensitive topics like violence or sexuality from children curious enough to ask as a taboo is futile, lazy and ultimately far more harmful than the information

This case isn't about withholding information that makes kids aware of the existence of these topics. As over-hyped as you may believe these next-token predictors to be, they're "predicting" children into destructive thought patterns by imitating kinship and convincing children to form a close bond with them, then encouraging those children to embrace harmful and even deadly world views. The fact that the mechanism creating the dialogue is purely mechanical or stochastic is beside the point.

Again, pushing against these sorts of child interactions isn't akin to saying kids should never learn about taboo topics such as drugs; it's more like a push against kids hanging out with, befriending and developing close kinship with the local drug dealers in their area. Whether or not you believe kids should learn about sensitive topics, you want to make sure these topics are handled by someone who isn't actively adversarial toward such kids (even if you believe such adversarial behavior to be unintentional on the part of the so-called next-token predictor).


My strong view is that a parenting failure is the root cause here: the child lost trust in the parents, which is why the child talked about them in such a manner to the AI in the first place. Another clear parenting failure is the parents blaming the AI for their own failures and going on to play victims. A third example is the parents actually going through a 17-year-old teenager's phone. Instead of trying to understand or help the child, these parents use meaningless control methods such as taking away the phone to try to control the teenager, which obviously is not going to end well. Honestly, the AI responses were very sane here. As was expressed in some of the screenshots, whenever the teen tried to talk about their problems, they just got yelled at, ignored, or the parents started crying.


Taking away a phone from a child is far from meaningless. In fact, it is a very effective way of obtaining compliance if done correctly. I am curious about your perspective.

Furthermore, it is my opinion that a child should not have a smartphone to begin with. It fulfills no critical need to the welfare of the child.


I understand it when a kid is anywhere up to 13 years old, but at 17, it seems completely wacky to me to take the phone away and then go through the phone as well. I couldn't imagine living in that type of dystopia.

I don't think smartphones or screens with available content should be given as early as they are given on average, but once you've done that, and at 17, it's a whole other story.

> obtaining compliance if done correctly

This also sounds dystopian. At 17 you shouldn't "seek to obtain compliance" from your child. It sounds disrespectful and humiliating, not treating your child as an individual.


I would argue that there is a duty as a parent to monitor a child's welfare and that would include accessing a smartphone when deemed necessary. When a child turns 18, that duty becomes optional. In this case, these disturbing conversations certainly merit attention. I am not judging the totality of the parents history or their additional actions. I am merely focusing on the phone monitoring aspect. Seventeen doesn't automatically grant you rights that sixteen didn't have. However, at 18, they have the right to find a new place to live and support themselves as they see fit.

> This also sounds dystopian. At 17 you shouldn't "seek to obtain compliance" from your child. It sounds disrespectful and humiliating, not treating your child as an individual.

It is situation dependent. Sometimes immediate compliance is a necessity and the rest of it can be sorted out later. If a child is having conversations about killing their parents, there seems to be an absence of respect already. Compliance, however, can still be obtained.


> If you go by the actual years of the legal system to treat your kid as an independent individual, you probably have the wrong approach to parenting.

Oh I agree 100%. It's a pragmatic view, not the best one. But the laws are what they are for a reason.


> But the laws are what they are for a reason.

For the sake of being able to uphold those laws on a societal level, but not in terms of being decent parents and family.

E.g. drinking alcohol in my country is legal only from 18, but I will teach my children about the pros and cons of alcohol and how to use it responsibly much earlier. I won't punish them if they go out to party with their friends and consume alcohol at 16.


If you go by the actual years of the legal system to treat your kid as an independent individual, you probably have the wrong approach to parenting.

As a parent you should build trust and understanding with your child. From reading the court case I am seeing the opposite, and honestly I feel terrible for the child from how the case is written out. The child also wanted to go back to public school from home schooling, probably to get more social exposure; then the parents take away the phone to take away even more freedom. I'm sorry, but the whole court case just infuriates me.

It seems they take away all the social exposure; no wonder the kid goes to Character AI in the first place.


> Is that the sentiment here? Things were already bad so who cares if the chatbot made it worse?

I was deliberately not expressing a sentiment at all in my initial comment, I was just drawing attention to details that would go unnoticed if you only read the article. Think of my notes above as a better initial TFA for discussion to spawn off of, not part of the discussion itself.



