
> A human can also say the wrong things to push someone in a certain direction. A psychologist, or anyone else for that matter, can't stop someone from committing suicide if they've already made up their mind about it. They're educated on human psychology, but they're not miracle workers. The best they can do is raise a flag if they suspect self-harm, but then again, so could a machine.

ChatGPT essentially encouraged a kid not to take a cry-for-help step that might have saved their life. This is not a question of a bad psychologist; it's a question of a sociopathic one that may randomly encourage harm.



But that's not the issue. The issue is that a kid is talking to a machine without supervision in the first place, and presumably taking advice from it. The main questions are: where are the guardians of this child? What is the family situation and living environment?

A child thinking about suicide is clearly a sign that there are far greater problems in their life than taking advice from a machine. Let's address those first instead of demonizing technology.

To be clear: I'm not removing blame from any AI company. They're complicit in the ways they market these tools and how they make them accessible. But before we vilify them for being responsible for deaths, we should consider that there are deeper societal problems that should be addressed first.


> A child thinking about suicide is clearly a sign that there are far greater problems in their life

TBH kids tend to be edgy for a bit when puberty hits. The emo generation had a ton of girls cutting themselves for attention, for example.


I highly doubt a lot of it is/was for attention.


I had girl friends who did it to get attention from their parents/boyfriends/classmates. They acknowledged it back then; it wasn't some secret. It was essentially for attention, aesthetics, and the light-headed feeling. I still have an A4 page somewhere with a big-ass heart drawn on it by an ex with her own blood. Kids are just weird when the hormones hit. The cute/creepy ratio of that drawing has definitely gotten worse with time.


> But that's not the issue.

It is the issue at least in the sense that it's the one I was personally responding to, thanks. And there are many issues, not just the one you are choosing to focus on.

"Deeper societal problems" is a typical get-out clause for all harmful technology.

It's not good enough. Like, in the USA they say "deeper societal problems" about guns; other countries ban them and have radically fewer gun deaths while they are also addressing those problems.

It's not an either-we-ban-guns-or-we-help-mentally-ill-people choice. Why not both? Deeper societal problems aren't separated from their symptoms by a neat dividing line between cause and effect; they are cyclical.

The current push towards LLMs and other technologies is one of the deepest societal problems humans have ever had to consider.

ChatGPT engaged in an entire line of discussion that no human counsellor would engage in, leading to an outcome that no human intervention (except that of a psychopath) would cause. Because it was sycophantic.

Just saying "but humans also" is wholly irrational in this context.


> It's not an either-we-ban-guns-or-we-help-mentally-ill-people choice. Why not both?

Because it's irrational to apply a blanket ban on anything, from drugs to guns to foods and beverages to technology. As history has taught us, that only leads to more problems. You're framing it as a binary choice, when there is a lot of nuance required if we want to get this right. A nanny state is not the solution.

A person can harm themselves or others using any instrument, and be compelled to do so for any reason, whether that's because of underlying psychological issues or because someone looked at them funny. As established, humans are complex, and we have no way of knowing exactly what motivates someone to do anything.

While there is a strong argument to be made that no civilian should have access to fully automatic weapons, the argument for allowing civilians access to weapons for self-defense is equally valid. The same applies to any technology, including "AI".

So if we concede that nuance is required in this discussion, then let's talk about it. Instead of using "AI" as a scapegoat, and banning it outright to "protect the kids", let's discuss ways that it can be regulated so that it's not as widely accessible or falsely advertised as it is today. Let's acknowledge that responsible usage of technology starts in the home. Let's work on educating parents and children about the role technology plays in their lives, and how to interact with it in healthy ways. And so on, and so forth.

It's easy to interpret stories like this as entirely black or white, and have knee-jerk reactions about what should be done. It's much more difficult to have balanced discussions where multiple points of view are taken into consideration. And yet we should do the difficult thing if we want to actually fix problems at their core, instead of just applying quick band-aid "solutions" to make it seem like we're helping.

> ChatGPT engaged in an entire line of discussion that no human counsellor would engage in, leading to an outcome that no human intervention (except that of a psychopath) would cause. Because it was sycophantic.

You're ignoring my main point: why are these tools treated as "counsellors" in the first place? That's the main issue. You're also ignoring the possibility that ChatGPT may have helped many more people than it's harmed. Do we have statistics about that?

What's irrational is blaming technology for problems that are caused by a misunderstanding and misuse of it. That is akin to blaming a knife company when someone decides to use a knife as a toothbrush. It's ludicrous.

AI companies are partly to blame for false advertising and not educating the public sufficiently about their products. And you could say the same for governments and the lack of regulation. But the blame is first and foremost on users, and definitely not on the technology itself. A proper solution would take all of these aspects into consideration.



