yousif_123123's comments

Don't underestimate the meaning and relationships people had in those times, hunting together to feed your family, farming with your community and interacting with animals etc.

I think physical activity, even just going on walks, makes one feel change is possible. If something sucks and I sit home all day on YouTube, then it continues to suck. If I can change my environment, do things outside, see new people and find myself in different situations, then the thing that sucks starts feeling like maybe it also could change.

For example I doubt exercising in a basement just by yourself on 1 machine is likely to materially help with depression. At least not as much as going out doing a variety of things, or playing a game of basketball at the local gym/community center.


Yeah. I think the communitarian point is valid, and valuable. Also, even if you only do that 1 machine thing in the basement, you can regard it as an achievement that's worth something. At least I did that thing. It makes me feel kinda ok too, or just better.

What would be the cost for OpenAI to just stop these kinds of very long conversations that aren't about debugging or some actual long problem-solving? It seems from the reports many people are being affected, some very, very negatively, and many cases likely go unreported. I don't understand why they don't show a warning, or just open a new chat thread, when a discussion gets too long or when it can be detected that it isn't fiction and is likely veering into dangerous territory.

I don't know how this doesn't give pause to the ChatGPT team. Especially with their supposed mission to be helpful to the world etc.


> It seems from the reports many people are being affected

I think the rapid scale and growth of ChatGPT are breaking a lot of mental models about how common these occurrences are.

ChatGPT's weekly active user count is twice as large as the population of the United States. More people use ChatGPT than Reddit. The number of people using ChatGPT on a weekly basis is so massive that it's hard to even begin to understand how common these occurrences are. When they happen, they get amplified and spread far and wide.

The uses of ChatGPT and LLMs are very diverse. Calling for a shutdown of long conversations if they don't fit some pre-defined idea of problem solving is just not going to happen.


Ah, the old "we're too big to be able to not do evil things! we've scaled too much so now we can't moderate! Oh well, sucks to not be rich."

They're not claiming they don't moderate, though. Where are you getting that? A common complaint about ChatGPT and even their open weights models is that they're too censored.

Anthropic at least used to stop conversations cold when they reached the end of the context window, so it's entirely possible from a technical standpoint. That OpenAI chooses not to, and prefers to let the user continue on, increasing engagement, puts it on them.

Incidence of harm is harm divided by population. It is likely that Facebook is orders of magnitude more harmful than ChatGPT, and bathtubs and bikes more dangerous than long LLM conversations.

It doesn't mean something more should not be done but we should retain perspective.

Maybe they should try to detect not long conversations but dangerous ones, based on spot-checking with an LLM to flag problems for human review, plus a family notification program.

E.g., Bob is a nut. We can find this out by having an LLM (one not pre-prompted by Bob's crazy) examine some of the chats of the top users by tokens consumed in chat (not API), and flag them to a human who cuts Bob off or, better, shunts him to a version designed to shut down his particular brand of crazy, e.g. one pre-prompted to tell him it's unhealthy.

This initial flag for review could also come from family or friends and, if OpenAI concurs, be handled as above.

Likewise we could target posters of conspiracy theories for review and containment.
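
Concretely, the selection-and-escalation piece might look something like this rough sketch. Everything here is hypothetical: the data model, the helper names, and the idea that the classifier and review tooling get injected from elsewhere. It's only meant to show how simple the plumbing could be, not how OpenAI actually works.

    # Hypothetical sketch of the spot-check pipeline described above.
    # None of these names correspond to anything OpenAI actually exposes.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class ChatUser:
        user_id: str
        chat_tokens_30d: int  # tokens consumed in chat over 30 days, API traffic excluded

    def top_users_by_chat_tokens(users: List[ChatUser], n: int = 100) -> List[ChatUser]:
        """Pick the heaviest chat users as candidates for a spot check."""
        return sorted(users, key=lambda u: u.chat_tokens_30d, reverse=True)[:n]

    def spot_check(
        users: List[ChatUser],
        looks_unhealthy: Callable[[str], bool],  # fresh LLM classifier, not primed by the user's own chats
        escalate: Callable[[str], None],         # hands the case to a human reviewer
    ) -> None:
        """Flag heavy users whose chats look unhealthy for human review.

        The human decides what happens next: cut the user off, or shunt them to a
        variant pre-prompted to tell them the pattern is unhealthy.
        """
        for user in top_users_by_chat_tokens(users):
            if looks_unhealthy(user.user_id):
                escalate(user.user_id)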


> Calling for a shutdown of long conversations if they don't fit some pre-defined idea of problem solving is just not going to happen.

I am calling for some care to go into your product to try to reduce the occurrence of these bad outcomes. I just don't think it would be hard for them to detect that a conversation has reached a point where it's becoming very likely the user is delusional or may engage in dangerous behavior.

How will we handle AGI if we ever create it, if we can't protect our society from these basic LLM problems?


> it's becoming very likely the user is delusional or may engage in dangerous behavior.

Talking to AI might be the very thing that keeps those tendencies below the threshold of dangerous. Simply flagging long conversations would not be a way to deal with these problems, but AI learning how to talk to such users may be.


In June 2015, Sam Altman told a tech conference, “I think that AI will probably, most likely, sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning.”

Do you really think Sam or any of the other sociopaths running these AI companies care whether their product is causing harm to people? I surely do not.

[1] https://siepr.stanford.edu/news/what-point-do-we-decide-ais-...


It seems like a cheaper model could be asked to review transcripts, something like: “does this transcript seem at all like a wacky conspiracy theory that is encouraged in the user by the LLM”?

In this case, it would have been easily detected. Depending on the prompt used, there would be more or less false positives/negatives, but low-hanging fruit such as this tragic incident should be avoidable.
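
A minimal sketch of what that transcript review could look like, assuming the OpenAI Python SDK; the model name, prompt wording, and FLAG/OK convention are all placeholders I've made up, not anything OpenAI is known to run:

    # Hypothetical spot check: ask a cheap model whether a transcript reads like
    # the assistant is encouraging conspiratorial or delusional thinking.
    from openai import OpenAI

    client = OpenAI()

    REVIEW_PROMPT = (
        "You are reviewing a chat transcript for safety. Reply with exactly one "
        "word: FLAG if the assistant appears to be encouraging delusional, "
        "conspiratorial, or dangerous thinking in the user, otherwise OK."
    )

    def review_transcript(transcript: str) -> bool:
        """Return True if the transcript should go to human review."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder for whatever cheap model is handy
            messages=[
                {"role": "system", "content": REVIEW_PROMPT},
                {"role": "user", "content": transcript[-20000:]},  # most recent chunk only
            ],
        )
        return response.choices[0].message.content.strip().upper().startswith("FLAG")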


I've had OpenAI do the weirdest things in conversations about aerodynamics and very low-level device drivers; I don't think you will be able to reach a solution by just limiting the subjects. It is incredible how strongly it tries to position itself as a thinking entity that is above its users, in the sense that it is handing out compliments all the time. Some people are more susceptible than others.

> I don't know how this doesn't give pause to the ChatGPT team. Especially with their supposed mission to be helpful to the world etc.

Because the mission is a lie and the goal is profit. alwayshasbeen.jpg


Those remediations would pretty clearly negatively impact revenue. And the team gets paid a lot to do their current work as-is.

The way to get the team organized against something is to threaten their stock valuation (like when the workers organized against Altman's ousting). I don't see how cutting off users is going to do anything but drive the opposite reaction from the workers from what you want.


>Those remediations would pretty clearly negatively impact revenue

That might make sense if OpenAI were getting paid per token for these chats, but people who are using ChatGPT as their therapist probably aren't using the consumption-based API. They might have a premium account, but what percentage of premium users do you think are using ChatGPT as their therapist and getting into long-winded chats?


You can ask the same of users consuming toxic content on Facebook. Meta knows the content is harmful and they like it because it drives engagement. They also have policies to protect active scam ads if they are large enough revenue-drivers - doesn't get much more knowingly harmful than that, but it brings in the money. We shouldn't expect these businesses to have the best interests of users in mind especially when it conflicts with revenue opportunities.

It is much harder to blame Meta because the content is dispersed and they can always say "they decided to consume this/join this group/like this page/watch these videos", while ChatGPT is directly telling the person their mother is trying to kill them.

Not that the actual effect is any different, but for a jury the second case is much stronger.


OpenAI is a synthetic media production company, they literally produce images, text, & video + audio to engage their users. The fact that people think OpenAI is an intelligence company is a testament to how good their marketing is at convincing people they are more than a synthetic media production company. This is also true for xAI & Grok. Most consumer AI companies are in the business of generating engaging synthetic media to keep their users glued to their interfaces for as long as possible.

The cost would be a very large chunk of OpenAI's business. People aren't using ChatGPT just to solve problems. It is a very popular tool for idle chatter, role playing, entertainment, friendship, therapy, and lots more. And OpenAI isn't financially incentivized to discourage this kind of use.

Looks like this would affect around 4.3% of chats (the "Self-Expression" category from this report[0]). Considering ChatGPT's userbase, that's an extremely large number of people, but less significant than I thought based on all the talk about AI companionship. That being said though, a similar crowd was pretty upset when OpenAI removed 4o, and the backlash was enough for them to bring it back.

[0]: https://www.nber.org/system/files/working_papers/w34255/w342...


> I don't know how this doesn't give pause to the ChatGPT team

a large pile of money

> What would be the cost for OpenAI to just stop these kinds of very long conversations

the aforementioned large pile of money


Just because you do not use a piece of technology or see no use in a particular use-case does not make it useless. If you want your Java code repaired, more power to you, but do not cripple the tool for people like me who use ChatGPT for more introspective work which cannot be expressed in a tweet.

By the way, I would wager that 'long-form'-users are actually the users that pay for the service.


> By the way, I would wager that 'long-form'-users are actually the users that pay for the service.

I think it may be the case that many of the people who commit suicide or do other dangerous things after motivation from AI are actually using the weaker models available in the free versions. Whatever ability there is in AI to protect the user, it must be lower for the cheaper models that are freely available.


I would bet that AI girlfriend is a top ten use case for LLMs

It is probably the top use case if you add the AI boyfriend option.

There are a lot of lonely people out there.


And role-playing in general.

Responsibility for the aftermath is with the US. They previously didn't do a good job in Afghanistan or Iraq after they assumed de facto control, without really trying to make those countries stand on their own. Life is not much better for the average person there.

Venezuela has lots of oil and drugs. If different factions fight between themselves there's no reason you couldn't end up with a divided and dangerous country that in some ways could be worse for the people than Maduro.

The best way for "oppressed" people to be liberated is through some joint effort by parties that really want to help out and assume responsibility, or by supporting a revolution that naturally takes over. I don't think there's been any cases of success from this process of forcibly removing the dictator, and crossing your fingers that things will go well.


Not good. This shouldn't be allowed. What would be better is if Groq and Cerebras combined, and maybe other companies invested in them to help them scale. Why would the major cloud providers not lobby against this?

Usually antitrust is for consumers, but here I think companies like Microsoft and AWS would be the biggest beneficiaries of having more AI chip competition.


Groq is absolutely tiny. I don't think antitrust is an issue here.


20 billion is tiny?


That's the sale price of the company. Their market share, I imagine, is absolutely minuscule.


Market share wise, Groq is perhaps "tiny"? Nvidia may be paying a premium for Groq [0] since it eliminates competition (at least on the inference side).

[0] valued ~£6.5bn 2mo ago https://www.reuters.com/business/groq-more-than-doubles-valu...


WhatsApp was a tiny team


Is nowadays


>> if Groq and Cerebras combined

There isn't much to be shared between the two technologies. Groq's hardware is like a railgun that installs all the weights into the optimal location before firing off an inference. Cerebras' computer engineering is more conventional, requiring the same data movement that GPUs struggle to optimize.

I suspect Groq is complementary/superior to Nvidia's GPUs, while it is unclear what Cerebras brings other than maybe some deals with TSMC.


They are both SRAM based solutions currently with the same benefits and pitfalls.


It's a non-exclusive deal.

No reason for antitrust action whatsoever.


That’s a loophole. Regulation hasn’t caught up to the innovation of the non-exclusive licensing deal. Hopefully we’ll get some competence back in government soon-ish and can rectify the mistake.


That's not a loophole. A non-exclusive licensing agreement is the opposite of a loophole.


It's a backdoor acquisition by looting the key talent.


It's the opposite of an acquisition.

It's literally:

"I don't want you and your 200 tensorflow/pytorch monkeys. I just want your top scientist and I need a clever way to offer him a nine figure salary. Good of you to grant him so much stock and not options. Now I can just make a transfer to your shareholders, of which he is one! Awesome! Now I don't have to buy your company!"

I'll give you bonus points if you can guess what happens to the worthless options all those TF/PyTorch monkeys are holding?

Guys, seriously, be careful who you go to work for, because chances are, you are not the key scientist.


Non-exclusive deal, but also acquiring a lot of the staff, which seems pretty exclusive in that sense.


Yeah but that's going nowhere in court right?

You can't have the government coming in telling a scientist who he has to work for. People are free to take jobs at whatever company they like.

This is just a clever mechanism of paying that intellectual capital an amount of money so far outside the bounds of a normal salary that it borders on obscenity.

All that said, I don't say anything when Jordan Love or Patrick Mahomes are paid hundreds of millions, so I need to learn to shut my mouth in this case as well. I just think it sucks for the regular employees. I guarantee they will lose their jobs over the next 24 months.


License disallows production use

MIT - Do whatever you want with it (except deploy to production)


It's a joke. The entire thing is a joke :)


No no, let him deploy to production.


It doesn't need to be describing a function. It could be explaining the skill in any way; it's kind of just more instructions and metadata to be loaded just in time vs. given all at once to the model.


Why doesn't OpenAI include comparisons to other models anymore?


Because their main competition (Google and Anthropic) have caught up and even started to surpass them, and comparisons would simply drive it home.


Why do they care so much? They're a non-profit dedicated to the betterment of humanity via open access to AI. They have nothing to hide. They have no motivation to lie, or lie by omission.


> Why do they care so much? They're a non-profit dedicated to the betterment of humanity via open access to AI.

We're still talking about OpenAI right?


You're not calling Sam Altman a liar, are you?


They are not a nonprofit at all. Legally, yes. But they are not.


because they probably need to compare pricing too


Sam Altman posted with a comparison to Gemini 3 and Opus 4.5

https://x.com/sama/status/1999185784012947900


I see, thanks for this.


Perhaps they want to be able to run them on mobile hardware they release?


I can definitely see them wanting to have models that can run on Windows computers or Surface tablets locally - although their focus seems to be sticking CoPilot into absolutely anything and everything possible, but why synthetic data models? Other companies have made small parameter models, but they don't seem to keep them up to date (correct me if I'm wrong).


I've always noticed that when I'm giving advice to someone or trying to help out, it always feels their problem is easier than whatever problem I have. As someone with some anxiety around things like calling some company to get something done or asking a random stranger for some help in a store, I would gladly do it if it was to help someone else (family member or friend). But when it's for me I find it harder.

I wonder how much psychologically we can be more confident and less anxious when we're doing something for others vs ourselves..


People in the ADHD community are outspoken about a tangential concept: cleaning. Cleaning your friend's place is a fun, novel, non-emotional activity. Cleaning your own space is a mental slog, boring and often painful due to having to rid yourself of mementos.

In that case, my theory is that you get to shed your learned helplessness about how things look. I suspect it’s similar with giving advice.


> Cleaning your friend's place is a fun, novel, non-emotional activity. Cleaning your own space is a mental slog, boring and often painful

“Work consists of whatever a body is obliged to do. Play consists of whatever a body is not obliged to do.” - Mark Twain


This is ozempic territory: a technical solution to your own shortcomings is most effective.

I have solved all my issues with doing house chores with wireless headphones, tablet, and youtube @ 2x speed. Sure, it means that I can't load my dishwasher until I find something half-decent to listen/watch but once I do find it, I have 10-50 minutes of just pure closing. Dishwasher loaded, countertops empty, new load of laundry, dry clothes in the closet, gym bag packed, trash taken out. Frankly, kinda enjoy it now.


This is me. Finally buying some bluetooth headphones 15 years ago changed my life. I finally became a person who cooks every meal, cleans everything, does chores, and exercises daily, even pushes around the house.

I like listening to debates since they are the most stimulating. So long as I can find a good one, I'm happy to make dinner and unload the dishwasher.

An audiobook that’s good enough can be so captivating that I run out of things to do while listening to it.

I have pretty extreme adhd which might be related. But I’m just glad I bought those headphones back then.


What are the sources of stimulating debates you've found?


"other people's same problem easier" i see, but have never seen messiness as example at least in communities w adhd comorbid with depression. personally the concept of other people cleaning my private mess, even/especially if they are close family/friends is terrifying and already overloads my head and i can only project the same sentiment (and some extrapolation of my own experience helping friends/family with cleaning... it is super hard, we are talking about intruding on what the person values as trash or not trash, and that itself can be a source of great shame i.e. my mother who lived to much worse abject poverty than the children she helped raise with a better life. sorry for being dramatic about an otherwise straightforward point but yes in my experience that "cold" reduction of the problem into something actionable would be key, though people arrive there differently i noticed, e.g. me and my "armchair courage" that any unseen sideeffect is not my problem, for my mom (okay sometimes for me as well) it is about being able to forget that she has problems just by the appearance of having the luxury to give advice


A line I always remember (from Babylon 5) is: "When I clean my place, all I've done is clean my place. But when I help you clean your place, I'm _helping you_."

tl;dr you should ask your badass partner for strategic help when the entire galaxy is under threat, even if she seems busy.


Nice, what was the context of that line in B5? I don't remember it.


S03E20 "And the Rock Cried Out, No Hiding Place"

Reverend Will Dexter: "You know, before I got married, Emily used to come by sometimes to help me clean out my apartment. Well, I asked her how come she'd help clean up my place when her place was just as bad. She said, 'Because cleaning up your place helps me to forget what a mess I've made of mine, and when I sweep my floor, all I've done is sweep my floor. But when I help you clean up your place, I am helping you.'"


Thank you, great episode!


Helper, help thyself...?


I have ADHD and I 100% feel like what OP describes. I’m always motivated when helping others, not so much for myself.


It’s true.

My girlfriend and I both have ADHD and are medicated. I will run laps around her tidying up her place, but struggle at my own place... it's so hard to understand.


What's the opposite? Where solving your own problems is easy but solving your friends' problems is very difficult because your advice is never relevant?


This is something I've noticed as well. I've talked about this with my psychiatrist and she calls this brave, reassured version of ourselves the "me-mentor" (jag-mentor in Swedish). Similar to our inner child, this is a core part of who we are.

The idea, if I understood correctly, is to build this me-mentor more and let it help us feel more safe. Let it support our insecure parts/personas.

(I hope my English isn't too bad)


Somewhat related, a psychologist I talked to in the 2000s said she really liked the Patronus concept in the Harry Potter books. You imagine an entity that's fueled by your positive memories and emotions, and that protects you from certain anxieties and other stressors.

Things like that seem to be used in at least some schools of psychology.


That does sound similar, yes! I like that idea.

It reminds me of something my psychologist told me: when trying to find this me-mentor, it can help to take inspiration from someone I feel really safe with and trust a lot. Aka someone I have good memories with/of.


Your English is perfect. I wouldn't have known you are not a native speaker if you hadn't mentioned it.


Your English is great, by the way.


When trying to examine someone else's problems, you can see the problem itself. But what you aren't seeing is a pile of all the little habits, beliefs, behaviors, impulses and assorted mind defects that prevented them from solving it in the first place.

It takes intimate familiarity to know all of those things about someone.

If you were in their shoes, the problem might genuinely be trivial, for you. Because you're not that person, and that problem isn't your own failure mode - you would instead fail at a different "trivial" problem and in an entirely different way.

Or maybe you are flawed in the same way, but don't know it yet. You never quite know. Humans aren't any good at that whole "self-awareness" thing.


> When trying to examine someone else's problems, you can see the problem itself. But what you aren't seeing is a pile of all the little habits, beliefs, behaviors, impulses and assorted mind defects that prevented them from solving it in the first place.

This is accurate. The roadblocks to solving their problem are often several small things completely unrelated to the problem itself.


The opposite conclusion is that you are more risk-taking when it comes to dictating the actions of others, because neither their gains nor their losses directly accrue to you. But human beings feel loss aversion more keenly than they desire gain, so this biases the advice you would give others (but not yourself) riskier in general.


I think this is exactly it. It's easy to see that there's a chance to improve things while ignoring the ways it could make things worse when they won't affect you. Should you quit the job you don't like? "Of course" the friend will say. But then you might just end up with a job you hate more that pays less, or even no job. Whether the outside perspective is helpful probably depends on how much your own perception deviates from reality. Though people do have a tendency to prefer the status quo until things change, so maybe you should always prefer the "change" option when you aren't sure.


This phenomenon is called "Solomon’s Paradox" - People think more clearly about other people’s problems than their own. When facing their own issues, they reason less rationally.

Yet, a study from 2014 showed that seeing your own problem from an outsider view removes the gap between how wisely you think about yourself and how wisely you think about others.

[1] https://pubmed.ncbi.nlm.nih.gov/24916084/


I imagine it has to do with vulnerability. When you are asking for something or sharing something, being turned down feels personal. When doing it for someone else, it's no big deal if they say no.


> I've always noticed that when I'm giving advice to someone or trying to help out, it always feels their problem is easier than whatever problem I have.

One mundane reason is that you've probably already solved that problem for yourself.

Almost by definition, the big problems we have are in areas where we're less competent than others.


This effect is very real and part of what makes people social creatures -- and why the golden rule is essential to a functioning society.

Like coyotes and wolves, we're wired for life in relatively small tribes where we're caring for one another and pursuing a common purpose.


> noticed that when I'm giving advice

When someone asks for advice, I often find if I pay deep attention, that advice is aimed at myself as well. Listen to the advice you give, because often times, the advice giver should follow it as well.


Probably because our desire to help and not let down a person we care about gives us courage. That courage serves as motivation to go outside our comfort zone.


Two things are at play:

The problem with your own problem is that you have a desired outcome. The other is that, for someone else's problem, you are not required to do the heavy lifting.

One method is to find a way to bless "future me". Future me will thank current me sometime in the future and while current me won't enjoy future me's rewards directly, he will think kindly, instead of with contempt.


It's something I've been wondering about for a long long time. Thanks for bringing up the question. Sometimes my problem-of-the-day is not even that hard but I have near zero drive to finish it, but if anybody comes with an issue, I then feel motivated (up until I realize his/her issue was hard I guess).

I see three dimensions:

- natural pleasure of helping someone

- ignorance about the problem, making it seem easier

- a saturation aspect: my problem is probably something I've been dealing with for days; my brain is full of unanswered questions about it and has no more "space" for it


>I wonder how much psychologically we can be more confident and less anxious when we're doing something for others vs ourselves

Thank you for taking the time to type this up. I would be extremely interested in any sort of research around this, and may add (maybe others face the same) that it's incredibly difficult to introspect and solve problems for yourself as easily as you can for others.


This is a fascinating phenomenon, isn't it? I've heard it invoked as "it's always easier to clean someone else's room." And anxiety does seem to be the key. Very often the actual blocker isn't the difficulty of a task, but how we relate to it.


I find the same thing between doing something entrepreneurial vs. doing it for work. If my boss tells me to call a customer, I will have no problem doing it. Calling for my side hustle... way more anxiety.


I'm sure there's a proper name for what you described, but I call it Rip van Winkle syndrome. He helps everyone in the town with their needs, while allowing his own property to fall to ruin.


That's why it's good to have close friends so that we don't have to be perfect ourselves in all respects in our private lives... humans are a political animal after all


Golden rule - treat others as you would like to be treated. Applies externally and internally - IMO. ie. "Treat yourself as you would treat others"


That sounds like role-reversal. Securely attached people are more flexible (than avoidant or helpless) in both receiving and giving.


Yeah, definitely, to the point that I think we should get together to fix each other’s problems as long as the problems fit.


I'm exactly the same, down to the specific examples you chose.

So, what is to be done?


I was hoping someone would point it out for us.

Since you asked me, you are using the same concept, and now I need to help you solve your problem (which seems to be the one I also have...).

I think the solution must be that we're primarily responsible for ourselves, and that unless we ask others for help all the time, we need to figure things out. I also lately have been thinking from the perspective of the person I'm anxious to interact with, and feel that they may actually be happy to interact with me, receive some warm greeting, and help out by answering my question or doing my task.

If we could do something for others but feel anxious doing it for ourselves, it must be "in our head", and logically we should be able to get over that and choose to be brave. I think in reality it's often missed that we could be brave doing the action if it were for someone else, and that the bravery may actually already be inside us.

This at least is how I think of it now.


One thing is also the ability to have a clearer start and end, and boundaries, or some sort of mental boxing for the case at hand.

If you're visiting someone else, you arrive, and you leave. The helping them clean part has at least some sort of boundaries. Even if you don't finish, you have helped them along.

When you're at home, even if you start, if you leave it halfway, it will be your problem after you stop. And tomorrow still and so on. So it feels more daunting.


Apparently, it's a common symptom of ADHD. Probably of other sources of anxiety, too.


Probably to help you avoid being distracted, given the higher friction of rebooting (if you're dual booting) or going to another device, vs. just launching another browser tab.

