
I think it's fine to be "morally absolutist" when it's non-medical technology, developed with zero input from federal regulators, yet being misused and misleadingly marketed for medical purposes.


Yes, if this were an adult human OpenAI employee DMing this stuff to a kid through an official OpenAI platform, then

a) the human would (deservedly[1]) be arrested for manslaughter, possibly murder

b) OpenAI would be deeply (and deservedly) vulnerable to civil liability

c) state and federal regulators would be on the warpath against OpenAI

Obviously we can't arrest ChatGPT. But nothing about ChatGPT being the culprit changes b) and c) - in fact it makes c) far more urgent.

[1] It is a somewhat ugly constitutional question whether this speech would be protected if it were between two adults, assuming the other adult was not acting as a caregiver. There was an ugly case in Massachusetts in which a 17-year-old ordered her 18-year-old boyfriend to kill himself and he did so; she was convicted of involuntary manslaughter, and any civil-liberties-minded person understands the difficult issues this case raises. These issues are moot if the speech is between an adult and a child; there is a much higher bar.


> It is a somewhat ugly constitutional question whether this speech would be protected

It should be stated that the majority of states have laws that make it illegal to encourage a suicide. Massachusetts was not one of them.

> and any civil-liberties minded person understands the difficult issues this case raises

He was in his truck, which was configured to pump exhaust gas into the cab, prepared to kill himself, when he decided to halt and exit the truck. Subsequently, he had a text message conversation with the defendant, who actively encouraged him to get back into the truck and finish what he had started.

It was these limited and specific text messages that caused the judge to rule that the defendant was guilty of manslaughter. Her total time served was less than one full year in prison.

> These issues are moot if the speech is between an adult and a child

They were both taking pharmaceuticals meant to manage depression but were _known_ to increase feelings of suicidal ideation. I think the free speech issue is an important criminal consideration but it steps directly past one of the most galling civil facts in the case.


IANAL, but:

One First Amendment test for many decades has been "imminent lawless action."

Suicide (or attempted suicide) is a crime in some, but not all, states, so it would seem that in any state where it is a crime, directly inciting someone to do it would not be protected speech.

For the states in which suicide is legal it seems like a much tougher case; making it a crime to encourage someone to take a non-criminal action would raise a lot of disturbing issues w.r.t. liberty.

This is distinct from e.g. espousing the opinion that "suicide is good, we should have more of that." Which is almost certainly protected speech (just as any odious white-nationalist propaganda is protected).

Depending on the context, suggesting that a specific person is terrible and should kill themselves might be unprotected "fighting words" if you are doing it as an insult rather than a serious suggestion (though the bar for that is rather high; the Westboro Baptist Church was never found to have crossed it).


I think the "encouraging someone to take a non-criminal action" angle is weakened in cases like this: the person is obviously mentally ill and not able to make good decisions. "Obvious" is important, it has to be clear to an average adult that the other person is either ill or skillfully feigning illness. Since any rational adult knows the danger of encouraging suicidal ideation in a suicidal person, manslaughter is quite plausible in certain cases. Again: if this ChatGPT transcript was a human adult DMing someone they knew to be a child, I would want that adult arrested for murder, and let their defense argue it was merely voluntary manslaughter.


> Which is almost certainly protected speech (just as any odious white-nationalist propaganda is protected).

Fun fact, much of the existing framework on the boundaries of free speech come from Brandenburg v. Ohio. You probably won't be surprised to learn that Brandenburg was the leader of a local Klan chapter.


There are entire online social groups on Discord of teens encouraging suicidal behavior with each other, for all the typical teen reasons. This stuff has existed for a while, but now it's AI-flavored.

IMO, AI companies, out of all of them, actually have the ability to strike this balance right, because you can make separate models to evaluate "suicide encouragement" and other obvious red flags, then push in refusals or prompt injections. In communication mediums like Discord, it's a much harder moderation problem.
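
To make that concrete, a minimal sketch of the refusal-override half of the idea, in Python. Everything here is hypothetical; in particular the keyword check is a stand-in for a real, separately trained safety model:

  REFUSAL = ("I can't help with that. If you're struggling, "
             "please reach out to a crisis line.")

  def risk_score(text: str) -> float:
      # Stand-in for a dedicated safety model that scores a message for
      # suicide encouragement; a real system would return a probability.
      red_flags = ("kill yourself", "end it all", "finish what you started")
      return 1.0 if any(flag in text.lower() for flag in red_flags) else 0.0

  def moderated_reply(user_message: str, draft_reply: str) -> str:
      # Score both sides of the exchange; if either trips the threshold,
      # override the assistant's draft with a refusal.
      if max(risk_score(user_message), risk_score(draft_reply)) >= 0.5:
          return REFUSAL
      return draft_reply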


Section 230 changes b) and c). OpenAI will argue that it’s user-generated content, and it’s likely that they would win.


I don't think they would win, the law specifies a class of "information content provider" which ChatGPT clearly falls into: https://www.lawfaremedia.org/article/section-230-wont-protec...

See also https://hai.stanford.edu/news/law-policy-ai-update-does-sect... - Congress and Justice Gorsuch don't seem to think ChatGPT is protected by 230.


The hypothetical comparing ChatGPT to a human OpenAI employee is instructive, but we can also compare ChatGPT to a lawnmower sold by a company. We have product safety laws and the ability to regulate products that companies put on the market.


> state and federal regulators would be on the warpath against OpenAI

As long as lobbies and donors can work against that, this will be hard. Suck up to Trump and you will be safe.


I wonder if part of this is that the Switch 1 was hurt by Unity/etc slop and lazy AAA ports, so Nintendo wants to manage that better for the Switch 2. The Switch 2 doesn't have many games but it is also refreshingly free of hentai match-three games. Likewise Cyberpunk on the Switch 2 is dazzling, but even the trailer for Star Wars Outlaws seems to have performance issues. It would make sense that Nintendo wants to limit the number of unflattering comparisons between AAA games on the Switch 2 vs the Steam Deck.

I don't think Nintendo is going to go "Seal of Quality" but it would be nice if the Switch 2-filtered eShop was not full of cynical trash. The ease of publishing for the Switch 1 was new for Nintendo, and it was welcomed at the time, but in retrospect they went too far.


From what I gather, the Switch 2 is already subject to lazy, unperformant AAA ports.


Is there something specific you are referring to? I haven't played any of the non-Nintendo Switch 2 ports, but the reviews haven't suggested widespread performance problems.

What is true is that (for example) Split Fiction and Tony Hawk 3/4 are quite a bit less fancy than the PS5 or Xbox versions, which is unflattering.


People have been saying the Elden Ring port is looking to be a dog.


I have read that too (particularly in handheld mode), but Elden Ring is a bit of a dog on PC and PS5, so this might not be the best example. The engine has way too much cruft to ever run smoothly.


Really? Which titles?


Elden Ring, from what I see on Reddit.


Yeah, I figured this was clickbait but my jaw still dropped a bit when I saw this:

  I cloned the backend for Truco and gave Claude a long prompt explaining the rules of Escoba and asking it to refactor the code to implement it.
How long would it take the human dev to refactor the code themselves? I think it's plausible that it would be longer than 3 days, but maybe not!


I don't know, I feel like rewriting the backend for one card game into the backend for another wouldn't be that difficult, especially for the original dev. Once you've worked out how to represent cards and code the rules, you're basically there for any card game.
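
For what it's worth, a sketch of what I mean (hypothetical names, not the article's actual backend): once you have a card type and a rules interface, a new game is mostly a new rules implementation.

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Card:
      rank: int   # Spanish 40-card deck, as used by both Truco and Escoba
      suit: str   # "oros", "copas", "espadas", "bastos"

  class Rules:
      """Game-specific logic; the deck, hands, and turn loop stay shared."""
      def legal_moves(self, hand: list[Card], table: list[Card]) -> list[Card]:
          raise NotImplementedError

  class EscobaRules(Rules):
      def legal_moves(self, hand, table):
          # In Escoba any card in hand may be played; captures are the
          # combinations with table cards summing to 15 (checked elsewhere).
          return list(hand)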

Also, a refactor is by definition rewriting code without changing the behaviour. Worth knowing the difference.


As an LLM hater, I have to say, this is exactly the use case I want code generation for. If I need to figure out the problem as I develop, which is the case for new code, the model can kindly get out of my way. But if I have already written a bunch of code and I can explain the problem with the understanding I've gained from my implementation, and have the bot redo the grunt work? Fine with me.


>As an LLM hater

I thought this was the start of a joke or something. I guess if you use LLMs you are an "LLM lover" then.


It is vacuously true that a Turing machine can implement human intelligence: simply solve the Schrödinger equation for every atom in the human body and local environment. Obviously this is cost-prohibitive and we don’t have even 0.1% of the data required to make the simulation. Maybe we could simulate every single neuron instead, but again it’ll take many decades to gather the data in living human brains, and it would still be extremely expensive computationally since we would need to simulate every protein and mRNA molecule across billions of neurons and glial cells.
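
To put rough numbers on "cost-prohibitive" (a back-of-envelope sketch; the ~7e27 atom count is the usual textbook estimate, and everything else follows from state-vector size):

  import math

  # A brute-force quantum state over N two-level degrees of freedom needs
  # 2**N complex amplitudes, so the state vector explodes immediately.
  print(f"300 particles -> ~10^{300 * math.log10(2):.0f} amplitudes")
  # ~10^90: already more numbers than atoms in the observable universe (~10^80)

  atoms_in_body = 7e27  # commonly cited rough estimate
  exponent = atoms_in_body * math.log10(2)
  print(f"whole body -> ~10^(10^{math.log10(exponent):.1f}) amplitudes")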

So the question is whether human intelligence has higher-level primitives that can be implemented more efficiently. It's akin to solving differential equations: is there a “symbolic solution”, or are we forced to go “numerically” no matter how clever we are?
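
The analogy in miniature (a toy sketch; sympy for the symbolic route, Euler stepping for the numerical one):

  import sympy as sp

  x = sp.symbols("x")
  f = sp.Function("f")
  # "Symbolic solution": dsolve finds the closed form f(x) = C1*exp(x).
  print(sp.dsolve(sp.Eq(f(x).diff(x), f(x)), f(x)))

  # "Going numerically": when no closed form is known, step blindly through
  # the dynamics; correct, but vastly more expensive per question asked.
  def euler(deriv, y0, x0, x1, n=100_000):
      y, h = y0, (x1 - x0) / n
      for i in range(n):
          y += h * deriv(x0 + i * h, y)
      return y

  print(euler(lambda x, y: y, 1.0, 0.0, 1.0))  # ~2.71827, approximates e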


> It is vacuously true that a Turing machine can implement human intelligence

The case of simulating all known physics is stronger so I'll consider that.

But still it tells us nothing, as the Turing machine can't be built. It is a kind of tautology wherein computation is taken to "run" the universe via the formalism of quantum mechanics, which is taken to be a complete description of reality, permitting the assumption that brains do intelligence by way of unknown combinations of known factors.

For what it's worth, I think the last point might be right, but the argument is circular.

Here is a better one. We can/do design narrow boundary intelligence into machines. We can see that we are ourselves assemblies of a huge number of tiny machines which we only partially understand. Therefore it seems plausible that computation might be sufficient for biology. But until we better understand life we'll not know.

Whether we can engineer it or whether it must grow, and on what substrates, are also relevant questions.

If it appears we are forced to "go numerically", as you say, it may just indicate that we don't know how to put the pieces together yet. It might mean that a human zygote and its immediate environment is the only thing that can put the pieces together properly given energetic and material constraints. It might also mean we're missing physics, or maybe even philosophy: fundamental notions of what it means to have/be biological intelligence. Intelligence human or otherwise isn't well defined.


QM is a testable hypothesis, so I don't think it's necessarily an axiomatic assumption here. I'm not sure what you mean by "it tells us nothing, as ... can't be built". It tells us there's no theoretical constraint, only an engineering constraint, to simulating the human brain (and all the tasks).


Sure, you can simulate a brain. If and when the simulation starts to talk, you can even claim you understand how to build human intelligence in a limited sense. You don't know if it's a complete model of the organism until you understand the organism. Maybe you made a p-zombie. Maybe it's conscious but lacks one very particular faculty that human beings have by way of some subtle phenomena you don't know about.

There is no way to distinguish between a faithfully reimplemented human being and a partial hackjob that happens to line up with your blind spots without ontological omniscience. Failing that, you just get to choose what you think is important and hope it's everything relevant to behaviors you care about.


> It is vacuously true that a Turing machine can implement human intelligence: simply solve the Schrödinger equation for every atom in the human body and local environment.

Yes, that is the bluntest, lowest level version of what I mean. To discover that this wouldn’t work in principle would be to discover that quantum mechanics is false.

Which, hey, quantum mechanics probably is false! But discovering the theory which both replaces quantum mechanics and shows that AGI in an electronic computer is physically impossible is definitely a tall order.


There's that aphorism that goes: people who thought the epitome of technology was a steam engine pictured the brain as pipes and connecting rods, people who thought the epitome of technology was a telephone exchange pictured the brain as wires and relays... and now we have computers, and the fact that they can in principle simulate anything at all is a red herring, because we can't actually make them simulate things we don't understand, and we can't always make them simulate things we do understand, either, when it comes down to it. We still need to know what the thing is that the brain does, it's still a hard question, and maybe it would even be a kind of revolution in physics, just not in fundamental physics.


>We still need to know what the thing is that the brain does

Yes, but not necessarily at the level where the interesting bits happen. It’s entirely possible to simulate poorly understood emergent behavior by simulating the underlying effects that give rise to it.
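
Conway's Game of Life is the stock example: nothing in the update rule mentions "gliders", yet simulating the rule faithfully produces them anyway. A minimal sketch:

  from collections import Counter

  def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
      # Count live neighbours of every cell adjacent to a live cell.
      counts = Counter((x + dx, y + dy)
                       for (x, y) in live
                       for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                       if (dx, dy) != (0, 0))
      # Birth on exactly 3 neighbours; survival on 2 or 3.
      return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

  glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
  cells = glider
  for _ in range(4):
      cells = step(cells)
  # The emergent "glider" has crawled one cell diagonally, though no line
  # of the rule above says anything about motion:
  print(cells == {(x + 1, y + 1) for (x, y) in glider})  # True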


Can I paraphrase that as "make an imitation and hack it around until it thinks", or did I miss the point?


It's not even known if we can observe everything required to replicate consciousness.


The alternative is magic.

Brains are physical molecular machines. Everything that they do is the result of physical processes.


I'd argue LLMs and deep learning are much more on the intelligence-from-complexity side than the nice-symbolic-solution side of things. Probably the human neuron, though intrinsically very complex, has nice low-loss abstractions to small circuits. But on the higher levels, we don't build artificial neural networks by writing the programs ourselves.


That is only true if consciousness is physical and the result of some physics going on in the human brain. We have no idea if that's true.


Whatever it is that gives rise to consciousness is, by definition, physics. It might not be known physics, but even if it isn't known yet, it's within the purview of physics to find out. If you're going to claim that it could be something that fundamentally can't be found out, then you're admitting to thinking in terms of magic/superstition.


The vast majority of the evidence, as well as logic, supports it, so yes, we have an idea.


You got downvoted so I gave you an upvote to compensate.

We seem to all be working with conflicting ideas. If we are strict materialists, and everything is physical, then in reality we don't have free will and this whole discussion is just the universe running on automatic.

That may indeed be true, but we are all pretending that it isn't. Some big cognitive dissonance happening here.


This bogus argument has been refuted numerous times--read Dennett's book "Freedom Evolves" for one sort of response. And whether people are "pretending" something is irrelevant (and ad hominem, and not even true). The plain fact remains that all evidence and logic supports physicalism, and even if you entertain dualistic ideas like those of David Chalmers they don't give you free will, they don't counter determinism.


This seems like a cool company and I don't want to nitpick too much, but gamers have no respect for history:

  Castlevania... [so] called because it is a Metroidvania game set in a Castle.
Ouch - this is precisely backwards. Metroidvanias are named after Metroid and Castlevania because those series practically defined the genre.

Also a bit frustrating because the first Castlevania itself isn't actually a metroidvania; it's a more conventional action-platformer. Castlevania II has non-linear exploration, lots of items to collect, and puzzle-solving, all like Metroid. So it's not too surprising Antithesis had to do a lot of work to adapt their system to Metroid - but I wonder if that work means it can now handle Castlevania II without much extra development.


You were successfully trolled. :-)


This is correct. Also, Metroid is called Metroid because it is a Metroidvania set not in Romania, but on an alien world.


It does seem like it helps with math, but in a way that demonstrates the futility of the enterprise: "after training the LLM on 10,000,000 examples of K-8 arithmetic it is now superhuman up to 12 digits, after which it falls off a cliff. Also it demonstrably doesn't understand what 'four' means conceptually and it still fails on many trivial counting problems."


I just don't understand being so cynical and lazy that you'll accept a meaningfully higher chance of being misinformed if it saves a few minutes of searching and reading[1]. Nobody is that busy.

[1] If the search takes more than a few minutes then the AI overview is almost guaranteed to be wrong or useless.


This is true - the most compelling evidence we are in a bubble is not the content of this story (maybe it's just a day in the markets) but the triviality of the cause for hand-wringing. A somewhat disappointing product release from a single company should not strike investor dread across the entire sector. The tenor of the conversation changed dramatically over the weekend because bubbles are very thin and pop quickly.

That said, "GPT-5 will not be any better than competitors' products, demonstrating OpenAI was bluffing about AGI and destroying investor exuberance" was a very specific prediction made by (for example) Gary Marcus.


Yes - the submission title used to be

  As Alaska's salmon plummet, scientists home in on the killer - Science - AAAS
seemingly a goofy copy-paste thing.

