1) Integration or removal of features isn't speech, and it has been subject to government compulsion for a long time (e.g. seat belts and catalytic converters in automobiles).

2) Business speech is limited in many, many ways. There is even compelled speech in business (e.g. black box warnings, mandatory sonograms prior to abortions).





I said, "As long as nobody is being harmed". Seatbelts and catalytic converters are about keeping people safe from harm. As are black box warnings and mandatory sonograms.

And legally, code and software are considered a form of speech in many contexts.

Do you really want the government to start telling you what software you can and cannot build? You think the government should be able to outlaw Python and require you to do your work in Java, and outlaw JSON and require your APIs to return XML? Because that's the type of interference you're talking about here.


Mandatory sonograms aren't about harm prevention. (Though yes, I would agree with you if you said the government should not be able to compel them.)

In the US, commercial speech gets far weaker constitutional protection than individual speech, with "the press" as the notable exception. This falls under the commerce clause and the First Amendment, respectively.

I assemble DNA; I am not a programmer. And yes, due to biosecurity concerns there are constraints. Again, this might be covered under your "does no harm" standard, though my making smallpox, for example, would cause no more harm than someone building a nuclear weapon would. The harm would come from releasing it.

But I think, given that AI has encouraged people toward suicide and would let minors circumvent parental controls, as examples, that regulations on AI integration in software, including mandates that users be able to disable it (NOTE, THIS DOESN'T FORCE USERS TO DISABLE IT!!), would also fall under your harm standard. Beyond that, the leaking of personally identifiable information causes material harm every day. So end users need proactive control over what AI does on their computer, and over how easy it is to accidentally enable information-gathering AI when that was not intended.

I can come up with more examples of harm beyond mere annoyance. Hopefully these examples are enough.


Those examples of harm are not good ones.

The topic of suicide and LLMs is a nuanced and complex one, but LLMs aren't suggesting it out of nowhere when summarizing your inbox or calendar. Those are conversations users actively start.

As for leaking PII, that's definitely something to be aware of, but it's not a major practical concern for any end users so far. We'll see if prompt injection turns into a significant real-world threat and what can be done to mitigate it.

But people here aren't arguing against LLM features based on substantial harms. They're doing it because they don't like it in their UX. That's not a good enough reason for the government to get involved.

(Also, regarding sonograms, I typed without thinking -- yes, of course the ones that are medically unnecessary have no justification in law, which is precisely why US federal courts have struck them down in North Carolina, Indiana, and Kentucky. And even when they're medically necessary, that's a decision for doctors, not lawmakers.)


> Those examples of harm are not good ones.

I emphatically disagree. See you at the ballot box.

> but it's not a major practical concern for any end users so far.

My wife came across a post or comment by a person considering preemptive suicide out of fear that their ChatGPT logs might one day be leaked. Yes, fear of leaks is a major practical concern for at least that user.


Fear of leaks, and the other harms you mention, have nothing to do with the question at hand, which is whether these features are enabled by default.

If someone is using ChatGPT, they're using ChatGPT. They're not inputting sensitive personal secrets by accident. Turning Gemini off by default in Gmail isn't going to change whether someone is using ChatGPT as a therapist or something.

You seem to simply be arguing that you don't like LLMs. To which I'll reply: if they do turn out to present substantial harms that need to be regulated, then so be it, and regulate them appropriately.

But that applies to all of them, and has nothing to do with the question at hand, which is whether they can be enabled by default in consumer products. As long as chatgpt.com and gemini.google.com exist, there's no basis for asking the government to turn off LLM features by default in Gmail or Calendar, while making them freely available as standalone products. Does that make sense?



