
> It's to incentivize large commercial and political interests to disclose their usage of generative AI.

You would be okay allowing small businesses an exemption from this regulation but not large businesses? Fine. As a large business I'll have a mini subsidiary operate the models and exempt myself from the regulation.

I still fail to see what benefit this holds. Why do you care if something is generative? We already have laws against libel and against false advertising.




> You would be okay allowing small businesses an exemption from this regulation but not large businesses?

That's not what I said. Small businesses are not exempt from copyright laws either. They typically don't need to dedicate the same resources to compliance as large entities though, and this feels fair to me.

> I still fail to see what benefit this holds.

I have found recent arguments by Harari (and others) that generative AI is particularly problematic for discourse and democracy to be persuasive [1][2]. Generative content has the potential, long-term, to be as disruptive as the printing press. Step changes in technological capabilities require high levels of scrutiny, and often new legislative regimes.

EDIT: It is no coincidence that I see parallels in the current debate over generative AI in education: these tools are OK to use, but their use must be disclosed so the work can be understood in context. I desire the ability to filter the content I consume on "generated by AI". The value of that, to me, is self-evident.
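
To make that concrete: if disclosure were machine-readable, the filter would be trivial to build. A minimal sketch, assuming each item carries a self-reported ai_generated flag (the field name and everything else here are hypothetical; no such standard exists today):

    from dataclasses import dataclass

    @dataclass
    class FeedItem:
        title: str
        ai_generated: bool  # the mandated disclosure bit (hypothetical)

    def filter_feed(items, allow_ai=False):
        # Keep only items matching the reader's AI-content preference.
        return [i for i in items if allow_ai or not i.ai_generated]

    feed = [FeedItem("Hand-written essay", False),
            FeedItem("LLM-drafted listicle", True)]
    print([i.title for i in filter_feed(feed)])  # ['Hand-written essay']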

1. https://www.economist.com/by-invitation/2023/04/28/yuval-noa...
2. https://www.nytimes.com/2023/03/24/opinion/yuval-harari-ai-c...


> They typically don't need to dedicate the same resources to compliance as large entities though, and this feels fair to me.

They typically don't actually dedicate the same resources because they don't have much money or operate at sufficient scale for anybody to care, so nobody bothers to sue them. But that's not the same thing at all. We regularly see small entities getting harassed under these kinds of laws, e.g. when youtube-dl gets a DMCA takedown even though the repository contains no infringing code and has substantial non-infringing uses.


> They typically don't actually dedicate the same resources because they don't have much money or operate at sufficient scale for anybody to care, so nobody bothers to sue them

Yes, but there are also powerful provisions like Section 230 [1] that protect smaller operations. I will concede that copyright legislation has severe flaws. Affirmative defenses and other protections for the little guy would be a necessary component of any new regime.

> when youtube-dl gets a DMCA takedown even though the repository contains no infringing code and has substantial non-infringing uses.

Look, I have used and like youtube-dl too. But it is clear to me that it operates in a gray area of copyright law. Secondary liability is a thing. Per the EFF's excellent discussion of some of these issues [2]:

> In the Aimster case, the court suggested that the Betamax defense may require an evaluation of the proportion of infringing to noninfringing uses, contrary to language in the Supreme Court's Sony ruling.

I do not think it is clear how youtube-dl fares on such a test. I am not a lawyer, but the issue does not seem as clear-cut to me as you are presenting it.

1. https://www.eff.org/issues/cda230
2. https://www.eff.org/pages/iaal-what-peer-peer-developers-nee...


> Yes, but there are also powerful provisions like Section 230 [1] that protect smaller operations.

This isn't because of the organization size, and doesn't apply to copyright, which is handled by the DMCA.

> But it is clear to me that it operates in a gray area of copyright law.

Which is the problem. It should be unambiguously legal.

Otherwise the little guy can be harassed, and "maybe it's infringing" is all a harasser needs to extend the harassment, or to get them shut down even if it is legal, because the recipient of the notice isn't willing to take the risk.

> > In the Aimster case, the court suggested that the Betamax defense may require an evaluation of the proportion of infringing to noninfringing uses, contrary to language in the Supreme Court's Sony ruling.

Notably this was a circuit court case and not a Supreme Court case, and:

> The discussion of proportionality in the Aimster opinion is arguably not binding on any subsequent court, as the outcome in that case was determined by Aimster's failure to introduce any evidence of noninfringing uses for its technology.

But the DMCA takedown process wouldn't be the correct tool to use even if youtube-dl was unquestionably illegal -- because it still isn't an infringing work. It's the same reason the DMCA process isn't supposed to be used for material which is allegedly libelous. But the DMCA's process is so open to abuse that it gets used for things like that regardless and acts as a de facto prior restraint, and is also used against any number of things that aren't even questionably illegal. Like the legitimate website of a competitor which the claimant wants taken down because they are the bad actor, and which then gets taken down because the process rewards expeditiously processing takedowns while fraudulent ones generally go unpunished.


> This isn't because of the organization size, and doesn't apply to copyright, which is handled by the DMCA.

Ok, I'll rephrase: the clarity of its mechanisms and protections benefits small and large organizations alike.

My understanding is that it no longer applies to copyright because the DMCA and specifically OCILLA [1] supersede it. I admit I am not an expert here.

> Which is the problem. It should be unambiguously legal.

I have conflicting opinions on this point. I am not sure whether I agree or disagree, for whatever that is worth.

> But the DMCA takedown process wouldn't be the correct tool to use even if youtube-dl was unquestionably illegal

This is totally fair. I am also not a fan of the DMCA and its takedown processes, and think they should be treated as a negative model for any future legislation.

I'd prefer anything new to have clear guidelines and strong protections like Section 230 of the CDA (immunity from liability within clear boundaries) rather than like OCILLA.

1. https://en.wikipedia.org/wiki/Online_Copyright_Infringement_...


> I desire the ability to filter the content I consume on "generated by AI". The value of that, to me, is self-evident.

You should vote with your wallet and only patronize businesses that self-disclose. You don't need to create regulation to achieve this.

With regard to the articles: they are entirely speculative, and I disagree wholly with them, primarily because their premise is that humans are not rational and discerning actors. The only way AI generates chaos in these instances is by generating so much noise as to make online discussions worthless. People will migrate to closed communities of personal or near-personal acquaintances (web of trust like, as sketched below) or to meatspace.
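
A toy sketch of the web-of-trust filtering I mean, assuming a hand-maintained trust graph (all names and the graph itself are invented): only surface posts from authors within a couple of trust hops of you.

    from collections import deque

    def within_trust(graph, me, author, max_hops=2):
        # Breadth-first search the trust graph out to max_hops edges.
        frontier, seen = deque([(me, 0)]), {me}
        while frontier:
            person, hops = frontier.popleft()
            if person == author:
                return True
            if hops < max_hops:
                for friend in graph.get(person, set()) - seen:
                    seen.add(friend)
                    frontier.append((friend, hops + 1))
        return False

    trust = {"me": {"alice"}, "alice": {"bob"}, "bob": {"mallory"}}
    print(within_trust(trust, "me", "bob"))      # True: friend of a friend
    print(within_trust(trust, "me", "mallory"))  # False: three hops out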

Here are some paragraphs I found especially egregious:

> In recent years the qAnon cult has coalesced around anonymous online messages, known as “q drops”. Followers collected, revered and interpreted these q drops as a sacred text. While to the best of our knowledge all previous q drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon that might be a reality.

Dumb people will dumb. People with different values will different. I see no reason that AI poses increased risk to cult followers of Q. If someone isn't going to take the time to validate their sources, the source doesn't much matter.

> On a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities that we think are humans—but are actually ai. The catch is that it is utterly pointless for us to spend time trying to change the declared opinions of an ai bot, while the ai could hone its messages so precisely that it stands a good chance of influencing us.

In these instances, does it matter that the discussion is being held with an AI? Half the use of discussion is to refine one's own viewpoints by having to articulate one's position and think through the cause and effect of proposals.

> The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the ai chatbot. If ai can influence people to risk their jobs for it, what else could it induce them to do?

Intimacy isn't necessarily the driver for this. It very well could have been Lemoine's desire to be first to market that motivated the claim, or a simple misinterpreted signal, à la LK-99.

> Even without creating “fake intimacy”, the new ai tools would have an immense influence on our opinions and worldviews. People may come to use a single ai adviser as a one-stop, all-knowing oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can just ask the oracle to tell me the latest news? And what’s the purpose of advertisements, when I can just ask the oracle to tell me what to buy?

This is akin to the concerns of scribes at the advent of the printing press. The market will more efficiently reallocate these workers. Or better yet, people may still choose to search to validate the output of a statistical model. That seems likely to me.

> We can still regulate the new ai tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, ai can make exponentially more powerful ai. The first crucial step is to demand rigorous safety checks before powerful ai tools are released into the public domain.

Now we get to the point: please regulate me harder. What's to stop a more powerful AI from corrupting the minds of the legislative body through intimacy or other nonsense? Once it is sentient, it's too late, right? So we need to prohibit people from multiplying matrices without government approval right now. This is just a pathetic hit piece to sway public opinion and get barriers to entry erected to protect companies like OpenAI.

Markets are free. Let people consume what they want so long as there isn't an involuntary externality, and conversing with anons on the web does not guarantee that you're speaking with a human anyway. Both of us could be bots. It doesn't matter. Either our opinions will be refined internally, we will make points to influence the other, or we will take up some bytes in Dang's database with no other impact.


> You should vote with your wallet and only patronize businesses that self-disclose. You don't need to create regulation to achieve this.

This is a fantasy. It seems very likely to me that, sans regulation, the market utopia you describe will never appear.

I am not entirely convinced by the arguments in the linked opinions either. However, I do agree with the main thrust that (1) machines that are indistinguishable from humans are a novel and serious issue, and (2) without some kind of consumer protections or guardrails things will go horribly wrong.


> This is a fantasy. It seems very likely to me that, sans regulation, the market utopia you describe will never appear.

I strongly disagree. I heard the same arguments about how Google needs regulation because nobody could possibly compete. A few years later we have DDG, Brave Search, Searx, etc.


You mean the market will sacrifice people in order to optimize!?!?!?!

say it ain't so bobby, say it ain't so!


There are no machines that are indistinguishable from humans. That is science fiction.



