'No AI Fraud Act' Could Outlaw Parodies, Political Cartoons, and More (reason.com)
120 points by elsewhen on Jan 18, 2024 | 140 comments


Maybe this is reasonable, iff it only applies to fake media that is indistinguishable from the real thing. Like, if you show a jury two videos and they can't tell which one is real and which one is fake. I can accept that there is an implied claim of realism by creating realistic media, and that would be a lie, very similar to libel.


I think it entirely depends on what the media is doing.

I don't care if a hyper-realistic political figure impersonator gets up and does a comedy sketch. What I care about is if said impersonator got up and said "I, Bob Dole, use Depends and think that everyone should use it".

That is, deception used for comedic effect should be protected. Deception used to hawk goods should not be ok.


> Deception used to hawk goods should not be ok.

Honestly, focusing on AI seems silly since the internet, at this point, is completely neck-deep in deceptive scams being pushed by reputable, mainstream, important orgs.

Google, Facebook, Amazon, etc. take in literal billions of revenue to show people scam ads. The scammers are directly paying them money - presumably these are not anonymous orgs, but people with credit card numbers and ID. And the big internet giants just gladly act as for-profit middle-men for ripping off confused elderly people, because reviewing the ads posted to their platform isn't practical.

Something has completely failed in enforcement.


It's a start. It's too much to wish to undo the Internet becoming a giant scamming platform without something shifting. But AI faking is a new class of danger of its own since Gen AI eliminates tons of markers of authenticity that used to be reliable.


It’s not too much of a wish to break up these massive fraudulent companies that serve no one except scammers and ad agencies pushing products that are broken by design.

They should be broken up and shut down no matter what will be “shifting” as a result (whatever that is supposed to mean)

The brightest minds of our generation are working on making us click more ads — it's dangerous and an unimaginable waste.


This would already be illegal under false advertising laws. There may be other use cases that are similarly objectionable but that are not covered by these laws (because they don't involve advertising, for example).


Bob Dole advertised Viagra for them; I honestly doubt it would take more than a few $$s to get him to advertise Depends. AI not needed.


It's the internet. You can just deceive people anonymously even if it's illegal.


Right, but can McDonald's run a commercial with an AI Linus Torvalds endorsing their product?

I think that fakes happening on the internet aren't super interesting. Those are typically smaller scams anyways. I worry way more about much larger companies doing "Here's what so-and-so would say about this circumstance with our AI generation" or whatever.

Large companies have much larger reach than bitcoin scammers.


If the conditions of the laws are made such that there is no overreach into individual freedoms, I agree with you. But that is a big ask.


I understand and appreciate your point, but I would like to point out there is a large segment of society that, if shown an article from The Onion, wouldn't realize it is satire.


I think it's okay for The Onion to make satirical news reports, but maybe not okay to make a realistic deep-fake of someone. An actor could dress and act like that person though.


Would this be allowed:

https://www.theonion.com/frail-emaciated-mlb-players-still-a...

In case you aren't a baseball fan, that's a photoshopped image of baseball player Corey Seager.


I was maybe on Buttons' side but your checkmate won me over.


What about a comedian who does voice impressions that are good enough to be indistinguishable from the politician to the average person?


If the comedian starts using that ability to make people think the politician said something they didn't say, then the comedian is guilty of fraud.

Also, "humans can do it" is not a reason to allow AI to do the same thing. GenAI systems can do things at high speed 24 hours a day forever and the fraud potential is much much much higher.


> then the comedian is guilty of fraud

Fraud

wrongful or criminal deception intended to result in financial or personal gain

What's the personal gain here?


Thought experiment: If I steal someone else's credit card to buy things (example from FindLaw [1]) that I won't use just to harm my victim, would I be committing fraud? Or would only the non-fraud charges apply?

[1] https://www.findlaw.com/criminal/criminal-charges/fraud.html


Don't trust findlaw.com or a dictionary definition. It doesn't matter if you personally gain. Here is the actual text of the US federal fraud statute [1]

"....obtaining money or property by means of false or fraudulent pretenses, representations, or promises....."

[1] https://www.law.cornell.edu/uscode/text/18/1341


That's not fraud, that's just lying. Lying is legal.


And those people's votes matter no less than yours.

Misinformation is very dangerous in a democracy, especially when it is easier to sow than real information.


The Supreme Court has historically upheld the First Amendment pretty well. You can't shout "fire" in a crowded theater and the like; it doesn't mean you can say anything by legal precedent, but you can say a lot. On a historical time scale, it seems like a steady stream of legislation gets passed and then completely struck down or neutered into irrelevance on first contact with the Supreme Court shortly after attempted enforcement.

I guess the optics are good in taking money from entrenched interests and looking good to your constituents on passing stuff that the Supreme Court will neuter in less than a year. I just find it boggling that they don't even try to write legislation that will stand challenge.


At risk of nit picking: you can shout fire in a crowded theater. That was a metaphor used in a historic case that’s been pretty radically walked back given how bad of an opinion it was.

https://www.popehat.com/2012/09/19/three-generations-of-a-ha... Is a good read on the topic.


I’m really surprised that it’s okay to shout fire in a crowded theater but not okay to hit the fire alarm.

I also don’t understand why the author of parent link calls it “appeal to authority”. It’s one of the most concise, self-explanatory examples of free speech crossing the line between stating opinion and being an action.


Except it’s not. Shouting “fire” isn’t an action. And the metaphor was used to explain something (handing out anti-war pamphlets) which is clearly protected speech.

It’s described as an appeal to authority because generally, the quote is invoked to imply ~”well a Supreme Court justice said there are limits on free speech, so clearly we can limit this speech we’re discussing”.

There are well-defined exceptions to free speech in the US, with well established tests for assessing them. A rough overview can be found here: https://en.m.wikipedia.org/wiki/United_States_free_speech_ex...


> Shouting “fire” isn’t an action.

What definition of action are you using? I couldn't find any that would exclude shouting.


I was commenting in the context of this comment thread, where the person I was replying to referred to “shouting fire” as a clear example of the distinction “between stating opinion and being an action.”

1st Amendment jurisprudence is pretty clear that shouting is speech/expression, and is subject to the finite and well defined set of exceptions to 1st amendment protection.


You are not allowed to falsely shout "Fire!" in a crowded theater. That famous line from Schenck is often misquoted to drop the "falsely".

Schenck is no longer good law, but this metaphor captures how some classes of speech fall outside the First Amendment. Specifically, false speech.


False speech generally opens you up to civil liability. If I shout fire in the theater, everyone rushes out, and I get sued by somebody who was injured, that seems clearly within the free speech exceptions to me.

But if I shout fire in the theater and nobody does anything… what crime have I committed / what liability do I potentially have?


The fire in a crowded theater metaphor is about false speech. The line is often misquoted to drop the "falsely."

For the second part, I would ask your attorney. But in general terms, attempts that fail can be just as bad as the crime itself.


I think you’re misunderstanding the metaphor.

Holmes was comparing “falsely shouting fire in a crowded theater” not to an example of false speech, but an example of inflammatory speech. Schenck wasn’t making false statements, he was encouraging resistance to the draft during wartime.


Christopher Hitchens yelling "fire!" in a crowded theater... https://youtu.be/zDap-K6GmL0


People like to say strawman and appeal-to-authority but often don't notice it's not the right situation.

Very common on this website. Disagreeing here is often called strawmanning.


Often people deny something is an appeal to authority if it is an appeal to an authority they accept.


The thing is we don't know each other's identities here. If the thread is about the right way to do something in gamedev and you tell me you used to work at Big Game Company, and the information you provide seems sound, it is truly evidence in favor of the value of your information. It need not be claimed explicitly "and they work on this there, therefore I also have valid experience there, and as such have experience in the domain" because we assume that's understood already.

People throw around appeal to authority as if a relationship or prior experience with any institution, or association with any well-known name, could not be causal evidence. And just throw the term out if they see a title, like it erases the strength of a position by mere utterance.

If somebody told me John Carmack said to use such and such spatial indexing structure for this case, it is good information. There is a sincere practicality here. Not just manipulation or weak rhetoric.

A good place to use it would be when someone with illegitimate authority is called upon, or the surrounding information is so weak or even untrue that it contradicts the association with said qualified authority.


Saying “you should do this because John Carmack did it” would absolutely be an appeal to authority. The fact that sometimes you really should do that thing doesn’t somehow negate the fact that the rhetorical strategy being used is ~”you should do this because an authority on the matter did it”.

The rhetorical strategy “appeal to authority” doesn’t have the conditions you’re implying about the proposed hypothesis being weak, or about the authority being illegitimate.


It doesn't? Well in that case it's even more useless than I thought!

I'm not concerned with the clothing of someone's rhetoric; I'm concerned with truth and utility.


I mean appeal to authority in particular is a fallacy in structured, formal debate, where your argument is supposed to stand on its own because all that's being evaluated is the participants' ability to reason and skill in rhetoric. It doesn't apply nearly as well to standard discussion.

Unless it's narrower than I think of it. If "Appeal to authority" is supposed to only refer to authorities that aren't subject matter experts, and "appeal to expertise" wouldn't fall under the fallacy then I suppose it's a reasonable shorthand for "Oprah doesn't actually have much more credibility on Mad Cow Disease than, say, I do"


I think the gap here, in standard discussion, is that invoking a rhetorical fallacy doesn't make something _wrong_.

At any number of points in my career, I’ve argued in favor of using a technology because the majority of the industry is using it. On one hand, that’s a bandwagon fallacy. But it’s also often the right answer: there are actual benefits in technology of using something with an active userbase, where new hires are likely to have familiarity, etc.

Likewise, appeals to authority are not fatal to a suggestion. If we’re debating two courses of action and I say “well $smart_engineer generally recommends option A”, that’s meaningful. But it’s still an appeal to authority. In a good discussion, the follow-up question would be “why do they generally recommend that?”, which is how you determine if the authority’s wisdom applies to your situation.


It's not a bandwagon fallacy then... the fallacy part only applies if it has no causal influence on the point.

In the case of software, popularity causally translates into tooling. So it is not a fallacy. Someone deciding to make their railroad the same gauge as everyone else is not committing a fallacy. The fallacy would be if everyone in the town said "we should make it different" and you ask why, and people respond "because that's what everyone else thinks, and the mayor".


I'd assume if you yell fire in a crowded space and there is a stampede (but no fire) you could face civil liability, but not necessarily criminal charges.


Probably. There are plenty of areas where you can be liable for speech without the 1st Amendment being relevant.

There are parallels to the test for incitement. If you say “Let’s go burn down the grocery store” while you’re out with your friends, and everybody chuckles and nobody does anything to burn down the grocery store, you’re pretty much free and clear of incitement. Your random outburst wasn’t likely to cause imminent lawless action.

If you shout “fire” in the theatre and nothing happens, nobody has been damaged and so there’s no civil claim. But if you shout fire and that leads to a stampede and I get hurt, I have a decent pitch for holding you liable for the harm your statement caused me.


More relevant: you stand on stage in front of a crowd and tell them they have to fight, and in the same speech tell them to march on the Capitol, where violence then breaks out and leads to physical harm. What is that?


I guess you’re going to find out. But the answer is “it depends”. If I start every speech with “Carthago delenda est” for a year and nobody destroys Carthage, and then one day I say it and they do, was my comment directed to inciting imminent lawless action, and likely to produce it?


The "shouting fire in a crowded theater" referred to the Italian Hall Disaster [1], where union busters did precisely that.

[1] https://en.wikipedia.org/wiki/Italian_Hall_disaster


If that’s the state of things - taking money for personal interest to oppress the people that are supposed to be represented - perhaps we need an amendment to bar holding public office after attaching a name (authoring or sponsoring) to bills that get struck down as unconstitutional. I don’t know what the solution is, but there should be some teeth and personal accountability for shenanigans like this.


Yeah but like, how do you change the rules when the people who can make the rules are the ones who made the rules because they like it that way?

Kinda feels like asking the king to ditch the crown because "yaknow, it's the right thing to do"


Politicians that create laws that are later deemed unconstitutional need to be removed from office. Without any repercussions they'll just keep trying until they eventually slip one through.


I think your issue is less with politicians and more with the concept of checks and balances. A system where the executive and legislative branches are constantly at risk of dismissal at the decision of the judicial has massive problems. The government would be basically unable to attempt legislation for emerging situations, because there are often times where there just are not clear judicial standards for how the constitution applies.


I don't know why there isn't some sort of legislative barrier that at least runs new laws through some body that can flag constitutional issues. Most legislators have law degrees, so you would think they'd be aware of the issues before passing them.

The worst ones are criminal laws that send people to prison for years only to be found unconstitutional much later down the road. (e.g. Chicago's gun ban)


Quite often they know the law they're passing isn't constitutional but it's good campaign optics so they do it anyway. When it is overturned by the courts, voters generally get angry with the court for overturning the law rather than with their congress critter for passing it.


This is the same scam a lot of trial court judges run. They will find a defendant guilty at bench trial with no evidence, or massively over-sentence them, then when the appellate court reverses it they can just jump up and down and blame out-of-touch appeals court judges.


That seems like a good way for the courts to get rid of political opponents whenever it's convenient. I think it'd be a lot less dangerous to just put a restriction on how soon something deemed unconstitutional can be proposed again. A cool down period isn't so bad, and ideally, if someone in congress keeps putting forward a bill nobody wants they'd be voted out of office by the people.


Freedom of speech/expression does not protect defamation or similar misrepresentations of fact that harm people in any country I know of. Perjury, filing a false report, defamation, confidence-jobs, libel/slander/whatever, etc.

Releasing faked content that implies that what's being presented is true events? Stuff that's clearly harmful to a person? That's already illegal everywhere, but also functionally impossible to actually enforce. Whack-a-mole lawsuits for every viral faked video of Mr Beast or Elon Musk will never work, unless the US gov't creates something equivalent to the DMCA (laws defining how content hosts should police what they host) for this kind of actionable disinformation.

I imagine that in the US, works that are obvious satire or otherwise explicitly marked as satire will, as always, be protected by the Supreme Court. However, videos that simply use AI to fake events about real people? That will stand up, I think. IANAL.

But it doesn't matter. We've already seen that the laws on defamation don't actually work if the disinfo is diffused instead of having a clear identifiable target with deep pockets.

If Fox News posts a faked video of Greta Thunberg explaining her nefarious scheme to destroy the economy, I expect them to get sued again. But if such a video goes viral and ends up on every FWD from Grandma and every conservative facebook group and *chan? Nobody will face consequences.

Stuff like that is already happening *now*, even without AI. Trivial stuff like faking newspaper articles and then posting them as a screenshot gets passed around on X and FB and various messaging tools. Look how many people 100% confidently believe the US election was stolen, or climate change is fake, or vaccines are a nefarious scheme for mind-control, or whatever.


As a thought experiment: if I were to use AI to generate a video of a famous person describing how much they love green beans, and how they’ll do anything to get their hands on a can of delicious green beans, and I post it publicly have I committed a crime?


The person you impersonated may sue you, I guess.

You used his image to promote green beans without the right to do so. Normally that's something you pay for; you didn't, and I guess he may ask for what the personality rights for such an ad would normally cost. And depending on how you did it, it could conceivably damage his reputation, qualifying as defamation. I don't know, maybe he got in trouble with the red bean industry because of that.

That's similar to violating a trademark.


Assuming there's no actual deeper political or financial implications, no. The standard for libel and slander for a public figure are 1. The content is inaccurate. 2. You knew it was inaccurate (you have to have known they don't feel that way about green beans). 3. The content does measurable harm to their reputation or business.

If that feels like a borderline impossible standard to reach, well, now you know why celebrity tabloids are so prevalent, despite their wealthy, powerful, influential targets hating them. We take free speech very seriously in the United States.


I think the part of this that will be interesting, and that the legal system will have difficulty addressing, is what “inaccurate” means.

At any point in history, I could have written an article about why I believe that a prominent public figure worships the devil and is engaged in a plot to blow up the moon. As long as I didn’t attempt to back up my story with undisclosed facts (“I was allowed access to his email inbox, and now it’s clear to me that he’s trying to blow up the moon”) or lie about a hard fact (“he told me that he is going to blow up the moon”), I would be fine.

The gap for faked media is that you can blur the line between what I am saying about someone and what it looks like they’re saying themselves. So if I make a fake video that appears to be Obama cackling while he describes his plan to blow up the moon, and post it to my YouTube account… am I presenting it as if Obama said these things? Obama doesn’t generally publish content on random tech dudes’ YouTube channels. There’s a line somewhere where the presumption is that the content is faked, but I don’t know of any firm answers so far from the legal system on where they plan to draw that line.


You mean like Steve Jobs' Apple Orchards? https://youtu.be/lm8cScTvvBo?si=7ItrpdnFPqgLLNxR


Let’s assume for sake of the hypothetical: like that, but with no disclaimer at the front and with video instead of photos.


Reading through the actual law, it seems that only a few qualifiers are needed to prevent abuse. Although it seems it could be rendered toothless according to page twelve, section twelve, subsection two. The wording in that means somebody could interpret other parody laws to render enforcement basically impossible. It's also concerning that the main focus tends to be on musicians, despite listing deepfake nudes and other malicious uses of the technology as the impetus for the bill.


Agreed, it starts by talking about AI and then when the actual law is written it does not use a single qualifier to identify AI. It's written in a way that almost makes me think some public figure is mad they got lampooned on SNL or some comedy sketch.


Bandaid stuff.

Local models are out there, this is a situation where moral people comply with the law, but immoral people use a corporation to shield themselves.


Imagine getting something like DMCA takedowns because your voice happens to sound similar to someone famous or you look like someone else.


There is already precedent of this, where it was overt. Tom Waits, owner of a very distinctive vocal style, famously successfully sued Frito-Lay for making a commercial that used a singer that sounded too much like him. This was in the 80s, well before the Web or Generative AI.


Your voice or your looks are not AI.


And a bot programmed to scan YT for anything that sounds like X person's voice isn't going to give a shit if it's literally just your voice, they'll still throw a DMCA notice at your video. Bogus DMCA takedowns are thrown at people all the time for stuff like this, and Google already hardly cares to hear your argument against the notice.


> Bogus DMCA takedowns are thrown at people all the time for stuff like this, and Google already hardly cares to hear your argument against the notice.

The automated system you're talking about is called Content ID and is "not DMCA".

DMCA is a legal statute, which is where copyright holders expressly file a notice to YouTube that they're hosting infringing content, and getting one as an uploader will lead to an account strike.

Content ID is completely separate from any legal mandate (besides the settlement Google made with Viacom). In fact, it's even separate from Copyright entirely; YouTube is under no obligation for Content ID to operate only on copyrighted content. If YouTube was willing, they could take a paycheck from Disney to keep Steamboat Willie stuff protected by Content ID, since it's their own system and their own rules.

For this reason, the process for Content ID appeals is tougher - it's in YouTube's best interest to keep videos claimed, particularly if they need to appease rights holders[0] who hold all the cards for whether or not their music is available on YouTube. If it were a DMCA notice instead, you would have a legal right to counter the DMCA notice (under fair use) and get your video reinstated, unless the rights holder tells YT they have filed suit against you.

0: https://www.bbc.com/news/business-42420218


>DMCA is a legal statute, which is where copyright holders expressly file a notice to YouTube that they're hosting infringing content, and getting one as an uploader will lead to an account strike.

I am well aware of the differences between the two, and this is exactly what I was referring to. There absolutely have been instances of bogus DMCA takedown notices negatively impacting users and being a nightmare for them to resolve.


Then I don't see how it's fair to criticize YouTube for this. The DMCA process makes it so that YouTube doesn't have much choice in the matter besides suing the fraudster for a false DMCA claim[0]. If they make a determination that "we think this DMCA is false so we'll bring your video back up" they'd take on a lot of risk in assuming legal liability for that video if it is actually infringing. They have done this before for obvious abuse, though[1].

0: https://www.theverge.com/2019/10/15/20915688/youtube-copyrig...

1: https://old.reddit.com/r/youtube/comments/6em1gw/youtube_is_...


>Then I don't see how it's fair to criticize YouTube for this.

I never criticized YT for this (stating that Google hardly cares to listen is just a fact, used as an example - they are notorious for not putting much effort into responding to customer service complaints), I just used it as an example of one platform on which this happens. The entire point of my post was to highlight the fact that bots scanning for copyright infringement don't typically care if your voice happens to sound like, say, Gilbert Gottfried. All the bot hears is Gottfried's voice, so bam, takedown notice sent.

The platform doesn't matter.


Doesn't need to be "AI" despite the bill's name (link to page that has link to draft of the bill): http://salazar.house.gov/media/press-releases/salazar-introd...

(a) DEFINITIONS.-In this Act:

[...]

(2) The term "digital depiction" means a replica, imitation, or approximation of the likeness of an individual that is created or altered in whole or in part using digital technology.

[...]

(4) The term "digital voice replica" means an audio rendering that is created in whole or in part using digital technology and is fixed in a sound recording or audiovisual work which includes replications, imitations, or approximations of an individual that the individual did not actually perform.

[...]

(7) The term "digital technology" means a technology or device now known or hereafter created such as computer software, artificial intelligence, machine learning, quantum computing, or other similar technologies or devices.


And for only $700 / hour to hire an expert witness on AI (on top of hundreds of thousands of dollars in legal bills to get to that point), you can prove it in court!


How do you prove that outside of a courtroom?


I think it would be least likely to outlaw political cartoons or parodies as those can be forms of criticism of the government, one of our most sacred rights. We don't have freedom of speech so that we can talk about the weather.

With that said, isn't it already against the law to use someone's likeness, especially for commercial/political gain?


They reference the Tom Hanks dental thing, but that looked to me like it was obviously not a deepfake.

This law seems vaguely OK. If the general premise is no AI replication of individual people's physical qualities and voice without permission, then sure, whatever.

It won’t make much of a difference though. People’s voices and appearances aren’t as special as we think.


The First Amendment does not protect:

- false statements (fraud, libel, defamation).

- "fighting words"

- true threats

- obscenity under a community standards test

- child sexual abuse imagery (obscenity per se)

- speech that incites imminent lawless action

There's always room to debate what AI has to contribute. But it is already illegal to defraud people with forged documents. Forged videos are still forged documents.


That's one of the main points of using AI for censorship: they'll be able to ban all that (parodies, political cartoons and more) while "blaming" AI, going by the: "it was the algorithm/computer code, not us!".

In fact the whole process of outsourcing the implementation of censorship to private entities (Facebook, Alphabet etc) is part of that same shtick, and it's pretty effective in the short to medium term. Of course, in the long term it only helps breed more discontent with the system, discontent which this time around also involves private entities (just notice how trust in the privately-owned mainstream media has cratered almost everywhere in the Western world in the last ten or so years), but I guess the powers that be stopped caring about the long-term health of the system a long time ago.


It is interesting: if you look at the talks at Davos this time, they are talking about trying to put that trust back, and only 'one tech mogul is not onboard'. The people with money want the gatekeepers back. The tech industry is glad to help.


I like how the law requires you to have signed an agreement with a person before you're allowed to make a video with their likeness, but the law also applies to people who are dead and incapable of signing an agreement at all.

No more putting dead Star Wars actors into new movies, I guess.


This reminds me of ReasonTV's playlist on Unintended Consequences on YouTube.


> It states that "every individual has a property right in their own likeness and voice," and people can only use someone's "digital depiction or digital voice replica" in a "manner affecting interstate or foreign commerce" if the individual agrees (in writing) to said use.

Is there anything that isn't "interstate commerce"?

Remember, while the court was clutching their pearls over Roe v Wade and other cases not too long ago, they allowed Wickard v Filburn to stand, in which it was decided that someone growing food on their own land to feed to their own animals was "interstate commerce".

My understanding is that it's important for everything to be interstate commerce because the federal government can regulate interstate commerce, and so if we define interstate commerce to include everything, then the federal government can regulate everything.


>>they allowed Wickard v Filburn to stand

Has there been a case to challenge this recently?

The current court is looking at a lot of old precedents and is poised to roll back federal power in some big ways. Before them right now is a case on Chevron deference.

I am not aware of any case that has been before the current court that would challenge Wickard or any of the other terrible rulings that followed Wickard. So I am not sure the statement that "they allowed Wickard v Filburn to stand" is accurate.


> It states that "every individual has a property right in their own likeness and voice," and people can only use someone's "digital depiction or digital voice replica" in a "manner affecting interstate or foreign commerce" if the individual agrees (in writing) to said use.

Really?

So it's illegal to record my likeness with your security camera and it's illegal to use my likeness to stop me from stealing because that affects the price of goods in interstate commerce?

I'm tempted to demand cops turn off their body cameras the day this goes into effect. Same for all government surveillance.

If I can find an honest judge, I'll freaking own the government.

Who elected these idiots? Oh, right, never mind. I thought they counted my vote for a minute but obviously not.


> I'm tempted to demand cops turn off their body cameras

Police don’t like wearing body cameras because it acts as surveillance of their actions. A citizen can FOIA request body cam recordings and, even though it happens on occasion, it looks bad when the police are not able to procure recordings for an encounter with civilians.

I’ve seen videos of officers seeming to momentarily turn off their cameras as they have conversations amongst each other. They don’t like the surveillance. Don’t demand that the police remove one of the best ways we have of holding their actions to account.

----------------

I also suspect the analysis is off. It appears to regulate only those recordings which “[affect] interstate or foreign commerce“, which would not likely include police activity. That said, “first amendment auditors” may be regulated away with this, seemingly in contradiction to said amendment.


The wildly over-broad part of this is that anyone making a "personalized cloning service" available is liable under this law.

That's nuts because there are numerous legitimate uses for such a service.

This law would cover anything capable of making deepfakes of a person, which would kill off generative AIs in general.


After considerable thought, far before this article landed, I have to agree. I have no problem with AI inventing new things. But purporting to be something else - the potential for confusion is far too great. A human parody that contains obvious clues that even those unfamiliar with the subject would recognize as not being the subject - that's fine. But a near-perfect copy of the subject of satire, that is too far. Way too far. The potential for abuse is far too great.

Obviously, banning such usage will not prevent outlaws and rogue states from doing this very thing. The goal of the banning is to send a clear signal that in our society, we do not accept such things. Much like chemical weapons. It will also help contain the spread of such things in our society even if our enemies try to send it to us, like child pornography.


What’s your plan to enforce it? Because passing a law doesn’t magically eliminate a problem. Oftentimes laws increase the behavior they are trying to prevent, like drug laws.


How about something like "Sassy Justice", where the image is almost identical, the voice is similar, but where a different name is used and no claim is made to be Donald Trump?


Well, it was fun while it lasted. Comedy, I mean. I guess... it's OK, we'll all be 'safer' probably. At least we've still got ... clean air.

But you don't need AI to do a great Trump impression. I've seen Tyler Fischer, Shane Gillis and 'Godfrey' do good impressions. They really nail the voice. It's funny how impression-able Trump's voice is. His voice is its own character, as are his hand gestures.

I guess entertainers need to evolve to have some characteristic.


they should just blanket ban all impersonations, while they're at it. I feel like thought police might be closer than we think at this point.


May as well ban all acting while they are at it. Wouldn't want someone to confuse an actor in a biopic for the real thing, would we?


Dude, they already have them in the UK. And Canada. Certain ideas and expressions of those ideas are outlawed based on how they spread: "misinformation" and "hate".

Misinformation (ie, not knowing what you're talking about, but speculating and trying on theories), or being in the unknown, is the foundation of frontier science, which is basically the invention of ignorance. So, you're no longer allowed to think, and if you can't think, you have to be told what the truth is...So...religion I guess?

Hate (ie, a wide spectrum of behaviors that is easy to allege but challenging to objectively verify and often deployed without nuance, self-awareness or impartiality) can just be subjective confirmation of existing biases. Hate could just be I don't like this person, then the accuser is like, "That's hate!", but then it's like, "Isn't it hate to accuse of that? Aren't people allowed their preferences?"

These dual problems of misinfo and hate (the new heresy and blasphemy, I suppose, administrative and legalistic tools which have been deployed in the past in the service of a ruling class to suppress or control a subjugated class), are exacerbated by the pseudo-collectivist elevation of one's "group identity" to a place of primacy, where the state seems to not only suggest that group identity is more important than individual identity, but it seeks to take responsibility for the interactions at group identity level, thereby amplifying the sense of victimhood, and lack of personal responsibility, that already underpins the psychological vulnerabilities that compels people into the larger cultural conflict traps already in play.

Anyway, that's my sort of literary theory, cultural crit attempt to describe it, hahaha! :)


Sorry but you will never regulate memes.

The actually harmful AI deepfakes, like fake Elon and Mr. Beast scamming old people with fake crypto, and deepfake porn, are already illegal and still impossible to regulate.

Are you going to fine Google every time a throwaway account uploads to YouTube?

Let's be realistic here, this law will only be used by celebrities hiring lawyers to attack comical parody.


Just make companies criminally liable for false advertising that they allow.

If a YouTube video promotes a fake Elon crypto? No problem.

A paid advertisement points to that? Fine Google for every instance it was displayed.

This would cost too much and make advertising impractical? Great. Win/win.


In practice what that would do is make the platforms even more draconian in how they screw over and abuse the public while giving them the excuse that it isn't the platform's fault, the government is forcing them to be evil.


In practice, crypto scams, Mr. Beast giveaway scams, and many other types of scams are freely displayed on the front page of YouTube, in the #1 spot of Recommended videos. This is what Google is allowing on their platforms right now.

Any excuses that originate from a stricter duty to protect against obvious scams should be really easy to see through. Personally I don't think they should be asked to act on ambiguous borderline cases where something could be interpreted as a scam by someone, but allowing "send 1 BTC, get 2 back" to display on the front page of YouTube is inexcusable.

Either way, they're going to continue doing whatever they want and it's not like they're going to face consequences in our lifetime.


I agree that those scams being openly on YouTube is an issue; my concern is that we've seen how they blindly abuse the public while using DMCA safe-harbor retention as an excuse, so I don't think plainly expecting the company to suppress things will do the job. All of this social media related legislation needs to be rebuilt from the ground up by people who are actually from a current generation.

Since fighting false/abusive claims requires more effort than most people/creators can afford and Google only cares about retaining its safe harbor protection, DMCA is heavily abused by both malicious individuals and large companies to suppress anything they dislike. Technically there's a counter-claim system, but IIRC Google just hands over your personal info to the other party to fight the claim in court, which obviously puts anyone hit with a false claim in a tough spot (fight the claim and dox yourself to a malicious actor, or don't fight the claim and get copyright strikes/lose revenue).

Sure, everyone can see that Google is just making an excuse to not have to spend more on checking the validity of reports, but ultimately that doesn't really change the fact that they continue to get away with it anyway.


> Are you going to fine Google every time a throwaway account uploads to YouTube?

That kind of depends on Google's modus operandi, doesn't it? Like telecom networks not doing enough to combat robocalls.

If Google wants to reap the profit from allowing people to upload content, they should also perhaps share a part of the responsibility for anything that happens from their own (in)actions?

If Google has a habit of taking down malicious content, then less responsibility perhaps?

If Google attempts to put into place safeguards versus repeat offenders, then less responsibility again perhaps?

Like if a person is speeding and causes an accident, the government could in theory share part of the responsibility if the road they designed and built did not have adequate best-practice safeguards against head-on collisions. A news network could also face legal issues for lies said on its programmes by guests and hosts regarding voting machines.


Google takes every step you mentioned and has departments working on it daily.

It's still just as common as it was 2 years ago.


Then maybe one monolithic video platform that serves the streaming needs of an audience the size of YouTube's isn't a good business plan.

For decades the mantra of Silicon Valley has been "eat the world" on a given service you provide, alright, fine. Then provide it. And if you can't provide a service to billions of people without the quality of said service going into the shitter, then the suggestion I have for you is to not eat the world.


Google makes a token effort based on the current risk v reward calculation.

Increase the risks and they might actually become effective.


> still impossible to regulate

That's not actually true, as the problem with most of these deepfakes video ads is the fact that they are auto-approved on various platforms with no human oversight.


So what you are saying is that it is impossible to regulate? After all, if the regulation was successful then the ads would be approved only after careful human oversight. But if regulation is impossible then the actors can do essentially whatever they please, including allowing ads to be auto-approved.


I responded to someone that said

> still impossible to regulate

with

> That's not actually true

How did you conclude that I'm saying it is impossible to regulate? Are you just being overly pedantic?


Well, you said it wasn't true, but then went on to explain how it has held true thus far. If you actually see it as being possible, why not provide evidence in support of the possibility, not evidence to the contrary?

If it is possible, what is holding us back? It was asserted that the regulation is already in place. That is not the stumbling block. It is getting people to actually enforce the regulation that is the hindrance.

Enforcement requires using up finite resources. Enforcement is not happening because we don't have the will to use those resources for that purpose. We have other things we deem more important. What is going to change the will? Without a change in will, it is impossible.


> So what you are saying is that it is impossible to regulate?

No, it would be difficult to regulate, at scale. Just like most things are difficult at scale and history is awash with corporate whining about how doing anything at all to mitigate the harm they do will annihilate them, capitalism and the concept of the free market.

"You can't possibly expect us to get all the children out of the coal mines!"

"How can we function as a business if we have to pay negroes the same wages as whites?"

"The requirement that we have to store pesticides away from employee break rooms is government overreach!"


> No, it would be difficult to regulate, at scale.

The earlier comment pointed out that the regulation already exists. It is just not being enforced – said to be because it is impossible to regulate. Perhaps you mean it is difficult to regulate? Which is the exact same thing the earlier commenter said.

I mean, sure, in some hypothetical world where there are no constraints one could conceive of how it would be possible to regulate, but in the real world where people have other, competing concerns and only so much time in the day, is there actually the will to regulate it? Without the will then it is, indeed, impossible.


I'm not sure this even makes sense on a more abstract level - AI "deep fake" bad, but making the same content with a couple of motionFX guys using Maya OK?

Where's the distinction and in what way is AI even relevant here, other than the buzzwordy news release?


It's not that making content manually is "ok", it just requires so much time/skill as not to be a concern for legislators. The high barrier to entry is already limiting enough in most cases.

It's like I can't walk into a store here and buy gunpowder but I could visit three different stores for the ingredients and mix them... If I was extremely dedicated.


> Are you going to fine Google every time a throwaway account uploads to YouTube?

Sure, why not, fine them $1 (assuming the content is illegal).

If they amplify the video's reach so a million people see it, fine them $1 million instead.


And how do you do it? User reports? And how do you verify those user reports? At the end of the day you need an automated system for scale that probably won't be better than what's in place already.


A regulator employs a handful of staff who estimate the number of offences and hands out the fines. If Google disagrees, they can have their day in court instead, where a judge rules on a representative selection of the alleged offences.

The regulator gives everyone a break on the first, say, $100 million of fines a year, to recognise that some things will fall through the cracks.

The regulator is publicly funded, and the fines also go back to the public purse.

The regulator employs some mix of low-level data analysts who click links all day, technologists who build automated review systems, and bureaucrats who update policy documents and talk to politicians and companies. A revolving-door system develops between the regulator and the content moderation industry. The regulator is a generation or two behind tech giants in the sophistication of its systems, but that doesn't matter - it only needs to catch the most egregious offenders.


That would force them to push more informative stuff at the expense of fake inflammatory crap that attracts more views. I don't see a change like that coming anytime soon, but hope never dies.


>Are you going to fine Google every time a throwaway account uploads to YouTube?

Yes, and Google could use the same AI technology to automatically remove such material at a massive scale. Of course, tons of legitimate material would get wiped out by that system too, but Google doesn't care about that, and neither does the US government.


>Sorry but you will never regulate memes.

Yet this is likely part of the background reasoning for laws such as this. Not only is this bill entirely unworkable, and against how things have been done with likenesses for ages, it is a full-on assault on the First Amendment, going after the most potent, easily-spreadable political speech out there right now.


Yes. A simple fine for each verified instance prominently displaying a clearly fraudulent advertisement seems like a reasonable solution. Maybe instead of spending engineering cycles thinking of how to jam a larger volume of ads down our throats, Google can develop better ways to detect and reject scams. Win/Win.


>> Are you going to fine Google every time a throwaway account uploads to YouTube?

Sure. Social media companies have been denying responsibility for years, but that doesn't have to be the case. The core problem is anonymity, or inability to verify a source. That and automation that users can just lie to.


So the main problems are the users who provide wrong or loose identification.

Why should Google care? Maybe go after the company who sold them a phone/PC? Or after their parents who got them born in the first place...


> Why should Google care?

Well, if "we" want to prevent a scenario where Alice provides wrong/loose identification to Bob and Bob accepts it, there is only really two realistic ways to do so: force Alice to not do this, or force Bob to check (maybe with "our" aid). Or do you propose having Victor who would regularly come by Bob and check all of the new identifications provided from the all of Alices in the meantime?


I think yellowsir actually has a point, and I'm the one they responded to. For search I think Google should care, as their users will eventually leave if they can't find reliable information. YouTube on the other hand... I don't see any business reason for them to care about content quality or authenticity - they make money from every video people watch.

To my first point, I don't know how any Google competitor is going to provide more reliable information until the root problem of authenticating sources is solved. So maybe there is really no reason for them to care after all.


I've reported fake Elon Musk crypto scam streams on accounts that were taken over by scammers; each time it took YouTube over 24 hours to stop the scammers from streaming, let alone restore the channel to the rightful owner.

Why shouldn't they be liable for this?


I mean we have DMCA that requires mass enforcement of copyright, so the "fine YouTube every time some throwaway account uploads" ship already sailed.

Honestly, in this age of bulk uploaded disinfo, I'm kinda disappointed that only copyright gets that kind of extreme protection... but I suppose it's much easier to do copyright since IDing an exact copy of a given source is infinitely simpler than detecting eg defamation.
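
To make that asymmetry concrete, here's a minimal sketch of exact-copy detection (the hash registry is hypothetical, and real systems like Content ID use perceptual fingerprints rather than exact hashes, but the point stands):

    import hashlib

    # Hypothetical registry of SHA-256 hashes of known copyrighted files.
    KNOWN_COPYRIGHTED_HASHES = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def is_exact_copy(path: str) -> bool:
        # Exact-copy detection reduces to a set lookup: cheap and unambiguous.
        return sha256_of(path) in KNOWN_COPYRIGHTED_HASHES

    # Defamation has no equivalent check: it depends on meaning, falsity,
    # and harm in context, none of which a lookup can decide.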


DMCA gets hosts out of being fined for user uploads or even having to think much about it. YouTube has their own system that goes well beyond DMCA, probably because they want to encourage the copyright owner to agree to accept monetization of the infringing material instead of having it removed.

For a host that just sticks to DMCA it is quite simple.

1. A claimant alleges that a user is infringing their copyright.

2. The host notifies the alleged infringer and removes the content.

3. If the alleged infringer disagrees that the content is infringing they notify the host.

4. The host notifies claimant that the user disputes the claim, and gives the claimant the user's contact information.

5. If the host does not receive within a couple weeks proof from the claimant that the claimant has filed a copyright infringement lawsuit against the user the host puts the content back up.
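
In code terms, that flow is a small state machine. A rough sketch (hypothetical names; the statutory window is 10-14 business days, so treat the constant as an assumption):

    from enum import Enum, auto

    class Status(Enum):
        LIVE = auto()
        TAKEN_DOWN = auto()        # removed after a DMCA notice (steps 1-2)
        COUNTER_NOTICED = auto()   # uploader disputes the claim (steps 3-4)
        RESTORED = auto()          # no lawsuit filed in time (step 5)

    LAWSUIT_WINDOW_DAYS = 14  # assumed; the statute says 10-14 business days

    def on_dmca_notice(status: Status) -> Status:
        # Claimant alleges infringement; host removes the content and
        # notifies the uploader.
        return Status.TAKEN_DOWN if status is Status.LIVE else status

    def on_counter_notice(status: Status) -> Status:
        # Uploader disputes; host forwards the dispute and the uploader's
        # contact information to the claimant.
        return Status.COUNTER_NOTICED if status is Status.TAKEN_DOWN else status

    def on_window_elapsed(status: Status, lawsuit_filed: bool) -> Status:
        # If the claimant shows no proof of a lawsuit within the window,
        # the host restores the content.
        if status is Status.COUNTER_NOTICED and not lawsuit_filed:
            return Status.RESTORED
        return status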


Copyright isn't only trivially definable, it's uncontroversial.

Let's say you decided to police "disinformation" about the current Israel/Gaza conflict. In about 5 seconds you'll be inundated with propagandists trying to define what is "true" and what's "disinformation".


But if you decide to police pornography made using the image of real people against their will it should be a bit easier right?


> trivially definable

People who try to work within fair use rules will disagree on that point.


> Are you going to fine Google every time a throwaway account uploads to YouTube?

Mandating KYC on social media platforms would be enough to stop that, though.


Like the Clipper Chip?


From what I read on Wikipedia, this wasn't supposed to be a KYC-related device so I'm not sure I see the point you're making.

Almost every banking/investment/trading/payment provider out there routinely has a KYC mechanism in place to avoid throwaway accounts being used for money laundering (and this is mandated by regulation), so this is definitely a thing. (I'm not particularly advocating for it, since it would have serious privacy and freedom of speech implications, but claiming this is an impossible problem is a gross oversimplification.)


[flagged]


Oh come on it's not that bad! It's just things which threaten their power


Because every time there's something cool and new, it could threaten the profits of large slow-moving incumbents, so the incumbents use their lobbying power to try to get bureaucrats to neuter or kill the threat.


Are actors on SNL going to be banned next?


Reading between the lines this doesn't seem all that different than California's likeness-protection law.


I don't know if this is in the bill, but all (non-advertisement) parody videos etc should simply be legal on the condition that they display the word "PARODY" in the upper right, in white on black, font size 13, taking up no less than 5% of the video's height (or something).


That would be “compelled speech”, which should not pass Constitutional muster.

I say “should” because the courts have already taken big stinking dumps on all of our supposedly inalienable rights, so who knows how it would actually go.


I’d agree to that as long as politicians have to wear the same label any time they speak about legislating AI, because they’re clearly so lost and out of touch it must be a parody.


> I don't know if this is in the bill, but all (non-advertisement) parody videos etc should simply be legal

FTFY


people usually put disclaimers for that kind of stuff


Let's do that for advertising first...

(Or maybe not at all, think of the screen burn-in)



