Hacker News
European Union allowing the use of deepfakes (lukaszolejnik.com)
56 points by Hard_Space on April 15, 2021 | 35 comments


This is a pleasant surprise. If photo editing is legal (and it obviously should be) then editing the frames of videos should be legal. Modern technology just makes it feasible for individuals to edit a large number of frames.

It's good to democratize what was previously the domain of only institutions and corporations. As for the claims of ownership on derived works from public data, well, the only reason that's even being entertained as a valid legal claim is technological ignorance and the catchy name "deepfakes".


Quantity has a quality all its own. Scale and efficiency matter.

Mass surveillance with a million cameras and face recognition is the same as an operative with a camera, film that gets developed and then processed by an overworked analyst using drawers of physical files. Just a lot faster.


While I'm not sure where I stand on this ruling, there's a flaw in this form of reasoning. Namely, I don't think this is merely (or at least usefully) reducible to 'efficiently editing a series of frames'. The ability to do so has emergent effects, and that should be taken seriously!


There's something in that text that I find worrying:

"... shall disclose that the content has been artificially created or manipulated. This obligation shall not apply where necessary for the purposes of safeguarding public security [and other prevailing public interests] or for the exercise of a legitimate right or freedom of a person and subject to appropriate safeguards for the rights and freedoms of third parties.”.

Does the above mean that the powers can create deepfakes, but given certain conditions, they're not obligated to reveal their nature? There's huge room for bending these rules to disseminate fake news to attack opponents, falsely accuse innocent people of crimes or save others from crimes they have committed. These laws will be abused, that's for certain, because they look exactly as if they were written with abuse in mind.


Those get-outs are pretty standard boilerplate. They're there even when it comes to human rights, as otherwise certain actions by the state would not be permitted (for example, jail).

> There's huge room for bending these rules to disseminate fake news to attack opponents, falsely accuse innocent people of crimes or save others from crimes they have committed

I'm not sure how well that would work in court, which is where you'd go if you think the government in your country has broken the law. Isn't that the process?


"Does the above mean that the powers can create deepfakes, but given certain conditions, they're not obligated to reveal their nature?"

There's a similar, obvious loophole in the statement from the released EU doc:

"users of AI systems who use the same to generate or manipulate image, ..."

Basically limiting the regulatory concern to the 'use of AI' in the process.

So what if 'AI' is not used?

If you create 'fakes' manually - by some non-AI process, by dubbing over a voice - that's cool then?

Because if that statement were used in a legal context, then the notion of 'users of AI' seems fairly explicit.

I've been downvoted into oblivion for the very clear observation: it's nigh impossible to legislate around specific technologies, especially as they are ill-defined.

The issue here is not 'AI' - it's the 'misrepresentation' of individuals, and the legal precedent has nothing whatsoever to do with the means of reproduction.

If you 'fake' a representation of a celeb or public figure, and the EU wants to make that illegal for whatever reason, then that's fine, but it has nothing to do with AI.

The legislators are misinterpreting the nature of this new branch of computing.


When has that not been the case? There is no obligation to label your centurion helmet or novelty photoshop of the president walking down the White House lawn without pants as fake.

The actual misdeeds themselves, like false accusations, are the crimes.


Great!

1. What would be the point of a deepfake prohibition? The tech is here and it can be useful. Prohibition only leads to further issues legislating against actual bad actors.

2. People like images/videos, people mostly trust images/videos, but for that same reason they're commonly the best tools for deception. Here's hoping that deepfakes will increase people's scrutiny and awareness.


This is pretty concerning, if not scary:

> “users of AI systems who use the same to generate or manipulate image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a reasonable person to be authentic [or truthful], shall disclose that the content has been artificially created or manipulated. This obligation shall not apply where necessary for the purposes of safeguarding public security [and other prevailing public interests] or for the exercise of a legitimate right or freedom of a person and subject to appropriate safeguards for the rights and freedoms of third parties.”.

So, if it's "for the public good", it's all good for the government and others to fake all they want without disclosing that manipulation has taken place.


Weird exceptions indeed. I don't know in which cases they would apply.

To me, it just sounds like this is a "low priority" law.

For example, if you are already allowed to use deception, say in the context of a criminal investigation, or if a citizen needs to protect himself, then the new law doesn't change that fact.


It's incredible that "safeguarding public security" is left unqualified. There should be a law that specifies exactly how such things are evaluated, or it should be forbidden to put such loopholes into law.

My respect for law as a field of study is starting to be around the same as my respect for finance as a legitimate business.

The people who actually study how to write law are mathematicians (including, to varying extents, the applied mathematicians, who range from programmers through engineers all the way to the probability theorists, a.k.a. "scientists").

One day I hope we can clear up the misunderstanding in our society about the extent of our reliance on mathematics and stop acting as if "we cannot engineer solutions to social problems" - these glorified scammers that write the law and run the economy are the cancer that is killing everything.


Title should say "European Union not prohibiting deepfakes"


It depends on the text of the regulation; it may be written in a way that bans all uses not explicitly allowed, which I believe is common for EU regulations.


Why shouldn't it? Deepfake technology is just a tool that has both legitimate and illegitimate uses. It can be used with the consent of the displayed person, or e.g. in political satire.


Funny you should mention political satire. Is it protected speech to show a would-be mob a deepfake video of a political rival in a secret meeting, saying that they enjoy causing the weak to suffer?

This is just one example. People are very easily manipulated, and these days tribes are operating in a post-fact manner. The prospect of readily available deepfakes makes this problem much worse.


Honestly, I don't think the problem can get worse. Or at least, it won't get worse because of more information. The capacity of the human mind for rationalization is already essentially infinite, people are manipulated because they want to be manipulated.

It's pretty much to be expected: the world is really quite complicated, and without a drive for consensus, independent thought would lead you astray a hundred times before leading you to water.


That's covered by the proposed regulations (it wouldn't be allowed).

It's no use banning "deepfake technology" - you can't outlaw maths.


There is a phenomenon of minor celebrities making a porn tape to cash in on their 15 minutes of fame, but sometimes they hire an actor to stand in for them, and it is clearly a different person. I wonder if one place where the displayed person will consent is in using deepfake technology to make the actor better resemble the celebrity.


That's fine, people will probably just gain immunity to it, like when they realized that a train in a movie is not gonna run over them, or when they realized that the king of Sweden isn't actually wearing silly hats.


I highly doubt that's an apt comparison.

But anyway, even today people have a hard time distinguishing an unprocessed photo from one that was "enhanced" for social media, so imagine how hard it will be for them to tell whether video material is real or generated; hell, in time probably even experts will have a hard time telling the difference.

I can hardly see any benefits in deepfake technology - the threats to our everyday life, our society, and how we might perceive the truth in the future are far more obvious.


If anything, this is quite welcome, as it allows for plausible deniability if one's private nudes are leaked, or in other such matters.


Agreed on this point. Revenge porn is devastating in that there is no way to solve the problem. Deniability will give a little something back to victims.


Just like a huge percentage of the population doesn't believe that the Covid pandemic is a hoax or that masks are useless against it?

Oh, the eternal optimism...


The President of France and his entire government said during the whole first lockdown that masks were useless against Covid. People would behave a teeny bit better if they didn't know they were being blatantly lied to, right to their faces, as a common way of governing.


I wonder how they define "Artificial Intelligence". Is that the same term being thrown around by Silicon Valley grifters (hello HN readers!)? Can I just claim my "video manipulation software" is just a lot of mathematics, and it's not "intelligent"? Are they going to add the Turing test into legislation?

It would be funny if in the future some stupid software could be classified as "intelligent" because some stupid bureaucrats made a dumb definition...


Our brains are probably also only a lot of mathematics.

The border between intelligence and non-intelligence is either a very wide grey area with bricks on one side and Einstein on the other, or it's a well-defined line and much closer to the bricks than to us.


> shall disclose that the content has been artificially created or manipulated.

That's it. That's the condition. (With some exceptions, but anyway.)

It's a non-issue pretty much.


The exceptions are odd and in my view indicate that some government agencies are already using deepfakes for questionable purposes or at least want to have the option to do so.

Other than that, I agree. Having to clearly label deepfakes is enough, and makes the use of deepfakes for deception illegal (except for the ominous exceptions).


So all movies with CGI now need a disclaimer?


No, because of the "unless this is obvious from the circumstances and the context of use" exception, as it says in TFA.

But to a point, photoshopped images also require a disclaimer in some countries (e.g. France)


They already have it, don't they? In the form of "Special Effects by XYZ" in the credits, I believe.


Can't stop it, and there are some reasonable use cases. So why not prohibit misuse through other legislation? Defamation laws already exist.


It's weird to me that they even considered banning it. How would you enforce that?


> How would you enforce that?

The same way every law is enforced - by prosecuting those who break it. Theft or murder being unlawful doesn't stop them from happening either, yet I think we can still agree that it's indeed very useful to have them be crimes, no?


They need to completely reconsider their approach because regulating 'AI' really doesn't make any sense, and there's hardly a way to even define in legal terms what 'AI' is.

Instead, they should focus on the applied outcomes that may be affected, signal their intent to monitor events in those areas, and regulate only if necessary.

'Deep fakes' illustrate this quite well. If someone manually photoshops a 'fake', would that be legal?

What if they used tools that leveraged AI?

Does it have to be 'automatically generated' to be AI?

If there were to be legislation, it would be around 'misrepresentation' in this case, and it should meaningfully be applied to the outcome, not the technology.

They mention surveillance and other scenarios; again, the 'means' really doesn't matter that much - what matters is privacy, where and when people are monitored, etc.

We use considerable amounts of real-world data to build models for a variety of things - whether it's spreadsheets, algorithms, Machine Learning or Neural Networks, or more likely a variety of those things together... it doesn't matter. What matters is what people are doing with it and how citizens are affected.



