
It is also so devastating that you can no longer believe anything published online, even if it looks legit.

This is going to be one more dimension to misinformation on the net.

Now you will see videos of politicians saying something and even then you cannot be sure whether this is actual video or a fake.



Would be good if content publishers did something like digitally signing their content along with embedded metadata, so if e.g. you see a video circulating you can see that the BBC attested that it was released by them & that its original air date was such & such.

Or if you see a quote claiming to be from Emmanuel Macron or Boris Johnson, you can see it was released with a digital signature from a Guardian journalist & they add whatever date/time/location details they want to validate the information.

If instead I release something (on Twitter or wherever) that I say is a screen-capture of my TV, _I_ add the metadata (originally seen on BBC on 27 Jul, 1:55pm) and sign and then you know that you only trust it as much as you trust _my_ reputation rather than the BBC.

Wouldn't solve the problem entirely but it might create a bit of an audit trail for stuff and encourage people not to trust unvetted material.
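
To make that concrete, here's a minimal sketch of what such an attestation could look like, using the Python cryptography package's Ed25519 API. The field names, metadata layout, and key distribution story are all made up for illustration; getting the publisher's public key to verifiers (and getting user agents to actually check it) is the hard part.

    # Hypothetical attestation scheme: bind a hash of the media bytes to
    # the claimed metadata, and sign both with the publisher's Ed25519 key.
    # Assumes the public key is distributed out of band (e.g. via the
    # publisher's website or DNS).
    import hashlib
    import json

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    def attest(private_key, media: bytes, metadata: dict) -> dict:
        payload = json.dumps(
            {"sha256": hashlib.sha256(media).hexdigest(), "meta": metadata},
            sort_keys=True,
        )
        return {"payload": payload,
                "signature": private_key.sign(payload.encode()).hex()}

    def verify(public_key, media: bytes, attestation: dict) -> dict:
        # Raises InvalidSignature if payload or signature were tampered with.
        public_key.verify(bytes.fromhex(attestation["signature"]),
                          attestation["payload"].encode())
        claimed = json.loads(attestation["payload"])
        if claimed["sha256"] != hashlib.sha256(media).hexdigest():
            raise ValueError("media bytes do not match the signed hash")
        return claimed["meta"]

    key = Ed25519PrivateKey.generate()
    video = b"...video bytes..."
    att = attest(key, video, {"publisher": "bbc.co.uk",
                              "aired": "2021-07-27T13:55:00Z"})
    print(verify(key.public_key(), video, att))

Note that this only proves who is making the claim, not that the claim is true, which is exactly the reputation point above.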


Metadata that would swiftly be destroyed by the first Twitter/Facebook user reposting a screenshot/screen recording from their phone.


Exactly, so if I upload something and say "Look what I just recorded off the BBC!" you _shouldn't_ believe me if creating a deep-fake becomes as easy as recording the real thing.

In that scenario, you'd want people to say "but wait, there's no signature on this, it could be fake" and then only trust the video as much as you trust the source (not the claimed source).


Lack of the right kind of metadata would be the first tell.


While still maintaining a verifiable origin


The problem is how you make that signature or digital artifact accessible to the general public.

Any visual artifact can be mocked up, so we end up with the same problem as clickbait titles, where the conclusion one arrives at from just a title can be disproved, but it doesn't prevent the false information from going viral.

What good is it to say "that video you saw was fake!" after the video has spread around and done the damage already?

It's hard to come up with a solution to this problem just because the solution has to preempt the problem. A cryptographic visual artifact _could_ work, but it's still likely that misinformation via deep-fakes will cause problems for society at large.


Yeah agreed — making security & authenticity understandable to a layperson is always going to be tricky.

Websites like Twitter adopting a "blue tick" for a validated profile on their platform, though, is a model people seem to get. If we had some equivalent of a "blue tick" at the user-agent level, e.g. a way for your browser to take a signature and display it in a standardised, human way to say "this video is signed by bbc.co.uk", it could work. (With a similar model for user-agents elsewhere, e.g. you'd probably need adoption in apps like WhatsApp to get traction.)

The other side of it (like privacy discussions) is how much the average person will care — tabloid journalism often skirts the borders of what it can get away with at the moment & it nominally has a duty currently to only write factual information. If Fox News or the Daily Mail release videos and put their own signature to them, then you arguably lend them legitimacy ("it's on the news so it must be true. It's signed by them and all!").


Blue check twitter people are sus, don't trust them.

That's the mood in the algorithm hole twitter put me into.

So, make what you will of that.


As most of the viewing is done on digital screens, the player itself could show when media is signed, much like the lock icon in web browsers.


And what's really the difference between a fake video and a compressed/lower-res video, as far as the signing algorithm knows?


I wonder if there's a startup opportunity there. In the coming years, this problem of deepfakes and lack of trust in media is only going to get worse. Crypto could mitigate this. Maybe a hardware company that sells very high-end cameras for media outlets, which digitally sign all recorded media and add it to a blockchain?


Already done for (high-end) photo cameras.

But why store the signature in a blockchain? If you do not trust the certificates in the first place, the storage location won't make any difference. And if you trust the certificates, the storage location is completely irrelevant, because the certificates alone provide the trust.


> But why store the signature in a blockchain?

For the same reason as certificate transparency logs; you want to avoid trusting something that has a history of certifying false statements. You also need to handle throwaways, so it's definitely not sufficient (and might turn out to not be necessary once a complete solution is found), but it does seem useful.
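
To make the analogy concrete, here's a toy sketch of the tamper-evident, append-only structure that transparency logs rely on. Real CT logs use Merkle trees so clients can get compact inclusion and consistency proofs; this plain hash chain is just the minimal version of the idea, with hypothetical names throughout.

    # Toy append-only log: each head hashes the previous head plus the new
    # entry, so rewriting any past entry changes every later head. Anyone
    # holding an old published head can detect tampering.
    import hashlib

    class TransparencyLog:
        def __init__(self):
            self.entries = []
            self.head = b"\x00" * 32

        def append(self, attestation: bytes) -> str:
            self.head = hashlib.sha256(self.head + attestation).digest()
            self.entries.append((attestation, self.head))
            return self.head.hex()  # publish this head widely

        def audit(self) -> bool:
            # Replay the chain and confirm every recorded head matches.
            head = b"\x00" * 32
            for attestation, recorded in self.entries:
                head = hashlib.sha256(head + attestation).digest()
                if head != recorded:
                    return False
            return True

The trust shifts from the certifier to the log's published heads: a certifier can still sign false statements, but it can't quietly unsign them later.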


Well, my thinking was that you would want to store the data cryptographically signed, on a blockchain, for the same reasons (more or less) that NFTs exist on a blockchain. Predominantly, the public ledger of ownership seems like an important aspect of digital content. Is it necessary for trust? Not at all, but it certainly doesn't hurt it?

Disclaimer: I am an armchair crypto fan. Not an authority.


Then, when someone videos an atrocity, they need to choose between publishing publicly (risking retribution) or publishing anonymously (if they even know how) and risk being disbelieved.


Well, they could contact a news outlet on condition of anonymity & the news outlet satisfies itself as to the validity of the footage.

Similar to how anonymous 'tip-off' stories with protected sources work in general at the moment — the media outlet puts its own reputation on the line on the basis of the source & we trust (to a certain degree) reputable news outlets to validate & vet their sources correctly. This is true for stuff that's easily forgeable at the moment, e.g. a whistle-blower releasing documents.


They could use ring signatures (like what Monero and others use), where the signature only validates that it came from one of several possible private keys.
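
For the curious, below is a toy sketch of an AOS-style ring signature over a Schnorr group, the general construction behind schemes like Monero's (Monero itself uses linkable ring signatures over an elliptic curve). The group parameters here are tiny, illustration-only numbers; a real system would use a standard curve and a vetted library. The point is that ring_verify passes without revealing which of the ring members produced the signature.

    # Toy AOS ring signature. p = 2q + 1 is a (tiny!) safe prime and g
    # generates the order-q subgroup. Do not use these parameters for
    # anything real.
    import hashlib
    import secrets

    p, q, g = 2039, 1019, 4

    def H(*parts) -> int:
        data = "|".join(str(x) for x in parts).encode()
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def keygen():
        x = secrets.randbelow(q - 1) + 1
        return x, pow(g, x, p)          # (private, public)

    def ring_sign(msg, ring, k, x_k):
        # Signer holds the private key for ring[k]; every other member's
        # response is simulated with a random value.
        n = len(ring)
        c, s = [0] * n, [0] * n
        u = secrets.randbelow(q - 1) + 1
        c[(k + 1) % n] = H(msg, *ring, pow(g, u, p))
        i = (k + 1) % n
        while i != k:
            s[i] = secrets.randbelow(q)
            c[(i + 1) % n] = H(msg, *ring,
                               pow(g, s[i], p) * pow(ring[i], c[i], p) % p)
            i = (i + 1) % n
        s[k] = (u - x_k * c[k]) % q     # close the ring with the real key
        return c[0], s

    def ring_verify(msg, ring, sig):
        c0, s = sig
        c = c0
        for i in range(len(ring)):
            c = H(msg, *ring, pow(g, s[i], p) * pow(ring[i], c, p) % p)
        return c == c0                  # the challenges must cycle back

    # One of three possible sources signs; verifiers can't tell which.
    keys = [keygen() for _ in range(3)]
    ring = [y for _, y in keys]
    sig = ring_sign("sha256-of-footage", ring, 1, keys[1][0])
    assert ring_verify("sha256-of-footage", ring, sig)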


> Would be good if content publishers did something like digitally signing their content along with embedded metadata

NFT for news!!


This feels like it has been true since forever, just with other media. A picture with a made up quote seems exactly as damaging. Good journalists will continue vetting sources and unscrupulous TV personalities will continue showing whatever fits their narrative without vetting.


It has been true only to a certain level. Photos can be faked and videos can be mislabeled, but as long as it happens in small enough numbers it is possible to have people point it out and make a fuss about it.

This changes when individual people with no resources at all can make convincing fakes and wield them as a weapon to sow disinformation, which then gets picked up by "major" media; at that point all information on the net becomes pretty useless.


Again, this was always true, we relied on news agencies and such to be gatekeepers of what is true.

Sometimes a random video pops up and people believe it shows X, and then it propagates, but it's something completely different (e.g. people beating up immigrants on the streets in northern Italy -> traditional Krampus celebrations; Juncker drunk at some event -> Juncker suffers from lumbago; Berlusconi mimicking a sexual act on some woman -> it was a comedian's skit; Britney Spears sex tape -> it's a random pornstar ...)

People will learn to be doubtful of random internet videos just as they have learned to be doubtful of random internet articles.

Or not, since they haven't yet, but it's not a qualitative change.


The majority of Covid disinformation was spread by 12 people with limited resources. They didn't need deepfakes. Deepfakes add nothing of substance to the liar's toolkit.

It's easy to lie to people so long as you're saying things that validate their shitty emotions. Conversely, it's extremely hard to tell people the truth when it goes against their shitty emotions.


Tangent, but:

Emotions can't be shitty.

You can miscalibrate your emotional responses to situations, much as you can sear your conscience.

But the emotions themselves - the full range of them - are valid human feelings, from fury to transcendent joy.


The effect that emotions have on people's behavior can be shitty. Worse, emotions can be self-reinforcing, such that they cause people to seek out ways to deepen that emotional state, resulting in an increase in shitty behavior.

Those feelings are valid, but the effect they have on other people is not. Dealing with valid emotions in a way that doesn't harm other people can be incredibly difficult, especially when those emotions put harm front and center.

Our emotions are valid, but our behaviors are not automatically so. It behooves us to mind our emotional states when they cause problems for other people. Often, that will coincidentally bring about an emotional state we prefer as well, though often with unpleasant transitions.


Yup, I agree with all of that.

I didn't say anything about behaviors, because it was emotions that were labeled as shitty.

I didn't bother to go into the distinction between emotions and the bad behavior they often give rise to, so thanks for explaining that.


"valid" seems sort of weasel-wordy here.


Okay, I'll phrase it more strongly:

Emotions as such are good, from sorrow to rage to joy.

Each one is a good response to some situations, as far as I can tell.

Your emotional response can be misplaced, so that you experience an inappropriate emotion in some situations.

As noted elsewhere in the thread, you can also be inspired by your emotions to inappropriate (and just terrible) behavior.

The emotions themselves, though, are not shitty. Recognizing them and understanding where they're coming from can be tremendously helpful in aligning your actions with reality and your own values, and even in discovering what your own values are.

I share this perspective not out of a sense of superiority but in the hope that it helps someone else avoid my mistakes.


I don't see the point in saying emotions are automatically good, but also inappropriate. This just seems like a segregation of two concepts that don't need separating.

E.g. is saying "I think people of type X are bad so I feel hatred towards them" a good emotion that is misplaced, or a bad emotion? I don't see the useful distinction.


People often try to eliminate emotions from their lives, whether some specific ones or all of them.

It's called "repression" and it can really mess you up.

I don't mean that emotional responses are automatically good - I mean that emotions, as abstract entities, are inherently good.

I'm not so sure I'd call "hate" an emotion, so I'll try a different example.

If I'm angry at someone because they told me I wrote a buffer overflow (and I actually did), that's an unreasonable and unhealthy response. The fact that I feel anger over it should impel me to introspection and working out why I'm angry over useful technical feedback. From there, I can move on to personal change so that I'm no longer inclined to be angry about good, helpful input.

If I'm angry at someone because they sexually assaulted my wife, that's a healthy anger response. They've mistreated her horribly and my anger on her behalf should push me to protect her and seek justice.

How exactly I act on that anger matters, though - physically intervening so they cannot continue to hurt my wife and then calling the police would be good.

Pulling out a pistol and shooting them would not.

Does that clarify my thoughts at all?


I appreciate the explanation, but it doesn't really seem to justify making the distinction.

Calling all emotions axiomatically good, but also saying that they are "unhealthy" if they're "inappropriate responses" still seems unnecessary.


> The majority of Covid disinformation was spread by 12 people with limited resources.

Created by 12 people; spread by many thousands.


> It has been true only to a certain level. […] as long as it happens in small enough numbers it is possible to have people point it out and make a fuss about it.

I think the opposite is true. This isn’t a historical perspective, you’re using logic to speculate. Historically speaking, there have been fakes that reached huge numbers of people, and they were more damaging then than they are today because they were more believable; the public had not yet conceived that photos could be faked, and it was not possible to see evidence of fakery. Today, everyone knows photos and videos can be faked.

I don’t know of any deep fake videos yet that have tricked a large number of people or been used for political purposes. Maybe it has happened, I don’t know, do you know? But there have been lots of influential faked photos. Just Google a little to find hundreds of historical examples of famous and misleading doctored photos. (Lots of overlap in these lists, because some of the photos are famous).

https://www.cc.gatech.edu/~beki/cs4001/history.pdf

https://delmarwatsonphotos.com/photographs/famous-photograph...

https://www.quora.com/What-historical-photos-are-highly-misl...

https://www.businessinsider.com/fake-photos-history-2011-8

https://www.ranker.com/list/historic-images-that-were-retouc...

https://www.ba-bamail.com/content.aspx?emailid=29607

https://www.pinterest.com/yosomono/faked-images-everyone-thi...


Since the inception of photography, all photographs have been lies [1,2]. The only remedy is critical thinking and awareness and skepticism on the part of the recipients, which is being outpaced by the technology.

[1] https://en.wikipedia.org/wiki/Hippolyte_Bayard#Self_Portrait...

[2] https://i.redd.it/nh45pwigrhc21.jpg


>It is also so devastating that you can no longer believe anything published online, even if it looks legit.

I don't buy these sky is falling arguments.

Deep fake video will have about as much impact as photoshop has had.


This has been true since forever though. (It's such a weird point.) Very believable photoshops have been possible since forever, but you generally only believe images that are verified by a trusted source. Even ridiculously fakable things like "someone telling you a thing is true" (without having photographic evidence of it) has somehow not been completely eroded as a communication channel by deceptive agents because of reputation and trust holding it all up.


> Now you will see videos of politicians saying something and even then you cannot be sure whether this is actual video or a fake

Back to text and the good ol' credibility of the messenger. Digital media commodified journalists, and now the need for credibility will let them come out of anonymity again.


Does anyone actually determine what to believe this way? Like if you read a quote from a politician in a large newspaper, you don’t believe it’s real, but if you see a cell phone video of the politician saying something at a rally, you do believe it’s real? Personally my confidence in the veracity would be the opposite. There’s nothing special about video that makes it fundamentally harder than text to distort, edit, or even outright fabricate.


What would a detective or a jury believe? Video or testimony?


Presumably they would believe (or at least be instructed to believe) neither implicitly.


To be fair, this tech has been around for years and I've yet to see it be used successfully in social media misinformation. The stuff I see on my in-laws' Facebook is usually some clunky meme photo with shocking text.

A video needs to actually be watched, that takes more effort, and then it would be widely debunked as fake. The "fake news" memes are usually at least partially true which helps convince people that the misinformation is legit.


“I’ve yet to see it used successfully…”

As far as you know. By definition, wouldn’t its successful use mean you didn’t know it was successfully used?


I say it every time this comes up: People don't care if something looks legit. They care if it supports their views. Nothing will get worse just because the fakes get better. It might even help when it's common knowledge that everything can be faked by a 14 year old on their PC.


But people do care if it looks legit. Something that looks legit supports their views better, even if it's actually counterfeit.

But totally agreed it needs to be common knowledge that everything digital can potentially be bogus. This stuff should be taught in schools from an early age honestly.


I think we'll eventually see hardware that cryptographically signs content with a timestamp as soon as it is produced, but then that could be fooled by someone creating a deepfake, projecting it on a high-resolution screen and photographing it with another camera. Or we won't believe things unless they are captured by multiple cameras at different angles, and maybe future GANs will be able to cover that too. We are seeing the beginning of an arms race!


As of recently, we can recreate 3D space in the weights of a neural network.

Neural Radiance Fields (NeRF): https://www.youtube.com/watch?v=JuH79E8rdKc&t=5s


As long as everyone wants their information for free, this will be just another layer of icing on the cake of misinformation.

Good, clean, and reliable information is expensive and needs a fair bit of work. I can see why most people have forgotten that, but it might come back to them, and then this will be way less of a problem.


"Now you will see videos of politicians saying something and even then you cannot be sure whether this is actual video or a fake."

This is actually already the case with Biden and Trump video clips even without being deepfaked. Often they are presented out of context to the point of completely reversing reality. It's helpful to assume any clip is fake by default, especially if it's a viral one that makes one side look bad.

By the time deepfakes are common, it'll be best practice to assume fake by default.


It doesn't even need an actual quote taken out of context; just a headline (or a thousand) will already have an effect, because people scan and can't be bothered to read the contents, until it becomes a background idea stuck in someone's head. People also forget where they read something, they forget details, and they simplify things over time.



I agree - this is going to make finding the truth so much harder once it’s weaponized as it surely will be. How can representative democracy flourish when you can no longer tell who you want to represent you?


Not to mention the politicians who are recorded doing something genuinely shady and will wave it away as fake news - this is happening already even without deep fakes.


Maybe it could mean journalism will be professionalized again, when trust isn’t as simple as taking a video on your phone or writing a blog post… if we’re lucky.


Idk seems like an overstated problem.

People will be less likely to believe leaks or supposed hot-mic recordings, but the majority of what politicians and other public figures say happens in public view, which makes it difficult to fake.

You already don't really know if a video you see on Facebook has been carefully re-cut to change the meaning or tone of what the speaker was saying, so I think the fact that we can more easily wholesale create videos doesn't really change much. If you want to know if something is real the best option is still to cross reference multiple sources and if possible multiple recordings.


This has been a possibility for decades but only for those with large budgets. Now that it is more widely accessible more people actually know it can be done.


> This has been a possibility for decades but only for those with large budgets.

Yep. You don't need CGI if you can search a large population for someone who looks like insert-public-figure-here and make up the difference with makeup.


How do you already believe something written is legit? Does it not just put video on par with text?


Normally you try to think and reconcile it with knowledge and experience you already have.

The problem is when everything you have ever learned is suspect.


It's probably safer if people stick to primary sources with good reputations anyway, right?


I remember watching "The Running Man" when I was a kid and thinking that the scene where Arnold loses the fight to the Jesse Ventura character (which in the movie was a "deep fake") was so unrealistic... man, we are already there. Nothing can be believed anymore.


> you can no longer believe anything published online

When has this not been the case?


Fifteen years ago it was not feasible for a random person with an axe to grind to publish a convincing video of a specific person doing something they did not actually do.

It's about to become pretty low-effort for a random person with an axe to grind to do that.

It's not that you can no longer believe anything published online - it's that video evidence without provenance was relatively reasonable to trust for a few years, and it's about to stop being so.


The problem is actually opposite. People won't believe anything because they assume that what they are seeing is a deep fake. It's happening already. If you see videos online of Trump / Biden / someone notable there's always someone claiming that it's a deep fake if they don't like what they see.


Isn't it exactly the same problem rather than opposite of it?


Yes, it is.




