Dutch MPs had a call with deep fake imitation of Navalny's Chief of Staff (nltimes.nl)
265 points by sAbakumoff on April 24, 2021 | 110 comments


While technically curious, from a legal/societal point of view there doesn't seem to be a huge difference to me between having a deepfake or a lookalike actor on the other end. And the old-fashioned telephone has presumably had this impostor problem since forever.


> And the old-fashioned telephone has presumably had this impostor problem since forever.

Navalny himself impersonated a Russian official via telephone to uncover details of his own assassination attempt: https://www.bbc.com/news/world-europe-55395683

And here’s the call itself: https://youtu.be/ibqiet6Bg38

It’s an incredible story and yes it does still happen.


It's always hard to know whether a tech-based phenomenon is actually new or just a migration of something into digital form. There are two ways to be wrong, broadly.

One way is the "technically true" error. There are plenty of predecessors to Twitter, spam, podcasting, etc. Legally, philosophically and so on, it's usually easy to underestimate impact because it really is "nothing new." But in a different medium, at a different scale, or at higher velocity, things change. Junk mail is like spam, but the junk mail problem rarely got beyond manual-handling scale.

The other way to be wrong is the opposite: assuming that digitization will create change, when all we get is a digital version of what came before.

It's hard to know the future. I agree that deepfakes are likely not going to change the world of impostors/fraud too much, though they may get a lot of attention.

OTOH, I do think they have the potential to weird up the entertainment world. I also think they could be high impact in media. If nothing else, they'll strengthen the "skeptical of everything" segment's skepticism. That said, guessing these things is a losing game.


Now they don't need to hire a lookalike actor or impersonator, which both greatly reduces the chance of getting caught (because you can use someone already on your team instead of an actor) and increases your capabilities (because you can call more targets, more often).

At last a genuine case of computers improving productivity ...


I don't know too much about deepfakes, but I'd imagine that you still need some actor, and having them at least remotely resemble the target character is probably helpful. You still need the source actor to act out the mannerisms, way of speaking, physical posture, etc. of the target character.


Nope. It's more like puppeteering an image; no initial similarity is required. It's not directly discussed in the link below, but extending this technique to arbitrary new faces or even cartoon characters isn't truly difficult anymore.

https://developer.nvidia.com/ai-video-compression


If you're imitating Tom Cruise. But not if you just need to roughly look and sound like someone the Dutch MPs have rarely seen before.


And over Zoom, not HD video.


Perhaps one would be wise to demand a certain authentication ere one proceed with sensitive politics.


World leaders start discussing their childhood pets in person, in order to have good online authentication questions. And inadvertently create a new era of world peace through better understanding and appreciating one another.


> [XYZ] ... has had this imposter problem since forever

seems to be a prevailing argument against deepfakes being anything to worry about.

What will happen when there is a realtime crisis where quick decisions are required, and suddenly, amongst the other noise, there is believable CCTV footage of X, or an inflammatory statement by Y, or any of a hundred bad-actor insertions into the fog of war? Something that disrupts a nation state's OODA loop in the real world, where bombs are dropped and people are killed in response to what is determined, too late and only after the fact, to be a deepfake. That assumes any effort at retrospection is made at all, which is by no means certain.

The ability to stage things with a Hollywood studio and VFX team was with us before now, yes, but the ability for any joker with a GPU to invent footage on the fly to fit the realtime and flammable story of the moment is new.

Oh, there is an Indonesian submarine that went missing off Bali yesterday? Well, here is footage from an Indian trawler showing a Russian boat acting suspiciously in the area.

Good luck unpeeling that onion while you are in a hurry.


Quite a few politicians have been caught out by phone call pranksters in the past too.

I guess this is getting more attention because it’s not just a prank.

Makes you wonder though, if pranksters could do it, how often spies have just used fake phone calls before now!


As long as the message from the deepfake or lookalike actor is authorized by the person you're intending to speak to.

That’s mostly the problem here.


Likely true, yep. The scalability of this kind of attack might be different nowadays (or in future), though.


The problem is one of ease and feasibility. As deep fakes become easier and cheaper, they become something anyone can pull off. Thus, from, say, a US perspective, someone could make a fake Trump video and rally together a violent mob to attempt a second takeover of the Capitol (or of some other building or group of people), but this time calling for AR-15s rather than flagpoles and even random office items to beat the police with. Imagine that sent to an underground group of QAnon people who believe that Hillary is an Illuminati pedophile who eats children under pizza shops, and who take it as a secret message to commit some act of terrorism. Anyone with that little of a grip on reality could be tricked into just about anything with a deep fake.


The story misses the actually interesting part: how did they establish contact and initiate the call in the first place? It’s not like you or I can simply call up <country>’s parliament on Zoom. There’s gotta be some channel of authentication other than “dude in video call looks like some politician” (or some politician’s staff of whom they can find some photos on Google Images), too.


Getting into contact with the Dutch parliament is actually quite easy; if you have an interesting enough story, you can probably get them to read your email and forward the Zoom link.

A simple phishing attack should be more than enough to confuse these people. They're almost exclusively schooled in social sciences, business, history, that kind of thing. Incredibly few of them have any sort of technical background. There are plenty of agencies working their hardest to keep the political leaders safe, but they can't fix the people themselves.

A Dutch journalist managed to get into a "secret" Zoom call with the European ministers of defence after a Dutch politician posted a picture of her screen... with the invite link and most of the password visible. I'm sure they're intelligent people, but when it comes to computers, their young children/grandchildren are probably more capable of securing themselves online than they are.

Also keep in mind that there was an aura of secrecy surrounding Navalny's chief of staff even before the Russian government tried to kill Navalny. Things like routing traffic through Tor and using privacy-enhancing technologies like Fastmail are easily explained in an environment where the government actively wants to kill any competition to the current leadership.

In truth, I think these people fell for a well-put-together spear-phishing attack that worked because of their lack of digital skills. I strongly doubt that the leaders of other countries would do much better; politicians and tech rarely mix well.

I find it much more troubling that the Dutch government is using Zoom, a product with a terrible security track record, from a company based in a country that notoriously spied on the politicians of even allied countries. Using American software for government videoconferencing (especially about Russian politics) is a terrible risk.


> the use of privacy-enhancing technologies like Fastmail

Fastmail is a privacy enhancing technology now? I thought it was just an email provider?


Fastmail has a few privacy-enhancing features (one being "paid email that's not connected to a tech giant") and it's a token technology that's recognisable by end users.

It's no Signal or Threema in levels of privacy enhancement, but it's something.


Fastmail is a great email service, but their service won't keep you safe from, say, state actors.


It just baffles me how late governments seem to be in establishing secure videoconferencing applications for their middle management (which is effectively what MPs are). I would assume most of NATO has better tools for military applications; what is stopping them from repurposing some of those tools for more "everyday" situations?


MPs are not "middle managers". In many countries [0] Parliament is sovereign: Collectively, MPs are the highest authority in the land. Theoretically, if they chose to do so, they could dissolve the judiciary, have anyone they didn't like executed, and declare war on their neighbours.

[0] https://en.wikipedia.org/wiki/Parliamentary_sovereignty


Well, in most countries the sovereign is actually the people, and Parliaments act as intermediary interpreters of the popular will. Hence, middle-management.


No, that's not true. In countries with a history of royalty, the monarch is sovereign, but in modern times that power is vested in Parliament.

The notion of popular sovereignty is largely a recent new-world fancy.


I don't know what your beef is, but what you say is not true for France, Germany, Italy, and likely tons more. All these constitutions state explicitly that sovereignty belongs to the people. Even in the UK, where Parliament is now sovereign by way of forcing its primacy over the Crown, there are plenty of arguments about this (as recently rehashed), and the question is technically not settled.


  It’s not like you or I can simply call up <country>’s parliament on Zoom. 

Actually yes, mostly. You just need to contact one of the sympathetic MPs, on social networks for example, and he will set it up for you. They're not more security conscious than the general population.


One of the benefits of a fairly small country is that politicians are quite approachable.

Amateur YouTube channels frequently walk up to Dutch politicians to ask them a few quæstions without men in black denying them access.

It benefits journalism, of course, as the politicians feel compelled to provide some answer as a refusal to answer a tough quæstion will be construed against them.

In larger countries, security has given politicians the perfect excuse to control who can and cannot ask them quæstions, by only inviting journalists favorable to them to their press conferences and deciding who gets to ask.

Journalists being able to approach politicians directly is quite beneficial to democracy.


For some context, the 'Anti-Corruption Foundation' of Navalny and Volkov has been giving talks to European and US state and human-rights organizations for years. It's actually a bit weird if the MPs didn't get Volkov's phone number and other contact info from colleagues in other countries.

But then, phone numbers can be spoofed, and if these incidents are the work of the state (the FSS has been following the ACF for a long time now), they probably know quite well who Volkov speaks to.


It's likely to be a kind of privilege escalation and trust transfer. They first fool a chain of trusted outside sources and lower-level staff to raise their trust level to be accepted by the target.


> It’s not like you or I can simply call up <country>’s parliament on Zoom.

If the target is the UK, you can wait until BJ posts the meeting ID and MP usernames on Twitter. Somewhat surprisingly, there was a meeting password in that incident.

https://www.bbc.com/news/technology-52126534


That, plus I’d like to know what they discussed!


It's not like any one of those politicians knew his chief of staff. Additionally, he was probably being translated from his native Russian, or speaking mediocre English. It would not have been difficult for any random person to "imitate" him. The main qualification would seem to be the ability to not burst out laughing. A deep fake of an actual public figure would be a different matter entirely.


That doesn't make it any less intimidating. This meeting (if undetected) could've easily been used to pour gasoline on the fire of that situation. Not everyone whose opinion matters greatly and will have a serious impact on decision makers is well-known enough to be "safe" from this.


It does make deep fakes less (immediately) intimidating, because they can't (currently) do anything a human impersonator can't also do.


I'd say the criminal energy & logistical effort of finding an impersonator who looks sufficiently similar and is down for shady stuff makes buying a decent GPU workstation look downright trivial...


Indeed, it's probably very hard to find out what he looks like, despite his and Navalny's Anti-Corruption Foundation giving talks to European state and human-rights organizations for years now. And indubitably he speaks like a bear, he's a Russian after all: https://www.youtube.com/watch?v=bw84XVPlaCk


Excellent use of sarcasm, I have to say, but it isn't an unfair generalisation that Russians don't speak English very well. English isn't, and hasn't been, commonly taught in Russian schools the way it is elsewhere in Europe.


> isn’t an unfair generalisation that Russians don’t speak English very well

Of that I happen to be painfully aware, along with the country's simultaneous post-SU obsession with English or generally Latin-alphabet branding. The closer I come to proper pronunciation, the more difficult it is sometimes to communicate my desires in a shop (if I happen to wander into a non-self-serve one). I guess I should just be thankful that it's not French, with the more regular spelling but half of the letters being silent.


Not every random person. Success in a social engineering attack requires knowing the jargon and framework of thought of your victim. Training a good deepfake is also expensive. Finally, if the campaign has been running since March and targets multiple countries, it takes knowledge and persistence to cover all of these different people and to be prepared for the conversations.


IIRC, in a previous installment of this drama, Navalny himself somehow phoned the people who poisoned him and got them to admit doing so, how it was done (underpants), and their reasons for failure (it was a murder plot).

Someone has gotta do a movie of this.


> Someone has gotta do a movie of this.

Starring Navalny himself... Or something that looks like him.


> Or something that looks like him.

Might be unnecessary if deepfake technology is used.


If I'm not mistaken, this is the first time deep fake technology has been used to carry out a disinformation attack on politicians.

I wonder if the actual video conferences have been recorded. I'm very curious to see them.



Thanks, here's a direct link to avoid paywall: https://0x0.st/-mQu.jpeg


0x0 is a really useful website.


Yep, this is in my ~/.bash_aliases :

  #### Upload file to 0x0.st (The Null Pointer) - A filehosting service.
  # Usage: nullpointer_upload <file>
  nullpointer_upload () { curl -F "file=@$1" https://0x0.st ;}


Ah, amazing, thanks! I didn’t even notice the paywall, odd.


It's the first time they actually did it and got caught. The article specifically mentioned the same actor has been in contact with politicians from Estonia, Lithuania, and the United Kingdom.


Right, I was referring to this attack as a whole.


I'm guessing that they used the excuse of a bad video connection, which is how they almost got away with this.


Have you been following open-source deepfake technology? Recent models are convincingly realistic at 640x360 (the default Zoom resolution).


I was mystified by a series of videos of rather serious politicians singing ridiculous songs, then found the clips were generated in Wombo.app.

It’s a funny little gimmick, but it had me questioning things I’ve previously seen and thought real.

https://www.wombo.ai/


Do you mind sharing some links? I’d love to see the most recent versions


In a parallel comment[1]. I wouldn't have known it was a deepfake.

[1]https://news.ycombinator.com/item?id=26924229


This could mean that video calls can be compressed more aggressively. I.e., just send someone's face once, and then send posture and facial expression parameters for the remainder of the session.


That's a great idea. You could have low latency, high def, high FPS video convos. Couldn't someone launch a plausible competitor to Zoom doing this? I'd certainly prefer that service.


Nvidia is working on it already. https://developer.nvidia.com/maxine


The downside is that we will lose our #1 reason for having symmetric upload/download speeds in residential areas.


It's happened a couple of times that pranksters have managed to make calls to government figures.


Deep fakes seem destined to be the next source of political disinformation. The trouble is that by the time the 'deep fake' has been debunked, it has already made its impact by spreading at rocket speed across social media. It's quite chilling what the consequences could be for political campaigns and debate.

This is an example of 'deep fakes' of two British politicians. If you look closely, you can spot something amiss in the way they speak. Lots of people won't be looking closely, though. And this is from 2019; the technology can only have improved since then.

The fake video where Boris Johnson and Jeremy Corbyn endorse each other (2019)

https://www.bbc.co.uk/news/av/technology-50381728


This seems like the first instance of deep fakes fooling politicians on a geopolitical scale. The repercussions of this are huge.

I really wonder how big the impact of this technology is going to be, especially since corona has made online meetings even more prevalent.


Cryptography (public key signatures) can mitigate this, alongside a globally public, decentralised identity system.

I predict that in 2030, we'll all be using digital signatures as part of our identity, whether directly or indirectly.
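
A minimal sketch of what that could look like today with off-the-shelf tooling (the key ID, addresses and file names below are made up for illustration): the caller signs a short statement with a long-term key, and the MPs verify it against a public key they obtained earlier through a trusted channel.

  # Hypothetical workflow; identities and filenames are illustrative assumptions.
  echo "Leonid Volkov will join the committee call at 14:00 CEST" > statement.txt
  # Caller: produce an ASCII-armored detached signature with their long-term key.
  gpg --armor --detach-sign --local-user chiefofstaff@example.org statement.txt
  # Recipient: verify against a public key imported earlier over a trusted channel.
  gpg --verify statement.txt.asc statement.txt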


> globally public, decentralised identity system.

Zooko's triangle says no. https://en.wikipedia.org/wiki/Zooko%27s_triangle


Names and identity are different.

In particular names only need to be locally meaningful.


Good point on the cryptography. My passport already contains such a key, so it is probably feasible to use something like this for trusted communications.


Until you reach the level of authorities that provide those signatures.

I'd imagine fake IDs with properly signed cryptographic keys on their chips are available to any higher-ranking FBI, FSB or other agents.


My dad's second wife has five passports. And more than a few names.

Whereas my single passport is expired, mostly because I don't plan to go anywhere.


For their respective nations and ones considered friendly - likely.

For more foreign nations? Given proper PKI, I'd hope not.


Are there any examples of decentralized things running at such scale? Whether in the digital world or physical?


Zoom (or Teams or ...) will allow you to change your face as a feature.


So if they went to all that trouble to create a deepfake, what was their mission?

I assume if you employ a deepfake, you also have a 'fake' message to put across.


Russian pranksters have contacted western politicians and tried to record them saying something compromising, or at least stupid, in order to air it on TV. It might be something like that, or an attempt to probe those parliaments to get an idea of their positions, likely future steps, and red lines of support. So it might've been an intelligence-gathering operation.


Good point, I hadn't thought of it. IANAS ( I am not a spy )


Zoom bought keybase a while back. Now it's time to use their identity verification features.


Note that Keybase never offered end-to-end encryption, they tried to solve it with blockchain magic and, granted, got quite far (what they made truly was innovative), but missed a tiny step of actually checking the blockchain (which would be resource-intensive for mobile devices, which is why they pushed it back for later solving) or offering any other way of verifying the encryption keys.

I actually reached out to Keybase as well as one of the NCC auditors that I've talked to in the past for unrelated reasons, but the auditors are playing deaf and Keybase (on the third contact attempt) claims it's safe because of the decentralized social proofs (which aren't actually verified by the app -- you rely solely on the Keybase servers to give you the right encryption keys). All details are here: https://security.stackexchange.com/questions/222055/how-can-...

The whole thing seems moot now that Zoom has bought them, but if Zoom now does anything with blockchain and supposed end-to-end encryption, I'd be very wary and verify things regardless of whether it was audited. Even as a user, you can make sure they covered the basics (from my StackExchange post linked above):

> Users should have demanded followable instructions. We should have questioned Keybase, now Zoom, and anyone who makes a strong security claim. They claim it? You should want to see steps you can follow to verify it. Since you put your trust in the published code, those instructions should not involve any command line coding work, and definitely not have gaps like verifying keys on your laptop and hoping that the server sent the same keys to your phone.


Time to host key-signing parties again.

Maybe in the future we'll talk about unsigned people instead of undocumented people.


The solution is simple: PGP support in every video conferencing tool.


Didn't expect to read simple followed by PGP on HN.


In a sense we already have that. PGP works over pretty much anything. You would just send a signed message over the text chat function using ASCII armor.

As with any authentication system the hard part is knowing who owns the PGP identity in the first place. We don't print the PGP fingerprints of famous people in news reports. Perhaps we should.
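
For instance (a rough sketch; the key and the message are purely illustrative), a participant could paste a clear-signed note into the meeting's text chat, and anyone holding the previously published fingerprint could check it:

  # Produce an ASCII-armored, human-readable signed message to paste into the chat.
  echo "It is 14:02 UTC and I am the person currently on camera in this call." \
    | gpg --clearsign --local-user chiefofstaff@example.org > chat-note.asc
  # Anyone who already trusts the matching public key can verify the pasted text.
  gpg --verify chat-note.asc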


An interesting idea. For WebRTC this information could possibly be exchanged in the SDP offer/answer messages. An extra line in the SDP with a signature and some key info.
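
Purely as an illustration (the a=x-caller-sig attribute below is invented for this sketch and not part of any standard; WebRTC's real identity mechanism works differently), the offer could carry a signature over the DTLS fingerprint line, checked against a key the far end already knows:

  m=video 54400 UDP/TLS/RTP/SAVPF 96
  a=fingerprint:sha-256 6B:8B:F0:65:5F:78:E2:51:3B:AC:6F:F3:3F:46:1B:35:DC:B8:5F:64:1A:24:C2:43:F0:A1:58:D0:A1:2C:19:08
  a=x-caller-sig:ed25519 <base64 signature over the fingerprint line>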


Where is the global system of registrars of all-people-that-we-wish-to-talk-to that will be built into videoconferencing software, the one that we will trust?


Maybe this threat will finally make it more widespread and easier to use.


Easily MITMed for this use case.


You exchange a key using PGP. Then you use this key to encrypt the connection.


Can you explain? I don’t know much about encryption/PGP but I don’t quite follow how exchanging a key via PGP prevents an imposter from posing as someone else? How does it stop me from saying “I’m Navalny’s Chief of Staff, here’s my key”?


Well, you must somehow know the public key of the person you want to chat with. The important feature is that everyone is allowed to see this key. So you can have a trusted network that allows you to build a database of public keys of important people.

Look up public key cryptosystems. A worthwhile read, the entire internet is built on this technology.

https://en.wikipedia.org/wiki/Public-key_cryptography
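
A minimal sketch of the "exchange a key using PGP, then use that key to encrypt the connection" idea mentioned above (the recipient address and file names are illustrative assumptions):

  # Generate a fresh 256-bit session key for the call.
  openssl rand -hex 32 > session.key
  # Encrypt it to the intended participant's public key; only the holder of the
  # matching private key can recover it.
  gpg --encrypt --armor --recipient volkov@example.org session.key
  # session.key.asc can now travel over any untrusted channel; the call software
  # would then use the decrypted session.key to encrypt the media stream.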


Could we do that with a low-enough performance hit?


I've worked on compressing (with x264, which is not the fastest but gets the file size right down) and encrypting a video stream with PGP, streaming the encrypted data via encrypted FTP from a Raspberry Pi. If I remember correctly, the lag was about 2 seconds, i.e. when pulling the power cable at 18:00:05, the server would have footage until 18:00:03. Most of that delay came from compression, since x264 looks at the next frame (so it waits a frame or two before running the algorithm), which delays the output a bit (this can be disabled at the cost of much worse compression).

On the Raspberry Pi, the encryption took a fraction of the CPU that the compression needed, so yes, PGP-encrypting a video stream is relatively trivial, even if you call the GPG agent 5 times per second and do public-key crypto every time (in this case it also made for a nice and robust format: you just split on the BEGIN PGP MESSAGE strings and can ignore broken frames). This can be optimized a lot, of course. The ideal setup is probably to do keyring management with PGP and then use a different, meant-for-streaming format for the actual video data encryption and transfer.
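
For a rough idea of the kind of pipeline described above (a sketch only: the capture device, recipient key and FTPS target are placeholders rather than the actual setup, and -tune zerolatency is the "disable the look-ahead" trade-off mentioned):

  # Capture, compress with x264 in low-latency mode, encrypt with GPG,
  # and push the ciphertext to an FTPS server as one continuous stream.
  ffmpeg -f v4l2 -i /dev/video0 \
         -c:v libx264 -preset ultrafast -tune zerolatency \
         -f mpegts - \
    | gpg --encrypt --recipient backup@example.org --output - \
    | curl -T - --ssl-reqd ftp://server.example.org/stream.ts.gpg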


Yes, it's pretty common in the modern web (search terms: ECDH, TLS). In this case it would be as fast as any E2EE connection.


Still, many sites require uploading a photo of yourself holding up a government ID as proof of identification. Time to stop this nonsense and turn to real cryptographic solutions.


The original source article https://www.volkskrant.nl/nieuws-achtergrond/kamerleden-spra... has some interesting remarks. Because there are no video recordings, it's very hard to investigate this situation. The article also asks whether it was just an actor, a deepfake, or an actor with general video-altering software, and notes that there is no way of knowing without the source material.

In another video, from the personal time of the Dutch Prime Minister, https://www.nrc.nl/nieuws/2021/04/23/door-schrijver-gepublic... (disable JS for no paywall), there was a similar situation. Many people thought it was a deepfake, or that the voice was altered or copied from another video, but later the government put out a statement that the video was real.

I wanted to share this because I saw multiple people claim that PGP may solve all problems, but the problem cuts both ways: real videos are also being labeled as deepfakes. So yes, PGP will help solve part of the problem, but there is a bigger trust issue that needs to be solved.


Well, use S/MIME for emails? Wouldn’t that make such things more difficult to happen?


Well, this is interesting. Just as people all around the world realized they don't need to travel all that much.

Though, it doesn't seem all that hard to verify who you're talking to. Perhaps there's a great business opportunity for a "secure, verified" communication app.


We've been putting all of the "Slack alternatives" and federated chat stuff through its paces, and while it's still young and annoying, the Matrix ecosystem seems to err on the side of "simple for users". If you're in the same physical location as someone you're in an encrypted room with, you can choose to visually verify that person through the interface on the client; it displays a one-time verification code on both clients, and that pair of keys (yours and t'other) will now be trusted, even across clients. It is in line with PGP, but there's no library of public keys.

Some things Matrix does (at least via the Synapse server and Element client) on a federated "home server":

- 100% E2E from first message, if that's what you need

- automatically resizes images, but can send "full size" with a checkbox when uploading. Encrypted at rest on the synapse node. Tested up to 108MP images this way.

- URL previewing can be shut off if you consider that OP-SEC

- you can dump all new users into a specific room, or not. Depends on your stance on users using your "home server".

- Can make public rooms visible or invisible to the federation or specific federated servers (such as the matrix.org home server)

- Decent integration with irc.freenode.net (yay!) - you show up as "MatrixUsername [M]", other federated matrix users can see you in their clients and establish E2E with you directly. The only downside to this one is the "threading" features that probably came from Slack could be disruptive in IRC channels if you overuse them.

   + As part of the above: if you send an image normally to an IRC bridge, it gets made into a public image and the link is sent to the IRC channel(s) you're in. It's both cool and disconcerting.

As I said, it's early days yet, but it's light-years ahead of Rocket.Chat for ease of encryption, and the user interface is more spartan than Mattermost's. I would prefer Mattermost, even though it's not E2EE, except that Matrix can use coturn/TURN, fully encrypted, to make and receive voice and video calls in the client, across all federated servers. And it sounds great. Did I mention encrypted?


Now I'm seeing heads of state, exchanging slips of paper in a GPG keysigning party, in my mind's eye.


[flagged]


How about we skip all the block and chain parts and just use the cryptography part to sign and encrypt a call?


[flagged]


> “cross chain yield farming aggregator”

For folks like me out of the loop, this is a real thing:

cross-chain / inter-chain: https://archive.is/5ncNj

yield farming: https://archive.is/7JupX

aggregators: https://archive.is/qmUOO


The keyword "fintech" and an email address are in the user's actual bio; story checks out.


Blade Runner


It's crazy that everyone here seems to blindly accept that this was actually a deepfake just because the news says so. Odds are this wasn't a deepfake.


What is the alternative? The government themselves claim it to be a deepfake[1], not the news.

[1] https://www.volkskrant.nl/nieuws-achtergrond/kamerleden-verg...


Simply a lookalike? I have no idea if that's what happened; I don't know how much work it is to find a willing person who looks sufficiently alike to do the rest with fake hair, makeup, and a decent amount of JPEG. Training a computer is probably easier nowadays. So if it was a deepfake, that raises the question: how was it detected? There is some research into detecting them, but did the impostors not bother implementing countermeasures? Are there no known countermeasures? How bad were the artifacts; can we tell them apart from compression artifacts with the naked eye? Can the artifacts tell us what software was used? All of that is irrelevant if it's an impostor and very interesting if it's a deepfake, hence the question seems like a relevant one to me.


Yeah, but that's only because in 2021 everything is a deepfake. Even when it's just a normal fake relying on practical effects.

It's cooler to blame a deepfake than a silicon mask or a fake beard.


It's not really in the interests of Dutch politicians to claim they were duped by a deepfake when they weren't. It makes them look foolish.


The more sophisticated your adversary, the less blame falls on you. It's the same with all those breach notifications talking up "advanced persistent threats" and "state-level actors".

It's better to have been fooled by AI magic trickery than a guy with a fake beard glued to his face.


It's funny to me that this has been chalked up to 'state-level actors'.

When it involves a Russian dissident and disinformation, is the list of nations potentially involved particularly long?


Claiming they were duped by a deepfake makes them look foolish either way, at least to me.


Have you seen the screenshots or recent deepfakes? They are good.


Think how much easier this would be with a mask on, like our clown in chief, who was the only world leader to wear a mask on the recent Zoom call with world leaders.



