Stealing keys from PCs using a radio: cheap electromagnetic attacks (tau.ac.il)
223 points by liotier on June 20, 2015 | 44 comments



Very cool work, but blinding is a good countermeasure [1], which all RSA implementations should use.

[1] https://en.wikipedia.org/wiki/Blinding_(cryptography)

Edit: And BTW, yet one more argument for NaCl or libsodium, which boasts "no data-dependent branches."
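
For the curious, here's a rough Python sketch of the blinding idea (my own toy illustration with made-up names, not GnuPG's actual code): the private-key operation runs on a randomized ciphertext, so the secret-dependent work no longer correlates with the attacker's chosen input.

    # Toy sketch of RSA blinding -- illustrative only (no padding, no real key handling).
    # n, e: public key; d: private exponent; c: ciphertext. Needs Python 3.8+ for pow(r, -1, n).
    import secrets

    def blinded_decrypt(c, d, e, n):
        r = secrets.randbelow(n - 2) + 2        # random blinding factor, coprime to n w.h.p.
        c_blind = (c * pow(r, e, n)) % n        # blind: c' = c * r^e mod n
        m_blind = pow(c_blind, d, n)            # decrypt the blinded ciphertext
        return (m_blind * pow(r, -1, n)) % n    # unblind: m = m' * r^-1 mod n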


Stuff like this blows my mind.

I am still not entirely clear on how they are able to extract the key (does the victim computer need to process the provided encrypted payload? So if the victim never does that, they can't steal the key?), but it's still a fascinating read.


Often this sort of leakage comes down to timing: you can infer what the other side is doing from how long it takes you to get a response. In the simplest example (and only slightly related to the linked article), imagine we have an API endpoint that allows you to log in using a password. A naive implementation of authentication might look like this:

    def check_password(password):
        if password == "hunter2":
            return True
        return False
In a lot of cases this will be vulnerable to a timing attack. If the user submits the password "huntducks", it will take the function momentarily longer to return than if they had submitted "whalehorn", because the underlying comparison exits at the first non-matching character. With a bit of trial and error, an attacker can deduce the password one character at a time by altering their guess so that it takes longer and longer to execute. To avoid this, security-sensitive code needs to be constant-time, which is a lot harder than you'd imagine.
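
To make that concrete, one common fix (a minimal sketch of my own, not from the article) is to lean on a comparison helper that doesn't exit early, such as Python's hmac.compare_digest:

    # Compare secrets without an early exit at the first mismatching byte.
    import hmac

    def password_ok(supplied, expected="hunter2"):
        # compare_digest's running time does not depend on where the inputs differ
        return hmac.compare_digest(supplied.encode(), expected.encode())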

Attacks like the ones the authors are performing are a lot more complicated and use tricks like coercing the victim into signing distinctive plaintexts, but the same principle also applies to more complex systems like ECDSA, where people can derive the signing key just by knowing how long signing took.


Very interesting, and something I'd never considered before.

Instead of going to the effort of making the function run in constant time, might adding a small random delay before it returns also mitigate this type of attack?


The noise could be filtered out with enough samples, so this mitigation only increases the number of attempts required and does not prevent the attack.
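
A toy illustration of why (my own sketch with made-up timings, not from the article): average enough measurements and the random delay washes out, leaving the underlying timing difference visible.

    # Hypothetical timings in ms: 1.2 when the guess shares a prefix, 1.0 when it doesn't,
    # plus a random 0-5 ms delay added by the proposed mitigation.
    import random

    def measured(base_ms):
        return base_ms + random.uniform(0, 5)

    n = 10_000
    avg_match    = sum(measured(1.2) for _ in range(n)) / n
    avg_mismatch = sum(measured(1.0) for _ in range(n)) / n
    print(avg_match - avg_mismatch)   # converges to ~0.2 ms despite the random delay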


Best practice is to hash the password. This is good for a lot of reasons, one of which is that the time to compare hash("hunter") vs hash("hunter2") vs hash("hunter21") does not correlate with the submitted password in any meaningful way.

https://en.wikipedia.org/wiki/Hash_function
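
A rough sketch of that idea (my own illustration; a real system would also use a slow, salted password hash such as bcrypt rather than plain SHA-256):

    # Any timing leak in this comparison reveals bytes of the digest,
    # which says nothing useful about the password's prefix.
    import hashlib

    def password_ok(supplied, stored_hex_digest):
        return hashlib.sha256(supplied.encode()).hexdigest() == stored_hex_digest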


And it gets even worse than that: in certain environments, the fact that it takes more energy to flip a transistor's output than to keep it the same can leak information.

Noise technically mitigates, but it's like covering your unlocked safe with a cardboard box.


This is a chosen ciphertext attack, so yes, you have to decrypt the attacker-provided payload.

That being the case, there are a lot of systems where that is easy to achieve. If you have GPG in your mail client, it will decrypt anything anybody sends you.

Another thing to keep in mind is that attacks always get better; they never get worse. So today's chosen-ciphertext attack is tomorrow's zero-knowledge attack.


You may be interested in this episode of The Amp Hour podcast, where they talk about powerline analysis attacks: http://www.theamphour.com/239-an-interview-with-colin-oflynn....

In general, it is like many other side channel attacks, where you can use the data to put constraints on the search space for a brute force attack.


Thanks a lot! This looks great!


yep, remember when we used to "air-gap" computers for safety?


A lead gap might be a better term.


Note: this is effectively a followup (with improved techniques) to http://www.cs.tau.ac.il/~tromer/handsoff/, which has also been in the (tech) news.


> We thank Werner Koch, lead developer of GnuPG, for the prompt response to our disclosure and the productive collaboration in adding suitable countermeasures.

"[...] If you are using a GnuPG version with a Libgcrypt version < 1.6.0, it is possible to mount the described side-channel attack on Elgamal encryption subkeys. [...]"

https://lists.gnupg.org/pipermail/gnupg-announce/2014q3/0003...


I'm a bit confused. The paper states:

"We have disclosed our attack to GnuPG developers under CVE-2014-3591, suggested suitable countermeasures, and worked with the developers to test them. GnuPG 1.4.19 and Libgcrypt 1.6.3 (which underlies GnuPG 2.x), containing these countermeasures and resistant to the key-extraction attack described here, were released concurrently with the first public posting of these results."

Basically, that Libgcrypt 1.6.3 underlies GnuPG 2.x. But when I check my system:

    foo@bar:~$ gpg2 --version
    gpg (GnuPG) 2.0.22
    libgcrypt 1.5.3

So I'm wondering why GnuPG 2.0.22 isn't using Libgcrypt 1.6.x?

I can see from your reference that I can upgrade to Libgcrypt 1.6.x but that it requires a rebuild of GnuPG, which I'd rather not deal with right now.


You sound as if you're going to build the binary manually (using an axe or something). Just compile the latest version for your system. Your distribution's packages should be up to date by now.


Wouldn't this be far less effective on a busy computer (e.g. one loading webpages) or one using a multi-core processor?


That increases the noise floor, yes. However, it does not stop the attack; it merely makes it more complicated (and probably slower).


If the same message (or similar known messages) were decrypted in series, the pattern could quickly be separated from the noise.


Reminds me of the old Tempest project. Some say they could read the information going through a monitor cable from over a kilometer away as early as the fifties.

http://cryptome.org/nsa-tempest.htm


I thought it was the monitor itself, rather than the cable, that leaked enough EM radiation to reconstruct images.

Also: do LCD monitors leak enough that someone could recreate the image on them from some distance away?


There are optical-domain attacks for CRT monitors (including diffuse reflections off walls from the "blue glow"), and likely similar ones for LCDs. And there are Van Eck attacks on CRTs and LCDs. Cables don't usually leak, by virtue of twisted pair and coax acting as solenoids (an ideal solenoid emanates zero net EM flux at a distance and is immune to external EM fields), but connectors, unshielded traces, straight wires, untwisted ends of twisted pair, and component joints tend to be the usual suspects.

http://www.cl.cam.ac.uk/~mgk25/ieee02-optical.pdf

http://www.cl.cam.ac.uk/~mgk25/pet2004-fpd.pdf

http://www.hack247.co.uk/blogpost/van-eck-phreaking/ (unscientific/not peer-reviewed)


The flexible connector between the LCD panel and whatever PCB is in the system will leak.


TEMPEST is the security standard for Emanation Security (EMSEC). They keep most of the details classified because they use emanation attacks in the field and don't want shielding to spread too far. Still, you can see what these systems look like on the vendors' web sites:

List of certified companies https://www.nsa.gov/applications/ia/tempest/tempestPOCsCerti...

CIS Secure Computing TEMPEST gear http://cissecure.com/products/

Advanced Programs Inc TEMPEST gear http://advprograms.com/


Also related: their previous work with RSA co-inventor Adi Shamir at http://www.tau.ac.il/~tromer/acoustic/


Pay no attention to that pita 50 cm from your computer ;)


I realize that you're joking, but RF penetrates many materials. Unless you have a lead desk, you should be checking underneath it before sending state secrets. :-)


If you're sending state secrets you should probably check even if you have a lead desk. Things like keyghost are pretty small and hard to see.


I've seen computers that have a spread spectrum* feature meant to reduce EM emissions. How would this analysis fare against such a machine?

My guess is that it wouldn't be hindered at all. I've never seen the option in a notebook computer, and since the purpose is to pass emissions regulations, I assume it was already enabled in the machines they tested, and that the difference in the emitted frequencies of various operations is much greater than any variation of the base clock.

[*] http://www.anandtech.com/show/2500/20


IIRC, what it does is add a tiny amount of jitter to the clock signal, which "flattens" the spectrum of the emitted signal slightly: instead of a sharp peak at, say, 1 GHz, the energy gets spread around, but still centered on that frequency.
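
A toy numpy sketch of the flattening effect (my own illustration, not how real spread-spectrum clocking is implemented): jittering a carrier's phase lowers and smears its spectral peak.

    import numpy as np

    fs = n = 65536                      # 1 second of samples
    t = np.arange(n) / fs
    f0 = 8000                           # stand-in for the clock fundamental
    clean  = np.sin(2 * np.pi * f0 * t)
    wander = np.cumsum(np.random.uniform(-0.02, 0.02, n))  # slow random phase drift
    spread = np.sin(2 * np.pi * f0 * t + wander)
    print(np.abs(np.fft.rfft(clean)).max(),    # tall, narrow peak
          np.abs(np.fft.rfft(spread)).max())   # lower peak, energy smeared over nearby bins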


The reason the feature exists is so the equipment can pass EMC (electromagnetic compatibility) tests such as FCC Part 15 for emissions. By spreading the radiated energy further out across the frequency domain, the peak amplitude is reduced and you stay under the limit. Of course this raises the noise floor for everyone, but my sheep ought to be able to graze in the common.


I wonder if a laser could be used to pick up this signal, so that an individual could be targeted from afar?


It's not impossible:

http://www.technologyreview.com/view/517336/physicists-detec...

Lasers have been used for 50+ years in the intelligence community to eavesdrop on voice conversations.

https://en.wikipedia.org/wiki/Laser_microphone


My understanding is that light doesn't interact with electromagnetic fields, so you couldn't use a laser to detect electromagnetic fields.


Light IS an EM field. First sentence on Wikipedia: "Light is electromagnetic radiation ..."

And yes, electromagnetic fields interact.


You're probably thinking of a directional antenna, something like the BlueSniper rifle that was all over the news about 10 years ago...


Consider a physics class to learn about electricity and magnetism.


I assume folks using a smartcard for their everyday encryption are safe from these kinds of attack?

Can anyone confirm that?


Everyone with updated software is safe from these attacks, since the authors disclosed them to the affected software beforehand.

As for smartcards, there is no reason to assume they do anything to mitigate this.


A MacBook is essentially a Faraday cage, so that should protect against this.

I guess another way of seeing it is that a MacBook is a large chunk of antenna, which would be quite bad.

Don't know.


Having been through FCC testing, I can confidently say that the MacBook's case, and really any metal consumer computer case, will do very little to help. EM radiation will leak through any gaps in that metal, and leak out of any cables plugged into the machine that aren't also caged: the power cable, USB cables, display cable, etc.

Now if you'll excuse me, I'm going to curl into a ball and cry away the FCC testing tears.


Gene Hackman's "office" in Enemy of the State seems more useful all the time. :)


Oh the joys of copper mesh Faraday cage rooms. (Did some 900 MHz and 2.4 GHz industrial packet radio work.)


I was recently surprised by a claim on some website that in order to shield the inside from EM interference, the cage needs to be grounded; otherwise it may enhance the interference. Is that true? I would expect grounding to be necessary only to prevent the cage from acting like an antenna for the equipment inside, but the shielding effect should work regardless of grounding... Could you share some of your experience with those cage rooms?



