I am still not entirely clear on how they are able to extract the key (does the victim computer need to process the provided encrypted payload? So if the victim never does that, they can't steal the key?), but it's still a fascinating read.
Often this sort of leakage comes down to timing: you can infer what a system is doing from how long it takes to respond. As the simplest example (and only slightly related to the linked article), imagine we have an API endpoint that lets you log in with a password. A naive implementation of the check might look like this:
if password == "hunter2":
    return True
In a lot of cases this will be vulnerable to a timing attack. If the user submits the password "huntducks", the function will take momentarily longer to return than if they had submitted "whalehorn", because the underlying string comparison exits at the first non-matching character. With a bit of trial and error an attacker can deduce the password one character at a time, adjusting each guess so that it takes longer and longer to execute. To avoid this, security-sensitive code needs to be constant-time, which is a lot harder than you'd imagine.
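For illustration, a constant-time comparison along those lines might look like this sketch (Python; in practice you'd reach for something vetted like hmac.compare_digest from the standard library, since even an innocent-looking loop can be undermined by interpreter or compiler cleverness):

    def constant_time_equal(a: bytes, b: bytes) -> bool:
        # Touch every byte regardless of where the first mismatch is, so the
        # running time doesn't depend on how much of the guess was right.
        if len(a) != len(b):
            return False  # leaks only the length, not the contents
        diff = 0
        for x, y in zip(a, b):
            diff |= x ^ y
        return diff == 0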
Attacks like the one the author is performing are a lot more complicated and use tricks like coercing the victim into signing distinctive plaintexts, but the same principle applies to more complex systems like ECDSA, where people have derived the signing key just by knowing how long the operation took.
Very interesting, and something I'd never considered before.
Instead of going to the effort of making the function run in constant time, might adding a small random delay before it returns also mitigate this type of attack?
The noise could be filtered out with enough samples, so this mitigation only increases the number of attempts required and does not prevent the attack.
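A toy simulation shows why (the numbers are invented: suppose each correctly matched character costs an extra microsecond, and the "mitigation" adds up to 100 µs of random delay per request):

    import random
    import statistics

    CHAR_COST_US = 1.0    # hypothetical extra time per matched character
    JITTER_US = 100.0     # hypothetical random delay added as a "mitigation"

    def observed_time(matched_chars: int) -> float:
        return matched_chars * CHAR_COST_US + random.uniform(0, JITTER_US)

    def mean_time(matched_chars: int, samples: int) -> float:
        return statistics.mean(observed_time(matched_chars) for _ in range(samples))

    # With enough samples the jitter averages out and the ~1 µs difference
    # between a 4-character and a 5-character match is visible again.
    print(mean_time(5, 50_000) - mean_time(4, 50_000))

Making the jitter larger just raises the number of samples needed, roughly with the square of the noise-to-signal ratio.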
Best practice is to hash the password. This is good for a lot of reasons, one of which is that the time to compare hash("hunter") vs hash("hunter2") vs hash("hunter21") does not correlate in any meaningful way.
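A minimal sketch of that, using SHA-256 for brevity (a real system would use a salted, deliberately slow password hash such as bcrypt, scrypt, or Argon2):

    import hashlib
    import hmac

    def check_password(submitted: str, stored_hash: bytes) -> bool:
        # Both sides become fixed-length digests, so comparison time no longer
        # tracks how many leading characters of the guess happened to match.
        candidate = hashlib.sha256(submitted.encode()).digest()
        return hmac.compare_digest(candidate, stored_hash)

    stored = hashlib.sha256(b"hunter2").digest()
    print(check_password("huntducks", stored))  # False
    print(check_password("hunter2", stored))    # True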
And it gets even worse than that: in certain environments, the fact that it takes more energy to flip a transistor than to keep it at the same output can leak information.
Noise technically mitigates, but it's like covering your unlocked safe with a cardboard box.
This is a chosen ciphertext attack, so yes, you have to decrypt the attacker-provided payload.
That being the case, there are a lot of systems where that is easy to achieve. If you have GPG in your mail client, it will decrypt anything anybody sends you.
Another thing to keep in mind is that attacks always get better; they never get worse. So today's chosen ciphertext attack is tomorrow's zero-knowledge attack.
> We thank Werner Koch, lead developer of GnuPG, for the prompt response to our disclosure and the productive collaboration in adding suitable countermeasures.
"[...] If you are using a GnuPG version with a Libgcrypt version < 1.6.0, it is possible to mount the described side-channel attack on Elgamal encryption subkeys. [...]"
"We have disclosed our attack to GnuPG developers under CVE-2014-3591, suggested suitable countermeasures, and worked with the developers to test them. GnuPG 1.4.19 and Libgcrypt 1.6.3 (which underlies GnuPG 2.x), containing these countermeasures and resistant to the key-extraction attack described here, were released concurrently with the first public posting of these results."
Basically, that Libgcrypt 1.6.3 underlies GnuPG 2.x. But when I check my system:
You sound as if you're going to build the binary by hand (using an axe or something). Just compile the latest version for your system; your distribution's package manager should be up to date by now.
Reminds me of the old TEMPEST project. Some say they could read the information going through a monitor cable from over a kilometer away as early as the fifties.
There are optical-domain attacks on CRT monitors (including diffuse reflections off walls from the "blue glow"), and likely similar ones for LCDs. And there are Van Eck attacks on CRTs and LCDs. Cables themselves don't usually leak, since twisted pair and coax are effectively solenoids (an ideal solenoid emanates zero net EM flux at a distance and is immune to external EM fields); connectors, unshielded traces, straight wires, untwisted ends of twisted pair, and component joints tend to be the usual suspects.
TEMPEST is the security standard for Emanation Security (EMSEC). They keep most of the details classified because they use emanation attacks in the field and they don't want shielding spreading too far. Yet, you can see what these systems look like on the web sites:
I realize that you're joking, but RF penetrates many materials. Unless you have a lead desk, you should be checking underneath before sending state secrets. :-)
I've seen computers that have a spread spectrum* feature meant to reduce EM emissions. How would this analysis fare against such a machine?
My guess is it probably wouldn't hinder the analysis at all. I've never seen the option in a notebook computer, and since the purpose of the feature is to pass emissions regulations, I assume it was already enabled in the machines they tested. Also, the difference in the emitted frequencies of the various operations is much greater than any variation of the base clock.
IIRC what it does is add a tiny amount of jitter to the clock signal, which "flattens" the spectrum of the emitted signal slightly (instead of a sharp peak at, say, 1 GHz, the energy is spread around, but still centered on that frequency).
The reason the feature exists is so the equipment can pass EMC (electromagnetic compatibility) tests such as FCC Part 15 for emissions. By spreading the radiated energy further out across the frequency domain, the peak amplitude is reduced and you stay under the limit. Of course this raises the noise floor for everyone, but my sheep ought to be able to graze on the commons.
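A toy numpy sketch of that flattening effect (units and dither parameters are arbitrary, purely to show the peak dropping once the clock is dithered):

    import numpy as np

    FS = 1024.0        # sample rate, arbitrary units
    F_CLOCK = 128.0    # nominal "clock" frequency
    N = 65536
    n = np.arange(N)

    # Clean clock: a square wave with its fundamental energy in a single bin.
    clean = np.sign(np.sin(2 * np.pi * F_CLOCK * n / FS))

    # "Spread spectrum" clock: slowly dither the frequency by about +/-1%.
    inst_freq = F_CLOCK + 0.01 * F_CLOCK * np.sin(2 * np.pi * 0.37 * n / FS)
    spread = np.sign(np.sin(2 * np.pi * np.cumsum(inst_freq) / FS))

    for name, sig in (("clean ", clean), ("spread", spread)):
        peak = np.abs(np.fft.rfft(sig)).max() / N
        print(name, "peak spectral amplitude:", round(peak, 3))
    # Total energy is the same, but the dithered clock's peak is much lower
    # because the energy is smeared across neighbouring frequencies.

The energy doesn't go away, it just gets spread out, which is exactly the noise-floor complaint above.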
Having been through FCC testing, I can confidently say that the MacBook's case, and really any metal consumer computer case, will do very little to help. EM radiation will leak through any gaps in that metal, and leak out along any cables plugged into the machine that aren't also caged. For example: the power cable, USB cables, display cable, etc.
Now if you'll excuse me, I'm going to curl up into a ball and cry away the FCC testing tears.
I was recently surprised by a claim on some website that in order to shield the inside from EM interference, the cage would need to be grounded, otherwise it may enhance the interference. Is that true? I would expect grounding to be necessary only for preventing the cage from acting like an antenna for the equipment inside, but the shielding effect should work regardless of grounding... could you share some of your experience with those cage rooms?
[1] https://en.wikipedia.org/wiki/Blinding_(cryptography)
Edit: And BTW, yet one more argument for NaCl or libsodium, which boasts "no data-dependent branches."