Full disk encryption (FDE) is a UX issue, not a technical one. You don't need a secure cryptographic processor, but the UX sucks without one.
A simple, working FDE setup would be something like LUKS running at boot (a minimal sketch follows the list):
1. Turn on phone
2. Phone loads up initial bootstrap OS
3. Phone prompts user for master key
4. Master key is used to unlock volume
5. Regular OS boot continues
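Steps 3-5 are essentially what an initramfs script does with cryptsetup. A minimal sketch, assuming the encrypted partition is /dev/mmcblk0p2 and the mapper name "userdata" (both hypothetical):

    import getpass
    import subprocess

    # Hypothetical paths; a real initramfs would discover these.
    ENCRYPTED_PARTITION = "/dev/mmcblk0p2"
    MAPPER_NAME = "userdata"

    def unlock_and_continue_boot():
        """Steps 3-5: prompt for the master key, unlock the LUKS volume, mount it."""
        passphrase = getpass.getpass("Master key: ")
        # --key-file=- makes cryptsetup read the key material from stdin.
        result = subprocess.run(
            ["cryptsetup", "open", ENCRYPTED_PARTITION, MAPPER_NAME, "--key-file=-"],
            input=passphrase.encode(),
        )
        if result.returncode != 0:
            raise SystemExit("Wrong key; the volume stays locked.")
        # The decrypted block device now exists; mount it and let boot continue.
        subprocess.run(["mount", f"/dev/mapper/{MAPPER_NAME}", "/data"], check=True)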
If the master key has enough entropy, brute forcing it becomes impossible. The phone won't "disable" as there's no self-destructing component (i.e. "secure crypto chip") but that doesn't mean it can be cracked. Boil as many oceans as you'd like, you're not going to brute force 256 bits of entropy.
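To put a number on "boil as many oceans as you'd like", a back-of-the-envelope calculation (the guess rate of 10^18/sec is an absurdly generous assumption):

    # Assume a fantasy rig doing 10^18 guesses per second.
    guesses_per_second = 10**18
    seconds_per_year = 60 * 60 * 24 * 365

    keyspace = 2**256                 # possible 256-bit keys
    expected_guesses = keyspace // 2  # on average you find it halfway through

    years = expected_guesses // (guesses_per_second * seconds_per_year)
    print(f"{years:.1e} years")       # ~1.8e51 years; the universe is ~1.4e10 old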
The UX problem is that the master key is a PITA to enter if it's long enough to be cryptographically secure. That's what a crypto chip is supposed to solve. A limited number of attempts with a shorter passphrase.
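Conceptually, the chip replaces key entropy with an attempt counter the attacker can't reset. A toy model of that tradeoff (all names hypothetical; a real secure element does this in tamper-resistant hardware):

    import hashlib
    import hmac
    import os

    class ToySecureElement:
        """Toy model: a device-bound key plus an unresettable attempt counter."""
        MAX_ATTEMPTS = 10

        def __init__(self):
            self._hardware_key = os.urandom(32)  # never leaves the chip
            self._failures = 0

        def derive_key(self, pin, expected_tag):
            if self._failures >= self.MAX_ATTEMPTS:
                raise RuntimeError("too many attempts: key material destroyed")
            key = hashlib.pbkdf2_hmac("sha256", pin.encode(),
                                      self._hardware_key, 100_000)
            if hmac.compare_digest(hashlib.sha256(key).digest(), expected_tag):
                self._failures = 0
                return key
            self._failures += 1
            return None

With guesses capped at ten, even a short PIN holds up; without the counter, a 6-digit PIN's ~20 bits of entropy fall instantly.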
On the other hand, phones do not need to be rebooted often. And if I'm upgrading my OS the stupid thing is going to make me wait 20 minutes while it precompiles every app, so screw it I don't care if I have to spend 40 seconds inputting a password.
It's actually a rather logical set of steps; it follows the same pattern as elsewhere, probably for the same underlying reasons.
Compiling once is easier to develop, prove correct, and validate for functionality. Just-in-time compilation offers the convenience of deferring compilation, but it increases storage requirements and can decrease initial runtime performance because time is spent compiling the code. If you have spare cores, this starts to make more sense. And if you have a mechanism for profiling and automatically tuning the optimization of frequently used codepaths versus infrequently used ones, then JIT also starts to pay off (this is where human hours spent making a smarter compiler pay off across many applications that don't warrant human hours spent optimizing directly).
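A toy illustration of that profiling mechanism: count invocations and only spend compilation effort on code that proves hot (the threshold and the optimize() stand-in are made up):

    import functools

    HOT_THRESHOLD = 1000  # made-up cutoff; real JITs tune this empirically

    def jit_when_hot(func):
        """Interpret cheaply until a function proves hot, then 'compile' it once."""
        state = {"calls": 0, "compiled": None}

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if state["compiled"] is not None:
                return state["compiled"](*args, **kwargs)
            state["calls"] += 1
            if state["calls"] >= HOT_THRESHOLD:
                state["compiled"] = optimize(func)  # pay the compile cost once
            return func(*args, **kwargs)
        return wrapper

    def optimize(func):
        return func  # placeholder: a real runtime emits optimized native code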
Another interesting aspect of the evolution of the Dalvik and ART runtimes is that they are runtimes for (mostly) battery-powered devices. A JIT compiler was an obvious next step, but less obvious is that it had to be battery conscious. It was designed to JIT as little code as possible to attain the greatest possible gain in performance. That's not how you would design a JIT compiler for server Java!
Also, the system update install process will happen while the phone is in use, and only require a reboot after installation is finished. The whole process will be much faster.
Not only that, but Android N will be able to boot all the way to the lock screen before asking for the password, and even support functionality like phone calls, limited notifications, or alarms (so if your phone reboots in the night you'll still get woken up in the morning).
Right, but it may reboot at the most inopportune moments, like after the battery runs out, when you may not have access to the 20-character master password that is impossible to remember.
How about a QR code on a business card kept in your wallet? Scan the QR code (with your phone's camera) to input the long passphrase (or, even better, just random bits).
There could still be a short pin/passphrase in addition to that. Keepass allows for something similar by having a local file + passphrase to unlock a password vault.
Backups of the QR code could be as easy as photocopying it!
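The generation side is a few lines with the Python qrcode package (the package choice and filename are just one way to do it):

    import base64
    import os

    import qrcode  # pip install qrcode[pil]

    # 32 random bytes = 256 bits of entropy, base64 so scanners treat it as text.
    secret = base64.b64encode(os.urandom(32)).decode()

    qrcode.make(secret).save("fde-key.png")  # print it, then photocopy the backup
    print("Delete the PNG once the printout exists.")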
I'd rather just type than fuss with getting a QR code to scan in possibly-bad lighting. I just tested a password of 30 random lowercase letters, which is total overkill for security, and it took less than 30 seconds to type into my phone. Doing that perhaps once a month, or once you find a charger after your battery dies, is no problem.
> I'd rather just type than fuss with getting a QR code to scan in possibly-bad lighting.
Nothing says you can't do either, or even both[1]. The idea of using a QR Code is so that mom/dad/grandpa/grandma can use a long password to unlock their phone without having to type it in.
Bonus points if the QR Code itself is in the shape of a key!
[1]: Which would be "something you have" + "something you know".
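Combining the two factors is straightforward: feed the scanned secret and the short PIN into one KDF, so neither the card alone nor the PIN alone unlocks anything. A minimal sketch with the standard library (function name hypothetical):

    import hashlib

    def combine_factors(qr_secret, pin, salt):
        """Derive the volume key from 'something you have' + 'something you know'."""
        # The QR secret carries the entropy; the PIN keeps a stolen
        # card from being sufficient on its own.
        return hashlib.pbkdf2_hmac("sha256", qr_secret + pin.encode(),
                                   salt, 200_000)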
Really? That is something I have dreamed about for a long time. A small object on the finger is the least obtrusive form factor and becomes part of the self. But unlike a fingerprint, a ring can be replaced if somehow compromised.
I doubt it. And you might not want it to, depending on the threats you want to secure against. If you had extremely valuable secrets, someone could just execute you and use your NFC jewelry to unlock those secrets.
Phones spend most of their lives powered on. You need to be required to enter a similarly good password to unlock it while the phone is running, or disk encryption has accomplished nothing at all.
Not true at all. Any attack against the encrypted data itself will be stopped. Any attack that requires restarting the phone to load different software will be stopped. Having a secure boot password and also a software-enforced pin to unlock accomplishes a great deal. And as a bonus, when a security measure is not grossly inconvenient you get more people using it.
Last I checked, it does. I'm pretty sure for instance the Nexus 5X relies on standard LUKS for FDE.
What may be protected through other means is the key used to encrypt that filesystem. The user enters a pin or a pattern to unlock said key on boot, and that key is used to initiate LUKS.
You also want the passphrase to be easily changed, separately from the master key (because changing the master key is slow and resource-intensive).
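That's the standard key-wrapping pattern, and what LUKS keyslots do: the disk is encrypted under a random master key, and the passphrase only encrypts that key, so a passphrase change re-wraps 32 bytes instead of re-encrypting terabytes. A sketch using the cryptography package (illustrative, not LUKS's actual on-disk format):

    import base64
    import hashlib
    import os

    from cryptography.fernet import Fernet

    def wrap(master_key, passphrase, salt):
        """Encrypt the random master key under a passphrase-derived key."""
        kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
        return Fernet(base64.urlsafe_b64encode(kek)).encrypt(master_key)

    def change_passphrase(wrapped, old, new, salt):
        """Re-wrap the master key; the encrypted disk contents are untouched."""
        kek = hashlib.pbkdf2_hmac("sha256", old.encode(), salt, 200_000)
        master_key = Fernet(base64.urlsafe_b64encode(kek)).decrypt(wrapped)
        return wrap(master_key, new, salt)

    master_key = os.urandom(32)  # random, high-entropy, and it never changes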
Even if you have a strong passphrase, there's some probability of leaking some number of bits of it via side-channels every time it's entered (e.g. surveillance cameras, fingerprints on the screen, shoulder surfing, vulnerable code, TEMPEST, etc). Plus, people often keep a backup copy of their passphrases (unique, strong passphrases are hard to remember), so there's also a cumulative risk of the backup leaking over time, as well.
Long-term confidentiality is just surprisingly hard in the real world.
So if I understood correctly, there are 5 requirements for such a system to be secure:
1: secure/unmodifiable cryptographic processor
2: with unremovable rate limiting
3: and exclusive access to a hardware key
4: the cryptographic processor's only function is encrypting user data based on
5: the hardware key and a user-supplied PIN/key
Errors made by Qualcomm:
Violated 3: Hardware key not exclusively readable by the cryptographic processor
Violated 5: Encryption based on derived key
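The distinction in pseudocode form: Qualcomm's KeyMaster mixes in a value that TrustZone *software* derives from the hardware key, so TZ code execution leaks it and the bruteforce goes off-device; a design meeting requirements 3-5 only ever exposes a hardware KDF primitive. Illustrative only, not the actual firmware:

    import hashlib

    def broken_keymaster(shk, password, salt):
        # FLAW (violates 3 and 5): 'derived' is computed by TrustZone software
        # from the hardware key, so any TZ code execution can extract it and
        # run the KDF off-device at full speed.
        derived = hashlib.sha256(shk + b"keymaster").digest()
        return hashlib.pbkdf2_hmac("sha256", password, salt + derived, 4096)

    def intended_design(hw_kdf, password, salt):
        # hw_kdf is a hardware primitive: software passes data in and gets a
        # result out, but can never read the key baked into the silicon.
        return hw_kdf(password + salt)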
Since the security of Android depends on hardware and OEM software not under Google's control, relying on FDE is apparently pointless. I guarantee Google really wants to build their own branded phones with their own secure Android version and gain Apple's advantage in building secure systems, where you own everything.
This vulnerability does not make FDE pointless per se. It means that your security depends completely on the complexity of your passphrase (like the iPhone prior to the introduction of the Secure Enclave), which is still fairly bad given the typical passcode complexity on phones.
> Since the security of Android depends on hardware and OEM software not under Google's control, relying on FDE is apparently pointless.
For 99% of the users, this is complete nonsense. Why do we encrypt the phone? Are we afraid of the NSA or whatever agency cracking our phones? No! The reason most of us encrypt the phone is to prevent others from posting our pictures or reading our mail.
When somebody else finds my phone, or steals it, a four-digit PIN is enough protection. Then again, I don't use a four-digit PIN but something longer, and from six characters up it's really strong enough for manual unlocking.
People who have the means to read contents straight from the memory chips probably also have the means to use other measures to get what they want.
There are levels of threat and protection between those you mentioned. As the threat from people with access to "other measures" increases, you might at least want to make them work for it and have to go through proper channels.
For this to work they only need one little chip that can do what the secure enclave does. They can then require that chip in Android phones. That's a lot cheaper and easier. Then they need to control the manufacturing of this chip, not outsource it to China.
> I guarantee Google really wants to build their own branded phones
Considering they just sold for scrap an entire manufacturing pipeline to do exactly that, I'd say they really don't. If they do, they're going about it in the silliest and most roundabout way.
So out of curiosity, as someone who has a nontrivial passcode on his Android device that people constantly mock: how many characters are we talking about to be safe?
I have known for a long time that 4-digit numeric PINs are stupid. Sadly the San Bernardino case, for all the wrong reasons, taught me all the alternative auth methods are just as risky.
Should I be worried? I don't know. But as a long-time Android enthusiast and power user who does not use Google Play on his phone and relies on restrictive permission customization like XPrivacy, I am about to just give up and keep a newish iPhone for secure stuff and a knock-around Android, with F-Droid, for the cool open source dev I aspire to.
A friend saw me entering my longish, non-trivial passphrase into my android phone and commented that I must not have many friends if I have to enter this passphrase every time to use my phone.
For a moment I was torn between being proud in my passphrase and sad at my lack of friends.
Some of my more sophisticated friends take this a step further and taunt me for not securely hiding notifications and buying a smart watch.
This confuses me, because Bluetooth over the air does not strike me as a secure mechanism (is it just Bluetooth? There's ironically scant detail, unless my Google-fu is off, which is likely at this hour for me).
And do I need yet another smart device? The mobile-first movement, with its shinier crap moving away from standard protocols, leads me to believe I have been passively perpetuating the biggest wart on computer science's labors. I am kind of embarrassed, a few years in and late to the party, that I bought an Android phone at all.
It doesn't really break the encryption, as long as the password is strong enough to prevent brute forcing.
Relying on a weak password and a "trusted computing" mechanism like this one from Qualcomm to prevent an attacker with physical access from brute forcing it is not really advisable.
Using such a mechanism at all has downsides since it means that you lose the data if the mechanism or the entire device stops working, while otherwise you could simply move the SSD/flash/storage unit to a new identical device and have it just work as long as you type the correct password.
But how many Android users do you know that actually have a strong password? (Speaking of which: Google's decision to not allow strong encryption passwords together with short screen unlock PINs/patterns is not helping here. It's actually possible, but they don't expose the user interface to change them separately, probably for usability reasons.)
Also, while I would agree as far as PCs are concerned, moving the eMMC to a new phone's mainboard is probably also not a likely scenario.
> moving the eMMC to a new phone's mainboard is probably also not a likely scenario.
Data recovery is, however, which is why I don't think one-sided promotion of ubiquitous hardware-locked-FDE is a good idea --- it becomes a tradeoff between others getting access to, and you losing access to, your data. Is it more important that this data be accessible by no one but you even if it means you might also lose access to it, or that you not lose access, even if it means others can access it? Hardware-locked FDE is suited only to the former case.
As someone who deals with hardware-backed encryption in laptops, it's a huge pain.
Modern phones are worse. I was at a training and happened to sit with computer forensicators, and they complain about the new eMMC systems. Forensic work on mobile devices was a piece of cake prior to that.
I suspect the eMMC thing is meant to make this difficult on purpose.
You know, I'm not really all that certain that it's intellectually difficult to get crypto right; it feels more likely that it's organisationally and process-wise difficult to get crypto right.
What I mean is that folks want to take an existing product and layer crypto atop it, when really they need to start from the crypto and build a product over it. They want to do things which are mathematically impossible (e.g. let you read data, but not let you pass that data to someone else). They want to hand the crypto to a junior developer instead of someone who knows what he's doing.
Crypto's not hard to do; organisations are bad at doing crypto.
> You know, I'm not really all that certain that it's intellectually difficult to get crypto right; it feels more likely that it's organisationally and process-wise difficult to get crypto right.
Replace "crypto" by just about any software engineering endeavor, and what you say is still correct.
Spot on, with one minor nitpick: crypto is not hard to use (as in, relying on existing libraries/standards/practices). Doing crypto, as in writing your own, on the other hand...
I think that might not be such a bad thing after all, since the flaws make it harder for companies to secure their products against the user, as they are apt to do. Something to ponder: would you rather live in a world of "perfect security" where encryption like this is unbreakable but so is DRM and other mechanisms of corporate/government control; or something closer to what we currently have, where "flaws" in encryption are periodically discovered but things like iOS jailbreaking and Android rooting still exist along with their associated freedoms?
Just don't use a password so weak it can be bruted for your FDE and you're fine. Treat it like most existing software-only solutions like TrueCrypt, not like a magic black box that can make a 4 digit number into a secure password. Even if the hardware were "secure" it still wouldn't be advisable to rely on it as a successful reverse engineering of the hardware could still result in a bruteforce attack on your key. And while that may be high effort and expensive, it's certainly not impossible. Never trust a half measure, this sort of TPM/Secure Enclave approach is a half measure.
Well, hold on... there are several issues. One is that there is an exploit allowing arbitrary code execution in the TrustZone. This immediate problem should be fixed.
The second is that allowing firmware updates to TrustZone code means that Google could push an update which allows them to retrieve keys. Such updates should only be allowed if the user accepts them. How the user is supposed to know the difference between a security fix and Google trying to break into their phone is the real issue.
But iOS has this same issue. If you accept a firmware update from Apple, maybe it includes a PIN logger.
You're right. Let's take a second to go over the issues:
1. The arbitrary code-execution in TZ has already been privately disclosed and fixed.
2. As for the second issue - I would argue that it's not an issue at all. TZ should be updated (otherwise how would they fix the TZ code-exec vulnerabilities?).
However, in this case, gaining code execution in the TZ kernel directly leads to the disclosure of the keys which are meant to bind the KDF to your device. This is in direct contrast with Apple's KDF. The key here is that software shouldn't be trusted, by design.
3. As for the last issue, I would argue that this is just a clever form of social engineering. After all, who's to say they didn't just swap the phone with a dummy phone to make you insert the correct password?
However, in Android's case, you wouldn't even need to cooperate. OEMs could simply flash the new TZ code, extract the KeyMaster keys, and bruteforce the key. All without having any help from you.
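And once the KeyMaster keys are out, the bruteforce is an offline loop over the PIN space with no rate limiting left to stop it. Roughly (the KDF details are simplified, names hypothetical):

    import hashlib
    import itertools

    def offline_bruteforce(leaked_device_key, salt, decrypts_known_header):
        """Try every 4-digit PIN against the leaked device-bound key."""
        for digits in itertools.product("0123456789", repeat=4):
            pin = "".join(digits)
            candidate = hashlib.pbkdf2_hmac(
                "sha256", pin.encode(), salt + leaked_device_key, 4096)
            if decrypts_known_header(candidate):
                return pin
        return None  # 10,000 candidates: done in well under a second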
The section on extracting the key mentions the "Widevine DRM application" and "privilege escalation from QSEE"
Could the problem be fixed if the manufacturer used TrustZone exclusively for key storage/encrypt/decrypt operations, eliminating the support for DRM and trustlets?
I would use a longer password. But currently the unlock password and the encryption password are always the same. That's one of the issues I would like to see changed.
I would also like to have more fine-grained rules on when I can unlock with a fingerprint, when with a PIN, and when I should be forced to put in the full encryption password.
Additionally I would like to use U2F NFC token on my keychain as a second factor for unlock (if I have not touched the phone for X amount of time).
I'll definitely try and get around to it sometime soon. However, I wouldn't be surprised if the situation is the same... After all, the KeyMaster module was initially only meant to keep encryption keys on the device, not to safeguard FDE.
Sorry for an extremely nooby question, but your comment makes me wonder:
Are Snapdragon and Exynos architectures that alike? I mean, do you have any preliminary idea how easy it would be to apply your Snapdragon method to an Exynos SoC?
Actually, I would imagine they are nothing like one another.
However, Exynos uses a TrustZone TEE implementation to implement the KeyMaster module, just like Qualcomm's Snapdragon. This means that unless that TEE implementation uses a hardware key which is not software accessible, the same issue would be present. Of course, reverse-engineering a new TEE implementation would take a long time... But it sounds like it could be fun.
You should be aware that people like you are unsung heroes and I'd like you to know that many of us who are busy struggling to stay afloat are feeling better knowing that people like you are doing the work you do.
Since the author gave an overview of the iOS security measures: if anyone is interested, I created a more in-depth review that is an easy read and gives you a good overview of the security architecture used. I'm happy to hear feedback if you enjoy it. The blog can be found at: https://woumn.wordpress.com/2016/05/02/security-principles-i...