Encryption keys can be hidden in a way that is next to impossible to recover. Superfish-level insecurity doesn't have to be the norm. The downside is that there are no open-source libraries that make this possible, which is why few people know about it.



That's obfuscation, which is just an arms race; it's not security in any measurable sense. It's fundamentally equivalent to the DRM problem. And because there are no open-source libraries and not much public documentation about how to defeat this sort of thing, very few people who do this kind of work have a good understanding of exactly how robust it actually is (and, anecdotally, most tend to overestimate their own products by a lot).

The one thing you can do is put the key in a separate hardware device, and have the hardware refuse to make the key directly available, performing encryption or decryption operations only under certain circumstances (e.g. after verifying what's running on the device). This is definitely doable with a TPM on a standard PC, and there are in fact open-source libraries and tools that will handle this for you.
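
For a concrete flavor of what those tools do, here's a rough sketch of sealing a secret to the current PCR values using the open-source tpm2-tools CLI, driven from Python. The flags follow tpm2-tools 5.x and may differ across versions; the file names and the choice of PCRs are just placeholders.

    # Rough sketch: seal a secret to the current PCR values with the
    # open-source tpm2-tools CLI (flags as in tpm2-tools 5.x; they may
    # differ in other versions). File names and PCR selection are arbitrary.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    PCRS = "sha256:0,2,4,7"   # firmware/bootloader measurements (illustrative choice)

    def seal(secret_path):
        # Primary key under the owner hierarchy; the sealed blob lives under it.
        run(["tpm2_createprimary", "-C", "o", "-c", "primary.ctx"])
        # Policy: only unseal if these PCRs still hold their current values.
        run(["tpm2_createpolicy", "--policy-pcr", "-l", PCRS, "-L", "pcr.policy"])
        run(["tpm2_create", "-C", "primary.ctx", "-L", "pcr.policy",
             "-i", secret_path, "-u", "seal.pub", "-r", "seal.priv"])

    def unseal():
        # Succeeds only if the PCRs match what was measured at seal time.
        run(["tpm2_load", "-C", "primary.ctx", "-u", "seal.pub", "-r", "seal.priv",
             "-c", "seal.ctx"])
        run(["tpm2_unseal", "-c", "seal.ctx", "-p", "pcr:" + PCRS, "-o", "secret.out"])

    if __name__ == "__main__":
        seal("secret.key")
        unseal()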


Which brings up a good point: if you have root (and presumably you can even write a different kernel), how do you make sure the TPM can verify what's actually running on the processor when you can just fake it?

Or better yet, if you have full-blown root, what's preventing you from just LD_PRELOADing some code into that process and stealing the decrypted data before it gets to the legitimate application? Or taking a screenshot?

So I think the point is that Google probably will not allow this to be run on any ROM that's not signed with some key.


> if you have root (and presumably you can even write a different kernel), how do you make sure the TPM can verify what's actually running on the processor when you can just fake it?

On a PC architecture, the TPM is wired up to the CPU and other parts of the system, such that (for so-called "static root of trust") it gets initialized with a hash of the BIOS at bootup. The legitimate BIOS then adds in a hash of the boot sector, which adds in a hash of the kernel, which adds in a hash of anything the kernel thinks is worth verifying. Only if the final value of this TPM register (called a "PCR") matches up will the TPM allow a stored private key to be unlocked ("unsealed") and used.
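
To make the measurement chain concrete, here's a toy model (plain Python, no TPM) of how a PCR accumulates hashes. The component names are made up, but the extend rule, new PCR = hash(old PCR || hash(component)), is the real one, and it shows why tampering with any link changes the final value:

    # Toy model of the static-root-of-trust measurement chain described above.
    # A real PCR starts at all zeros and can only be changed via extend:
    #   PCR = SHA256(PCR || SHA256(component))
    import hashlib

    def sha256(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def extend(pcr: bytes, component: bytes) -> bytes:
        return sha256(pcr + sha256(component))

    pcr = bytes(32)  # initial PCR value: 32 zero bytes
    for component in (b"BIOS image", b"boot sector", b"kernel image"):
        pcr = extend(pcr, component)
    good = pcr

    # Tamper with any single link and the final value changes, so a key
    # sealed to `good` would not unseal.
    pcr = bytes(32)
    for component in (b"BIOS image", b"boot sector", b"patched kernel image"):
        pcr = extend(pcr, component)
    assert pcr != good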

Alternatively, for so-called "dynamic root of trust", there's a processor instruction (SKINIT on AMD, GETSEC[SENTER] with Intel TXT) that clears all processor state (interrupts, paging, etc.), resets a particular TPM PCR, and measures and launches a block of code. If the code is different, the key won't unseal. If someone intercepts that processor instruction, the PCR won't get initialized correctly, and the key still won't unseal.

So it's mostly up to the kernel to verify everything that could possibly be relevant. (If you're thinking this is a hard engineering task: yes, that's one reason why this isn't in wide deployment, despite all the technology existing.) For instance, it might verify an entire read-only root filesystem, and then set things up so that in the work container or VM nothing else can be installed, no additional executables or libraries or LD_PRELOADs get loaded, debuggers don't work, etc. In the personal-use container/VM, it can still run a normal OS.
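
As an illustration of the read-only-root-filesystem part: on Linux this is usually done with a dm-verity-style hash tree, where the kernel holds one trusted root hash (itself covered by the measured boot chain) and checks every block against it on read. Here's a much-simplified, single-level sketch of the idea, not the actual dm-verity format:

    # Simplified, single-level sketch of dm-verity-style verification of a
    # read-only image: not the real on-disk format, just the idea that one
    # trusted root hash covers every block.
    import hashlib

    BLOCK = 4096

    def block_hashes(image: bytes):
        return [hashlib.sha256(image[i:i + BLOCK]).digest()
                for i in range(0, len(image), BLOCK)]

    def root_hash(image: bytes) -> bytes:
        # Root hash over the concatenated per-block hashes; this is the one
        # value that would be covered by the measured boot chain.
        return hashlib.sha256(b"".join(block_hashes(image))).digest()

    def read_block(image: bytes, index: int, hashes, root: bytes) -> bytes:
        # Verify the hash list against the trusted root, then the block
        # against its hash, before handing the data to anyone.
        if hashlib.sha256(b"".join(hashes)).digest() != root:
            raise ValueError("hash tree does not match trusted root")
        data = image[index * BLOCK:(index + 1) * BLOCK]
        if hashlib.sha256(data).digest() != hashes[index]:
            raise ValueError("block %d failed verification" % index)
        return data

    image = b"\x00" * (BLOCK * 4)   # stand-in for a root filesystem image
    root = root_hash(image)         # trusted value, e.g. checked at boot
    data = read_block(image, 2, block_hashes(image), root)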



