Hacker News
Building a Titan: Better security through a tiny chip (googleblog.com)
149 points by bluegate010 on Oct 17, 2018 | hide | past | favorite | 50 comments


> Finally, in the interest of transparency, the Titan M firmware source code will be publicly available soon. While Google holds the root keys necessary to sign Titan M firmware, it will be possible to reproduce binary builds based on the public source for the purpose of binary transparency.

and

> Transparency around every step of the design process — from logic gates to boot code to the applications — gives us confidence in the defenses we're providing for our users. We know what's inside, how it got there, how it works, and who can make changes.

This should be a boon for security researchers! I'm really looking forward to what comes out of fuzzing that whole subsystem. I imagine attacks against the secure enclave would be a lot easier to perform (and ideally, report to Apple) if it was feasible to attack it with pure software.


> I'm really looking forward to what comes out of fuzzing that whole subsystem.

Surely Google uses (a portion of, at least) their massive compute resources to do exactly this sort of thing before these chips even get anywhere close to an assembly line or being built in to new devices? Is an independent security researcher going to be able to try anything Google themselves haven't already tried?

Or is it kind of like brute-forcing a 256-bit key where, no matter how much "firepower" you have available, you'll never come close to trying all possible combinations of inputs, etc.?


It isn't so much one guy vs all of Google doing the work. It's more like the team that Google employs vs the entire rest of the infosec and academic community. This is a treasure.


Check out this: https://keystone-enclave.org/

Google and Titan are a large part of this effort and it will be great for security researchers.


I recently bought a few Titan products (the security key) - I was pretty bummed to find out that it had none of the features claimed by the Titan family.

No Side Channel Attack resistance.

No fuses to attest supply chain provenance or lifecycle.

No direct connections for FIDO hardening.

Apparently the Titan keys given to Google employees were different from the Titan keys sold to the public, which are themselves different from the Titan M used in servers, phones, and now Chromebooks. None of this would matter so much except that the product's sole purpose is to establish a secure chain of trust, and it starts out of the gate broken with ambiguous or misleading claims.

This is frustrating because the Titan M is an absolutely brilliant device, with some real advancements to normalize embedded security, including an SPI interposer to monitor communications (a real leap forward) - and should not at all be conflated with a generic, white-labeled, non-HSM product that makes no claims whatsoever.


> fact that the product's sole purpose is to establish a secure chain of trust

I think the right way to explain it is that "Titan" is the project to establish a secure chain of trust from user to server and back, making sure that every piece of hardware and software (and every human in the chain) is what it says it is and is doing what it's supposed to be doing.

From that perspective, the Titan M, titan key, and serverside titan chip[1] are all pieces of the same project.

[1]: https://cloud.google.com/blog/products/gcp/titan-in-depth-se...


Google has 3 Titans.

1) Titan for GCP servers: https://cloud.google.com/blog/products/gcp/titan-in-depth-se... (custom hardware, custom software)

2) Security key: https://cloud.google.com/titan-security-key/ ("built with a hardware chip that includes firmware engineered by Google" - seemingly stock hardware, custom software)

3) Titan M mobile: (TFA, custom hardware/software like #1 but for mobile)


How does Google's latest hardware compare to Apple's latest hardware in terms of security? Can the Pixel and Pixelbook now be recommended to journalists[1] as reasonable alternatives to the iPhone and iPad, or are Apple's products still much better in this regard?

[1] https://techsolidarity.org/resources/basic_security.htm


I'm pretty sure the Pixel 2 had hardware keystore security on par with an iPhone 6/7, which is what that suggests.


GP didn’t ask about the hardware keystore, and either way, it’s a false assumption you made. The Titan M is the first chip comparable, based on marketing speak, to Apple’s equivalent. Give it time to see if the marketing speak lives up to technical reality, but my guess at this point is that they’ll be roughly comparable now.


Which actual differences exist between the hardware security of a Pixel 2 and an iPhone 7?


[dead]


No one even bothered to attack the Pixel at last year's Mobile Pwn2Own.


So the new Pixel includes U2F hardware in the device? That's cool - apparently, the flagship Chromebook has dormant U2F hardware, too.

Unfortunately, some providers (mainly Twitter) poorly implemented U2F by only allowing one device per account.


Wow, so after it took them years to get U2F, you can only enroll one device? Is it still set up to require that you activate SMS 2FA too (thereby backdooring your entire 2FA process)? I swear, I don't understand how Twitter is so utterly useless at things like this.


Is it made clear anywhere how memory for the Titan enclave works, and whether they've done something similar to Apple with encrypted memory busses?


> Titan M's CPU is an ARM Cortex-M3 microprocessor specially hardened against side-channel attacks and augmented with defensive features to detect and respond to abnormal conditions.

Looks like the security boundary is around the chip itself, so any bus level security would be application specific.


The post says 64 KB of RAM which is presumably on-chip.


>Last, but not least, to prevent tampering, Titan M is built with insider attack resistance. The firmware on Titan M will never be updated unless you have entered your passcode, meaning bad actors cannot bypass your lock screen to update the firmware to a malicious version.

very explicit threat-modeling with the FBI in mind


Why does this matter when a Google device sends all information home, and that information can be requested by the FBI with a sweeping gag-order warrant? Seems like more security theater to me.


Even if we assume Google sends stuff to the FBI, you still want secure hardware so that nobody else can get at your stuff.


If your threat model is "being attacked by a surveillance actor", the fact that "nobody else can get at your stuff" (which is something that you are in a position to at least theoretically defend) is not very reassuring ...


I did not say that was my threat model. And yes, it's not very reassuring, but it is still important.


I hope Google sells these chips with a breakout board. Even better if you could order them with custom root signing keys


Look out for: https://keystone-enclave.org/

Google is part of this effort, so it's more likely that you can get your hands on some of the tech through this project.


I've been using the Titan, and my main feature request is to require a delay on pressing the large button to activate the beacon. Any time I pull it out of my pocket or bump it, it lights up and starts broadcasting. Yubico had this problem; there are images online of random keys showing up in tweets, social status updates, etc. I just got their new USB-C nano, and they added a delay that helps when you accidentally bump it.


Note that the Titan Security Key and the Titan M chip are two different things. In both cases the firmware is created and signed by Google in the USA, but the hardware used for the secure element in the Pixel 3 handsets and the Titan Security Key are different.

Also, the Bluetooth Titan Security Key has its own battery since it needs to be able to power the BT radio when it's not connected to anything else. So if you accidentally hit its button while you pull it out of your pocket, it can start transmitting.

In the case of the USB-C and USB Security Keys, (a) they are powered off the USB bus and have no batteries, so they are inactive when they are not powered up, and (b) U2F keys don't need to look like a keyboard (e.g., be a USB HID device). So random strings showing up when you accidentally touch a U2F key is never a thing. The issue with Yubikeys is that they can be both a U2F security key and a traditional HOTP token. If you disable the HOTP feature in a Yubikey device (using the Yubikey personalization tool), the problem of random HOTP passwords showing up in tweets, etc., goes away.


It’s not very well documented but you can change this feature of the Yubikey using the YubiKey Personalization tool:

https://support.yubico.com/support/solutions/articles/150000...

In addition to the two methods listed, you can also remove the feature completely by deleting the configuration of the OTP slot entirely.


Different Titan.


> For example, packing as many security features into Titan M's 64 Kbytes of RAM required all firmware to execute exclusively off the stack.

Do what now?

Edit: seriously, what does that sentence mean? Executing off the stack is super dangerous. Even on an M3 you can (and should) set up the MPU to have a non-executable stack.


It means the firmware does not have a heap for dynamic allocation, not that it's generating and executing code in buffers on the stack.


That also doesn't make sense. You commonly have a heap but no runtime allocation under such circumstances.

MISRA, JSF C++, and even NASA code guidelines say no dynamic allocation after initialization.

If it's written in C or C++, that's also crazy town, and you're pretty much guaranteed to have dangling pointers as auto allocated objects are left behind on old stack frames.


I don't know about "commonly". In my experience with embedded C, if you needed things you declared them statically and never used malloc. It's not like you're calling alloca and letting those pointers escape -- any (valid) pointer you might have is to a static allocation or memory mapped address.

C++ is a different game and I don't know anything about it though.

But yeah the way I read this is they don't use malloc, which is pretty standard. This is how I've heard it referred to many times, and nothing else makes any sense.
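To illustrate that style (a hypothetical sketch with made-up names, not Titan M's actual firmware): everything lives in fixed file-scope storage, so worst-case memory use is known at link time and malloc never appears.

```c
#include <stddef.h>
#include <stdint.h>

/* All storage is declared statically; sizes are fixed at compile time,
   so the linker knows the worst-case memory footprint up front. */
#define MSG_QUEUE_DEPTH 8
#define MSG_MAX_LEN     64

static uint8_t msg_pool[MSG_QUEUE_DEPTH][MSG_MAX_LEN]; /* backing storage */
static size_t  msg_len[MSG_QUEUE_DEPTH];
static size_t  msg_head, msg_tail, msg_count;

/* Reserve a slot; returns a pointer into static storage, never a stack
   or heap address. Fails loudly instead of allocating. */
static uint8_t *msg_enqueue(size_t len) {
    if (msg_count == MSG_QUEUE_DEPTH || len > MSG_MAX_LEN)
        return NULL;
    uint8_t *slot = msg_pool[msg_tail];
    msg_len[msg_tail] = len;
    msg_tail = (msg_tail + 1) % MSG_QUEUE_DEPTH;
    msg_count++;
    return slot;
}

/* Release the oldest slot, reporting its recorded length. */
static uint8_t *msg_dequeue(size_t *len_out) {
    if (msg_count == 0)
        return NULL;
    uint8_t *slot = msg_pool[msg_head];
    *len_out = msg_len[msg_head];
    msg_head = (msg_head + 1) % MSG_QUEUE_DEPTH;
    msg_count--;
    return slot;
}
```

Any valid pointer the program ever holds points into `msg_pool` or a memory-mapped register, which is what makes this pattern auditable.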


> I don't know about "commonly". In my experience with embedded C, if you needed things you declared them statically and never used malloc.

I mean, it's common enough that MISRA, the JSF standard, and the NASA standard all specifically call it out to allow it under these conditions.

> It's not like you're calling alloca and letting those pointers escape -- any (valid) pointer you might have is to a static allocation or memory mapped address.

You only need to call alloca on dynamically sized stack allocations. You can always leak pointers to fixed size objects by:

  void* woah_dont_do_this(void) {
    int value = 0;
    return &value;
  }
and boom, the pointer that gets returned is pointing at invalid memory. Of course, no one would write it like this, but it's way easier than you might think to accidentally do this once there's some abstraction.


That isn’t really a heap then. Just normal static allocation.


Nah, you can still call malloc/new. Worked on a sweet RTOS that had tons of default overrides for new that ultimately let you define all layout from your top level module despite following these guidelines. Let you share tons of code across boards, but still make decisions like "this guy's buffers should be this big, and stored in this ram bank" all from the top level C++ file for each board.

And it's definitely not stack allocated like the original point of this thread.
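For illustration, a malloc-shaped API over a static arena might look like this sketch (a hypothetical bump allocator, not the RTOS in question). It refuses requests after initialization, matching the "no dynamic allocation after init" guidelines mentioned upthread:

```c
#include <stddef.h>
#include <stdint.h>

/* A bump allocator over a fixed static arena: malloc-like API, but every
   byte it hands out lives in .bss, sized at link time. */
#define ARENA_SIZE 4096

static uint8_t arena[ARENA_SIZE];
static size_t  arena_used;
static int     init_done;

void *arena_alloc(size_t n) {
    n = (n + 7u) & ~(size_t)7u;   /* round up to keep 8-byte alignment */
    if (init_done || n > ARENA_SIZE - arena_used)
        return NULL;              /* no allocation after startup, no overflow */
    void *p = &arena[arena_used];
    arena_used += n;
    return p;
}

void arena_seal(void) { init_done = 1; }  /* call once init is complete */
```

A board-level file can size `ARENA_SIZE` (or several arenas, one per RAM bank) while the shared code only ever calls `arena_alloc`, which is roughly the layout-from-the-top-level scheme described above.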


Well but regardless, if you have a malloc or new implementation that uses a static buffer, then you're still not touching the heap. In fairness and as you point out, it's not the stack either, but they're obviously not letting stack allocations escape. Nothing would ever work.


> Well but regardless, if you have a malloc or new implementation that uses a static buffer, then you're still not touching the heap.

What do you think the heap is on systems without an MMU?

> In fairness and as you point out, it's not the stack either, but they're obviously not letting stack allocations escape. Nothing would ever work.

You can go for an awfully long time without knowing that you have dangling pointers to auto allocated objects.


The very next sentence doesn't make sense either:

> And to reduce flash-wear, RAM contents can be preserved even during low-power mode when most hardware modules are turned off.

Flash wear only occurs during erases, so... why does it matter to reduce wear in low-power mode? Why would they be writing to flash during normal operation, except perhaps an authentication counter?

One would hope that they're executing the program out of the flash (that they've previously verified and locked) rather than out of 'stack'. This isn't an operation that causes wear.


Maybe they're saying that otherwise low-power mode would slurp all of SRAM into flash?

But that still doesn't make sense. SRAM lasts for years on a battery if you're not clocking it, and every little uC I know of that vaguely cares about power lets you do that. Why call it out? Hell, NES games keep their SRAM saves going for decades off a little coin cell.


Can someone shed some light on why Google is not using ARM TrustZone technology? Many current Android OEMs are using it; in particular, Samsung KNOX's security mechanism is built around TrustZone.

What advantages do these Titan chips offer over the existing TrustZone technology?


Trustzone is on the same die, same physical memory. Presumably they think that it's potentially vulnerable to the sidechannel attacks they mention.

Plus, they want a full, open source, verifiable system which isn't possible with ARM IP.


> Titan M's CPU is an ARM Cortex-M3 microprocessor specially hardened against side-channel attacks and augmented with defensive features to detect and respond to abnormal conditions.

It would be nice to see this being replaced by an an open-source RISC-V processor in the future, too.


I think it would be replaced by a Cortex-M33, which has much stronger security features with TrustZone for microcontrollers.

I personally don't see the benefit to us of companies like Google using open-source processors. They are not going to open-source the design to the public anyway.


Something I've been worried a bit about with RISC-V, hopefully someone can tell me why I'm wrong. If I were implementing some cryptography in assembly on x86 or ARM I would make every effort to avoid branches and use conditional moves instead so as to be more resistant to timing attacks. Is this actually a common technique in computer security? And does RISC-V suffer from not having conditional moves?


RISC-V is technically an ISA with a reference implementation. In theory and in practice, it can be implemented as a low-power microcontroller or augmented with additional instructions to play in the supercomputer space. Processors simply cannot avoid conditional moves, but they can choose not to speculatively execute after a jump before they know the result of the jump condition.


It'd be better to implement it with accumulation rather than conditional moves on all of those other platforms anyway.

You can have branches, you just need to take the same branches regardless of the input.


You are in luck. That's pretty much what Keystone is. See: https://keystone-enclave.org/

Google is helping with this effort and it will be RISC-V enclave.


Security, but without privacy?



And even if it were ... so what? Security is security. Either it is, or it isn't. No trust involved.



