Hacker News
Learn how to design and defend an embedded Linux device (embeddedbits.org)
131 points by sprado on Nov 7, 2020 | hide | past | favorite | 35 comments



This article recommends that manufacturers put the Secure Boot public key in OTP memory, where the eventual owner of the device will be unable to change it, and also warns that the GPLv3 will prevent you from following some of its advice. It isn't about legitimate security at all, but rather about DRM that it tries to make sound like a good thing by pretending it's security.


I had a similar reaction to this. I don't see the use in ensuring the hardware runs software only from "trustworthy" persons, especially when the device's owner is not usually considered part of that group. This feels like something we ought to question: why is it acceptable that, even after buying some electronic device, there's a part of it the owner isn't allowed to touch? Embedded devices are just smaller, more focused versions of general-purpose desktop computers, and the latter have been operating just fine while allowing owners to run whatever code they want.


I agree with you for devices where the user should be considered 'trusted' - I like to tinker too.

But I think there are use-cases where this is legitimate, for example, ticket readers mounted on public transport where anyone could tamper with them out-of-hours, or a utility meter installed in my house where I might want to change the way it records consumption to get out of paying. Likewise, a payment terminal taking card payments in a restaurant - as a diner I'd quite like some assurance that someone couldn't tamper with it to record card details for example.


>ticket readers mounted on public transport where anyone could tamper with them out-of-hours

The owner of the ticket reader is the public transport authority.

>a utility meter installed in my house where I might want to change the way it records consumption to get out of paying

The owner of the utility meter is typically the company providing the utility.

In both of these cases, I would expect the owner of the device to have full control over it and not, say, only the manufacturer. If my city's government installs ticket readers in every subway station and is then perpetually beholden to a single private company to upgrade/maintain/improve them, that feels like a problem.

Note that this doesn't have to mean that someone with temporary access to the device should have full control over it! There's still a place for secure boot-style systems; it's all about who controls the keys.

>a payment terminal taking card payments in a restaurant - as a diner I'd quite like some assurance that someone couldn't tamper with it to record card details for example

This one is maybe a bit more concerning since it's a card issued by a separate authority, you don't have the ability to confirm transaction amounts on a device you trust, etc... but sure, in this case, I'd be fine with the hardware vendor or payment processor controlling the hardware.

(Though long-term I think this is a bit silly in the first place considering approximately everyone is walking around with computers in their pockets that would be perfectly capable of letting you confirm transaction amounts on a display you trust. Card payments currently are effectively handing someone your account details and saying "pretty please only take as much as you say you will.")


> Though long-term I think this is a bit silly in the first place considering approximately everyone is walking around with computers in their pockets that would be perfectly capable of letting you confirm transaction amounts on a display you trust.

This is how China's new-generation payment systems work.


It's a conundrum. I prefer to have devices that are open to user modification or fixes, but that inherently means bad actors could also readily modify the device. Though even devices locked with DRM can be rooted, so perhaps it's a false sense of security.

It'd be great if one could build on zero-knowledge proofs, with something like Monero, to let you verify the firmware of a device. Or rather, your phone/credit card queries the device with a ZKP (or something), then uses a public blockchain to verify the device's firmware signature chain. A user could then use "adblock"-like software to warn of manufacturers that have been breached, etc.


A random third party owning the trust root doesn't help you in any of those situations.


Yes it does, because it's not a random third party.

It's the device manufacturer.

Everyone is already dependent on some level of integrity by the device manufacturer, whether they're happy with this or not, because there is no other option.

That integrity might be checked. The manufacturer may be audited. Their processes and people may be background checked. Their hiring practices subject to a standard. Some of their devices may be selected at random, scrutinised, picked apart, checked by third parties, just to be sure. It's not done much, but perhaps it should be. Anyway, if that's done there is some higher level of justification in trusting the manufacturer's integrity, even if it remains a weak point.

If we already have to trust the device manufacturer and/or their auditors, that makes them owning the software trust root a very different proposition compared with anyone else owning it.


>That integrity might be checked

It might also be broken, and if they don't care about you, you're screwed.

Also, as has been shown many a time, the manufacturer will prevent you from doing whatever you please with your own hardware, if you choose to do so. In that case, their integrity is broken by design.


That's all true, but the context here is things like payment terminals, ticket machines and energy charging meters, and whether it makes sense for third parties to easily modify the software running on them.

That hardware is not "your own". It is deployed to facilitate and protect a transaction between you and someone else.

There certainly exists a possibility that the manufacturer of those devices doesn't care about protecting them, with the result that you the user get over-charged, have your card details stolen, etc.

But it's hard to see how making it easy for "anyone" to modify the software on those kinds of devices in an unconstrained way doesn't pose strictly greater risks of the above kind to you the user (being over-charged etc).

Surely you would rather have to trust just a few entities in control of the device, who have some kind of quality control legal obligation through the usual network of contracts and liabilities, than trust the 100s of entities that have had some contact at some point with the device, any of whom could have modified the software on it?

Any ideas on how to solve this problem which don't involve trusted roots?


I see there was a misunderstanding. The owner of the payment terminal has to be the trusted root, not the manufacturer.


Outside of consumer electronics, the relationship between hardware and software is tighter. Customers don't just buy an electronic device; they also want the software that's inside.

Furthermore, in this context, systems are often sold and set up by a third-party company. End users value the assurance that the software run by the device has not been tampered with more than the ability to change that software.

Even for desktop PCs in companies, it is not uncommon for a rather rigid management policy to be enforced, so in a way those PCs are just general-purpose, locked-down embedded systems.


There is a parallel in non-electronic hardware.

Certain safes irreparably break after several attempts to incorrectly or violently unlock them. Nobody — neither a thief, nor a legitimate owner, nor the manufacturer — can open them. The only way to reclaim the contents is to very slowly, and with great effort, cut them open using serious industrial machines.

This is a feature that customers ask for. They want to be sure that snatching their safe to quietly brute-force the lock or the door in a garage does not make sense. It deters such attempts, and customers agree to pay for that with the risk of turning their safe into a piece of scrap if they screw up badly.

Same applies to locking bootloaders, firmware, etc. Sometimes it's better to throw away a device than to allow a risk of tampering.

Of course, the owner should voluntarily and consciously make this decision. In the case of DRM-ridden media players, or even phones, the consumer may have different preferences but is not given a choice, and often is not even made aware, which, of course, is not great.


The key is that the manufacturer doesn't have a way in either. The GPLv3 is okay with making software absolutely immutable in a piece of hardware. It's just not okay with locking the user out but still letting the manufacturer in.


> Nobody — neither a thief, nor a legitimate owner, nor the manufacturer — can open them. The only way to reclaim the contents is to very slowly, and with great effort, cut them open using serious industrial machines.

The analogy fails if there's no way of "cutting open" a gadget/cellphone/PC to regain control of it. The bank is perfectly capable of recovering the contents of the safe, even after a destructive failsafe, they just pay someone and wait a day.


Is there another way to implement secure boot for embedded devices? I've been racking my brain, and the only approach I can think of is to have a flash of an image-verification key also trigger on-device regeneration of a private key. Then require the device to sign something with that private key to verify the boot. All of that would require a complicated boot process and probably an embedded controller to facilitate it.


Raptor Engineering's FlexVer is specifically working to provide a trust root without burning in keys.

The problem is that it essentially requires an extra FPGA and a few other components to provide the necessary secure key storage and attestation.


This article is not about security. It's about device lock-in, so the user will need to jailbreak the device to regain control.

IMHO, tivoization of devices should be banned by law. Currently, only the GPLv3 forbids it, so please use the GPLv3 for your new open source projects.


Frankly, I think open source needs to embrace the idea of 'no commercial use' as a viable license choice.

I'm not keen on the idea of a million/billion/billion+ dollar company taking code I've written for my own enjoyment and using it to disproportionately enrich the company's shareholders or abuse its customers by denying them control over their devices.

Code that can be used in closed devices shouldn't be available for the manufacturers of closed devices to use.


> abuse its customers by denying them control over their devices

That's exactly what tivoization is, so the GPLv3 already prevents that.


Please use the AGPL (3), which also defends against SaaSS. (Legitimate SaaS providers can still use it.)


It doesn't stop providers from hosting it unmodified, or with modifications that they don't care about releasing—that's why Mongo switched from it, because other providers just provided hosting for it unmodified.


They're providing a service to people – and giving back their improvements, so you can then use them (though not in your proprietary version, if you have one, unless you negotiate that). Unless your business is in hosting versions of (the free version of) your program, problem solved.


A question very marginally related to embedded Linux, but is there any way to do hard real time on Linux? I'm working on an embedded project that requires it, and while it's achievable with microcontrollers and embedded Rust, it would just be so much easier if I had an actual OS.


The wiki for the PREEMPT_RT patchset has a general HOWTO: https://rt.wiki.kernel.org/index.php/HOWTO:_Build_an_RT-appl...

Generally speaking, your limiting factor wrt. hard RT performance is going to be the underlying hardware, not the OS per se. This of course assumes that you avoid known sources of OS-related latency.


> A question very marginally related to embedded Linux, but is there any way to do hard real time on Linux?

Go look at the Beaglebone Black/Green/Blue/AI.

The microprocessors in those run Linux but have at least two PRUs (programmable real-time units). The PRUs are bare-metal, hard real-time, but can communicate with the Linux system, which does the "soft" real-time stuff.


A number of SoC makers have adapted by adding Cortex-M4s or M7s as real-time coprocessors alongside the main application core. There's also the TI Sitara (BeagleBone) with its PRUs, which can function as a real-time assist.


Yes, there's nothing especially challenging about using Linux for hard realtime. You still need to address your IO to ensure realtime constraints, but you get the benefit of already-written realtime ethernet drivers, etc.

It's also common to run your code as init and avoid calling into the kernel.

A more modern treatment is to write whatever and then qualify the system thoroughly, and all things considered this will probably overtake everything else as the main development methodology.
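As an aside, the standard companion to keeping your code away from the kernel is CPU isolation via kernel boot parameters, so scheduler noise, timer ticks, and RCU callbacks stay off the core running your RT thread. A sketch (the core number, priority, and binary name are placeholders):

```shell
# Appended to the kernel command line in the bootloader config:
# keep core 3 clear of the scheduler, periodic ticks, and RCU callbacks.
isolcpus=3 nohz_full=3 rcu_nocbs=3

# Then pin the real-time process to that core with a FIFO priority:
taskset -c 3 chrt -f 80 ./my_rt_app
```

This doesn't make Linux hard real time by itself, but it removes most of the OS-induced jitter the PREEMPT_RT wiki warns about.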


I've not done hard real time for a number of years (usually I have found it's not needed), but Xenomai would be one option for real hard realtime. The other option is to use the rtlinux patches; however, that is not true hard real time, though for most use-cases it works OK.


Interesting, thank you! I'll take a look at Xenomai, but it seems like the options are very limited. I'm building a TVC rocket, so hard real time is unfortunately a strict requirement.


I think you can do hard real time inside drivers. The other option is that some embedded-Linux-capable chips have TPUs, aka Time Processing Units.

For something like a thrust vectoring rocket, I'd stick to a sub processor running an actual RTOS or bare metal code.


Xenomai might be fast enough, with some coprocessing added if necessary. It gives your RT threads an essentially "normal" POSIX RT API, but runs them on a separate kernel that can respond to things like interrupts much, much faster (including stable timeslicing).


Have you looked into Nuttx?


This is a bad article. No practical technical discussion of the topic; many flowcharts and bizspeak.


This is really good! Thank you for writing something that's so easy to understand.



