What would be the point of this change? It erodes security in some moderately meaningful way (making it even easier to supply-chain-attack new computers by swapping the boot disk) to add what amounts to either a nag screen or nothing, in exchange for some ideological purity about Microsoft certificates?
It really doesn't. UEFI setup still isn't locked behind a password by default (and it can't ship locked, since then you couldn't change UEFI settings at all), so anyone with access to swap a drive can also disable Secure Boot or enroll their own keys if they want to do an actual supply chain attack.
If your threat model is "has access to the system before first boot" you are fucked on anything that isn't locked down to only the manufacturer.
What if my threat model is "compromised the disk imaging / disk supply chain?" This is a plausible and real threat model, and represents a moderate erosion, like I said.
UEFI Secure Boot is also just not a meaningful countermeasure for anyone with even a moderate paranoia level anyway, so it's all just goofing around at this point from a security standpoint. All of these "add more nag screens for freedom" measures like the grandparent post and yours don't really seem useful to me, though.
This is a fascinating thing to post on an article about… bypassing UEFI Secure Boot?
PKFail, BlackLotus/BatonDrop, LogoFail, BootHole, the saga continues. If you’ve ever audited a UEFI firmware and decided it’s going to protect you, I’m not sure what to tell you.
To be clear, it’s extremely useful and everyone should be using it. It’s also a train wreck. Both things can be true at the same time. Using Secure Boot + FDE keys sealed to PCRs keeps any rando from drive-by’ing your machine. It also probably doesn’t stop a dedicated attacker from compromising your machine.
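For the curious, here’s roughly what that sealing looks like on a systemd-based distro. A minimal sketch driving the real CLI from Python; the device path is a placeholder, and it assumes a recent systemd with TPM2 support:

```python
# Hedged sketch: bind a LUKS FDE keyslot to the TPM, sealed against PCR 7
# (the Secure Boot policy measurement). The device path is a placeholder.
import subprocess

luks_device = "/dev/nvme0n1p2"  # hypothetical LUKS2 partition

# systemd-cryptenroll stores a TPM2-wrapped key in a LUKS keyslot; the TPM
# only unseals it if PCR 7 still matches, i.e. the Secure Boot config is
# unchanged. Tampered firmware/bootloader => fall back to the passphrase.
subprocess.run(
    ["systemd-cryptenroll", "--tpm2-device=auto", "--tpm2-pcrs=7", luks_device],
    check=True,
)
```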
> No one said anything about a nag screen.
The parent post suggested that Secure Boot arrive in Setup Mode. Either the system can automatically enroll the first key it sees from disk (supply chain issue, like I posted), or it nags the user through a key hash / enrollment process. Or it does what it does today.
> For the record google pixels work largely this way. Flash image, test boot, re-lock bootloader
So do UEFI systems. Install OS, test boot, enroll PK. What the OP is proposing is basically if your Android phone arrived and said “Hi! Would you like to trust software from Google?!?!” on first boot.
And how many times has Intel's trusted computing platform been breached now? Would you also claim that SGX is not a meaningful security measure? Recall that the alternative to SecureBoot is ... oh that's right, there isn't an equivalent alternative.
People have broken into bank vaults. That doesn't mean that bank vaults don't provide meaningful security.
> So do UEFI systems. Install OS, test boot, enroll PK.
"Enroll PK" is "draw the rest of the fucking owl" territory.
I believe you somewhat misunderstood OP. The description was of the empty hardware. Typical hardware would ship with an OS already installed and marked as trusted. It's the flow for changing the OS that would be different.
> automatically enroll the first key it sees from disk (supply chain issue, like I posted)
I'm unconvinced. You're supposing an attacker that can compromise an OEM's imaging solution but not the (user configurable!) key store? That seems like an overly specific attack vector to me.
The breach in TFA happened because Microsoft actually did something benevolent and it blew up in their face. Now almost all of the hardware that takes security a bit seriously (basically expensive business class computers) has to upgrade its UEFI FW (many have already done so via Windows Update).
No single layer will protect you fully. UEFI SB is just one layer. And nothing would ever protect you from a dedicated nation state (except another nation state). Unless you own the entire supply chain, from silicon contractors all the way up to every single software vendor and every single network operator, you cannot fully prove things aren't snitching on you.
With almost all modern motherboard firmware you can enter Setup mode and use KeyTool to configure the trust store however you want, starting from enrolling a user PK (Platform Key) upwards.
It’s generally a lot more secure to avoid the use of any shims (since they leave you vulnerable to what happened in this article) and just build a unified kernel image (UKI) and sign that.
Some systems need third party firmware to reach the OS, and this can get a bit more complicated since those modules need to load with the new user keys, but overall what you are asking is generally possible.
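For reference, a rough sketch of that flow, assuming a recent systemd (for ukify) and sbsigntools; every path and key name below is a placeholder:

```python
# Hedged sketch: build a unified kernel image and sign it with your own
# db key, no shim involved. Paths and key names are placeholders.
import subprocess

# Bundle kernel + initrd + cmdline into one UEFI-bootable PE binary.
subprocess.run(
    ["ukify", "build",
     "--linux=/boot/vmlinuz",
     "--initrd=/boot/initrd.img",
     "--cmdline=root=/dev/nvme0n1p2 rw",
     "--output=/boot/EFI/Linux/linux.efi"],
    check=True,
)

# Sign the image with a db key previously enrolled via KeyTool / Setup Mode.
subprocess.run(
    ["sbsign", "--key", "db.key", "--cert", "db.crt",
     "--output", "/boot/EFI/Linux/linux.efi.signed",
     "/boot/EFI/Linux/linux.efi"],
    check=True,
)
```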
> It's really funny to me that Microsoft's attempt to finally stamp out desktop Linux once and for all failed
This conspiracy was never true and never happened. First off, note that the first version of the thing in the article you’re commenting on relied on a Fedora shim loader which Microsoft signed. Second off, note that almost all modern motherboards let you enroll your own UEFI keys and do not rely exclusively on the Microsoft keys.
The only place this was becoming an issue for Linux was early Secure Boot implementations where the vendor was too lazy to allow key enrollment, and that era has generally passed.
I don't think it deserves quite that easy a dismissal; MS did lock down early UEFI+ARM devices and prohibit user-controlled keys (see the Windows RT devices as an example). Given their history of playing dirty, it's perfectly reasonable that people assumed this to be another play at undermining Linux, even if things didn't end up going that way.
By 2019, when the parent article was written, I don't think that was a good read on the situation. By 2026, when the parent comment was written, I really don't think it's a good read on the situation.
It's hard to believe when MS use Secure Boot to prevent Linux from being able to boot. Twice now on my dual-boot system a Windows update has left Linux unbootable. If it weren't for MS's history, one might consider it the accident of a ridiculously inept company.
Even just the lies around required hardware upgrades are enough not to trust them.
SecureBoot looks like a system designed to make it hard to change OS, it has been used by MS for exactly that, and MS have a history of user-antagonistic actions.
You say the conspiracy was never true, I'm going to need some serious proof.
> SecureBoot looks like a system designed to make it hard to change OS
To be fair, SecureBoot is in a way just that: it is intended to only boot binaries signed with a key that has been enrolled into the UEFI. The main issue is, as almost always, how those keys are managed.
If you buy cheap HW you'll get totally half-assed firmware. It is usually the firmware that causes stupid reordering of the boot entries and weird resets.
Business class computers (Thinkpads, Latitudes or Elitebooks) have only somewhat half-assed firmware, so you usually don't encounter shenanigans like that.
Only server computers have firmware that's almost not half-assed. They are very reliable but take forever to boot.
If you want non-half-assed firmware, found your own computer company or join big tech where they can afford custom motherboards with their own firmware.
> Most motherboards include only Microsoft keys as trusted
Is this really true, in 2019 when this was written or today? I haven’t seen a motherboard that didn’t let me enroll my own keys in a really long time. Laptops are a different story, but even there, it’s been a while.
> Microsoft forbid to sign software licensed under GPLv3 because of tivoization restriction license rule
> Is this really true, in 2019 when this was written or today?
This is true in the sense that they only ship with MS' keys as trusted, but all MoBos (including laptops) I've had allow enrolling your own. Some might handle not having MS' keys better (or at all) than others, but it should in theory be possible to remove them; whether it will boot afterwards is a different question (see option ROMs [1]).
You are missing the point. It's the fault of those who pushed SecureBoot down our throats (and don't get me wrong: I use SecureBoot) to have decided that Microsoft got a free pass to have its certs by default in every UEFI out there while no other certs did.
So users either have to understand how to enroll their own certs or to use a shim signed by... Microsoft.
Let's not forget that we're talking about the company responsible for Windows 11 here.
Of the GPLv3 sentence? No, it's dishonest rhetorically. Of the piece? Also I don't think so, exploiting the shims is a fun way to prove that Secure Boot is silly but we already knew that, and by 2019 claiming that "most" systems only allowed Microsoft keys is just flat out dishonest as well.
> It's the fault of those who pushed SecureBoot down our throats (and don't get me wrong: I use SecureBoot) to have decided that Microsoft got a free pass to have its certs by default in every UEFI out there while no other certs did.
I really don't get this argument in general; Microsoft certs are enrolled by default as a combination of: a matter of convenience for the majority of users, who are going to use Microsoft OSes; the unfortunate design of Option ROM signature checking; and the desire to get a Windows logo on the packaging and Microsoft OEM discounts.
There's no Secure Boot or UEFI related reason that boards can't come in Setup Mode with no PK, and most industrial boards do indeed come this way, since they don't need Option ROMs and customers don't want a Microsoft logo.
> So users either have to understand how to enroll their own certs or to use a shim signed by... Microsoft.
This seems to me like the best outcome for end-user computers that will have Windows installed? Users get a computer that checks that the OS it came with is valid (well, tries to, but that's a different can of worms), and they still have the option to do whatever they want with it if they so desire. They can choose the Microsoft-signed shim for convenient dual-booting, or erase the platform keys and own their system end to end if they wish.
> Let's not forget that we're talking about the company responsible for Windows 11 here.
I've never really understood these arguments, and it's even weirder to bring Windows 11 into it. Is the thing we're railing against here Ballmer-Borg Microsoft? Shitty Product Management Kills Products Microsoft? AI Infested Microsoft? The Venn diagram of overlap between 1990s Microsoft (genesis of UEFI), 2012 Microsoft (Secure Boot introduction), and 2025 Microsoft (Windows 11) seems likely to be... quite small.
It's both; it's aimed at hosting a single user program on top of another userspace, but it also seems to have its own kernel?
The "North" part seems to be what I think you'd traditionally think of as a library OS, and then the "South" part seems to be shims to use various userlands and TEEs as the host (rather than the bare hardware in your example).
I'm really confused by the complete lack of documentation and examples, though. I think the "runners" are the closest thing there is.
It's a library that is linked to in place of an operating system - so whatever interface the OS provided (syscalls+ioctls, SMC methods, etc.) ends up linked / compiled into the application directly, and the "external interface" of the application becomes something different.
This is how most unikernels work; the "OS" is linked directly into the application's address space and the "external interface" becomes either hardware access or hypercalls.
Wine is also arguably a form of "library OS," for example (although it goes deeper than the most strict definition by also re-implementing a lot of the userland libraries).
So for example with this project, you could take a Linux application's codebase, recompile it linked to LiteBox, and run it on SEV-SNP. Or take an OP-TEE TA, link it to LiteBox, and run it on Linux.
The notable thing here is that it tries to cut the interface in the middle down to an intermediate representation that's supposed to be sandbox-able; i.e., instead of auditing and limiting hundreds of POSIX syscalls like you might with a traditional kernel capabilities system, you're supposed to be able to control access to just the few primitives they're condensed down to in the middle.
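Something like this toy sketch, with hypothetical names (not LiteBox's actual API), just to show the shape of the idea:

```python
# Illustrative sketch only: how a library OS can funnel a wide POSIX-ish
# surface down to a few host primitives that a sandbox can actually audit.
from typing import Protocol


class HostPrimitives(Protocol):
    """The narrow waist: the only calls that ever cross the sandbox boundary."""
    def blob_read(self, handle: int, offset: int, length: int) -> bytes: ...
    def blob_write(self, handle: int, offset: int, data: bytes) -> int: ...
    def entropy(self, n: int) -> bytes: ...


class LibraryOS:
    """POSIX-flavored calls re-implemented as plain library code on top."""
    def __init__(self, host: HostPrimitives) -> None:
        self.host = host

    def pread(self, fd: int, count: int, offset: int) -> bytes:
        # In a traditional kernel this is one of hundreds of syscalls to
        # audit; here it's just code that bottoms out in one already-policed
        # primitive.
        return self.host.blob_read(fd, offset, count)
```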
> So for example with this project, you could take a Linux application's codebase, recompile it linked to LiteBox
If you have to recompile, you might as well choose to recompile to WASM+WASI. The sandboxing story here is excellent due to its web origins. I thought the point of LiteBox is that recompilation isn’t needed.
Looking more closely, it looks like there are some "North" sides (platforms) with ABI shims (currently Linux and OP-TEE), but others (Windows, for example), would still require recompilation.
> If you have to recompile, you might as well choose to recompile to WASM+WASI.
I disagree here; this ignores the entire swath of functionality that an OS or runtime provides? Like, just as an example, I can't "just recompile" my OP-TEE TA into WASM when it uses the KDF function from the OP-TEE runtime?
I have prior experience with WASM on TEEs. Just use the foreign function interface. Remember WASM isn't native code, so you still need other native code to run WASM (such as wasmtime), and you can import other native functions into WASM through the runtime.
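A hedged illustration with wasmtime's Python bindings (assuming a recent wasmtime-py); hashlib's PBKDF2 stands in for a real TEE-backed KDF here:

```python
# Sketch: expose a host-side KDF to a WASM module via the runtime's FFI.
# The key material stays on the host; the module only sees derived output.
import hashlib
from wasmtime import Engine, Store, Module, Linker, Func, FuncType, ValType

engine = Engine()
store = Store(engine)
linker = Linker(engine)

SECRET = b"host-held key material"  # never copied into WASM linear memory

def host_kdf(salt: int) -> int:
    # Derive bytes from the host-held secret; a real design would write into
    # the module's memory rather than squeezing the result through an i32.
    out = hashlib.pbkdf2_hmac(
        "sha256", SECRET, (salt & 0xFFFFFFFF).to_bytes(4, "little"), 1000
    )
    return int.from_bytes(out[:4], "little") & 0x7FFFFFFF

linker.define(store, "host", "kdf",
              Func(store, FuncType([ValType.i32()], [ValType.i32()]), host_kdf))

module = Module(engine, """
(module
  (import "host" "kdf" (func $kdf (param i32) (result i32)))
  (func (export "run") (param i32) (result i32)
    local.get 0
    call $kdf))
""")
instance = linker.instantiate(store, module)
run = instance.exports(store)["run"]
print(run(store, 42))  # derived value; SECRET never entered the sandbox
```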
Any pure code (WASM or otherwise) that does not perform any input/output is by definition useless. It consumes electricity to do computation and there is no way to communicate its results.
The use case here was to use a KDF function from the TEE, and I assume it serves as an oracle where the actual key material cannot be revealed.
Turing machines have a well-defined input, and output if they halt.
So no, they are absolutely not useless, they are just "single-shot" models of computation. Certain software fit that model very nicely (e.g. compilers), others less so.
It's absolutely trivial to make a very strict sandbox - just a simple, mathematical Turing machine is 100% safe.
The hard part is having actual capabilities, and only WASI (which is much smaller than WASM) helps here, and it's not clear why it would be any better than other options, like LiteBox. Especially since wasm does have a small but real overhead.
> you can’t expect every game studio to have the expertise to write secure, reliable kernel drivers.
If someone wants to sell something that comes with a driver, the driver needs a modicum of care applied to it. This is of course also on Microsoft for signing these things, although that ship sailed ages ago.
Yes, I wouldn't expect every studio to need their own team - game studios can buy anti-cheat middleware, and the middleware can compete on not being total junk (which is how the industry already works, with a side helping of these more obscure awful drivers and a few big studios with their own).
> If Microsoft wants Windows to be more stable and secure, they should provide built-in anti-cheat support in the OS.
I guess they could have users approve a set of signed applications that would get some "authenticated" way to read address space and have an attestation stapled to it? It's actually kind of an interesting idea. The hardest part here would be that each anti-cheat tries to differentiate with some Weird Trick or another, so homogenizing the process probably isn't really appealing to game developers.
Anti-cheat could go the opposite direction, with basically a "fast reboot" into an attested single process VM sandbox, but this has issues with streaming/overlays and task switching which are a bit thorny. I've always thought that this might be the way to go, though - instead of trying to use all kinds of goofy heuristics and scanning to determine whether the game's address space has been tampered with or there's a certain PCIe driver indicating a malicious DMA device or whatever, just run the game in a separate hypervisor partition with a stripped down kernel+OS, IOMMU-protected memory, and no ability to load any other user code, like a game console lite.
I think we ended up in this situation because of this outsourcing. Competitive games and MMOs need a comprehensive security solution, as cheating has a lasting global impact on matchmaking. Attackers may also have financial motivation to attack the anti-cheat in these games.
Co-op games might not need as much security as competitive games, as some games do not have global state, or the global state is simply cosmetics. Since nowadays all the anti-cheat you can buy (except VAC) is kernel mode, you'll have to accept the security risk just to have fun with your friends.
Thanks! I had no idea it was already being used in the wild. It's a good case study for why shipping signed drivers with exposed IOCTLs and weak authentication is such a liability, even if (especially if) the developer never bothers to even load them.
The subscription services have assumptions baked in about the usage patterns; they're oversubscribed and subsidized. If 100% of subscriber customers use 100% of their tokens 100% of the time, their business model breaks. That's what wholesale / API tokens are for.
> hitting that limit is within the terms of the agreement with Anthropic
It's not, because the agreement says you can only use CC.
This is how every cloud service and every internet provider works. If you want to get really edgy you could also say it's how modern banking works.
Without knowing the numbers it's hard to tell if the business model for these AI providers actually works, and I suspect it probably doesn't at the moment, but selling an oversubscribed product with baked in usage assumptions is a functional business model in a lot of spaces (for varying definitions of functional, I suppose). I'm surprised this is so surprising to people.
Don't forget gyms and other physical-space subscriptions. It's right up there with razor-and-blades for bog standard business models. Imagine if you got a gym membership and then were surprised when they cancelled your account for reselling gym access to your friends.
If they rely on this to be competitive, I have serious doubts they will survive much longer.
There are already many serious concerns about sharing code and information with 3rd parties, and those Chinese open models are dangerously close to destroying their entire value proposition.
The business model is Uber. It doesn't work unless you corner the market and provide a distinct value replacement.
The problem is, there's not a clear everyman value like Uber has. The stories I see of people finding value are sparse and seem to come from the POV of either technosexuals or already-strong developer whales leveraging the bootstrappy power.
If AI was seriously providing value, orgs like Microsoft wouldn't be pushing out versions of windows that can't restart.
It clearly is a niche product, unlike Uber, but it's definitely being invested in like it is a universal product.
> selling an oversubscribed product with baked in usage assumptions is a functional business model in a lot of spaces
Being a common business model and it being functional are two different things. I agree they are prevalent, but they are actively user hostile in nature. You are essentially saying that if people use your product at the advertised limit, then you will punish them. I get why the business does it, but it is an adversarial business model.
> Without knowing the numbers it's hard to tell if the business model for these AI providers actually works
It'll be interesting to see what OpenAI and Anthropic will tell us about this when they go public (seems likely late this year, along with SpaceX, possibly).
> Selling dollars for $.50 does that. It sounds like they have a business model issue to me.
It's not. The idea is that the majority of subscribers don't hit the limit, so they're selling them a dollar for $2. But there's a minority who do hit the limit, where they're effectively selling a dollar for 50c, and the aggregated numbers can still be positive.
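With made-up numbers, the aggregate math looks like this:

```python
# Made-up illustration of the oversubscription math: most subscribers are
# cheap to serve, a minority cost more than they pay, aggregate is positive.
subscribers = 1000
price = 20.0                          # hypothetical $/month plan

light_share, light_cost = 0.90, 10.0  # 90% use half of what they pay for
heavy_share, heavy_cost = 0.10, 40.0  # 10% cost "a dollar for 50c"

revenue = subscribers * price
cost = subscribers * (light_share * light_cost + heavy_share * heavy_cost)
print(revenue, cost, revenue - cost)  # 20000.0 13000.0 7000.0
```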
> It's not, because the agreement says you can only use CC.
It's like Apple: you can use macOS only on our Macs, iOS only on iPhones, etc. But at least in the case of Apple, you pay (mostly) for the hardware, while the software it comes with is "free" (as in free beer).