I'm pretty sure that high performance open source CPUs will have their own obscure problems. Too much complexity, too many dependencies, too many possible feature interactions.
They will, but you'll be able to understand the problem, the fix, and how it combines with other fixes.
I can definitely see a world soon where all of Intel's woes have combined to the point that they've run out of patch space for their microcode updates, and you have to pick and choose what you want mitigations for.
That is undoubtedly true, but at least you will have more engineers who are able to dig into these behaviours, identify the root cause, and fix them.
If you are badly impacted by a bug and no one else is, you are the only one with an incentive to find and fix it. You might pay the CPU manufacturer to share the incentive with them, but you'd need quite deep pockets for this.
I wouldn't be surprised if widespread open source CPUs also had better debugging tools at their disposal.
Is that actually better in this case? Intel found the issue internally. Nobody knows what it is. The advisory isn't sufficient information to figure it out. People can patch at their leisure, fairly sure that nobody is about to pop up with a 1-day exploit for it.
With an open source CPU, by now someone would have looked at the commits that fixed the Verilog/microcode, figured out what the bug is, and there'd be a convenient command line tool to get root on the hypervisor uploaded to GitHub within an hour.
This is one of those times when from a practical perspective proprietary seems to win.
> This is one of those times when from a practical perspective proprietary seems to win.
I'm not sure about that, but I must say that Intel is being surprisingly candid. Similar errata have been swept under the rug for years, published a dozen at a time with no workarounds.
I can be fairly sure, as otherwise Intel wouldn't be claiming the issue was found via their own internal audits, and someone else would likely be writing about it.
> Is that actually better in this case? Intel found the issue internally. Nobody knows what it is. The advisory isn't sufficient information to figure it out. People can patch at their leisure, fairly sure that nobody is about to pop up with a 1-day exploit for it.
That's pretty bad, actually. It means a determined adversary can simply look at the patch to figure out how to exploit vulnerable systems. (Presumably there exists a way to look at the actual unencrypted bytes being modified; if so, you can work out what it's doing.)
And since people can patch at their leisure, a determined adversary will have lots of targets to choose from after they analyze the patch.
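The "look at the patch" step above is just patch diffing: byte-compare the unpatched and patched blobs to find where the fix landed, then reverse engineer those spots. A minimal sketch of the comparison step (the blob contents here are made-up stand-ins, not real microcode):

```python
# Hypothetical sketch of the first step of patch diffing: locate the
# contiguous byte ranges that differ between two firmware images.
def diff_regions(old: bytes, new: bytes):
    """Return (offset, old_bytes, new_bytes) for each run of differing
    bytes. Assumes the two blobs have the same length/layout."""
    regions = []
    i, n = 0, min(len(old), len(new))
    while i < n:
        if old[i] != new[i]:
            start = i
            while i < n and old[i] != new[i]:
                i += 1
            regions.append((start, old[start:i], new[start:i]))
        else:
            i += 1
    return regions

# Toy example: a single patched byte at offset 4.
unpatched = bytes.fromhex("deadbeef00cafebabe")
patched   = bytes.fromhex("deadbeef90cafebabe")
print(diff_regions(unpatched, patched))  # [(4, b'\x00', b'\x90')]
```

Real-world tooling (e.g. BinDiff-style comparison at the disassembly level) is far more involved, but the attacker's starting point is exactly this: the vendor's fix tells you where the bug is.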
To be fair, I don't know much about CPU microcode. But while it's true that lone-wolf hackers are less likely to be a threat here, a threat does exist: governments are increasingly turning to industrial-espionage-type practices (apparently even the NSA https://en.wikipedia.org/wiki/ECHELON#Examples_of_industrial...), and this type of exploit seems, at a glance, pretty lucrative: unauthenticated users can achieve privilege escalation.
It's easy to imagine some facility somewhere of industrious Chinese reverse engineers who are pretty darn good at this, and that it's their full-time job to find and weaponize such exploits. In fact, swap out "Chinese" with "American" and you get the NSA.
> With the Pentium there are two layers of encryption, and the precise details are explicitly not documented by Intel, being known only to fewer than ten employees.
I guess I'll leave the comment up, since... well, I was formerly a pentester, and it seemed like a logical sequence of arguments. That's where I learned about the technique of looking at binary diffs to work out what security patches were doing.
It's very strange to me that this is possible to encrypt, though. Isn't it "just" a matter of getting your hands on a processor + the update? Why is it impossible to dump the microcode as it's being decrypted? Sure, you won't be able to analyze the patch before it's decrypted, but are we just relying on the idea that it's too much work for someone to figure out how to listen in on the decrypting process?
Following that Wikipedia citation, the quote about it being in the heads of fewer than ten employees is from 1997, so it's ancient information. I'm curious what the current state of the art is.
They're encrypted with a key that ships on every processor. A combination of classic espionage and electron microscopes means that we should assume state actors know the exact mechanism of microcode update changes.
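The structural weakness being described is that a symmetric key baked into every die is a break-once-decrypt-everything secret. A toy illustration (emphatically NOT Intel's actual scheme; the key, payload, and hash-based stream cipher are all made up for the example):

```python
# Toy model: if every chip carries the same symmetric key, extracting
# that key from any single chip decrypts every vendor update.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from the key (illustrative only)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR stream cipher; applying it twice with the same key round-trips."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

SHARED_KEY = b"same key baked into every chip"   # hypothetical
update     = b"patch: disable broken uop fusion" # hypothetical payload

blob = xor_cipher(SHARED_KEY, update)  # what the vendor distributes
# An attacker who recovered SHARED_KEY from one die reads all updates:
assert xor_cipher(SHARED_KEY, blob) == update
```

This is why "encrypted updates" buys confidentiality only against adversaries who can't afford the one-time cost of key extraction, which is exactly the class of adversary the parent comment is worried about.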
Thanks! In that case, they seemed to have a good point: if it was an open source CPU, it seems like security might be an issue.
One way to do it: ship security updates using the same technique as intel, and don't release the source code for the fix until much later. I think I remember an open source project doing something similar. But of course, it seems pretty hard to manage that complexity: what if the fix introduces code changes that future commits depend on?
I'm by no means an expert, but open design and verification of secure enclaves seems quite feasible -- keys would differ between different chip makers, I imagine. Folks could write patches, but perhaps not sign them, even for hardware they own. Though I'd expect most maintainers to work with the community to get bugs fixed.