I'm starting to consider whether this reflects a larger failure in the industry/community: Traditionally, many of us (I'd say almost all) have been focused on security at the OS level and above. We've assumed that the processor and related hardware are safe and reliable.
However, much new technology has been introduced below the OS level that has greatly increased the attack surface, from processor performance enhancements such as branch prediction to subsystems such as Intel ME. I almost feel Intel broke a social compact that their products would be predictable, safe commodities on which I could build my systems. But did those good old days ever really exist? And of course, Intel doesn't want their products to be commodities, which is likely why they introduced these new features.
Focusing on OS and application security may be living in a fantasy world, one I hesitate to give up because the reality is much more complex. What good are OpenBSD's or Chrome's security efforts, for example, if the processor on which they run is insecure and if there are insecure out-of-band management subsystems? Why does an attacker need to worry about the OS?
(Part of the answer is that securing the application and OS makes attacks more expensive; at least we can reduce drive-by JavaScript exploits. But now the OS and application are a smaller part of the security puzzle, and not at all sufficient.)
The issue of hardware security really has been ignored too long in favor of the quest for performance enhancement.
Perhaps there is a chance now for markets to encourage production of simplified processors and instruction sets that are designed with the same philosophy as OpenBSD.
I would imagine that companies and governments around the globe have developed a new interest in secure IT systems, with news of major exploits turning up every few months now, it seems.
It reflects the industry's priorities: performance and productivity. That's all. You can argue that those priorities are wrong, but we've known such attacks were theoretically possible ever since these features were introduced.
Even now I'm certain there are many companies not even bothering to patch against Spectre and Meltdown because they've deemed the performance degradation to be worse than the risk, and that's a perfectly rational decision to make.
I heard Jon Callas of PGP talking about concerns over hardware-level security -- CPUs and baseboard systems / BMCs -- in the mid-noughties. So this stuff has been on at least some people's radar, though perhaps not particularly widespread.
Theo de Raadt turned up with a ~2005 post specifically calling out Intel as well, though not over speculative execution specifically, as far as I'm aware.