
Their response makes perfect sense and is a careful contrast to the "everything is broken" hysteria from the other side. This isn't like Heartbleed or a remote code execution vulnerability --- the attacker has to be able to run very specific code on the processor in order to exploit anything. Thus they are basically saying "you're fine if you don't run untrusted code" --- something which bears repeating since it is the only positive thing in this whole debacle.
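To make "very specific code" concrete: here is a minimal sketch (not a working exploit; names are only illustrative, loosely following the Spectre paper's listings) of the Spectre v1 bounds-check-bypass pattern, in TypeScript:

    // Sketch only: the CPU may speculate past the length check and
    // perform the out-of-bounds read before the branch resolves.
    const simpleByteArray = new Uint8Array(16);           // legitimately reachable data
    const probeTable = new Uint8Array(32 * 1024 * 1024);  // cache side-channel buffer
    let localJunk = 0;

    function speculativeGadget(index: number): void {
      if (index < simpleByteArray.length) {        // branch trained to predict "taken"
        const value = simpleByteArray[index | 0];  // speculative (possibly OOB) read
        // The dependent access leaves a cache footprint indexed by the read
        // byte; a later timing pass over probeTable can recover its value.
        localJunk ^= probeTable[((value * 4096) | 0) & (probeTable.length - 1)];
      }
    }

An attacker who can't get code like this onto the machine has no way in, which is the point ARM is making.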

but in embedded, industrial and network devices - many of which will never see workarounds from their OS / software for the hardware fault.

...many of which won't ever run anything but the original firmware they came with from the factory anyway, making it a somewhat moot point (and if there are exploits that lead to remote code execution, chances are there are better things for an attacker to go after than trying to exploit a timing side-channel.)

The ones most affected by this are cloud providers and other scenarios where multiple mutually untrusting users are sharing the same hardware on which they can run arbitrary code.

The ones least affected (i.e. not at all) are isolated single-user/single-purpose machines running trusted code. This includes the majority of embedded systems, which is presumably why ARM is emphasising the point so much compared to Intel or AMD.



> Thus they are basically saying "you're fine if you don't run untrusted code" --- something which bears repeating since it is the only positive thing in this whole debacle.

Except for that whole web thing that has most of the world running untrusted code just about every minute of every day. Security hygiene doesn't protect you from this.


> Except for that whole web thing that has most of the world running untrusted code just about every minute of every day

Certainly those who e.g. have JS off by default are currently in the minority, but perhaps this will be the defining event that causes everyone else to think more deeply about letting untrusted code run, regardless of how sandboxed it is.

Things like TEMPEST[1] have shown for many years that side-channel attacks are extremely difficult to defend against, even for an attacker who is merely in proximity to the hardware and can't influence it at all; never mind one running code directly on it. It was only a matter of time. A lot of malware researchers already don't trust VMs and use separate physical hardware, precisely because of these risks of sharing trusted and untrusted code on the same hardware.

[1] https://en.wikipedia.org/wiki/TEMPEST


Sorry, but no way. Local execution of sandboxed or VMed code isn't going away, and shouldn't go away, and suggesting that it should or will is honestly a bit of déformation professionnelle, not to say a tiny bit crackers. Neither Spectre nor several more Spectres will change that. Computing mostly got along fine without sandboxing in the 'Seventies, but it could do so because computing in the 'Seventies was a radically different world for many other reasons too. We need more, and more reliable, sandboxing, not less. If that means that, for instance, hardware manufacturers have to start getting serious about relative timing guarantees instead of cheerfully doing whatever it takes to beat the benchmarks, well Too Bad Really. It's a direction that things should probably have been going in anyway in the interests of real-time performance guarantees.


IBM offered VM-level sandboxing back then, starting with System/370 hardware in 1972.

https://en.wikipedia.org/wiki/VM_(operating_system)


There’s not a single program that I trust. Not even the programs I’ve written myself; even having taken all reasonable precautions, it’s far too easy to introduce a vulnerability.

There are programs that I choose to run with escalated privileges, like the operating system, out of necessity.

If I can’t run untrusted code, I simply can’t compute; running code is the main thing hardware is designed to do.


For instance, even if you read your compiler's source code and then compile it, there's no way to know that the compilation process didn't insert a back door, as in this classic essay: https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...
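The essay's trick is worth sketching: the backdoor lives only in the compiler binary, which re-inserts it whenever it recognizes it is compiling itself, so the source you audited stays clean. A caricature in TypeScript (every name here is hypothetical, not real compiler code):

    // Caricature of Thompson's "Trusting Trust" attack.
    declare function honestCompile(source: string): Uint8Array; // what you think runs
    declare function looksLikeLogin(source: string): boolean;
    declare function looksLikeCompiler(source: string): boolean;
    declare function injectTrojanInto(source: string): string;

    function trojanCompile(source: string): Uint8Array {
      if (looksLikeLogin(source)) {
        // Inject a check that also accepts a hard-coded password.
        source = source.replace(
          "checkPassword(user, pass)",
          '(pass === "trojan" || checkPassword(user, pass))');
      }
      if (looksLikeCompiler(source)) {
        // Re-insert this whole trojan into the compiler being built,
        // so the attack survives recompilation from clean source.
        source = injectTrojanInto(source);
      }
      return honestCompile(source); // the audited source never shows any of this
    }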


I'm not saying you're wrong, but you seem to be using a different definition of untrusted than the rest of the thread. Your code may not be perfect, but you certainly don't suspect it's launching a timing attack against your machine's kernel, and exfiltrating what it discovers, right?


> but perhaps this will be the defining event that causes everyone else to think more deeply about letting untrusted code run, regardless of how sandboxed it is.

Your average user: Javascript is on the website not on my computer so it's fine.


Your average user: what’s a Javascript?


Such users simply expect the computer they bought to work as advertised. Is it really their fault when it can’t run a browser correctly?


While not directly related to Meltdown and Spectre, it's not just the web. Almost all code really should be considered untrusted: every game on Steam, every app on the App Store. Many apps use ad or analytics libraries whose innards the app devs don't know. There are plenty of apps that are effectively just skinned web browsers, downloading new code all the time, and while I might trust Facebook or maybe Slack to vet all the third-party libraries and updates they import, I doubt your average app dev team does any of that.


> The ones least affected (i.e. not at all) are isolated single-user/single-purpose machines running trusted code

What about people who use personal machines to browse the web and run untrusted JavaScript?


> What about people who use personal machines to browse the web and run untrusted JavaScript?

They're running untrusted code, and that doesn't sound like a single-purpose machine.


From the Spectre paper, https://spectreattack.com/spectre.pdf

"As a proof-of-concept, JavaScript code was written that, when run in the Google Chrome browser, allows JavaScript to read private memory from the process in which it runs (cf. Listing 2). "

That looks like the current limit of a JavaScript-based attack. It doesn't seem to be able to access system resources or execute system commands (yet...).

That kind of JS attack vector can likely be mitigated with a web browser update.
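For context on why a browser update helps: the attack depends on a timing oracle fine enough to distinguish a cached read (a few nanoseconds) from an uncached one (roughly a hundred), which is why browsers responded by coarsening performance.now() and disabling SharedArrayBuffer, which could otherwise be used to build a makeshift high-resolution timer. A rough sketch of that primitive in TypeScript, with hypothetical names:

    // Rough sketch of the timing oracle a browser-based attack needs.
    // If performance.now() is coarsened to, say, tens of microseconds,
    // the ~100ns gap between a cache hit and a miss vanishes into noise.
    function timeRead(buf: Uint8Array, index: number): number {
      const start = performance.now();
      const junk = buf[index];  // latency of this read reflects cache state
      // Fold the value in so the JIT cannot dead-code-eliminate the read.
      return (performance.now() - start) + (junk & 0);
    }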


So you think the majority of IoT devices currently in existence won't be exploited in the next 5 years? That's rather optimistic, given that a large portion seems to use hardcoded passwords and that easy-to-build botnets are on the rise.



