
Almost as disgusting as Intel's response.

> "It is important to note that this method is dependent on malware running locally which means it's imperative for users to practice good security hygiene by keeping their software up-to-date and avoid suspicious links or downloads."

If you're running something on your ARM CPU, it's running locally! They're using careful language to make the problem seem to impact them less than it does and to lay the blame on the user.

This affects a huge number of processors out there, not just in phones and tablets but in embedded, industrial and network devices - many of which will never see workarounds from their OS / software for the hardware fault.

Also OH: "The bug doesn’t happen if you don’t use our product"



Their response makes perfect sense and stands in careful contrast to the "everything is broken" hysteria from the other side. This isn't like Heartbleed or a remote code execution vulnerability --- the attacker has to be able to run very specific code on the processor in order to exploit anything. Thus they are basically saying "you're fine if you don't run untrusted code" --- something which bears repeating since it is the only positive thing in this whole debacle.
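
For reference, the "very specific code" is a gadget along the lines of the bounds-check bypass from the Spectre paper. A minimal sketch in C (names and sizes here are illustrative, not the paper's exact listing):

  #include <stdint.h>
  #include <stddef.h>

  uint8_t array1[16];          /* secret data lies beyond its end */
  uint8_t array2[256 * 4096];  /* probe array: one page per byte value */

  void victim(size_t x) {
      if (x < 16) {            /* the branch the CPU mispredicts */
          /* Runs speculatively even for out-of-bounds x: the stolen
             byte selects which line of array2 gets cached, leaking
             its value through a timing side channel. */
          uint8_t tmp = array2[array1[x] * 4096];
          (void)tmp;           /* illustrative; a real gadget folds
                                  this into later computation */
      }
  }

The attacker first trains the branch predictor with in-bounds values of x, then calls victim() with an out-of-bounds x and times accesses into array2 to recover the leaked byte.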

but in embedded, industrial and network devices - many of which will never see workarounds from their OS / software for the hardware fault.

...many of which won't ever run anything but the original firmware they came with from the factory anyway, making it a somewhat moot point (and if there are exploits that lead to remote code execution, chances are there are better things for an attacker to go after than try to exploit a timing side-channel.)

The ones most affected by this are cloud providers and other scenarios where multiple mutually untrusting users are sharing the same hardware on which they can run arbitrary code.

The ones least affected (i.e. not at all) are isolated single-user/single-purpose machines running trusted code. This includes the majority of embedded systems, which is presumably why ARM is emphasising the point so much compared to Intel or AMD.


> Thus they are basically saying "you're fine if you don't run untrusted code" --- something which bears repeating since it is the only positive thing in this whole debacle.

Except for that whole web thing that has most of the world running untrusted code just about every minute of every day. Security hygiene doesn't protect you from this.


Except for that whole web thing that has most of the world running untrusted code just about every minute of every day

Certainly those who e.g. have JS off by default are currently in the minority, but perhaps this will be the defining event that causes everyone else to think more deeply about letting untrusted code run, regardless of how sandboxed it is.

Things like TEMPEST[1] have shown for many years that side-channel attacks are extremely difficult to defend against, even for an attacker who is merely in proximity to the hardware and can't influence it at all; never mind running code directly on it. It was only a matter of time. A lot of malware researchers already don't trust VMs and use separate physical hardware, precisely because of these risks of sharing trusted and untrusted code on the same hardware.

[1] https://en.wikipedia.org/wiki/TEMPEST


Sorry, but no way. Local execution of sandboxed or VMed code isn't going away, and shouldn't go away, and suggesting that it should or will is honestly a bit déformation professionnelle, not to say a tiny bit crackers. Neither Spectre nor several more Spectres will change that. Computing mostly got along fine without sandboxing in the 'Seventies, but it could do so because computing in the 'Seventies was a radically different world for many other reasons too. We need more, and more reliable, sandboxing, not less. If that means that, for instance, hardware manufacturers have to start getting serious about relative timing guarantees instead of cheerfully doing whatever it takes to beat the benchmarks, well Too Bad Really. It's a direction that things should probably have been going in anyway in the interests of real-time performance guarantees.


IBM offered VM-level sandboxing back then, starting with S/370 hardware in 1972.

https://en.wikipedia.org/wiki/VM_(operating_system)


There’s not a single program that I trust. Not even the programs I’ve written myself; even having taken all reasonable precautions, it’s far too easy to introduce a vulnerability.

There are programs that I choose to run with escalated privileges, like the operating system, out of necessity.

If I can’t run untrusted code, I simply can’t compute. Running code is the main thing hardware is designed to do.


For instance, even if you read your compiler's source code and then compile it, there's no way to know that the compilation process didn't insert a back door, as in this classic essay: https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...


I'm not saying you're wrong, but you seem to be using a different definition of untrusted than the rest of the thread. Your code may not be perfect, but you certainly don't suspect it's launching a timing attack against your machine's kernel, and exfiltrating what it discovers, right?


> but perhaps this will be the defining event that causes everyone else to think more deeply about letting untrusted code run, regardless of how sandboxed it is.

Your average user: Javascript is on the website not on my computer so it's fine.


Your average user: what’s a Javascript?


Such users simply expect the computer they bought to work as advertised. Is it really their fault when it can’t run a browser correctly?


While not directly related to Meltdown and Spectre, it's not just the web. Almost all code really should be considered untrusted: every game on Steam, every app on the App Store. Many apps use ad or analytics libraries whose innards the app devs don't know. There are plenty of apps that are effectively just skinned web browsers, downloading new code all the time, and while I might trust Facebook or maybe Slack to check all the third-party libraries and updates they import, I doubt your average app dev team does any of that.


> The ones least affected (i.e. not at all) are isolated single user/single purpose machines running trusted code

What about people who use personal machines to browse the web and run untrusted JavaScript?


> What about people who use personal machines to browse the web and run untrusted JavaScript?

They're running untrusted code, and that doesn't sound like a single-purpose machine.


From the Spectre paper, https://spectreattack.com/spectre.pdf

"As a proof-of-concept, JavaScript code was written that, when run in the Google Chrome browser, allows JavaScript to read private memory from the process in which it runs (cf. Listing 2). "

That looks like the current limit of the JavaScript-based attack. It doesn't seem to be able to access system resources or execute system commands (yet...).

That kind of JS attack vector can likely be mitigated with a web browser update, e.g. by reducing the resolution of the high-precision timers the attack relies on.


So you think the majority of existing IoT devices won't be exploited in the next 5 years? That's rather optimistic when a large portion seems to use hardcoded passwords, and given the rise of easy-to-build botnets.


Yeah, this bothered me as well. It's not just a matter of not clicking suspicious links or downloads; apparently (according to the Spectre PDF) there's a JavaScript vulnerability for this. What does a suspicious page even mean anyway, and how does the average user determine that before clicking?


None of the Cortex M cores seem to be affected, according to ARM. These are the ones mostly used in embedded applications.


Cortex M and all ARM-developed cores prior to v7 are in-order CPUs, so they do not do branch prediction or speculative execution. Without branch prediction this kind of issue cannot exist.


In-order CPUs (e.g. Pentium 1, older Atom and ARM chips, POWER 6) perform branch prediction and speculative execution. They'll predict the branch and start speculatively decoding and executing instructions from the predicted target, then flush the pipeline if there was a misprediction. What they won't do is execute past an instruction that has an unresolved data dependency.
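
A hand-wavy C illustration of that distinction (the comments describe typical pipeline behaviour; none of this is observable at the C level):

  extern volatile int cond;    /* assume this load misses the cache */
  extern int out;

  void demo(void) {
      if (cond) {              /* while the load of cond is still
                                  outstanding, even an in-order core
                                  may predict the branch and run ahead
                                  down the predicted path, flushing on
                                  a misprediction */
          out = 1;
      }
      /* What an in-order core won't do is execute past a stalled
         instruction whose data inputs aren't ready yet: everything
         behind it waits, in program order. */
  }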


You're right, I was conflating two things. It looks like no in-order ARM cores have branch prediction though.


‘M’ just stands for microcontroller, as opposed to ‘A’ for application.

Many / most complex or multitasking embedded devices will use ‘A’ series to meet their processing requirements, e.g. network routing, an OS, firmware / OS programming, and multiuser processing.

I probably should have better qualified my use of the term ‘embedded’.


But it does affect the processors used on the Beaglebone and older Raspberry Pi devices.


It does not affect Raspberry Pi devices to my knowledge. Please provide a source for your claims!

The CPUs in Raspberry Pi 1-3 are not affected.

  ARM11, Cortex-A7, Cortex-A53
Raspberry Pi 2 v1 uses a Broadcom BCM2836 SoC with a 900 MHz 32-bit quad-core ARM Cortex-A7 processor.

Raspberry Pi 3 (and Pi 2 v1.2) uses a Broadcom BCM2837 SoC with a 1.2 GHz 64-bit quad-core ARM Cortex-A53 processor.

According to the ARM website https://developer.arm.com/support/security-update it specifically says

  "*Only affected cores are listed, all other Arm cores are NOT affected.*" 
and it lists only

  "Cortex-R7, Cortex-R8, Cortex-A8, Cortex-A9, Cortex-A15, 
  Cortex-A17, Cortex-A57, Cortex-A72, Cortex-A73, Cortex-A75"


    and older Raspberry Pi devices
https://www.raspberrypi.org/products/raspberry-pi-2-model-b/

But it looks like I confused the Cortex-A7, which is not listed, with the Cortex-R7.


Cortex M cores don't have MMUs, so they cannot be affected. Also, in embedded applications you would only be running trusted code anyway.


The MMU isn't part of this attack, and there are ARM cores with MMUs that are not affected. Cortex M cores are in-order CPUs, so they do not have branch prediction or perform speculative execution which the attacks rely on.


The attack is all about bypassing memory protection, which, without an MMU to provide any, is moot --- code can already read all of memory normally anyway.


Cortex-M has an MPU, not an MMU, to provide protection for memory regions.

But as said above, the MPU/MMU has no effect on this bug; this is about speculative execution and extracting information through a cache side-channel attack.
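
For anyone unfamiliar, the cache side channel boils down to timing loads. A minimal sketch, using a POSIX clock for portability (my choice; a real attack needs a much finer-grained timer, e.g. rdtsc on x86 or the cycle counter on ARM):

  #include <stdint.h>
  #include <time.h>

  /* Returns the latency of one load from p, in nanoseconds. A fast
     load means the line was already cached, i.e. the speculatively
     executed victim code touched it; which address is fast encodes
     the leaked value. */
  static int64_t time_load(const volatile uint8_t *p) {
      struct timespec t0, t1;
      clock_gettime(CLOCK_MONOTONIC, &t0);
      (void)*p;                /* the timed memory access */
      clock_gettime(CLOCK_MONOTONIC, &t1);
      return (t1.tv_sec - t0.tv_sec) * 1000000000LL
           + (t1.tv_nsec - t0.tv_nsec);
  }

Note this works identically whether memory protection comes from an MMU, an MPU, or nothing at all, which is why the MPU/MMU distinction is beside the point here.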



