
There are CPUs that are immune to these attacks, including Spectre.

The CERT recommendation was to throw away affected CPUs and use alternatives.

Now this isn't very realistic today, when the in-order alternatives are slow and don't offer comparable performance. But it does show that CERT is not giving up on langsec.

(Team Mill - we're immune too and will put out a light paper on how these attacks apply to in-order processors that support speculation and what we're doing to prevent them)



Been checking the Mill forums manually waiting for a detailed post from yourself or Ivan about the matter.

For reference, there is an interesting thread from someone who seems to have known the nature of the issue before the disclosure, to which Ivan replies by discussing some of the complications of implementing branch prediction [0]. Ivan concludes with:

> "To answer your final question: no, branch prediction is not actually necessary, and Mill uses exit prediction instead. But so long as memory latency is much longer than a compiler can see, and path fanout remains larger than hardware can handle in parallel, prediction of some kind will be necessary for performance."

This is interesting as many people are now debating just what types of speculative execution, if any, can actually be performed without exposing security risks.

[0] https://millcomputing.com/topic/prediction/#post-3049


> Team Mill - we're immune too and will put out a light paper on how these attacks apply to in-order processors that support speculation and what we're doing to prevent them

Will this be sent to the mailing list when released?


Yes, probably.

I've already written the paper; it just has to survive a lot of peer review. It turns out it would be quite easy to envisage an in-order processor with some mechanism for speculation that actually is vulnerable to Spectre and variations of Meltdown, and the paper explores that - so hopefully it's an interesting paper even if the tl;dr is that we claim to be immune.


I'm curious, because I think the Mill as described in the talks is vulnerable to a variant of Spectre. If you have a sequence of code like this:

    if (index < bounds) {
        index2 = array1[index];
        ... array2[index2];
    }
If the compiler speculates both array accesses above the bounds check, then the first one can still succeed (i.e. not produce a NaR) while accessing attacker-controlled memory for the value of index2.

You could obviously fix this by never generating code that performs double speculation, but you could apply the same fix to code compiled for a conventional OoO microarchitecture.


Spot on!

This variant of Spectre would be a software bug, not a hardware bug, on the Mill.

Our specialiser had to be fixed to not produce code with this flaw.

And so we wrote a light paper on it, and perhaps a talk etc ;)


It seems that Mill's combination of a single address space and speculation-before-permissions-checks is still quite vulnerable to an ASLR leak. Have you made any changes to mitigate this, or do you just consider address space layout leaks acceptable behavior?


Seriously? The Mill does/will do speculation before perms checks? And here I thought turfs were the elegant answer to this nightmare.


See slide 72 of the metadata talk[1] and slide 51 of the IPC talk[2], which indicate that it does speculation before permissions checking.

Since turf permissions operate on the granularity of an arbitrary address range (rather than a page like traditional MMUs), the permissions cache (what the Mill calls a PLB) has a worse latency/power tradeoff than a traditional TLB. The Mill takes advantage of its single address space and reduces some of this hit by doing permissions checks in parallel with the access.

[1] https://millcomputing.com/docs/metadata/ [2] https://millcomputing.com/docs/inter-process-communication/


Thank you for watching the talks! :D

Luckily it's not quite as you interpreted:

The L1$D is accessed in parallel with the PLB. Both are top-level caches - one for data, one for protection.

If there is a PLB miss we have no cache-visible side-effects until the protection has been resolved.

The paper we're preparing will cover this in detail because, as you can see, the talks are a bit light on exactly what happens here, and in what order.


>Our specialiser had to be fixed to not produce code with this flaw.

Isn't prefetching/load-hoisting pretty much required to get any sort of performance out of an in-order VLIW-like machine?



