
Hey! Author of the talk here. Feel free to ask me anything.


How does this do anything to alleviate Spectre and friends? Software isolation doesn't work (https://arxiv.org/abs/1902.05178), so the only protection you get is from process isolation, and I assume this doesn't change the OS mechanisms that enforce that.


It doesn't do anything directly about this. It makes it easier for you to choose other CPUs in the future. I guess Dagger as it is may _technically_ be invulnerable to Spectre, given it currently has no support for reading the time at all, but honestly part of the security here comes from the WebAssembly VM being slow enough that the timing spookiness doesn't happen as easily.

I think a lot of this is limited by existing OS mechanisms. I've been digging into seL4 to create a platform for WebAssembly code, but moving internationally eats up all your time. :(


Another method is to limit access to timing mechanisms: https://www.infoq.com/presentations/cloudflare-v8


To which the V8 developers say:

> We might consider adjusting the precision of timers or removing them altogether as an attempt to cripple the program’s ability to read timing side-channels.

> Unfortunately we now know that this mitigation is not comprehensive. Previous research [30] has shown three problems with timer mitigations: (1) certain techniques to reduce timer resolution are vulnerable to resolution recovery, (2) timers are more pervasive than previously thought and (3) a high resolution timer can be constructed from concurrent shared memory. The Amplification Lemma from Section 2.4 is the final nail in this coffin, as it shows (4) gadgets themselves can be amplified to increase the timing differences to arbitrary levels.
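
To make point (3) concrete, here's a rough sketch (mine, not from the paper) of how a "timer" falls out of nothing but a shared counter that another thread keeps incrementing. It's Rust with std threads standing in for a SharedArrayBuffer plus a counting worker:

    use std::sync::atomic::{AtomicU64, Ordering};
    use std::sync::Arc;
    use std::thread;

    fn main() {
        // Shared counter standing in for a SharedArrayBuffer slot.
        let counter = Arc::new(AtomicU64::new(0));

        // "Clock" thread: spins as fast as it can, bumping the counter.
        let c = Arc::clone(&counter);
        thread::spawn(move || loop {
            c.fetch_add(1, Ordering::Relaxed);
        });

        // Measure some work by sampling the counter before and after;
        // no real timer API involved.
        let before = counter.load(Ordering::Relaxed);
        let mut x = 0u64;
        for i in 0..1_000_000u64 {
            x = x.wrapping_add(i);
        }
        let after = counter.load(Ordering::Relaxed);

        println!("result {x}, elapsed ~{} counter ticks", after - before);
    }

Remove every explicit timer API and this still works, which is why coarsening timers alone isn't considered a complete mitigation.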


I have been looking at doing things like arbitrarily limiting the WebAssembly execution engine to run only one instruction per microsecond. This would then make full execution speed something programs have to be explicitly configured for rather than something they get by default. I still don't know, though; this stuff gets tricky.
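
Roughly this shape, as a toy sketch (not how Dagger actually works, and all the names are made up): budget instructions against wall-clock time and stall whenever the program gets ahead of the budget.

    use std::thread;
    use std::time::{Duration, Instant};

    // Toy instruction set, nothing like real WebAssembly.
    enum Insn {
        Add(u64),
        Mul(u64),
    }

    // Run the program, allowing at most `insns_per_us` instructions per
    // elapsed microsecond of wall-clock time.
    fn run(program: &[Insn], insns_per_us: u64) -> u64 {
        let start = Instant::now();
        let mut acc = 1u64;
        for (executed, insn) in program.iter().enumerate() {
            // Stall until the time budget allows one more instruction.
            while executed as u64 >= start.elapsed().as_micros() as u64 * insns_per_us {
                thread::sleep(Duration::from_micros(1));
            }
            match insn {
                Insn::Add(n) => acc = acc.wrapping_add(*n),
                Insn::Mul(n) => acc = acc.wrapping_mul(*n),
            }
        }
        acc
    }

    fn main() {
        let program = vec![Insn::Add(2), Insn::Mul(3), Insn::Add(4)];
        println!("{}", run(&program, 1)); // (1 + 2) * 3 + 4 = 13
    }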

I think the ultimate goal for my implementation of this stuff is to remove anything finer than one-second timer resolution unless the program actually demonstrates a need for it. I'd like to have JavaScript and the browser processes be separate machine processes, with JavaScript artificially limited to at most 10% of the CPU time. Honestly, I think letting everything run at full speed on the CPU is probably a mistake.
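
The coarse-clock half of that is basically this sketch (again made up, not actual Dagger code): the only clock import the guest gets rounds down to whole seconds, so nothing finer ever crosses into the sandbox.

    use std::time::{SystemTime, UNIX_EPOCH};

    // The only clock the guest gets to import: whole seconds since the
    // epoch, nothing finer.
    fn guest_clock_seconds() -> u64 {
        SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("clock went backwards")
            .as_secs()
    }

    fn main() {
        println!("guest sees: {}", guest_clock_seconds());
    }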


> remove anything finer than one-second timer resolution unless the program actually demonstrates a need for it

As someone working with Web Audio, I wonder if it's even possible to tell whether a program "legitimately" needs millisecond/microsecond timing precision? Typically it'd be running on its own worker/AudioWorklet thread, but I imagine it could still be exploited for some nefarious purpose.

Edit: I realized the talk is about WASM on the server, but, who knows, maybe in the future it could also involve audio that needs high-precision timers.


Yeah, my thought there is to make timer resolution a dependent capability. My ultimate goal is to let users say "no, $APP doesn't need more than one-second timer resolution", and if the user is wrong, the app just has to deal with it.
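
Roughly this shape, as a sketch with hypothetical names (not actual Dagger code): the app gets whichever is coarser of what it requested and what the user's policy allows.

    // Coarsest first, so a later variant means a finer timer.
    #[derive(Clone, Copy, PartialEq, PartialOrd, Debug)]
    enum Resolution {
        Seconds,
        Millis,
        Micros,
    }

    // What the user configured for this app.
    struct Policy {
        max_timer_resolution: Resolution,
    }

    // Grant the coarser of what was asked for and what the user allows.
    fn grant(requested: Resolution, policy: &Policy) -> Resolution {
        if requested > policy.max_timer_resolution {
            policy.max_timer_resolution
        } else {
            requested
        }
    }

    fn main() {
        let policy = Policy { max_timer_resolution: Resolution::Seconds };
        // $APP asks for microsecond timers; the user said no.
        println!("granted: {:?}", grant(Resolution::Micros, &policy));
    }

The granted resolution would then feed the clock the host actually exposes, like the whole-seconds sketch above.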


That makes sense. Browsers already have restrictions around audio/video autoplay, for example, and an application needs logic to wait for user permission. So I can imagine something similar, where the default timer would be coarse and high-resolution timing would require elevated privileges.

Anyway, thank you for the notes/slides about WebAssembly on the server, fascinating stuff with a bright future!



