Hacker News

But those are all just mitigations, not fixes. Such changes don't guarantee that information can no longer leak - only that the leak becomes harder (but not inherently impossible) to exploit.


The root of all evil is nondeterminism. A program which executes deterministically cannot receive a timing channel! So "all" we have to do to end this entire class of problems is deny all nondeterministic features - including shared-memory threading, timers, and two-way network communication - to all untrusted code.

A practical vision of computing without these things is challenging, though!


We can also try to avoid concurrently executing untrusted code on a single processor?

At least on the server side, that's realistic. Big companies like Google already run their own trusted code on their own hardware. For everyone else, it's not impossible to move from VPS hosting to "bare metal cloud" like Scaleway offers.

(Vision for the Future™: instead of their tied-to-Linux "someone else's Raspberry Pi" experience, it would be a bunch of tiny single core processors, each connected to their own tiny DRAM chip. You upload unikernel images or CloudABI binaries (that would be then wrapped into unikernels) into the Cloud™, where the load balancer boots up your images on-demand on as many of these tiny computers as needed. RAM is zeroed before boot.)

The other big environment with tons of untrusted code is of course the web browser. This is significantly more ridiculous, but: why not embed a ton of tiny simple in-order ARM cores without any shared cache into desktop processors, and run JS/WASM on them? (One core used per origin per site) :D


> The other big environment with tons of untrusted code is of course the web browser. This is significantly more ridiculous, but: why not embed a ton of tiny simple in-order ARM cores without any shared cache into desktop processors, and run JS/WASM on them? (One core used per origin per site) :D

I definitely chuckled. :) On a more serious note: I'd bet at least a couple bucks it would be possible (albeit super weird and difficult) to channel information through hardware-acceleration mechanisms. Lots of rendering happens on the GPU these days.


Yes, exactly - the problem is nondeterminism. But I think it should be possible to have deterministic execution while keeping compatibility with existing software, by replacing real time with a "fake" time that is a deterministic function of the stream of instructions executed and is, on average, close to real time. This requires support from the CPU, but it is a relatively simple change compared to changing speculative execution and caching. Fake time could be kept in sync with real time by periodically running a privileged process that takes a fixed amount of fake time but a variable amount of real time. (I brought this up in a different comment thread that got marked as a duplicate.)
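The idea can be sketched in a few lines. This is a toy model, not a real CPU mechanism - the class and all its names are made up for illustration: fake time is a pure function of the number of operations executed, and only a privileged sync step may nudge it toward real time.

```python
class FakeClock:
    """Toy sketch of deterministic 'fake time' (all names hypothetical)."""

    def __init__(self, ns_per_op=1.0):
        self.ops = 0              # deterministic: depends only on work done
        self.offset_ns = 0.0      # adjusted only at privileged sync points
        self.ns_per_op = ns_per_op

    def tick(self, n=1):
        """Account for n executed operations (what the CPU would do)."""
        self.ops += n

    def now_ns(self):
        """The only clock untrusted code sees: a pure function of ops."""
        return self.offset_ns + self.ops * self.ns_per_op

    def privileged_sync(self, real_now_ns):
        """Runs outside untrusted code: re-anchor fake time to real time."""
        self.offset_ns = real_now_ns - self.ops * self.ns_per_op

clock = FakeClock(ns_per_op=0.5)
clock.tick(1000)
assert clock.now_ns() == 500.0  # same instruction stream -> same time, every run
```

The key property: two runs that execute the same instruction stream read identical timestamps, so the clock can't carry microarchitectural timing information between them.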


... and then you have to make, for example, scheduling of threads based on "fake time".

rr does just this! I think it uses the "instructions retired" performance counter as its "fake time"; that turns out to be deterministic enough for its purposes. Whether it's deterministic enough in a security context, I don't know for sure.

But this approach, though it can run threaded software, will not let untrusted software (which should really mean all software!) use more than one real core or hardware thread.


What's rr?



> A program which executes deterministically cannot receive a timing channel!

Only if you define 'deterministically' to mean 'determined only by the program's own legitimately accessible context'. These timing side-channel attacks work precisely because the processor performs in a deterministic and detectable manner, but does so based on information which the attacking program should not be able to access.


No - hardware bugs could work like you describe, but Spectre/Meltdown do not make the result of executing a program depend on the results of speculative execution. They only make timings depend on those results. If you can't time things - and a deterministic program can't, since access to any real timer makes a program nondeterministic - you can set up Spectre attacks but you can't receive the results.
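A toy illustration of that distinction (everything here is made up for demonstration - a busy-wait stands in for a slow cache miss): the secret only affects *how long* the operation takes, never its result, so a receiver with a timer decodes it and a receiver without one learns nothing.

```python
import time

SECRET_BIT = 1  # what the timing channel tries to leak

def victim_operation():
    """Result is identical either way; only the duration depends on the secret."""
    start = time.perf_counter()
    if SECRET_BIT:
        while time.perf_counter() - start < 0.02:
            pass  # busy-wait ~20 ms, standing in for a cache miss
    return "same result regardless of secret"

def receiver_with_timer():
    """With a real clock, duration decodes the secret."""
    start = time.perf_counter()
    victim_operation()
    elapsed = time.perf_counter() - start
    return 1 if elapsed > 0.01 else 0

def receiver_without_timer():
    """Deterministic code sees only the (secret-independent) result."""
    result = victim_operation()
    return result  # no clock to consult, so nothing is learned

assert receiver_with_timer() == SECRET_BIT
```

This is why removing timer access closes only the receive side: the speculative leak still happens, but the deterministic program has no way to observe it.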


All interactive programs are nondeterministic. If you remove all other timers, you can still tell the user to click something and see how much work you can do before they click it. That's an exceptionally noisy timer, but a timer nonetheless. The only way to completely remove timers is to revert to batch mode programs.


If you buffer all I/O until the program has finished executing, I don't think this type of timer works. (The message to the user is displayed only after the process you are trying to time completes, and then when the user clicks the button you are executing again)

The program is equivalent to a pure function from (state, events) to (state, actions) and the computing device that evaluates this function can't leak any information to the function without evaluating it incorrectly. It can still leak information in how long it takes to evaluate the function, but there is no direct way to get this information back into the function.
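The model above can be sketched concretely (a toy runtime, with all names invented here): the untrusted program is a pure step function from (state, events) to (state, actions), and the runtime releases its buffered actions only after each step completes.

```python
def step(state, events):
    """Pure: output depends only on (state, events), never on wall time."""
    count = state.get("clicks", 0) + sum(1 for e in events if e == "click")
    actions = [("display", f"clicks so far: {count}")]
    return {"clicks": count}, actions

def run(step_fn, event_batches):
    """Toy runtime: evaluate the function, buffer I/O until the step ends."""
    state, buffered_actions = {}, []
    for events in event_batches:
        state, actions = step_fn(state, events)
        buffered_actions.extend(actions)  # flushed only after the step completes
    return state, buffered_actions

state, log = run(step, [["click"], ["click", "click"]])
assert state == {"clicks": 3}
```

Because `step` has no access to a clock and its output is flushed only after it returns, how long the evaluation took cannot flow back into the function.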

You could ask the user what time it is, or how long something took, but that starts to look suspicious fast!


But shared information is not restricted to shared memory in a computer and its threads. As soon as you interact with some other system - a database, or any other non-trivial (even non-computer) system - won't you have this same problem?


It won’t work. The mechanism that hides the timing information from untrusted code will itself leak timing information.

You just need to refine your statistics. The underlying problem can’t be solved in a physical universe.


If you increase the number of bits in a hash, you do not guarantee against collisions, but you can make them so infrequent that searching for them becomes impractical.

If randomization still allows data sniffing but slows it to 1 bit per week, sniffing anything interesting becomes impractical. (Or not; it depends on your threat model.)
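A quick back-of-the-envelope makes the point (the 1 bit/week rate is the hypothetical figure above; the 256-bit key is just a representative target):

```python
bits_per_week = 1          # hypothetical leak rate after mitigation
key_bits = 256             # e.g. an AES-256 key as a representative secret
weeks = key_bits / bits_per_week
years = weeks / 52
print(f"{weeks:.0f} weeks ≈ {years:.1f} years to exfiltrate the key")
```

Roughly five years for a single key - impractical for most attackers, though a long-lived secret behind a patient adversary is exactly where "depends on your threat model" bites.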





