I'm talking about the shared computation resources (e.g. the ALUs and execution ports), not about shared caches. Cache hits and misses affect how much and when one thread uses those execution units, and while it's using them the sibling thread on the same core can't, which the sibling can observe as extra latency in its own work. Since speculative execution also uses these shared resources, it leaks information about its cache hits and misses to the other thread through that contention, even if a rolled-back speculative execution never modifies any caches.
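
To make the measuring side of that concrete, here's a minimal sketch (not any particular published exploit): one hyperthread repeatedly times a fixed burst of ALU-heavy work, and spikes in those timings indicate that the co-resident thread is occupying the same execution units at that moment. It assumes an x86 SMT pair and uses `__rdtscp` for timing; the sample count and the multiply chain are made-up illustrative parameters.

```c
/* Hypothetical contention-probe sketch: time a fixed chain of dependent
 * integer multiplies over and over.  When the sibling hyperthread is
 * using the same multiply port -- speculatively or not -- the measured
 * latency goes up.  SAMPLES and the constants are illustrative only. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* __rdtscp */

#define SAMPLES 100000

int main(void) {
    static uint64_t lat[SAMPLES];
    unsigned aux;
    volatile uint64_t x = 123456789;

    for (int i = 0; i < SAMPLES; i++) {
        uint64_t t0 = __rdtscp(&aux);

        /* Dependent multiply chain: keeps the multiply port busy so
         * contention from the sibling thread shows up as extra cycles. */
        uint64_t v = x;
        for (int k = 0; k < 64; k++)
            v = v * 0x9E3779B97F4A7C15ULL + 1;
        x = v;

        uint64_t t1 = __rdtscp(&aux);
        lat[i] = t1 - t0;
    }

    /* Dump the timings; spikes correlate with the other thread's
     * execution-unit usage, which in turn depends on its cache
     * hits and misses. */
    for (int i = 0; i < SAMPLES; i++)
        printf("%llu\n", (unsigned long long)lat[i]);
    return 0;
}
```

In practice you'd pin this probe to one logical CPU and the victim to its SMT sibling (e.g. with `taskset`), since the whole effect depends on sharing a physical core.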