Because the compiler optimizes based on the assumption that consecutive reads of the same, unmodified location yield the same value. Reading from uninitialized memory may violate that assumption and lead to undefined behavior.
(This isn't the theoretical ivory tower kind of UB. Operating systems regularly remap a page that hasn't yet been written to.)
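To make that concrete, here is a minimal Rust sketch (the function and names are mine, and the reads are assumed to go through a raw pointer): two reads of a never-written location. Under the "same location, same value" assumption the comparison looks like it must be true, but both reads are UB, so nothing is actually guaranteed.

```rust
use std::mem::MaybeUninit;

// Sketch only: two reads of a location that was never written.
fn reads_look_equal() -> bool {
    let slot: MaybeUninit<u32> = MaybeUninit::uninit();
    let p = slot.as_ptr();
    // UB: `slot` was never written to, so both loads read uninitialized memory.
    let a = unsafe { p.read() };
    let b = unsafe { p.read() };
    // The optimizer may assume the two loads agree -- or, since this is UB,
    // do anything else at all.
    a == b
}

fn main() {
    println!("{}", reads_look_equal());
}
```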
If you read from somewhere you have not written, who cares whether the compiler optimizes on the assumption that reading it again yields the same value, even though that is not true?
Anyone who wants to be able to sanely debug. Code is imperfect, mistakes happen. If the compiler can optimise so that any mistake anywhere in your program could mean insane behaviour anywhere else in your program, then you get, well, C.
(E.g. imagine writing to an array at offset x. This is safe in Rust because the compiler turns it into code that checks that x is within the bounds of that array, then writes at that offset. If the value of x can change between the check and the write, then this code can overwrite some other variable anywhere in your program, giving you a bug that's very hard to track down.)
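A rough sketch in Rust of what that lowering looks like, written as source rather than actual compiler output (the function `store` and its shape are illustrative, not what rustc literally emits):

```rust
// Hypothetical sketch: a bounds-checked write after lowering. The unchecked
// store is only sound if `x` still holds the value that passed the check.
fn store(array: &mut [u32], x: usize, v: u32) {
    if x < array.len() {
        // If `x` could somehow change between the check above and this write,
        // the store could land outside the slice and clobber an unrelated variable.
        unsafe { *array.get_unchecked_mut(x) = v };
    } else {
        panic!("index {} out of bounds", x);
    }
}
```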
I see what you're getting at: situations in which the compiler trusts that the location has not changed, but needs to re-load it because the cached value is no longer available. When the location is reloaded, the safety check (like a bounds check) is not re-applied, yet the value now being trusted is not the one that was checked.
This is not exactly an optimization though, in the sense that it will mess up even thoroughly unoptimized code (with greater likelihood, in fact, since without caching optimizations the location gets re-loaded more often).
So that is to say, even the generation of basic unoptimized intermediate code for a language construct relies on the assumption that certain quantities will not spontaneously deviate from their last stored value.
That assumption is baked into the code generation template for the construct, a template someone may well have written by hand. If it is an optimization, it is that coder's optimization.
The intermediate code for a checked array access, though, should be indicating that the value of the indexing expression is to be moved into a temporary register. The code which checks the value and performs the access refers to that temporary register. Only if the storage for the temporary registers (the storage to which they are translated by the back end) changes randomly would there be a problem.
Like if some dynamically allocated location is used as an array index, e.g. array[foo.i] where foo is a reference to something heap-allocated, the compiler cannot emit code which checks the range of foo.i and then refers to foo.i again in the access. It has to evaluate foo.i to an abstract temporary, and refer to that. In the generated target code, that will be a machine register, or a location on the stack. If the machine registers or the stack are flaky, all bets are off, sure. But we have been talking about memory that is only flaky until it is written to. The temporary in question is written to!
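An illustrative Rust version of the array[foo.i] case (the types and names are made up): the index expression is evaluated once into a temporary, and both the bounds check and the access use that temporary.

```rust
struct Foo { i: usize }

// Sketch only: `foo.i` is read exactly once; the checked value is the used value.
fn read(array: &[u32], foo: &Foo) -> u32 {
    let tmp = foo.i;                          // evaluate the index expression once
    if tmp < array.len() {
        unsafe { *array.get_unchecked(tmp) }  // access through the same temporary
    } else {
        panic!("index {} out of bounds", tmp)
    }
}
```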
> The intermediate code for a checked array access, though, should be indicating that the value of the indexing expression is to be moved into a temporary register. The code which checks the value and performs the access refers to that temporary register. Only if the storage for the temporary registers (the storage to which they are translated by the back end) changes randomly would there be a problem.
You'd almost certainly pass it as a function parameter, prima facie in a register/on the stack, sure, and therefore in unoptimised code nothing weird would happen. But an optimising compiler might inline the function call, observe that the value doesn't escape, and then if registers are already full it might choose to access the same memory address twice (no reason to copy it onto the stack, and spilling other registers would cost more).
I don't know how likely this exact scenario is, but it's the kind of thing that can happen. Today's compilers stack dozens of optimisation passes, most of which don't know anything about what the others are doing, and all of which make basic assumptions, such as that the values at memory addresses aren't going to change under them (unless they're specifically marked as volatile). When one of those assumptions is broken, even compiler authors can't generally predict what the effects will be.
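For illustration, a contrived Rust sketch of the two shapes being discussed (parameter names are mine, and this is source-level pseudocode for what an optimizer might emit, not real compiler output). The second shape is only harmless if the location really doesn't change between the two loads:

```rust
// Shape 1: the index is loaded once into a temporary and reused.
fn checked_write_copy(array: &mut [u32], idx_in_memory: &usize, v: u32) {
    let idx = *idx_in_memory;            // one load
    if idx < array.len() {
        array[idx] = v;                  // the checked value is the used value
    }
}

// Shape 2: the index is re-loaded for the access, relying on the usual
// "memory doesn't change under me" assumption.
fn checked_write_reload(array: &mut [u32], idx_in_memory: &usize, v: u32) {
    if *idx_in_memory < array.len() {    // first load, used for the check
        let idx = *idx_in_memory;        // second, separate load
        // If the location changed between the loads, the value used here
        // is not the value that was checked.
        unsafe { *array.get_unchecked_mut(idx) = v };
    }
}
```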
Makes sense. When a temporary holds the result of a simple expression with no side effects that is expected to evaluate to the same value each time, the temporary can be reclaimed. An obvious example of this is constant folding. We set a temporary t27 to 42. Well, that can just be 42 everywhere, so we don't need the temporary. But the trust that it will "evaluate to the same value each time" rests on assumptions, and if those assumptions are wrong, things are screwed.
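A toy version of that, borrowing the t27 name from above (the rest is made up):

```rust
// The "temporary" holds a pure constant, so it needs no storage at all.
fn scaled(width: u32) -> u32 {
    let t27 = 6 * 7;   // pure, side-effect-free expression: folds to 42
    width * t27        // the optimizer can just use 42 here; the temporary disappears
}
```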