There are essentially three different flavors of floating-point models: strict, precise, and fast. Which one you get depends primarily on compiler flags, although most compilers default to "precise" (a few default to "fast", but they really shouldn't).
The strict model requires that everything be evaluated at runtime and that the optimizer be aware of the potential for floating-point operations to interact with the floating-point control and status words (i.e., rounding modes and fp exceptions)--essentially all floating-point optimizations are forbidden.
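To make that rounding-mode dependency concrete, here's a minimal C sketch (my own illustration, not from any particular compiler's docs; GCC is known to ignore the FENV_ACCESS pragma, so you'd need something like clang with -ffp-model=strict for the guarantee to actually hold):

    #include <fenv.h>
    #include <stdio.h>
    #pragma STDC FENV_ACCESS ON  /* tell the compiler we touch the fp environment */

    int main(void) {
        volatile double one = 1.0, three = 3.0;  /* volatile to defeat constant folding */
        fesetround(FE_UPWARD);
        double up = one / three;
        fesetround(FE_DOWNWARD);
        double down = one / three;
        /* Under a strict model these must differ in the last bit; under precise
           or fast the compiler is allowed to assume round-to-nearest and fold
           or move the divisions across the fesetround calls. */
        printf("up:   %.17g\ndown: %.17g\n", up, down);
        return 0;
    }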
The precise model allows compilers to ignore the dependency on rounding mode and fp exceptions and treat floating-point operations as side-effect-free pure expressions, but still requires them to generally follow IEEE 754 arithmetic. As such, standard optimizations like code motion or constant propagation are legal, but arithmetic optimizations (like replacing x + 0.0 with x) are generally not.
The fast model allows compilers to also apply the arithmetic optimizations.
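The canonical reason x + 0.0 can't become x is the sign of zero; a tiny sketch (the output spelling is what glibc prints; other libcs may render negative zero differently):

    #include <stdio.h>

    int main(void) {
        volatile double x = -0.0;          /* volatile so the compiler can't fold it away */
        double y = x + 0.0;                /* IEEE 754: -0.0 + (+0.0) is +0.0 under default rounding */
        printf("x = %g, y = %g\n", x, y);  /* x = -0, y = 0 */
        return 0;
    }

Fast-math flags like GCC's -fno-signed-zeros are exactly the license the compiler needs to ignore this distinction.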
Most people who actually care about this level of detail are aware of the panoply of compiler options used to control floating-point optimization.
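For reference, the common spellings (hedged: exact flags and defaults vary by compiler and version):

    clang -ffp-model=strict  app.c   # strict
    clang -ffp-model=precise app.c   # precise (the usual default)
    clang -ffast-math        app.c   # fast (GCC spells it the same way)
    cl /fp:strict app.c              # MSVC; likewise /fp:precise and /fp:fast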
The other case where it doesn't work is that x + 0.0 is defined to quiet an sNaN (because almost every fp operation turns an sNaN into a qNaN)... except even strict floating-point models generally say "fuck you" to sNaN and treat sNaN and qNaN as interchangeable.
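If you want to watch the quieting happen (or not), here's a sketch; __builtin_nans is a GCC/Clang extension and issignaling is a glibc extension (standardized in C23), so this isn't portable:

    #define _GNU_SOURCE   /* for issignaling on glibc */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double snan = __builtin_nans("");  /* signaling NaN (GCC/Clang builtin) */
        double sum  = snan + 0.0;          /* IEEE 754 says the addition quiets it */
        printf("snan signaling? %d\n", issignaling(snan));  /* 1 */
        printf("sum  signaling? %d\n", issignaling(sum));   /* 0 at runtime; a constant-folding
                                                               compiler is free to disagree */
        return 0;
    }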
(Speaking of fast math, my planned title if I ever write all this up into a blog post is "Floating Point, or How I Learned to Start Worrying and Hate Fast Math.")
> and I'm pretty sure that NaN addition, while resulting in a NaN, does not need to preserve the original bit pattern.
"it be complicated"
Except for certain operations (fneg, fcopysign, and fabs are all defined to affect only the sign bit, to the point that they're an exception to the sNaN-always-signals rule; and there are a few operations defined to operate on the NaN payload information), IEEE 754 generally says no more than that a qNaN is generated. There's a recommendation that NaN payloads propagate, and most hardware does do that, but compilers typically don't even guarantee that sNaNs are not produced by operations.
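You can watch payload (non-)propagation by bit-punning; a sketch assuming IEEE 754 binary64 layout:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* Build a quiet NaN carrying an arbitrary payload: exponent all-ones,
       quiet bit set, payload in the low 51 bits. */
    static double nan_with_payload(uint64_t payload) {
        uint64_t u = 0x7FF8000000000000ull | (payload & 0x0007FFFFFFFFFFFFull);
        double d;
        memcpy(&d, &u, sizeof d);
        return d;
    }

    static uint64_t bits(double d) {
        uint64_t u;
        memcpy(&u, &d, sizeof u);
        return u;
    }

    int main(void) {
        double x = nan_with_payload(0x123);
        double y = x + 1.0;  /* most hardware carries the payload through;
                                IEEE 754 only recommends it */
        printf("in:  %016llx\nout: %016llx\n",
               (unsigned long long)bits(x), (unsigned long long)bits(y));
        return 0;
    }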