I don't think this is true. Integer math is used in some game engines because they need to be deterministic between networked computers (and different CPUs round floating point numbers differently). I have written an engine like that, and I know that StarCraft 2 is all integer for the same reason. No one does it because it's faster or easier. It's a pain.
It depends on which 0 you are talking about. The 1-1=0 zero or the 1/infinity zero. Somewhere deep in a physics department there is an equation where the distinction either confirms or denies the multiverse.
Actually, the rounding for basic add/multiply is strictly defined and configurable (some algorithms need rounding rules other than the default "round to nearest even").
If you only use those (maybe including division, I'm not sure), and don't rely on hardware support for sqrt, sin/cos/tan, approximate-inverse, etc., you can totally use them deterministically between different architectures.
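For instance, the rounding mode is a runtime setting in C99's `<fenv.h>`. A minimal sketch (nothing engine-specific; strictly speaking the compiler needs `FENV_ACCESS`/`-frounding-math` for the mode change to be honored):

```c
#include <fenv.h>
#include <stdio.h>

int main(void) {
    /* volatile keeps the compiler from constant-folding the addition */
    volatile float a = 1.0f, b = 0x1p-30f;  /* increment too small for float precision */

    fesetround(FE_TONEAREST);   /* default: round to nearest, ties to even */
    printf("%.10f\n", (double)(a + b));     /* rounds back down to 1.0 */

    fesetround(FE_UPWARD);      /* round toward +infinity */
    printf("%.10f\n", (double)(a + b));     /* rounds up to the next float above 1.0 */

    fesetround(FE_TONEAREST);   /* restore the default */
    return 0;
}
```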
>And computing integers IS faster than computing floats, at least on a CPU.
In the abstract, yes. In reality, not so much. A lot of what floats are used for in games is vector math (you are often in a 2D/3D world), and vector math in fixed point requires a lot more work since you constantly need to shift things down to avoid overflow. Overflow bugs are a constant problem when doing dot products in fixed-point integer math.
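To make the shifting concrete, here is a rough C sketch (the Q-format and names are mine, not from any particular engine) of a 2D dot product in 32-bit fixed point:

```c
#include <stdint.h>

/* Q21.10 fixed point: 1 sign bit, 21 integer bits, 10 fractional bits. */
#define FRAC_BITS 10

typedef int32_t fix32;

/* Dot product of two 2D fixed-point vectors. Each product carries
   FRAC_BITS + FRAC_BITS fractional bits, so the intermediate is widened
   to 64 bits and shifted back down by FRAC_BITS to return to Q21.10.
   Staying in 32 bits overflows as soon as the inputs get moderately large. */
static fix32 dot2(fix32 ax, fix32 ay, fix32 bx, fix32 by) {
    int64_t acc = (int64_t)ax * bx + (int64_t)ay * by;  /* double-width intermediate */
    return (fix32)(acc >> FRAC_BITS);                    /* back to Q21.10 */
}
```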
Another common operation in vector math is the square root (to normalize vectors). Modern CPUs do that very fast, and you can also use clever approximations that exploit the way floating-point numbers are represented in hardware.
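Presumably referring to tricks like the well-known inverse square root popularized by Quake III Arena; a rough C version (magic constant and single refinement step as commonly published, accurate to within a fraction of a percent):

```c
#include <stdint.h>
#include <string.h>

/* Approximate 1/sqrt(x): reinterpret the float's bits as an integer,
   shift to roughly halve the exponent, correct with a magic constant,
   then refine with one Newton-Raphson step. */
static float rsqrt_approx(float x) {
    float half = 0.5f * x;
    uint32_t i;
    memcpy(&i, &x, sizeof i);          /* reinterpret float bits as an integer */
    i = 0x5f3759dfu - (i >> 1);        /* initial guess via the exponent trick */
    memcpy(&x, &i, sizeof x);
    x = x * (1.5f - half * x * x);     /* one Newton-Raphson refinement step */
    return x;
}
```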
Most of what's done in 32-bit floats in games needs to be done with 64-bit integers to manage these problems if you decide to use integers, and that means your data grows, you get more cache misses, and things slow down.
On top of this, everything you do in a game needs to be drawn on the GPU, so you need to convert everything to floats anyway to show it on screen, and converting between integers and floats is also slow.
I never use the square root in games; I always compare squared distances (since the ordering is the same).
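A trivial C sketch of the idea (function name is mine): since sqrt is monotonic, comparing squared values gives the same result without ever taking the root:

```c
/* Is point (px,py) within `radius` of (cx,cy)? Comparing squared values
   preserves the ordering because sqrt is monotonic, so no root is needed. */
static int within_radius(float px, float py, float cx, float cy, float radius) {
    float dx = px - cx, dy = py - cy;
    return dx * dx + dy * dy <= radius * radius;
}
```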
If you try to do with Fixed Point Arithmetic what was intended to be done with Floating Point, you're the problem.
A 32-bit integer can hold values up to 4 billion. If I have that kind of value in a simple game, then yes, I will switch to Floating Point Arithmetic, but when does that use case happen if you're not writing a physics simulation game?
> A 32-bit integer can hold values up to 4 billion. If I have that kind of value in a simple game, then yes, I will switch to Floating Point Arithmetic, but when does that use case happen if you're not writing a physics simulation game?
All the time. Let me give you an example from a game I made. During early R&D I used 32-bit numbers with a fixed point of 11 bits. That means that a normalized vector is between 1024 and -1023. You can do 2 dot products between vectors before it breaks (10 + 10 + 10 bits plus one sign bit). That means that you have to do a lot of shifting down. The world can only be 20 bits large, because you need to be able to multiply world coordinates with vectors without getting overflow.
20 bits is very low resolution for a real-time world. You get problems with things not being able to move slowly enough at high frame rates (this was in 2D). Switching to 64-bit was the right move.
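My back-of-the-envelope reading of that bit budget, as a C sketch (names and widths are mine, not the original engine's):

```c
#include <stdint.h>

/* A 20-bit world coordinate times a 10-bit normalized component already
   uses 30 bits plus sign, so there is almost no headroom left for further
   multiplies -- hence the constant shifting down, or the move to 64 bits. */
static int32_t scale_world(int32_t world_coord, /* magnitude <= 20 bits */
                           int32_t dir)         /* Q0.10, magnitude <= 10 bits */
{
    return (world_coord * dir) >> 10;  /* ~30 bits + sign: right at the int32_t limit */
}
```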
Technically, 32-bit fixed-point numbers have 8 bits more precision than 32-bit floating-point numbers, but the real problem is that it's quite cumbersome to fully utilize the precision offered by fixed-point numbers, as you'd have to keep their numerical range in mind all the time.
The biggest problem with fixed point is that reciprocals behave very poorly; i.e. x·(1/x)=1 and 1/(1/x)=x fail very badly. For these to work you obviously need the same number of representable values above 1 as below 1, like in floating point.
Using fixed point with 11 bits after the decimal point, with numbers as small as 100 (18 bits) we already get errors as large as 100·(1/100) = 0.977.
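That number is easy to reproduce; a quick C sketch with the same Q11 format (names are mine):

```c
#include <stdio.h>
#include <stdint.h>

/* Reproducing 100 * (1/100) in fixed point with 11 fractional bits. */
#define ONE (1 << 11)                        /* 1.0 == 2048 */

int main(void) {
    int32_t x     = 100 * ONE;               /* 100.0 */
    int32_t recip = (ONE * ONE) / x;         /* 1/100 truncates to 20/2048 */
    int32_t prod  = (int32_t)(((int64_t)x * recip) >> 11);
    printf("%f\n", prod / (double)ONE);      /* prints 0.976562, not 1.0 */
    return 0;
}
```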
Modern CPUs, like Zen 2, have vastly more f64 multiplier throughput than 64-bit integer multiplier throughput.
This goes so far that Intel added AVX-512 instructions (the IFMA extension) to expose this 52/53-bit multiplier for integer operations, to accelerate bigint cryptography calculations like RSA and ECC.
They all follow IEEE 754. But that defines the representation of the bits in a floating-point number, not what happens when you operate on them. Different implementations do rounding differently, and there are arguments about how to do this correctly. There can even be multiple ways to represent the same number (this is a reason never to do an == compare on floating-point numbers that have been operated on). Not only have there been differences between CPU makers, but also between different CPUs from the same vendor.
A separate, complicating factor is that floating-point math often doesn't yield the same results with different optimization levels turned on. The reason to use integer math for games is that you want it to be deterministic, so this can be an issue.
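The == pitfall mentioned above is easy to demonstrate (a minimal C sketch, nothing engine-specific; the tolerance value is an arbitrary choice for illustration):

```c
#include <stdio.h>
#include <math.h>

/* Values that are mathematically equal need not compare equal after
   rounding, so an exact == on computed floats is fragile. */
int main(void) {
    double a = 0.1 + 0.2;
    double b = 0.3;
    printf("== : %d\n", a == b);              /* prints 0 on IEEE 754 doubles */
    printf("eps: %d\n", fabs(a - b) < 1e-9);  /* tolerance compare prints 1 */
    return 0;
}
```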
IEEE 754 does also specify operations and their behavior. E.g. section 5.1 (from 2008 edition):
> All conforming implementations of this standard shall provide the operations listed in this clause for all supported arithmetic formats, except as stated below. Each of the computational operations that return a numeric result specified by this standard shall be performed as if it first produced an intermediate result correct to infinite precision and with unbounded range, and then rounded that intermediate result, if necessary, to fit in the destination’s format (see 4 and 7).
Well, the fact is that different hardware rounds differently, so it's the world we live in. My guess is that actually implementing something in hardware "to infinite precision and with unbounded range" might not be realistic.