Deterministic engines are very hard to pull off. I looked into it for my own engine, and decided not to.
The problem is floating point calculations on different processors. Maybe in the 90's that wasn't such an issue, but good luck now. Plus, if you don't need a physics engine, maybe you can stay in integer space.
That would explain deviations that start small and grow over time.
For example a game like Braid also chose not to use a deterministic engine for that reason.
The Factorio developers claim that they didn't have many issues with floats, beyond the trig functions (which they implemented in software).
> Originally we were quite afraid of Floating point operations discrepancies across different computers. But surprisingly, this hasn't been a big problem so far (knocking on the table). We got away with implementing our own trigonometric functions.
AFAIK the basic operations (add/sub/mul/div) work fine, but the trig functions don't [0]. It's not a processor issue; it's a libc (or whatever) issue. Of course you need the trig functions in almost any engine, but at that point you can write them yourself.
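To illustrate what "write them yourself" can look like: a minimal sketch of a deterministic sine built only from the basic operations, assuming (as noted above) that IEEE-754 add/sub/mul/div are bit-exact across platforms. The coefficients here are a plain Taylor expansion, not a tuned minimax fit like a real engine would use, and the name `det_sin` is just illustrative.

```c
/* A deterministic sine sketch: uses only +, -, *, / so every
   compliant platform produces the same bits for the same input. */
static double det_sin(double x) {
    const double PI     = 3.14159265358979323846;
    const double TWO_PI = 6.28318530717958647692;

    /* Crude range reduction into [-pi, pi]; fine for a sketch,
       loses accuracy for very large |x|. */
    while (x >  PI) x -= TWO_PI;
    while (x < -PI) x += TWO_PI;

    /* Taylor series: x - x^3/3! + x^5/5! - ... - x^11/11!,
       evaluated in Horner form for a fixed operation order. */
    double x2 = x * x;
    return x * (1.0 + x2 * (-1.0 / 6.0
             + x2 * ( 1.0 / 120.0
             + x2 * (-1.0 / 5040.0
             + x2 * ( 1.0 / 362880.0
             + x2 * (-1.0 / 39916800.0))))));
}
```

The fixed evaluation order matters: the compiler must not be allowed to reassociate the arithmetic (e.g. no `-ffast-math`), or determinism is lost again.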
It's an IEEE-754 problem: only the operations above plus sqrt are required to be correctly rounded. Just about every other floating point operation is allowed to be less accurate while remaining fully compliant, and the exact answers can differ widely. Even on, say, x86, using SSE vs. the x87 FPU vs. libc (which often avoids both in order to be more accurate) will give different answers.
And the problem hasn't gone away. I can't seem to find the reference anymore, but I'd swear I recently saw a talk from a Supercell fellow describing how they use fixed point today for their online games.
In the 2000's I wrote games for MS PocketPC. It had emulated floating point, which was painfully slow for games. Most games could do with just integers. When I later made some physics-based games, I implemented fixed point, and it went more or less fine.
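A minimal sketch of what such a fixed-point layer looks like, assuming a 16.16 format with a 64-bit intermediate for multiply and divide. The names (`fx_t`, `fx_mul`, etc.) are illustrative, not from any particular engine:

```c
#include <stdint.h>

/* 16.16 fixed point: 16 integer bits, 16 fractional bits. */
typedef int32_t fx_t;
#define FX_ONE ((fx_t)1 << 16)

static fx_t fx_from_int(int i) { return (fx_t)i * FX_ONE; }
static int  fx_to_int(fx_t a)  { return (int)(a >> 16); }

/* Widen to 64 bits so the full product fits before rescaling. */
static fx_t fx_mul(fx_t a, fx_t b) {
    return (fx_t)(((int64_t)a * (int64_t)b) >> 16);
}

/* Pre-shift the dividend to keep the fractional bits. */
static fx_t fx_div(fx_t a, fx_t b) {
    return (fx_t)(((int64_t)a << 16) / b);
}
```

Addition and subtraction are plain integer ops, so every platform agrees bit-for-bit; that is exactly why fixed point sidesteps the whole determinism problem.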
A 3D engine in fixed point (which I never fully finished) gave me a lot of headaches. So glad all those devices now have proper floating point and GPUs.
Quake was the first big game to use floating point for everything and require an FPU, and it was released in 1996. I would guess that by the late 90's plenty of games used floats.