
>And computing integers IS faster than computing floats, at least on a CPU.

In the abstract, yes. In practice, not so much. A lot of what floats are used for in games is vector math (you are usually in a 2D/3D world), and vector math in fixed point requires a lot more work, since you constantly need to shift things down to avoid overflow. Overflow bugs are a constant problem when doing dot products in fixed-point integer math.
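The shifting the parent describes can be sketched in C. Assuming a Q16.16 layout (my choice, not something from the thread), every multiply of two fixed-point values doubles the number of fractional bits, so you must widen to 64 bits (or pre-shift the operands) before shifting back down:

```c
#include <stdint.h>

typedef int32_t fx16_16;               /* Q16.16 fixed-point value */
#define FX_ONE (1 << 16)               /* 1.0 in Q16.16            */

/* Multiply two Q16.16 numbers; the raw product is Q32.32, so a
   64-bit intermediate is needed before shifting back to Q16.16. */
static fx16_16 fx_mul(fx16_16 a, fx16_16 b) {
    return (fx16_16)(((int64_t)a * b) >> 16);
}

/* 2D dot product: every term needs the widened multiply. */
static fx16_16 fx_dot2(fx16_16 ax, fx16_16 ay, fx16_16 bx, fx16_16 by) {
    return fx_mul(ax, bx) + fx_mul(ay, by);
}
```

With floats, the same dot product is two multiplies and an add, with no bookkeeping about where the binary point sits.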

Another common operation in vector math is the square root (used to normalize vectors). Modern CPUs do that very fast, and you can also use clever approximations that exploit the way floating-point numbers are represented in hardware.
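One well-known example of such an approximation (not mentioned by the commenter, but widely known from Quake III) is the fast inverse square root, which derives an initial guess for 1/sqrt(x) directly from the IEEE-754 bit pattern and refines it with one Newton-Raphson step:

```c
#include <stdint.h>
#include <string.h>

/* Approximate 1/sqrt(x) for x > 0 by manipulating the IEEE-754 bit
   pattern, then refining with one Newton-Raphson step. */
static float rsqrt_approx(float x) {
    float half = 0.5f * x;
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);    /* reinterpret float as bits    */
    bits = 0x5f3759df - (bits >> 1);   /* magic-constant initial guess */
    memcpy(&x, &bits, sizeof x);
    x = x * (1.5f - half * x * x);     /* one Newton-Raphson iteration */
    return x;
}
```

The trick only works because the float's exponent and mantissa are laid out so that the bit pattern is roughly a scaled, shifted logarithm of the value; there is no fixed-point analogue.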

Most of what's done in 32-bit floats in games needs to be done with 64-bit integers to manage these problems if you decide to use integers. That means your data grows, you get more cache misses, and things slow down.

On top of this, everything you do in a game needs to be drawn on the GPU, so you have to convert everything to floats anyway to show it on screen, and converting between integers and floats is also slow.




I never use the square root in games; I always compare squared distances (since the ordering is the same).
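For what it's worth, the squared-distance comparison looks like this (helper name is mine); it works because squaring is monotonic on non-negative values, so d1 < d2 exactly when d1² < d2²:

```c
/* True if the point at offset (dx, dy) lies within the given radius,
   comparing squared distances to avoid a sqrt. */
static int within_radius(float dx, float dy, float radius) {
    return dx * dx + dy * dy <= radius * radius;
}
```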

If you try to do with Fixed Point Arithmetic what was intended to be done with Floating Point, you're the problem.

A 32-bit integer can hold values up to 4 billion. If I need that kind of value in a simple game, then yes, I will switch to Floating Point Arithmetic, but when does that use case happen if you're not writing a physics simulation game?


> A 32-bit integer can hold values up to 4 billion. If I need that kind of value in a simple game, then yes, I will switch to Floating Point Arithmetic, but when does that use case happen if you're not writing a physics simulation game?

All the time. Let me give you an example from a game I made. During early R&D I used 32-bit numbers and a fixed point of 11 bits. That means a normalized vector is between 1024 and -1023. You can do two dot products between vectors before it breaks (10 + 10 + 10 bits plus one sign bit). That means you have to do a lot of shifting down. The world can only be 20 bits large, because you need to be able to multiply world coordinates with vectors without getting overflow.
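The bit budget described above can be sketched like this, assuming a Q10 scale where 1.0 == 1024 (matching the ±1024 component range; the format choice and helper name are mine):

```c
#include <stdint.h>

#define Q 10
#define ONE (1 << Q)                   /* 1.0 == 1024 in Q10 */

/* 2D dot product in Q10. Each product uses up to 20 bits plus sign,
   so the sum still fits in 32 bits, but it must be shifted back down
   to Q10 before it can feed into the next multiply. */
static int32_t dot2_q10(int32_t ax, int32_t ay, int32_t bx, int32_t by) {
    return (ax * bx + ay * by) >> Q;
}
```

Chaining a second dot product without the shift would stack another 10 fractional bits on top, which is exactly where the 32-bit budget runs out.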

20 bits is very low resolution for a real-time world. You get problems with things not being able to move slowly enough at high frame rates (this was in 2D). Switching to 64-bit was the right move.


It appears you have a very niche definition of "game", one which excludes basically all 3D games.


Yes, that's exactly why I wrote:

> some game engines

and

> simple games

A 3D game is not a simple game.


Ugh, do you really have to be this nitpicky? Fine, you wrote:

> the approximation is generally good enough for a game

No, it's not generally good enough for a game – only for some games, and not good enough for any game with 3D graphics.


Technically, 32-bit fixed-point numbers have 8 bits more precision than 32-bit floating-point numbers (which only have a 24-bit significand), but the real problem is that it's quite cumbersome to fully utilize the precision fixed-point numbers offer, as you'd have to keep their numerical range in mind all the time.


Speaking of nitpicky, I thought the GameCube's GPU was integer-driven? And it has plenty of 3D games.


The biggest problem with fixed point is that reciprocals behave very poorly; i.e., x·(1/x) = 1 and 1/(1/x) = x fail very badly. For these to work, you obviously need the same number of representable values above 1 as below 1, as in floating point.

Using fixed point with 11 bits after the binary point, with numbers as low as 100 (18 bits in total), we already get errors as large as 100·(1/100) = 0.977.
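These numbers can be reproduced in C, assuming the same Q11 layout (1.0 == 2048; the helper names are mine):

```c
#include <stdint.h>

#define Q 11
#define ONE (1 << Q)                       /* 1.0 == 2048 in Q11 */

/* Reciprocal in Q11: (1.0 * 1.0) / x, truncated by integer division. */
static int32_t recip_q11(int32_t x) {
    return (int32_t)(((int64_t)ONE * ONE) / x);
}

/* Q11 multiply via a 64-bit intermediate. */
static int32_t mul_q11(int32_t a, int32_t b) {
    return (int32_t)(((int64_t)a * b) >> Q);
}
```

Here 1/100 truncates to 20/2048, so 100·(1/100) comes out as 2000/2048 ≈ 0.977, matching the figure above. Floating point avoids this because it has as many representable values in (0, 1) as above 1.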



