Pedantic nitpick on floats: floats are exact. The issues with them are:
- they are binary rather than decimal, so rounding behavior doesn’t match what humans expect
- some floating point operations have rounding/error accumulation
You can multiply and divide a float by 2 all day long and every result will be exact, with no error accumulation. However, adding floats together is a different story :-)
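A quick irb sketch of what that means (my example, not the parent's): scaling by two only changes the exponent, so no rounding happens, while repeated addition of a value with no exact binary representation drifts.

  x = 0.1
  x * 2 * 2 / 2 / 2 == x   # => true: *2 and /2 are exact (barring overflow/underflow)

  sum = 0.0
  10.times { sum += 0.1 }
  sum                      # => 0.9999999999999999
  sum == 1.0               # => false: each addition rounds, and the error accumulates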
That said, decimal types are still much preferable for anything that needs to be “exact” for human usage. It’s very easy to make a mistake with floats, even if they could support your use case.
I think you mean 'exact' as in they're deterministic. I don't know anyone who knows what they're talking about who argues against that, so I think you're not actually disagreeing with anyone.
Other people mean 'not exact' as in they can't represent all real numbers. Which is true.
Integers can't represent all real numbers either. Integers can represent more decimal numbers, which tends to be what humans use. But that doesn't make integers more exact. It does mean that integers will probably more accurately reflect the precision of your inputs. (But that presumes that your inputs are entered by humans, and not floats coming out of a sensor that produces readings as floats.)
Careful with your terminology there: the integers are by definition not the real numbers. All ints are reals, but only an infinitely small proportion of reals are ints.
If, however, you meant "the integer datatypes can't represent all integers", then you probably want to say that instead of mixing maths terminology. And you'll also have to back up that claim because bigintegers are as accurate as you have space to store them in; their limitation is not imposed by their spec, but by your hardware.
Unlike IEEE floating-point numbers, where the limitation is explicitly part of the spec itself. That limitation is literally why floats even work at all.
That's too pedantic. Floats are not exact real numbers because they cannot represent most reals exactly.
There's no need to use a float operation to see this inexact behaviour - floats cannot even represent most reals that humans can write in a computer program. If you write `0.1` in your programming language of choice such that it gets treated as a float, the result is not, in any way, exactly 0.1.
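For instance, in Ruby (any IEEE 754 double behaves the same way), printing the stored value with enough digits shows what the literal actually became. This is just a quick illustration, not from the comment above:

  format("%.20f", 0.1)   # => "0.10000000000000000555"
  0.1.to_r               # => (3602879701896397/36028797018963968), the exact value stored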
I didn’t claim they are _any_ exact real number or that they are _exactly_ the same as the decimal literal they are converted from. I said they are exact, which they are. (And further, that they are binary and not decimal.)
That's certainly true. The rationals (of which BigDecimal cannot represent the whole space) are a 0-measure part of the reals. I would not claim that BigDecimal is an exact representation, which is a fact that should be clear to anyone that tries to calculate 2 * PI in arbitrary-precision finite decimals. Even not considering irrational numbers, BigDecimal cannot handle 1/3 + 1/3 + 1/3 = 1 correctly.
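A quick irb illustration of the 1/3 case (mine; the exact number of digits kept depends on BigDecimal's default division precision):

  require "bigdecimal"
  third = BigDecimal(1) / 3    # truncated to some finite number of decimal digits
  third * 3 == 1               # => false
  third + third + third == 1   # => false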
It might be fair to claim that floats are more inexact - certainly moreso than Rational, but probably moreso than BigDecimal for the numbers that programmers tend to care about.
It's not clear what you're getting at here. Are you just saying that each floating-point value precisely represents a real number? Or are you saying that floating-point arithmetic is often exact? [0]
My own pedantic nitpick: the article states that BigDecimals are [exact]. BigDecimal is only exact in the way floats are exact. BigDecimal has arbitrary precision, but is not able to precisely represent every real. There's no way to come up with a system that would be capable of doing this. No matter how much memory you throw at BigDecimal, you can't precisely represent pi.
I don't think that's a very useful definition of exactness. The article is quoting the Ruby float documentation itself, for one thing.
Perhaps it would be clearer to say that floats and floating point operations are deterministic, so you don't have to convince someone that 0.30000000000000004 is the exact answer to 0.1 + 0.2
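For example (my sketch, in Ruby, but the same on any IEEE 754 implementation):

  0.1 + 0.2          # => 0.30000000000000004
  0.1 + 0.2 == 0.3   # => false, but it is the same "wrong" answer everywhere, every time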
If Ruby's floats are exact, how do they translate arbitrary literals to IEEE 754? Surely the vast majority of literals are not representable in IEEE 754, so any exactness of these floats would solely exist in comparisons between floats, not with literals.
Nice read. Just wanted to add to your part about Floats (decimals) and how they are not unique to Ruby. A lot of developers, especially self-taught developers (myself included), are not aware of this and might never discover it unless they start working with currencies or something similar. I guess formally taught developers learn this in school.
Great article. Just one small tip: instead of using the BigDecimal("123.45") notation, I much prefer typing 123.45.to_d.
Anywhere you used Floats, you can just replace them with BigDecimals. They work just the same. Some people get scared by them because they are always printed in scientific notation in the console, but you can just call `.to_s("F")` on them to see them in a regular format:
  123.45.to_d
  => 0.12345e3
  123.45.to_d.to_s("F")
  => "123.45"
So don't get worried about the display. They behave exactly the same as a Float: you can sum, multiply, and so on, and then call `.to_s("F")` when it's time to display it.
Also, a bare BigDecimal#round will give you an Integer (just like `.to_i` will), but you can also call BigDecimal#round(2) to round to, for instance, 2 decimal places.
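For example (my own sketch, assuming `require "bigdecimal/util"` for `.to_d`):

  require "bigdecimal"
  require "bigdecimal/util"

  3.14159.to_d.round      # => 3, an Integer
  3.14159.to_d.round(2)   # => 0.314e1, i.e. the BigDecimal 3.14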
Also, if you are coding anything related to currencies, I strongly suggest you always use integers to store the amounts in cents. And when you need to divide it for calculations, just call `.to_d`, do your math, and then `.round` at the end, so you discard the fractions of cents. It works wonders, and the math will add up just like in Excel :)
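A hypothetical sketch of that workflow (the names are made up for illustration):

  require "bigdecimal"
  require "bigdecimal/util"

  price_cents = 1999                     # $19.99 stored as an Integer
  share = (price_cents.to_d / 3).round   # BigDecimal math, then round back to whole cents
  share                                  # => 666, an Integer again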
BigDecimal("123.45") != 123.45.to_d. You should always prefer the former. In the latter, 123.45 is first read as a float, which introduces an error because it is not exactly representable as a float.
I was working on a problem yesterday at work where a remote system was sending prices (market data) to us as IEEE 754 binary doubles.
One of the test vectors was 5.80, but when we were printing them out to 15 digits of precision (the maximum a double is guaranteed to preserve) we were seeing "5.80000019073486", where of course we expected to see "5.8".
As it turns out, this is the closest you can get to 5.8 in single precision, so the remote system was obviously sourcing these from a C 'float'.
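You can reproduce the round trip in Ruby by forcing the value through a single-precision float (my sketch, not their actual code):

  single = [5.8].pack("f").unpack1("f")   # nearest single-precision value, widened back to a double
  format("%.15g", single)                 # => "5.80000019073486"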
Everyone says that 'banks never use floating point to represent money'... but when you talk to people actually working in banks and other financial institutions, and actually look at their code, it turns out they're using floating point all over the place.
I can tell you for a fact that at least one large data vendor uses floating point almost exclusively internally :-)
The thing is, it's actually 'fine' until you actually need to do a calculation. Once you get there you need to convert to decimal and round to however many dp's you should be using.
Holy cow. Which market data provider in their sane mind would send decimals as floats? I don't think I've ever encountered binary market data using anything but fixed-point decimals (so essentially, integers) for representing prices.
I don’t understand the direction of the arrows in the class diagram. In some cases they are pointing from a class to one of its subclasses, but in others they point from a subclass to its parent class.