
You can absolutely use doubles for money. Excel does it and so do many other financial tools. There are pros and cons but as long as you do rounding and comparisons correctly it works perfectly fine.


For example, 16.10 is not exactly 16.10 in IEEE floating point. When you do enough operations, and depending on the order of operations, you can wind up several cents off. That sounds small, but it can be enough to give your auditors heartburn. COBOL does BCD arithmetic (not really, but at least conceptually), and it's penny-accurate to 31 digits (per the standard; implementations may have greater accuracy). Frankly, it's stupid that 63 years after COBOL we're still treating money and currency as an afterthought in languages that are supposed to be business oriented. Proper currency handling should be part of the language.
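
A minimal Python sketch of the problem, using nothing beyond the standard decimal module; the exact drift is platform- and order-dependent, so treat the printed values as illustrative:

    from decimal import Decimal

    # The nearest double to 16.10 is slightly larger than 16.10.
    print(Decimal(16.10))    # 16.1000000000000014210854715202...
    print(Decimal("16.10"))  # 16.10 -- exact, what was actually meant

    # Repeated arithmetic lets that representation error accumulate.
    total_float = sum(16.10 for _ in range(1_000_000))
    total_exact = Decimal("16.10") * 1_000_000
    print(total_float)  # typically drifts away from 16100000.0
    print(total_exact)  # 16100000.00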


It would be neat to have a fundamental "money" primitive alongside int, float, string, etc.

It might not be simple to implement, though. Off the top of my head,

* Can you confidently say that $100>¥10 in an offline environment?

* You'd need to support different bases for splitting units to deal with eldritch values such as farthings.

* Where is the line drawn? Is a Bitcoin a valid currency? A gold Krugerrand? A bundle of stock shares? A bushel of apples?

All of that said, I would love to see a world where I could write code like:

    money balance = money.lumber.grade.kilograms(8);


> Can you confidently say that $100>¥10 in an offline environment?

Money is never converted. It is exchanged. Trying to solve this is like trying to answer the question "Is 100 USD > 1 LAPTOP?"

When you turn 100 USD into 90 EUR, you didn't convert it. You exchanged it. You bought EUR, at a price given to you by someone or something exchanging it. This could be a bank, a well-established currency office, or some dude on the street. There is no real difference between all three of those: The third party gave you a price, and now has more USD and less EUR, whereas you have more EUR and less USD.

There are various entities publishing standardized average rates which are calculated after the day closes, based on a variety of datapoints they have access to. Those are often used in, e.g., accounting, to establish the "real" value of something you bought in a currency you don't often use, but it's not true conversion.

If you have, as a datatype, a currency becoming another, there is ALWAYS a "rate" attached to it. So the question "$100>¥10" you asked above requires more data; it should be "$100>¥10 @ 144.28". ANYTHING else is a terrible leaky abstraction. Don't do it. Source your rates automatically from a single source if you like, but make it explicit.

Anyway, a "Money" object really is just this: a precise decimal object with an ISO currency code, the latter being nothing more than a short string drawn from a fixed, built-in set.
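
A rough sketch of that shape in Python (the names and the exchange helper are purely illustrative): the amount is a Decimal, the currency is an ISO 4217 code, and the only way to reach another currency is an explicit exchange at a stated rate.

    from dataclasses import dataclass
    from decimal import Decimal

    @dataclass(frozen=True)
    class Money:
        amount: Decimal
        currency: str  # ISO 4217 code, e.g. "USD", "JPY", "EUR"

    def exchange(m: Money, to_currency: str, rate: Decimal) -> Money:
        # Deliberately no default rate: the caller must say where the
        # number came from, i.e. "$100 > ¥10 @ 144.28".
        return Money(m.amount * rate, to_currency)

    usd = Money(Decimal("100.00"), "USD")
    jpy = exchange(usd, "JPY", rate=Decimal("144.28"))
    print(jpy)  # Money(amount=Decimal('14428.0000'), currency='JPY')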


Currency conversions are a transaction, not an operation: the conversion rate fluctuates constantly, and conversions typically involve fees and tax liabilities. For a money type, I’d go so far as to want it to either disallow, or throw an exception on, any operation that mixes two monies in different currencies.
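
Something along these lines, sketched in Python with an illustrative Money class; the point is only that mixed-currency arithmetic fails loudly instead of silently coercing:

    from dataclasses import dataclass
    from decimal import Decimal

    @dataclass(frozen=True)
    class Money:
        amount: Decimal
        currency: str

        def __add__(self, other: "Money") -> "Money":
            if self.currency != other.currency:
                # Mixing currencies is a transaction, not arithmetic.
                raise ValueError(f"cannot add {other.currency} to {self.currency}")
            return Money(self.amount + other.amount, self.currency)

    Money(Decimal("10.00"), "USD") + Money(Decimal("5.00"), "USD")    # fine
    # Money(Decimal("10.00"), "USD") + Money(Decimal("5.00"), "EUR")  # raises ValueError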


I don't think it's that heavy a lift. First, there are standards bodies that list a "default" set of currencies (ISO 4217), much like we have ISO standard country codes. No one really complains that Narnia isn't a country, that Disneyland isn't a country, or that the Austro-Hungarian Empire isn't one, for ISO locales.

At a bare minimum, it should be a reasonable fixed-point type that correctly handles rounding and intermediate values. So a dollar amount like 123.45 times a rate like 0.3450 yields a result that doesn't exceed 4 decimal places, while intermediate values are extended so we get correct rounding. The destination should probably determine the number of places. That bare minimum wouldn't stop you from comparing yen to dollars, any more than a floating point representing mph stops you from comparing it to a value representing kph.

But there are times when we need to track prices to the nearest tenth or hundredth of a cent. So it should be extensible, so that 123.456 dollars * 0.3450 winds up rounded correctly to the right number of decimal places.
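
Roughly what that looks like with Python's decimal module, as an illustration of the "extended intermediates, destination decides the places" idea (the choice of four places and banker's rounding here is just for the example):

    from decimal import Decimal, ROUND_HALF_EVEN

    amount = Decimal("123.45")
    rate = Decimal("0.3450")

    raw = amount * rate   # Decimal('42.590250') -- intermediate kept at full precision
    result = raw.quantize(Decimal("0.0001"), rounding=ROUND_HALF_EVEN)
    print(result)         # 42.5902 -- the destination decides the number of places

    # Same pattern when the source carries three decimals:
    raw3 = Decimal("123.456") * rate         # Decimal('42.592320')
    print(raw3.quantize(Decimal("0.0001")))  # 42.5923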

You also don't need always-on, real time currency conversion. You could have a conversion type, operator, or method that does safe conversion based on the value I give it. So if I estimate that Yen are about 130 to the dollar, I can just use that. If I happen to write an application that queries a data provider and can populate that in 'real time,' that's up to me.

If you really wanted, you could find a way to create new types that represent currencies that aren't part of the basic implementation. That might mean you need to specify some things like the representation for different locales, or the default number of digits.


"Can you confidently say that $100>¥10 in an offline environment?"

A major problem money has is that it isn't a unit in the sense we usually take the term to mean. We expect, for instance, that a value translated into another unit with a suitable level of precision can be translated back to the original unit without loss, but that's not true for money, even ignoring transaction costs. If "US dollar" is a unit, it is a unit that technically stands alone in its own universe, not truly convertible to anything else, not even other currencies. All conversions are transient events with no repeatability. But that is very inconvenient to deal with, and given sufficient stability of all the relevant values, it's often a good-enough approximation to just pretend it is a unit. But if you zoom in enough, the approximation breaks down.

For that and similar reasons, while you could theoretically write that line of code, it would be implicitly depending on a huge pile of what would in most languages be global state. It would be a dubious line of code.


It looks like math but really it is describing an exchange of goods.


Here's the implementation we use at work. You might find some interesting ideas there. It's nicely documented.

https://hackage.haskell.org/package/safe-money-0.9.1/docs/Mo...


> Can you confidently say that $100>¥10 in an offline environment?

It’s not even clear what the second currency is. US$100 is over 14,000 JP¥ or 725 CN¥.

If you’re offering me to wager whether $100 is more than ¥10, I’ll take the wager that it is.


> Can you confidently say that $100>¥10 in an offline environment?

Yeah, one more reason for it to be a different type.

> Where is the line drawn?

It's not.

As a rule, measurement units have absurdly bad support from computers.


> There are pros and cons but as long as you do rounding and comparisons correctly it works perfectly fine.

This is exactly the issue with using floats where an arbitrary-precision decimal with proper rounding is really needed. Easily solved with a good library and, if your language supports it, a dedicated type, but it's really easy for a dev in a hurry to skip the library and roll some a=b+b*c_rate code that forces a type conversion. The rounding rules are often tied to contracts, and a subtle bug that's off a few mills here and there (total problem created: $2.33) can lead to audits (total cost of audit: $14,800) that cost a lot.


I'm tired of having to drag in another dependency and lose operators if I'm doing money math. I can create an experience in C++ that's almost rational, with operator overloading, but most other languages were designed well after we knew that doubles are not sufficient. And there's more to it than just arbitrary precision. For example, some currencies use three decimal places or none at all; two just happens to be convenient for the euro and dollar. In addition, sometimes you carry prices to 3 or 4 places. But you still want banker's rounding. And I shouldn't be able to add Turkish lira to US dollars, any more than the language allows adding floats and integers, without conversion. Then there's locale-correct display for currencies (e.g. $ vs USD and before or after the amount).
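
A sketch of the rounding and minor-unit part in Python's decimal module (the per-currency table here is just a stand-in for what a real library would ship):

    from decimal import Decimal, ROUND_HALF_EVEN

    # Minor-unit exponents differ per currency (ISO 4217): JPY 0, USD 2, KWD 3.
    DECIMAL_PLACES = {"JPY": 0, "USD": 2, "KWD": 3}

    def round_money(amount: Decimal, currency: str) -> Decimal:
        places = DECIMAL_PLACES[currency]
        quantum = Decimal(1).scaleb(-places)  # 1, 0.01 or 0.001
        return amount.quantize(quantum, rounding=ROUND_HALF_EVEN)  # banker's rounding

    print(round_money(Decimal("2.675"), "USD"))  # 2.68
    print(round_money(Decimal("2.665"), "USD"))  # 2.66 -- ties go to the even digit
    print(round_money(Decimal("2.675"), "JPY"))  # 3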


You can also easily end up with subtly wrong money math using ints (overflows, wrong rounding method, etc.). So you should not use the default math operators in any case.


At least with ints you can capture those issues with intent and handle them properly (as the better libraries that handle money tend to do).


The question is why? What does the double give you over a 64-bit integer? Sure, when you divide and it leaves a fractional part you lose it and need to think explicitly about what happens there, but you need to do the same with doubles to avoid pennies going missing and snowballing into larger errors.
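
For instance, a split in integer cents makes the leftover explicit, so you get to decide where it goes (a quick Python illustration, not any particular library's API):

    def split_cents(total_cents: int, parts: int) -> list[int]:
        # Integer division surfaces the remainder instead of silently
        # rounding it away; hand the leftover cents out one at a time.
        base, remainder = divmod(total_cents, parts)
        return [base + 1 if i < remainder else base for i in range(parts)]

    shares = split_cents(10_000, 3)  # $100.00 split three ways
    print(shares, sum(shares))       # [3334, 3333, 3333] 10000 -- no missing penny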


Using integer representations for currencies becomes very messy when dealing with more than one currency at a time.

The United States dollar is famously subdivided into cents, but is also subdivided into 'mills' (one thousand to the dollar).[0]

The Mauritanian ouguiya is divided into five khoums.

The Madagascan ariary is divided into five iraimbilanja.

The Maltese scudo is divided into twelve tarì, which are divided into twenty grani, which are divided into six piccioli.

Historically, such currencies were ubiquitous. For example, prior to 15 February 1971, the pound sterling was divided into twenty shillings, each of which was divided into twelve pence, a system that originated with Roman currency and was used throughout the British Empire.

Exchange rates are typically quoted in terms of the largest unit, whereas integer representations of currency would need to be done in terms of the smallest unit, so extensive information about currency structures would need to be used to correctly represent exchange rates. Floating point or binary-coded decimal representations are consequently much better.

[0]: https://en.wikipedia.org/wiki/Mill_(currency)
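
A sketch of the bookkeeping this implies, in Python: amounts held as integers of the smallest unit, rates quoted per largest unit, and a per-currency exponent needed to bridge the two (the table is illustrative, not exhaustive):

    from decimal import Decimal, ROUND_HALF_EVEN

    # ISO 4217 minor-unit exponents: how many minor units make one major unit.
    MINOR_UNIT_EXPONENT = {"USD": 2, "JPY": 0, "BHD": 3}

    def convert_minor_units(amount: int, src: str, dst: str, rate_per_major: Decimal) -> int:
        # The rate is quoted in major units (e.g. USD/JPY = 144.28), so scale
        # the integer amount up to major units, apply the rate, scale back down.
        src_major = Decimal(amount).scaleb(-MINOR_UNIT_EXPONENT[src])
        dst_major = src_major * rate_per_major
        dst_minor = dst_major.scaleb(MINOR_UNIT_EXPONENT[dst])
        return int(dst_minor.quantize(Decimal(1), rounding=ROUND_HALF_EVEN))

    # $123.45 held as 12345 cents, at USD/JPY 144.28 -> whole yen
    print(convert_minor_units(12345, "USD", "JPY", Decimal("144.28")))  # 17811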


There are multiple factors:

Doubles can exactly represent all integers up to 2^53 (about 9×10^15, i.e. 15-16 digits), which is good enough for most use cases; you can catch the out-of-range cases (as you should do with ints as well).

If you use long ints you must track the decimal precision alongside the value, which is not always trivial if you use mixed currencies.

Long ints are not guaranteed to correctly round-trip through JSON serialisation/deserialisation.

Doubles are easier to handle in the frontend

Currency math is different enough from regular math that you need special operator functions anyway so it’s not like ints are easier to handle either
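
On the first point, a quick way to see where exact integer representation in a double runs out (a Python check, since Python ints are arbitrary precision):

    limit = 2 ** 53                          # 9007199254740992
    print(float(limit) == float(limit + 1))  # True -- both collapse to the same double
    print(limit == limit + 1)                # False as exact integers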


Using ints forces you to deal with all that. Using floats lets you easily ignore it and sweep it under the rug.


If Excel uses doubles for money, that should be a warning sign. It uses a simple numeric data type (I guess just ints) for freaking dates... I can trace at least a couple of bugs in my career to just that fact.


Apart from its quirks Excel is fine, if you know its limitations. I think the real warning sign would be an analyst/programmer working with Excel and expecting high precision results.


Everything is fine apart from its quirks, if you know its limitations. The problem is when a tool's quirks and limitations are neither exhaustively documented nor can they be inferred from having reasonable knowledge about the base principles of the tool, but are rather learnt by experience (to be read as "through bad experiences").


I find Excel to be fine as long as I never use it.


If you want a simple real-life example (unconnected to the way numbers are stored), here it is (accounting has quirks that arrive unexpectedly).

In the EU, prices to customers are required to be inclusive of VAT, so a price is €60.00 including 10% VAT.

But on an invoice/receipt you have to state explicitly how much is the net and how much is the tax, so 60 / 1.10 = 54.55 and VAT is 54.55 x 0.10 = 5.46, which makes a nice 60.01.

You may be tempted to round down, 60 / 1.10 = 54.54 with VAT 54.54 x 0.10 = 5.45, but this makes 59.99.
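
The same arithmetic in Python's decimal module, just to show that the off-by-a-cent effect comes from the rounding rule, not the number type (which rule you must apply is a legal/contractual question):

    from decimal import Decimal, ROUND_HALF_UP, ROUND_DOWN

    gross = Decimal("60.00")
    cent = Decimal("0.01")

    net = (gross / Decimal("1.10")).quantize(cent, rounding=ROUND_HALF_UP)  # 54.55
    vat = (net * Decimal("0.10")).quantize(cent, rounding=ROUND_HALF_UP)    # 5.46
    print(net + vat)  # 60.01 -- one cent over the advertised price

    net_dn = (gross / Decimal("1.10")).quantize(cent, rounding=ROUND_DOWN)   # 54.54
    vat_dn = (net_dn * Decimal("0.10")).quantize(cent, rounding=ROUND_DOWN)  # 5.45
    print(net_dn + vat_dn)  # 59.99 -- now one cent short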


What do you mean by "correctly"? It sounds like victim blaming. For money, I don't think it's perfectly fine to use numbers where this isn't true.

    0.1 + 0.2 <= 0.3  // sure looks true to me


It's sufficient for quantitative finance and trading but not for accounting. For financial engineering and modeling it doesn't matter if results are a few fractions of a cent off as long as your profit margins/model tolerances are greater than the error, because your broker/bank/exchange will keep track of the exact values in your account. But if you are building a bank/broker/exchange, then tracking it precisely enough for GAAP is now your problem.


>as long as you do rounding and comparisons correctly it works perfectly fine.

At least until you need to add things, at which point you need Kahan's algorithm.
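
For reference, the textbook form of Kahan's compensated summation (nothing money-specific about it; it just keeps the low-order bits that naive accumulation throws away):

    def kahan_sum(values):
        total = 0.0
        compensation = 0.0               # running estimate of the lost low-order bits
        for x in values:
            y = x - compensation         # re-inject what was lost on the previous step
            t = total + y                # low-order digits of y may be dropped here...
            compensation = (t - total) - y   # ...and this recovers them
            total = t
        return total

    values = [0.01] * 1_000_000
    print(sum(values))        # typically not exactly 10000.0
    print(kahan_sum(values))  # 10000.0, or at least much closer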



