Fixed-point numbers have finite precision. As soon as you multiply two fixed-point numbers you risk losing precision unless you use more space for the result. You can't fix your "long chain of calculations" by making it fixed point, as you'll still be rounding things (unless it happens that all your numbers need 54-63 bits of precision and you were wasting bits on the exponent).
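To see the rounding concretely, here's a toy sketch (the scale of 10,000 and the helper names are illustrative, not any real library's API): multiplying two scaled integers produces a value with twice the fractional digits, and rescaling it back forces a round.

```python
# Toy decimal fixed-point with 4 fractional digits (scale 10_000).
SCALE = 10_000

def to_fixed(x: float) -> int:
    """Encode a float as a scaled integer."""
    return round(x * SCALE)

def fixed_mul(a: int, b: int) -> int:
    # The exact product a*b carries 8 fractional digits; dividing by
    # SCALE (with rounding) squeezes it back into 4, losing precision.
    return (a * b + SCALE // 2) // SCALE

a = to_fixed(1.2345)   # 12345
b = to_fixed(0.0007)   # 7
print(fixed_mul(a, b) / SCALE)  # 0.0009 -- exact answer is 0.00086415
```

The exact product 1.2345 × 0.0007 = 0.00086415 needs more fractional digits than the format has, so it rounds to 0.0009: a ~4% relative error from a single multiply.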
Outside of accounting, the two problems with floats are nonassociativity and speed, neither of which is particularly relevant here.
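Nonassociativity just means the grouping of additions can change the last bits of the result, as in the classic example:

```python
# Floating-point addition is not associative: regrouping the same
# three terms produces different rounding.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)  # 0.6000000000000001
print(a + (b + c))  # 0.6
print((a + b) + c == a + (b + c))  # False
```

The discrepancy is in the last unit of precision, which is why it matters for reproducibility (e.g. parallel reductions) but rarely for the magnitude of a financial estimate.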
The risk of losing money does not look like subtle floating-point rounding errors. It looks like someone fucking up a formula in Excel, or the model being wrong, or the model being right but the risk not being hedged. Plenty of people in finance will add percentages as x + y instead of computing ((1+x)*(1+y) - 1), because it's simpler. This works fine because log(1+x) is approximately x when x is small, so adding the raw percentages approximates compounding. It can obviously be made more accurate by taking the log earlier, but plenty of percentage inputs to a model may be rough guesses anyway (who cares if you write 5% or log(1+5%)).
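A quick sketch of the size of that approximation error (the numbers here are arbitrary examples): the naive sum, true compounding, and compounding done via logs all agree to well within the noise of a guessed input.

```python
import math

x, y = 0.05, 0.03  # two returns, e.g. 5% and 3%

naive = x + y                       # 0.08 -- just add the percentages
compounded = (1 + x) * (1 + y) - 1  # 0.0815 -- true compounded return
# Same compounding done in log space: logs of (1+r) add exactly.
via_logs = math.expm1(math.log1p(x) + math.log1p(y))

print(naive, compounded, via_logs)
print(f"error of naive sum: {compounded - naive:.4f}")  # 0.0015
```

The 0.0015 gap (15 basis points on an 8% return) is the cross term x*y that the naive sum drops; for inputs that are themselves rough guesses, it's usually in the noise.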