I do not think IEEE-754 is mandatory for any workload; it is just that fixed-point arithmetic, especially when mapped on top of one of the common integer types, tends to be more complex to implement than simply using float.
With fixed point you have to be much more aware of e.g. the min/max range of values and the precision needed, potentially changing them between different computation steps.
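To make that bookkeeping concrete, here is a minimal sketch of a Q16.16 fixed-point multiply in C (the type name and helpers are made up for illustration, not from any particular library):

```c
#include <stdint.h>

/* Hypothetical Q16.16 format: 16 integer bits, 16 fractional bits. */
typedef int32_t q16_16;

#define Q16_16_ONE (1 << 16)   /* 1.0 in Q16.16 */

/* Multiplying two Q16.16 values produces a Q32.32 intermediate, so you need
 * a 64-bit temporary and a shift back down -- exactly the per-step range and
 * precision bookkeeping that floats hide from you. */
static q16_16 q16_16_mul(q16_16 a, q16_16 b)
{
    int64_t tmp = (int64_t)a * (int64_t)b;   /* Q32.32 intermediate */
    return (q16_16)(tmp >> 16);              /* back to Q16.16, truncating */
}

/* Addition is plain integer addition, but it overflows silently once the
 * result leaves the ~[-32768, 32768) range of the 16 integer bits. */
static q16_16 q16_16_add(q16_16 a, q16_16 b)
{
    return a + b;
}
```

With floats, both operations are just `a * b` and `a + b` and the exponent adjusts itself; with fixed point, every multiply forces you to decide where the binary point goes afterwards.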
If you have any complex math algorithm you should look into these aspects with float too, for error calculation etc. So maybe using fixed point would not be so bad there, but for a lot of "just get it done" tasks people want to just use a float and not bother about it.
If you had a way to quickly translate math-specific programs using floats plus a small amount of metadata into equivalent programs using fixed-point arithmetic, including error analysis/correction*, you could probably get rid of >90% of float requirements, though you would "still" have all that legacy software ;=)
[*] If I remember correctly, someone did that for neural networks using TensorFlow, but I don't remember the details.