Sure, makes sense. The question is really whether some of these applications can tolerate errors on the order of 1%.
I think the idea here is threefold:
1. The vast majority of computations do not need extremely high precision. A billion times more floating-point operations go into rendering game graphics every day than into MRI processing (made-up statistic, but you get the idea). This is the same reason single- and double-precision floating point exist side by side today: most operations only need single precision.
2. Applications that need a particular level of precision can be written with that level of precision in mind. If you know how much precision a given operation provides, and you understand the numerical properties of your algorithms, you can write faster code that is still as accurate as it needs to be (see the first sketch after this list). Most of this work isn't done by ordinary programmers, but by the authors of numerical libraries and the like.
3. Many performance-intensive applications already use 16-bit floating point, even though it has very little native support on modern CPUs. AAC Main Profile even mandates its use in the specification. The world is not new to lower-precision floating point, and it has gained adoption despite the lack of hardware support on the most popular CPUs (see the second sketch below).
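To make point 2 concrete, here is a minimal sketch (my own illustration, not something from the discussion above) of the kind of numerically aware code a library author might write: compensated (Kahan) summation lets a long sum run entirely in single precision while staying far more accurate than a naive float loop.

```c
/* Sketch: compensated (Kahan) summation in single precision.
 * The naive loop silently drops each tiny addend once the running total
 * is large; the compensated version carries the lost low-order bits along. */
#include <stddef.h>
#include <stdio.h>

float kahan_sum(const float *x, size_t n)
{
    float sum = 0.0f;
    float c   = 0.0f;              /* running compensation for lost low-order bits */
    for (size_t i = 0; i < n; i++) {
        float y = x[i] - c;        /* fold in the error carried from the last step */
        float t = sum + y;         /* low-order bits of y may be lost here... */
        c = (t - sum) - y;         /* ...but are recovered into c */
        sum = t;
    }
    return sum;
}

int main(void)
{
    /* one large value plus many tiny ones: naive float summation drops the tail */
    enum { N = 1000000 };
    static float data[N];
    data[0] = 1.0e8f;
    for (size_t i = 1; i < N; i++) data[i] = 0.01f;

    float naive = 0.0f;
    for (size_t i = 0; i < N; i++) naive += data[i];

    printf("naive float: %.1f\n", naive);
    printf("kahan float: %.1f\n", kahan_sum(data, N));
    printf("reference  : %.1f\n", 1.0e8 + 0.01 * (N - 1));
    return 0;
}
```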
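And for point 3, a sketch of handling IEEE 754 binary16 ("half") values purely in software, the way 16-bit floats were processed before CPUs grew native conversion instructions. This is my own simplified illustration (truncating rounding, denormals flushed to zero), not code from any codec specification.

```c
/* Sketch: float32 <-> float16 conversion done entirely in integer code. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* float32 -> float16 bit pattern (truncates the extra mantissa bits) */
static uint16_t f32_to_f16(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    uint16_t sign = (bits >> 16) & 0x8000u;
    int32_t  exp  = (int32_t)((bits >> 23) & 0xFFu) - 127 + 15;  /* rebias exponent */
    uint32_t man  = bits & 0x007FFFFFu;

    if (exp <= 0)  return sign;                 /* flush tiny values to zero */
    if (exp >= 31) return sign | 0x7C00u;       /* overflow/Inf/NaN -> Inf */
    return sign | (uint16_t)(exp << 10) | (uint16_t)(man >> 13);
}

/* float16 bit pattern -> float32 */
static float f16_to_f32(uint16_t h)
{
    uint32_t sign = (uint32_t)(h & 0x8000u) << 16;
    uint32_t exp  = (h >> 10) & 0x1Fu;
    uint32_t man  = h & 0x03FFu;

    uint32_t bits;
    if (exp == 0)       bits = sign;                              /* zero (denormals flushed) */
    else if (exp == 31) bits = sign | 0x7F800000u | (man << 13);  /* Inf/NaN */
    else                bits = sign | ((exp - 15 + 127) << 23) | (man << 13);

    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

int main(void)
{
    float samples[] = { 1.0f, -0.333333f, 3.14159f, 65504.0f };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        uint16_t h = f32_to_f16(samples[i]);
        printf("%12.6f -> 0x%04X -> %12.6f\n", samples[i], (unsigned)h, f16_to_f32(h));
    }
    return 0;
}
```

The half values take 16 bits of storage and round-trip with about three decimal digits of precision, which is exactly the trade this thread is about.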
The idea that this amounts to "let's make ordinary computations less accurate" is a straw man; nobody actually suggested that.