This would be beautiful for protein folding. That particular application is extremely parallel, numerically heavy, and should tolerate the loss of precision very well. It also eats up processing power like a black hole, so a few orders of magnitude speed improvement would definitely be nice.
Based on my experience, different stages of folding should require differing precision.
For ClusPro (protein docking, http://dx.doi.org/10.1002/prot.22835), the first stage uses rough energy functions for global sampling of the protein surface. For these functions we use floats, because the sampling is rigid-body and very tolerant of clashes. However, when it comes to the minimization/refinement stage, we have seen weird things happen with floats, so we use doubles there instead.
Similarly, the functions used in the early stages of protein folding can probably deal with the loss of precision, but the stages that produce high-quality structures would not.
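To make that concrete, here is a tiny sketch of the kind of thing that goes wrong, with made-up numbers rather than anything from ClusPro: late in refinement the per-step energy change is tiny compared to the total energy, and in single precision it can fall below the spacing of representable values, so the optimizer effectively sees no improvement at all.

```python
# Illustrative sketch only (hypothetical values, not ClusPro code):
# small refinement steps can vanish entirely in float32.
import numpy as np

total_energy = -12345.678   # hypothetical total energy of a pose (kcal/mol)
step_change  = 1e-4         # hypothetical per-step improvement during refinement

as32 = np.float32(total_energy) + np.float32(step_change)
as64 = np.float64(total_energy) + np.float64(step_change)

print("step lost in float32:", as32 == np.float32(total_energy))  # True
print("step lost in float64:", as64 == np.float64(total_energy))  # False

# The spacing between adjacent float32 values near this magnitude (~1e-3)
# is larger than the step itself, so the update rounds away.
print("float32 spacing near total:", np.spacing(np.float32(12345.678)))
```

Rigid-body sampling doesn't hit this because it only needs to rank wildly different poses, where the energy differences are huge compared to the rounding error.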
Any type of physics simulation, from protein folding to FEM to real-time game physics, plus image/video processing, graphics rendering, computer vision, speech recognition, neural networks, ...