16-bit integers are good enough for most image-processing algorithms, but you have to code pretty carefully to avoid overflow or loss of precision in your intermediate values.
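As a minimal sketch of what I mean (the function name `box3x3` and the layout are just for illustration, not from any particular library): averaging a 3x3 window of 16-bit pixels can produce an intermediate sum of up to 9 * 65535, which is far outside the 16-bit range, so the accumulator has to be widened.

```c
#include <stdint.h>

/* Hypothetical 3x3 box average over 16-bit pixels.
 * Summing nine values that can each be 65535 reaches ~589k,
 * so the intermediate sum must be held in a 32-bit accumulator. */
uint16_t box3x3(const uint16_t *img, int stride, int x, int y)
{
    uint32_t sum = 0;                  /* 32-bit intermediate avoids overflow */
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            sum += img[(y + dy) * stride + (x + dx)];
    return (uint16_t)((sum + 4) / 9);  /* +4 rounds to nearest instead of truncating */
}
```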
Floating point is much easier to work with, and sometimes makes a noticeable difference in quality. On modern CPUs it's also about as fast as integer processing, if not a bit faster sometimes.
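The same operation in single-precision float (again, just a sketch) shows why it's easier: the intermediate sum stays comfortably inside the float range, so there's no widening or scaling to think about.

```c
/* Hypothetical float version of the same 3x3 average.
 * No overflow bookkeeping: a float accumulator easily holds the sum. */
float box3x3_f(const float *img, int stride, int x, int y)
{
    float sum = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            sum += img[(y + dy) * stride + (x + dx)];
    return sum * (1.0f / 9.0f);
}
```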
On GPUs, it might actually be the other way around -- image processing tasks tend to be limited by memory bandwidth, so you might get better performance with 16-bit integers. But I haven't tried it.