It's a fair point. If more decimal places were necessary, they would use them, regardless of what a double-precision float can represent. Since they aren't necessary, the double is adequate (and already widely available by default across programming languages and system architectures).
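For a sense of scale, here's a minimal Python sketch (the long literal is a hypothetical value, not from the discussion) showing why a double's roughly 15-17 significant decimal digits set the practical ceiling:

```python
import sys

# An IEEE 754 double (binary64) has a 53-bit mantissa,
# good for about 15-17 significant decimal digits.
print(sys.float_info.dig)       # 15 -- decimal digits guaranteed to round-trip
print(sys.float_info.mant_dig)  # 53 -- bits of mantissa

# Digits beyond that are silently discarded on parsing:
x = 40.712775999999999999123456  # hypothetical over-precise value
print(repr(x))                   # 40.712776
```

So even if a source published more decimal places, anything past the double's precision wouldn't survive a round trip through the type anyway.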