I’m not sure a general-purpose system should deviate from established definitions, but in specific programs I certainly bend the rules.
For instance, I made a game, and I might accidentally end up with a modulo operation whose divisor works out to zero. In that app, mathematical accuracy for this single outlier does me no good at all, because it manifests as “might crash randomly” when I want “doesn’t crash, period”. I therefore wrote a “safe modulus” routine that guards against a zero divisor and makes a command decision to return a value anyway. The added robustness against crashes is preferable, since I may never know whether I’ve found every stupid case where the math accidentally works out to exactly zero.
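A minimal sketch of that guard might look like this; the name `safe_mod` and the choice to return 0 when the divisor is zero are my assumptions, the point is just that the function always returns something instead of faulting:

```cpp
// Sketch of a "safe modulus" guard: never crash on a zero divisor,
// just make a command decision and return a deterministic fallback.
// The function name and the fallback value (0) are illustrative choices.
int safe_mod(int value, int divisor) {
    if (divisor == 0) {
        return 0;  // fallback instead of a divide-by-zero fault
    }
    return value % divisor;
}
```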
In fact, more generally, exact zeroes mask lots of bugs. They are often default initial values, which means something might only be working by accident and you’d never notice. Sometimes I use a really tiny float as my “meant to be zero” to tell the two apart: e.g. 0.0001 means “this item was supposed to appear at coordinate 0”, so anything that accidentally ended up at exactly 0 is easier to detect.
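Something along these lines, with the caveat that the constant name, the 0.0001f value, and the check itself are illustrative rather than anything from a real codebase:

```cpp
// Sketch of the "tiny float as intentional zero" trick: a coordinate of
// exactly 0.0f is suspicious (likely a default that nobody ever set),
// while INTENDED_ZERO marks a position deliberately placed at "zero".
#include <cstdio>

constexpr float INTENDED_ZERO = 0.0001f;  // illustrative sentinel value

void check_spawn_x(float x) {
    if (x == 0.0f) {
        // exact zero almost certainly means the value was never assigned
        std::printf("warning: x is exactly 0 -- possibly uninitialized\n");
    } else if (x == INTENDED_ZERO) {
        std::printf("x was deliberately placed at 'zero'\n");
    }
}
```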