
Don't quote me on this, but I recall the rationale for NaN is that NaN is typically the result of a division by 0.

Division by 0 can be thought of as infinity (for the sake of this explanation; mathematicians will cringe), but it is not any particular infinity, in the sense that x / 0 does not necessarily have to equal y / 0. Under that reading, the result of a division by 0, NaN, must not equal itself.

You can use `isNaN`, tho.



I understand the point perfectly. However, if I have a NaN which is captured in a lexical variable (perhaps the result of a division by zero, as you note) then in fact I do have a particular infinity: whatever object is inside that darned variable! If I do another division by zero, then sure, hit me with a different NaN which doesn't compare equal to the first one I got. But don't make my variable not equal to itself.


Normal division by zero gives you Infinity. To get NaN, you have to do something as numerically confounding as divide zero by zero, which isn't any infinity, because the numerator is zero, and which isn't zero or any finite number, because the denominator is zero.


Yes, and some programming languages will let you express that. E.g. in C you can compare addresses with &x == &x rather than values with x == x.


IEEE division by zero gives positive (or negative) infinity (with the sign determined by the sign of the zero). NaN crops up with e.g. sqrt(-1) and infinity - infinity for which there is no way to define a sane answer.


Determining positive or negative infinity from the "sign of the zero" is not sane to begin with. Zero has no sign in mathematics. It's just a representational accident in floating point: there is a one-bit-wide sign field, and for the sake of uniformity, representations of zero carry that field too. Treating these as different is dumb; they are just different spellings of the same number in a bitwise notation.

To drive the point home, this is somewhat like making 0xF different from 15.


> Zero has no sign in mathematics.

Not wrong, but zero having a sign is useful for several complex plane algorithms--it's not just an accident.

As a general rule, anything that is in IEEE 754 (or 854) has a damn good reason for being there, and you had best take some time to understand it or risk looking stupid. A lot of hardware people screamed about a lot of the obnoxious software crap in IEEE 754, so, if something survived and made it into the standard, it had an initial reason for being there even if that reason has gone away with time/the advance of knowledge/Moore's Law.

The original designer's commentary about it is: William Kahan, "Branch Cuts for Complex Elementary Functions, or Much Ado About Nothing's Sign Bit", in The State of the Art in Numerical Analysis (eds. Iserles and Powell), Clarendon Press, Oxford, 1987.



