
Determining positive or negative infinity from the "sign of the zero" is not sane to begin with. Zero has no sign in mathematics. It's just a representational accident in floating-point: there is a one-bit-wide sign field, and for the sake of uniformity, representations of zero have that field too. Treating these as different is dumb; they are just different spellings of the same number in a bitwise notation.

To drive the point home, this is somewhat like making 0xF different from 15.
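
A minimal C sketch of the behavior in question (assuming IEEE 754 doubles): the two zeros compare equal and differ only in the sign bit of their encodings, yet dividing by them produces opposite infinities.

  #include <stdio.h>
  #include <string.h>
  #include <stdint.h>

  int main(void) {
      double pos = 0.0, neg = -0.0;
      uint64_t pbits, nbits;
      memcpy(&pbits, &pos, sizeof pbits);
      memcpy(&nbits, &neg, sizeof nbits);

      /* Equal under ==, but the encodings differ in the sign bit,
         and division by them yields +inf and -inf respectively. */
      printf("pos == neg: %d\n", pos == neg);                 /* 1 */
      printf("bits: %016llx vs %016llx\n",
             (unsigned long long)pbits, (unsigned long long)nbits);
      printf("1/pos = %f, 1/neg = %f\n", 1.0 / pos, 1.0 / neg);
      return 0;
  }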



> Zero has no sign in mathematics.

Not wrong, but zero having a sign is useful for several complex plane algorithms--it's not just an accident.

As a general rule, anything that is in IEEE 754 (or 854) has a damn good reason for being there, and you had best take some time to understand it or risk looking stupid. A lot of hardware people screamed about a lot of the obnoxious software crap in IEEE 754, so, if something survived and made it into the standard, it had an initial reason for being there even if that reason has gone away with time/the advance of knowledge/Moore's Law.

The original designer's commentary about it is: William Kahan, "Branch Cuts for Complex Elementary Functions, or Much Ado About Nothing's Sign Bit", in The State of the Art in Numerical Analysis (eds. Iserles and Powell), Clarendon Press, Oxford, 1987.
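
A concrete illustration of the branch-cut use (a sketch assuming a C11 <complex.h> with the CMPLX macro): the negative real axis is the branch cut of csqrt, and the sign of the zero imaginary part records which side of the cut the argument approached from.

  #include <stdio.h>
  #include <complex.h>

  int main(void) {
      /* Same mathematical point -1, but the signed zero imaginary part
         distinguishes the two sides of csqrt's branch cut. */
      double complex above = CMPLX(-1.0, +0.0);
      double complex below = CMPLX(-1.0, -0.0);

      double complex ra = csqrt(above);
      double complex rb = csqrt(below);

      printf("csqrt(-1 + 0i) = %g%+gi\n", creal(ra), cimag(ra)); /* expected 0+1i */
      printf("csqrt(-1 - 0i) = %g%+gi\n", creal(rb), cimag(rb)); /* expected 0-1i */
      return 0;
  }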



