You could call it an incorrect answer if there were a correct answer to division by zero, but it's undefined instead, with no correct answer. It sounds pedantic, but in math pedantic stuff matters, and apparently you can expand things to define division by zero as zero and not break math: https://www.hillelwayne.com/post/divide-by-zero/
> apparently you can expand things to define division by zero as zero and not break math,
You really can't though.
> “If x/0 is a value, then the theorem should extend to c=0, too.” This is wrong. The problem is not that 1/0 was undefined. The problem was that our proof uses the multiplicative inverse, and there is no multiplicative inverse of 0. Under our modified definition of division, we still don’t have 0⁻¹, which means our proof still does not work for dividing by zero. We still need the condition. So it is not a theorem that a * (b / 0) = b * (a / 0).
This is like saying there's nothing wrong with defining 2 + 2 = 5, and addition will still be associative because (a + b) + c still equals a + (b + c) unless b = 2. Like, sure, you can redefine division to not have the normal properties that it does, and then argue that your redefinition is sound because the theorems only apply to things that have the normal properties of ordinary numbers. But that's not what + means!
If these people really believed the arguments they're making, they would actually define x/0 = 5, or 19, or something along those lines.
Are you objecting to the formal system breaking down or to the deviation from expected meaning? You could just say something like "to simplify error handling, our programming language uses a 'zivision' operator that behaves exactly like regular division except zivision by zero is defined as zero". Then everyone just goes on to do math as usual, unless there's something inconsistent in the new formalism that makes mathematical reasoning break down.
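The proposed "zivision" operator can be sketched directly; a minimal Python sketch (the name and signature are hypothetical, taken from the comment above):

```python
def zivision(a: float, b: float) -> float:
    """Behaves exactly like regular division, except zivision
    by zero is defined as zero."""
    return 0.0 if b == 0 else a / b

# Ordinary cases agree with regular division:
assert zivision(6, 3) == 2.0
# The one deviation from expected meaning:
assert zivision(1, 0) == 0.0
```

Everything downstream then has to be proved about `zivision`, not about division, which is the crux of the disagreement in the replies below.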
> Are you objecting to the formal system breaking down or to the deviation from expected meaning?
I'm saying that's a false distinction, because as soon as you have that deviation from expected meaning, you have valid theorems that silently stop being valid and your formal system quickly breaks down. And while you can redefine your way out of each individual instance of this, everything you redefine just means more and more theorems that don't have their normal meaning which in turn means more things that you have to redefine.
> You could just say something like "to simplify error handling, our programming language uses a 'zivision' operator that behaves exactly like regular division except zivision by zero is defined as zero".
This would be a much better approach, because then existing theorems that use or refer to division are obviously not necessarily true of zivision and if you want to use those theorems to talk about zivision then you have to check (and prove) that they're actually valid first.
Zero is a fine answer for integer division by zero. "Oh noes you can't do that math, my teacher told me so in a school" isn't a brilliant heuristic to work from.
How about modulo zero? That can be defined completely correctly, or it can raise a floating point exception if you decided to define it in terms of divide and also decided divide should do that.
What about floating point? It gave up on reflexive equality and yet programs do things with doubles despite that.
The integer divide zero industry best practice of promptly falling over isn't an axiom of reality. It's another design mistake from decades ago.
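The point above about floating point giving up on reflexive equality is easy to demonstrate; Python floats follow IEEE 754 comparison rules:

```python
nan = float("nan")

# IEEE 754 defines NaN as unequal to everything, including itself,
# so equality on floats is not reflexive:
assert nan != nan
assert not (nan == nan)

# Yet programs still do useful work with doubles: NaN propagates
# quietly through arithmetic instead of halting the program.
assert str(nan + 1.0) == "nan"
```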
>Zero is a fine answer for integer division by zero.
It's not. Division (edit: of a positive integer) by zero is better approximated by +infinity rather than zero.
In real life to divide something into zero groups is nonsensical, thus the number system in programs should reflect this. In the exceptionally-rare case where you want to divide by zero and it makes sense (can't think of a scenario where that's true but let's stipulate), then you can use your language's equivalent of try/catch or (if (zerop x) ...) to get around it.
Don't fuck up normal mathematics for the rest of us just because some lazy programmer doesn't like to add error checking.
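The Lisp-style guard mentioned above, `(if (zerop x) ...)`, has a direct Python equivalent; a sketch of both the explicit check and the try/catch form (function names are illustrative):

```python
def safe_reciprocal(x):
    # Explicit guard, the (if (zerop x) ...) approach:
    if x == 0:
        return None  # caller decides what "no answer" means
    return 1.0 / x

def safe_reciprocal_catch(x):
    # The try/catch equivalent:
    try:
        return 1.0 / x
    except ZeroDivisionError:
        return None

assert safe_reciprocal(0) is None
assert safe_reciprocal_catch(4) == 0.25
```

Either way, the zero case is handled at the call site that knows what it means, rather than being baked into the arithmetic itself.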
> Division by zero is better approximated by +infinity rather than zero.
No it's not. Division by zero is UNDEFINED. How does a calculation return +infinity anyway?
On the positive side of the graph, 1/x approaches +infinity. However from the negative side of the graph 1/x approaches -infinity. So at zero, 1/x is simultaneously +infinity and -infinity, which is not possible. The answer to 1/0 is "there is no answer" which is UNDEFINED.
A reasonable result for a calculation is to return "?" or possibly NULL or nothing, depending on what other parts of the system are expecting.
When performing integer "division" of x by y, you're finding a solution {q,r} to the equation y * q = x - r. When y=0 any choice of q works, and using 0,x is a perfectly reasonable and intuitive way of defining things.
Computer integers aren't the real numbers you learned about in gradeschool. INT_MAX+1 is not greater than INT_MAX. :)
x/0 = program explodes is also a justifiable choice. It is not however more or less fundamentally correct than making the result 0.
Floating point division by zero doesn't (typically) crash programs the way integer division by zero does (a nonzero value divided by zero gives ±infinity, and 0.0/0.0 gives a NaN -- and the programmer is free to turn NaNs into zeros if they like :)).
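The parent's definition can be made concrete; a sketch of a total integer division that picks q=0, r=x when y=0 (the function name is hypothetical):

```python
def total_divmod(x: int, y: int):
    """Return (q, r) satisfying x == y * q + r.

    For y != 0 this is ordinary Euclidean division; for y == 0
    the identity holds for ANY q, so we pick q = 0, r = x.
    """
    if y == 0:
        return 0, x
    return divmod(x, y)

q, r = total_divmod(7, 0)
assert 0 * q + r == 7                # the defining identity still holds
assert total_divmod(7, 2) == (3, 1)  # ordinary case unchanged
```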
>Computer integers aren't the real numbers you learned about in gradeschool
Why would anyone think computer integers are real numbers? Anyone who's given it a modicum of thought will know intuitively they're a subset of "real life" integers, not reals.
>using 0,x is a perfectly reasonable and intuitive way of defining things.
I would argue it's neither reasonable nor intuitive. If you want to create some special data type then have at it, but if the behavior of `int` doesn't approximate the behavior of IRL integers, call your data type something else.
> When performing integer "division" of x by y, you're finding a solution {q,r} to the equation y * q = x - r. When y=0 any choice of q works, and using 0,x is a perfectly reasonable and intuitive way of defining things.
Nope. You forgot that there's a requirement that 0 ≤ r < |y|. Otherwise I could say e.g. 5/2 = 1 (with remainder 3), which is certainly no less valid than saying that 5/0 = 0, but not something that anyone sane wants.
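The constraint at issue is on the divisor, not the quotient: Euclidean division requires 0 ≤ r < |y|. A quick sketch of checking that constraint:

```python
def is_valid_euclidean(x, y, q, r):
    # x == y * q + r alone is not enough; without the constraint
    # 0 <= r < |y|, "5 / 2 == 1 with remainder 3" would also qualify.
    return x == y * q + r and 0 <= r < abs(y)

assert is_valid_euclidean(5, 2, 2, 1)      # the real answer
assert not is_valid_euclidean(5, 2, 1, 3)  # r = 3 >= |y| = 2, rejected

# With y == 0 the constraint becomes 0 <= r < 0, which nothing
# satisfies -- exactly why no choice of (q, r) is privileged there:
assert not is_valid_euclidean(5, 0, 0, 5)
```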
I think not, because in the y=0 case the only sensible r is x: if you remove the mod 0 subset you have the unmodified set of integers itself -- the remainder can only sensibly be an identity in that case.
> if you remove the mod 0 subset you have the unmodified set of integers itself -- the remainder can only sensibly be an identity in that case.
What are you talking about? There are no possible remainders mod 0 (just as there is one possible remainder mod 1 and there are 2 possible remainders mod 2) and there is no sensible definition of the remainder function; defining it to be identity is no less silly than defining it to be, IDK, 7.
Yeah, OP is suggesting that lim x→0 1/x = infinity, and thus that it's an intuitive answer. Unfortunately that's obviously not correct, precisely because it depends on which side of 0 you approach from -- which is exactly why 1/0 is undefined. (You could argue about 0+ and 0-, but I view that more as IEEE 754 weirdness used in niches rather than something that would meaningfully change the situation.)
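The two-sided behavior is easy to see numerically; a quick sketch approaching zero from each side:

```python
# 1/x blows up with opposite signs depending on the side of approach,
# so no single value works as "the" limit at zero:
from_right = [1 / x for x in (1e-3, 1e-6, 1e-9)]
from_left = [1 / x for x in (-1e-3, -1e-6, -1e-9)]

assert from_right[-1] > 1e8   # heading toward +infinity
assert from_left[-1] < -1e8   # heading toward -infinity
```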
> How does a calculation return +infinity anyway?
Not sure what you're actually asking, but floating point representations generally support a concept of +/- inf.
> A reasonable result for a calculation is to return "?" or possibly NULL or nothing, depending on what other parts of the system are expecting.
- I am a moron, and so are the developers of lean, coq and isabelle
- Most of the responders to this thread are failing to think for themselves
- Responders all think "number" must mean "real number"
Reading through the various responses, it's looking somewhat likely that "number" means "real number" for most people here. And division by zero on the reals is not well defined. Which is weird given it's a programming-themed board and your language is far more likely to give you integers mod word size and floating point than real numbers.
By "number" I mean "the number your computer can represent", which appears to have been accidental trolling on my part. The reference to a mechanical calculator struggling was perhaps insufficient.
This is a much clearer argument to me. Theorem provers can be used to ensure certain constraints are met for all possible inputs. It doesn't matter to a theorem prover if 1/0=0, 1/0=infinity, or 1+1=3 so long as you can show the system stays within its constraints. It's not a question of math, it's about what you can prove the machine will do.
I believe most of us doing systems programming and building control systems use IEEE 754, so it's surprising to see a conflicting suggestion. I'd say the biggest thing is that programmers need to be aware of how 1/0 is handled, as 0, infinity, or an exception can all cause undesired results.
Ok now we're talking: important clarifications. Neat read on the type systems. But this part is extremely important:
> The idiomatic way to do it is to allow garbage inputs like negative numbers into your square root function, and return garbage outputs. It is in the theorems where one puts the non-negativity hypotheses.
The equivalent condition here would be for everyone to include "zero-ness" checks on their numeric inputs. But that's awful, because whereas everyone agrees that nullptr is a meaningless pointer, zero is in fact a perfectly good integer/float/whatever. So now you have something worse than null pointers -- which have of course caused us a huge amount of pain ever since being inflicted on the world.
So x / 0 = 0 is still a terrible, terrible idea. But introduce something like an integer equivalent of floating-point NaN, and say x / 0 = NaN, and now your outputs will at least be obviously wrong, instead of just silently wrong.
Thanks for an interesting read. I don't think it's a convincing argument for general purpose languages, though.
In Lean, the zero check is still there, just not in the arithmetic operator itself but in the various division related theorems you need to use in order to do anything useful with the division. But in C/Java/Python/whatever, there are no such guardrails, so 0 or 37 or whatever bogus value is returned from a division by zero will propagate to the rest of the program and cause subtler and harder-to-find bugs.
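The Lean convention described above can be illustrated concretely; a small sketch in Lean 4 (assuming standard library conventions, where natural-number division by zero is defined to be zero):

```lean
-- Division by zero on Nat is defined to be zero:
example : (5 : Nat) / 0 = 0 := Nat.div_zero 5

-- The zero check lives in the theorems instead: to use a useful
-- division lemma, you must supply a proof that the divisor is
-- nonzero/positive, e.g. Nat.div_self requires 0 < n.
example (n : Nat) (h : 0 < n) : n / n = 1 := Nat.div_self h
```

So the "guardrail" never disappears in Lean; it just moves from the operator to the hypotheses of the theorems, which a general-purpose language has no analogue of.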
Throw an exception, because the input is probably invalid and the output definitely will be. Of course, it follows that you need to either plan for the program to crash or catch the exception. But that's still better than just running with wrong numbers.
> On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out? ' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
I mean that seriously: is this just a random 78.6/0.0, or is it part of a string of computations based on spatially related measurements (i.e. physics | geophysics processing)?
The answer you're looking for is the behaviour to trigger when an undefined result occurs.
That behaviour can and does vary with respect to the bigger picture.
Instrumentation can return a NULL (no reading: not a number and not a zero), a zero (meaning zero was measured), or a zero (meaning a non-zero value was measured, but one smaller than the granularity of the analog -> digital bucketing). Piped data can suffer a static burst: data in is good, data out is bad, and may include zeroes.
Desired behaviour may well be to use every value that you can be 'sure' of and run a filter to replace values that are "bad" | suddenly spike | go to zero -- a smoothed "best guess" result is produced for use (or a local limit that the series was trending toward), and anomalies are flagged for highlighting | operator attention in some manner.
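The hold-and-flag behaviour described above can be sketched as a simple filter; thresholds and names here are illustrative, not from any particular system:

```python
def smooth(readings, spike_limit=100.0):
    """Replace missing readings and sudden spikes with the last good value.

    Returns (cleaned, flagged), where flagged holds the indices of
    anomalous samples for operator attention.
    """
    cleaned, flagged = [], []
    last_good = 0.0
    for i, v in enumerate(readings):
        bad = v is None or abs(v - last_good) > spike_limit
        if bad:
            flagged.append(i)
            cleaned.append(last_good)  # best guess: hold last good value
        else:
            cleaned.append(v)
            last_good = v
    return cleaned, flagged

data = [1.2, 1.3, None, 999.0, 1.4]
cleaned, flagged = smooth(data)
assert cleaned == [1.2, 1.3, 1.3, 1.3, 1.4]
assert flagged == [2, 3]
```

A real pipeline would likely trend toward a local limit rather than simply holding the last value, but the shape of the logic -- repair in-band, flag out-of-band -- is the same.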