Does 1+2+3+... Really Equal -1/12? (scientificamerican.com)
197 points by CarolineW on July 16, 2016 | 168 comments



Here’s what’s really going on:

You can define a function f from a subset of the complex numbers to the complex numbers where f(z) = \sum_{n=1}^\infty 1/n^z. Be careful with the domain of this function: the series does not converge for all z. You can plug in -1 and see that symbolically, f(-1) = 1 + 2 + 3 + .... But the series does not converge for z = -1, and it is simply not true that \sum_{n=1}^\infty n = -1/12; the series does not converge; equating it to something is a nonsensical thing to do.

What is going on, then? Even though f is not defined for all complex numbers, there exist functions from the complex numbers to the complex numbers that -- restricted to the domain of f -- are equal to f. They "continue" f to (almost) all of the complex numbers; there is a single pole at z = 1. And if one imposes a restriction on these continuations (namely that they are analytic), then it turns out that there is a unique analytic continuation of f: the Riemann zeta function. And zeta(-1) = -1/12.

Don’t confuse the definitions here. The Riemann zeta function can be defined as the analytic continuation of the series, the series is not defined in terms of the Riemann zeta function!
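To make the distinction concrete, here is a minimal numerical sketch (in Python, assuming the mpmath package is available): where the series converges it agrees with zeta, the partial sums at z = -1 blow up, and yet the continuation is finite there.

  # Sketch: the series vs. its analytic continuation (requires mpmath).
  from mpmath import mp, zeta, nsum, inf

  mp.dps = 15  # decimal digits of working precision

  # For Re(z) > 1 the series converges and equals zeta(z):
  print(nsum(lambda n: 1 / n**2, [1, inf]))  # 1.64493406684823 (= pi^2/6)
  print(zeta(2))                             # 1.64493406684823, same value

  # At z = -1 the series is 1 + 2 + 3 + ...; its partial sums diverge:
  print(sum(range(1, 10**6 + 1)))            # 500000500000, and growing

  # The analytic continuation, however, is finite there:
  print(zeta(-1))                            # -0.0833333333333333 = -1/12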


What bothers me is that it seems there could be many ways of writing 1 + 2 + 3 + ... as the "specialization" of a formal series depending on a variable z, and for which the formal series converges to an analytic function on some domain away from the specialization. I can imagine there's a way of doing this in such a way that the function's analytic continuation to the specialization evaluates to any number you want, not just -1/12. However, I'm having trouble cooking up such an example.


A trivial and artificial example is g(z), defined as the sum of G(n, z) for n from 1 to infinity, where G(n, z) = 0 except if z is -1, in which case it is n.

Thus g(-1) is 1 + 2 + 3 + ... while g(z) is otherwise 0 + 0 + 0 + ... = 0. And then the zero function is the unique analytic continuation of g.

The reason people care about the Riemann zeta function is because of its deep connections to analysis, number theory, and physics.


Hm yeah, I was thinking of something more natural, like requiring that the G(n,z)'s themselves be analytic on some domain. Anyway, sure, the Riemann zeta is important, but I'm not sure how it's canonical.


It indeed makes you wonder whether it is true or not, and whether that justifies the usage. This is one of the many reasons I left particle physics, so I have a biased perspective.


This is a good concern. FWIW, though, there ARE natural summability methods which turn mere indexed sequences of terms into sums, without any particular choice as to how to interpret those sequences as specializations of formal series, which happen to assign the sequence whose n-th term is n a sum of -1/12. For an example of one such method, see the end of my post elsewhere in this thread.


I can't speak for applications in physics, but in combinatorics and analytic number theory there is no magic involved. The idea in combinatorics is that you start by looking at the ring of infinite sequences over C with componentwise addition and Dirichlet multiplication as the product. That is, if a, b : nat -> C we define

  (a <*> b)(n) = \sum_{k | n} a(k) * b(n/k)
It is easy to check that this is a ring, and it has wonderful properties which make it very easy to solve many equations of interest in this ring. This is usually called the ring of "arithmetic functions".
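As a concrete sketch of the Dirichlet product (in Python, with helper names of my own choosing): convolving the all-ones sequence with itself produces the divisor-count function d(n).

  # Dirichlet convolution of sequences a, b : {1, 2, ...} -> C,
  # represented here as Python functions of n.
  def dirichlet(a, b, n):
      # sum over the divisors k of n of a(k) * b(n / k)
      return sum(a(k) * b(n // k) for k in range(1, n + 1) if n % k == 0)

  one = lambda n: 1

  # (one <*> one)(n) counts the divisors of n:
  print([dirichlet(one, one, n) for n in range(1, 13)])
  # [1, 2, 2, 3, 2, 4, 2, 4, 3, 4, 2, 6]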

However, it is difficult to get asymptotic estimates for the coefficients of a series by purely algebraic means. This is the first and last time that complex valued functions enter the picture, but it's a very neat trick. Let's consider two series a, b and define the functions

  A(s) = \sum_{n >= 1} a(n) * n^-s
  B(s) = \sum_{n >= 1} b(n) * n^-s
then we have

  A(s) * B(s) = \sum_{n >= 1} (\sum_{k | n} a(k) * b(n/k)) * n^-s = \sum_{n >= 1}  (a <*> b)(n) n^-s
So we have a mapping from the ring of arithmetic functions to the ring of (partial) complex functions with pointwise addition and multiplication. Glossing over some details for now, this allows you to analyze the function belonging to a sequence to gain information about the sequence itself. In particular, you can use the theory of complex integration and Cauchy's residue theorem to gain information about (partial sums of) coefficients.
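A numerical sanity check of that identity (a sketch, again leaning on mpmath): with a = b = the all-ones sequence, a <*> b is the divisor count d(n), both A(s) and B(s) are Zeta(s), and the product formula predicts Zeta(s)^2 = \sum_{n >= 1} d(n) * n^-s.

  from mpmath import zeta

  # Sieve the divisor-count function d(n) for n <= N.
  N = 20000
  d = [0] * (N + 1)
  for k in range(1, N + 1):
      for m in range(k, N + 1, k):
          d[m] += 1  # k divides m

  s = 3
  lhs = zeta(s)**2
  rhs = sum(d[n] / n**s for n in range(1, N + 1))
  print(lhs, rhs)  # agree to roughly 8 digits; the truncated tail is tiny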

Unfortunately, the world is not quite this simple. The functions we are mapping into typically aren't very well behaved and usually aren't defined on large parts of the complex plane (consider a(n) = n^n). This means that all of our nice tools from complex analysis actually won't work very well!

The whole idea behind "analytic continuations" is that we aren't actually using this mapping! We are constructing a different (partial, injective) ring homomorphism from arithmetic functions to meromorphic complex functions.

The idea behind this is that meromorphic functions are rather restricted in what they can do. In particular, there is at most one meromorphic function A with A(s) = \sum_{n >= 1} a(n) n^-s for s with Re(s) > k, for some k. We define our mapping from sequences to functions by mapping the sequence a(n) to the meromorphic function A(s) with A(s) = \sum_{n >= 1} a(n) n^-s for Re(s) > k for some k, if this function exists.

By the same argument as above, this is a ring homomorphism, and since it is injective we can still use information about the functions to gain information about the underlying sequences.

For example, the Riemann zeta function is not really defined by the equation Zeta(s) = \sum_{n >= 1} n^-s. It is defined to be the unique meromorphic function with Zeta(s) = \sum_{n >= 1} n^-s for all s with Re(s) > 1. In particular, Zeta(-1) has nothing to do with \sum_{n >= 1} n. The latter expression doesn't define a complex number at all, but for the former it is not too difficult to show that Zeta(-1) = -1/12.

The main advantage, though, is that meromorphic functions are very well behaved. This allows us to use Cauchy's residue theorem and Mellin transforms to get very deep results about the underlying sequences. If you play this game with the "von Mangoldt" sequence you can, for instance, derive an asymptotic bound on the density of the prime numbers. This is a surprisingly simple derivation, given that this problem had the world's greatest mathematicians stumped for a hundred years!

Summing up, the mapping or "continuation" you use is chosen (!) so that multiplication of functions corresponds to your chosen multiplication in the ring of sequences and so that you get functions which are as well-behaved as possible. There is a large design space here, and you can find different "analytic continuations" for a given sequence.


This equating of the analytic continuation to the divergent series is done throughout physics, specifically for a process known as integral regularization. For example, it's been a while since I really learned string theory, but IIRC it's this very summation that is relevant to the derived result that spacetime needs to have more than 4 dimensions in order for the photon predicted from string theories to be massless (which, experimentally, photons are).

It makes one wonder if there can be a better theory that doesn't require magic. There is at least one book I know of (from before I became a plasma physicist) which apparently avoids the regularization altogether, but people still do regularization like this in particle physics.


This is a very approachable explanation by [mathematician] John Baez: https://youtu.be/vzjbRhYjELo?t=426. It goes into this with a bit of depth, covering the 'rigorous' treatment (i.e., the analytically 'acceptable' form of the summation via the Riemann zeta function, the radius of convergence, and all that) alongside the naive Euler summation. He goes on to mention a few times why it's fundamental in string theory (not to be confused with superstring theory or M-theory, which are of course different).

A bunch of people had issues with the Numberphile episode where these restrictions were elided (due to time, I'd imagine), but they had another episode featuring the zeta function and a math professor (rather than the physics professors, who were featured on the one that went viral) explaining the summation with more context: https://www.youtube.com/watch?v=0Oazb7IWzbA.


There is a much simpler mathematical treatment of this series that appeared in my class on quantum field theory in the computation of the vacuum energy. In that case, the series was treated as

lim \epsilon -> 0+ ( \sum_{n=1}^\infty n e^{- \epsilon n} + ...),

that is, the series was multiplied by a decaying exponential function with a rate of decay that goes to zero. This sum can easily be evaluated, and for small epsilon it takes the form

sum = 1/epsilon^2 - 1/12 + ...

Crucially, there was another term in the calculation that naturally appeared and canceled the 1/epsilon^2. Without that other term, the sum would of course be infinite as epsilon -> 0.

This is much simpler than analytic continuation through the complex plane, and again, this is how the physics calculation appears in QFT courses. There is no need to appeal to complex analysis here, which leads to all of this mysticism and confusion.
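For the curious, this expansion is easy to check numerically, since the regularized sum has the closed form x/(1-x)^2 with x = e^-epsilon (a quick sketch in Python; nothing here is specific to QFT):

  import math

  def regularized_sum(eps):
      # sum_{n >= 1} n * exp(-eps * n) = x / (1 - x)^2 with x = exp(-eps)
      x = math.exp(-eps)
      return x / (1 - x)**2

  for eps in (0.1, 0.01, 0.001):
      print(eps, regularized_sum(eps) - 1 / eps**2)
  # 0.1   ~ -0.083292
  # 0.01  ~ -0.083333
  # 0.001 ~ -0.083333  -> the finite part tends to -1/12 = -0.08333...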


That's completely unreadable.

So how exactly is it that by multiplying a regular sum of positive numbers by a decaying exponent do you get a negative number when you take the limit?


I think the argument goes like this:

1) You introduce a family of series, parametrized by epsilon, whose terms, for small epsilon, closely approximate your original series. Sort of; obviously for large n they don't. But the idea is that if you pick any N and delta you can pick epsilon such that for n < N the approximation is within delta of the actual terms.

2) You show that the series in this family can all be summed, and the sum of each one is 1/epsilon^2 - 1/12 + O(epsilon^2).

Now of course what this means is that the sums blow up as you approximate your original series better and better, since 1/epsilon^2 gets large. That's good, because your original series totally diverges off to infinity. ;)

The part after this point I'm less clear on, but it sounds like in the computation involved what you actually have is your (divergent) series 1+2+3+... plus some _other_ (also divergent, going off to negative infinity) stuff. And that you might be able to arrange things such that the other divergent stuff looks like -1/epsilon^2, cancels out the 1/epsilon^2 from your approximation, and you come out with the sum of the two things being -1/12.

The obvious issue here is that once you start adding up divergent things by rearranging terms and telescoping you can come up with whatever answer you want: see <https://en.wikipedia.org/wiki/Riemann_series_theorem>. So this procedure all only makes sense if there are some sort of fundamental reasons to think that this particular rearrangement is the "right" one in some sense.


Thank you for clarifying the approach (and without trying to use latex!) :).

The issue is, of course, the illegitimate manipulation of a diverging series, which was the exact issue that prompted the original article (due to Numberphile doing it) in the first place.


One thing I _will_ say for the "decaying exponent" thing is that it's somewhat similar in concept, but only somewhat, to the "smoothed sums" thing described in <https://terrytao.wordpress.com/2010/04/10/the-euler-maclauri.... That is, if you use "e^(-x)" as your "cutoff function" and set epsilon to 1/N the two start looking quite similar.

Now e^(-x) is a totally bogus "cutoff function" per the definition in Terry's blog post, since it is not compactly supported, but it _is_ bounded, _does_ equal 1 at 0, and drops off fast enough that for practical purposes it can be used to do smoothed sums. In particular the smoothed sums will converge for most cases (e.g. anything where the sequence we're "summing" has at most polynomial growth will do so), which means you can at least try to do the rest of the analysis. I suspect, but have not checked, that the other places where compact support is used in his presentation also work out for the sorts of sequences we're talking about.

Either way, the upshot is that in some sense you have some sequence of approximations to your "actual" sum, indexed by N, and you show that for large N they all look like "power series" in 1/N which allows some finite number of negative exponents and all the approximations have matching coefficients for the negative exponents and the same constant term. And then you compute that constant term. Calling that the sum of the series is nonsense, of course, but it can still give you interesting information about something, maybe.


You don't. You get -1/12 plus a positive infinite expression, which is then reduced by a positive infinite expression later. It's theoretical physics, not math.


> which leads to all of this mysticism and confusion.

Funny, I thought the entire point of this 'simplification' was not to be mystical and hand wavey.


I would say it's much like f(x) = sqrt(x) over the real numbers. sqrt(-16) doesn't exist, but it is still convenient to think about the square root of a negative number (say, for factoring polynomials). In this case, our function happens to have a representation at f(-1) equivalent to 1 + 2 + 3 + ..., and we can calculate f(-1) = -1/12 much like sqrt(-16) = 4i once we introduce i = sqrt(-1).


Tl;dr: No it doesn't. Under some unintuitive definition for infinite summation, one that is useful in some physical calculations, it does, which is surprising.

Under the normal rules which hold for direct use obviously the answer is positive infinity just like you would expect, you're not stupid and you could be a mathematician if you wanted to.

edit: For fun, a short story:

Muhammed is on top of a strangely shaped mountain that gets one step wider with every step down. The mountain is so high he can't see the bottom, yet Muhammed wants to move this mountain. So Muhammed starts fetching horses and tying them to the mountain with ropes to move it. That's a direct use of this equality, and you can't stand from afar, look at the scene, and say "my, I think that's about -1/12 horses Muhammed is fetching". You'll see Muhammed taking an infinite amount of time fetching an infinite number of horses, and you'll definitely see him do it more than once.


How does an infinite sum of integers appear in nature (physics)?


It appears in some quantum field theory calculations, in particular for the Casimir effect.

In simple words, the Casimir effect is a force that emerges between two parallel conducting plates. The force is proportional to the sum of the energies of all possible standing electromagnetic waves between the plates. In the calculation of this force a divergent series, the sum of all natural numbers (or their powers), appears, and physicists use 1 + 2 + ... = -1/12 to evaluate it (or the continuation of the zeta function at other points where appropriate).

https://en.wikipedia.org/wiki/Casimir_effect#Derivation_of_C...

and

https://en.wikiversity.org/wiki/Quantum_mechanics/Casimir_ef...


I've never seen a physics book that treats this using the zeta function, except popular articles that try to present this calculation as mysterious. In practice, in my QFT class we needed to compute the sum

lim \epsilon -> 0+ ( \sum_{n=1}^\infty n e^{- \epsilon n} + ...),

that is, the series was multiplied by a decaying exponential function with a rate of decay that goes to zero. This sum can easily be evaluated, and for small epsilon it takes the form

sum = 1/epsilon^2 - 1/12 + O(epsilon^2).

The 1/epsilon^2 term (which goes to infinity) drops out of the final physical result when you do the calculation properly.


Well I'm not a physicist, but I think at the very least we perceive time (locally) as proceeding in a linear fashion infinitely.

My mountain example obviously couldn't happen in the physical world. I suppose in that case you might as well substitute the infinite value for an arbitrary large one. Which is not really what infinite values are about in mathematics, as they are more about describing the (imaginary?) limit of a divergent series.

I guess my point is more that for a mathematician it would probably be obvious that when you talk about the limit of a divergent series it could be any imaginary or intermediary value. But for a layman infinite values and infinite series are interpreted as larger than any value you can come up with, and more than any repetition you can write down. So any explanation for this equality should, I think, involve first deconstructing that.


Are you suggesting that time extends infinitely into the past? Because if it doesn't, there will never be an infinite amount of time, so there won't be any point in time when this -1/12 becomes relevant to physics in this sense.

> I guess my point is more that for a mathematician it would probably be obvious that when you talk about the limit of a divergent series it could be any imaginary or intermediary value.

I'm a mathematician and I might agree (not entirely sure what you mean). But -1/12 is a concrete value so it doesn't apply here.


Why can't time begin at a point in the past, but extend infinitely into the future?


Well it might, but in what calculation would the fact that time extends infinitely be a factor? You can say all past time resulted in the current moment, but as far as I know there's no such thing flowing from the future into now. So any physics calculation you do will deal with finite amounts of time.


I won't pretend to know every possible calculation that might be desired, but I can imagine any sort of prediction or simulation about the future may want to use such an infinite view of time going forward.


Length of the coast of Britain?


That assumes details can be arbitrarily small, which Planck refuted.


We don't have to get anywhere near the Planck scale. If you trace the 'edges' of atoms you'll end up with a finite multiple of a meter-scale or kilometer-scale measurement. And that's the end of the fractal.


Also, at an atomic scale, the word "coast" doesn't really make sense any more...


Refuted? I thought that hypothesis was far from proven.


Not by Planck, but by a few others after him. https://en.wikipedia.org/wiki/Quantum_spacetime


The Fourier transform, which is used in classical wave theory (optics and electricity) and in quantum mechanics.


It doesn't mean that there are an infinite number of real things that add up, it means that the mathematics we use to model physics comes up with infinite sums of abstract things.

Nevertheless, there are infinite sums of "real" things in physics too. I have put "real" in scare quotes because it turns out they aren't real :)

In quantum electrodynamics, the charge of an electron turns out to be infinite. And it turns out that in Real Life, the charge of an electron is indeed infinite. Ish.

... but we know it isn't, right?

So what happens is that the real electron gets surrounded by positively charged "virtual" particles. Virtual particles are basically quantum probabilities of a particle appearing out of nowhere with its antiparticle (among other things). So you can say that with some probability, that particle is there. Since there's an electron nearby, the positively charged particle is attracted to the electron, while the negatively charged antiparticle is repelled. This screens the electron charge.

With an infinite number of virtual particles, the electron's charge is screened enough to become finite again. Basically, we subtracted two infinities and got something finite. The subtraction done here is called renormalization -- and a similar thing is being done in the -1/12 sum. While mathematics tells us that divergent series can be rearranged to get any "sum", this trick is often used in physics -- provided you can justify that rearrangement.

In fact, if you probe an electron hard enough (by bombarding it with other charged particles with tons of energy), its apparent charge increases since the particles used to measure its charge "pierce" the shielding.

Of course, this is all really a fancy way of saying that charge itself is energy-dependent, and what we call charge is actually the 0-energy charge.

But for modelling purposes, virtual particles work better, and thinking about things in those terms gives a physicist a cleaner abstraction boundary to deal with. You get infinities everywhere, though.

This is basically an example of the pattern I'm talking about. Abstractions in the model may have all kinds of infinities popping up. In the real world, these don't really manifest themselves because they're not directly linked to observables. You can apply your model to your detection mechanism to get values for non-observables and say "hey, look, an infinity", but that's really circular logic. The "Real Charge" of an electron isn't something we see. Virtual particles aren't something we see; unless we make them into real particles, but you can't do that to the infinite virtual particles around, so you'll never see an infinity.


> No it doesn't. Under some unintuitive definition for infinite summation, one that is useful in some physical calculations, it does, which is surprising.

It's not surprising, you can pretty much get any result you want like this.


[flagged]


I wouldn't assume my definition is a better one than any other. But, just like the article's author, I am confounded by the idea that you can communicate this equation on a public forum without making it absolutely clear that you are interpreting the equation in a way that is different from the way an undergraduate or just a regular person would interpret it.

I get how it's fun for intellectual people to come up with new definitions for things that expand our knowledge and understanding of the universe, and to tease people who have the old simple understanding with seemingly impossible quandaries. But I would like to emphasize that I think this teasing is counterproductive. It leads people to believe mathematicians operate in a field that is fully disconnected from reality. Either take the time to explain how you define your construct, so they can truly appreciate the usefulness and ingenuity of the equation, or use different symbols and present it as a valuable yet opaque contribution to practical mathematics by showing its applications. Both cases will yield the much-deserved admiration.


What you're actually saying: "Make math simple enough for regular people to understand"

Problem with what you're saying: Hundreds and Hundreds of years of history and tradition and the sheer knowledge that's been built up in the current way we do Maths.

Another problem: It's not teasing if the ideas are actually complicated. Which this one most certainly is.

Another problem: Not all Math is practical and applied, some of it almost certainly operates in a field that is "disconnected from reality".

Another problem: Your solution of using different symbols has the same problem as XKCD standards.

Another problem: "Regular people" is vague. People fear math for a reason. They can suck at it.

Another problem: This idea is also fairly old. So your part about "old simple understanding" also doesn't hold true.

Only thing we can agree on: Math can suck at communicating ideas if you're not used to it and/or don't have the context and knowledge.


I think you're missing one important consideration, here: they're saying "make math simple enough for regular people to understand when your audience is regular people."

Scientific American is targeted at educated laypeople. The author's job is to communicate with that target audience. And it absolutely is a tease to use a headline that, to that audience, will come across as a bizarre indication that they're actually clueless.


He didn't say "bad definition". He said "unintuitive" and I agree. The video is obviously meant for an audience that is not intimately familiar with infinite series or their applications in string theory. I'm sure most people found it most unintuitive as it is very much in conflict with how sums behave in everyday life. The fact that the video failed to discuss the important details is well worth criticizing, as the article explains.


> The arrogance here is breathtaking.

Oh we can definitely agree to that

> that what you consider the bad definition is actually the good one

Sum is pretty well defined for the different sets used in mathematics. In none of them is the sum of infinitely many positive numbers a negative number.

Here's a definition of a bad definition: one that is ambiguous.


I'd say that's a fail.

If I have a circle of circumference 5,

the sum of 1+2 is equal to both 3 and -2.


Having two possible answers is not ambiguous, if you're defining a position on your circle in two ways.

Sum continues to work the same on your circle (commutativity, associativity, identity element, and a + 1 is the successor of a).


The definitions are identical to a number line: + is a move to the right, - is a move to the left.

The point is, in "the universe" most "infinities" are actually finite circles that you can travel along "infinitely", because you eventually arrive back where you started.

And -1/12 just means 1/12 to the left before you get back to where you started moving right.


What are you talking about? How is it ambiguous?


Have you?


I've not checked this myself, but my assumption is that this also applies when defining the number line as a finite circle.

For example, on a circle of circumference 15, 14 is also equal to -1, 13 is equal to -2, etc.

Which says the sum of 1+2+3... averages to -1/12.


This is normally called the integers modulo N, and this is not a useful way to look at this infinite sum. One way to tell is that if you didn't already know the answer, there's no way to get it from the construction.


Nonsense. Since when did 1.2 modulo 2 equal -0.8?


Er, if you're working in the reals modulo a number, then all numbers that differ by a multiple of the modulus are regarded as equal (or equivalent). So in the reals modulo 2, it is indeed the case that 1.2 is equal to -0.8.

Why do you think otherwise?

But now you've moved away from talking about the integers mod N, which is what zeroer was talking about [0], which was in response to you talking about the numbers wrapped around a circle [1]. Their response to you seemed reasonable, but I don't understand why you leaped to talking about real numbers, nor why you claim that 1.2 is not equal to -0.8 when working modulo 2.

[0] https://news.ycombinator.com/item?id=12106633

[1] https://news.ycombinator.com/item?id=12106409


Because getting 0.5 for the sum of 1-1+1-1+... requires real numbers. I never restricted it to integers.

The point of the construct is that it applies to a circle of arbitrary/unknown length: 14, 12.2, 500,000 million billion point 6 light years.

And the average of the sum of 1+2+3+4+... will be/tend to -1/12.

And it's easy to test: just use a signed x-bit number.


  > ... to get 0.5 for the sum of
  > 1-1+1-1+.... requires real numbers.
Actually it doesn't, it only requires the rationals.

But it's clear now that you're not really talking about maths at all, so the comment about existing, established theory about modular arithmetic doesn't really help. You seem to be doing something, well, different.

And regardless, in the long-established theory of modular arithmetic, 1.2 is equal to -0.8 mod 2, regardless of you claiming that it's nonsense.

So at this point I have no idea what you're talking about.


meh, I tend to skip thinking about the rational numbers when I move onto this hardcore theory stuff :p

There are two ways of constructing a number line from -inf to +inf

The first is that "nothing exists" to the left of -inf or to the right of +inf. The other, more useful, is that -inf = +inf + 1 and +inf = -inf - 1 (or -inf = +inf, I never remember which is the more useful), and they form a loop, such as that constructed by a signed integer, e.g. with an 8-bit number, 127 + 1 = -128.

This has nothing to do with modulo afaik (but all the basic construct stuff is related) and is more to do with every dimension being curved in another (meaning they always form such loops).


>Under the normal rules which hold for direct use obviously the answer is positive infinity just like you would expect, you're not stupid and you could be a mathematician if you wanted to.

There is no answer for an infinite sum. It is impossible to sum an infinite quantity of integers. The answer is definitely not positive infinity, as that is not a number and the sum of integers must be an integer.

The article points out a few times that such a "sum" is undefined.


The main point of this calculation is tricking laypeople by sneakily changing definitions. It's a bit like a school kid asking you to deny something embarrassing then informing you that it's Opposites Day. But there's no need to accept their definition. If they don't specify then the best definition is the commonly used intuitive one, under which it's indeed possible to sum infinite positive integers, resulting in positive infinity.

This number system was used to invent calculus, and worked just fine for over 150 years despite theoretical unsoundness. And it turns out that it's possible to formally define a provably consistent number system that obeys our intuition, the hyperreal numbers. See:

https://en.wikipedia.org/wiki/Non-standard_analysis


> If they don't specify then the best definition is the commonly used intuitive one, under which it's indeed possible to sum infinite positive integers, resulting in positive infinity.

I disagree that it is possible to sum an infinite amount of integers. It would take infinite time and space to perform the calculation. There is literally no end to the integers, so the calculation would never complete.

I also still claim that the result cannot be positive infinity due to the definition of addition on integers. The result of addition of integers must be another integer and positive infinity is not an integer.

I do agree with the article however, that one can take the limit of a well-defined infinite series; but the limit is not the sum, only the bound that will never be exceeded no matter how long you are able to continue adding numbers for.

I completely agree with you and the article that 1+2+3+...=-1/12 is a sneaky trick and that the definition should be rejected.


>I disagree that it is possible to sum an infinite amount of integers. It would take infinite time and space to perform the calculation. There is literally no end to the integers, so the calculation would never complete.

That same argument gives you Zeno's paradox.

You can sum a pattern of numbers in O(1) time if you use logic instead of brute force. It doesn't matter if physically spending O(n) time on something is impossible when you only need O(1).


>That same argument gives you Zeno's paradox.

I believe Zeno's paradox is on the rationals, not integers.

>You can sum a pattern of numbers in O(1) time if you use logic instead of brute force.

There may be closed solutions for finite summation patterns, but infinite summation patterns of integers have no solution (by definition).


Reread the parent comment. In the hyperreals, the reals are extended by infinitesimals and positive and negative infinity. Two things happen there: 1. You no longer have +:Z -> Z, you have +:R'->R', which means that plus can be closed under the hyperreals.

Also, yes, you can't literally compute an infinite sum, but among any crowd that has likely taken calculus 1, you can place implied limits. :P (which aren't actually needed in the hyperreals because it HAS infinity, but whatever)


>I completely agree with you and the article that 1+2+3+...=-1/12 is a sneaky trick and that the definition should be rejected.

This is not something you can agree or disagree on. You can make physical calculations with this result and get a prediction that you can measure and confirm. This result is sound.


You can definitely agree or disagree on whether a definition should be rejected. If I tell you I want to replace 'five' with 'fish', even though my new system can be used to calculate things, you should tell me it's a terrible idea.

In this case you might find the -1/12 useful, but have the opinion that they really should not be using '=' as a shorthand for what they're doing with the zeta function.


> There is no answer for an infinite sum. It is impossible to sum an infinite quantity of integers.

I agree. Infinity is a process that can yield a number but is not an actual number, and, Cantor et al. notwithstanding, there is no such thing as a "completed infinity" other than terminating it at a finite step. If you are careful, and in certain contexts, you can use the "limit" of an infinite converging process, but you must make that assumption explicit to avoid errors.

All these bizarre math tricks rest on treating it as a number when it's undefined. It's like those puzzles I read as a kid that "prove" 1=0; they typically depend on an implicit division-by-zero step, which is also undefined, just like infinity. Once you start working with the undefined you have to be very careful; even Gauss made errors when he was laying the groundwork for infinite series. To the degree that this math has ANY validity, it is in the context of some esoteric and specialized area of math, and it is NOT appropriate to foist it on the general public as a general result. The motive in such attempts is to impress or intimidate or destroy math (nihilism), which I find despicable.


Just watched the entire video. Using the same "proof" that they used, I can also prove that 1+1+1+1.... = 0

Proof:

S1 = 1 + 2 + 3 + 4.....

S2 = 0 + 1 + 2 + 3 + 4...

S1 - S2 = (1 + 2 + 3 + 4 + ...) - (0 + 1 + 2 + 3 + ...) = 1 + 1 + 1 + 1....

S2 == S1 (by definition, since all you're doing is adding a 0) => S1 - S2 == 0

Therefore 0 = 1 + 1 + 1 + .....

Obviously this is pure nonsense. You can't just "shift things around" and use elementary mathematics when dealing with infinite series that don't converge. Maybe there's a more convincing proof out there, but the one they presented in the video is bogus.


> Obviously this is pure nonsense.

This is begging the question. Why can't 1 + 1 + ... = 0?

Also, I wouldn't be so sure that S2 == S1. You can't re-arrange infinitely many terms in an infinite series and still be guaranteed the sum is the same.


S2 is identical to (0 + S1). Are you suggesting that (0 + S1) != S1 ?


Precisely. Adding 0 to the front of an infinite series is shifting every term by one to the right. It's not clear that shifting terms in series keeps the sum the same. For instance, re-arranging infinitely many terms in conditionally convergent infinite series changes the sum.


> Adding 0 to the front of an infinite series ... not clear that ... keeps the sum the same

I don't know if I would go that far... but I agree with the general spirit of your comment. Which is also the point of my original post. If you think that my appending a zero calls my proof into question, the proof presented in the video takes far more dubious and horrific liberties.


I don't know for certain if (0 + S1) ?= S1, but infinite series require care. Consider this:

Let S1 = 0 + 0 + 0 + ... = 0

Then surely, S1 = (1 - 1) + (1 - 1) + ... = 1 - 1 + 1 - 1 + ...

-1 + S1 = -1 + 1 - 1 + 1 - 1 + ... = (-1 + 1) + (-1 + 1) + ... = 0

But then -1 + 0 = 0


Your error is here; this equality is wrong: -1 + 1 - 1 + 1 - 1 ... = (-1 + 1) + (-1 + 1) ...

In the first series there are 2 different elements (1 and -1), and the series can end at either of them, rendering the end result of the sum uncertain. In the second one there is only one element, (-1 + 1), which is 0, so wherever you end it the result is always the same.


My whole point was there are operations that work in finite mathematics that don't work on infinite series, so yes, I didn't prove mathematics inconsistent, I just proved grouping is illegal for divergent infinite series :)

To be clear, yes

(1 - 1) + (1 - 1) + ... != 0 + 0 + ... either


> To be clear, yes

> (1 - 1) + (1 - 1) + ... != 0 + 0 + ... either

I don't see why. I understand the sets themselves are not equal, but the sums of the elements of those sets are equal at any given index.


It's not obvious that S2 == S1.

I can see some differences: for any finite N > 0, it's false that the sum of the first N terms of S2 is the same as the sum of the first N terms of S1. And what do we know about the infinite sum? Maybe you're right, but you'd have to prove it; they are definitely not equal "by definition"!


You're right - "infinity" is more of a type than an actual value - S1 and S2 have different counters.


This is actually a much better calculation than the one in the video, because it doesn't rely on undefined == 1/2. But it does show that some of the other steps in the video are bogus too, so it's even worse than I thought. I thought that just the 1-1+1-1+1... = 1/2 was the problem (the equivalent of the division by zero in all those 1==2 proofs).


That 1/2 instantly bothered me. Who decided that the average of a fluctuation between two values is their sum? I understand how it works out to call the sum 1/2, but only if the definition of "sum" is changed. A more accurate description of the number would be something like "the limit of the average of the partial sums up to the Nth element, as N goes to infinity" (call it X). And when looked at like that, we see that other numbers, like 1/4, are understandable too. But the -1/12 is not even an X. It is a number that we get by manipulating the X of one series with the Xs of a couple of other series, applying some arbitrary rules that don't necessarily make the number mean what the mathematician in the video thinks it means. I am sure the number can be made useful in certain problems, but to call it "the sum of the infinite series" is just plain wrong.


Didn't watch the video, but what is mentioned in the article is correct - analytic continuation is used to extend this series to an analytic function on the complex plane, and it happens to be the Riemann zeta function. By uniqueness of analytic continuation (due to the identity theorem: two analytic functions that agree on a set with a limit point agree everywhere), we know this is the only valid extension that makes sense to work with (consider being analytic similar to what it means for a function to be differentiable in the real numbers).


From my point of view, what Numberphile did was wildly successful. I think the video's "wow" factor, as the author put it, is doing its job of getting people interested in maths. Yes they could have explained some things a little better. However as noted by the author and some of her other field-mates she referred to at the bottom of the article, I think really the only sore people are the mathematicians. I also think that this is because they already know the rest of the story behind this interesting result. Once we know something in detail we as humans tend to scoff at incomplete explanations because in our eyes it does injustice to the topic.

However, to the normal viewer this video probably made maths look incredibly interesting and more than likely even caused them to research it a bit more. I would hazard to say an article like the one Dr. Lamb wrote would not have that same effect, though it is technically more correct. Numberphile to me is more about rekindling interest in maths in a society where you are usually introduced to the topic through repetitive, seemingly impractical calculations, and this video of theirs, as referenced in the article, has definitely done that.


The 6 or so people I know who have seen the original Numberphile video responded with statements like "I'll never understand math", "that's just stupid", and "I don't get it". None expressed a greater interest in maths as a result; quite the opposite.

Just like a good host at a party, a good educational video should leave the viewer feeling positive and better about themselves. So clearly, in this case, it depends on the audience.


That was my reaction too: "This is clearly stupid". The seemingly correct answer to me is that the sum is undefined. Also 1-1+1-1+1-1... != 0.5. That also has to be undefined. I'm not even sure "sum" makes sense as a concept when applied to non-convergent series. But that's just my intuition.


You're right, in the sense that the partial sums are always 0 or 1, so 0.5 away from 0.5. However the mean of the partial sums converges to 0.5 (this is called Cesàro summation). It converges "on the average" towards 0.5. However this isn't normally what people mean by convergence.
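A quick sketch of that, for anyone who wants to see it (Python): take the running average of the partial sums of 1 - 1 + 1 - 1 + ... and watch it head towards 0.5.

  from itertools import accumulate

  terms = [(-1)**n for n in range(10001)]   # 1, -1, 1, -1, ...
  partials = list(accumulate(terms))        # partial sums: 1, 0, 1, 0, ...

  # Cesàro means: averages of the first k partial sums.
  for k in (9, 99, 999, 9999):
      print(k, sum(partials[:k]) / k)
  # 9 0.5555..., 99 0.50505..., 999 0.50050..., 9999 0.50005...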


Interest, with positive or negative context, is interest nonetheless imo. (clarified language to refer to context rather than ambiguous use of "negative")


That the word negative denotes the absence of what it describes would indicate that negative interest is not interest.


> Just like a good host at a party, a good educational video should leave the viewer feeling positive and better about themselves.

Is the purpose of education to make the other person feel positive and good about themselves?


No. However, unless you have a captive audience, you will quickly lose your education opportunity if you don't. Even with a captive audience you are likely to achieve better results if your students enjoy the topic and can feel like they are accomplishing something. Motivation matters, as it turns out.


Yes. The purpose of education is to impart knowledge and to instill confidence to wield/apply that knowledge. So if you diminish confidence to "feeling good about oneself", then yes it does.


No, the purpose of education is to teach you stuff. A good way to educate is to make people feel good about themselves.


The problem isn't just that they explained it wrong, but that they tried to squash disbelief by pointing to a string theory book and saying "Look, it's in here! It must be true!" That second level of ignorance is a big part of what made me so angry.


Pause ...

The people in the Numberphile video may well have misjudged their audience, but I do not believe there is a "second level of ignorance" here, or even a first level of ignorance. At least, not about the mathematics.

The people in the Numberphile video know full well all the underlying details of the mathematics, rigorous and informal. Like many others, I believe they badly misjudged the content and approach. I agree entirely with others that there are many people who have watched this and thrown their hands up and declared "Just proves I'm crap at maths and will never understand it." And that is unfortunate, which is why I personally believe it was a poor decision to make the video and explain it the way they did.

But I don't believe it's fair to level an accusation of ignorance at them.


Both of the experts in the video are physicists, and in my experience physicists play fast and loose with mathematics. I think this is often because the mathematics that makes their arguments rigorous comes long after the sloppy methods are entrenched in the physics literature.

In any case, I have talked to many physicists who have expressed to me their desire to better understand a topic like, say, differential geometry, but abhor the idea of actually reading a proof or studying a precise definition. In their words, "I only want an intuition for the subject." An understandable desire, but now my bar for believing a physicist when they make a mathematical claim is quite high. This video isn't helping them any.

To be sure, there is a right way to do it, a simple practice that the men in the video could have employed to appease everyone: when you do something egregiously false, mention that strictly speaking you're not allowed to do it, and that you're actually sweeping a lot of complexity under the rug, and then continue anyway. This is the opposite of the blind appeal to authority in the video, and maintains their integrity.


I must admit that I got this feeling of ignorance, or rather false logic, as well. "We know it's right because we see this kind of stuff occur in physics!" is, I believe, the quote in the video towards the end. However, it only shows up because someone decided that it was the best mathematical description of the behavior we witnessed. Kind of a circular validation: the maths is right because it shows up in physics, and the physics is right because the maths checks out. That is what I got from it.


Yes, this I have to agree with. It also provides a rather circular argument for existence.


It backfires badly when pop science writers draw their audience in with patently false claims that just make math seem like nonsense.


Reading a couple of the linked posts, I thought this was the best take on it from mathematician Jordan Ellenberg (it certainly made sense to my comp-sci brain):

    It's not quite right to describe what the video does as “proving” that
    1 + 2 + 3 + 4 + .... = -1/12. When we ask “what is the value of the
    infinite sum,” we've made a mistake before we even answer! Infinite
    sums don't have values until we assign them a value, and there are
    different protocols for doing that. We should be asking not what IS
    the value, but what should we define the value to be? There are
    different protocols, each with their own strengths and weaknesses. The
    protocol you learn in calculus class, involving limits, would decline
    to assign any value at all to the sum in the video.  A different
    protocol assigns it the value -1/12. Neither answer is more correct
    than the other.


I think a good analogy to make this seem less mysterious would be the various methods of determining an average. E.g. we have no problem saying the "average family" has 2.4 children despite the impossibility of any family having 2.4 children. The number is just useful in other calculations for income, expenses, etc.

From there it's not hard to imagine that the -1/12 result could be useful and justifiable as an intermediate step in other calculations despite being an "impossible" destination on its own.


No need to invoke the Riemann zeta function and analytic continuation. In high school they summed this series (valid for |x| < 1):

1 + x + x^2 ... = 1/(1-x)

Plug in x = 2 to get:

1 + 2 + 4 + 8 ... = -1

There's a million of these series in your dusty old Calculus textbook. Or you could look in 'generatingfunctionology' to find others.

Complex analysis makes it more interesting, because the additional Cauchy-Riemann constraints make the solution unique. And so people are more willing to say that the unique solution is the "true" answer.

Really, I say this is just another example of the complex numbers being weird. I took a couple of different courses in it (one in college, two online) because it was clear something interesting was happening with them, but there was never a unifying theme. I can kind of spot a rule like "analytic functions preserve 90-degree angles" in the Cauchy-Riemann equations, but it hardly explains all the crazy theorems.

That series is just another example of someone taking a well-behaved series, analytically continuing it into the complex numbers, and now it's clear something interesting is happening but it's not clear what.


Who says it's only valid for |x| < 1? That series isn't just a cute bit of trivia, it is the basis for signed integer mathematics in your CPU. "True" enough for you?


> Who says it's only valid for |x| < 1?

Anybody who understands the definition of convergent series.

> That series isn't just a cute bit of trivia, it is the basis for signed integer mathematics in your CPU. "True" enough for you?

Eh. That's only half true. It's related, certainly, by means of a similarity between 2-adic numbers and two's complement representation, but I'd hardly call it the basis for signed integer mathematics.


Are you doing something like this:

* Define the series over the 2-adic numbers rather than the reals

* Associate every 2-adic integer with its corresponding sequence in the inverse limit Z/2^nZ for n = 1..infinity

* Truncate the sequence to a certain precision (an index i), and make an equivalence a ~ b if a and b agree on the first i elements. You get the ring Z/2^iZ, and identities in 2-adic analysis should carry down to the equivalence classes. Here 1 + 2 + 4 ... = -1 in the 2-adics.

That isn't the way I would've thought about signed integers, but it seems like it would work.


It's not necessary. Just interpret the bits as coefficients of powers of 2, in a purely conventional manner. Say that leading digits match the high bit rather than being 0 regardless of the high bit. And you're done, you have working signed arithmetic that displays all the behavior you expect and conforms to the geometric series equality. Here 1 + 2 + 4 + ... = -1 in the integers.
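Here is a sketch of that claim (Python standing in for the CPU): interpret the all-ones bit pattern, i.e. the truncated sum 1 + 2 + 4 + ... + 2^(bits-1), as a two's complement signed integer, and you get -1 at every width.

  def as_signed(x, bits):
      # Reinterpret the low `bits` bits of x as a two's complement integer.
      x &= (1 << bits) - 1
      return x - (1 << bits) if x >= (1 << (bits - 1)) else x

  for bits in (8, 16, 32, 64):
      partial = sum(2**k for k in range(bits))  # 1 + 2 + 4 + ... + 2^(bits-1)
      print(bits, as_signed(partial, bits))     # prints -1 at every width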


This is what you get when mathematicians start "hacking". Instead, they should have invented a proper notation conveying what is really meant by the sum and the "..." ellipsis. This notation apparently does not cut it.


ζ(s) has been in use since 1859, when Riemann introduced it. Riemann's paper isn't as thorough as it could've been, but it seems clear he knew exactly what he was doing, being one of the founders of Complex Analysis:

http://www.claymath.org/sites/default/files/ezeta.pdf

This series is ζ(-1). I don't know about the Youtubers or Scientific American, but mathematicians studying Complex Analysis know exactly what it means.


> This is what you get when mathematicians start "hacking".

I.e. physics.


As soon as I see people dropping GIFs and image macros into intelligent discussions like this, I can't help but immediately become sceptical of what they're saying.

It happens a lot in fairly serious technical computing blog posts and I've been trying to wrap my head around why people do it.


I believe it's mostly used to avoid having huge blobs of text and to force readers to pause and reflect on the text instead of leaving the page.

Nevertheless, I feel the same way. I wish people would use simple diagrams relevant to the discussion (like the 1/2 + 1/4 + ... = 1) instead of overused macro images.


Many times now, people who have looked at my carefully written, carefully reasoned, well-laid-out writings have gone:

  Aaarrrggghhhh !!!

  WALL OF TEXT !!!

  Aaarrrggghhhh !!!
It seems that many people need humorous (for some definition of "humorous") images and animations to make them think that what they are reading is entertainment. I hate it, but it is an increasing trend, and I'm not surprised.

Disappointed, but not surprised.


I am surprised, because this presentation style used to be restricted to children's books. I was unable to finish this article because I refuse to have Riemann mixed up with memegenerator.net.


Unable is not the same as unwilling.


I had a hard time as well - the moving image was very distracting, I had to cover it to try to concentrate on what the text was saying.


That is not what the comment I was replying to said. There was a "refusal" to mix academic content with Internet memes.


Now substitute blog post with children's book. It's unfortunate that text length, imagery and animation have all descended to the level of fairy-tale writing.


It's the author showing personality, free from the strict form rules of journals.


No.

Source: https://en.wikipedia.org/wiki/Betteridge%27s_Law_of_Headline...

EDIT: Some further elaboration: I'm sick of the question. The answer is: not in any sense that would be meaningful to the people to whom this stuff is being told. You're just being misleading by implying that the sum of positive integers can converge. I don't want to hear any "But if you take this analytic continuation..." or "But in a certain sense...", they're just as misleading as the thousands of proofs that 1=0.


In the Numberphile video he doesn't explain the way he calculates 2·S_2. He says he shifts it, but doesn't explain why that is valid.

    Shifted version:
         1-2+3-4+5-6 ...
           1-2+3-4+5 ...
    sum: 1-1+1-1+1-1 ...
    
    Multiplied version:
         2-4+6-8+10-12 ...
The multiplied version alternates between +(2n) and -(2n). Following the logic that S_1 = 0.5, because that is the average between 0 and 1, I would argue that the multiplied version of S_2 should equal 0, as that is the average between a positive constant and its negative (but the variance is going to be infinite. Doesn't that have a say?).

What if we triple shift?

    Triple shifted version:
         1-2+3-4+5-6+7-8+9 ...
               1-2+3-4+5-6 ...
    sum:     2-3+3-3+3-3+3 ...
Look! Now 2·S_2 is equal to 2!


I don't think your math adds up.

Your triple-shifted version would alternate between -1 and 2 depending on the cut, right? So, still 1/2.


Yeah, that was a brainfart on my end.

What about the multiplied version? That is how I would intuitively understand 2S_2, and I still don't accept that shifting is the same.


I'm gonna be contrarian and say, yes it does - it's been physically proven! The Casimir effect, as other people here have mentioned, depends on this being true, and it has been experimentally tested.

To me it's one of those things where you just go 'damn' because of the perplexing relations that exist between math and physics. If anything hints at what the hell goes on in this universe, to me it's stuff like this.


I strongly disagree. The Casimir effect's math works, yes, which means that "there exists a sense in which the summation equals -1/12 in this calculation", but I interpret this to mean that the calculation being done isn't the most accurate representation for the physics at hand. QFT is, after all, filled with weird tricks involving infinities - it seems perfectly plausible to me that the Casimir effect's math actually involves something like "-1/12 + <infinity terms>", but we're only measuring the non-infinite term in some sense, so we get away by only using it in our calculations.

I think it's Very Not Good to see a totally implausible mathematical result in physics and take it as anything other than an open problem that needs more work. It's okay to be amazed by it, but it's not okay to say "sure, okay, let's just leave this as it is".


Are you saying that it's a coincidence that -1/12 ends up in both the casimir effect and in this proof? This coincidence is what I'm referring to..


It's not a coincidence. I'm saying that both calculations - the math we're using to understand that Casimir effect, and the math we use to 'find' the sum equals -1/12, are making the same simplification that 'misses the point' of what's 'actually happening'. In some sense that's hard to put one's finger on.


You could get any result you want from this kind of proof. Someone else here already used some of these steps to prove that 1+1+1+1+1...==0.


With all due respect, you are not really saying that the series 1+2+3+... converges to -1/12 with the obvious metric, are you? Because that's the only reasonable way to interpret 1+2+3+... = -1/12 (and it is false). The fact that there is some physics that relies on particular results from complex analysis (I guess), which can imply that the sum of the positive integers is "in some sense" -1/12, does not exempt anyone from giving those symbols their true meaning, or they risk looking foolish rather than smart.


Not sure what you mean by the last sentence here, can you elaborate? What I'm saying is that the fact that this strange mathematical result happens to give correct results in a physical experiment must be a pointer to something.


What I'm saying is that there is no strange mathematical result whatsoever related to the sum of the positive integers, which should be the only meaning reserved to 1+2+3+...

There are stuff like this https://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%C2%B7..., but writing 1+2+3+... = -1/12 is purely formal and pretending to deduce it from physics is silly because you are not performing any sum at all.


The analogy to complex numbers is really useful. Of course sqrt(-1) doesn't exist. But if we just pretend that it does exist, we can build a rigorous theory out of these imaginary numbers. Once we do that, we notice that these numbers are really useful for calculating real physical things. So maybe imaginary numbers aren't so imaginary.

Same with these infinite sums. By the math you learn in middle school, you can't have infinite sums. But break the rules for just a second and again we have something that is helpful with real physics.


I don't know what you mean by sqrt(-1) not existing.

What does it even mean to exist?

Far better to talk about whether a number is defined in a particular numerical system.

In the real numbers, sqrt(-1) isn't defined. But why privilege the real numbers as "existing"? Despite an official-sounding designation, they're very deeply weird.

The real numbers are famously uncountable. But any subset of them that can be enumerated is by definition countable.

Think about the consequences of that for a moment. No matter what you do, the subset of the reals you can enumerate is countable, meaning the subset you can't enumerate is uncountable. In a rather flippant way, you could describe the real numbers as "mostly useless." Most of them exist to make some theorems work, rather than being a number that you could ever use to describe anything - solely because describing the number would require an infinite amount of information.

In a pretty significant sense, it's valid to say that the real numbers are mostly figments of analysts' imagination.

If they "exist", might as well say complex numbers exist too. They're actually more useful in physics than real numbers are.


I'll admit that the concept of numbers "existing" here is quite poorly defined. I was trying to capture the tendency of people, upon encountering complex numbers for the first time, to just sort of shut down and refuse to accept that they are "real" (pun intended). It is hard to remember how weird complex numbers feel after years of middle school math teachers saying that you can't take the square root of a negative.

A similarly weird feeling comes from encountering zeta(-1) = -1/12 after years of calculus teachers telling you to ignore divergent sums because they are infinite.


I think the point is that if that's true, the concept of complex numbers is being taught incorrectly. If you tell a student that this number is fake but useful, what are they to make of that? It just starts to make mathematics seem spooky and unpredictable.

When you're first learning about imaginary numbers in 8th or 9th grade, the answer to "what is sqrt(-1)?" _should_ be undefined. If you claim otherwise, you're pulling the rug out from under their feet, because the number system that they are familiar with indeed has sqrt(-1) undefined.

Instead, the teacher should go on to introduce a new system of mathematical objects that obey certain rules, and the students could play around with them and see how they have two components, how you can plot those two components in 2 dimensions, how you can think of them as arrows sticking out of the origin, how multiplying them rotates one by the other, etc. Then work backwards to show that we can call these objects complex numbers for short, because those operations are similar to addition, multiplication, etc. And finally, just as a curiosity, you can see that sqrt(z) = i for z = -1 + 0i.

There's no need to introduce this whole concept of an imaginary number line that points off in a direction nobody can see or measure. The whole takeaway should be that you can't square real numbers to get negatives. If you have something that "multiplies" by itself to give a negative, then you have either overloaded the multiplication operator with something very, very different, or you're dealing with an object that can "rotate" through another dimension. It's an ordinary two dimensional space, and the only difference between the two axes is their name, just like "x" and "y". In my opinion, this lesson should actually be reassuring to a young mathematical intuition: there are only so many ways to skin this cat.
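
To illustrate that presentation concretely (a minimal Python sketch; the Pair class and its names are invented for this example, not any standard library):

    # A pair (a, b) plays the role of a + bi. Multiplication follows the
    # rule (a, b) * (c, d) = (ac - bd, ad + bc), which geometrically is a
    # rotation-and-scaling of the plane.
    class Pair:
        def __init__(self, a, b):
            self.a, self.b = a, b

        def __mul__(self, other):
            return Pair(self.a * other.a - self.b * other.b,
                        self.a * other.b + self.b * other.a)

        def __repr__(self):
            return f"({self.a}, {self.b})"

    i = Pair(0, 1)    # the unit along the second axis
    print(i * i)      # (-1, 0): two quarter-turns make a half-turn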


You're likely aware of it, but in case others aren't, there's a classic online presentation that makes the same case: https://betterexplained.com/articles/a-visual-intuitive-guid...

And here's some "ancient" HN commentary on that article: https://news.ycombinator.com/item?id=2712575


Infinite sums do not require any breaking of rules. Infinite sums are formally defined as the limit of partial sums.
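
For example (a quick, illustrative Python check of that definition):

    # An infinite sum is the limit of its partial sums. The geometric
    # series 1/2 + 1/4 + 1/8 + ... has partial sums approaching 1; the
    # partial sums of 1 + 2 + 3 + ... just keep growing.
    from itertools import accumulate

    geometric = list(accumulate(0.5 ** n for n in range(1, 60)))
    naturals = list(accumulate(range(1, 60)))
    print(geometric[-1])   # ~1.0
    print(naturals[-1])    # 1770, with no limit in sight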


It doesn't satisfy the Cauchy criterion, so it can't converge.

The trick step of assuming that 1-1+1-1+... converges (its partial sums are 1, 0, 1, 0, ...) is also not Cauchy.

It's interesting that string theory and some other physics get results by using -1/12, but it's not strictly correct. No more than the omni proof.


For the mathematically inclined, Terence Tao's explanation provides an easier intuition of the relationship using real-variable methods: https://terrytao.wordpress.com/2010/04/10/the-euler-maclauri...


The other YouTube popularization that regularly irks me is when people beam about how "there are lots of different sized infinities", without explaining that when mathematicians use the words "size" and "infinity" in that context, they mean the cardinality (technical term) of an infinite set, within an explicitly defined set theory that includes the axiom of infinity (all technical concepts).


I found this YouTube video a while back that explained it in a way I could understand - http://m.youtube.com/watch%3Fv%3DjcKRGpMiVTw


I think there's something wrong with the link you've provided:

    > Our systems have detected unusual
    > traffic from your computer network.
    > Please try your request again later.
Perhaps this is the video you intended:

https://www.youtube.com/watch?v=jcKRGpMiVTw


I came here to post this video as well, Mathologer is a great channel, I often prefer his explanations to Numberphile's. Highly recommended.


I wasn't familiar with the concept of analytic continuation before, and so this part really confused me:

("The" is the appropriate article to use because the analytic continuation of a function is unique.)

Let f: N -> N, f(x) = x be a function on the natural numbers. Then I could define two functions g(x) and h(x) on N u {foo} that behave just like f for natural numbers. However, g(foo) is 42 while h(foo) is 666. Wouldn't g and h both be valid analytic continuations of f in N u {foo}, according to the definition of analytic continuations explained in the article?

I'm wondering about that as the uniqueness seems to be an important property for the rest of the explanation, yet it is simply assumed here without any further explanation.


Uniqueness requires proof - it relies on a theorem/lemma saying that if an analytic function is constant on a region, then it is constant everywhere. Uniqueness then follows: if two analytic continuations agree in that region, then their difference is 0 there, and thus 0 everywhere, so the two continuations are the same.

That is a non-trivial theorem/lemma though, and the proof is typically gone through in a graduate complex analysis course.


To clarify Bahamut's point: The functions must be analytic - https://en.wikipedia.org/wiki/Analytic_continuation#Initial_.... Regardless of the actual value of foo, at least one of g or h won't be differentiable at foo, so they can't both be analytic (assuming you choose some open superset of N u {foo}, otherwise the theorem breaks down even sooner).
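
A concrete toy example of the same phenomenon (an illustrative Python sketch): the geometric series \sum z^n converges to 1/(1-z) only for |z| < 1, but 1/(1-z) is analytic everywhere except z = 1 and is the unique analytic continuation of the series. Evaluating the continuation at z = 2 gives "1 + 2 + 4 + 8 + ... = -1" in exactly the same formal sense as zeta(-1) = -1/12.

    # Partial sums of the geometric series at z = 2 blow up...
    partial = sum(2 ** n for n in range(20))
    print(partial)        # 1048575 and climbing

    # ...but the unique analytic continuation of the series, 1/(1 - z),
    # is perfectly well defined there.
    print(1 / (1 - 2))    # -1.0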



Encountering it now, I think your followup article "Does it matter if the sum of all integers is -1/12?" makes an excellent case for why it does in fact matter a lot:

The fact that the unsound reasoning in this particular case led to a conclusion that superficially resembles a conclusion that can also be arrived at by sound reasoning just makes it that much worse. It encourages people to think: because this mode of reasoning led to a "correct" conclusion in that case, then it will probably lead to correct conclusions in other cases.

If the problem were confined to mathematics I might not make such a big deal out of it, but it's not. The problem of people uncritically accepting conclusions drawn by unsound methods of reasoning pervades our society and causes real damage.

http://blog.rongarret.info/2014/01/does-it-matter-if-sum-of-...


As it happens, I have a proof that 1 == 2. Of course my proof involves a step that involves division by zero, but if it's legal to define undefined results, then why can't I do that if it helps my proof?

Because as far as I can tell (with my admittedly limited understanding of mathematics), that's basically what's going on here: they define the result of 1-1+1-1+1-1... to be 1/2, which it can of course never be. The result is never 1/2; it's either 1 or 0. Taken to infinity, the only reasonable definition for that sum is undefined. If I can say that 1/2 is fine too, then I should also be able to attach my own definition to 1/0.

Also, if string theory really relies on such questionable mathematical steps, then that would make me question string theory even more. As far as I understand, string theory makes no testable predictions, which suggests to me that no results based on this questionable mathematical trick have been experimentally verified. If there is some real, experimentally verified physics that relies on the sum of all natural numbers to be -1/12, then I'd love to be corrected (though I doubt I'll understand it).


Terry Tao shows how to get from one side of the equality to the other:

https://terrytao.wordpress.com/2010/04/10/the-euler-maclauri...

It goes into detail and shows the derivation and internal consistency of the method.


From Zen and the Art of Motorcycle Maintenance: <i> “The law of gravity and gravity itself did not exist before Isaac Newton." ...and what that means is that that law of gravity exists nowhere except in people's heads! It's a ghost!" Mind has no matter or energy but they can't escape its predominance over everything they do. Logic exists in the mind. Numbers exist only in the mind. I don't get upset when scientists say that ghosts exist in the mind. It's the "only" that gets me. Science is only in your mind too, it's just that that doesn't make it bad. Or ghosts either." Laws of nature are human inventions, like ghosts. Laws of logic, of mathematics are also human inventions, like ghosts." ...we see what we see because these ghosts show it to us, ghosts of Moses and Christ and the Buddha, and Plato, and Descartes, and Rousseau and Jefferson and Lincoln, on and on and on. Isaac Newton is a very good ghost. One of the best. Your common sense is nothing more than the voices of thousands and thousands of these ghosts from the past.” </i>


To go a bit further, everything as one perceives it is necessarily in one's mind (or nervous system, to be precise). One consciously or unconsciously determines that something is happening outside of their mind, which makes that determination simply a philosophical one. This is a reminder to all of why the highest degree granted at any conventional institution is labelled a Doctorate of Philosophy in a given subject.

I would go as far as to say that just as the Uncertainty Principle prescribes a limit on what is knowable in quantum physics, so does philosophy suggest limits on what will ever be rationally proven through human perception, ideas, and knowledge.


This was in one of the letters Ramanujan sent to Hardy. It changes the semantics of established notation instead of inventing new notation.


Somehow the title is missing an ellipsis, which made me confused for a second.


Now added - apologies for the confusion.


Good time to remember 0^0=1. Or rather, that some people (like Knuth) like to assign that definition because it's convenient and makes everything work out in certain situations.

I don't think it's bad to say that 0^0=1 when you hear the whole story, which is that there are multiple right answers given how we define exponents and operations on 0. But it's misleading to say 0^0=1 and stop there. Similarly, after reading about this today, it's making more sense to me that we can, if we choose, define 1+2+3+...=-1/12, and it makes sense in some contexts. It's just that this isn't the only answer and it isn't the whole story.

I liked the article, and ended up reading a bunch of Evelyn's column earlier today. I see some people complaining about the pictures... I thought they were funny and relevant; my only nitpick is that I can barely read the light blue text on the wink gif - the colors are so hard for me to look at.


Animated gifs made me ill. Couldn't read article :(


As a computer programmer, this actually feels fairly intuitive. If you had a function that computed the sums of convergent series and returned errors for inputs where the series diverges, and you replaced it with one that computed the Riemann zeta function, you'd still get the same results for convergent series, but you would also get results for the formerly undefined inputs. Thought of this way, it's pretty clear that you're talking about performing two different calculations, and they aren't equal.
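
Something like this sketch (Python; naive_sum is a made-up name, and the mpmath library's zeta stands in for the analytic continuation, assuming mpmath is installed):

    # naive_sum implements the textbook definition (limit of partial sums)
    # and refuses inputs where the series diverges; mpmath's zeta is the
    # analytic continuation. They agree wherever naive_sum is defined.
    from mpmath import zeta

    def naive_sum(z, terms=100_000):
        """Partial sums of sum_{n>=1} 1/n^z; only meaningful for z > 1."""
        if z <= 1:
            raise ValueError("series diverges; the sum is undefined")
        return sum(n ** -z for n in range(1, terms + 1))

    print(naive_sum(2))   # ~1.6449 (pi^2/6)
    print(zeta(2))        # same value
    print(zeta(-1))       # -1/12, where naive_sum raises instead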


Glad SciAm is helping to spread correct info. I think that, basically, the problem with many math videos in this style is that, in order to get clicks, they tend to make uncommon, if interesting, assumptions without explaining what those assumptions are or why they're being made. This is my comment from the last time this video was mentioned:

> "The sum of the series 1+2+3+4+5+6... = -1/12" is patently false, without a previous assertion that we have assumed the Cesàro sum of a series is equal to the series. Even mathematicians working with Cesàro sums surround such statements with "this holds only if we interpret the infinite sum defining Z to be the Cesàro sum..." [0] Precisely none of the times I've heard the "1+2+3+4...=-1/12" bullshit has the person stating it prefaced their statement with "this holds only if we interpret the infinite sum defining Z to be the Cesàro sum..."

> If you say that "1+2+3+4...=-1/12" without stating your prior assumptions, you suddenly allow anyone to make any assumption whatsoever, no matter how obscure it is. In your imaginary world, someone could walk into a store and claim that "this 95 cent pack of gum is free" because they just made the unstated assumption that all non-integers do not exist, and seconds later they could return it for a full refund of $0.95 after making the unstated assumption that in fact the rational numbers do exist. Numbers, and in fact the entire system of mathematics fail to work at all once you allow arbitrary, unstated assumptions no matter their obscurity. And in fact, the assumption that non-integer numbers do not exist is made far, far more frequently than the assumption that the infinite sum defining the sequence is the Cesàro sum.

> The only difference is that assuming the non-integer numbers do not exist is a defensible assumption in many, many scenarios... but Cesàro summations are only invoked about twelve times a year, in pure math or advanced physics papers.

> [0] Madras, Neal. "A Note on Diffusion State Distance." arXiv preprint arXiv:1502.07315 (2015).

My favorite post on the subject still has to be this: http://goodmath.scientopia.org/2014/01/17/bad-math-from-the-...


tl;dr: there is no "real" or "one" kind of maths; you'll have to first pick which field of mathematics you're working in. Arithmetic? Then no: 1+2+3+... is a divergent sum. Using calculus, specifically analytic continuation on an Euler series? Then yeah, you can apply the rules in such a way that you get this answer, and it's a mighty useful identity that can be exploited in complicated proofs.


I loved this video https://www.youtube.com/watch?v=XFDM1ip5HdU "An exploration of infinite sums, from convergent to divergent, including a brief introduction to the 2-adic metric, all themed on that cycle between discovery and invention in math."


So when you have an infinite series of the form 1-1+1-1+1-1+..., there is no single point of convergence, so you average the two values 0 and 1 to give 0.5.

Why does taking the average of the two values that the partial sums alternate between help?

Also, what about something like:

sin(π/2) + sin(3π/2) - sin(5π/2) + sin(7π/2) - ...

what would this be?


The average is simply a summary description of the behavior at infinity. Averages are a standard way to summarize complex phenomena.

More precisely, the sum is 0.5 +/- 0.5, which is greater than 0, less than 1, and ambiguous in between.
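
In miniature (an illustrative Python sketch of that averaging idea, which is Cesàro summation):

    # The partial sums of 1 - 1 + 1 - 1 + ... oscillate between 1 and 0,
    # but the running average of those partial sums settles toward 1/2.
    from itertools import accumulate

    terms = [(-1) ** n for n in range(10_000)]   # 1, -1, 1, -1, ...
    partials = list(accumulate(terms))           # 1, 0, 1, 0, ...
    print(sum(partials) / len(partials))         # ~0.5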


I didn't think the Numberphile video on this was well done, in its "Math is a bunch of inscrutable magic you laypeople will never understand, and every attempt you (as represented by our camera man) make to voice your dumb intuitions is dumb, you big dumb dumbs!" way, but I even more was annoyed by the sneering "You're not allowed to do that! There are clear fixed rules and single, permanent, all-purpose definitions!" dismissals of the 1 + 2 + 3 + 4 + ... = -1/12 result in the backlash.

I'm going to copy and paste the explanation I originally wrote at Quora (https://www.quora.com/Whats-the-intuition-behind-the-equatio...), because I think it captures well everything I'd like to say about this at every level of the discussion:

The sense in which 1 + 2 + 3 + 4 + ... = -1/12 is this:

First, consider X = 1 - 1 + 1 - 1 + .... Note that X + (X shifted over by one position) = 1 + 0 + 0 + 0 + ... = 1. Thus, in some sense, X + X = 1, and so, in some sense, X = 1/2.

Now consider Y = 1 - 2 + 3 - 4 + ... . Note that Y + (Y shifted over by one position) = 1 - 1 + 1 - 1 + ... = X. Thus, in some sense, Y + Y = X, and so, in some sense, Y = X/2 = 1/4.

Finally, consider Z = 1 + 2 + 3 + 4 + ... Note that Z - Y = 0 + 4 + 0 + 8 + ... = (zeros interleaved with 4 * Z). Thus, in some sense, Z - Y = 4Z, and so, in some sense, Z = -Y/3 = -1/12.
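
A quick numerical sanity check of the X and Y steps, if you want one (a throwaway Python sketch; "damped" is a made-up helper): damping the nth term by x^n and letting x creep up toward 1 - Abel summation, a cousin of the smoothed method in the continuation below - reproduces exactly these values.

    # Damp the nth term by x**n and let x approach 1 from below.
    # X approaches 1/2 and Y approaches 1/4, matching the shifting
    # argument above. (Z = 1 + 2 + 3 + ... still diverges under this
    # damping; taming it needs the smoothing in the continuation.)
    def damped(coeff, x, terms=200_000):
        return sum(coeff(n) * x ** n for n in range(1, terms + 1))

    for x in (0.9, 0.99, 0.999):
        X = damped(lambda n: (-1) ** (n + 1), x)       # 1 - 1 + 1 - ...
        Y = damped(lambda n: (-1) ** (n + 1) * n, x)   # 1 - 2 + 3 - ...
        print(x, X, Y)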

In contexts where the above reasoning is applicable to what one wants to call summation, we have that 1 + 2 + 3 + 4 + ... = -1/12. In other contexts, we don't.

That's it. It's that simple. Everything else I'm going to say is just to comfort those who are uncomfortable with the game we've just played.

Note that I've said "in some sense" several times in the above argument. That's because, while we all know how to add and subtract a finite collection of numbers in the ordinary way, when it comes to adding and subtracting an infinite series of numbers, there are many different ways of interpreting what this should mean. Just knowing how to add finitely many numbers doesn't automatically tell us what it means to add a whole infinite series of them. And when it comes to summation of infinite series, it turns out there's not just one nice notion of "summation"; there are many different ones, which are nice for different purposes.

One such notion is "Keep adding things up, one by one, starting from the front, and see if the results get closer and closer to some particular value; if so, that value is the sum". On that account of what summation means, you clearly won't get any finite answer for 1 + 2 + 3 + 4 + ...; since the terms never get any smaller, the partial sums will never settle down to a finite value (and certainly not a negative one like -1/12!). They instead, in a natural sense, should be understood as summing to positive infinity.

And there's nothing wrong with this! You are not wrong to feel that 1 + 2 + 3 + 4 + ... is positive and infinite, and math does not deny this; there absolutely is an account of summation corresponding to this intuition.

It's just not the only account of summation worth thinking about.

We could instead consider other notions of "summation", including ones designed precisely so that arguments like the one we made at the beginning (which are very natural arguments to make!) counted as legitimate ways to reason about such "summation". And then, by definition, we will have that 1 + 2 + 3 + 4 + ... = -1/12, on such accounts of "summation". (In doing so, we will lose certain familiar properties such as "A sum of positive terms is always positive". But this is how generalizations work; generalizations very often lose familiar properties. Even the textbook, limit-based account of infinite summation loses familiar properties like "The order of summation doesn't matter". Even finitary summation of integers loses the familiar property "If a sum is zero, so are all the summands" from basic counting. But there is a web of resemblances to more familiar kinds of summation which can justify, in certain moods, thinking of each of these generalizations as a form of summation itself.)

If you insist that "Keep adding things up and see if the results get closer and closer to some particular value" is the only account of summation you're interested in, you'll object to the argument we gave at the beginning, saying "You're not allowed to do that kind of shifting over and adding to itself reasoning all willy-nilly; look at what nonsense it produces!".

But it can be made sense of, and is even fruitful to make sense of, in certain contexts in mathematics, and there is no need to blind ourselves to this insight.

Again, that's it. It's that simple. Everything else I'm going to say is just to comfort those who are still uncomfortable. For those who want a more systematic, formal account of series summation of a sort which validates the above manipulation, read on:

[Comment too long, will be continued in reply]


[Continuation of original comment]

We can look at it this way: We can try to assign values to a non-absolutely convergent series by bringing its terms in at less than full strength, producing an absolutely convergent series, and then increasing the terms' strengths towards full strength in the limit, observing what happens to the sum in the limit as well.

This is the idea behind the traditional account of series summation, mind you: at time T, we bring in all the terms of index < T at 100% strength and all other terms at 0% strength. This gives us our partial sums, and as T goes to infinity, each term's strength goes to 100%, so we can consider the partial sums as approximating the overall sum.

But we don't have to be so discrete as to only use 100% strength and 0% strength. We can try bringing in terms more gradually. For example, rather than having strengths discretely decay from 100% to 0% at some cut-off point, we can instead have the strengths decay exponentially in the index. (So at one moment, we may have the first term at 100% strength, the next term at 50% strength, the next term at 25% strength, etc.). Then we consider what happens as the rate of exponential decay slows, approaching no decay at all.

In symbols, this means we assign to a series a0 + a1 + a2... the limit, as b approaches 1 from below, of a0 * b^0 + a1 * b^1 + a2 * b^2 + .... Put another way, the limit, as h goes to 0 from above, of a0 * e^(-0h) + a1 * e^(-1h) + a2 * e^(-2h) + ..., where e is any fixed base you like. (Let's take e to be the base of the natural logarithm for convenience, and call this function of h the characteristic function of the series).

Again, this is not so different than the traditional account of series summation; we're just using exponential decay rather than sharp cutoff in our dampened approximations to the full series. (Actually, for the results we're interested in, it's really just the smoothness of the decay that's of interest. We could use other forms of smooth decay as well, and get the same results, but exponential decay is so convenient, I won't bother discussing in any further generality right now)

Now we've turned the question of determining the value of a series summation into the question of determining the limiting behavior of some function at 0.

Well, it's easy to determine limiting behavior at 0. Just write out a Taylor series centered at 0, and drop all the terms of positive degree, leaving only the term of degree 0. Boom, you've got the value of the function at 0.

Except... suppose the Taylor series has a few terms of negative degree as well. (As in, say, 5h^(-1) + 3 + 4h^2). Then the behavior at 0 isn't given by the degree 0 term; rather, the behavior at 0 is to blow up to infinity!

And, indeed, we'll find that this is precisely what happens when we look at the characteristic function of a series like 0 + 1 + 2 + 3 + ...; we get that f(h) = 0e^(-0h) + 1e^(-1h) + 2e^(-2h) + 3e^(-3h) + ... = e^(-h)/(1 - e^(-h))^2 = h^(-2) - 1/12 + h^2/240 - h^4/6048 + ....
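
(If you'd like to verify that expansion mechanically rather than by hand, a throwaway sympy snippet does it; this is purely a sanity check:)

    # Expand f(h) = e^(-h) / (1 - e^(-h))^2 in a Laurent series at h = 0.
    from sympy import exp, series, symbols

    h = symbols('h', positive=True)
    f = exp(-h) / (1 - exp(-h)) ** 2   # sum over n >= 0 of n * e^(-n*h)
    print(series(f, h, 0, 6))
    # h**(-2) - 1/12 + h**2/240 - h**4/6048 + O(h**6), up to term ordering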

Note that there is a negative degree term there. So in a very familiar sense, we can say that the behavior of this series is to blow up to infinity.

However, since any time a series DOES converge in the ordinary sense, the value it converges to is the degree 0 term of this characteristic function, it is very tempting and fruitful to think of the degree 0 term as the sum even when there are those pesky negative degree terms.

And in this more general sense, we see that the value of 0 + 1 + 2 + 3 + ... is that degree 0 term of f(h): -1/12. [In fact, we can understand the argument at the beginning of this post as outlining a rigorous calculation of this degree 0 term. (See https://www.quora.com/Mathematics/Theoretically-speaking-how... to see this spelt out)]

Now, you can propose other manipulations to produce other answers for this series in other ways, but this is one particular systematic account of summation which leads to this value alone and no other. [That is, for the series whose nth term is n. I should warn that, in the presence of negative degree terms in the characteristic function, this method is sensitive to index-shifting, so we would get different results if, for example, we considered 1, 2, 3, ... to be not the 1st, 2nd, 3rd, ..., terms, but rather the 0th, 1st, 2nd, ..., terms, respectively.]

Why should you care about this particular account of summation? Well, you don't have to; I can't force you to care about anything. But it's fairly natural and comes up with some significance in mathematics. It is, in a certain formal sense, precisely the account of summation which allows one to interpret the sum 1^n + 2^n + 3^n + ... for general complex n, yielding the Riemann zeta function (of great significance in number theory, and whose behavior (specifically, the Riemann hypothesis concerning its zeros) is generally considered one of the most important open problems in mathematics). So, you know, there's reason for some people to care about it, even if you don't.


No, it doesn't.


I forget who said, "In mathematics, you don't learn things, you just get used to them." But it applies here.


According to this link[0] it was John von Neumann:

    Young man, in mathematics you don't
    understand things. You just get used
    to them.
[0] https://en.wikiquote.org/wiki/John_von_Neumann


It boils down to what you mean by "equal."


I don't understand why people keep playing with infinite series like this... it's not sound math and doesn't lead to anything useful. Just a cheap way for worthless mathematicians to feel clever.


Is it saying that the complex version of the series (1+0i)+(2+0i)+(3+0i)... = -1/12?


Knew it was that numberphile video before I clicked.


[flagged]


Out of interest - and I'd really like you to answer honestly although obviously I have no way of knowing if you don't - have you read the article, or are you simply responding to the title?


Only responding to the title. Such titles annoy me, and I happen to have this bad habit of complaining in such cases.


Thank you for your honest answer. The article has a discussion that is good, careful, nuanced, and quite complete. It also has lots of links to related discussions, including talking about why - although this specific claim is "obvious nonsense" - there is actually some sense underneath it, although deeper than most people ever go.

If you're going to complain you should probably have an opinion on how to improve it. What title would you give such an article?


[flagged]


Then mightn't your time be more productively spent paying attention to the code you mention, rather than commenting on articles you have not read about concepts you reckon you have no use for?


Compiling!



