Even weirder is the Conway base 13 function, which not only is discontinuous everywhere, but its range when restricted to any interval is the entire set of real numbers (so its graph “fills up” the entire plane).
John Conway was amazing. Such a loss that he died of COVID.
As well as the Game of Life, surreal numbers, and other famous things, he also produced one of my favourite lesser-known proofs ever: the proof that 91 is the smallest number that looks prime but isn't. https://youtu.be/S75VTAGKQpk?si=IW791RaeCsXSOrrK "This is an important theorem and a discovery that I'm very proud of".
Reply to add: Reading about that function really breaks my brain. It's incredibly strange to think a function could satisfy the intermediate value property on every interval and yet not be continuous. That really doesn't feel like it should be possible.
There is a whole book, "Counterexamples in Analysis", which for essentially each theorem shows why every hypothesis is required. These functions, and ideas based on them, are indispensable.
This Weierstrass function is definitely a mind-bender I remember from high school.
I don't know if they are the same books, as the names are quite generic, but I picked up both this one and the topology book a sibling comment suggested at the library during my undergrad, and read both cover to cover. Both are slim books and very clear.
My favorite counterexamples in analysis were based on the Cantor set, as it allowed counterexamples for many different scenarios while still producing functions that were "nice" in other respects.
yup, that's a standard example / counterexample. Called the discrete metric. If you are trying to prove something about metric spaces, you should try your statement on that metric. A lot of things that seem true because they intuitively sound right with Euclidean distance break with the discrete metric.
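In code, the discrete metric is a one-liner (a sketch; the names are mine):

```typescript
// Discrete metric: distance 0 to yourself, distance 1 to everyone else.
const discrete = (x: unknown, y: unknown): number => (x === y ? 0 : 1);

// Under this metric the open ball of radius 1/2 around any point contains
// only that point, so every singleton (and hence every subset) is open --
// exactly the kind of thing that breaks Euclidean intuition.
const ball = (center: unknown, r: number, space: unknown[]): unknown[] =>
  space.filter(p => discrete(center, p) < r);
```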
Because there is no bijection between the rationals and the reals, wouldn't that imply that there are some irrational reals with no rational between them, allowing this function to be continuous, at least in places we can't actually compute?
That doesn't actually follow- even though the reals are uncountable, they're what's called "separable", which means that there is a countable set which is "arbitrarily good at approximating them", basically- the rationals!
So even though there are uncountable reals, for any real number, you can find rational numbers that get arbitrarily close. You can actually see this pretty easily if you think of reals in decimal notation- sqrt(2) is irrational, but
1.4, 1.41, 1.414, 1.4142, 1.41421, 1.414213, ... etc. are all rational numbers that get closer and closer to sqrt(2).
So you should think of real numbers as uncountable, but nonetheless surrounded by rationals no matter how close you zoom in.
Not even the rationals are needed here. Just the rationals whose denominators are powers of two, for example, will do it (these have binary representations with a finite number of digits).
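A quick sketch of that (the helper name is mine): truncating the binary expansion of sqrt(2) after n bits gives a dyadic rational within 1/2^n of it.

```typescript
// floor(x * 2^n) / 2^n is a rational with denominator 2^n,
// and it sits within 1/2^n below x.
const dyadic = (x: number, n: number): number => Math.floor(x * 2 ** n) / 2 ** n;

const x = Math.SQRT2;
for (let n = 4; n <= 40; n += 12) {
  console.log(n, dyadic(x, n), x - dyadic(x, n)); // error shrinks like 2^-n
}
```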
Any two real numbers x and y are either equal or have an infinite number of reals between them don’t they? If your real numbers are x and y, then (x+y)/2 for example. The density of the rationals in the reals means that one of the numbers between them is provably a rational number.
I'm a bit late to see this comment, but it was disappointing that none of the replies so far show why this is not true.
Take any irrational numbers x and y, and suppose for convenience that x < y.
Let d = y-x i.e. the length of the interval between x and y.
Let b be an integer so large that 1/b is less than d.
Consider fractions of the form a/b. As a increases, a/b will eventually (strictly) exceed the value of x. Suppose we fix a to be the smallest such value, i.e. (a-1)/b < x < a/b.
Adding d to both sides we get:
a/b - 1/b + d < x + d
Using the fact that 1/b < d and x + d = y, we get a/b < y. Combined with x < a/b from the choice of a, this gives x < a/b < y: a rational strictly between the two irrationals.
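The proof above translates directly into a small computation (a sketch; the function name is mine):

```typescript
// Find a rational a/b strictly between x and y (assumes x < y), following
// the proof: pick b with 1/b < y - x, then the smallest a with a/b > x.
function rationalBetween(x: number, y: number): [number, number] {
  const b = Math.ceil(1 / (y - x)) + 1; // guarantees 1/b < y - x
  const a = Math.floor(x * b) + 1;      // smallest a with a/b > x
  return [a, b];                        // then x < a/b < y
}
```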
> continuous at exactly one place (0) and nowhere else
after a bit of thought, less surprising if we regard g(x) as a function that is defined pointwise as either i(x) = x or z(x) = 0 depending on some highly discontinuous property of x
z(x) = 0 is a pretty good approximation of i(x) = x at x=0
It is unintuitive to me why the rational numbers are dense in the reals, since rational numbers are countably infinite, as opposed to the reals. I think infinity is hard to grasp.
It’s because for every pair of irrational numbers, there is a first place in their decimal representation where their digits differ, which means you can construct a number with finite decimal representation that fits between the two, which thus is rational.
In other words: while there are uncountably many irrational numbers, each one's decimal representation is still only a countably infinite sequence of digits.
Or in yet other words, uncountable infinity is only a teensy bit larger than countable infinity, not that much larger. Consider that every prefix of an irrational number is a rational number. ;)
In decimal form, almost every real number between 0 and 1 is zero-point followed by an infinite sequence of random digits. No computer in the universe has enough hard drive space to store an arbitrary fixed real number between 0 and 1. This is of course not true for rationals: any rational number can be saved on a big enough hard drive. In particular, given unbounded resources, we can build a computer that approximates (0,1) by storing a finite set of rational numbers, and reaches a given real number x with arbitrary nonzero error. But we will never get zero error on a physical computer.
The tough part with analogies like this is that there are obviously rationals too large for any computer in the universe as well, and anything that fixes that goes back to reasoning about the different types of infinity involved in the original problem.
I don't think that's the case here unless you are referring to a busy beaver thing I don't understand :)
If you are referring to the observable universe being finite, then that's not relevant for the discussion: I am just putting a few more grounded terms on the theorem that computable reals (including rationals) are a countable set. The point is that "for every integer n you can get n+1" is unphysical, yet "grokkable" symbolically, so it works well within a conceptual mathematical universe (regardless of what the physical universe has to say about it). Within this math universe we build an abstract computer that can hold an arbitrary rational/computable number, but only a countable subset of the real numbers, since almost all real numbers cannot be described by any "physical" program, even if that program is larger than the entire universe.
I wish I understood the busy beaver problem / connections to Ramsey theory / etc. However for this intuitive discussion it seems like a serious digression.
This is what I mean: it only appears more grounded if you already understand why a countable set has a different type of infinity than an uncountable set in the first place, and what kind of universe that implies. Otherwise you're left wondering what kind of universe is needed, and why that universe can account for some infinities but not others. That latter part is just the original question of the difference between countable and uncountable sets again, so if you can answer it, you didn't have the question to start with!
I think you are getting away from the actual original question, which is why (intuitively) the rationals are dense in the reals despite being a different form of infinity. The confusion wasn't about different forms of infinity, it was really about the topology of R with respect to Q - why is Q "big enough" yet Z "too small" despite the sets having the same cardinality? And that is intimately related to any fixed real number having a computable/rational approximation up to any accuracy, yet most real numbers not actually being computable.
Precisely: the rationals being dense in the reals means that for any two real numbers x and y with x < y, there exists a rational number m/n (m and n integers) such that x < m/n < y.
Rationals are also dense in the p-adic numbers, which you can think of as the other way to form their completion, if I understand correctly (with a different notion of absolute value.)
I always thought using countable and uncountable was a little confusing and that introducing aleph/beth numbers would have made things clearer when those ideas were introduced.
Introducing a stochastic function is cheating. You expect calculus to break if you just throw in stochastic functions willy-nilly, so that doesn’t challenge my intuition at all.
Some of those are continuous and differentiable everywhere. Examples:
f(x) = 0
f(x) = ½ x²
Also: doesn’t the claim that you can pick such a function require an extreme version of the axiom of choice, where you claim you can pick an element of each set in a set of sets that has uncountably many items?
Well yes. The thing I particularly like about the Dirichlet function is it’s so simple to state and yet just completely breaks my intuition about so many things.
There's a wonderful book on Mathematical counterexamples in French[0], meant for undergrad/engineering school hopefuls.
So many of the continuity counter-examples are throwing Weierstrass at the wall and getting something to stick. It's fun but also feels a bit like cheating.
I do recommend this book for any french-speaking mathematician-adjacent person though. Real great dictionary for remembering why certain things only work in one direction.
It is interesting how France became so focused on analysis and properly proving theorems and stuff, while the applications don't have the same highlight in prépa.
One professor of mine commented that most French engineers are better mathematicians than most mathematicians in Brazil.
It is the opposite of what the linked article mentions that was happening in Weierstrass' time.
When I first learned about limits (my senior year in high school), I visualized a "step" shape graphed (0, 0) to (0, 1) and then to (1, 1). Up one unit, right one unit. (I know, not technically a function since it has a vertical component.)
But now you subdivide both segments and put in an extra step. Up 0.5, right 0.5, up 0.5, right 0.5. Same starting point, same ending point. Even the same length of line segments as before — but the area bounded beneath is 3/4 the original.
I reasoned that if you continued this subdivision, then as the number of steps approaches infinity, the area under the "curve" approaches that of a right triangle with two unit legs (whose hypotenuse has length root-two), even though the path length never changes.
How is it, I asked my math teacher, that this shape can have the perimeter of a square but the area of a triangle?
I don't recall the answer from the teacher. Maybe somewhat hand-wavy; ultimately unsatisfactory to me. I still don't really know the answer. I suppose the answer is that it is not in fact ever a triangle — just a thing with a fat edge.
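Your computation checks out numerically (a sketch; the function name is mine, n is the number of steps):

```typescript
// Staircase from (0,0) to (1,1) made of n equal "up s, right s" steps, s = 1/n.
// Path length is n * 2s = 2 for every n, while the area under the path is
// (n + 1) / (2n): 1 for one step, 3/4 for two, tending to 1/2 -- the triangle.
function staircase(n: number): { length: number; area: number } {
  const s = 1 / n;
  const length = n * 2 * s;
  let area = 0;
  for (let i = 0; i < n; i++) area += (i + 1) * s * s; // column under step i
  return { length, area };
}
```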
Nice example. You can avoid the issue with these being non-functions (because of the vertical segments) by rotating the picture so that the (0,0)-(1,1) diagonal is horizontal. That way you are talking about a sequence of continuous functions F_n with slopes alternating between ±1 that approach the constant-zero function F_0=0.
The reason your intuition was confused was that you felt that there was a way in which these functions with ±1 slope converge to zero function (it is called uniform convergence), but were unsure about the way in which their derivatives f_n=(F_n)' (which keep flipping between ±1) converge to zero.
The resolution of the seeming difficulty is that there are different modes of convergence. The ±1 slope functions F_n converge uniformly to zero, but their derivatives f_n, which determine curve length, do not converge uniformly or even pointwise to any limit (uniform convergence is stronger — more restrictive — than pointwise). This is why it is reasonable that the F_n curve length stays 2 throughout the exercise.
Note: if the derivatives f_n were converging to f_0=(F_0)'=0 pointwise while staying bounded, then the curve lengths of F_n would converge to sqrt(2), the curve length of F_0. This is called "dominated convergence theorem" for integrals — point being that the curve length of F_n is an integral of sqrt(1+f_n^2). But there is no pointwise convergence, and no such implication.
Finally, you may be curious as to whether there is a sense in which f_n converge to f_0=0 — does all that flipping back and forth amount to some way of converging to zero? Turns out yes! It is called weak* (weak-star) convergence and it applies to f_n if you think of them as measures (or distributions) — that is, if instead of pointwise evaluation you characterize f_n by the way they act on test functions by integration: instead of computing f_n(x) you multiply f_n * phi and integrate. Phi has to be a continuous function. Under this notion of evaluation, f_n do converge to zero.
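A numerical sketch of that last mode of convergence (the test function here is an arbitrary choice, and the names are mine): f_n flips between +1 and -1 on intervals of width 1/(2n), and its pairing with a smooth phi by integration drifts to zero even though f_n(x) never settles at any x.

```typescript
// f_n alternates +1/-1 on 2n equal subintervals of [0, 1].
const f = (n: number, x: number): number => (Math.floor(2 * n * x) % 2 === 0 ? 1 : -1);
const phi = (x: number): number => Math.exp(-x); // any continuous test function will do

// Midpoint-rule approximation of the pairing integral of f_n * phi over [0, 1].
function pairing(n: number, samples = 200_000): number {
  const h = 1 / samples;
  let acc = 0;
  for (let i = 0; i < samples; i++) {
    const x = (i + 0.5) * h;
    acc += f(n, x) * phi(x) * h;
  }
  return acc;
}
```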
You may be referring to the https://en.wikipedia.org/wiki/Staircase_paradox. As for how a shape can have the perimeter of a square and area of a triangle, it just can. As you've identified, it is easy to construct. It might be more interesting to ask what combinations of perimeter and area aren't possible.
You can always add more perimeter, so you want a lower bound on the perimeter given the area. The isoperimetric inequality (https://en.wikipedia.org/wiki/Isoperimetric_inequality) says that the smallest perimeter for a given area is that of a circle.
Yes, this is what I "re-invented" (I didn't know until now that it had a name). I saw it later in a math book that I flipped through on a co-worker's bookshelf (but they didn't name it there either).
The perimeter (which is something very close to the derivative) is not continuous as a function of the shape.
If you think about it a bit, you can easily change a shape in a way that greatly increases its perimeter but leaves its area (almost) unaffected. Which means that even when two shapes are extremely close in shape, they can be very different in perimeter.
What is interesting is that it is semicontinuous. The limit of the staircase lengths is 2, but the length of the limit curve is sqrt(2), which is smaller. So from the limit of the staircase perimeters you only know that it is greater than or equal to the perimeter of the limit shape: length is lower semicontinuous.
>I suppose the answer is that it is not in fact ever a triangle — just a thing with a fat edge.
It absolutely is a triangle. Because it converges in shape.
The thing is, each time you do a fold, you don't get any more of the triangle's perimeter covered by the staircase.
Not only that, every point that ends up on the triangle's line segment is adjacent to a point that isn't: there's always some little epsilon that you can step along, and you are no longer on an overlapping point. You've made no difference to the perimeter.
When it comes to the area, this is not true: folding the triangular segments over the straight line sweeps out a region of positive area, inside which you can move slightly to either side and still be in the overlap.
Gotta wonder whether the sawtooth is actually a triangle. Each step makes it... not any closer to a triangle. Why would it be a triangle when you keep going to infinity?
> The thing is, each time you do a fold, you don't get any more of the triangle's perimeter covered by the staircase.
Not disagreeing with that.
> When it comes to the area, this is not true.…
Not following your point. And this part is the stickler for me: I believe the area does in fact approach the area of the triangle as the steps approach infinity.
Is this not the case as you see it?
Perhaps my "stickler" is not really one. To your first point, it's not really a triangle in the end, even as you approach infinity (though I still believe the area becomes that of said triangle). And there is nothing in geometry that says you can't have a shape with the area of a triangle but with a different perimeter.
It's just this "shape" looks an awful lot like a triangle but with a much larger perimeter.
Hm, it doesn't seem problematic for the area to be lower than expected for a given perimeter. You could do the same thing by having a triangle (an actual triangle), but then at some random point have the bounding line shoot off in some random outward direction, then come back (increasing the perimeter without enclosing any area at all). It's no more or less a triangle than your fractally jagged triangle, and it has the same perimeter and area.
I think the lesson for me is that any part of any line, if you zoomed into it enough, could actually be found to be all wadded up and much longer than you would otherwise expect. A line from point A to point B must be at least as long as the straight line from A to B, but could be any length greater than that.
The only thing that makes your jaggy line special is that the distance from any point along it to the nearest point on the straight (triangle's) line is zero.
>Why would it be a triangle when you keep going to infinity?
Because the difference goes to zero. They are identical, they are equal in the strongest mathematical sense. The sets which make up their shape are the same set. They couldn't be more equal.
The answer is that the perimeter “at infinity” matches the triangle.
There’s nothing that says the perimeter of the final “infinite” shape has to equal the limit of the perimeters as they go to infinity.
In order for this to work, you have to prove that they’re equal. Typically you’d come up with two limits, one which you prove is always greater than the value in question, and one which is always less. Then you show that they’re equal in the limit. That squeezes the number you’re interested in, proving that it’s also equal to that number.
For example, the typical proof of the circumference of a circle involves making two polygons, one inside the circle and one outside. The perimeter of the one inside is always less than the circumference, and the one outside is always greater. Then you take the limit of the polygon perimeters as the number of sides goes to infinity. They converge on the same number, 2 pi r.
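That squeeze can be sketched numerically (the function name is mine):

```typescript
// Perimeters of regular n-gons inscribed in / circumscribed about a circle
// of radius r. They bound the circumference from below and above, and both
// tend to 2*pi*r as n grows.
function polygonBounds(n: number, r = 1): { inscribed: number; circumscribed: number } {
  const inscribed = 2 * n * r * Math.sin(Math.PI / n);
  const circumscribed = 2 * n * r * Math.tan(Math.PI / n);
  return { inscribed, circumscribed };
}
```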
Huh? I never said otherwise. I said that the perimeter of the limit doesn’t have to equal the limit of the perimeter. In order for this to work, you have to bound the value from above and below and show that they meet at the limit. This only bounds it from above. Obviously you’ll never find a sequence that bounds the perimeter from below that has a limit equal to the square one, so you’ll never be able to prove this equality.
What two perimeters? There’s only one. And it hasn’t been shown to bound the triangle’s perimeter from below. It is plainly greater than the triangle’s perimeter at every step, so it just establishes an upper bound. To derive the triangle’s perimeter, you also need another thing that’s always less than the triangle’s perimeter, then show that the lower and upper bounds converge on the same value.
// Partial sums of the Weierstrass series W(x) = sum over n of a^n * cos(b^n * x)
function sum(f: (n: number) => number, upper = 500) {
  let total = 0;
  for (let n = 0; n < upper; n++) total += f(n);
  return total;
}

const weierstrass = (x: number, a = 0.5, b = 3) =>
  sum(n => a ** n * Math.cos(b ** n * x));
The audio spectrum is bounded, so we only hear the first few terms of the series that makes up the function (the frequency inside the cosine grows exponentially with n).
You could even replace the audible part with Chopin (or anything else, even f(x)=0) to get a Weierstrass-like function which sounds like Chopin (or silence) but is still not differentiable anywhere.
The Weierstrass function is used prominently in Abbott's (2015) "Understanding Analysis". Abbott also relies heavily on three other mind-bending functions: Dirichlet (nowhere continuous), Thomae (discontinuous at every rational and continuous at every irrational point), and Cantor (increasing and continuous on [0, 1], yet constant on [0, 1]\C, where C is the Cantor set, which has measure zero).
The Dirichlet, Thomae, and Cantor functions are central to Abbott's introduction and exercises on continuity, differentiation, and integration. I thought that was an interesting pedagogical choice for an undergraduate book, especially one used for a very first course in mathematical analysis, as in Princeton's MATH215 (I do think it is a really nice book).
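On exact fractions p/q, Thomae's function is easy to sketch (names are mine; floats can't represent "is irrational", so irrational inputs are simply out of scope here):

```typescript
// Thomae's function at a rational p/q (q > 0): reduce to lowest terms p'/q'
// and return 1/q'. At irrationals it is defined to be 0; continuity at every
// irrational follows because nearby rationals need huge denominators,
// making the value tiny.
const gcd = (a: number, b: number): number => (b === 0 ? Math.abs(a) : gcd(b, a % b));
const thomae = (p: number, q: number): number => gcd(p, q) / q; // equals 1/q'
```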
Here’s a really cool thing about continuous-everywhere differentiable-nowhere functions. A consequence of the Baire Category Theorem is that most continuous functions are nowhere-differentiable (in the same sense that “most” real numbers are irrational).
The familiar functions from calculus are the vanishing minority of continuous functions.
Are there any hierarchies in mathematics that aren't like this? It seems like most hierarchies are constructed such that going up one "level" expands your scope so much that the previous rung looks like a drop of water in the ocean.
Examples include the Chomsky hierarchy of languages, where most context-free languages are not regular; Turing computability and solvability; and so on.
There are also functions that have a first derivative but no second derivative. Much of my graduate research involved studying this type of function.
Many of the original ideas came from the paper "The calculus of fractal interpolation functions"
https://www.sciencedirect.com/science/article/pii/0021904589...
I wrote a paper on how to compute the surface normal (for rendering) of related functions:
https://link.springer.com/article/10.1007/PL00013408
Interestingly enough, while you cannot differentiate the Weierstrass function, you can integrate it. That is, you can treat it as a differential equation that has a set of well-defined solutions.
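A sketch of that integration, done term by term (which is legitimate here because the series converges uniformly), using the same parameter defaults as the snippet upthread:

```typescript
// Integrating a^n * cos(b^n * x) gives (a/b)^n * sin(b^n * x), so the
// antiderivative series has even smaller coefficients and converges
// uniformly. The resulting F is differentiable, with F' equal to the
// Weierstrass function -- even though F' itself is nowhere differentiable.
const weierstrassAntiderivative = (x: number, a = 0.5, b = 3, terms = 60): number => {
  let s = 0;
  for (let n = 0; n < terms; n++) s += (a / b) ** n * Math.sin(b ** n * x);
  return s;
};
```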
These examples are what caused mathematics to be this way. Hand-waving was a lot more acceptable while Weierstrass was alive. The discovery of clear counterexamples to hand-waving arguments led to the desire to put mathematics on the strong footing it enjoys today.
It may annoy students today, as there is seemingly little utility in these distinctions, but they actually are important. The more complex mathematics becomes, the more important it is to be on solid footing.
There was definitely some of that before. Ben Franklin, in his autobiography, writes that his Junto, a group of people devoted to self-improvement, included:
"Thomas Godfrey, a self-taught mathematician, great in his way, and afterward inventor of what is now called Hadley's Quadrant. But he knew little out of his way, and was not a pleasing companion; as, like most great mathematicians I have met with, he expected universal precision in everything said, or was forever denying or distinguishing upon trifles, to the disturbance of all conversation. He soon left us."
No doubt pedantic mathematicians had existed before, but what these examples did is actually convince mathematicians in general to be far more concerned about the subjects of mathematical foundations.
E.g. an actually consistent and encompassing definition of what a "function" is, is surprisingly recent. Certainly Euler, Gauß, Leibniz, and Newton did not have one. Only with Cantor's project of set theory did we get something remotely satisfying. Cantor faced significant backlash, which surely would have been more effective if there hadn't been examples where foundational questions became questions of practical mathematics. And whether Weierstrass had actually found a "function" was one of those questions.
Way back when, the rigor (or lack thereof) of calculus was a culture war issue. Bishop Berkeley famously argued the dubious foundations of calculus rendered it no more trustworthy than religion. So, there was external as well as internal motivation for mathematics to clean up its act.
Good memories of my Real Analysis course in undergrad. I'm also a fan of the Devil's Staircase [0], a function that is both continuous everywhere and constant almost everywhere, while still managing to climb from 0 to 1.
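A rough numerical sketch of the Cantor function from its usual base-3 description (names mine; floating point, so only approximate): write x in base 3, stop at the first digit 1, turn 2s into 1s, and read the result in base 2.

```typescript
// Cantor ("Devil's staircase") function on [0, 1] via ternary digits.
function cantorFn(x: number, depth = 40): number {
  if (x >= 1) return 1;
  let result = 0;
  let bit = 0.5;
  for (let i = 0; i < depth; i++) {
    x *= 3;
    const digit = Math.floor(x);
    x -= digit;
    if (digit === 1) return result + bit; // first 1: emit a final binary 1 and stop
    if (digit === 2) result += bit;       // ternary 2 becomes binary digit 1
    bit /= 2;                             // ternary 0 becomes binary digit 0
  }
  return result;
}
```

Note it is constant on the middle-third gaps: everything in (1/3, 2/3) maps to 1/2, yet the function still climbs from 0 to 1.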
I'll throw in my funky function (which is probably similar to a well-known one, but I'm no mathematician so I know not what is well known):
f(x) = a random number uniformly selected from [-1, 1].
(So every real number maps to a distinct random number.)
It's nowhere continuous, nowhere differentiable, yet its integral over any interval is exactly zero.
Which is kind of weird if you think of integrals and derivatives as inverses: as I understand it, the derivative of a constant g(x) = c is g'(x) = 0, yet the integral of the above function is F(x) = 0 while F'(x) = f(x) = the random-noise monster. So the derivative of zero can be either zero or random noise? (And while f(x) is not differentiable, its integral surely is.)
> yet its integral over any interval is exactly zero
That function doesn't actually have an integral in any sense that I know of. The Riemann integral doesn't work because, for every interval in every possible partition, the function's upper bound will be 1 and the lower bound will be -1.
Oh. Er... I didn't really consider whether it could be proven to have an integral, I just went off of intuition. After an additional lifetime of study, magically compressed into 15 minutes of skimming Wikipedia articles, I'm going to (tenuously) claim that the function is Lebesgue integrable. Heck, if it works for the {0 if irrational, 1 if rational} function, this should be much easier. I hope?
My original intuition is that every value in [0, 1] will be cancelled by a value in [-1, 0].
My updated intuition to make it more Lebesgue-compatible would be to sort the (infinite) set of values in the interval. This will give the everywhere continuous, linear function from -1 at the minimum coordinate of the interval to 1 at the maximum coordinate.
Hm... I notice that when applying Lebesgue to Dirichlet, it uses countability, and reals aren't countable. I have no clue whether that's a problem here or not.
It won't be Lebesgue integrable because it won't be measurable. Your function is actually a good (though not rigorous) argument for the existence of non-measurable sets.
I've increasingly come to doubt the utility of all these things.
Real numbers and Lebesgue integration that stem out of 19th c. analysis came before Computability was a thing... and it turns out that all but a measure 0 set of real-numbers are actually computable.
So all these things that we worry about don't even "exist" in a computational sense.
Mathematicians haven't come to terms with this (barring non-standard analysis etc.), and I suspect this is one of the reasons it's losing mind-share.
Note that "computable" just means "computable by a Turing machine or equivalent system", at this time. We don't know if we are missing some other form of computation that goes beyond the abilities of a Turing machine, that could upend the whole concept. And given that we haven't proven that general human thinking is a computable process, there is at least one significant candidate for possible computation beyond Turing machines.
Note that I don't personally think it's likely that our thinking goes beyond the capabilities of a Turing machine, not at all. But I also think it would be premature to throw out all mathematics that doesn't conform to Turing machine computability before we are more sure that this is the best possible model.
And even if it were, there is also the question of whether physical processes are computable or not. Right now there are plenty of physical processes where our best and only reliable models for predicting their behavior require assumptions from calculus, like the existence of all real numbers. For example, there is no successful formulation of quantum mechanics where the distance between two particles moving relative to each other can be constrained to any subset of the [minDistance, maxDistance] interval of the real number line. Which means that, as the particles move away from each other, at some times the distance between them will have to be an uncomputable number (and given their density in the real number line, this will be approximately all the time).
Brownian motion mentioned at the end of the article and more generally Langevin Dynamics are incredibly useful. They're this weird interface between normal physics and statistical mechanics. When a big complex molecule is constantly jostled by smaller molecules, these nowhere-continuous motions are a good way to approximate what happens.
Plus, rather bizarrely, it helps to understand this area to do actual statistics, in software such as Stan.
>> and it turns out that all but a measure 0 set of real-numbers are actually computable.
This is the wrong way around. The computable numbers have measure 0. In fact, they are countable.
The non-computable real numbers have full measure.
(proof: by definition, computable numbers are generated by a program, which can be represented by a finite number of bits. The set of finite bit strings is countable.)
It is not /really/ countable in a computable sense; it is undecidable whether a program generates a real number or diverges (a consequence of Rice's theorem). Thus you cannot compute a function that enumerates exactly the programs that compute real numbers.
I'm not sure what you mean exactly by "countable in a computable sense." Sure, certainly you can't generate one program that enumerates all the programs that compute real numbers.
But the set of programs that compute real numbers is a subset of all programs, and the set of all programs is countable. Therefore the set of computable numbers is countable.
edit: I think maybe it wasn't clear that I'm talking about cardinality and measure? To be a little clearer - I'm saying that the computable numbers have (standard) measure 0 because they have countable cardinality. Any subset of the real numbers with countable cardinality has measure 0. And the set of all real numbers, of course, has uncountable cardinality.
I get what you are saying. My point is that someone who is philosophically disinclined to buy into the "existence" of non-computable real numbers (e.g. constructivists, because such numbers are not effectively computable) is also going to be, by the same logic, disinclined to buy into your argument that the computable real numbers are countable: in order to count them you would need a function that enumerates them, and that function is also not computable.
> But the set of programs that compute real numbers is a subset of all programs, and the set of all programs is countable. Therefore the set of computable numbers is countable.
A constructivist is also not going to buy into your argument that a subset of a countable set is countable. Heck, the constructivists are not even going to buy into an argument that a subset of a finite set is necessarily finite (they have a term for such subsets: 'subfinite').
Yes, I know that this constructivism feels so bizarre that it cannot possibly be coherent; the whole notion of cardinality appears to become useless. But you get used to it after a while.
Heck, even in classical mathematics, trichotomy of cardinality requires (or rather /is/) the axiom of choice. So cardinality wasn't really super well behaved to begin with.
What could it mean to have full measure when nobody can provide any method of generating the digits of a single element of the set? We can define things like Chaitin's constants, but we don't know very many of their digits, and the Chaitin constants are countable. We claim there are these bountiful numbers, more numerous than the countables, but we can't even specify the value of any of them.
I suspect some future generation is going to decide that diagonalization is based on questionable axioms. It leads to invisible pink elephants.
Curiously enough, the Downward Löwenheim-Skolem theorem proves that if any first-order theory has an infinite model, then it also has a countable model, assuming the theory itself is countable.
So even though ZFC talks about uncountable real numbers, there is actually a countable model of ZFC that satisfies the same axioms.
This isn’t an inconsistency, but it really does blur the lines between what we think of as mathematical objects that are “out there” in some sense (like uncomputable real numbers) or whether mathematics is just a game of symbol manipulation that also happens to predict the physical world pretty well.
They’re not finitely describable — for any member of that set, no matter how many digits you articulate, you’ve named a rational number.
The same is true even for numbers that are defined by finite predicates, because there are only countably many finite predicates (a measure-zero set).
Most real numbers are indescribable sequences (ie, not finitely describable), so it’s not surprising we can’t describe them. That’s what happens when you construct something by including all possible infinite sequences — almost all of them are total gibberish.
Diagonalization is a consequence of that: we have a schema of descriptions forming our predicate and prove no matter how we index reals by the integers, there’s more stuff out there. To get rid of that, we need to stop completing sequences.
Most people aren’t in favor of that finitism/constructive approach now — so I’m not sure why that would change.
Even in Cantor's counterexample, where he traverses the diagonal of an enumerated list of numbers, he has a countable number of choices to pick his digits. So we've built a countable set, and he claims it's incomplete by picking one number from a countable set of counterexamples. So now let's union the original set with Cantor's countable set of counterexamples, and get a combined set which is also countable, right?
But that schema can be applied to your new set, as well — to create yet more counter examples. Your method didn’t exhaust the real numbers outside the original list. You merely added one of them.
For any listable subset of the real numbers, we can provide a counter example. And since we’re proving by contradiction in Cantor’s argument, we assume the original list is exhaustive, ie there’s a bijection between naturals and reals.
We show that assumption leads to a contradiction, because there’s at least one real number for which no natural maps to it. Therefore, there can’t be such a bijection.
- - -
Assume f is a bijection between N and [0,1].
Let an be the nth digit of f(n).
Then x = sum(n=1, inf)[ 10^-n * ((an + 1) mod 10) ] is a real number in [0, 1] for which there is no k with f(k) = x. So f is not onto, and therefore f is not a bijection.
Therefore, no such bijection can exist — by contradiction.
- - -
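For a finite prefix, the diagonal rule above is easy to mechanize. This is just a toy sketch (the function name and the toy listing are made up for illustration): given the first n entries of a purported enumeration, each as a string of decimal digits, it builds a number differing from the k-th entry in its k-th digit via (a_k + 1) mod 10.

```python
def diagonal_counterexample(listing, n_digits):
    """Build a decimal that differs from the k-th listed number
    in its k-th digit, using the (a_k + 1) mod 10 rule."""
    digits = []
    for k in range(n_digits):
        a_k = int(listing[k][k])          # k-th digit of the k-th number
        digits.append(str((a_k + 1) % 10))
    return "0." + "".join(digits)

# A toy "enumeration": the k-th entry is the digit k repeated.
toy = ["0000", "1111", "2222", "3333"]
x = diagonal_counterexample(toy, 4)
print(x)  # 0.1234 — differs from entry k at digit position k
```

Of course, the actual argument applies this rule along the entire infinite diagonal at once; the sketch only shows why the resulting number can't equal any listed entry.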
You can’t patch that schema up just by adding counter examples to your first choice of bijection — because a counter example is constructible for any bijection attempt.
The fundamental problem is that reals include limits — which is what allows the sum to be a real number.
I won't claim to have any proof in hand, but there's still something fishy about it all. I think things like Banach-Tarski and such ought to be treated as counter examples to show the absurdity of the Reals.
At times I've thought maybe the problem with diagonalization is forcing me to do a (countably) infinite number of steps before Cantor gets to take an infinite number of steps. For instance, I can give a trivial counting scheme and claim it will eventually generate every number between zero and one, perhaps by writing a program to write programs that generate digits. It's all very deterministic, and I state exactly what I'm doing up front.
So I give the first number (program), and Cantor gives his first digit and says it's not in my list. Then I tell Cantor the countably infinite number of places where his first digit is found, and we select those programs. Then he gives me his second digit, and I select the subset where those are found. So we go back and forth, he keeps saying his number so far is not in my list, and I keep telling him all the places where it is in the list... Feels like stalemate there.
There are other places (such as summing the alternating harmonic series) where you have to be very careful about the ordering of operations. If he really has a higher order infinity of numbers not in my list, maybe I should be able to ask him for his list first, no? It's uncountably infinite, but he can't provide even one of them.
You don’t have to do anything but assume such a bijection does exist, as in my comment: its existence leads to a contradiction in the presence of limits — just like I showed. But without that same limiting to build sets, you can’t construct N from the successor function and you can’t discuss the power set of N.
So we either need to throw out countable limiting to build sets or accept that such set building rules out bijections between reals and naturals.
You think Banach-Tarski is weird (as do most people), but I’d argue a world where you only have finite numbers, or can’t discuss the set of subsets of the naturals, is even weirder.
> maybe I should be able to ask him for his list first, no?
There’s no such list indexable by the naturals, as shown by that contradiction. If you allow indexing by the reals, that request is trivial.
You've edited your post a bit from what I originally replied to. Looking at your bijection stuff, I think you're just restating Cantor's diagonalization with `(a_n + 1) % 10` as your mechanism for choosing digits, but maybe I've got that wrong. I appreciate that it's well defined, but I think it ignores my complaint.
Let's say I tell you that I'm building my list of numbers between zero and one by taking the natural numbers, reversing the digits, and just putting a decimal point in front: 1 becomes 0.1, 2 becomes 0.2, ..., 10 becomes 0.01, 11 becomes 0.11, 12 becomes 0.21, and so on.
It's a silly mapping, and I'm sure there are a few problems with it, but let's start there. For now it doesn't matter what digits I'll choose afterwards - maybe it's all zeros, maybe it repeats the digits, maybe I'll do something more clever, but skip that for now.
My list has a number that agrees with your number to any number of digits we choose. You and Cantor have to force me to give my infinite list of numbers all at once, and then build your infinite length counter example all at once. What axiom lets either of us do an infinite number of steps and say we're finished? What axiom says you can do an infinite number of steps after I do my infinite number of steps? When we do limits with deltas and epsilons, we say that we will get closer as we keep going, not that we got there.
If you give me the first 1000 digits of your counterexample, I can tell you all the elements in my set that match your number up to the first 1000 digits. If you give me 1001, I can tell you all the places that match that too. A million, a gazillion, I can keep matching. And as you keep adding more digits, I can keep telling you an infinite number of places that match.
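That prefix-matching claim is easy to check concretely for the reversed-digits mapping. A small sketch (the helper name is invented for illustration): given any finite digit prefix not ending in 0, it lists naturals whose digit reversal starts with that prefix, so each maps to a real agreeing with the prefix.

```python
def matches_for_prefix(prefix, how_many=5):
    """List naturals n whose decimal reversal begins with `prefix`,
    so that the 'reverse the digits, put 0. in front' mapping sends
    n to a real starting 0.<prefix>. (Prefix assumed not to end in 0,
    to dodge the trailing-zero edge case.)"""
    results = []
    for extra in range(how_many):
        tail = str(extra) if extra else ""     # append extra digits, then reverse
        results.append(int((prefix + tail)[::-1]))
    return results

print(matches_for_prefix("123"))  # [321, 1321, 2321, 3321, 4321]
```

By appending longer and longer tails before reversing, you get infinitely many such naturals for any finite prefix, which is exactly the "I can keep matching" observation; the diagonal number itself still has no single index.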
So it really seems like there's a problem with the order of operations. It's like playing chess, and you force me to make all my moves and come afterward to checkmate me. But if we take turns, where I make a finite number of steps and you give me your next digit, then it's my turn again, I can keep matching you all day and all night.
> What axiom lets either of us do an infinite number of steps and say we're finished?
The same one that says we can build the naturals from successors — without that, we don’t have all of the naturals. As I’ve been saying repeatedly, this is due to building sets from induction.
A limit L of an infinite sequence (an) is such that for any epsilon, there’s some N where for every n after that N, |L - an| < epsilon. You have a sequence of rationals indexed by naturals building towards that real number — but you can go the other way and define a real as (an equivalence class of) such converging sequences.
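For a concrete instance of that epsilon-N definition: the decimal truncations of sqrt(2) form a sequence of rationals converging to it. A small sketch using exact integer square roots (the helper is illustrative):

```python
import math
from fractions import Fraction

def a(n):
    """Rational truncation of sqrt(2) to n decimal digits,
    computed exactly via integer square roots."""
    return Fraction(math.isqrt(2 * 10**(2 * n)), 10**n)

# Epsilon-N in action: each a(n) is within 10^-n of sqrt(2), so past
# N = 6 every term is within epsilon = 1e-6 of the limit.
L = math.sqrt(2)
epsilon = 1e-6
N = 6
assert all(abs(L - a(n)) < epsilon for n in range(N + 1, 20))
```

The real number sqrt(2) is, on the Cauchy-sequence view, exactly what sequences like (a(n)) converge to, even though every individual a(n) is rational.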
> And as you keep adding more digits, I can keep telling you an infinite number of places that match.
The reals that are indescribable are truly weird: they look like subsets of N that are infinitely large but exclude infinitely many as well, without any pattern to inclusion or exclusion. And you can show this power set definition matches [0,1] by taking the binary 0 or 1 at position n to be if n is included in that subset.
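That subset-to-binary-digit correspondence can be sketched directly (truncated to finitely many bits; the function name is made up for illustration): put a 1 at binary position n exactly when n is in the subset.

```python
from fractions import Fraction

def subset_to_real(subset, n_bits):
    """Map a subset of N to a point of [0, 1]: binary digit n is 1
    iff n is in the subset (truncated to the first n_bits positions)."""
    return sum(Fraction(1, 2**n) for n in range(1, n_bits + 1) if n in subset)

# The even numbers {2, 4, 6, ...} correspond to binary 0.010101... = 1/3.
evens = set(range(2, 100, 2))
x = subset_to_real(evens, 60)
print(float(x))  # ≈ 0.3333333333333333
```

Patterned subsets like the evens give familiar reals; the "truly weird" ones correspond to infinite subsets with no pattern at all, which no finite rule can pin down.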
To get rid of the weird reals, you need to force sequences like that out. What makes the reals larger is that they’re defined to include the limits of all converging sequences. The rationals don’t include those limits, eg sqrt(2) or pi.
> But if we take turns, where I make a finite number of steps and you give me your next digit, then it's my turn again, I can keep matching you all day and all night.
But this doesn’t end in a bijection, because after you follow this construction for an infinite number of turns, I can name a real number not in your list — as my final move. That’s what the schema says: no matter how we construct the list, after it’s built, I can name a number outside of it. You’re not showing the list is complete, ie includes all reals, merely that we can include a particular number if I tell you it digit by digit — but we knew that by construction (ie, there will always be some countable converging sequence of rationals).
Every real is constructed by some countable sequence of digits; but there are more real numbers than countable infinity.
> The reals that are indescribable are truly weird [...]
Well, I appreciate your patience. I'm certain I won't find a hole in any of your arguments or the conventional wisdom, but the Reals outside of the Computables (or the Definables might be a better choice) are more than weird, they're absurd. They're dense in the number line, but only a countable number of them are anywhere to be found, named, or described past a few digits.
Additionally there is always a Computable number between any two Reals and a Real between any two Computables. *This* ought to imply a one-to-one correspondence, but I'm certain I can't defend that argument either.
As for applied math, any situation where we claim the Reals are required (perhaps as the position of a particle or the coefficients of a wave function or something) is uncomputable. So saying the Reals apply to reality probably says something very weird about determinism too.
They’re a consequence of subsets being more complicated to talk about than the set itself.
Real numbers are weird compared to naturals because the indescribable ones match to subsets with infinite complexity. But functions on the real numbers are similarly large compared to the reals themselves — because they can have any subset of reals as their output, including infinitely complex ones.
> This ought to imply a one-to-one correspondance, but I'm certain I can't defend that argument either.
The interval is like a fractal: between any two points is a full copy of it. And for any prefix there’s a lot more “and then the tail is infinitely weird” than coherent tails.
> So saying the Reals apply to reality probably says something very weird about determinism too.
Math is a model.
Nothing we compute can’t be done purely over computable numbers — but you simplify a lot of proofs if you work in the space where every convergent sequence has a limit, then just approximate. Including proving that the approximations work nicely.
You’re not making a claim about reality; you’re simplifying your mathematical machinery by packing all of the badness into a few technical axioms about induction — and then just shrugging because Banach-Tarski paradoxes are probably unphysical and purely a construction of the model.
Btw, I wasn't saying to just add one of Cantor's numbers. His mechanism for picking diagonals, whatever it is, is a countable list. He picks a first digit to disagree with the first digit of my first number: That gives him 9 choices. He picks a second digit, another 9 choices. It grows exponentially, but I can make a one-to-one mapping for all (any) of his choices, so ALL of his numbers come from a countable set.
It’s useful I think to have a conservative impulse somewhere as a counterpoint to wild invention, and math has benefited a lot from the constructivist vs formalist debate.
No, because mathematics still has power. It would be awful if we couldn't dream beyond floating points, or if mathematics were tied to the financial and physical forces that shape processor architectures.
That measure 0 set of computable real numbers includes (nearly) every real number you'd ever describe – including all the reals on this page so far. The only exceptions are things like Chaitin's constant, which isn't really interesting for real analysis.
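To make "computable" concrete: a real number is computable when some program can emit its digits on demand. A minimal sketch for sqrt(2), using exact integer arithmetic rather than floating point (the function name is just for illustration):

```python
from math import isqrt

def sqrt2_digits(n):
    """First n decimal digits of sqrt(2), via exact integer square
    roots: isqrt(2 * 10^(2k)) is floor(sqrt(2) * 10^k)."""
    return str(isqrt(2 * 10**(2 * (n - 1))))[:n]

print(sqrt2_digits(10))  # 1414213562
```

Every real anyone names by a rule, like sqrt(2), pi, or e, admits some such digit-emitting program; it's precisely the rule-less remainder that resists this.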
As a constructivist, I somewhat agree with your viewpoint, though my views are a bit more nuanced. (e.g. Dedekind's construction of the continuum as a collection of points is perhaps the primary source of corrupted thinking, which muddles even well-meaning thinkers into thinking that it is even possible to separate computable points from non-computable points. The real numbers form a /continuum/, and a continuum cannot be described as simply a collection of points.)
That said I believe your comments are off the mark in regards to the topic of the Weierstrass function. The Weierstrass function is completely well behaved from a constructive point of view[0]; it is uniformly continuous and everything.
In particular we can (constructively) map (constructive) real numbers to (constructive) real numbers via this function.
True. You could partition math with the question "Could a computer do that?" If the answer is yes, "that" is practical. If the answer is no, "that" is probably useless.
The Dirichlet function[1]: f(x) = 1 if x is rational, 0 otherwise.
It is defined over all real numbers but continuous nowhere. Also if you take the Dirichlet function and multiply it by x so you get
g(x) = x if x is rational, 0 otherwise
…then you have something that is continuous at exactly one place (0) and nowhere else, which also is pretty spectacular.
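A small sketch of both functions. Rationality of an arbitrary real isn't decidable, so this models "rational" as an exact Fraction input and treats anything else (e.g. a float standing in for sqrt(2)) as irrational; the names and that modeling choice are purely illustrative.

```python
from fractions import Fraction

def dirichlet(x):
    """Dirichlet's function, sketched by treating Fraction inputs as
    rational and everything else as a stand-in for irrational.
    (True rationality of an arbitrary real is undecidable.)"""
    return 1 if isinstance(x, Fraction) else 0

def g(x):
    """x * dirichlet(x): continuous at 0 only, since |g(x)| <= |x|
    squeezes g to 0 there, while every other point has both rationals
    and irrationals arbitrarily close by."""
    return x * dirichlet(x)

print(dirichlet(Fraction(1, 2)), dirichlet(2 ** 0.5))  # 1 0
```

The squeeze |g(x)| <= |x| is the whole reason 0 is special: no matter how x approaches 0, g(x) is trapped between -|x| and |x|.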
[1] https://mathworld.wolfram.com/DirichletFunction.html