This is a nice gesture by H. Jerome Keisler; every bit helps to promote a math subject that frightens off so many.
That said, it's a shame the PDFs are in image format instead of text, as that makes it more difficult to copy notes etc. from the text into one's workbooks; selectable text would save the student time in taking notes.
Another minor criticism I have is that, like many texts on mathematics, it lacks both detail on the background to mathematical concepts and their applications. For example, the discussion of Euler's Formula on page 879 is somewhat incomplete. In some ways the Wiki page https://en.wikipedia.org/wiki/Euler%27s_formula does a better job in that it has a diagram of the 'Three-dimensional visualization of Euler's formula' (its application to circular polarization, etc.). Including this diagram and having some discussion about the significance of i, e and π in this famous relationship would have put the subject into better context.
(I recall the significance was lost on me when I first studied the subject for the same reasons. I think that many mathematicians don't realize that many of us ordinary mortals just don't think the same way they do, so including more visual descriptions and diagrams, typical applications and historical background does actually help one's understanding. However, I acknowledge that including this extra material makes textbooks larger and thus more expensive. Perhaps that's the penalty we have to pay.)
This was the book my 1st sem. calculus class used in 1986, at UW-Madison (where the link is hosted and where the author presumably teaches). I remember it as providing a fairly gentle introduction to differentiation, providing a stepping-stone to more traditional notions of limits later on.
I was just thinking the other day about how enjoyable I found learning calculus, compared to what others tend to report. Perhaps this book was part of the story -- very cool that it is being provided free of charge now!
In roughly 1987-1988, I took the required 2 semesters of calculus at UT Austin. It actually took me 4 semesters, for a number of reasons including that I was seriously unprepared for calculus.
But the worst part of the experience was the method of starting with epsilon/delta and limits, to "explain" what was going on, and then throwing that away to get on with solving problems using differentials and integrals. The lectures almost universally took the form of the professor going through a proof of the technique in question and then assigning a set of problems from the text.
> But the worst part of the experience was the method of starting with epsilon/delta and limits, to "explain" what was going on, and then throwing that away to get on with solving problems using differentials and integrals
I guess that was pretty normal once upon a time? I remember that was how my calculus textbooks did it. First pictures to develop some visual intuition, then a gentle foray into how to make the intuitions more rigorous using limits, sequences, and series, and then techniques for solving certain differentiation and integration problems. It makes sense to me; it shows that the techniques are based in math and not in magic, and it lays the groundwork to introduce Taylor series later. It also foreshadows the kinds of proofs that students might do in real analysis if they decide to pursue an engineering or physical science major.
I guess every math class could be very different if you knew it was the last math class that students would take, but you don't know that, and it's problematic to separate students according to which ones will study the material further before they're even exposed to it. If a student takes a class and unexpectedly finds they want (or need) to take further classes in the area, they don't want to find out later that, "Oh, sorry, we didn't think you would pursue this subject, so we gave you the version of the class that didn't prepare you for the next one."
I had to retake the first calculus class. Didn't have much to do with the difficulty of the material, but rather a professor doing things that would get him pulled from the classroom in 2021. Just one of the many things he'd do is call on someone to answer a question during the class. If he didn't like the answer he'd tell them they were lazy or that they should quit college.
While infinitesimals lie at the very heart of the classical approach to calculus, in this day and age it is important to complement a course based on what is now considered non-standard analysis with one of the more standard (limit-based) courses. (I think this is also true in other areas, e.g. if the students are taught a physics course based on, say, geometric algebra, they also should be trained in the more ‘standard’ ways of understanding things.)
I think most universities won't even have a non-standard analysis course available to complement the courses using the standard limit approaches. Your fear seems quite unwarranted to me, since standard analysis courses are usually the only option, and always outnumber nonstandard analysis courses.
Why isn't this approach to calculus more common? Does it lead to more complex proofs or have any serious drawback?
I understand the ε,δ way of defining the limit is important because it extends to other contexts such as continuity in topological spaces etc.; however, as a physicist, I find that infinitesimal quantities reflect the way we really think about calculus intuitively, so it makes sense to make them "first-class" numbers.
From what I read on the subject, this _was_ the main way of dealing with calculus, but it was not rigorous. The first rigorous definition of a limit was given using the epsilon-delta approach, by Weierstrass, with of course many many contributions by his predecessors.
Everybody switched during the 19th century, because epsilon-delta was in fact the only rigorous method.
Abraham Robinson gave a rigorous foundation for infinitesimals only in 1960.
Work done by Schmieden & Laugwitz in the 1950s helped to formalize and retroactively apply rigor to the earlier uses of infinitesimals. Their Omega Calculus approach was also constructive.
Sure - this is a great way to think about calculus intuitively, but imprecisely; rigorous mathematical proofs based on the hyperreals, on the other hand, can become unwieldy, and, as you rightly noted, they do not extend to the modern treatment of, say, differential geometry, which also plays a huge role as a framework in modern theoretical physics.
> Why isn't this approach to calculus more common? Does it lead to more complex proofs or have any serious drawback?
Simply put, because there aren't more textbooks for it and because professors aren't as familiar with it, leading to a vicious circle where it's not taught because it's not as easy as teaching the standard way, and then it's not routinely learned because it's not taught ….
(The advanced logic that goes into the underlying constructions, but that isn't necessary to use non-standard analysis, also causes many non-logician mathematicians to give it an unjustified bit of side-eye.)
You really think that it makes sense to require the axiom of choice to prove that the derivative of x^2 is 2x as Robinson's ultrafilter construction does?
I personally like understanding infinitesimals, and knowing what the d in dy/dx means. And knowing that the notation for a second derivative, d^2y / dx^2, is not just arbitrary. This occasionally has uses. For example, if you have implemented a numerical d function, say as lambda x: f(x+h/2) - f(x-h/2), then d(d(y))/d(x)^2 is an excellent approximation to the second derivative.
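To make that concrete, here's a minimal Python sketch of the idea; d and h come from the comment above, while the helper identity, the test function f(x) = x**3 and the particular value of h are just my illustrative choices, not anything from the book:

    # Hypothetical sketch: a numerical "d" operator built as a central difference.
    # d(f) returns a new function x -> f(x + h/2) - f(x - h/2).
    h = 1e-4

    def d(f):
        return lambda x: f(x + h / 2) - f(x - h / 2)

    def identity(x):
        return x

    def f(x):
        return x ** 3

    x = 2.0
    dy_dx = d(f)(x) / d(identity)(x)              # ~ 3*x**2 = 12 (first derivative)
    d2y_dx2 = d(d(f))(x) / d(identity)(x) ** 2    # ~ 6*x  = 12 (second derivative)
    print(dy_dx, d2y_dx2)

Expanding d(d(f))(x) gives f(x+h) - 2*f(x) + f(x-h), and d(identity)(x) is just h, so the second quotient is the usual central second-difference formula.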
But conceptually I think it is far better to understand approximation a lot more directly than by using a complex construction for the infinitesimals.
> You really think that it makes sense to require the axiom of choice to prove that the derivative of x^2 is 2x as Robinson's ultrafilter construction does?
This was just what I meant to say—the details of the construction shouldn't matter, only the axiomatics of the structure that's been constructed. You can use infinitesimals perfectly well without having to get into the weeds of how they're constructed. To be sure, your results only apply within that structure, but that's the way of mathematics, that things are proven only within some structure.
I think one wouldn't expect, for example, a constructivist mathematician to disparage classical mathematics because it relies on such inelegant logical machinery as the law of the excluded middle. Well, maybe some constructivists do that, but more often they offer the better response of showing how many of the same results you can recover without requiring that logical machinery.
The same response seems appropriate here: there's no reason to cast aspersions on people whose work rests on the axiom of choice; but, if it bothers you, then see how much of their work you can do without the axiom of choice. If you can prove the equivalent, then great; no need to complain! If part of their work genuinely requires choice, then that's an interesting fact, too.
Robinson's construction required the axiom of choice, and he proved that every result you can prove with infinitesimals has a proof with limits that doesn't use the axiom of choice. But he went on to show that infinitesimals allowed abandoned proofs to be made rigorous, and claimed that they were good for intuition.
In other words, infinitesimals were trying to replace something that had become established. The axioms behind the established thing are much weaker than the axioms required for infinitesimals. And infinitesimals provably don't let you prove anything fundamentally new.
At that point any interest in infinitesimals has to come down to curiosity and the claim that it helps intuition. As for curiosity, I'm glad to have understood them. As for helping intuition, that is in the eye of the beholder, and has not been a compelling enough argument to get people to switch.
One factor I've heard cited is that if you can prove something in calculus using nonstandard analysis, you can always prove the same thing with standard analysis.
If we switched to teaching calculus based on nonstandard analysis we'd have to also teach the standard approach because of all the existing material that uses it and all the people who already know the standard approach but not the nonstandard approach.
Not many people are willing to commit to a few generations of teaching dual approaches and using both in their work until the people who only know standard are all dead or retired and all the old material has either been translated or is obsolete, when the advantage of the nonstandard approach is just that it might be conceptually easier or more intuitive.
Learning math isn't just about solving problems. It's about solving problems in ways that other people can follow, evaluate, and understand. Since nearly no one knows nonstandard analysis, but most people with college mathematics know standard calculus, there is much less utility in learning nonstandard analysis. It might make it easier for you to solve certain problems, but no one will understand what you did.
The true telos of a mathematics education is gaining a genuine understanding of the principles of mathematics. Pragmatic concerns of the nature you describe should be treated as secondary concerns in an educational environment.
We learnt some non-standard analysis in a logic class (as a capstone application of FOL and model theory).
Perhaps there’s a perceived notion that to understand infinitesimals and such you need a logic background, but logic is generally an upper-level class that doesn’t have a full track for undergrads. This would be in addition to the fact that most professors are familiar with ε,δ-proofs and AP calculus classes cover the limit approach (though not rigorously).
The entrenched pipeline of mathematics students and professors would have to face a period of getting flushed out and refitted, and this is probably not attractive to administrators.
I claim that the first undergrad course in Analysis is not actually there primarily to teach you real analysis. Rather, it's there to teach you the difference between "forall exists" and "exists such that forall", and to hammer into you that intuition is not by itself a proof. Nonstandard analysis, I claim, is not as good a vehicle for those purposes as epsilon-delta analysis is.
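For anyone who hasn't seen it spelled out, the distinction in question is the classic quantifier-order one; a standard example over the reals, in LaTeX notation (my illustration, not from the thread):

    % "forall exists": true -- for any x you can take y = x + 1
    \forall x \in \mathbb{R} \;\exists y \in \mathbb{R} : y > x
    % "exists such that forall": false -- there is no largest real
    \exists y \in \mathbb{R} \;\forall x \in \mathbb{R} : y > x

Epsilon-delta definitions (continuity vs. uniform continuity, say) turn on exactly this kind of quantifier ordering.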
If you are worried about epsilons and deltas being too hard, then you can write your calculus text to not be rigorous rather than trying to make it rigorous with non-standard analysis. I think giving up on some of the formalism while still sketching the ideas of proofs is the way to go from a pedagogical point of view.
Have you compared the approaches? I learned with epsilon delta, but I would be interested in seeing data about learning outcomes with alternative approaches.
In my experience, epsilon delta understanding was pretty much unrelated to passing calculus, where tests consisted of formula/substitution crunching.
> In my experience, epsilon delta understanding was pretty much unrelated to passing calculus, where tests consisted of formula/substitution crunching
That's a shame. There are so many exciting things to learn in calculus that you can skip the epsilon delta stuff and still do so much more than formula/substitution crunching. Calculus is the gateway to differential geometry, topology, mathematical physics, differential equations, Taylor series (which are useful for numerical approximations and so many other things), analytic number theory, complex analysis, and many forms of statistics. So many wonderful things you can do with it.
Imagine using your newfound calculus knowledge to show that the orbits of the planets about the sun are ellipses -- which is what Newton used calculus to do -- or learning all about epsilons and deltas. Which would be more fun for a student learning calculus? Make the math exciting enough and people will put up with the drudgery of calculations.
I quickly skimmed through the book; this looks great for a much-needed refresh after 10 years of a non-math-intensive programming career. Question: are there solutions to the problems?
This helped me a lot when I was taking calculus.
We were assigned Stewart, but I basically learned from this.
Analysis from Rudin, and later some manifold theory from Lee, have converted me to more mainstream views on the subject... However, I think this is probably how calculus should be introduced at first, particularly to, say, high schoolers.
How to show such numbers exist? Chapter 1 states they can be constructed from the real numbers and this construction is discussed in the epilogue, but (I don't have enough time, probably) there seem to be only some axioms in the epilogue, not a construction (or proof of existence).
There are various ways to construct real numbers and complex numbers. One approach is given in Landau's Foundations of Analysis (a.k.a. Grundlagen der Analysis).
You might want to look at the Metamath Proof Explorer materials on constructing numbers, a good starting point is here:
http://us.metamath.org/mpeuni/mmcomplex.html
Metamath is a general tool that lets you specify axioms and proofs, and verifies that the proofs only depend on axioms and previously proved proofs. The Metamath Proof Explorer (MPE) is a particular application of it that uses classical logic and the ZFC set theory axioms. MPE shows how to construct numbers using these axioms, then proves a set of number axioms, and from then on uses only those axioms so that the details of any particular construction are not important. What's usually more important is showing that you can construct them.
Cool link. I know some constructions of the real and complex numbers; I was asking about the construction of the hyperreals used in the textbook. I couldn't find info about that on the page.
The construction is fairly easy but requires some fairly hefty background knowledge to make formal. In brief, though: an ordinary real number can be implemented as a sequence of rational numbers, with two sequences considered to be "the same" if they eventually get, and stay, arbitrarily near to each other. A hyperreal can be implemented as a sequence of real numbers, with two sequences considered to be "the same" if they are equal at "very many places", for a certain strict and formal definition of "very many places". (Formally, fix a nonprincipal ultrafilter on the naturals; then the requirement is that the sequences be equal on a set of indices which is a member of that ultrafilter.)
A standard real z can be viewed as a hyperreal, by taking that sequence to be z, z, z, …. Another equivalent representation of that same hyperreal would be the sequence 0, z, z, z, … because that's equal to the first sequence in "very many places". There's an infinite hyperreal, implemented by the sequence 1, 2, 3, …. (In fact there are tons and tons of infinite hyperreals.)
You can prove that the space of hyperreals is a field, and moreover with some model theory you can show that actually a lot of structure (in fact, all first-order structure) transfers directly over to the hyperreals from the reals.
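In symbols, that sketch is roughly the standard ultrapower construction (Keisler's own notation may differ); written out in LaTeX:

    % Fix a nonprincipal ultrafilter U on the natural numbers.
    (a_n) \sim (b_n) \iff \{\, n \in \mathbb{N} : a_n = b_n \,\} \in U
    \mathbb{R}^{*} = \mathbb{R}^{\mathbb{N}} / \sim
    % Arithmetic is defined componentwise on representatives:
    [(a_n)] + [(b_n)] = [(a_n + b_n)], \qquad [(a_n)] \cdot [(b_n)] = [(a_n \cdot b_n)]

The nonprincipal ultrafilter is exactly where the axiom of choice sneaks in, which is what the complaints elsewhere in the thread are about.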
I don't know much about it, I'm afraid. But there is the very cool fact that the usual Riemann integral can be written as basically just a sum in nonstandard analysis; it's covered very neatly in André Pétry, "Analyse Infinitésimale: une présentation non standard" which is very readable. There's a lot more advanced stuff, including Brownian motion and (I think) some integration, in Hurd and Loeb, "An Introduction to Nonstandard Real Analysis".
For the foundations of nonstandard analysis, and a wide-ranging overview, I strongly recommend Robert Goldblatt, "Lectures on the Hyperreals: an Introduction to Nonstandard Analysis", one of the Graduate Texts in Mathematics.
I'm rusty, but did think about this a bit at one point. I recall you can't really do the usual constructions on hyperreals (problems with completeness?) but you can do something like approximate any real-valued measure closely.
It's a bit more than an instructor's manual. It covers the construction of the hyperreals in two ways: One (in section 1E) is relatively short, and is like the compactness theorem approach Abraham Robinson originally used in his book ("Non-standard Analysis", Princeton U. Press).
The second construction uses ultraproducts (in Chapter 1*). Keisler gives a quick introduction to the logic needed to understand the ultraproduct construction, beginning with formal languages and sentential logic. So the prerequisites are there, and explained concisely but clearly. The ultraproduct construction comes up elsewhere; for applications to other areas of math, see Martin Davis's book "Applied Nonstandard Analysis" (Dover Publications). (There's a review of the books by Keisler, Davis, and Stroyan and Luxemburg: https://www.ams.org/journals/bull/1978-84-01/S0002-9904-1978...).
The rest of Keisler's monograph contains mostly nonstandard proofs of many of the results in the textbook. I like it a lot, and anyone who's interested in this stuff should grab the PDF.
Another book that is short and moderately elementary is "Infinitesimal Calculus" by James Henle and Eugene Kleinberg, also published by Dover [which also publishes a paper version of Keisler's text, and is generally a great publishing company].
When Keisler's book came out, someone decided that it would be reviewed for the Bulletin of the AMS by Errett Bishop, who (as I recall) was a noted constructivist. You can read the review here: https://www.ams.org/journals/bull/1977-83-02/S0002-9904-1977... Having a constructivist review a calc textbook that uses nonstandard analysis was probably not a great idea. The review ends with: "Now we have a calculus text that can be used to confirm their experience of mathematics as an esoteric and meaningless exercise in technique." I believe Keisler wrote a reply in a subsequent issue of BAMS, but I can't find it.
I do not see how 1E shows existence. The definition of the natural extension needs the hyperreals in the first place. (reading on mobile, maybe I am missing something here)
Check the existence theorem on page 23 (I'm looking at the paper edition): "Let R be the ordered field of real numbers. There is an ordered field extension R* of R and a mapping * from real functions to hyperreal functions (i.e., functions on R*) such that Axioms A - D hold." Note that he just starts with the reals. The actual definition of R* is in the middle of the proof:
R* = {tau[epsilon] : tau(x) is an element of T(M)}
Note that the proof uses Zorn's lemma, so it's definitely not "constructive" in the strict sense (in case that's what you meant).
Well, no numbers exist in any real sense. Integers, real numbers, complex numbers, hyperreals, etc., they're all imagined, and built from axioms. Keisler does have a companion book that goes into more of the theory of hyperreals, and its bibliography (p209) points to deeper theoretical work too: https://people.math.wisc.edu/~keisler/foundations.html
Well, the real existence of abstractions is a pretty involved topic in philosophy of knowledge, and so you can’t simply say that, for example, integers do not ‘really’ exist. Because they indeed do, in a very important (and, in fact, obvious) sense: you need two cats to have a fight.
I flip between agreeing and disagreeing with you! I'm not well qualified to comment so I expect an informed rebuttal to this will be instructive for me. So here goes:
I still find myself in the "number systems are a cultural artifact" camp. We choose which details to keep and which to throw away in our abstractions. There are anumeric cultures and it's not clear to me at all that the existence of integers is obvious (beyond what humans can subitize) unless you're immersed in a culture that has impressed them upon you since early childhood.
Do you have "2 cats" or actually just "this cat and that cat"? This cat is a bit bigger but that one looks meaner.
Deep down I think that an insistence on the "reality" of integers versus reals (say) is purely aesthetic. (Ignoring for the moment that our conventional constructions build one from the other in a certain way).
The integers represent perhaps the simplest case. Another example of their actual existence is the fact that properties of atoms and their nuclei critically depend on the number of the nucleons of particular types in them; note that this fact does not in any way depend on whether a culture is anumeric or not. With some effort you can extend this idea to other abstractions (this process is usually greatly hindered by one's leaning towards solipsism).
The existence proof you describe is in terms of a yet more complex abstraction; we can imagine aliens might describe a more useful atomic theory than our own which does not involve nucleons or their discrete counts at all (ok, unlikely as I acknowledge that may be).
I don't want to descend into solipsism and apologise if I veer too far in this direction. Also thanks for taking the time to rebut what is probably a sophomoric argument.
I suppose my fundamental objection is that if integers are "real" then it seems to me that quaternions must be similarly "real" (since I can describe useful things with them) and so on, I can't see a boundary which would let me say "ok integers are the real deal but infinitesimals are just a thing we made up".
I think we have a zone of possible agreement if we decide either that all these abstractions are "real" or none of them are.
This may seem a bit extreme, but, in my opinion, while in mathematics (and especially in physics) there is a fair amount of scaffolding, what mathematicians actually do is discover things that exist in reality in one form or another. (It is important to understand that few, if any, things in mathematics are pure fantasy; the objects and relations studied there are pretty much forced upon us - which, incidentally, is yet more evidence of their having real significance, i.e. of something lying outside our consciousness and acting independently of it.)
Yes, but the reals (or the power set of the reals) are really a bit dodgy. The Banach–Tarski paradox shows a big disconnect with reality. There are other axiom systems without the axiom of choice which allow every subset to be measurable, but they seem to have their own problems.
The reals exist in nature in the same sense as, say, 𝜋 does (which occurs as the natural limit in a particular Monte Carlo experiment). The paradox that you mentioned is, rather, disconnected from our intuition, from which reality itself is, too, disconnected (if you think of quantum mechanics, for example).
On the other hand, it is interesting to think about the fact that, say, the Avogadro number is usually represented as a real number; the problem with the number of protons or other particles is that their number is unbounded: given enough energy, another particle can always jump into existence...
That's still arbitrary (designed by humans) because you decided to count each proton as 1. We could also count every proton as 2 and it would be just as arbitrary.
The notion of the existence of ideas has been discussed in philosophy since Plato and has not been settled. Consider that to express an idea, even a simple one like the numbers one-two-three, one needs to use a language. And language has an infinite number of interpretations that may be entirely different. So how does one know that his ideas are not misinterpreted by others? Moreover, memory is based on language, so how does one know that the things he remembers from a day ago mean the same as they did then?
They are not all built from axioms but from definitions (apart from some models of the natural numbers, or you go back to ZFC). For example, C is built from R^2, R from the power set of Q, Q from pairs of integers, and the integers from the natural numbers.
Otherwise you do not easily know if your axioms are actually consistent.
Yes, but what I meant is that there are axioms at the base of this structure, so everything is ultimately built on them. Some people put axiom foundations higher up too, like axiomatic real numbers.
Yes, there are of course axioms, but if you have a rich enough set of axioms like ZFC, then one should stop piling on more axioms (unless absolutely necessary). So of course the question is existence in ZFC.
I agree with you (for some values of "exist") and furthermore think this is a good reason why "imaginary" is a terrible name for the number set they refer to.
(I prefer "pure complex", but that's got its issues too, of course.)
It is not complicated, but it does require some work.
Define: “f(x)” is an infinitesimal if lim f(x) = 0 as x → 0,
which is the same as saying “f(x)” is smaller in absolute value than any positive real number.
Nothing more.
The problem: you cannot divide by infinitesimals. Now you need to “create a field” from these numbers and the reals: this requires the axiom of choice. There is also the sign problem.
But in the end, “that is all you need”. You think of infinitesimals as “numbers smaller than any true real number”.
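For completeness, once you are working inside a hyperreal field R*, the usual precise version of “smaller than any true real number” is the following (standard definition, not specific to this comment; in LaTeX notation):

    % epsilon is infinitesimal iff its absolute value lies below every positive standard real:
    \varepsilon \in \mathbb{R}^{*} \text{ is infinitesimal} \iff |\varepsilon| < r \text{ for every real } r > 0
    % 0 counts as infinitesimal under this definition; dx, dy, etc. denote nonzero infinitesimals.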
Thanks! I know about axiom of choice (and I am fine with it), so how do you do the field construction?
I am also a bit worried: if you have a field, it is also a division algebra. And I thought we knew that the only division algebras are R, C, H? Like https://math.stackexchange.com/questions/2020399/division-al...
(The link does not 100% fit). So how do hyperreals go around that 'no-go' theorem?
The Frobenius theorem characterizes finite-dimensional associative real division algebras as being isomorphic to the reals, the complex numbers or the quaternions. The hyperreals aren't finite-dimensional as a vector space over the reals (for example, if ε is an infinitesimal hyperreal, then ε, ε², ε³, ε⁴, ... are linearly independent over the reals).
A simpler example of another real division algebra (and in fact, another field) is the field of rational functions with real coefficients. This field is also infinite dimensional over the reals (for example, 1, x, x², x³... are linearly independent).
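A quick sketch of why the powers of an infinitesimal ε are linearly independent (a standard argument, written out in LaTeX notation; my illustration, not quoted from anywhere): if a real linear combination of them vanished, the lowest-index nonzero coefficient would be forced to be infinitesimal, which no nonzero real number is.

    % Suppose c_j \varepsilon^{j} + c_{j+1} \varepsilon^{j+1} + \dots + c_k \varepsilon^{k} = 0,
    % where c_j is the lowest-index nonzero real coefficient.
    % Divide by \varepsilon^{j} (legal, since \varepsilon \neq 0) and solve for c_j:
    c_j = -\left( c_{j+1} \varepsilon + \dots + c_k \varepsilon^{k-j} \right)
    % The right-hand side is infinitesimal, but a nonzero real is not: contradiction.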
Ah yes. But then if I think of infinitesimals as functions going to 0 at 0, I can have two infinitesimals (the first one constantly zero for negative values, the other constantly zero for positive values) which multiplied give the constant zero function. How does a construction deal with that?
Although the original statement about “infinitesimals being functions that vanish at 0” was stated with confidence, it is wrong.
The usual construction of the hyperreals replaces real numbers with sequences of real numbers, and also introduces a nontrivial equivalence relation on the sequences, making two sequences equivalent if they agree on a “large” set of terms. The real numbers get represented by the constant sequences, infinitesimals get represented by sequences that approach 0, and infinite numbers are represented by sequences that grow without bound.
The magic is in how “large set of terms” is defined. You need a “large set” relation with the property that finite sets are not large, and for any set either the set or its complement is large. Then we can resolve your question: say you had two not-always-zero sequences that multiply to give the all-zero sequence. Then the set of zero positions is large for one of those two sequences. And that means one of your sequences is equivalent to the zero sequence. The field axioms are saved!
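Spelled out for the two “half-zero” functions upthread, viewed as sequences (the standard argument, in LaTeX notation; nothing here is specific to Keisler's construction):

    % Suppose a_n b_n = 0 for every n, and let
    A = \{ n : a_n = 0 \}, \qquad B = \{ n : b_n = 0 \}, \qquad A \cup B = \mathbb{N}.
    % The ultrafilter U contains either A or its complement:
    A \in U \implies [(a_n)] = [0]
    \mathbb{N} \setminus A \in U \implies \mathbb{N} \setminus A \subseteq B \implies B \in U \implies [(b_n)] = [0]
    % Either way one of the factors was already the zero hyperreal, so no zero divisors appear.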
inf + 1 = inf, which is idempotent, so if you extend the complex numbers with infinity they become a semiring. Is the ire that infinitesimals draw related to this?
There are lots of different "infinities" floating around in mathematics, and that probably doesn't help the matter.
The extended reals or extended complex plane do behave as you describe (and infinite cardinalities have the same property), but infinitesimal treatments of calculus do not. You're instead working in an ordered field that contains the reals and some element greater than all reals.
Supposing such a thing can exist (they can exist in ZFC), the fact that you're working in an ordered field gives you a lot of infinitesimals and infinities (and also a lot of numbers "between" the ordinary reals).
From what I understand, you do not work directly with infinity, but with an infinite hyperinteger H that is bigger than any standard integer, and for which H + 1 = H does not hold.
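A quick side-by-side of the two situations being contrasted in this subthread (standard facts, written in LaTeX notation):

    % Extended reals / extended complex plane:
    \infty + 1 = \infty
    % Hyperreals, with H an infinite element (H > r for every real r):
    H + 1 > H, \qquad H + 1 \neq H
    % and its reciprocal is a nonzero infinitesimal:
    0 < 1/H < r \quad \text{for every real } r > 0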
The key terms to search for are "nonstandard calculus" or "nonstandard analysis". I just found the following set of lectures (but I haven't watched any of them); maybe they are what you want: https://www.youtube.com/watch?v=ILDkYszP2lA&list=PLDXeoTykA-...
https://terrytao.wordpress.com/tag/nonstandard-analysis/
https://terrytao.wordpress.com/2010/11/27/nonstandard-analys...