Do Teachers Need to Include the History of Mathematics in Their Teaching? (2003) (researchgate.net)
142 points by lainon on Dec 24, 2016 | 114 comments



Yes, yes, a hundred times yes.

My math teacher taught me not only calculus, but also when and who got up to what. It doesn't have to take a long time, but a bit of context helps a lot. Euler went to Russia and yada yada bridges graphs etc.

Same goes for all science disciplines. You need to have a rough idea that Darwin worked in the 19th century, that much of thermo came about in the late 19th century, that quantum is a 20th century thing. You need to know what people were wondering about, and what experiments they came up with.

I've been listening to a lot of audio courses lately, and those little nuggets really help to understand things.

The point of the little stories such as how Watson and Crick came up with the double helix is to help recall. It's hard to remember dry facts, much easier to remember stories. People are kinda built that way.


Yes Yes Yes! One of the best math books I ever read, "Mathematics for the Million" by Lancelot Hogben, taught each math topic as something that was discovered as civilization grew to have more complex problems.


I strongly believe math should be taught with the science that motivated its development- calculus should be taught with classical mechanics. But I'm less sure the history of science is worthwhile.

The problem is opportunity cost. Schools (highschool, college intro physics) spend a great deal of time discussing previous models of the atom. What's the value of teaching Thomson's plum pudding model of the atom, really? They could start with the current model and list all of the observations/experiments that have shown the model to be useful. Previous models could be relegated to an appendix, or a history class.

This would free up time to more comprehensively discuss 20th and 21st century physics, which are sorely neglected at this level.


The value of teaching Thomson's plum pudding model of the atom is to understand the evolution of ideas. It also helps young minds understand that solving problems is often messy and that we have to revisit our scientific models as new facts emerge. It might be obvious to you, but without that context young students tend to assume that solutions arrive in a single shot.

Ideally, if students have strong basics, they should explore 20th and 21st century physics themselves.


I think there is value in teaching that there have been prior models (but more energy should be spent on the current thinking) because it shows that good scientists are constantly researching and revising their ideas as they get more information. Science is never done.


To be clear I agree there's value, I just don't think the value is as high as the other things that could be taught in that time.

I think appendices are really a good solution here. Those who are curious can read them- I know I would have. Those who aren't, can stick to focusing on the current model.


I'm not sure I agree. I think that understanding that knowledge is never done, and that science is about exploring new ideas and questioning our current understanding is much more important to the layperson than knowing how atoms are structured.

I also think that couching science as a journey--a mystery to be solved, with clues and red herrings along the way--will help to get students interested in learning the details. (This is all just my gut instinct; I have no experience in science education, so I might not know what I'm talking about).


Yeah, calculus seems really abstract and pointless until you realise that it's 90% of physics. At school they shy away from calculus so much that you just do a version of physics where they've simplified away the calculus, which is extremely confusing.


Sometimes it's worthwhile to explain how things are by first talking about how things aren't.


The opposite happened to me: no background at all, and I actually ended up resenting calculus for a long time.

At some point I stumbled onto stories about Newton, Leibniz, and everything that drove them. It was riveting and brought the magic back.


Hearing their stories communicates to you the existence of the millennia-wide fellowship of truthseekers, and that what you are receiving is not just fodder for tests---it is your inheritance.


> how Watson and Crick came up with the double helix

s/came up with/stole the idea from looking at Rosalind Franklin's X-ray data/

To be fair, both Watson and Crick strongly insisted she also get the Nobel, but she had tragically passed away shortly before the award was announced, and the rules are very clear about no posthumous awards. Still, it's sad that we don't teach Franklin's role.


I have never, not once, come across a discussion of Watson and Crick that did not also talk about Franklin's role.

Even when I was in high school back in the '80s, her contribution was front and center, along with the other two. Can we please let the myth die that her contribution is somehow forgotten and not taught?


Always the headline is Watson and Crick. Not Franklin with some minor assistance from ...

Franklin is being brought into the "Watson and Crick" discussion. Rather than it being Franklin first, foremost and clearly most important. Let's not airbrush that her research was provided to Watson and Crick who claimed the glory without her consent.

So yeah, Franklin. Franklin is double helix.

Edit: name typo


No one forgets that her data was taken/stolen. The fact of the matter is that, even with her brilliant experimental work and results, she didn't come up with the final structure. The history of science is littered with forgotten people who were scooped of the glory just short of the line. Franklin is very well recognised given that she didn't discover the structure of DNA.


> Always the headline is Watson and Crick. Not Franklin with some minor assistance from ...

Oddly, Wilkins is always left out too. That trio won the Nobel.

Maybe Watson and Crick were just better at selling themselves.


I agree that historical context should be taught in classes. I wrote a Master's thesis adding plenty of historical notes about why this or that approximation was thought of and proposed. Sometimes it doesn't take much - just reading the early Schrödinger papers can give you an idea of his train of thought. It really helps make sense out of theory, for me.


"Euler went to Russia and yada yada bridges graphs"

I know little of this history, but is there any link between him going to Russia and him solving the seven bridges of Königsberg problem?

I'm asking because I don't know whether Euler ever visited Königsberg and because Königsberg is in Russia now (renamed to Kaliningrad after WWII), but was in Prussia at the time.


He went to work in St Petersburg, which is just up the coast. No idea if he sailed or drove, but it seems quite likely to have been on his way regardless of where in Europe he was before (Switzerland?).


(TLDR: likely on his way, but did he stop there?)

Yes, he was in Switzerland, in Basel. Googled a bit. http://www-groups.dcs.st-and.ac.uk/history/Biographies/Euler...:

"Euler left Basel on 5 April 1727. He travelled down the Rhine by boat, crossed the German states by post wagon, then by boat from Lübeck arriving in St Petersburg on 17 May 1727."

Still inconclusive, but given that we know this, I would say chances are non-zero that the sources this was derived from spell out where that ship made stops.

One could also look at his journeys from St Petersburg to Berlin and, years later, back, but both were after his publication on the 7 bridges problem.


*rode. I don't think one drives a wagon. Euler was born in 1707.


Someone probably drove the vehicle in question for Euler but:

b.2.b One who drives a vehicle or the animal that draws it; a charioteer, coachman, cabman, etc.; also, one who drives a locomotive engine. (Often with defining word prefixed, as cab-driver, engine-driver, etc., for which see the first element.) [...]

c 1450 St. Cuthbert (Surtees) 6016 All þe dryuers ware agaste þat þe sledd suld ga our faste. 1581 Savile Tacitus 93 (R.) Buffons, stage-players, and charet drivers. 1725 Pope Odyss. xiii. 99 Fiery coursers in the rapid race Urg'd by fierce drivers thro' the dusty space. [...]


I think I recall reading a different version from Leibniz's POV (sp). I have always liked these videos:

https://www.youtube.com/watch?v=dW8Cy6WrO94&index=1&list=PL3...


I would highly recommend the BBC Radio 4 program "In our time" http://www.bbc.co.uk/programmes/p01gyd7j?page=2

Each episode tends to discuss historical, political, and economic conditions in addition to the primary topic.


I'll be the devil's advocate. Why? I can understand calculus perfectly fine without understanding how people got there. In fact, I'd argue much of what you learn before calc--in the order of discovery--actively hinders understanding.


Because advancing science and mathematics is not a straight line but is full of frustrations inside a cloud of uncertainty and anxiety. It is important to appreciate what the world looked like before the discovery and how someone tackled a problem to change this view of the world, because when you yourself make discoveries you will be in the exact same situation.

However, if all you care about is using what has already been discovered, which is becoming less and less valuable, then you don't need to learn the history of mathematics and science.

This is anecdote rather than good evidence, but I cannot remember any significant person who made fundamental contributions to mathematics or science while being completely ignorant of the history of the field.


Let me give an example from calculus - a continuous function. How do you define the concept? The definition of the concept changed quite a lot in the past 250 years or so. (Please take the following explanation with a grain of salt, I am not writing a thesis on the topic, just pointing out stuff.)

For Euler, a continuous function was pretty much an intuitive notion. He composed functions with only occasional point discontinuities, so it wasn't a big deal for him not to have a proper definition.

Then people like Bolzano and Dirichlet came along and realized they needed a better definition, because there can be some really weird cases. So they formalized continuity with limits (which is the typical way to define it in basic calculus).

Later yet, people understood better what it means to be a real number by looking at notions such as countability and measurability. While this doesn't affect continuity itself, it does affect understanding of what a real function is.

Then came more abstraction, to metric spaces and eventually topological spaces, which redefined "continuous function" yet again as a morphism between topological spaces.
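
For reference, the two formulations being contrasted here, stated the standard way:

    Limit form: f is continuous at a if for every eps > 0 there is a delta > 0
    such that |x - a| < delta implies |f(x) - f(a)| < eps.

    Topological form: f : X -> Y is continuous if the preimage f^(-1)(U) of
    every open set U in Y is open in X.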

Another shift in thinking about continuity happened when the theory of distributions was invented. This actually completely reverses the intuition - instead of properly defining the reals and then, on top of real functions, defining what it means to be continuous, you define the "function" itself in an entirely different way, one in which continuity becomes somewhat irrelevant.

Finally, modern mathematics is quite obsessed with category theory and various ways to make everything into some algebra. In a way, we care less what reals really are, only what we can do with them (or their sets).

So I think to understand the intuitive relation of all these different definitions, you need to understand a little bit of history.


Many people will find it easier to follow maths lessons if there are a few bits of history sprinkled in here and there. It adds a human element and honoring the great ones hundreds of years (or even millennia) after they are dead serves as an implicit demonstration of how important their discoveries are to us. Not a terribly powerful demonstration, but less futile than repeatedly yelling "hey, this is important!"

Also, mathematical concepts don't come with natural names attached. But we need consistent labels for successful communication. It's much easier to not confuse those labels if you know a bit about the history that led to the naming.


It also helps to understand history.

If you know that people didn't know some science/math at a time, it makes it clearer why they acted in some way.


Probably not. It's interesting to plow through Newman's four volume "The World of Mathematics" as a grad student, but inflicting mathematical history on ordinary school kids is cruel. They don't need that much math.

Unless you're a mathematician, math should be viewed as a tool, like a lathe. You don't need to know the history of the lathe, and how Maudslay made it a precision machine tool. (His original lathe is in the Kensington Science Museum. It's one of those historic artifacts which looks very different from its predecessors, while its successors look a lot like it.)

Few people need to know how to build up mathematics from minimal axioms. Nobody should have to struggle through Whitehead and Russell below the PhD level. We have power tools for that now. The original Boyer-Moore theorem prover from 1992 can build up constructive number theory from the axioms in under a minute.[1] I fixed it up recently to run on GNU Common Lisp and put it on Github, so it's runnable on modern machines.

There's certainly no excuse for inflicting Newton's notation for calculus on kids. It's not even clear that classical geometry proof approaches are that useful.

[1] https://github.com/John-Nagle/nqthm


In this context, your idea of history is of a "Historian's history". The article on the other hand is about "more historical context" in the progression of mathematical tools and ideas.

Worth considering is that if this practice gained widespread appeal, the quality of how it is done (by the best and probably by the majority) would improve.

In my opinion, the idea "incorporate history" is just the most obvious first attempt at more storytelling in mathematics. If these "stories" are not a part of the classroom then they may come too late or never for the majority.

I do agree that math is a tool, but it should be taught with more rhetorical tools. To trade a little into your knowledge space, think about the fondness many have for "The Little Schemer".


Bringing up Whitehead and Russell indicates you may have somewhat missed the argument here. The point is to be guided by the history, not build up from axioms. Mathematics was not historically built up from axioms! Whitehead and Russell are very recent historically, so you would certainly not start with them (or even get to it at all) by following a historical approach.

Essentially this is saying that you can use the sequence of historical discovery as a guide to the appropriate sequence of lies-to-children [0].

[0] https://en.wikipedia.org/wiki/Lie-to-children


When I learned calculus in school our teacher took an hour to describe what the problem was and how people over history had solved these problems before calculus came around. This made the topic much more interesting to me. You probably don't need detailed math history but a few minutes showing that things like calculus have not always been around and how they were discovered is pretty useful.


To generalize a bit from your comment: basic education probably shouldn't follow a strict "ontogeny recapitulates phylogeny" model (https://en.wikipedia.org/wiki/Recapitulation_theory)

Of course it's very important to study the history, but probably not in the first presentation. But at the same time, it's definitely true, for most subjects, that historical references often help to motivate less committed students and enliven the experience.

On Maudslay's role in the development of machine tools, this paper looks interesting:

FT Evans "The Maudslay Touch: Henry Maudslay, Product of the Past and Maker of the Future" http://www.tandfonline.com/doi/abs/10.1179/tns.1994.007?jour...


Maudslay's maxims:

- Get a clear notion of what you desire to accomplish, then you will probably get it.

- Keep a sharp look-out upon your materials: Get rid of every pound of material you can do without. Put yourself to the question, ‘What business has it there?’

- Avoid complexities and make everything as simple as possible.

- Remember the get-ability of parts.

“It was a pleasure to see him handle a tool of any kind, but he was quite splendid with an 18-inch file.”


On the other hand, often our tools have somewhat peculiar forms. For example, “trigonometry” as commonly taught in high schools and used in science/engineering/mathematics uses bizarre names, terrible notation conventions, and a giant pile of unmotivated formulas to memorize.

Teaching a bit of history alongside helps students understand why it takes that particular form.

The word “sine” comes from a weird Latinization of an Indian word for “half a bowstring”. Draw a picture of a circle with a vertically oriented chord (the word “chord” also implies a bowstring), and a student will have a much easier time remembering what the sine is. Likewise tangent (Latin for touching) and secant (Latin for cutting) make more sense if you think about the meanings of the words. Cosine means the sine of the complementary angle, etc.

The reason we call inverses “arcsine”, etc. is because originally these were written as quasi-sentences, and the concept of a mathematical function was not well developed. So sin⁻¹ x would be expressed as something like: arc (sin. = x). That is, the length of the arc whose sine is x. This form was cumbersome so later got shortened to arcsin x.

The origins of trigonometry are in astronomical measurement, which is why we have 360° in a circle (each degree is roughly one day of movement (365 days/year), rounded to a nearby highly composite number), and come from the Sumerian/Babylonian numerical tradition which used a base sixty number system. Hence “first minutes”, “second minutes”, “third minutes”, etc. of a degree. “Minute” (Latin for small) implies 1/60 of the larger unit.

The reason trigonometry focuses on learning a big pile of formulas is because before the era of electronic calculators, people needed to do all computations by hand, or by interpolating in pre-computed lookup tables. The goal of “trigonometry” is to take a given problem and convert it to a form with the easiest hand computation and the fewest table lookups possible, so that the mechanical work can be handed off to a team of human computers who can go through the laborious arithmetic. Memorizing trigonometry formulas is a way to cut the work done by the human computers to a small fraction of what it might take for the original problem as posed.

Trigonometry was important in science/engineering because until recently the abstract vector concept and idea of combining simple single-number parts into “complexes” were not well developed. People solved problems by breaking them into coordinates and discrete lengths and angles. Solving triangles and converting between polar/cartesian coordinates were important steps in almost any 2-dimensional problem.

If we really wanted to avoid the “ontogeny recapitulates phylogeny” model, we would scrap the current form of trigonometry (certainly not spend 4+ months exclusively focusing on it) and set the high-level ideas on a more logical foundation which was easier to learn and reason about, ditching the parts now anachronistic in an electronic computer age. We would give students harder problems to solve and fewer formulas to memorize. But that could leave students unfamiliar with the existing language commonly used in the existing literature, so to some extent we’re stuck by our history. http://geocalc.clas.asu.edu/pdf/OerstedMedalLecture.pdf


> The reason we call inverses “arcsine”, etc. is because originally these were written as quasi-sentences, and the concept of a mathematical function was not well developed. So sin⁻¹ x would be expressed as something like: arc (sin. = x).

Any pointers to resources about those "quasi-sentences"? I'm interested in language, broadly speaking, so info about how mathematical notation evolved is interesting to me. The rest of your comment is great too; I'd love to read a book about stuff like this if there is one!


The most comprehensive source about this kind of thing is Cajori’s History of Mathematical Notations. You should be able to find a used copy of both volumes for a reasonable price.


There's a Dover publishing of both volumes in one on Amazon.

https://smile.amazon.com/History-Mathematical-Notations-Dove...


That was a great read, thanks!


You don't think that a little historical perspective helps bring the subject to life? I think it's great for both the mathematically inclined students and their not so interested counterparts. Context helps those who care see the bigger picture and those who don't to at least understand why this might be important.


I work as an applied mathematician, and in my opinion trying to work through the historical context can sometimes hurt. Basically, there are two issues at play. One, everyone has a finite amount of time to learn things, so working through historical methods to add context can take away from gaining experience with flat out better, modern methods. Two, it often confuses people as to which methods to use.

I work a lot in optimization theory and solvers. In modern optimization theory, you essentially never formulate, use, or look at the Euler-Lagrange equations. However, whenever I work with engineers, they want to use these equations because they learned them in school and think that it's necessary to use them to optimize. It's not. Just discretize everything and use the KKT conditions. Hell, don't discretize everything and use the KKT conditions, because they work just fine in Banach spaces. It's not that the E-L equations are wrong, but they're covered for historical context and it cheats people out of the time they need to learn modern methodology.

As another example, when people learn iterative linear system solvers, they often start with the Jacobi method or Gauss-Seidel. However, these methods are almost never used because they're absolutely terrible compared to modern Krylov methods. Again, though, I often work with engineers who learned about them in school and want to use them when they should have been learning the differences between things like MINRES and CG and what needs to be done to precondition the system.
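
As a rough sketch of that last point (illustration only; it assumes NumPy/SciPy and uses a small 1-D Poisson system as a stand-in problem), compare a hand-rolled Jacobi iteration with an off-the-shelf Krylov solver:

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import cg

    # 1-D Poisson matrix: tridiagonal, symmetric positive definite
    n = 200
    A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    # Classical Jacobi iteration: x_{k+1} = D^(-1) * (b - (A - D) x_k)
    def jacobi(A, b, iters):
        D = A.diagonal()
        x = np.zeros_like(b)
        for _ in range(iters):
            x = (b - (A @ x - D * x)) / D
        return x

    x_j = jacobi(A, b, 1000)        # a thousand sweeps, still far from the answer
    x_cg, info = cg(A, b)           # Krylov method (conjugate gradients)
    print(np.linalg.norm(A @ x_j - b), np.linalg.norm(A @ x_cg - b))

On a system like this the Jacobi residual has barely moved after a thousand sweeps, while CG drives it down to the requested tolerance.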

For full mastery of a field, sure, it helps to know the historical record. Certainly, I think there are little historical tidbits that can keep a class more interesting and put some things in context. However, by and large, I often see people use completely terrible algorithms because they were taught in what I consider a historical manner and they did not have the time, nor perhaps the instruction, to learn more modern methods, which can often be formulated in an independent way without regard to older algorithms. Fields evolve and sometimes we realize there are just flat out better ways of doing things that don't need earlier results.


Reading the above, I wonder if the people who are writing the linear equality/inequality parts of SAT solvers for program proof know that. Internally, numerical equalities and inequalities become rows in a sparse matrix. The columns represent program variables. Solution is done with rational arithmetic. The original Oppen-Nelson prover of the 1970s solved those with a Gauss-Jordan approach, and most follow-on work through at least the 1990s was basically the same machinery.


I teach high school math, and I incorporate math history informally. I don't teach lessons specifically about math history, but I often use it as context for whatever I'm teaching.

For example, if I'm showing students how to find the area of a circle, I ask them if they know where pi comes from. Many students have no idea that people had to discover the value of pi, and how it can be used in formulas. To many students, pi and formulas are just things that have been around forever, that they have to learn in school. I draw a square around a circle and ask what the area of a square is. I draw a pentagon and a hexagon, and ask them what will happen if we keep adding sides. Students spend most of their time focusing on the practical aspects of math, but they come away with an understanding that math has been a human endeavor of discovery, and that many of the pieces fit together in beautiful and surprising ways.
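
The polygon picture can also be pushed numerically. A small sketch (Python, purely illustrative): start from an inscribed hexagon and keep doubling the number of sides, and the half-perimeter creeps up toward pi.

    import math

    # Inscribed regular polygon in a unit circle. Doubling the number of sides
    # uses the side-length recurrence s_2n = sqrt(2 - sqrt(4 - s_n^2)).
    n, s = 6, 1.0   # a hexagon inscribed in a unit circle has side length 1
    for _ in range(8):
        print(n, n * s / 2)   # half-perimeter: 3.0, 3.105..., 3.132..., -> pi
        s = math.sqrt(2 - math.sqrt(4 - s * s))
        n *= 2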

When we have a little time at the end of class, or during transitions, I pull up little snippets of math to show them. Math videos are great; for example, I love showing students that ∞ + ∞ = ∞. [0]

There are lots of little things we can do to make math more alive for students, and sharing some math history is certainly one of them.

[0] https://www.youtube.com/watch?v=faQBrAQ87l4


Short answer: It varies.

Sometimes in math the history provides helpful context and motivation, and when people leave it out it makes things confusing. I'm not going to elaborate on this because I assume most people here already agree with this!

But sometimes the modern way is so much cleaner and better that, even if you do want to learn the history, everything will probably be easier to understand if you learn the modern way first and know in advance what truth it is that they were working their way towards. Sometimes the historical way is just awful.

(When I took representation theory in college, the professor thought it would be funny to at one point show us the original definition of an irreducible character. Nobody should ever have to learn representation theory in such a way!)


https://betterexplained.com/articles/developing-your-intuiti...

This website does a really good job explaining complex calculus ideas to me. One of them is 'e', whose history and origins help you understand why it behaves the way it does. Without this context, 'e' is just some arbitrary number that you have to memorize. This alone is why I believe the history of math should definitely be taught.

Finance is another area where history should be taught, since an idea like continuously compounded interest was a relatively recent development, and its history explains why anyone wanted to compound continuously.


Kalid from BetterExplained here, glad you're enjoying it!

I often go to the history of a concept for both historical appreciation and, practically, to understand it better. We often study Calculus without looking at what Archimedes was able to do -- break shapes into smaller parts -- and without really seeing how the notion of infinitesimals comes into play. (Epsilon/delta definitions don't give the same insight.)

The natural log was discovered before e -- why would that be? Sine, cosine, trig functions -- they started as measurements of triangles, evolved into analytic definitions of their own -- why does this progression make sense?

In my mind, truly understanding a concept means you understood the path it took to the current state.


Oh wow, this is why I love Hacker News - thank you for helping me overcome my mathematical anxieties in college as I was working on my Computer Science major. Your site really helped me gain an appreciation for math, since I had terrible teachers as a kid who led me to believe that I could never 'get it'. Would love to see some more linear algebra materials as well.


That's awesome to hear, thanks (I started the site to help other students).

Yeah, my only linear algebra content is:

General Overview:

https://betterexplained.com/articles/linear-algebra-guide/

Matrix Multiplication for Programmers:

https://betterexplained.com/articles/matrix-multiplication/

Hoping to flesh it out over time.


Out of curiosity, how is the content generated? Do you write it all yourself, or are there other contributors?

I've always thought that betterexplained was a great resource, and was wondering whether there was any way I could contribute.


Just saw this now :). I write it all myself, but sometimes get pointed at resources by others.

https://aha.betterexplained.com - forum to discuss ideas. It's in the background currently but I'd like to make it into more of a public place.

If you like, shoot me an email, I'm putting together a list of people who might want to be contributors. Thanks!

https://betterexplained.com/contact


> One of them is 'e', whose history and origins help you understand why it behaves the way it does. Without this context, 'e' is just some arbitrary number that you have to memorize.

What? I have no idea about the history of `e` (though I have a good idea who "found" it) but it is incredibly important and it's pretty easy to explain why -- if you have a dollar in the bank, earning interest at an instantaneous rate of 100% per annum, but continuously compounded, `e` is the amount of money you'd have at the end of one year.

Very, very simple. When the rate of change is equal to the quantity itself, `e` is the thing you exponentiate.
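
A few lines make that concrete (a throwaway illustration): compound $1 at 100% per annum, n times a year, and watch (1 + 1/n)^n settle toward e.

    for n in (1, 2, 12, 365, 1_000_000):
        print(n, (1 + 1 / n) ** n)   # 2.0, 2.25, 2.613..., 2.7145..., 2.71828...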


Euclid: if 2^n-1 is prime, then (2^n-1) * 2^(n-1) is the sum of all its proper divisors (and clearly even).

Euler: If an even number is the sum of all its proper divisors, then it has the form (2^n-1) * 2^(n-1), where 2^n-1 is prime.

You don't need to give a full history lesson every time, but if you omit the people that came up with this, and the fact that it happened about two thousand years apart, you're needlessly ditching precious magic. Some historical gems take away little class time and make mathematics more humane.
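
The statements are also easy to poke at directly; a throwaway Python sketch, just to watch the numbers appear:

    def proper_divisor_sum(m):
        return sum(d for d in range(1, m) if m % d == 0)

    for n in range(2, 8):
        p = 2 ** n - 1
        if all(p % k for k in range(2, int(p ** 0.5) + 1)):   # is 2^n - 1 prime?
            m = p * 2 ** (n - 1)
            print(n, m, proper_divisor_sum(m) == m)   # 6, 28, 496, 8128 -> all True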


I mention this when I teach number theory - and the fact that the proof in Euclid is entirely in words. People complain about mathematical notation sometimes, but the imperfect notation we have is better than just words for everything. (We could use more pictures, however.)

Likewise, if I define normal subgroups, then simple groups - and it would be a shame, at that point, not to mention the classification of the finite simple groups.

I think from the students' reactions that these things are interesting to them. People interest people.

So I mention Euclid, and maybe I take 1 minute. The finite simple groups, maybe 2 minutes. The class is 50 minutes long. Is that "teaching the history of math"? Really, the original question is ill-posed. Let us say I have a 50-minute class in a content course. How much of that time do I have to spend talking about history before I'm "teaching the history of math"? One minute? Ten minutes? Do I have to give an assignment on history?

But a little bit of history, or culture, or a random story - I think that's part of learning the subject, broadly understood (as you said, humanely understood). Nothing but definition-theorem-proof would be pretty deadly.


I'm certainly not a real mathematician, though I was a math major in college. But I use math all the time. Meanwhile, my kids are taking math in school.

Practically everything I do with math, is done at the computer. When I derive something by hand, it's with the knowledge that I'm just doing it for nostalgia's sake. I could, and probably should, use Jupyter / Python / Maxima for everything. And I'd enjoy learning how to use even more interesting tools such as a proof assistant, even if it would be purely recreational at this point in my career.

Meanwhile, in their high school math classes, my kids will never touch a computer. Everything is done by hand, with occasional use of a graphing calculator (what an archaic device).

In a weird sense, not only are they learning history, but the entire curriculum is history.

I can't say if this is good or bad. Whatever I learned in high school must have paved the way for me to pick up more modern techniques fairly readily. Math really came alive for me when I began to learn abstract math, and was simultaneously introduced to computation at the front end of the microcomputer revolution. That's what made me want to be a math major.


When you say you should use a computer, you mean for arithmetic and calculations right?

Also calculators are hardly archaic. If I want to calculate something quickly I'll always go for my Casio FX-83GT, since I can type it much faster in there. They are archaic in the sense that the number of terms you can have can be limiting though...

I think a good grasp of arithmetic is incredibly helpful in the real world, and is something that academics (like me, as a physicist) often lack, whereas "regular people" are much better at it. I also rarely bother with change, I pay with card when I can...

I think doing anything more complicated than basic calculations on paper is pointless though


I have a calculator too, though not a graphing one, and I use it for a similar reason: The keypad is convenient. But if I need to graph something, or do a repetitive calculation, then I'll turn to Jupyter. In fact, I have it on my tablet.

I graduated from high school just as graphing calculators were introduced, so it never became part of my experience.

What seems unfortunate about graphing calculators is that their special symbiosis with K-12 math teaching limits the development of both. You can't add features to the calculator, or offer a free alternative as a phone app, without facilitating "cheating," and the textbooks can't introduce lessons requiring computational power beyond the capabilities of the calculator.

Not to mention, the TI monopoly: Every family has to shell out for one of those things.


For simple calculations when using a computer I suggest: http://speedcrunch.org/

For things with units: https://frinklang.org/ Since learning of it on hackernews a few months ago I find I use it frequently.


Not just mathematics, all disciplines should contain a healthy dose of history.

I think that explaining the thought process is even more important than the results.


There is a difference in history and thought process.

My view is that school should help kids find their own thought process. It will likely be similar to successful thought processes, but it doesn't have to be.

The most instructive thing I have ever come across was seeing that Feynman made his own notations for learning math tricks in high school. For some reason, this really made me regret not trying new ways of doing things.

There was an article recently on the importance of notation. Making your own goes a long way to understanding others.


While studying CS in my first year at university, our maths professor spent the first week teaching us about ancient number systems, and for the first few weeks taught us how to count, add and subtract using Egyptian (Base 10) and Babylonian (Base 60) numerals.

We were given some historical context for those number systems and this was the perfect way to lead into teaching us binary, octal and hexadecimal arithmetic.

For those of us who have always been interested in computers it mightn't have been that useful but for the people who "sorta fell into this course" it was a great way to learn those concepts without it just being "that binary thing"


I try to incorporate a little bit of history into my mathematics courses.

For instance, why do we rationalize denominators? That is, why is 1/sqrt(2) traditionally considered bad form? There is a historical reason for this that few students today know about. I think understanding this history puts things in context and makes the subject less about arbitrary rules.

Here's something that puts basic algebra into perspective. Every equation that we teach you to solve by hand is reducible to either a quadratic or linear equation. The rules we teach are all about transforming expressions/equations into quadratic or linear form. There is a purpose. It's not random.

Does it help? I don't know.


> rationalize denominators

Hah. Ah, yes. Rationalizing denominators is sometimes useful for calculations and sometimes counterproductive. I think numbers of the form 1/√n are almost always simplest and best understood as 1/√n rather than √n/n. (Then you have symmetry in statements like "The diagonal of a square is √2 times its side"/"The side of a square is 1/√2 times its diagonal".) Probably the same for a/√n or 1/b√n. If you're about to add it to another such number and you need a common denominator, like 1/√2 + 1/√3, then you can turn it into 3√2/6 + 2√3/6; on the other hand, if you were to multiply 1/√2 and 1/√3, it's most sensible to keep them that way and get 1/√6; if you then needed to square it, 1/6 is simpler than 6/36. I do believe that teachers' insistence on rationalizing denominators regardless of context was an instance of cargo-culting—following an arbitrary rule without understanding where it came from or when it was appropriate.

(And, incidentally, any denominator of the form "a ± b√c" should almost certainly always be rationalized, and that is a more difficult and valuable trick.)

Edit: By the way, I would be interested to know what "historical reason" you have in mind. I have a feeling that it's of the same form as "at one point, mathematicians were sort of embarrassed by the idea of negative numbers, which were obviously not real, so they would prefer forms like x + 3 = 0 over x = -3". That's the only reason I can think of for always preferring that form. Was your comment about converting things to linear and quadratic form meant to apply to this? I hope you wouldn't assert that it was always to be preferred. (If a^2 + b^2 = c^2, tell me whether "a = 1/√5, c = 1/√2" yields simpler calculations than the alternative.) But my teachers did not communicate anything so nuanced—points taken off for any final answer anywhere with square roots in the denominator—and I don't think it was communicated to them, either. (At least one of them was led to assert that √5/5 was, in itself, "simpler" than 1/√5.)

On the original subject of this thread, I might say that, whether or not it's directly passed down to students, the background of mathematics should be incorporated into what is taught to teachers, because otherwise a majority of them will be ignorant, and will come off to intelligent students as blindly following and enforcing arbitrary rules that they don't understand.


Before computers became ubiquitous and cheap, if one wanted to know the first 5 decimals of 5/sqrt(2), one had to look up the result in a table of values. But to save space and printing cost that table of values did not contain 1/sqrt(2). It did contain sqrt(2).
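
Concretely, the rationalized form turned a hard long division into easy steps:

    5/sqrt(2) = 5*sqrt(2)/2 ≈ (5 * 1.41421)/2 ≈ 3.53553

That is one table lookup, one multiplication, and a halving, instead of a long division by 1.41421.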

My comment about quadratic and linear equations was to point out that pretty much most of basic algebra is realizing that these are the only two types of equations that can be solved by hand (ignoring the 3rd and 4th degree formulas), and that what we do in algebra is study equations that can be reduced to quadratic or linear equations.


I've always assumed the "historical reason" was that when people had to do computations by hand, it was easier to find the decimal value of 1/2^(1/2) [how did you make those square root symbols?] by dividing 2^(1/2) by 2 than by dividing 1 by 2^(1/2) (assuming that you knew the decimal expansion of 2^(1/2) ). With calculators and computers, that isn't important. Most of the math teachers I know would not penalize anyone for leaving an answer as "1/2^(1/2)".

But the idea of rationalizing is important - for example, so that you can express 1/[a + b * 2^(1/2)] in the form p + q * 2^(1/2), where a, b, p, q are rational.
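
Spelled out, in the same notation:

    1/(a + b*2^(1/2)) = (a - b*2^(1/2)) / [(a + b*2^(1/2)) * (a - b*2^(1/2))]
                      = (a - b*2^(1/2)) / (a^2 - 2*b^2)

so p = a/(a^2 - 2*b^2) and q = -b/(a^2 - 2*b^2), both rational whenever a and b are.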


Square root symbols: option-v on Macs. Don't know about other operating systems. ± is option-+ (aka option-shift-=). I just did option-[every key], and option-shift-[every key], at various points in the past, and learned the symbols that I liked.


The danger is that incorporating history is very easy to do badly. I remember various science lessons at school which were made duller and more confusing by the teacher taking a historical approach.

We have better mathematical notation, explanations etc these days that make various topics far easier to understand. Try reading most old mathematics books and it's a tough experience. It's certainly also possible to have a good understanding of mathematics without really knowing much about the history e.g. I understand Galois Theory but I really don't know much at all about the history of trying to solve polynomials or indeed about Galois himself other than he went and got himself killed in a duel. In fact when I was younger, too much history might have put me off the subject altogether. I'm old enough now to appreciate history but I wasn't when I was at school.

Instead, I think a better approach is to identify parts of mathematics that teachers struggle to teach either in terms of concepts or motivation. Then one can look more carefully at those and see whether examples from history (or indeed other contexts) might help, rather than necessarily using history as the starting point. I think the article may be in accord here, but it is so easy for the message to get interpreted differently by teachers.


History is a convenient and readily available proxy for shifting content focus away from mathematical facts and toward mathematical processes such as problem-solving, proof-writing, problem-posing, abstraction, theorizing.

History's appeal, I think, lies in the concrete, engaging narratives involving the struggles, dreams, and failures of actual people going through those processes. In my experience, differences in mathematical inclination are correlated with the ability to perceive the abstract mathematical processes as engaging narratives in and of themselves.

Professional mathematicians, as far as I've witnessed them speak to one another, tend to describe, e.g., a sequence of algebraic manipulations to solve a problem as a journey taken by known facts, during which they grow, combine, and ultimately transform into new knowledge. I myself have always considered numbers, variables, and other abstract concepts to be my friends, and like with any friends I care about their relationships, their states of being, and so on.

This of course raises the question of what mathematical facts and processes should be part of the curriculum in the first place, i.e. what is it that we would like to teach better using history?


I'm skeptical that adding more history to mathematics education would increase motivation. If mathematics has a contender as the most boring and dreaded subject, it's history. For motivation, tie in pop cultural references and show applications that students actually find interesting. But historical context could be useful on other fronts. E.g., discussing the arguments from historical debates probably helps to solidify certain ideas.


Just speaking personally: for me, learning the context and history of mathematics would have made a huge difference in my motivation to learn it.

I always hated math growing up (despite being very good at it now); I didn't sit down and learn it well until I got serious about learning CS, in my 20s.

Our teachers did not contextualize why we were learning this stuff at all. It was more like: here, sit still and spend an hour drawing lines on grids and arbitrarily shuffling X's and Y's. Boring... And for me, it seemed so far removed from the reality of my day-to-day life at that age, that I just couldn't see why learning this "boring" subject was at all relevant.

In reality, math is one of the most fascinating subjects anybody could learn. But unless you know why and how it's used--there's no motivation to sit down and plug through tedious exercises about seemingly trivial subject matter. (I would have rather been playing video games. Or climbing a tree. Causing trouble. Or, yes, even reading.)

This isn't the only missing link in math education in the US -- but it's one of them.


The fact that mathematical definitions are constructions, and not something that's absolutely obvious, and also that notations have evolved and so are a bit arbitrary as well, is absolutely crucial to demystifying mathematics. I think the reason mathematicians still wonder if it's necessary is that it would only show that math is a human construction and not a heaven's revelation, and so makes math more mundane.


Absolutely, yes.

1. It's much more interesting to study something when you have a historical perspective that you can relate to later on in the course.

2. It makes you comfortable with the idea that such developments in mathematics are made by fellow humans only, and that they, too, can do such things if they put in the effort. This might sound like a small addition, but it's instrumental in developing such a sense in young kids.


It depends. There are some topics for which the historical context provides the perfect motivation. Which problems were considered important and why? How does this theory solve these concrete problems? It is much easier to motivate students to learn about abstract theories if you can clearly explain their usefulness ahead of time.

However, there are some mathematical theories, e.g. Topos theory, whose historical context is so convoluted that it's just going to confuse students. I'm speaking from personal experience here... Historically, Topos theory was developed in the context of algebraic geometry. This context is not (directly) useful to you if you want to apply these ideas to logic. If you approach the topic from order theory instead (which is an application that came much later historically!), you get a very smooth explanation where every step follows from what you did previously instead of magically teleporting in place from disparate areas of mathematics...


Absolutely not! 90% of the educated population cannot handle basic math (such as the rule of proportions). History would dilute math even more. But it would make it easier for teachers and students to 'pass'.


I agree. The fact that high school students struggle with the epsilon-delta definition of limits is not because they lack historical context. It is just far beyond what most people can grasp.

Most of my school mates had zero understanding of the mathematics we learned. They memorised how to compute the distance between a point and a sphere, a line and a sphere, a plane and a sphere, and then they'd solve one of these problems for the exam, get a good grade, and forget everything immediately afterwards. At no point did they understand what a scalar product is or why they were using it.

If you'd really want to improve mathematics education, you should focus a lot more on the basics, and teach advanced subjects only to the handful of students that are interested in them.


I don't know about American conditions, but I remember that many struggled with epsilon delta because it was their first encounter with formal logic. "For every... there exists a..." took some getting used to for them.


Yes. Probably the most inspiring feature of my first number theory course was a healthy dose of history: about Gauss, Sophie Germain, Riemann, Hardy, Ramanujan. It was helpful, psychologically, to see the process that gave rise to number theory, instead of viewing its ideas as God-given.


Not sure about mathematics. History of math is interesting, but I have never found it super enlightening when it comes to actual concepts. Although it's probably good to understand some of it, because it helps to underline the reasons why various definitions have been introduced (for example, definition of continuous function and then later topology).

However, whenever I talk to other people (contrarians) about global warming, I very much recommend Weart's The Discovery of Global Warming as the only book to read. I think in this case, understanding the history of the theory, the timeline, and how convoluted the path of discovery was helps to break the silly conspiracy theories about climate change.


I didn't develop a love for mathematics until I picked up Mathematics for the Nonmathematician at my local library. Having some historical context to what we were learning would have had a more profound impact on me during high school.


Maybe not for math, but I think a case could definitely be made for programming. If I had understood the motivation behind Unicode and its full implications, I would have spent much less time suffering in confusion about encoding errors.


Unicode is a terrible mess. I remember reading about it with enthusiasm in the nineties, but it didn't take long to realize the committee of designers simply punted on every difficult decision instead of making a stand. They dumped everything on the programmers. At this point, I think it would be less work to convert 7 billion people to using ascii than it would to enable 10 million programmers to use that pile of a standard robustly. History lessons won't help us here.


> committee of designers simply punted on every difficult decision instead of making a stand

Are you sure that it was a bad thing? If they had made a stand, we would now have to use the (arguably worse) UCS2 or UCS4 encoding instead of UTF-8 (which de facto won).


UTF-8 is a nifty compression scheme - like a Huffman encoding that doesn't require a table to implement, but the committee doesn't deserve any credit for that (unless Thompson and Pike were on the committee, which I doubt).

Besides, the reason most of us actually like UTF-8 is because it leaves ascii alone (which is all I ever use) while pretending to handle the general case. It doesn't help end users or programmers deal with any of the nonsense around multiple ways to encode glyphs (combining codes vs accented codes), deal with surrogates (yes, people encode surrogates in UTF-8), lexical sorting, or anything else. I'll bet there are dozens of incompatible ways strings are UTF-8 encoded in the real world, each of them a bug for interoperability, and all of that blame falls on Unicode being a terrible standard.

So yes, I'm sure.


Physics and chemistry are usually taught from an historical perspective and I found this to be very helpful, for it fires the imagination. We humans seem to love learning by stories. The hero strives to solve an important problem, fails a few times and eventually succeeds. Learning in this fashion seems to enable the learner to better integrate new knowledge into mental representations that are more likely to be recalled at the appropriate time when solving our own problems. Math is unfortunately not taught in this fashion. IMHO this is why so many kids have the same complaint: "when am I ever going to use this?"


I really liked the third bullet:

Historical problems can help develop students’ mathematical thinking.

Seems really useful to know the problems humanity was struggling with at the time a tool or its notation was developed. Trigonometric functions were nonsense to me until I learned more about astronomy. Once I saw all the problems you could solve with them, everything clicked into place.


As an expansion of the topic, I've always had a problem with a variety of subjects being taught without context. Even history itself is taught without context.

For example, in the US, when you learn US history it is done in almost complete isolation of what might have been going on elsewhere on the planet. You get a bit of what was going on in England but that's it.

Many topics, from math to physics, chemistry, geography and even history would be so much more interesting if they were taught with an underlying foundation of relevant world history to make them more interesting and contextual at the same time.

Even woodworking benefits from understanding how and why people were using certain designs and joints at different times. How did the nail come about? The screw? Various tools, etc.

And one of my favorites, the number zero throughout history.


“Need”? Depends on the level, and maybe not. It can certainly be helpful though. I particularly recommend in primary school teaching about the history of counting boards / abacuses, the history of trigonometry, and most importantly the history of logarithms (every high school student should learn rudimentary use of a slide rule). All three will help clarify why particular tools and notations became standard, and give insight into pre-electronic-computer science and engineering more generally.

For anyone interested in the history of mathematics per se at the undergraduate level, I recommend Stillwell’s book, https://amzn.com/144196052X

More important than “mathematical history” is to teach students some measure of physics/engineering alongside the mathematics, to help motivate concepts.


no. the math curriculum is too crowded already. i liked reading dirk struik, but the selfishness of mathematicians and the patrons they served is only entertainment in the end. want to update the curriculum? then replace some geometry and calculus with statistics.


I strongly believe they should.

I've learned much more about math by looking at its history. Then again, I've simultaneously had to learn a lot of applied math quickly, so I have skipped the historical perspective on many things too.

I think the way I learn best is simply try to apply a concept first and struggle until I become deeply frustrated. Then I backtrack and try to learn the fundamentals that influence the most current tools and methods of application.

Part of understanding these fundamentals is understanding the history of an idea and how it evolved.


Not sure the history in itself is useful but starting from the beginning is probably a good idea.

A solid grounding in logic should come first. Then we can skip quite a few of the blind alleys because there simply isn't time to cover every byway from Pythagoras onwards.

But always the foundations must be solidly built or we end up with people who can crank the handle on the algorithm they have been taught but can't understand what to do when it doesn't apply.


I never truly understood the quadratic formula until I learned about the history of the equation and its discoverer. History of Mathematics should be a fundamental course.


I think the history of anything is important. It helps the student learn why things are significant, what they changed, and how things were before it occurred.


Absolutely yes! Without history students are given an often misleading view of how math constructs evolve. Not only does historical context help some students learn and accept certain math concepts (helping answer the 'why?'), but it also provides insight into how math research is actually carried out.


I think the history is important for expectation management alone. It took a world of very smart people many hundreds of years to get to where we are today and there are still many open questions. It's probably ok that you as a student don't understand it all at once.


Yes, or at least within the context of discovery. Math never clicked with me until I approached it this way.


Does anyone have a good ref that reviews the history of dividing by zero? The current way of dealing with it just seems so ad hoc and dissatisfying... there has to be a long controversial history but searching around I find very little that isn't repeating the same stuff.


You mean that it's undefined? It actually makes perfect sense that it is left this way.

Consider addition and subtraction as functions:

    f(x) = x + 42
    g(x) = x - 42
Now, realize that these functions are inverses of each other. Each one undoes the operation of the other:

    f(g(x)) = (x - 42) + 42 = x
    g(f(x)) = (x + 42) - 42 = x

If you change 42 to any other real number, this relationship still holds. This is because f and g are functions that relate a number uniquely to another number. The inverse, then, is just flipping that relationship (7 -> 49 has the inverse 49 -> 7).

You can define similar functions for multiplication and division, and things will work out in the same fashion until you try to replace 42 with 0. What happens then is that the multiplication relationship is no longer unique. For

    f(x) = 0 * x
f(x) is 0 for any choice of x! This is a problem when we try to flip the relationship as before to invert the operation. Since f(7) = 0 and f(49) = 0, should g(0) be 7 or 49?

Moreover, the fact that the right- and left-hand limits of 1/x as x -> 0 do not agree (one goes to +infinity, the other to -infinity) means that there's no obvious way to "plug the hole", so to speak, and define 1/0 as some value.


Well you gave the answer. This was the way my maths teacher taught us about why 0 is a curious number.


I'm a mathematician and I'm not aware that there was a "long controversial history". (I'll ask a colleague who teaches history of math and see if she knows anything.) I have a lot of students who plan to teach, and I tell them that their students (and other people) will probably ask why "you can't divide by zero". It's important to explain what is meant by "can't". (Does it mean no one knows how to do it? Or that some mathematical authorities issued a decree? Of course, it's nothing like that. Those kinds of misunderstandings can arise because people don't understand how math is done by mathematicians, because it is done in ways that are different from the way we do things in everyday life.)

Anyway, the explanation is simple and quick enough that I can do it in any class where I'm discussing number systems (e.g. linear algebra, number theory, abstract algebra). The "tl;dr" is that if you want to "divide by 0" you will have to give up something else, and none of the things you have to choose from are things you'd want to give up.

In more detail, suppose you could "divide by 0". Division is defined as multiplying by the multiplicative inverse. (If you don't like that definition, you have to explain what you'll substitute as the definition of division - and note that, mathematicians want a definition that extends smoothly to "number systems" that may be very unfamiliar.)

So saying you can divide by 0 is the same as saying that 0 has a multiplicative inverse - call it 0^(-1). By definition of multiplicative inverse, 0 * 0^(-1) = 1.

On the other hand, in any reasonable number system (specifically, in any ring), 0 * x = 0 for any x. The proof is easy - it uses the definition of "0", the definition of additive inverse, the distributive law, and associativity of addition. (Try it!) Therefore, 0 * 0^(-1) = 0, so 0 = 1.

If this doesn't seem enough of a contradiction, just note that it follows from this (and the definition of "1") that x = 0 for all x. So the only "number" in the whole world is 0. Well, that makes life simple, but not very interesting.
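
For anyone who wants those two steps written out, here is one way to do it (a sketch, in LaTeX, assuming a ring with identity and using only the axioms just mentioned):

    0 \cdot x = (0 + 0) \cdot x = 0 \cdot x + 0 \cdot x        % definition of 0, distributive law
    0 = 0 \cdot x + (-(0 \cdot x))                             % definition of additive inverse
      = (0 \cdot x + 0 \cdot x) + (-(0 \cdot x))               % substitute the line above
      = 0 \cdot x + (0 \cdot x + (-(0 \cdot x))) = 0 \cdot x   % associativity, then the inverse again

and, once 0 \cdot 0^{-1} = 1 has forced 0 = 1,

    x = x \cdot 1 = x \cdot 0 = 0 \quad \text{for every } x.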

So: If you want to "divide by 0", you're going to have to give up one of those algebraic axioms I mentioned. Which one would you give up? Associativity of addition? The distributive law?

I think it's really important to explain (particularly to kids learning math) that math is not a bunch of arbitrary rules. "Not dividing by 0" is not an arbitrary rule - it's a matter of making a trade-off.


Wow, this was a really interesting explanation. I think this is perfectly appropriate for junior school, while the explanation about non-converging limits offered above by someone is well suited to college students.


>So: If you want to "divide by 0", you're going to have to give up one of those algebraic axioms I mentioned. Which one would you give up? Associativity of addition? The distributive law?

Just to expand on this, mathematicians have done this in several ways. For example, the projectively extended real line and the Riemann sphere add "∞" to the real or complex numbers respectively, such that 1/0 = ∞. Note that 0/0 remains undefined, ∞ = -∞, 0 * ∞ is undefined, and ∞ + ∞ is undefined (I am probably missing other "oddities" of these constructions).

There is also a more general way of defining division by 0 that avoids undefined cases: wheels.

As bikenaga mentions, the standard definition of a "reasonable" number system that involves addition and multiplication is a ring. In general, division by anything is not defined because elements are not guaranteed to have multiplicative inverses [0]. For example, the integers form a ring, but 5/3 is not defined in the integers.

If you add the following two properties to a ring, you get an integral domain: 1) commutativity: xy = yx, and 2) if xy = 0, then x = 0 or y = 0. Again, the integers are an example. For a ring that is not an integral domain, consider the integers mod 4, where 2 * 2 = 0.

Once you have an integral domain, there is a standard way of defining division by any non-zero element: fractions. Informally, we say that x^(-1) is the fraction 1/x, and that a/b = x/y iff ay = bx. Addition and multiplication of fractions are defined as you learned in grade school. [1] As you would expect, applying this approach to the integers gives you the rational numbers. More formally, we define the fraction x/y as the ordered pair (x,y).
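
To make the fraction construction concrete, here is a rough Python sketch over the integers (the class name Frac is purely illustrative; Python's own fractions.Fraction does this properly):

    # A minimal sketch of the field-of-fractions construction over the integers.
    # A pair (a, b) with b != 0 stands for the fraction a/b.
    class Frac:
        def __init__(self, a, b):
            if b == 0:
                raise ValueError("denominator must be non-zero")
            self.a, self.b = a, b

        def __eq__(self, other):
            # a/b == x/y  iff  a*y == b*x
            return self.a * other.b == self.b * other.a

        def __add__(self, other):
            # a/b + x/y = (a*y + b*x) / (b*y)
            return Frac(self.a * other.b + self.b * other.a, self.b * other.b)

        def __mul__(self, other):
            # (a/b) * (x/y) = (a*x) / (b*y)
            return Frac(self.a * other.a, self.b * other.b)

        def inverse(self):
            # 1/(a/b) = b/a, only defined for non-zero fractions
            return Frac(self.b, self.a)

    # 1/2 + 1/3 equals 5/6, and 2/4 equals 1/2 even though the pairs differ.
    assert Frac(1, 2) + Frac(1, 3) == Frac(5, 6)
    assert Frac(2, 4) == Frac(1, 2)
    # Frac(0, 1).inverse() raises, which is exactly the "no dividing by 0" rule.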

To define a wheel, we modify the above construction slightly. Specifically, we say that a/b = x/y iff there exist s, s' such that (sa,sb) = (s'x,s'y), i.e. sa/sb = s'x/s'y [2].

Addition and multiplication remain unchanged, but we define a new operation for taking inverses: /(x,y) = (y,x). That is to say that, to take the "inverse" of an element, you swap the numerator and denominator.

In this system, we define 0 = (0,1) = 0/1 and 1 = (1,1) = 1/1.

Division by 0 is now a simple matter: 0/0 = (0,1)/(0,1) = (0,1)(1,0) = (0,0)

Notice that, under this construction, (0,0) is not the zero element; (0,1) is. Further, the equation

(0,0) + x = (0,1) has no solution.

If you keep poking at this structure, I am sure that you can find other bad things that happen.

[0] In fact, depending on who you ask, a ring is not even required to have a multiplicative identity (e.g. 1).

[1] This construction gives you a structure known as a field, which is, in my opinion, the point where most non-mathematicians would start to consider the algebraic structure a reasonable number system.

[2] Under this construction, we can also loosen the requirements of the underlying ring. Specifically, any commutative ring will do. We do not require that xy=0 implies x=0 or y=0.
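
And here is a rough Python sketch of the wheel itself; the equality test is my own unwinding of the s, s' condition for the integer case, so treat it as illustrative rather than definitive:

    # A rough sketch of a wheel of fractions over the integers.
    # Elements are arbitrary ordered pairs (a, b); b = 0 is allowed.
    class Wheel:
        def __init__(self, a, b):
            self.a, self.b = a, b

        def __eq__(self, other):
            # (a,b) ~ (x,y) iff non-zero s, s' exist with (s*a, s*b) == (s'*x, s'*y).
            # Over the integers this works out to cross-multiplication plus
            # matching patterns of zeros.
            a, b, x, y = self.a, self.b, other.a, other.b
            return a * y == b * x and (a == 0) == (x == 0) and (b == 0) == (y == 0)

        def __add__(self, other):
            # Same formula as ordinary fractions: a/b + x/y = (a*y + b*x)/(b*y)
            return Wheel(self.a * other.b + self.b * other.a, self.b * other.b)

        def __mul__(self, other):
            # (a/b) * (x/y) = (a*x)/(b*y)
            return Wheel(self.a * other.a, self.b * other.b)

        def inv(self):
            # The wheel's "/" operation just swaps the components.
            return Wheel(self.b, self.a)

    ZERO, ONE = Wheel(0, 1), Wheel(1, 1)

    # 0/0 = (0,1) * /(0,1) = (0,1) * (1,0) = (0,0), which is not the zero element.
    bottom = ZERO * ZERO.inv()
    assert bottom == Wheel(0, 0) and bottom != ZERO

    # (0,0) + x = (0,0) for every x, so (0,0) + x = (0,1) has no solution.
    assert bottom + Wheel(7, 3) == bottom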


The stuff on wheels is interesting! I found the paper by Carlstrom (http://www2.math.su.se/reports/2001/11/) and it alludes to applications, though I didn't see any specifics about applications (to computer science, at least). Thank you for the pointer.


If you avoid division by 0, wheels revert back to normal fractions [0]. This should mean that you can use them as a drop-in replacement for a rational number datatype. Doing so should allow you to defer checking for division by zero, possibly moving the check outside of a tight loop. Granted, this should also be doable as an optimization of normal rational arithmetic.

The only use cases I can think of for wheels amount to them being a principled way of adding NaN to the number system. Of course, if history is anything to go by, a hundred years from now someone may look back on this comment the same way we look back on people calling sqrt(-1) "imaginary".

[0] At least under the explicit construction presented.
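
For what it's worth, IEEE 754 floating point already does something in this spirit: it adjoins +inf, -inf and NaN to the reals, and the combinations with no sensible value come out as NaN. A quick check in Python (plain Python raises ZeroDivisionError for 1.0/0.0, so the special values are built from math.inf here):

    import math

    inf = math.inf
    print(inf - inf)             # nan
    print(0.0 * inf)             # nan
    print(inf + inf)             # inf  (IEEE keeps this one defined)
    print(math.nan == math.nan)  # False: NaN does not even equal itself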


This conversation may have moved well above my head, but I mean the starting point: 0/0 = "undefined". I don't see why that should be the case.

  x   = 0/0
  x*0 = 0
  
  1*0 = 0
  2*0 = 0
  ...
From this we see that it isn't really that x is undefinable; rather, it can be any value at all. There is apparently no issue with an equation having two equally valid solutions (e.g. the quadratic formula), so at what point are there too many?


I think I see what you're asking. I think the answer is: Before we start talking, you have to tell me what all the rules are. So when you say "start with 0/0 = 'undefined'", what is the ambient number system? The proof I gave earlier showed the number system can't be a ring - so it's not the real numbers, the integers, the rationals ... at least not by the standard definitions of those number systems. When you do math, you don't make up the rules as you go.

As far as the example you gave goes, you start by assuming 0/0 is defined. But if you're trying to show 0/0 is defined you're assuming what you want to prove. The logic isn't correct.

Note that giving 0/0 the name "x" doesn't do anything. Simply naming something doesn't establish any fact. It just makes "x" shorthand for "0/0".

Anyway, observing that 0/0 * 0 = 0 but also 1 * 0 = 0, 2 * 0 = 0, and so on doesn't establish any necessary connection between "0/0" and 1, 2, ... You wouldn't conclude from "1 * 0 = 0" and "2 * 0 = 0" that "1 = 2", or that "1 could be 2", for instance. So nothing has happened. But what could happen? Remember that you started by assuming that "0/0" was defined. "Assume" in math means you've assumed it's true. In that case, you're done, right? Its "definedness" isn't probabilistic. And if starting with that assumption you did find out something true, it doesn't follow that the assumption is "independently" true. (The truth of "if P, then Q" and the truth of "Q" do not together imply the truth of "P".)

You might want to look at the post on wheels higher up this thread. It shows what you could do - namely, use a different set of rules.

This may be more than you wanted to know ...


Zero: The Biography of a Dangerous Idea by Charles Seife is one of the most engaging books I've read about mathematics. In its Amazon reviews, I also see The Nothing That Is: A Natural History of Zero by Robert Kaplan recommended.


I'm in the same boat of only finding repeating stuff, but for others' reference, here is the repeating stuff:

The earliest known attempt at dealing with division by zero comes from Brahmagupta in 628 (in the Brāhmasphuṭasiddhānta). His definition is as follows (non-division-by-zero axioms omitted):

1) A non-zero number divided by 0 is a fraction with zero as the denominator.

2) Zero divided by a non-zero number is a fraction with zero as the numerator and the finite quantity as the denominator.

3) Zero divided by zero is zero.


Meh. I feel like American math education is seriously lacking, and I doubt that adding stuff to the common core would do any good.


Can anyone add [2003]?


absolutely


What's next?

Do Doctors Need to be Taught the History of Medicine?


Certainly yes (within limits), and, as many people said above, that applies to all fields. Sometimes knowing how, by whom, or when something was discovered or entered into practice gives some perspective on what you are doing today.


What's next?

Do math education specialists need to be taught the history of the debate over whether mathematics education should include the history of mathematics?


The best class I took in college (a million years ago) was "Men of Mathematics". Of course, today that course would be impossible to teach, as the SJWs would protest the math dept, etc. The text used for the course was a book of the same name by E. T. Bell.



