Habits of highly mathematical people (medium.com/jeremyjkun)
429 points by CarolineW on July 30, 2016 | 110 comments



Couldn't this also be titled "Habits of highly rational people", with mathematics simply being one application?

I'm not a mathematician and was miserable at math in school, but I apply these habits in the business world every day. They help me cut through a lot of crap that comes from other people's sloppy/lazy thinking.

>Anyone who has gone through an undergraduate math education has known a person (or been that person) to regularly point out that X statement is not precisely true in the very special case of Y that nobody intended to include as part of the discussion in the first place. It takes a lot of social maturity beyond the bare mathematical discourse to understand when this is appropriate and when it’s just annoying.

I don't disagree, but would argue that the far more common problem is people - not just mathematicians, mind you - not considering definitions enough, which ultimately leads to confusion and/or misunderstandings, and consequently additional, unnecessary cycles spent in discussion about "what do you really mean?"


(I studied math and computer science, and walked away from a math Ph.D. program before starting it.)

>I'm not a mathematician and was miserable at math in school, but I apply these habits in the business world every day. They help me cut through a lot of crap that comes from other people's sloppy/lazy thinking.

That's possible. The OP does not claim that the traits he listed apply if and only if you are a mathematician. It's certainly possible that you hone these skills without being mathematically inclined.

I do contest, however, that these habits apply to all "highly" rational people. I know plenty of people who are rational (or so they are deemed to be by their friends and colleagues) yet decidedly do not possess some of the listed habits, especially "scaling the ladder of abstraction" and "being wrong often and admitting it".

On "being wrong often and admitting it," based on my experience, it's not that you become modest as a result of studying math. By studying math, you realize how prone your mind is to arriving at a faulty, unproven conclusion, and you learn to be more skeptical of your own thinking and become more at ease with being "wrong", allowing yourself to let your intellectual guard down.


On "being wrong often and admitting it," based on my experience, it's not that you become modest as a result of studying math. By studying math, you realize how prone your mind is to arriving at a faulty, unproven conclusion, and you learn to be more skeptical of your own thinking and become more at ease with being "wrong", allowing yourself to let your intellectual guard down. Indeed...math if anything promotes mental hygiene.


My small issue is that he attacks empirical science at its periphery (statistical muckery, mostly going on in politicized debates, pharma, etc.) rather than its core. My experience in biology is quite similar to what he describes, with people more-or-less dispassionately seeking the truth. It's a 'messier' field, so perhaps there's more room for personal pride, but I have my doubts about that. Anyone who holds hypotheses dear is a poor mathematician or biologist, just the same.

Basically, it's a basic vs. applied science critique; basic science is quite brutal, and you will be wrong very often. Usually there's little at stake regarding which conclusion you come to, so long as you're investigating something interesting. Someone trying to create effective medicine is in a very different situation.

> there’s very little fame outside of the immediate group of people you’re talking to

This is very true in most basic science fields. There are about 4 or 5 other labs in the world that are familiar with the details of my sub-field, and this is true for most of my colleagues. Mathematics is certainly not privileged in this sense.


I find it ironic that the OP's claim is easily ruled out by a basic mathematical argument.


I do think there is something unique about training in the 'hard' analytic disciplines (math, computer science, physics, etc.). In particular, one learns to be wrong often, and that the result of one's hard-fought intellectual effort is often objectively incorrect. At the very least, math accelerates the process of discovering this, and so is a very good teacher of intellectual humility.

The fact that you jumped to 'other people's sloppy thinking' as your first example is somewhat telling. The point is that your whole perspective can shift from thinking that you're smarter than other people and need to correct their mistakes, to understanding how to collaborate with other smart people over difficult problems while everyone is making and correcting mistakes.


>The fact that you jumped to 'other people's sloppy thinking' as your first example is somewhat telling.

You'll have to take my word for it but I'm under no illusion as to my smartness relative to others. More often than not I'm the dumbest person in the room. I learned the hard way that I have plenty of blind spots and I'm the first person I should be checking for 'sloppy thinking'.


That, in my experience, is a huge differentiating factor between analytical people and everyone else. Most people look on admitting failure as a great horribleness to be avoided at all costs. At my current corporate job, I've often been looked at as insane for being open about such things.


The "hard analytical disciplines" require understanding the fundamentals of whatever they work with, in order to process those fundamentals together into a useful effect.

Getting meaning across via words is fundamentally the same process.


"Getting meaning across via words is fundamentally the same process."

-- Here, I think you've distilled the basic assumption of the OP, and I would claim someone with enough exposure to mathematics will know this claim is wrong.

Informal arguments may indeed be carefully and even exactly argued, but they nearly always involve implicit emotional appeals, appeals to unstated common assumptions and definitions, and so forth.

Mathematics is different - the arguments are absolute: given the assumptions, it can be demonstrated that the conclusions follow, even to the point of the proofs being mechanically verifiable. The reason someone making mathematical assertions often has to admit they're wrong is that the arguments have to follow a determined series of steps, whereas someone with a strongly held, strongly articulated belief system about the ordinary world can have a plausible answer to every objection, with some portion of those answers being appeals to commonly held beliefs in society.


Having done graduate work in both computer science and philosophy (and having a healthy respect for both), I agree with this assessment. It's part of what I was trying to articulate. Although some of the philosophers (particularly the historians of philosophy) I've met have been among the most intelligent, rational, and well-spoken people I know, I think there's a keen difference between the critical thinking abilities of the computer science undergraduates vs. the philosophy undergraduates that I've seen graduate from a university/college, and I chalk a lot of this up to learning to be wrong very quickly in formal disciplines.


> I don't disagree, but would argue that the far more common problem is people - not just mathematicians, mind you - not considering definitions enough, which ultimately leads to confusion and/or misunderstandings, and consequently additional, unnecessary cycles spent in discussion about "what do you really mean?"

It's all a difference in context and environment. When talking through the specification of a product you're building and the requirements for it, nitpicking the definitions will save you pain and money.

On the other hand, nitpicking over whether someone's analogy covers exactly 100% of the cases of the issue, while ignoring the point of the conversation, will just make you look like an arsehole in front of a group of people. And the latter is still painfully common on HN as well. As the article says - understanding when some detail is important for the current conversation is an important social skill.


Everyone on this thread, from the original article to you, seems to agree that sometimes nitpicking is good and sometimes it isn't. But for every minute that has been wasted ignoring (or even responding to) pedants, days have been lost by people undertaking projects that they never bothered to define.

And if you have ever worked with customers to elicit requirements, then you will know that the human race has far more fuzzy thinkers than it does logical pedants. (And of course some of the worst nitpickers are fuzzy thinkers too).


> I'm not a mathematician and was miserable at math in school

Maths in school is not really like the maths that mathematicians do.


True, but if you're not good at school math, I believe there is little hope you can be good at the style of math that mathematicians do...


This is not true at all. I know many people in mathematics who are below average when it comes to arithmetic, some to the point of suffering from dyscalculia. I myself am probably in at least the bottom quartile of arithmetic ability, and there have been quite a few historic examples of top mathematicians with a similar "problem", Hilbert being a common example.

There's a huge difference between reasoning about abstract statements and arithmetic calculation, even when it comes to abstract statements about arithmetic. I'm not saying there's no correlation, but the conditional probability in the direction of bad at arithmetic => bad at mathematical reasoning is probably much lower than you'd expect.

At the very least, I'd encourage those that are bad at arithmetic to not assume they will be bad at mathematics or data science. If I had, I'd be much worse off and less happy than I am now with math in my life :)


Hilbert's supposed dyscalculia is a myth... unless you have a source we haven't seen?

It's similar to the Grothendieck story -- a mathematician making a simple mistake -- that gets exaggerated to "mathematicians can't do arithmetic." https://en.wikipedia.org/wiki/57_(number)

Another common myth was that Einstein was bad at math, because he (and others) had trouble in school because the material was too easy and he quarreled with teachers, and "failed" tests for early college admissions administered in a foreign language. https://www.washingtonpost.com/news/answer-sheet/wp/2016/02/...

When a mathematician confesses being bad at calculation, they don't mean "couldn't pass high school", they mean compared to their elite peers.


You're right, it looks like that is a myth (I remember seeing it on this math stackexchange question[1], which is actually a great example of some mathematicians' opinions on the matter).

There's a wide margin between "couldn't pass high school" and "had difficulty with math in high school". And plenty of those I've known who confessed were quite explicit that that was not what they meant (e.g. by comparing themselves unfavorably with their children).

[1]: http://math.stackexchange.com/questions/551074/are-all-mathe...


Math is taught so poorly that I am rather skeptical that this generalizes. I was fairly terrible at math (although I scored well on tests and was able to struggle through it) until I was in a junior-year course that was used for discrete math, which went into proofs, and from that point on, it was simply no longer a problem.

But because I was specialized towards mathematics, I had been ... "inconvenienced" in favor of "easier" courses for the non-specialists. I think there's a severely unexamined premise in that state of affairs - but I have to ask - would others be as math-phobic if they really knew how it worked?

The head of the math/sci department literally said "Calculus keeps the dumb people outta med school" - and he was only partly kidding.

The question is - is rigor, even partial, informal rigor (if that even makes any sense) of value to everyone or not? It's obviously painful. But it really helps with the goal of not being a sucker.


I suspect the relationship between school maths and what mathematicians do is a little bit like the relationship between touch typing and programming, in the sense they're separate skills but being good at one of those skills reduces friction with the other.


I'm not sure about this one. I studied CS and was more or less forced to learn university-level maths, and I found that it differs both on the pedagogical level (different ways to motivate students - I preferred the ones at university) and on the logical level: for the first time in my life, the terms we operated on were actually EXPLAINED to me.

University maths is much more logical than the school voodoo I had.


I didn't quite have this experience in my engineering math courses. Admittedly, the other math courses were much better about this.


I wouldn't bet on that, they are really different.

Some of the most blatantly off-putting characteristics of school math, like mindless repetition, are antagonical to real math.


Orthogonal?


Sorry. I meant antagonistic.


High school math (often) is all about memorizing a few tricks from the analysis of real numbers and maybe elementary number theory/combinatorics. I think there'd be more mathematicians if people got to see what math is like in actuality. I fell in love with math when I discovered that it's an actual body of knowledge, complete with definitions, concepts and proofs, just like any other discipline. Compared to that, high school math could (loosely) be said to be a bunch of party tricks with no rhyme or reason to them.


Unless you have natural intuition or memorization skills that make high school math easy for you, it's a negative experience for many kids, due to the combination of a bad curriculum and teachers who are not skilled pedagogically. School math is the number one reason why so many kids learn to dislike it; programming is the field that made me love and appreciate math for what it really is, while I didn't like it much in school. With better teachers and a better curriculum, I'm sure I would have appreciated it much earlier.


Spot on. The described qualities would also apply to philosophers, logicians, etc.

I think a broader term would be "habits of highly analytical people".


I strongly disagree with the claim that philosophers fit this description. As a mathematician, it often seems to me that 90% of philosophy is spent arguing about things with extremely loose definitions. As a result, you can argue from the same starting point and come to completely different conclusions (as separate philosophers often do), because the starting point was already self-contradictory for some interpretations of the wording. As soon as you clearly define what you are talking about the problem usually becomes trivial.

For example, many of the moral dilemmas that get tossed around as interesting become trivial to answer once you give a mathematically rigorous definition to words like "good", "moral", "utilitarian" or whatever.


> For example, many of the moral dilemmas that get tossed around as interesting become trivial to answer once you give a mathematically rigorous definition to words like "good", "moral", "utilitarian" or whatever.

Yes, but while the conclusions become trivial, actually coming up with those rigorous definitions is not.

I'm not going to provide examples because they are likely to start a flamewar. But if you search for words like "normative" or "core principle" in my comment history, you can see how difficult it is to actually come up with those rigorous definitions. The problem is that when you do, you either come up with unsatisfying arbitrary definitions, or else you wind up implying peripherally related conclusions you probably dislike.

Yes, I am definitely guilty of being "highly mathematical" (and also highly economical and philosophical), and it pushes me to recognize incoherence in mainstream "thought".


Even in Math, coming up with good definitions is often the hard part. For example, modern point-set topology is both extremely powerful and consists of theorems that seem obvious to prove, but that's because the definition of a topology is so well-chosen.


I like how Conor McBride put it:

> I usually take the presence of any proofs in my code as the cue to question my definitions.

http://stackoverflow.com/a/13241158/46571


I'm a philosopher and agree. There are some philosophers who make sufficiently precise definitions and use reasonable formal methods - though certainly not all formal philosophy is useful. They are in a small minority. I'd even give the same ad hoc figures: it's about 90% trash, 10% worth reading, and then a few gems, so you have to be careful which authors you read and what topics you study in philosophy.


One of the problems of philosophical discourse, imo, is that it seems to be impossible to give mathematically rigorous definitions of concepts like 'good' or 'moral' or even 'knowledge' that somebody won't be able to disagree with by counter example.

What you sometimes see among philosophers is that even definitions that are the result of many iterations are still treated more like rules of thumb than precise definitions. Or they at least acknowledge that their definition is probably flawed, and proceed carefully.

Some proceed as if they are doing math.


> that somebody won't be able to disagree with by counter example

I really don't understand this. You give a counterexample to a theorem; you don't give one to a definition. If someone disagrees with the definition and wants to use a different one, then they are talking about a different thing (even if they want to use the same English word to refer to each of them), and their conclusions cannot be compared in any meaningful way.

Imo, you can't "proceed carefully" with a definition that is open to interpretation. If you could do so safely, then you know enough about how people could interpret your definition to form a more rigorous definition.


  >> that somebody won't be able to
  >> disagree with by counter example

  > I really don't understand this.
  > You give a counterexample to a
  > theorem; you don't give one to
  > a definition.
In a sense you do. Often when we are giving a definition we are intending to capture a particular idea. Sometimes the definition we give captures too little, or too much. If you find that out early enough then you can change your definition to better match what you intend.

The classic example is "connected" from topology. Speaking very loosely, a set is "disconnected" if there is a "disconnection", which is a separation of the set into two pieces which are contained in disjoint open sets. Basically, the set is disconnected if it's made up of two (or more) pieces that can be divided by a "surface". A set is "connected" if there is no disconnection.

The problem is that there are sets we want to think of as not connected, and yet which satisfy the definition of connected as given above. Here's an example:

  X = { (x,sin(1/x)) : x in R, x>0 } u { (0,y) : -1 < y < 1 }
So the net result is that we have the two definitions: "connected" and "pathwise connected."

If that example had been thought of earlier, it's possible that the definition of "connected" might have been fixed earlier. So in a sense, X is a counter-example to the definition of connected.
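
For concreteness, here is the other definition in the same loose style (a standard formulation, not the only one): a set X is "pathwise connected" if any two of its points can be joined by a continuous path inside X, i.e.

  for all a, b in X there is a continuous f : [0,1] -> X
  with f(0) = a and f(1) = b

The set X above is connected but not pathwise connected: no continuous path can get from the oscillating curve onto the vertical segment.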

Further reading:

http://planning.cs.uiuc.edu/node140.html

https://en.wikipedia.org/wiki/Connected_space

https://en.wikipedia.org/wiki/Connected_space#Path_connected...


Define "loose". Edit: Jokes aside, nothing is really "strictly-defined" as we would like it to be since definitions (even the most rudimentary ones) are based on -what we might call- statistical sets of observations. We come to the conclusion that an apple is a round object because we've seen an apple from multiple angles. Math accepts axioms, but those axioms are based on accepting definitions that are rudimentary observations. What is "1"? The idea of "1" can only be understood in terms of experience - hardly "rigorously defined".


> definitions (even the most rudimentary ones) are based on -what we might call- statistical sets of observations

Mathematical definitions are usually abstractions of concrete observations, but they aren't the observations themselves. This is why coming up with a mathematical definition often requires more work than coming up with any other kind of definition.

> What is "1"? The idea of "1" can only be understood in terms of experience

1 is very rigorously defined: As a natural number, it's the successor of 0. As an integer, rational, real or complex number, it's the result of mapping the natural number 1 into Z, Q, R or C in a suitable way (compatible with the semiring structure of N).
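
To make "successor" concrete, a minimal sketch of the usual set-theoretic (von Neumann) construction, which is one standard way to model the natural numbers:

  0 := {}              (the empty set)
  S(n) := n u {n}      (successor: adjoin n to itself)
  1 := S(0) = {0}
  2 := S(1) = {0, 1}

In this model, "successor" is an explicit set operation rather than an undefined primitive.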


> Mathematical definitions are usually abstractions of concrete observations, but they aren't the observations themselves. This is why coming up with a mathematical definition often requires more work than coming up with any other kind of definition. ... 1 is very rigorously defined: As a natural number, it's the successor of 0. As an integer, rational, real or complex number, it's the result of mapping the natural number 1 into Z, Q, R or C in a suitable way (compatible with the semiring structure of N).

That's all nice and dandy for mathematicians, but it is still meaningless (from an overall perspective) because ultimately you end up with recursive definitions. "The successor" of "0" means nothing because "successor" isn't defined, unless you define it via dimensions (or something else). Once you try that, you have to define dimension, which leads to defining counting, which leads back to defining 1. Appealing to experience is the only way to end the cycle.

That said, it's perfectly fine to talk about a system within itself, but its applicability to anything real (which is what I'm concerned with - my apologies if that's not where you're going) is limited to whatever can be described in terms of reality.


> "The successor" of "0" means nothing because "successor" isn't defined

The existence of a successor function is postulated, because it's an axiom of the theory of natural numbers. It's up to individual models of the theory to define this function concretely.


Philosophers? Using precise definitions? Maybe some philosophers do... But not most.


Oh, they use them. The problem is that each philosopher has a different precise definition that they feel is natural and matches what we "really mean" when we ask deep questions. Since most arguments and conclusions are phrased fuzzily, you end up with the following situation:

Fuzzy premise -> precise logic -> fuzzy conclusion

where each of those arrows leaves so much room for interpretation that you can build literally an entire subfield by arguing back and forth about what the most reasonable mapping from fuzzy to precise and back again might be, even if everyone agrees that the manipulations in between are rigorously correct.


Styles vary between schools, and the one called "analytic philosophy" is big.

But in general, when I read philosophers discussing social matters, I feel they are being too rigorous - trying to cover all bases when the fuzzy nature of the subject is always going to make their abstractions leak.

They have to do this because they compete with other philosophers who will pick the nits.


The much larger problem is that philosophers can't agree on one (or at least a few) axiomatic base(s) they want to ground their argumentation on. Thus lots of philosophical discussions are rather of the kind "we have different axioms and thus come to different conclusions".


"If you wish to converse with me, define your terms." -Voltaire

Perhaps it depends on what you mean by "precise", but philosophical arguments do tend to start by defining one's terms or disputing the opponent's definitions, expressed or implied. When continental philosophers talk, that's the only part I can follow.


I took parent's comment to mean it's about philosophers who deal with logic. You may recall that logic as a tool free of opinion originated with Aristotle, and the first thing that comes to my mind in this context is Principia Mathematica.


Philosophy has some but not all attributes of this–for example, philosophical problems usually don't have the same level of objectivity and settledness, so you don't get the same 60-second level feedback loop of making a conjecture, being wrong, and refining it. That's not to say that philosophers never do something similar, just that it's not required as part of training.


I think this is answered in the first paragraph. The author is clearly giving an answer to "what is the point of learning mathematics?” There is nothing in the question or answer that implies non-mathematicians can't have these character traits.


Absolutely not. Being highly rational is not the same thing as being mathematical.

It's why economists are terrible hedge fund managers and not mathematicians. It's why all these script-kiddie programmers in Silicon Valley are terrible.

Tell me ... when you are mathematical, you cannot be wrong in all worlds.


There's one habit in particular that he didn't really touch on, and it has been one of the most impactful effects of my math degree on my thinking.

There seems to be a gap between formal definitions and what we feel actual definitions are. Every once in a while, a professor would prove something that was clearly right, but felt like a violation of some unstated implicit part of a definition (for example, that the set of all vector spaces is itself a vector space). Despite feeling wrong, I couldn't come up with a particular reason that it wasn't right (short of falling back on some weasel word like "technically that's correct"). Math education forces that gap out into the open, and makes you pay more than lip service by building more and more theorems on top of it. The real world effect is to make you examine whether there's some actual value in this definitional gap (which can then be explicitly stated) or whether it's just fallacy.

I know far too many people who are comfortable with stopping at "this doesn't feel correct despite me having no counterargument to its 'proof' ". I've never been inclined that way, but my math education certainly sharpened my ability to avoid this.


My coworkers and I use a related technique all the time, e.g.:

"I think X, Y, and Z are true." "If those are true, then A should also be true. But A seems clearly false." "Hmmm...I agree A is surprising, but not clearly false. What would distinguish a world in which A was true or not?" "Well, if A was true, we'd expect B, otherwise C." "Ok, I agree C seems much more likely than B, so that suggests one of X, Y, or Z is wrong."


When a conclusion feels wrong despite being derived logically, that usually means that the definitions are flawed. In math, the definitions of structures don't have to reflect anything real; it's all about starting with arbitrary, rigorously stated definitions and exploring their implications. So when it feels like definitions are wrong in math, that's just a reflection of that "definitional gap".

But the virtue of always siding with the logical-but-unintuitive isn't as valuable outside of mathematics, so it's maybe not the best point to bring up in an article about skills math teaches that are useful in "real life". In philosophy, definitions aren't arbitrary and rigorous; they're meant to be reflections of reality. So when a conclusion feels unintuitive, that's a decent argument against the definitions it was derived from. If mathematics teaches people just to pay attention to the definitional gap, that might be useful enough, but isn't that covered by "discussing definitions"?


> for example, that the set of all vector spaces is itself a vector space

Hmm... under what operations? It's a semigroup under direct sum, and probably something under the tensor product, but I'm having trouble imagining what the scalar multiplication should be. (Or was that a "fictional" example? :)


Sigh, teach me to write a comment quickly without proofreading on HN, on a topic of any complexity. As written, my comment is wrong: I meant to write "vector space functions"[1] but left out a word. What I was getting at was the unintuitiveness that arises from thinking of vector space elements as number-like things (in the sense that we're familiar with them from basic math: numbers or coordinates) to being "anything that meets the definition", including functions. Seriously though, thanks for pointing that out, part of what I still love about the HN community is getting called out for even esoteric incorrectness.

[1] which was my concise but imprecise way of expressing "the set of functions from a set over a field to a vector space over the same one", with + and * defined as you'd expect.
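
Spelled out (one reading of "as you'd expect"): for functions f, g from a set S into a vector space V over a field F, the operations are pointwise,

  (f + g)(x) := f(x) + g(x)
  (c * f)(x) := c * f(x)    for c in F, x in S

and checking the vector space axioms for this set of functions is routine.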


Unless I'm mistaken, the totality of vector spaces is not even a set.


There's one for every cardinality (up to isomorphism). I forget my set theory, but if the cardinalities form a set, then the totality of vector spaces does too. If not, we can amuse ourselves by looking at the set of finite-dimensional vector spaces.


Mathematics promotes bad habits too. Example 1: paying too much attention to worst-case scenarios. When creating a mathematical proof, you want to make sure that your conclusion holds in all possible scenarios allowed by your assumptions. In real-life decision-making, worst-case scenarios are really bad and really expensive to guard against, but also really rare, so often the best thing to do is to ignore them. You may find that hard to do psychologically if you spend too much time doing mathematics.

Example 2: being too parsimonious. When doing mathematics you have a clear cut objective (your conclusions) and your goal is to achieve it while using the least resources (assumptions). In real life a person thinking mathematically may put in just enough effort to fulfill the formally specified part of their objective while ignoring informal requirements that are assumed to be 'common sense' by non-mathematical people.


Concerning example 1: There are different concepts, such as average-case scenarios (for example modeled by expected value) or quantiles. Use them if this is what you want to model (but pay attention to the assumptions).

Concerning example 2: If you want some 'common sense' property to hold, why don't you specify it then?


What you are saying is definitely true within mathematics and its applications. But the original article was about “The […] skills that students of mathematics […] will practice and that will come in handy in their lives outside of mathematics” and in the same spirit I wanted to point out habits of thought developed doing mathematics which have a negative effect when the problem you are facing cannot be fully formalized.


> The most common question students have about mathematics is “when will I ever use this?”

The author then goes on to provide 6 semi-abstract reasons as to why abstract reasoning matters. Since this question is most likely to be asked by a child, it is code for "how can math skills make me money or get me a job?" Upon hearing the author's answers, 13-year-old me would conclude, "it doesn't, and math is as useless as writing poetry". A child who isn't thinking in this practical way is already academically minded and doesn't need an answer. "When will I ever use this?" It's obvious! "In next year's math class".

It is sad that a student can leave high school with almost no classes that ever make math less abstract. The most beautiful moment in my life was my freshman college physics class. It first filled me with joy and then resentment, because I almost didn't go to college, and there was no reason I wouldn't have understood this course at 13. The same goes for finance.

So answers should focus on applications like modeling the simple Newtonian physics of a satellite in orbit, or predicting the RPM at which a 4-stroke engine red-lines based on max-stress specs. Or show them useful shortcuts like the Rule of 72, and if they are curious, show a derivation. Show examples of ratios/percentages, how fractals can simulate a landscape, ray tracing, or examples of the Fibonacci series in nature, etc.
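
The Rule of 72 is a good candidate because the derivation fits in two lines. Money growing at r% per period doubles when

  (1 + r/100)^t = 2  =>  t = ln 2 / ln(1 + r/100) ~ 69.3 / r    (for small r)

and 72 is used instead of 69.3 because it has more small divisors. At 8%, the rule gives 72/8 = 9 years; the exact value is ln 2 / ln 1.08 ~ 9.01.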


>It is sad that a student can leave high school with almost no classes that ever make math less abstract.

I find it sadder that many can get through high school without any classes that make it more abstract. Too much of it is just mindless repetition of seemingly-useless incantations that have no meaning or apparent purpose (especially if the concepts aren't understood). Where there are word problems, many students are confused and hate them because they haven't really learned how the math abstracts the situation. (Or how to look at a situation and recognize "Hey, I could abstract this with math to get more information!")

I once tutored a student who hated math and after 11th grade was still failing pre-algebra and wouldn't be able to graduate. After a few months, we had covered algebra 1 and 2, geometry and trigonometry, and a brief intro to calculus and statistics. She aced the test, graduated, and went on to become a successful professional in a job that routinely used math. By then she loved it. Problem? It had never been about the joy of discovering new ideas and new ways to use them. It had always just been shuffling numbers around without any concepts or thought processes that could lead to understanding.

My answers are simple:

Why should you learn it? As with learning to read/speak/write, it gives you another way to think and communicate, and exposure to concepts, some of which you may never use, others of which may someday be very valuable. Even if you don't use them everyday, just knowing that they exist and could be helpful (and recognizing when) is valuable.

When will you use this? You may not personally sit around calculating statistics and probability, but as a manager you may recognize that they could help you make important decisions, avoid problems, and increase profits. So you know to hire someone who can do the calculations, and you have a general idea of what to ask them for and how to understand the reports they give you. And to ask questions to get them to explain the business meaning of things affected by confidence levels or standard deviation so you can understand how that affects your decisions. You'll use it to increase knowledge, success, and profit.


The point of the article isn't to show that the math will be used. The point is that people who ask such a question are missing the point.

"Education is what remains after one has forgotten what one has learned..." - Albert Einstein


And the point is that high-horse mathematicians are missing the point when they ignore the relevant lives of their audience.


Math PhD here. I have also had the chance to work with many bright "analytically-minded" people from other backgrounds while a management consultant.

This article is spot on, and some of the behaviors really do seem more indicative of "mathematical people" -- which I suggest really stands for "those who have done research in a 'mathematical' field." (The key being the mix of cold, hard precision in the idealized proof with the squishy, intuitive, human activity of discovering what is pretty and true -- an aspect usually lost in math education!)

Some examples I found most poignant:

- The article mentions "fluidity with definitions" and illustrates it well with the anecdote about Keith Devlin. This is a skill distinct from pure "analytical reasoning," as it requires comfort with definitions that are at once precise but also open to (frequent) change. The process of forming and changing definitions is creative and imprecise, and falls into what is sometimes called "conceptual reasoning." (A programming analog might be API design.)

- Several of the other points are tools for figuring out what is true, and for making imprecise statements precise. For example, the need to "teas[e] apart .. assumptions" is only natural when reading papers with theorems that have very precise conditions .. which do not exactly hold in the case you need! In many other "analytical" contexts, pre-conditions are not made as precise, and arguments by analogy are considered acceptable provided the conclusion is believed. (A programming analog might be debugging when some implicit pre-conditions or invariants break.)


I'd like to propose another habit: the habitual use of certain idioms that are present in mathematical reasoning, in reasoning about other problems. I'm thinking of constructs like

- "if and only if", versus mere implication

- TFAE, as in, the following are equivalent ways of establishing some property

- Defining a relation, say R, between entities, and establishing properties about R, like transitivity.

- Defining notations, labels and symbols for certain quantities or relationships within a problem.

You typically first learn these idioms when doing proofs or reading lots of math and trying to summarize the ideas in it. Some of them are also aligned with the more abstract side of CS (unsurprisingly).

Someone who has had a little experience with these habits can make a real contribution to clear thinking. How many times have you stood at a whiteboard discussion where people repeatedly talk about "the properties that this object inherits from its parent" (or similar) -- but without just inventing a notation for that thing?
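
As a toy illustration of the relation idiom (a hypothetical sketch - the names are my own invention), you can even make such a property executable. A finite relation is just a list of pairs, and transitivity is a one-liner:

  -- Test "aRb and bRc implies aRc" for a finite relation
  -- represented as a list of ordered pairs.
  isTransitive :: Eq a => [(a, a)] -> Bool
  isTransitive r = and [ (a, d) `elem` r | (a, b) <- r, (c, d) <- r, b == c ]

  -- "<=" on {1,2,3} written out as an explicit relation:
  lessEq :: [(Int, Int)]
  lessEq = [ (a, b) | a <- [1..3], b <- [1..3], a <= b ]

  -- ghci> isTransitive lessEq
  -- True

Inventing a name like isTransitive is exactly the "notation for the thing" move that the whiteboard discussions tend to skip.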


I really liked reading this article. The author describes not just the skills that are important for reasoning well (mathematics seems to just be an arena in which this kind of thinking is required), but in the end, also the pitfalls of taking these skills to their logical extremes in social situations. As the author points out, context is very important.

Most of the senior engineers I look up to and consider role models seem to know this. They are carefree and jovial in most conversations, but when it's time to design a system or drill into the root causes of an outage, they are capable of asking (and answering) these types of precise questions. I've learned a LOT from working with them and hope to be like that in the future.


The parts that jump out to me in the context of being a professional programmer:

1) Definitions == overloading. We regularly run into overloaded terms that mean very different things to different people. Microservices. Technical debt.

2) Being wrong - there is a perceived cost to being wrong, and in many organizations it is unfortunately a reality. When new or young, there is so much emphasis on proving oneself that there is easily a disincentive to opening oneself up to the risk of being wrong. After being around a while, it is easier - sometimes I like advancing a position I know to be wrong, because I know it will more efficiently generate explicit reasons why, as people protest (although even there I usually make clear I am playing devil's advocate).

3) Abstraction scaling - it's unfortunately so common for programmers and technical people to get pedantic, which frustrates people and wastes time. Sometimes it's driven by insecurity. For instance, needing to be the smartest person in the room, or proving competence in an area that is not directly relevant. And sometimes that insecurity is rational, for instance if part of an organization that encourages insecurity through its systems. But oftentimes it is because of a lack of experience with abstraction, or an inability to recognize what "rung" a discussion is on, or losing one's place on the abstraction tree. Complicating this is that sometimes it is important to drill down when no one else wants to. Recognizing this and keeping track of relevance is one of the hardest things to do when dealing with deep subjects and arguments.


There is an interesting Math Overflow thread called "Mathematical habits of thought and action which would be useful to non-mathematicians". The answers there are all interesting (and sometimes conflicting), but my favorite (part of an) answer comes from Terence Tao:

>Equivalence. Basically, the idea that two things can be functionally equivalent (or close to equivalent) even if they look very different (and conversely, that two things can be superficially similar but functionally quite distinct). For instance, paying off a credit card at 10% is equivalent (as a first approximation, at least) to investing that money with a guaranteed 10% rate; once one sees this, it becomes obvious why one should be prioritising paying off high-interest credit card debt ahead of other, lower-interest, debt reduction or investments (assuming one has no immediate cash flow or credit issues, of course). Not understanding this type of equivalence can lead to real-world consequences: for instance, in the US there is a substantial political distinction between a tax credit for some group of taxpayers and a government subsidy to those same group of taxpayers, even though they are almost completely equivalent from a mathematical perspective. Conversely, the mistaking of superficial similarity for functional equivalence can lead to quite inaccurate statements, e.g. "Social Security is a Ponzi scheme".

[1]http://mathoverflow.net/questions/74707/mathematical-habits-...
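
The credit-card equivalence, in (hypothetical) numbers: suppose you have $1000 spare and a $1000 balance on a 10% card, and compare one year later.

  pay off the card now:   assets 0,    debt 0     -> net worth 0
  invest at 10% instead:  assets 1100, debt 1100  -> net worth 0

The two choices move net worth identically, which is the sense in which they are "functionally equivalent"; had the guaranteed return been below 10%, investing would come out strictly behind, which is why high-interest debt gets priority.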


Very good article. I might add that a further pitfall in being "highly mathematical" is when you assume that the person you are communicating with is also "highly mathematical" and therefore has similar habits. That's not true MOST OF THE TIME and can lead therefore to severe misunderstandings.


IME you get used to this pretty rapidly, and what you're left with is even better: the ability to translate between the two modes of communication (and to expose the lack of rigor in one, which often leads to its proponent discovering errors in their assumptions).


True. It can be very frustrating for people when their "fuzzy-logic" (for lack of a better word coming to mind) is not accepted as a proof and they don't see how it isn't iron-clad. Fuzzy-logic gets us through most of everyday life with minimal mental burden, and so is definitely useful, but it doesn't hold up in an argument where you're trying to prove something. It can also go the other way though; some topics are inherently fuzzy, and trying to impose rigour on them just ends up wasting everyone's time and leaves nobody convinced.


#6 scaling the ladder of abstraction

This is one I have a tough time explaining to students who are so focused on the task at hand they don't feel there's time to sit back and reason in generality.


Came across this Dijkstra quote via https://www.youtube.com/watch?v=GqmsQeSzMdw

> The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise

By taking away all the non-essential things and leaving only what you need, it allows you to truly understand the structures that you are studying. Definitely rings true when I work in ML/Haskell-style type systems. Easier said than done convincing the students, though...


> By taking away all the non-essential things and leaving only what you need, it allows you to truly understand the structures that you are studying.

This is the essence of type abstraction and parametricity!
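
A minimal Haskell sketch of that idea: the polymorphic type knows nothing about its argument, so parametricity leaves it essentially one total implementation.

  -- Knows nothing about 'a', so (ignoring bottom) it can only
  -- return its argument: the abstraction forces precision.
  f :: a -> a
  f x = x

  -- The concrete type admits infinitely many implementations;
  -- the extra information makes the behavior less determined.
  g :: Int -> Int
  g x = x + 42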


I did poorly in high school, largely because of untreated ADHD. I got to university and did no math, but really excelled at my studies (STEM-related).

Somehow I got into a software engineering position in robotics, and I regularly need not just the logic and rational thought you gain from school, but practical math skills.

I've been picking up what I need to learn from open courseware like MIT lectures. The thing that really upsets me is that I'm learning it without issue. I was always completely capable of it, but my time in high school convinced me I wasn't. I wish I had been convinced/allowed to take more math in university.


Yeah, if we remove every mention of mathematics to make it less "elitist", then one could notice a few nice things, such as the justification behind some TDD practices (writing down examples and counterexamples to come up with a more refined definition), a healthy obsession with the precise usage of words (plus, not mentioned but important: using as few of them as possible - just enough), and that, basically, the whole process is a heuristic search, so a "wrong" is nothing but an empty branch of the search process, not a "failure" - just accept it and backtrack. The need for assertions and tests (proofs) is obvious. And the ladder of abstractions is the very same notion of layers of composable abstractions, where each layer is a "language" and building blocks for the layer above, which was popularized by the SICP and On Lisp books.

This, by the way, is also the answer to the question "do we need to study math for programming". Discipline in the use of one's own mind is what is required.


Just reading those 6 bullet points reminds me very, very strongly of the kind of discussions I have with my friends (all of whom are CS students).


There's a really great Quora answer along the same lines: https://www.quora.com/What-is-it-like-to-understand-advanced...


This is without question my favorite Quora post of all time.

I find myself rereading it every once in a while.


I'm not sure these traits exist outside of mathematics as much as posited. I would think this happens mostly because mathematics is such an abstract realm that it is possible for mathematicians to cultivate that sort of detached mindset effectively. Once you start getting into topics and subjects that are more personal and cannot be completely abstracted, these traits may be a burden or may not exist in practice.

Not many people will take the same approach to their own values, and philosophy shows the difficulties in using the mathematical/logical approach to value systems and human thinking.


Haha, that's funny - I was just scrolling through the comments before clicking and thinking "I wonder what Jeremy Kun has to say", and lo and behold...


7. Being precise.

The precision you hold yourself to through a rigorous approach to a problem is also very valuable and is the reason I think calculus is very important.


> The precision you hold yourself to through a rigorous approach to a problem is also very valuable and is the reason I think calculus is very important.

What property concerning precision does calculus have that doesn't hold for any topic in mathematics?


Infinity (and infinitesimals) is intuitively easy to use but also easy to misuse. Arithmetic isn't so easy to misuse.

Students usually see infinity for the first time in calculus (or maybe when working with infinite series)

These are the situations where we commonly see people being confident in their incorrect answers (which is worse than being unconfident and unable to get correct answers)

Difficulty wrangling infinity comes up a lot on Hacker News, even https://hn.algolia.com/?query=infinite%20series&sort=byPopul...
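
The textbook instance of that confidence (not from those threads, just the classic pitfall) is manipulating a divergent series as if grouping its terms were valid:

  S = 1 - 1 + 1 - 1 + ...
  (1 - 1) + (1 - 1) + ... = 0,   yet   1 + (-1 + 1) + (-1 + 1) + ... = 1

Each step looks innocent, but the series has no sum in the ordinary sense, so both "answers" are wrong.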


> Students usually see infinity for the first time in calculus (or maybe when working with infinite series)

Students also see groups or R-modules for the first time in abstract algebra. So what?

> These are the situations where we commonly see people being confident in their incorrect answers (which is worse than being unconfident and unable to get correct answers)

> Difficulty wrangling infinity comes up a lot on Hacker News, even https://hn.algolia.com/?query=infinite%20series&sort=byPopul....

I'd say the reason why "infinity" is misused so often, but, say, groups or R-modules are not, lies rather in the fact that most math instructors appeal too much to intuition in calculus, but not in abstract algebra. Thus mathematics should be taught in a much more abstract way, where you are not misled by your bad intuition, because you simply aren't able to formulate wrong thoughts in the abstract framework (that's why abstractions and formalism were invented).


Indeed. The most precise I ever had to be was in my real analysis course. It seems agreed upon that all professors who teach it will be utterly pedantic about all proofs in that class. Which I agree with, as a sort of gateway to graduate mathematics, but man was it frustrating, haha.


Calculus doesn't have any true rigour/precision. That's why it's not analysis.


Maybe that would be an imprecise term then? ;)


I would argue that it sometimes gets in the way of one's social life. Being able to stay very precise and keep in touch at the same time might be challenging for some.

Though perhaps it's the other way around, and overly precise (Asperger's syndrome?) people become mathematicians, as opposed to developing the precision.


I would also agree that precision can sometimes get in the way, when confronted with so little of it at work or at home.

I live in a hundred-year-old home, have never seen a 90-degree angle on any of the walls, and everything is a little off. Having grown up learning fine woodworking, my house has provided so many confounding moments.


Not all mathematicians are on the spectrum. When they have done studies, the elevation in AQ is about 2 points, from a mean of 17 in the population to 19. Autism starts at around 23, if I remember correctly.


The only problem is that people tend to build whole theories and conclusions out of assumptions, and their assumptions could be wrong.

Mathematics only shows you deductions that can be made from initial axioms and rigid rules. It doesn't mean the assumptions you made while forming the axioms, or the rules themselves, are right.

Having said that, being mathematical increases your chances of arriving at the right results.


> It doesn't mean the assumptions you made while forming the axioms, or the rules themselves, are right.

There's no absolute notion of “right” to begin with. There's just “provable” (in a theory) and “true” (in a model of a theory). Furthermore, a statement that's true in one model might not be true in another.


> Mathematicians need logical precision because they work in the realm of things which can be definitively proven or disproven.

Not really: by Gödel's incompleteness theorems, they are always working in a realm where some things cannot be definitively proven or disproven.


Your quote didn't say that all things can be disproven or proven. You need to think mathematically!


Yet you disprove OP's statement ;)


'Scaling the ladder of abstraction' applies quite strongly to dealing with a new codebase, and is something I think inexperienced people (myself included) initially struggle with.


Only six habits, I would have hoped for a seventh one :)


The seventh is left as an exercise to the reader.


In line with "Teasing apart the assumptions underlying an argument", perhaps the definition initially included a seventh habit, but it was found to be implied by the remaining six and discharged.


Isn't there an old quip about how poor mathematicians are at arithmetic? ;)


I don't recall. How are poor mathematicians at arithmetic? ;)


lol my bad.


If there were seven habits, there would probably be someone who would have hoped for an eighth one.


Come on, it's a perfect number!


Haha, you can choose among 6 ... or 28 bullet points, if you like perfect numbers :)


7: Schematizing things into, for example, lists.


His piece about discussing ideas and switching opinions made me remember a comment I once made to someone who said "evolution is just a theory, people need to hear both that and creationism". I said "only evolution is a theory, because it's based on facts and bits of logic; the other's not".


It might also be that individuals who have these traits self-select into studying math. So we can't easily conclude that studying math leads to the development of these "highly mathematical habits", because the causation could also flow the other way.


Mathematician here - so true.



