> What has in fact happened in recent decades? Has category theory turned out to be mainly a “simplifying abstraction” [...]?
In short: yes, with some exceptions. The majority of mathematicians are working on problems where category theory is of no importance.
A quick list of some of the major exceptions I'm aware of: the general cluster of ideas surrounding the Fukaya category, derived categories, and homological mirror symmetry [0]; geometric Langlands [1]; and the bulk of modern algebraic topology (I have in mind homotopy theory especially). It's also incredibly useful, practically speaking, for modern algebraic geometry and number theory, although my impression is that you have to get pretty deep in the weeds before it becomes more than a linguistic convenience. For a taste of the latter, see the discussion at the beginning of the category section of this algebraic geometry text (which is in principle accessible to someone who has taken an undergraduate course in abstract algebra – though I feel in reality a significant amount of mathematical maturity is required): Section 1.1 of http://math.stanford.edu/~vakil/216blog/.
Also, it has become something of a meme here to talk about Homotopy Type Theory. I think this is overhyped; the experts in foundations don't seem to think highly of it. Many arguments for its utility or theoretical importance are, in my opinion, misguided – for example, the claim that set theory proves "junk theorems" such as "pi" being an element of "4."
While I'm not an expert in any of this, I'm primarily interested in Homotopy Type Theory (HoTT) because so many tools originally developed for algebraic geometry and algebraic topology (e.g. categories, functors, natural transformations, monads, comonads, topoi) have been successfully applied to logic and programming languages, yet homology and homotopy, core tools in those same fields, haven't been investigated to nearly the same degree in logic and computing. So even if HoTT isn't particularly compelling from the standpoint of pure foundations, I'm still excited about the work being done to explore these core geometric tools in the context of logic and programming languages.
Programmer warning: this is the math-jargon-heavy version; you don't need to know the jargon to make use of these tools for writing day-to-day programs.
There are too many applications for me to do justice to in one message, and it is a bit messy because all of these are related, but there are roughly three ways that category theory gets applied to computing:
- Reasoning about data within computer programs. In "Generating Compiler Optimizations from Proofs" they make a category where the objects are a representation of expressions in programming languages and the morphisms are expression substitutions. They use this to build general proofs of correctness of compiler optimizations (http://cseweb.ucsd.edu/~rtate/publications/proofgen/proofgen...)
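To make the "objects are expressions, morphisms are substitutions" idea concrete, here is a toy Haskell sketch of my own (not the paper's actual construction): rewrites are semantics-preserving maps between expressions, and composing morphisms is just chaining rewrites.

```haskell
-- Toy illustration: expressions as objects, semantics-preserving
-- rewrites as morphisms between them.
data Expr = Var String | Lit Int | Add Expr Expr | Mul Expr Expr
  deriving (Eq, Show)

-- A morphism from an expression to an equivalent one.
type Rewrite = Expr -> Expr

-- x * 2  ==>  x + x  (strength reduction)
strength :: Rewrite
strength (Mul e (Lit 2)) = Add e e
strength e               = e

-- e + 0  ==>  e
addZero :: Rewrite
addZero (Add e (Lit 0)) = e
addZero e               = e

-- Composing morphisms is function composition, and `id` is the
-- identity morphism, which is what makes this a category.
opt :: Rewrite
opt = addZero . strength

main :: IO ()
main = print (opt (Mul (Var "x") (Lit 2)))
```

The paper's actual category is far richer (proofs of equivalence travel with the rewrites), but the shape of the construction is the same.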
- Looking at logics (and thus programming languages) as algebraic objects. Objects are propositions; morphisms are proofs of one proposition from assumed propositions. Similarly, objects are types and morphisms are functions. That in some contexts these are the same thing is the famous (in programming language circles) Curry-Howard-Lambek correspondence. https://existentialtype.wordpress.com/2011/03/27/the-holy-tr... For way more, see https://ncatlab.org/nlab/show/relation+between+type+theory+a... A different version of this is specifying the mathematical meaning of program execution in terms of directed-complete partial orders, which are fruitful to think of categorically. I think Category Theory for Computing Science is the most approachable starting place for that perspective (https://www.math.mcgill.ca/triples/Barr-Wells-ctcs.pdf)
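The propositions-as-types reading can be made concrete in a few lines of Haskell; each function below is a "proof" of the proposition its type expresses (implication as the function arrow, conjunction as pairing, disjunction as Either). The names are mine, chosen for illustration.

```haskell
-- A -> A  ("A implies A"; the proof is the identity function)
refl :: a -> a
refl x = x

-- (A AND B) -> A  (conjunction elimination)
projL :: (a, b) -> a
projL (x, _) = x

-- (A -> B) -> (B -> C) -> (A -> C)  (transitivity of implication)
compose :: (a -> b) -> (b -> c) -> (a -> c)
compose f g = g . f

-- A -> (A OR B)  (disjunction introduction)
inL :: a -> Either a b
inL = Left

main :: IO ()
main = print (projL (compose refl refl ((1 :: Int), "proof")))
```

The categorical reading is that `refl` is an identity morphism and `compose` is composition in the category whose objects are types.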
- As core abstractions for programming. While you can reason about programs and programming languages with categorical tools, you can also use algebraic abstractions within programs. Functors, strong lax monoidal functors (called Applicative Functors in this context), and monads feature prominently in Haskell's standard library. A basic overview is here (https://wiki.haskell.org/Typeclassopedia). The default ones that Haskell uses are all defined as endofunctors on Hask (the category of Haskell types and functions between them), so they look a bit funky to a mathematician, but there are other libraries with more mathematically legible categorical tools as well. (Elsewhere in the thread, Iceland_jack linked to a discussion of one of his proposals: https://www.reddit.com/r/haskell/comments/eoo16m/base_catego...) It is also useful to describe data structures as initial F-algebras in order to abstract over the fold/reduce operation (https://blog.sumtypeofway.com/posts/introduction-to-recursio...). There is a ton more on The Comonad.Reader (http://comonad.com/reader/). For an extremely practical application of non-trivial categorical abstractions, see https://kowainik.github.io/posts/2018-09-25-co-log While I am heavy on the Haskell here because that's what I know best, people also use these abstractions in OCaml, Scala, and other languages with enough functional and type-level infrastructure for them to be ergonomic to explicitly identify and abstract over.
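As a sketch of the initial-F-algebra idea from that last link (names are my own; the only assumption is GHC's DeriveFunctor extension): the fold is written once against the fixed point of a functor, and each data structure supplies only its base functor and an algebra.

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Fixed point of a functor: ties the recursive knot once, generically.
newtype Fix f = Fix (f (Fix f))

-- Base functor for lists: one layer of structure, recursion abstracted
-- out into the type parameter r.
data ListF a r = NilF | ConsF a r deriving Functor

type List a = Fix (ListF a)

-- The generic fold (catamorphism): works for any base functor.
cata :: Functor f => (f b -> b) -> Fix f -> b
cata alg (Fix x) = alg (fmap (cata alg) x)

-- Summation is just an F-algebra for ListF.
sumAlg :: ListF Int Int -> Int
sumAlg NilF        = 0
sumAlg (ConsF a r) = a + r

fromList :: [a] -> List a
fromList = foldr (\a r -> Fix (ConsF a r)) (Fix NilF)

main :: IO ()
main = print (cata sumAlg (fromList [1, 2, 3, 4]))
```

Trees, expression types, etc. get their folds the same way: swap in a different base functor and algebra, and `cata` is unchanged.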
Joseph Goguen also did a ton of work applying category theory to computation that spans several of the above application types. A Categorical Manifesto is probably a good place to start, and he cites a ton of papers in it that are good next steps. (https://cs.fit.edu/~wds/classes/cdc/Readings/CategoricalMani...)
Thanks. I ask that question a lot on here and I think your first bullet point is the first really substantive answer I've gotten. It's an interesting paper.
I don't have time to read the rest of your links, unfortunately. However, I'm uncomfortable appealing to Haskell (or OCaml/Scala) as evidence that category theory provides core abstractions for programming. It's true in this context, but only because Haskell was designed to make it true. I am skeptical that category theory has much to say about practical work in the languages used by most programmers, like C, JavaScript, and Python.
Anyway, about HoTT: I don't think it aims to accomplish what you want it to. Its primary concerns seem to be 1) formalized math and 2) serving as an alternative foundation to ZFC. But if it eventually does what you want, great!
I'm happy to hear that you found the first paper interesting!
While I agree that it is unlikely C and Python programmers will take up categorical abstractions any time soon, there is some interest from the JavaScript world. As an example of how this manifests, here are a concurrency library (https://www.npmjs.com/package/fluture) and a parser combinator library (https://www.npmjs.com/package/parsimmon) that are explicitly built using functors, applicatives, and monads as programmatic abstractions. I don't think that this is going to be the mainstream of JavaScript in the near future, but it does suggest that some folks in the JavaScript world think that categorical abstractions are relevant to their practical work.
I also think it is a bit strong to claim that Haskell was designed as a programming language for categorical abstractions, because none of the ones that I mentioned were included in the initial design of the language. The Haskell 1.0 report[0] was published in 1990, and while it had typeclasses[1], the Functor and Monad typeclasses weren't added until the introduction of monadic IO in Haskell 1.3 in 1996[0]. Moggi's first paper on monads for computation[2] appeared in 1988, but to my knowledge no one involved in Haskell was introduced to it until several years later[4]. While applicative functors sit in the middle of the now-standard hierarchy, they were the most recently introduced to functional programming: it took until 2008 for them to be explicitly identified as an interesting abstraction in "Applicative programming with effects"[3]. Making a language where programs compose nicely was an explicit design goal, so I don't think it is a surprise that that's where category theory first landed as an explicit tool for programming abstractions. The categorical abstractions replaced non-categorical ones because they did a better job of solving the real problems that people programming in these languages had, so I think it is a meaningful application of category theory to computing even if it hasn't yet happened in the mainstream of computer programming.
I agree with your assessment of HoTT as well. I'm not currently working on HoTT or on the use of homotopy and (co)homology as programming abstractions, so if they make some contributions to that on the way to their actual goals, I will be super happy.
>the experts in foundations don't seem to think highly of it.
Feel free to name some of these experts. The only thing I've seen is remarks by generally renowned mathematicians such as Terry Tao who have no interest in either foundations or formal verification (which isn't an indictment; it's quite removed from mainstream mathematics), so I don't see why anyone would use this to evaluate the merit of HoTT.
>Many arguments for its utility or theoretical importance are, in my opinion, misguided – for example, the claim that set theory proves "junk theorems" such as "pi" being an element of "4."
What? Either whoever said that has no clue what they're talking about or you are misremembering their argument. No, set theory doesn't allow you to prove that pi is an element of 4 - but it allows you to state it. And this is actually an issue insofar as, when you want to state actually meaningful theorems, you constantly have to make assertions about what an object is actually supposed to be. If you do this in a formalized, machine-amenable way, you just end up with a form of type theory.
And this isn't actually an argument in favor of HoTT, but just some form of (dependent) type theory, all of which essentially run along the lines of 'This makes proof assistants better/easier to use'. I've never heard a legitimate argument for why dependent type theory is overhyped - even hardcore Isabelle folks will have a hard time disputing that dependent type theory has made much more noteworthy recent advancements.
The main argument (from the point of view of general proof engineering) for HoTT is computable univalence, i.e. being able to transfer properties along isomorphisms without the computation getting stuck, which is a huge deal. Is it a large enough deal that everyone will adopt it? Who knows; it might be that a more 'practical' proof assistant such as Lean, focusing simply on better automation engineering, will win out, but the hype around HoTT is definitely not just a 'meme'. People over-hyping category theory doesn't invalidate the actual research being done.
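To unpack "transferring properties along isomorphisms" for readers who haven't seen it, here is a minimal Lean 4 sketch (no mathlib; the `Equiv` structure and names are my own toy versions). Plain type theory already transports along an *equality* of types; univalence is the principle that turns an equivalence into such an equality.

```lean
-- Toy equivalence between two types (a real development would also
-- carry proofs that the two maps are mutually inverse).
structure Equiv (A B : Type) where
  toFun  : A → B
  invFun : B → A

-- Transport: once types are *equal*, any property carries over.
example (P : Type → Prop) (A B : Type) (h : A = B) (pa : P A) : P B :=
  h ▸ pa

-- Univalence is the missing step `Equiv A B → A = B`. Added as a bare
-- axiom it blocks computation; in cubical type theories it computes,
-- which is the "huge deal" above.
```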
I will leave it up to you to decide whether they know what they're talking about.
> No, set theory doesn't allow you to prove that pi is an element of 4 - but it allows you to state it. And this is actually an issue in so far as that when you want to state actually meaningful theorems, you constantly have to make assertions about what an object is actually supposed to be. If you do this in a formalized, machine amenable way you just end up with a form of type theory.
I am not totally sure about the distinction you are trying to draw here. But I reject the claim that, properly understood, ZFC allows you to prove or state such a thing.
What ZFC does is allow one to reformulate all mathematical questions as questions about sets. So if I want to ask some question, like whether pi^2 is rational, then I can define Dedekind cuts, define squaring, and know that its truth value is the same as that of a certain question about sets of rational numbers.
Now, of course pi is not, philosophically speaking, the same as the set used to encode it. (And in fact there are many ways to do this encoding.) So one cannot even state questions like "Is 3 a member of pi?" in set theory, because 3 and pi are not sets. On the other hand, it is perfectly legitimate to ask questions like, "Is (the pair of natural numbers representing the number) 3 a member of the Dedekind cut associated with pi?"
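A toy Lean 4 contrast may help here (my own stand-in definitions, with Int in place of the rationals to stay dependency-free): membership questions are grammatical against a chosen encoding, but not against the abstract object.

```lean
-- A Dedekind-style encoding: a "real" as a predicate on (stand-in)
-- rationals, i.e. the set of rationals below it.
def Cut := Int → Prop

-- Well-formed: asking whether a rational lies in a particular cut.
def memCut (q : Int) (c : Cut) : Prop := c q

-- Not well-formed: with pi : Real, the statement "3 ∈ pi" is rejected
-- before any proving starts, since Real carries no membership
-- structure. In ZFC, where everything is a set, the analogous
-- statement is grammatical (though its truth depends on the encoding).
```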
> The main argument (from the point of general proof engineering) for HoTT is computable univalence i.e. being able to transfer properties along isomorphisms without the computation getting stuck which is a huge deal.
If my concern is studying the foundations of mathematics, I have not seen a reason why I ought to prefer HoTT over ZFC. But there are various reasons to prefer the latter, for instance the fact that at some point sets need to get into the picture anyway to give a satisfactory account (e.g. to define large cardinals to study the relative consistency strength of various theories).
If my concern is formalizing mathematics, Lean seems to be doing a much better job (see https://xenaproject.wordpress.com/2020/02/09/where-is-the-fa...). And HoTT gets something like 10x more attention than Lean does on HN. So I think "meme" is an apt description.
The FOM folk are in general incredibly bearish on proof assistants, so of course they don't care. If you only care about an 'ultimate' foundation that is the best fit to evaluate other systems, rather than a foundation that is amenable to actively working with it, of course you have no reason to care about HoTT.
If you share their opinion, alright, but I personally find this incredibly myopic.
I haven't read through all comments, but as far as I see people are actually just criticizing that the encoding is always relevant - that is, they can't talk about objects without encodings, but they don't actually care about the theorems regarding the encoding itself. I think this is fundamentally different from saying 'set theory proves 4 is an element of pi' because otherwise you could levy the same criticism against type theory! After all, there's nothing preventing people from encoding the natural numbers as sets in type theory. The benefit of type theory is that instead you can talk about objects without regard for the encoding which isn't possible with ZFC.
>I am not totally sure about the distinction you are trying to draw here. But I reject the claim that, properly understood, ZFC allows you to prove or state such a thing.
>Now, of course pi is not, philosophically speaking, the same as the set used to encode it. (And in fact there are many ways to do this encoding.) So one cannot even state questions like "Is 3 a member of pi?" in set theory, because 3 and pi are not sets.
I guess it depends on how you view it exactly. Under your view you can't state it, but this comes at the cost of not being able to make any statements at all about objects rather than just specific encodings! I'd wager that the vast majority of people would be unsatisfied with this for actively doing mathematics. When people want to talk about the natural numbers, they don't actually care about the specific encoding; they just want to talk about the objects.
The way I see it, if I want to talk about the objects 4 and pi in set theory, I have to consider an arbitrary set that encodes it. Now, because these are still just sets, I can state '4 is an element of pi', but not prove it, because whether it's actually true depends on the specific encoding. This isn't great either, but I think this is a lot closer to how people actually work.
>If my concern is studying the foundations of mathematics, I have not seen a reason why I ought to prefer HoTT over ZFC.
When studying foundations itself ZFC is king, but no one is contesting that. Even HoTT fanatics will admit that its complexity makes it less amenable for certain situations.
>If my concern is formalizing mathematics, Lean seems to be doing a much better job
I'm not sure I understand. So you _do_ accept intuitionistic type theory as a valid alternative to ZFC when one actually wants to formalize mathematics?
Then your dislike of HoTT is even more confounding to me. HoTT is just an extension of intuitionistic type theory. If there's a mature HoTT implementation that stands the test of time (e.g. potentially a fully fleshed out version of cubical type theory), Lean is almost certainly going to incorporate it. Univalence is very much part of 'natural' reasoning: if I show a property for one encoding, I want to be able to transfer it to any other encoding. There's even already an experimental library for Lean that allows you to use univalence. It's just that generally Lean is a lot more conservative rather than experimental. They're not doing a better job because their theoretical foundation is better - it's simply that Kevin Buzzard and his troop are actually putting in the work of doing a lot of formalization.
>And HoTT gets something like 10x more attention than Lean does on HN.
Because HoTT is actually experimental research that could be a breakthrough for intuitionistic type theory, whereas Lean is 'just' people formalizing existing mathematics. I'm actually rather surprised that Kevin Buzzard's work is so popular here - he's certainly doing great work, but there hasn't really been any significant news for the past few years. Not only that, but HN isn't a mathematics forum - his work seems about as popular as Coq and Agda here, even though the latter have a far more material impact on software engineering. HoTT is uncharacteristically popular as well, but at the very least there's a chance that it could change the face of formal verification!
> The FOM folk are in general incredibly bearish on proof assistants, so of course they don't care. If you only care about an 'ultimate' foundation that is the best fit to evaluate other systems, rather than a foundation that is amenable to actively working with it, of course you have no reason to care about HoTT.
> If you share their opinion, alright, but I personally find this incredibly myopic.
I want to distinguish between two senses of "foundation of mathematics":
1) An ontologically minimal language to formalize mathematical reasoning and facilitate proving things about (in)consistency, provability, and relative theory strength, and generally doing metamathematics.
2) A language to actually carry out the details of formalizing mathematics, perhaps jointly by humans and computers, in the sense of the Xena Project and proof assistants.
Here is a claim which might annoy you, but really ought to be uncontroversial: the correct meaning of the term "foundation of mathematics" is the first one. This is what it has meant for about a hundred years, at least, and what someone working professionally in the reverse mathematics/set theory/model theory/logic communities will understand the phrase to mean if you use it in conversation with them.
Now, given this, and given you agree that ZFC does a better job at function (1) above, I think this explains why the FOM people (and more generally those in the communities I listed) find it strange when HoTT is suggested as a "foundation of mathematics." (I can't speak to their opinions on proof assistants, though.) You can see this quite explicitly when Harvey Friedman starts asking questions of the HoTT people; he always asks something about ontological parsimony, remarks how simple the ZFC axioms are (and tells a story about explaining them to his barber, or whatever), and remarks how homotopy is much too sophisticated a concept to be truly foundational.
Moreover, they are going to regard any criticism that ZFC is deficient because it's a poor choice for function (2) as totally beside the point. That's akin to arguing that Turing machines are a poor choice for the foundations of computability theory because it's hard to program them in practice. The same response applies to your second point about how type theory is "closer to how people actually work"; this was never a figure of merit in the first place.
So, I indeed share this view, but I don't think it's myopic. I think it follows immediately from what a "foundation of mathematics" is.
> I'm not sure I understand. So you _do_ accept intuitionistic type theory as a valid alternative to ZFC when one actually wants to formalize mathematics?
Yes, with the caveat noted below. The practical formalization of mathematics is an engineering problem, not properly speaking one of foundations, as discussed above. This is not to say it's not a deep and interesting problem, just that the difficulties there are mostly orthogonal to those encountered when doing, e.g. metamathematics. One ought to do whatever makes formalization easiest for practitioners.
> Because HoTT is actually experimental research that could be a breakthrough for intuitionistic type theory
This is not compelling to me because I think intuitionism is boring and misguided. I think the majority of practicing mathematicians feel the same way. Of course, you can just add axioms to make it classical, but I'd wager the emphasis on intuitionism is another thing that turns people off from HoTT.
> I'm actually rather surprised that Kevin Buzzard's work is so popular here - he's certainly doing great work, but there hasn't really been any significant news for the past years.
He and his crew have actually formalized a good chunk of real mathematics. That's fairly significant, in my mind.
Also, given that it's an engineering problem, there are going to be a bunch of difficulties that only pop up once you start playing with it. So the fact that a group of people have started playing with it, and found some of these problems, and iterated Lean in response – that's good!
>Here is a claim which might annoy you, but really ought to be uncontroversial: the correct meaning of the term "foundation of mathematics" is the first one.
I know some people have advocated for different terms ('practical foundation of mathematics', 'framework of mathematics') but none of them seem to stick, so linguistic prescriptivism strikes me as a waste of time, especially with how overloaded terminology is in mathematics. Any arising confusion is unfortunate, but as long as it's clear what people mean, I don't see the issue and anyone who actually is able to differentiate between (1) and (2) should be able to understand what is meant from context.
>You can see this quite explicitly when Harvey Friedman starts asking questions of the HoTT people; he always asks something about ontological parsimony, remarks how simple the ZFC axioms are (and tells a story about explaining them to his barber, or whatever), and remarks how homotopy is much too sophisticated a concept to be truly foundational.
Sure, and if this were the aim of HoTT, dismissing it for this reason would be legitimate, but it's not, as one can see from the reaction of HoTT people to these questions: they'll deflect to (2) and do some spiel about 'entrances', because that's what they care about and refer to.
>Moreover, they are going to regard any criticism that ZFC is deficient because it's a poor choice for function (2) as totally besides the point. That's akin to arguing that Turing machines are a poor choice for the foundations of computational theory because it's hard to program them in practice. The same response to applies to your second point about how type theory is "closer to how people actually work"; this was never a figure of merit in the first place.
Then why are there so many people arguing for ZFC as an option for (2)? Not only are there proof assistants based on ZFC and people who advocate for this, a lot of the inane ZFC vs ETCS or HoTT discussions on FOM are absolutely about (2). Hell, even Harvey Friedman has engaged often enough in this e.g. arguing that the ability to make nonsensical statements is actually good because mathematicians start their work by making statements that are often nebulous or wrong.
>So, I indeed share this view, but I don't think it's myopic. I think it follows immediately from what a "foundation of mathematics" is.
I'm specifically talking about their views on matters of (2). If your criticism of HoTT is just that they're using the term 'foundation of mathematics' wrong, I'd say the HoTT folks are doing pretty well.
>Yes, with the caveat noted below. The practical formalization of mathematics is an engineering problem, not properly speaking one of foundations, as discussed above. This is not to say it's not a deep and interesting problem, just that the difficulties there are mostly orthogonal to those encountered when doing, e.g. metamathematics. One ought to do whatever makes formalization easiest for practitioners.
Are you referring to the development and study of formal systems such as CIC/intuitionistic type theory/HoTT or the actual development of proof assistants that implement these formal systems (Coq, Agda, Lean, etc)? Calling the former engineering seems quite strange to me and I don't see how you could reasonably claim that it's not metamathematics.
>This is not compelling to me because I think intuitionism is boring and misguided. I think the majority of practicing mathematicians feel the same way. Of course, you can just add axioms to make it classical, but I'd wager the emphasis on intuitionism is another thing that turns people off from HoTT.
Intuitionism is important for formal verification. For classical reasoning you can just - as you mention - add an axiom, which is also exactly what Lean does. If people dislike HoTT because it's not classical then that's just the fault of bad marketing. It's not indicative of the actual merits of HoTT at all.
>He and his crew have actually formalized a good chunk of real mathematics. That's fairly significant, in my mind.
It's significant and needed work, but there hasn't been any significant news for non-mathematicians. From the point of view of someone not familiar with mathematics, they've essentially been doing the same thing for about four years.
> Any arising confusion is unfortunate, but as long as it's clear what people mean, I don't see the issue and anyone who actually is able to differentiate between (1) and (2) should be able to understand what is meant from context.
I don't think this is true at all! For example, maybe it is the case that HoTT proponents care solely about (2), and not (1), in which case my criticisms are misguided. But this is not at all the impression I get from reading what they write. Given I'm sensitive to the distinction between (1) and (2), and given I apparently can't infer the difference from context, what are we to conclude? That I just have poor reading comprehension skills? Maybe so – in which case please help people like me out and use more precise terminology.
OK, I'm being a little coy here, but to be completely serious, I think it's already a confusing enough topic and a bunch of pointless debates could be avoided with better terminology.
> Sure and if this was the aim of HoTT, dismissing it for this reason would be legitimate, but it's not, as one can see by the reaction of HoTT people to these question: They'll deflect to (2) and do some spiel about 'entrances', because that's what they care about and refer to.
I went back to the introduction to the HoTT book [0] to check, to make sure I'm not misremembering things. And they literally say: 'we therefore believe that univalent foundations will eventually become a viable alternative to set theory as the “implicit foundation” for the unformalized mathematics done by most mathematicians.'
This can't be sense (2), because it refers to "unformalized mathematics." So it has to be sense (1). And this is not the only sentence of this kind, e.g., 'This suggests a new conception of foundations of mathematics, with intrinsic homotopical content, an “invariant” conception of the objects of mathematics.'
> Then why are there so many people arguing for ZFC as an option for (2)? Not only are there proof assistants based on ZFC and people who advocate for this, a lot of the inane ZFC vs ETCS or HoTT discussions on FOM are absolutely about (2). Hell, even Harvey Friedman has engaged often enough in this e.g. arguing that the ability to make nonsensical statements is actually good because mathematicians start their work by making statements that are often nebulous or wrong.
Yeah, I agree with you. There's no a priori reason to believe ZFC is a good choice for this.
> Are you referring to the development and study of formal systems such as CIC/intuitionistic type theory/HoTT or the actual development of proof assistants that implement these formal systems (Coq, Agda, Lean, etc)? Calling the former engineering seems quite strange to me and I don't see how you could reasonably claim that it's not metamathematics.
Well, metamathematically speaking, the development and study of these formal systems is boring. They don't have anything interesting to say about consistency or provability or whatever, and (as I understand it) they're all mutually interpretable with ZFC (at least the versions used to actually formalize things in practice). So whatever you want to call it, this research is not really justifiable on traditional metamathematical grounds. The actual point seems to be facilitating the construction of proof assistants.
> It's significant and needed work, but there hasn't been any significant news for non-mathematicians. From the point of someone not familiar with mathematics, they've essentially been doing the same thing for about four years.
I'm not sure what "there hasn't been any significant news for non-mathematicians" means here. For example, Quanta articles on various mathematical discoveries and sociological phenomena (from chalk to disagreements over the foundations of symplectic topology) routinely get posted on HN and are well received. If this is the bar for "significant for a non-mathematician," then I think finally formalizing a good chunk of "fashionable" mathematics is absolutely newsworthy. For a long time many people didn't take formalization seriously because it never touched "real" math, and now you're telling me I can do "real" math with it? To the point where undergrads are formalizing the standard undergrad/grad curriculum? That's huge!
>OK, I'm being a little coy here, but to be completely serious, I think it's already a confusing enough topic and a bunch of pointless debates could be avoided with better terminology.
Sure, I just don't think it's going to happen. The HoTT (and category theory) people largely don't care about (1), so they typically don't need to differentiate. More cynically, it's probably also just better marketing, a new 'foundation' certainly sounds more impressive than just a new 'mathematical framework', especially because (1) is generally pretty esoteric.
>This can't be sense (2), because it refers to "unformalized mathematics." So it has to be sense (1).
I don't think so, look at how they actually define 'unformalized mathematics':
"Indeed, many of the proofs described in this book were actually first done in a fully formalized form in a proof assistant, and are only now being “unformalized” for the first time — a reversal of the usual relation between formal and informal mathematics."
The idea is essentially that using HoTT formalisms informally can lead to new insight, which is exactly what part 2 of the book focuses on and they're making it clear that it's about general mathematics rather than metamathematics. If they were talking about (1), you'd expect them to at least do some actual metamathematics with it!
I think your next sentence actually serves to support this further.
>And this is not the only sentence of this kind, e.g., 'This suggests a new conception of foundations of mathematics, with intrinsic homotopical content, an “invariant” conception of the objects of mathematics.'
And now consider how the sentence actually ends: "- and convenient machine implementations, which can serve as a practical aid to the working mathematician". You could argue that being a practical aid only refers to the convenient machine implementations, but how would 'homotopical content' or 'an "invariant" conception' remotely help with (1)? To me this makes it pretty clear that the aim is to have a practical 'foundation' for the average working mathematician, i.e. (2).
>So whatever you want to call it, this research is not really justifiable on traditional metamathematical grounds. The actual point seems to be facilitating the construction of proof assistants.
Sure, that I don't disagree with; it's definitely in the area of 'applied metamathematics', and there's a reason the type theory field is more on the CS side. I just think that you would generally get some pretty strange looks if you called, e.g., research on sequent calculus an 'engineering problem', even though it's typically done with the intent of eventually getting a practical system.
>I'm not sure what "there hasn't been any significant news for non-mathematicians" means here. For example, Quanta articles on various mathematical discoveries and sociological phenomena (from chalk to disagreements over the foundations of symplectic topology) routinely get posted on HN and are well received. If this is the bar for "significant for a non-mathematician," then I think finally formalizing a good chunk of "fashionable" mathematics is absolutely newsworthy
My point is that often there has been no 'visible' (to a non-mathematician) progress made, so it's curious to me that despite this it still gets posted here every few months. I would have expected some attention if some larger milestone had been reached (e.g. formalizing most of an undergrad degree); it's the consistency that surprises me.
I think we agree that it's legitimate to motivate HoTT on the grounds of pursuing proof assistants/formalized mathematics, and that other justifications – like competing with ZFC to be a "foundation of mathematics" in the philosophical sense – are not compelling. So whether it's reasonable at all for me to take issue with the HoTT hype turns on whether they're offering justifications beyond practically efficient formalization.
I read the introduction to the HoTT book again, and it seems clear to me that the justifications offered there go well beyond just better proof assistants. ("Homotopy type theory also brings new ideas into the very foundation of mathematics.") This is also my experience reading HoTT-adjacent people on Twitter (John Baez, random grad students). They clearly have some, I don't know how to put this, fundamentally aesthetic concerns with set theory, and offer various arguments like the ones we agreed were bad.
I appreciate you pointing out that it is possible to make a totally reasonable case for HoTT as a research program that does not resort to such specious arguments. So I am left wondering why so many other people fail to do this.
My sincere thanks to you and to the others in this thread for taking the time to respond to my question about the applicability of category theory elsewhere in mathematics. I learned a lot from all of your comments.
I will retire in two years, after I turn sixty-five. I’ve started to ask myself if that will be too late for me to try to study pure mathematics again.
[0]: https://en.wikipedia.org/wiki/Homological_mirror_symmetry [1]: https://en.wikipedia.org/wiki/Geometric_Langlands_correspond...