Hacker News

I would be willing to drink the kool-aid if I saw it being used in a practical way. I always feel these posts are filled with category theory jargon without ever explaining why any of the jargon is relevant or useful. I’ve even watched some applied category theory courses online and have yet to feel I’ve gained anything substantive from them.

However, as I started off with, I’m always willing to try something out or see the reason in something. Can anyone give me a practical applied way in which category theory is a benefit to your design rather than just creating higher level jargon to label your current design with?




Category theory as it applies to programming is not dissimilar to learning design patterns. Some people go their whole careers never touching them and write perfectly fine programs. The same is true for category theory.

Those who learn either, or both, gain a larger set of mental abstractions that largely transcend any one programming language. They can say: this thing and that thing share a common salient property through a certain abstract pattern. There is a certain sense of mathematical beauty in recognizing common patterns and mechanisms, and there is also the choice of subtly or drastically altering one's coding style. With either toolkit, a programmer can use just the right design pattern in just the right place and perhaps simplify or codify a section so that it's easier to understand the next time around (or decide not to!), or they can go completely wild and over-pattern their code. The same applies to category theory. One can recognize that some section is actually a closed reduction that accepts any semigroup, and then decide whether or not to codify this into the program.

I tend to be a collector of ideas, so learning category theory and its application in programming gives me all these little gems and ways to re-understand code. Sometimes I use them and sometimes I don't, but my life is more joyful just knowing them regardless.


For me, I use the simple stuff (Semigroup, Monoid, Monads, Functors, ...) the most. Oftentimes I'll be reasoning about a problem I'm working on in Haskell and realize it is a monad, so I can reuse all of the existing monadic control structures. It is also helpful the other way around: when you start working with someone else's code, seeing that something is e.g. a Monad immediately tells you a lot of concrete info about the structure, where in a less structured language you might need to read through a bunch of docs to understand how to manipulate some objects. The "killer app" is all of that extra structure shared throughout all of the codebases.


Same here. I drank the kool-aid a few years back and started using fp-ts in my frontend projects, hoping to use algebraic data types regularly. But today all I use is the Monad. I can't find any motivation to write abstract algebra to build UI widgets.


> I can't find any motivation to write abstract algebra to build UI widgets

This made me chuckle, because I am at this very moment trying to apply the "tagless final style" described here[0] to a custom GUI in a personal-for-fun-and-learning ocaml project : )

[0] https://okmij.org/ftp/tagless-final/course/optimizations.htm...


Category theory does provide new algorithms if you unroll all of its definitions. For example, it reveals that SQL conjunctive queries can be run both "forward" and "backward" -- an algorithm that is invisible in traditional SQL literature because that literature lacks the vocabulary to even talk about what "running backward" means: it isn't an 'inverse' relationship but an 'adjoint' one. We also use it to provide a new semantics for data integration. Link: https://www.categoricaldata.net


I see nothing in your link to justify what you say. Perhaps you can elaborate.


The way to run conjunctive SQL queries forward and backward is described in this paper, https://www.cambridge.org/core/journals/journal-of-functiona... , (also available on the arxiv), where they are referred to as query 'evaluation' and 'co-evaluation', respectively. We never would have been able to discover co-evaluation if not for category theory! The previous link includes this paper, and many others.


So what is coevaluation and why is it useful? Please don't just point at the paper again.


Bi-directional data exchange has many uses. For example, given a set of conjunctive queries Q, because coeval_Q is left adjoint to eval_Q, the composition coeval_Q o eval_Q forms a monad, whose unit can be used to quantify the extent to which the original query Q is "information preserving" on a particular source (so query/data quality). As another example, we use the technique to load data into OWL ontologies from SQL sources, by specifying an OWL to SQL projection query (tends to be easy) and then running it in reverse (tends to be hard). But no doubt more applications await!


Can you point to some examples of owl/sql transforms being flipped? I have trouble believing that an invertible transformation is hard (presumably each step is invertible, right), and certainly "never would have been able to discover" seems inconceivable to me.

Looking at the paper it is very dense and abstract, also 50 pages long.

Edit: on reflection I am doing a bit of sealioning which was not my intention but it does look that way. I'll try to read your paper but if you assure me cat theory really allowed you to do those things you claim, I'll accept you at your word.


You might try pages 8-16 of this presentation: https://www.categoricaldata.net/cql/lambdaconf.pdf . The examples are relational to relational and simplistic but they do illustrate running the same transformation both forward and backward, as well as show the "unit" of such a "monad". We implemented everything in public software, so hopefully the software is even better than my word! As for loading SQL to RDF specifically, I'd be happy to share that technique, but it isn't public yet- please ping me at ryan@conexus.com.


Could this possibly be explained to the average programmer, who doesn't have the foggiest notion what conjunctive queries, coevaluation, or monads are?


In Ruby and many other languages, you have this idea of string concatenation:

  "foo" + "bar" -> "foobar"
  "foo" + "" -> "foo"
That makes it a monoid. Instead of talking about OOP patterns, knowing that the "+" operator forms a monoid over string objects lets us write code that is composable.

Similarly, with arrays:

  [:foo,:bar] + [:baz] -> [:foo,:bar,:baz]
  [:foo,:bar] + [] -> [:foo,:bar]
Some language platforms will implement the first concatenation, but not the second. Without the latter, you couldn't do things like accept an empty user input. You would have to write code like:

  defaults + user_input unless user_input.empty?
This applies to things like hashes, or even more complex objects -- such as ActiveRecord named scopes, even if they don't use a "+" operator. (But maybe Hash objects could be "+"ed together instead of calling merge().)

This is all applicable to how you can design libraries and DSLs that seem intuitive and easy to use. Instead of just thinking about bundling a number of methods together with a data structure, you can make sure that a single method within that bundle works the way you expected across a number of objects. You might even be using these ideas in your code, but don't realize it.
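To make the composability point concrete, here's a minimal Ruby sketch; the `combine_all` helper is hypothetical (not part of the stdlib), but one fold covers any type that supplies `+` and an identity element:

```ruby
# Hypothetical helper, not Ruby stdlib: fold any values that form a
# monoid under `+`, given that type's identity element.
def combine_all(items, identity)
  items.reduce(identity) { |acc, x| acc + x }
end

combine_all(["foo", "bar"], "")          # => "foobar"
combine_all([[:foo, :bar], [:baz]], [])  # => [:foo, :bar, :baz]
combine_all([], "")                      # => "" -- empty input needs no special case
```

The identity element is what makes the empty-input guard disappear: the fold simply returns it when there is nothing to combine.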


I see no more than Groups in your comment. Which are very useful! Also having commutative operations makes for Abelian Groups, which enable operations to be reordered, and makes for implicit parallelism.

Where is Category Theory?


> But where is the Category Theory?

I have no idea. I don't think I have really grokked the ideas around categories and morphisms yet. I'm very slow at learning this stuff.

I do know that each of the small number of intuitions I have gained has sharpened how I reason about my code, that once I grokked a single intuition, they seem both simple and profound, and that there are even more that I don't yet understand.

So for example, when you wrote "Also having commutative operations makes for Abelian Groups, which enable operations to be reordered, and makes for implicit parallelism", I had never thought about that. Initially, I could almost feel something coming together. After I let it sink in for a bit, it started opening doors in my mind. And then I realized it's something that I have used before, but never saw in this way.

I was mainly answering the parent comment about why someone might want to drink the kool-aid, as it were. I suppose my answer is: even though I know there are some powerful ideas within CT itself, the intuitions along the way each have many applications in my day-to-day work.


Actually, it is weird to see groups mentioned in this discussion at all. Non-commutative (generally) groups are useful in dealing with symmetries, but what symmetries can we talk about here? Abelian groups (and modules in general), on the other hand, are completely different beasts (seemingly only useful in homological algebra, algebraic geometry, and topology).

Strings are a (free) monoid with respect to concatenation, sure, but it is easier to learn what a monoid is using strings as an example, rather than to try and "learn" about strings by discussing monoids first. Why this is deemed useful by some is beyond me.


It's monoids, not groups, but yeah, it's just knowledge about basic algebraic structures; no particular insight from category theory here.


In category theory, everything has to be defined in terms of objects and morphisms, which can be a bit challenging, but it means that you can apply those concepts to anything that has "an arrow". It's of course just playing with abstractions, and a lot of the intuitive thinking in CT goes back to using the category of sets. For me, however, seeing the "arrow and composition" formulation unlocks something that allows me to think in a broader fashion than the more traditional number/data-structure/set way of thinking.


> unlocks something that allows me to think in a broader fashion

I believe this is what's at hand here. What is that something? What is that broader fashion? An example would be welcome.


It's always hard to point out when some very abstract knowledge has guided your intuition, but I apply monads very often to compose things that go beyond "purely functional immutable data function composition" -- for example, composing the scheduling of PLC actions so that error handling / restarts can be abstracted away. The more recent insight of learning how to model things with just morphisms is broadening my concept of a functor. I used to think of a functor as some kind of data structure that you can map a function over, but in CT a functor is a morphism between categories that maps objects and morphisms while preserving identity and composition. This means I can start generating designs by thinking "I wonder if there is a morphism between my database schema category and my category of configuration files or whatever." Maybe something useful comes out, maybe not.


Morphisms are not a concept that is specific to category theory. Are there any lemmas or theorems of category theory that are useful in software development?


That's, interestingly enough, not something I'm interested in (formally deriving things and doing "proper" maths as part of my software work). I care much more about developing new intuitions, and seeing the category-theoretical definition of monoid https://en.wikipedia.org/wiki/Monoid_(category_theory) inspired me a lot.

In fact I think that one of the great things about software is that you don't have to care about being very formal; you can just bruteforce your way through a lot of things. Do your database calls really compose, or can pgbouncer throw some nonsense in there? Well, maybe it breaks the abstraction, but we can just not care, put a cloudformation alert in there, and move on.


> seeing the category theoretical definition of monoid https://en.wikipedia.org/wiki/Monoid_(category_theory) inspired me a lot.

What applicable insight did you gain over the normal algebraic definition (https://en.wikipedia.org/wiki/Monoid)?


Mostly it confirms that drawing things as boxes and arrows even though I implement them as folds/monoids (state machines, binary protocol decoders), and implementing the comonoid by "reversing the arrows" (to do log reconstruction when debugging), has a name, and thus is a fruitful practice for me to continue doing, instead of being "those weird drawings I keep noodling".

None of this is rocket science, and you can totally get there without even the traditional monoid definition, but seeing the same diagrams I kept drawing before porting something to more "traditional" code patterns was quite validating.

The link below is a completely rough sketch that tries to match stuff I did on a PS2 protocol decoder a while ago. Being able to invert state machines that use time ticks to advance requires adding additional counters to make them work properly, IIRC. As you can tell, none of this is formal; in fact I worked on this before starting my CT journey, but you can maybe see why I resonate more with the CT formulation of things.

The bigger picture here is that when I do embedded dev, I only have so much memory to keep an event log, so I need to be able to work backwards from my last known state, instead of traditionally walking forward from an initial state and incoming events. That this whole concept of "swapping the arrows around" is something that reputable people actually study was super inspiring.

https://media.hachyderm.io/media_attachments/files/109/434/7...


What language only allows concatenating a non-empty array?

And it still doesn't explain why string concatenation being a monoid is useful in Ruby. It's useful in Haskell because implementing one of its category-theory typeclasses means that type now inherits a stdlib-sized API that works for any monoid, not just concrete types. But even Haskell hasn't proven that it's worth the jargon or abstraction after all these years; every other language remains productive without designing APIs at such a high level of abstraction.


Arguably from a mathematical perspective, the choice of ‘+’ is poor as it implies that the operation is commutative when it’s only associative. Julia used “foo” * “bar” for this reason: https://groups.google.com/g/julia-dev/c/4K6S7tWnuEs/m/RF6x-f...


Then the length() function would be a logarithm...


(For the benefit of those who did not get the joke: a logarithm is a homomorphism from ‘*’ to ‘+’.)
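For the curious, the correspondence is easy to check in Ruby: `length` turns string concatenation into addition, just as `log` turns multiplication into addition.

```ruby
# length is a homomorphism from (strings, +) to (integers, +)...
a, b = "foo", "quux"
(a + b).length == a.length + b.length  # => true

# ...in the same way log is a homomorphism from (*) to (+)
# (compared with a tolerance because of floating point)
(Math.log(4 * 8) - (Math.log(4) + Math.log(8))).abs < 1e-12  # => true
```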


> Instead of just thinking about bundling a number of methods together with a data structure, you can make sure that a single method within that bundle works the way you expected across a number of objects. You might even be using these ideas in your code, but don't even realize it.

I think this was GP's point. They complain that no-one explains why the category theory _jargon_ is useful. As you point out, the jargon is not needed in order to use the concept. Someone might even come up with the concept independently of the jargon.


Most of us can go our whole careers without the "insight" that string concatenation is a "monoid". I don't know any languages that would balk at "foo" + "" or [a, b].concat([]). This all seems academic beyond belief.


For most of my years in primary education, math was easy. Until it was not. I ran headlong into the idea of "instantaneous rate of change", and I was confronted with the existential crisis that there is nothing inherently concrete or "real" about the idea of instantaneous rate of change. My mind just got stuck there, and I nearly failed the course in high school.

Everything after that point was not easy, and somewhere, there was a belief forming that maybe I was not so good at math.

Two things I encountered changed my perspective on that. The first was reading a math graduate student's experience where they talk about struggling with concepts, like groping about in a dark room until one day, you find the light switch and you see it all come together. The other is Kalid Azad's "Math, Better Explained" series. His approach is to introduce the intuition first, before going into the rigor. Between those two, I realized that I had never really learned how to learn math.

Once I started with the intuition, I also realized why people get stuck on things. There are people who don't get negative numbers. They keep looking for something concrete, as if there were something intrinsic to reality that makes negative numbers "real". And there isn't. And yet, in my day-to-day life, I work with negative numbers. I am far better able to reason and navigate the modern world because I grok the intuition of negative numbers.

Then there are the cultures where there isn't even a notion of natural numbers. Their counting system is, "one", "two", and "many". In their day to day life, to echo your sentiment, they can go on their whole life without the "insight" that things can be countable. Many of them probably won't even care that they don't have the intuition of countable things. And yet, in this modern culture, I find myself introducing the idea to my toddler.

Ultimately, it's up to each individual's decision how far they want to go with this. Each culture and civilization has a norm of a minimum set of intuitions in order for that person to navigate that civilization. Category Theory is outside that norm, even among the software engineer subculture. Perhaps one day, it will be the norm, but not today. Beyond that, no one can make you grok a mathematical intuition, as useful as they are once grokked.


I am not great at math. But I learned about complex numbers for fun. It took a bit to make them “real” for me, 3B1B helped a lot as did asking myself how I find negative numbers real (negative integer: if something in the future will exist, it won’t and the negative integer will be incremented, aka a debt to the future existence of whatever the number represents).

Complex numbers: the number line is just a metaphor. What would happen if you make it 2D? Oh you can solve negative roots. Whenever a number needs to spin or have a negative root, it’s a useful tool. Numbers are tools. Cool, that’s “real” enough for me.

I know I will never ever use it. Or at least, not in my current state of how I live my life. I liked learning it, there’s something to be said for what is beyond your horizon. But I do think just like complex numbers, category theory needs a motivation that is compelling. In retrospect, learning complex numbers is most likely not useful for me.

Oh wait a second, complex numbers helped me to understand trig. I never understood it in high school but 3B1B made it intuitive with complex numbers

Nevermind, I’ll just let this all stand here. It’s fun to be open and convinced by myself to the other side while writing my comment :’)

I’m sure if I would have learned category theory, I would have written a similar comment as I think there are a lot of parallels there.


Azad has this description of imaginary numbers that really tickled me: “numbers can rotate”.

I remember my high school teacher teaching imaginary numbers in the trig class, and one of the other kids asked what they could be used for. This was an honors class, and the kid had skipped a math grade. Our math teacher couldn't talk about it off the bat, and unconvincingly told us about applications in electrical and electronic engineering. We still went through it, but none of us knew why any of it was relevant.

I think if he had said "numbers can rotate", maybe some of us (maybe not me!) might have been intrigued by the idea. It would have been a way to describe how EM waves rotate.

My own personal motivation for pursuing CT has to do with working with paradigms: how they are related, and how they are not. Categories and morphisms seem to talk about the kinds of things we can talk about with paradigms, yet much more precisely.


For example, your language can implement sum(Array<X>) for any monoid X. And the sum of an empty list of integers can automatically return 0, while the sum of an empty list of strings can automatically return "". This sounds simple, but many programming languages make you learn special commands and notation for strings. A compiler could also leverage the fact that length() is a homomorphism, so length(a + b) = length(a) + length(b).

Monoids are really simple, so you probably won't get a lot more examples there. But monads are a different story: once you know the pattern, you really miss it when it's absent. For example, Promises in JavaScript are like painful versions of monads. There is a lot of special-case language support and reasoning around Promises that applies only to Promises. In a language with native support for monads, you could use the same code or libraries to handle patterns arising with promises as with other monads like I/O.

In summary, if you never try it, you may never know or care what you're missing, but that doesn't mean you're not missing anything...


I find I get a lot of value out of monads even if the language doesn't expose them.

For example, it allows me to quickly draw a single arrow in a big architecture diagram because I know I'll be able to take a list of promises and turn it into a promise of a list, and then take a function that takes a list and returns more lists, and flatmap it on top of that. Even if Promise is "not really" a monad, and I will probably have to write the machinery manually.

It's what I mean when I say "you can bulldoze your way through a lot of broken abstractions when programming", but at the conceptual stage and when designing, you can get a lot of mileage out of the theoretical constructs.
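The "list of promises into a promise of a list" move can be sketched without any promise library. Here is the same `sequence` shape in Ruby over a poor man's Maybe, where `nil` stands in for a failed computation (toy code, analogous in shape to `Promise.all`):

```ruby
# sequence: turn a list of maybe-values into a maybe-list.
# If any element is nil (a failure), the whole result is nil.
def sequence(maybes)
  maybes.each_with_object([]) do |m, acc|
    return nil if m.nil?  # non-local return aborts the whole fold
    acc << m
  end
end

sequence([1, 2, 3])    # => [1, 2, 3]
sequence([1, nil, 3])  # => nil
```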


I can think of hundreds of “exceptions” to the category theoretic approach.

So for example anything you can do to “string” type that isn’t based on human language you should be able to do to an array of bytes, integers, or “well-behaved” objects. (Also “escaped” strings or strings in various encodings such as UTF-16 or MBCS.)

Show me a language that implements all of those functions with interchangeable and uniform syntax!

What does it even mean when I say “well behaved”? It means that those objects need to have some traits in common with characters, such as: copyable, comparable, etc…

To implement regex for objects would also require a “range” trait and perhaps others. Category Theory lets us talk about these traits with a formal language with mathematically proven strict rules.

With C++ template meta programming it’s possible to get most of the above, but not all, and it’s not used that way in most code. It’s also weakly typed in a sense that C++ can’t express the required trait bounds properly. Rust and Haskell try as well but still fall short.

Category Theory shows us what we should aspire to. Unfortunately the tooling hasn’t matured enough yet, but now that GATs have made it into Rust I’m hopeful there will be some progress!


Strings/lists under concatenation do not form a group, since concatenation is not invertible. (In the sense that there is no list -xs that you can concatenate onto xs to get the empty list.)


Again, I don't see how this piece of information is useful for a programmer. And if it is, I'll bet it's 10x more useful if expressed in more ordinary language.


This is useful, for example, when implementing compiler optimizations or concrete string implementations. It means that you can reduce "foo" + "bar" + foo to at least "foobar" + foo, and that you can intern "foobar". Would that work the same without calling it a monoid? Of course; monoid is just a name. But that name allows you to go shopping in a wide variety of other fields and work other people have done, and mine it for ideas or even concrete implementations.
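A toy version of that optimization in Ruby (my own sketch, not taken from any real compiler): associativity is what licenses folding adjacent literal operands at compile time, regardless of what the runtime operands turn out to be.

```ruby
# Fold runs of adjacent string literals in a flattened concatenation
# expression; symbols stand in for runtime variables. Associativity of
# `+` guarantees the folded expression computes the same result.
def fold_literals(operands)
  operands
    .chunk_while { |a, b| a.is_a?(String) && b.is_a?(String) }
    .flat_map { |run| run.first.is_a?(String) ? [run.join] : run }
end

fold_literals(["foo", "bar", :foo])  # => ["foobar", :foo]
```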


A vast, overwhelming majority of programmers will never implement a concrete string type or compiler optimizations. Can you show me how this kind of theory is practical for a working programmer who does things like batch processing jobs, web applications, embedded software, event-driven distributed systems, etc?


Consider the monoid abstraction in the context of batch processing. Anywhere you have a monoid, you have a corresponding “banker’s law” that says you can find the generalized sum of a collection of items by partitioning the items into groups, computing the sum over each group, and then taking the sum of the sums as your final result. This idea has many applications in batch processing.

For example, in the MapReduce framework, this idea gives rise to “Combiners” that summarize the results of each map worker to massively lower the cost of shuffling the results of the Map stage over the network prior to the Reduce stage.

Another example: In distributed database systems, this idea allows many kinds of addition-like aggregations to be performed more efficiently by computing the local sum for each group under the active GROUP BY clause before combining the groups' subtotals to yield the wanted global totals.

Basically, in any situation in which you have to compute a global sum of some kind over a collection of items, and some of those items are local to each worker, you can compute the global sum faster and with fewer resources whenever you can take advantage of a monoidal structure.
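A tiny Ruby check of the banker's law for the integer-addition monoid: any partitioning of the items gives the same total, which is exactly what makes combiners and partial aggregates safe.

```ruby
# Because (Integer, +, 0) is a monoid, per-partition subtotals (what a
# MapReduce combiner computes locally) sum to the same global total.
items = (1..100).to_a
global_sum  = items.sum
partial_sum = items.each_slice(7).map(&:sum).sum  # partition, sum, sum the sums
global_sum == partial_sum  # => true (both are 5050)
```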


Which is exactly the concept behind optimizing string concatenation and interning, ultimately. Sure, you can make do with the "algebraic" definition of a monoid, or entirely without it, but that doesn't mean the abstraction isn't there to inform your thinking and your research.

One point that really stuck with me is how people who apply category theory (say, people at the Topos Institute) use the concepts as little tools, and every time a problem crosses their way, they try out all the little tools to see if one of them works (similar to how Feynman describes carrying problems around in his head until one day a technique unlocks something).

Having more general abstractions just allows them to be applied to more problems.


My personal favourite: seeing the foldable (a monoid with two types, kind of) in event-driven systems means you can model a lot of your embedded software / web applications as a set of functional state/event-reducing functions, and build a clean system right from the get-go. Seeing the functors in there tells you which parts of the state can be split, parallelized or batched for performance or modularity.

Again, these are all very possible without knowing category theory, but category theory studies what the abstractions behind this could be. I find that the huge amount of intuition I developed (driven by some formalism picked up along the way, but even more so by just writing a lot of code) is reflected in what I see in category theory. It's why I genuinely think that web development is just like embedded development: https://the.scapegoat.dev/embedded-programming-is-like-web-d... which seems to be way more controversial than I thought it would be.
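A minimal sketch of that state/event-reducing style in Ruby (the event shapes here are hypothetical): current state is just a fold of a pure reducer over the event log, and rebuilding state after a restart is the same fold.

```ruby
# Pure reducer: (state, event) -> next state. No mutation, no I/O.
def apply_event(state, event)
  case event[:type]
  when :deposit  then state + event[:amount]
  when :withdraw then state - event[:amount]
  else state  # unknown events leave state unchanged
  end
end

events = [{ type: :deposit, amount: 100 }, { type: :withdraw, amount: 30 }]
events.reduce(0) { |state, e| apply_event(state, e) }  # => 70
```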


I once wrote a unifying parent class for several client-specific reports we had. Think the parent class implementing the top-level control flow/logic and the child implementations having methods that it calls into, sometimes once (e.g. for the header) and sometimes per-row.

Specific reports needed various kinds of customization. Some needed to call the client's web API to get some extra data for each row (and since this was across the internet, it needed to be async). Some needed to accumulate statistics on each row and sum them over the whole report at the end. One needed to query an ancillary database for each row, but all those queries had to be part of the same transaction so that they would be consistent.

Now in theory you can do ad-hoc things for each of those cases. You could make the per-row method always async (i.e. return Future), so that you can override it to do a web API call sometimes. You could stash the statistics in a mutable variable in the report that accumulates them, remembering to do locking. You could use a session on the ancillary database bound to a threadlocal to do the transaction management (most database libraries assume that's how you're doing things anyway), and as long as your returned Futures are never actually async then it would probably work. But realistically it would be very hard to do safely, and I'd never have spotted the underlying symmetry that let me pull out the high-level logic. More likely we'd've stuck with three distinct copy-pasted versions of the report code, with all the maintenance burden that implies.

In principle an abstraction never tells you something you didn't already know - whatever properties you're using, you could've always worked them out "by hand" for that specific case. Like, imagine programming without the concept of a "collection" or any idea of the things you can do on a collection generically (such as traverse it) - instead you just figure out that it's possible to traverse a linked list or a red-black tree or an array, so you write the code that works on all of them when you need it. That's absolutely a way that you can program. But if you don't have this vocabulary of concepts and patterns then realistically you miss so many chances to unify and simplify your code. And if you have the rigorous category-theory concepts, rather than fuzzier design patterns, then you have quick, objective rules for figuring out when your patterns apply - and, even more importantly, when they don't. You can use library code for handling those concepts with confidence, instead of e.g. wondering whether it's ok for your custom iterator to expect the library to always call hasNext() before it calls next(). It's a huge multiplier in practice.


That's correct. But understanding that string concatenation forms a monoid is quite nice, because it means that libraries can offer you this as an interface and you can choose the type you want to use.

Sorry for the wall of text, but I think to maybe help you understand why people (like me) like to work with it a bit more explicitly, I'll have to make it more concrete and give lots of examples. The good stuff comes at the end.

So let's say you have a list or array type. You want to aggregate all the things inside. Let me write pseudo code from here

   let balances = List(1, 4, 23, 7)
   let overallBalance = ???
How do you calculate that? Well it's simple - use a for-loop or call .reduce on it or maybe your language even has a builtin sum() function that works for lists of numbers right?

   let overallBalance = sum(balances)
Now what happens if you want to concatenate strings instead? Can you reuse sum()? Probably not - you will have to hope that your language / std lib has another function for that. Or you have to fall back to implementing it yourself.

Not so if monoids are explicitly supported. Because then, it will be exactly(!) the same function (which has been implemented only once) - no type-overloading or anything.

   let overallBalance = combineAll(balances)
   let concattenated = combineAll(listOfStrings)
Okay, seems a little bit helpful but also doesn't seem super readable, so I guess that's maybe not very convincing. But the reason why I personally love to work with monoids as an explicit concept is the thing that comes next.

Let's say you got bitten because you used plain numbers (or even proper money-types) for the balances, but at some point in the code you mixed up two things and added a balance to the user's age (or something like that) because you used the wrong variable by accident.

You decide to make Balance an explicit type

    class Balance( private innerValue of type number/money )
So the code changes

   let balances = List(Balance(1), Balance(4), Balance(23), Balance(7))
   let overallBalance = sum(balances)
But the second line stops compiling. The std lib sum function doesn't support your custom Balance class for obvious reasons. You will have to unwrap the inner values and then wrap them again (whether for the sum() function or in your handwritten for-loop).

In case you use a duck-typed language where you can just "delegate" the plus-operation to the inner value: congratulations, you are already using monoids without calling them that. Unfortunately, there is no protection against subtle problems: + might mean different things on different types, so they can still be used with sum() but produce unexpected results (read: bugs).

In case you use a language that has good support for monoids, you essentially have to add just one line:

    a monoid exists for class Balance using Balance.innerValue
That's it. You can now do

   let balances = List(Balance(1), Balance(4), Balance(23), Balance(7))
   let overallBalance = combineAll(balances)
And, magically, "overallBalance" will be of type Balance and be the aggregated balance. In case you think that it can't work as easily as that, I'm happy to show you some runnable code in a concrete language that does exactly that. :)
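As a sketch of how little this takes in a language with monoid support (Haskell here; the Balance type and its instances are the illustrative one-time setup, and combineAll is spelled mconcat):

```haskell
newtype Balance = Balance Int deriving (Show, Eq)

-- the "a monoid exists for class Balance" line, spelled out:
instance Semigroup Balance where
  Balance a <> Balance b = Balance (a + b)

instance Monoid Balance where
  mempty = Balance 0

overallBalance :: Balance
overallBalance = mconcat [Balance 1, Balance 4, Balance 23, Balance 7]
-- Balance 35
```

The very same mconcat also concatenates a list of strings - no overload is written anywhere.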

On top of that, it does not end here.

Let's take it a step further. Let's say you don't only have the account balances of a single person (that would be the example above). You have those for multiple people.

So essentially, you've got

    let listOfBalances = List(
      List(Balance(1), Balance(4)),
      List(Balance(23), Balance(7)),
      List(),
      List(Balance(42))
    )
And you want to calculate the overall combined balance. Now it gets interesting. Even in a duck-typed language, you can't use sum() anymore, because the inner lists don't support that. You will have to fall back to a manual two-step process, such as

    sum(listOfBalances.map(balances => sum(balances)))
But with monoids it's different. Since we know how to combine balances monoidally, we also automatically know how to do that for a list that contains balances. In fact, we can do so for any list whose elements we know how to combine. That means, without any other code changes required, you can simply do

    let overallBalance = combineAll(combineAll(listOfBalances))
This is recursive and goes as far as you want. And it does not only work with lists, but also Maps and other structures. Imagine you have a map with keys that are strings and values that are of any type but constrained to be a monoid. E.g. Map("user1" -> Balance(3), "user2" -> Balance(5)). Or Map("user1" -> List(Balance(2), Balance(3)), "user2" -> List(Balance(5))). Or even maps of maps.

Now if we have two of those and we know that the values are monoids, we can combine them as well, using again the same function, no matter what is inside. E.g.:

    let map1 = Map("user1" -> Balance(3), "user2" -> Balance(4))
    let map2 = Map("user2" -> Balance(5), "user3" -> Balance(6))
    let aggregated = combine(map1, map2)
And the result will be

    Map("user1" -> Balance(3), "user2" -> Balance(9), "user3" -> Balance(6))
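A hedged Haskell sketch of both tricks, with one real-world caveat: Data.Map's built-in Monoid is a left-biased union, so merging values monoidally needs unionWith (<>) (libraries such as monoidal-containers package that up as the default):

```haskell
import qualified Data.Map as Map

newtype Balance = Balance Int deriving (Show, Eq)

instance Semigroup Balance where
  Balance a <> Balance b = Balance (a + b)

instance Monoid Balance where
  mempty = Balance 0

-- nested aggregation: the inner mconcat concatenates the lists
-- (the list monoid), the outer one combines the balances
nested :: Balance
nested = mconcat (mconcat
  [ [Balance 1, Balance 4]
  , [Balance 23, Balance 7]
  , []
  , [Balance 42]
  ])
-- Balance 77

-- merging maps by combining the values monoidally
aggregated :: Map.Map String Balance
aggregated = Map.unionWith (<>)
  (Map.fromList [("user1", Balance 3), ("user2", Balance 4)])
  (Map.fromList [("user2", Balance 5), ("user3", Balance 6)])
-- fromList [("user1",Balance 3),("user2",Balance 9),("user3",Balance 6)]
```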
This is such a powerful concept and makes a lot of things so convenient that I'm always crying when I work in a language that does not support it and I have to handroll all of my aggregations.

One note at the end: all of this can be absolutely typesafe in the sense that if you try to call combine/combineAll on something that isn't combinable (= is not a monoid) it will fail to compile and tell you so. This is not theory, I use this every day at work.


I think the real insight of category theory is you’re already doing it, eg, whiteboard diagrams.

Category theory is the language of diagrams; it studies math from that perspective. However, it turns out that category theory is equivalent to type theory (ie, what computers use). So the reason why diagrams can be reliably used to represent the semantics of type theory expressions is category theory.

My workflow of developing ubiquitous language with clients and then converting that into a diagram mapping classes of things and their relationships followed by translating that diagram into typed statements expressing the same semantics is applied category theory.

The “category theory types” are just names for patterns in those diagrams.

Trying to understand them without first getting that foundational relationship is the path to frustration — much like reading GoF before grokking OOP.


> However, it turns out that category theory is equivalent to type theory (ie, what computers use).

Just to elaborate on this a little, a program of type B in context of variables of types A1, A2, ..., An, can be modeled as an arrow (A1 * A2 * ... * An) -> B. (More elaborate type theories get, well, more elaborate.) The various ways you might put programs together (e.g. if-expressions, loops, ...) become various ways you can assemble arrows in your category. Sequential composition in a category is the most fundamental, since it describes dataflow from the output side of one program to the input side of another; but the other ways of composing programs together appear as additional structure on the category.

Categories take "semicolon" as the most primitive notion, and that's all a category really is. In fact, if you've heard monads called "the programmable semicolon", it's because any monad induces a related category whose effectful programs `a ~> b` are really pure programs `a -> m b`, where `m` is the monad.
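A small Haskell sketch of that last point, with m = Maybe (the functions are illustrative): Kleisli composition (>=>) is the "programmable semicolon", gluing a -> m b programs end to end:

```haskell
import Control.Monad ((>=>))

safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)

-- an effectful Double ~> Double, built by sequential composition;
-- failure in either step short-circuits the whole pipeline
recipThenSqrt :: Double -> Maybe Double
recipThenSqrt = safeRecip >=> safeSqrt
-- recipThenSqrt 4 == Just 0.5, recipThenSqrt 0 == Nothing
```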


I’m a math major. I learned category theory in school.

I think of category theory as an easy way to remember things. If some concept can be expressed as category theory, it can often be expressed in a very simple way that’s easy to remember. However, if you try to teach math using category theory from the beginning, it feels a little like trying to teach literature to someone who can’t read yet.

Anything directly useful from category theory can be expressed as something more concrete, so you don’t NEED category theory, in that sense. But category theory has the funny ability to generalize results. So you take some theorem you figured out, express it using category theory, and realize that it applies to a ton of different fields that you weren’t even considering.

The reward-to-effort ratio of category theory is not especially high for most people. It did give us some cool things which are slowly filtering into day-to-day life — algebraic data types and software transactional memory are two developments that owe a lot to category theory.


Pardon me: what IS category theory?


Almost every field you can study in mathematics can be thought of as a “category”, and when you take a math class in undergrad, it usually focuses on one or two specific categories. Category theory is the study of categories in general. As in, “What if, instead of studying a specific field of math, you studied what these different fields had in common, and the relationships between them.”

The part where it gets wacky is when you realize that “categories” is, itself, a category.

https://youtu.be/yAi3XWCBkDo


> I always feel these posts are filled with category theory jargon

The 80/20 rule really applies here. Most of the time there's only a few key type classes that people use. Pretty much just Monad, Monoid, Applicative, Functor. If you grok those, then what people say makes a lot more sense.

> Can anyone give me a practical applied way in which category theory is a benefit to your design

Monad is probably the best example, because so many different, important libraries use it. Once you know how to code against one of them, you kinda know how to code against all of them!

The following is code written in the STM monad. It handles all the locking and concurrency for me:

   transfer x from to = do
       f <- getAccountMoney from
       when (f < x) abort
       t <- getAccountMoney to
       setAccountMoney to   (t + x)
       setAccountMoney from (f - x)
This same code could be in the State monad (excluding the line with 'abort' on it). Or it could be in the ST monad. Or the IO monad. Each of those is doing radically different things under the hood, which I don't have the brainpower to understand. But because they expose a monadic interface to me, I already know how to drive them using simple, readable, high-level code.
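For reference, here is a runnable rendition of that sketch against the actual stm package API, using TVars for the hypothetical accounts; check (which calls retry) plays the role of abort, blocking the transaction until the balance suffices rather than failing it:

```haskell
import Control.Concurrent.STM

transfer :: Int -> TVar Int -> TVar Int -> STM ()
transfer x from to = do
  f <- readTVar from
  check (f >= x)            -- retry until there is enough money
  t <- readTVar to
  writeTVar to   (t + x)
  writeTVar from (f - x)
```

Run it with atomically (transfer 30 accountA accountB); the STM runtime handles the locking, conflict detection, and re-execution.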

As well as the 4 monads mentioned above, parsers are monadic, same with Maybe and Either, List. The main concurrency library 'async' is also monadic (though it just uses the IO monad rather than defining an Async monad).

Often the people writing your mainstream libraries know this stuff too. If you're calling flatMap (or thenCompose), you're calling monadic code. The writers just shy away from calling it monadic because they're worried people will suddenly not understand it if it has a scary name.


Looks like the hard part is "handl[ing] all the locking and concurrency" with a pretty interface; that it satisfies the definition of a monad is a convenience because the `do` keyword requires it, but so would it be if it satisfied any other one, wouldn't it?


I would say it's the other way around:

1. Because the underlying concept for handling composing locks (STM, which I don't know much about, but I'm going to assume is a way to wrap operations within a transaction) is sound

2. it is easy to define the corresponding monad

3. which makes it usable in a do notation

4. producing readable and future-proof code

Step 2 is almost trivial, 1 is where the thinking is at. The do notation is nice to have, but not that important (to me at least). I have no problem writing this in CPS style when using javascript or c++:

   const transfer = (x, from, to, onSuccess, onError) => {
      return getAccountMoney(from, (f) => {
         if (f < x) { 
             return onError();
         } else { 
             return getAccountMoney(to, (t) => {
                 return setAccountMoney(to, t+x, () => {
                     return setAccountMoney(from, f-x, onSuccess, onError);
                 }, onError);
             }, onError);
         }
      }, onError);
   }
The main point is that I can add 80 edge cases to this and it will continue to compose nicely, not a single lock in sight.


If you have 24 minutes, George Wilson's talk about propagators is really interesting and demonstrates how we can find solutions to problems by looking at math.

https://www.youtube.com/watch?v=nY1BCv3xn24


I feel like another application which is maybe not talked about all that much is that knowing category theory gives you the power to name some design pattern, google it, and tap into the vast mathematical knowledge that humanity has already discovered. This becomes incredibly valuable once you become aware of how much you don't know. Or maybe just write the bare minimum code that works, idc.

Oh and also when you recognize your design to be something from CT, it's probably quality. Shit code can't be described with simple math (simple as in all math is simple, not as in math that is easy to understand).


Category theory is to functional/declarative programming what design patterns are to OO programming.

Patterns for decomposing problems and composing solutions.

To ask “How is it useful?” Is to ask “How are design patterns useful in OO?”.

But you already know the answer…

Is there an advantage over just inventing your own vocabulary? Yeah! You don’t have to reinvent 70+ years of Mathematics, or teach your language to other people before you can have a meaningful design discussion.


>You don’t have to reinvent 70+ years of Mathematics

What do those 70 years of mathematics do for me as a software engineer?

>or teach your language to other people before you can have a meaningful design discussion.

It's the other way around. Most engineers will have no clue what you are saying.


>What do those 70 years of mathematics do for me as a software engineer?

You mean other than inventing the entire industry in which you work? It's a weird question that I have no idea how to answer for you.

What is 70+ years of computer science doing for you as a software engineer? Are you standing on the shoulders of giants, or are you inventing/re-discovering everything by yourself from first principles?

>It's the other way around. Most engineers will have no clue what you are saying.

So how do engineers understand each other if we all invent our own meta-language to speak about software design in the abstract?

Miscommunication is what happens by default unless you intentionally become part of a community and learn the meta-language. Mathematics is one such community which has existed for millennia and benefits from the network effect. It's an anti-entropy thing.

There are many parallels here to the Tower of Babel.


>What is 70+ years of computer science doing for you as a software engineer?

No, what does category theory unlock from mathematics that I cannot get without it?


Nothing. You can re-invent and re-discover everything. By yourself. From first principles.

You can re-discover and re-invent all of computation, Mathematics and physics.

Heck, you don’t even need computers if you have pen and paper.

You can do absolutely everything all by yourself. All you need is infinite time.


I'm not sure if this is supposed to be sarcastic, but taking it at face value, mathematics is the underpinning of both computer hardware and computer science. Since we are talking about more abstract mathematics, it is what gave us the lambda calculus, complexity analysis of algorithms, type theory, relational algebra, distributed systems.

More pragmatically, libraries like redux and react are heavily influenced by concepts from functional programming, rust has novel concepts from type theory, and data engineering in the cloud age leverages a lot of algebraic concepts to achieve massive data throughput. We have parser combinators, lambdas, stronger typing, map and flatmap in most languages these days. These all come directly from mathematics.


>redux, react are heavily influenced by concepts from functional programming

Functional programming concepts don't require learning category theory

>rust has novel concepts from type theory

Type theory isn't category theory. Rust's borrow checker was not inspired by affine types.

https://smallcultfollowing.com/babysteps//blog/2012/02/15/re...

>data engineering in the cloud age leverages a lot of algebraic concepts to achieve massive data throughput

This doesn't require category theory, and those are basic concepts from algebra that you can learn outside of the context of category theory.

>We have parser combinators, lambdas, stronger typing, map and flatmap

These don't require category theory either.


Oh, I thought your point was that mathematics wasn't bringing anything to computers. As far as I understand it, category theory is more about finding commonalities across mathematical fields (or indeed, scientific / engineering fields), more so than solving more concrete problems.

What does that give you? For me, I think it gives me an easier way to see common abstractions across different problems I work on.

I am at the beginning of my CT journey myself, but a layman's understanding of monads, functors and applicatives gets me really far, writing pretty much the same code whether I do bare-metal embedded or frontend javascript. The point is not that I couldn't write bare-metal code or frontend code without it; it's that I am much quicker to see "my javascript promises are like a monad" and "my coroutines are like a monad" and to shrink my cognitive load.


>Functional programming concepts don't require learning category theory

This is such a reductionist world-view. Programming concepts don't require you to learn the theory of computation either, but having a theoretical/abstract grounding for what computation is disconnected from any particular programming language/model of computation helps. A lot.

>Type theory isn't category theory

It depends on what you mean by "isn't".

There is a 1:1 correspondence between type theory and category theory constructs.

https://ncatlab.org/nlab/show/computational+trilogy#rosetta_...


"I’ve even watched some applied category theory courses online and have yet to feel I’ve gained anything substantive from them."

To understand category theory's appeal in a programming context, you must understand the duality of power in programming languages. Some people call a language "powerful" when it lets you do whatever you want in the particular scope you are currently operating in, so, for instance, Ruby is a very powerful language because you can be sitting in some method in some class somewhere, and you can reach into any other class you like and rewrite that other class's methods entirely. Some call a language powerful when the language itself is capable of understanding what it is your code is doing, and further manipulating it, such as by being able to optimize it well, or offer further features on top of your language like lifetime management in Rust. This type of power comes from restricting what a programmer is capable of doing at any given moment.

If your idea of "power" comes from the first sort of language, then category theory is going to be of very little value to you in my opinion. You pretty much have to be using something as deeply restrictive as Haskell to care about it at all. I am not making a value judgment here. If you never want to program in Haskell, then by all means keep walking past category theory.

Haskell is very much a language that is powerful in the second way. It restricts the programmer moreso than any other comparable language; the only languages that go farther essentially go right past the limit of what you could call a "general purpose language" and I daresay there are those who would assert that Haskell is already past that point.

Category theory then becomes interesting because it is simultaneously a set of deeply restricted primitives that are also capable of doing a lot of real work, and category theory provides a vocabulary for talking about such restricted primitives in a coherent way. There is some sense in which it doesn't "need" to be category theory; it is perfectly possible to come to an understanding of all the relevant concepts purely in a Haskell context without reference to "category theory", hence the completely correct claim that Haskell in no way necessitates "understanding category theory". You will hear terminology from it, but you won't need to separately "study" it to understand Haskell; you'll simply pick up on how Haskell is using the terminology on your own just fine. (Real mathematical category theory actually differs in a couple of slight, but important ways from what Haskell uses.)

So, it becomes possible in Haskell to say things like "You don't really need a full monad for that; an applicative can do that just fine", and this has meaning (basically, "you used more power than was necessary, so you're missing out on the additional things that can be built on top of a lower-power, higher-guarantee primitive as a result") in the community.

This mindset of using minimal-power primitives and "category theory" are not technically the same thing, but once you get deeply into using minimal-power primitives pervasively, and composing them together, you are basically stepping into category theory space anyhow. You can't help but end up rediscovering the same things category theorists discovered; there are a hundred paths to category theory and this is one of the more clear ones of them, despite the aforementioned slight differences. I'd say it's a fair thing to understand it as this progression; Haskell was the language that decided to study how much we could do with how little, and it so happens the branch of math that this ends up leading to is category theory, but as a programmer it isn't necessary to "study" category theory in Haskell. You'll osmose it just fine, if not better by concretely using it than any dry study could produce.

To be honest, I'm not even convinced there's that much value in "studying" category theory as its own thing as a programmer. I never studied it formally, and I wrote Haskell just fine, and I understand what the people using category theory terms are talking about in a Haskell context. If nothing else, you can always check the Haskell code.

There is a nearby parallel universe where the Haskell-Prime community in the late 1990s and early 2000s was otherwise the same, but nobody involved knew category theory terminology, so "monad" is called something else, nobody talks about category theory, and nobody has any angst over it, despite the language being otherwise completely identical other than the renames. In this universe, some people do point out that this thing in Haskell-Prime corresponds to some terms in category theory, but this is seen as nothing more than moderately interesting trivia and yet another place where category theory was rediscovered.

But I bet their Monad-Prime tutorials still suck.


Then it seems fair to say that, since Haskell is the most restrictive language (or at least most restrictive in anything like general use), it had to have the most powerful theory/vocabulary/techniques for dealing with the limitations. (Or, more negatively, nobody else willingly put themselves in a straight-jacket so tight that they had to use monads to be able to do many real-world things, like I/O or state.)

> But I bet their Monad-Prime tutorials still suck.

Funniest thing I've read today. Thank you for that gem.


> Or, more negatively, nobody else willingly put themselves in a straight-jacket so tight that they had to use monads to be able to do many real-world things, like I/O or state.

I want to emphasize how true this is. In the Haskell 1.2 report, before Haskell Monads existed, the type of `main` was more or less:

    type Dialogue = [Response] -> [Request]
    main :: Dialogue
The main function took a lazy list of, let's call it OS responses, and issued a lazy list of requests to the OS, whose responses you would find at some point on the input list. Your task was to keep your lazy generation of `Request`s in sync with yourself as you consumed their `Response`s in turn. Read ahead one too many `Response`s and your `main` would dead-lock. Forget to consume a `Response` and you would start reading the wrong `Response` to your request.

The suggestion was to use continuation passing style (something that later we see conforms to the Continuation monad interface) to keep things in check, but this was not enforced, and by all accounts was a nightmare.

So yes, Haskell's laziness straight-jacket (Haskell was basically invented to study the use of lazy programming without having to purchase Miranda) basically means they had to invent the use of Monads for programming, or at the very least invent the IO monad.


They describe composable patterns with a clear hierarchy. The jargon is useful, since it is the language in which they are described. I mean Turing machine, filesystem, socket and pipe are also jargon.

A good example is LINQ: it uses monads and functors, they just gave them other names.


It's not kool-aid, it's applied maths. Just because you don't already know and understand it (and hence don't see the benefit of it) doesn't mean there's no clear and obvious benefit. But you do have to learn before you understand.


To be fair though, there is a wide spectrum of "usefulness" for pure mathematics concepts in applied mathematics. For less contemporary examples: you don't need to know about the Lebesgue measure to do practical numerical quadrature for applications, and you don't need to know that rotation matrices form the group SO(3) to do computer graphics. Sometimes the abstractions are powerful and sometimes they are kind of a distraction (or even make communication a problem). There typically needs to be a super compelling application to motivate a shift in baseline knowledge.


Apologies for the jargon, but there isn't room in a comment box for a detailed explanation.

"The Essence of the Iterator Pattern"[0] posed an open problem in type theory as to whether or not "lawful traversals", of type `∀ F:Applicative. (X -> F X) -> (B -> F B)` could appear for anything other than what is called a "finitary container"[1], B := ∃S. X^(P S) where the (P S) is a natural number (or equivalently a polynomial functor B := ∃n. C_n * X^n). Or rather, there was an open question as to what laws are needed so that "lawful traversals" would imply that B is a finitary container.

Eventually I came up with a fairly long proof[2] that the original laws proposed in "The Essence of the Iterator Pattern" were in fact adequate to ensure that B is a finitary container. Mauro Jaskelioff[3] rewrote my proof in the language of category theory to be few lines long: use Yoneda lemma twice and a few other manipulations.

Anyhow, the upshot of reframing this as a simple argument in Category Theory meant that, with the help of James Deikun, I was able to generalize the argument to other types of containers. In particular we were able to devise a new kind of "optic"[6] for "Naperian containers" (also known as representable functors) which are of the form B:=P->X (aka containers with a fixed shape).

Since then people have been able to come up with a whole host of classes of containers, optics for those classes, representations for those optics, and a complete characterization of operations available for those classes of containers, and how to make different container classes transparently composable with each other[4].

For example, we know that finitary containers are characterized by their traversal function (i.e the essence of the iterator pattern). Naperian containers are characterized by their zip function. Unary containers are characterized by a pair of getter and setter functions, something implicitly understood by many programmers. And so on.
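To make one of those characterizations concrete (a hedged sketch; real optics libraries use a more general encoding): the "pair of getter and setter functions" for unary containers is just the familiar lens, here in its simplest representation with illustrative types:

```haskell
-- a unary container is characterized by a getter and a setter
data Lens s a = Lens { view :: s -> a, set :: a -> s -> s }

data User = User { name :: String, age :: Int } deriving (Show, Eq)

-- the lens for the name field, written by hand
nameL :: Lens User String
nameL = Lens name (\n u -> u { name = n })
```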

I don't know if all this means that people should learn category theory. In fact, I really only know the basics, and I can almost but not quite follow "Doubles for Monoidal Categories"[5]. But nowadays I have an "optical" view of the containers in, and forming, my data structures, slotting them into various named classes of containers and then knowing exactly what operations are available and how to compose them.

[0]https://www.cs.ox.ac.uk/jeremy.gibbons/publications/iterator...

[1]https://en.wikipedia.org/wiki/Container_(type_theory)

[2]https://github.com/coq-contribs/traversable-fincontainer

[3]https://arxiv.org/abs/1402.1699

[4]https://arxiv.org/abs/2001.07488

[5]https://golem.ph.utexas.edu/category/2019/11/doubles_for_mon...

[6]https://r6research.livejournal.com/28050.html


This is a lot to digest, but I'm responding to this one because I think this attempts to answer my question and because of your 4th reference. That being said, I need to ask some questions.

Would I be correct in saying — there are less complex objects, which are defined in category theory, that are useful in a generic way in combination with one another?

If the answer to that question is yes, is there a place I look to find the mappings of these software topics/designs to these category theory objects such that I can compose them together without needing the underlying design? If that were the case, I could see the use in understanding category theory so that I could compose more modular and generically useful code.

I have to admit though, I’m still fairly speculative and my thinking on this is very abstract and probably very flawed.


I think your understanding is a little off the mark, at least with respect to my not-so-well written comment, which was just meant as a rambling story about how category theory influenced my and others development of (functional) programming "optics" over the last decade.

The new "optics" people have designed in reference 4 are not themselves "elements of category theory", but category theory provides the theoretical framework that these composable optics live in. Once that framework is identified, it becomes much easier to say "oh, this new optic definition I've just identified fits into this framework, and so does this other optic, and so forth"

To make a loose analogy, one could imagine identifying triangles and cubes and such that have rotational and reflective symmetries, and notice how you can compose those operations, and then see how performing specific shuffles of cards is also composable and has some properties similar to rotating cubes in that repeated operations have a certain period and other such things. Then someone introduces you to group theory and says yes, both the cube rotations and the card shuffling can be seen as groups, and group theory can be used to generically answer your questions about these otherwise seemingly very different operations. And then you start seeing that lots of things are groups, like that Rubik's cube puzzle you have, or lines through an elliptic curve. Coming up with new optics is analogous to noticing various things are groups, a task that is much easier when you have group theory and know how to recognize a group when you see it.

That said, I think a better answer to your question may be in one of my later replies https://news.ycombinator.com/item?id=33806744: Category Theory has been useful for devising an entirely new (functional) programming abstraction, the theory of "optics" in my case. But for day-to-day programming you don't normally need to invent entirely new programming abstractions, you can just use the ones that have already been developed by the computer scientists.


I’m curious if this is an answer that others would agree with. I could see your reasoning being valid on this, but I tend to see others responding that it has much more broad utility than that.

Typically when something is confusing to me and I see nothing to grasp onto, it usually means something is flawed or poorly communicated. In other words, when no one can explain why something has utility (in everyday programming) and there are a fair number of responses, then the chance of bad communication goes down and the chance of bad reasoning (non-answers) goes up. I have a feeling it is not a coincidence that you have the opinion it is not utilitarian in everyday programming and that I responded to your initial post.


I'm not entirely sure if that is what you are asking, but for example pure functions from a type A to a type B can be composed with pure functions from a type B to a type C, to form a pure function from A to C. A lot of programmer-oriented CT takes place in the category of types, with morphisms being functions from one type to another. Programming in a purely functional style makes composition of functions trivial, compared to functions that have side-effects due to mutation.

Now of course this sounds trivial, but you can build much more refined concepts out of that simple idea. For example, creating product and sum types.
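In Haskell notation the starting point really is that small (the functions here are illustrative): composition and identity of pure functions, with products and sums layered on top:

```haskell
-- (.) is the category's composition, id its identity arrow
parseLen :: String -> Int        -- an arrow A -> B
parseLen = length

double :: Int -> Int             -- an arrow B -> C
double = (* 2)

pipeline :: String -> Int        -- the composite A -> C
pipeline = double . parseLen

-- product and sum (coproduct) types as categorical constructions
pair :: (Int, String)
pair = (42, "answer")

choice :: Either String Int
choice = Right 7
```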

And that's where the rabbit hole starts.


Here are some of the more practical insights for medium/large-scale software design that have stuck with me. They apply regardless of whether I'm working in a language that lets me easily express categorical-ish stuff in the type system:

- Triviality

The most trivial cases are actually the most important. If I build a little DSL, I give it an explicit "identity" construction in the internal representation, and use it liberally in the implementation, transformation phases, and tests, even though obviously its denotation as a no-op will be optimized away by my little DSL compiler, and it may not even be exposed as an atomic construct in the user-facing representation.
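A minimal sketch of that habit (all names illustrative): a tiny document DSL whose internal representation carries an explicit identity, which transformation phases rely on and the "compiler" optimizes away:

```haskell
-- internal representation, with the no-op made explicit
data Doc = Identity
         | Text String
         | Seq Doc Doc

-- a transformation phase can introduce/eliminate Identity freely
simplify :: Doc -> Doc
simplify (Seq a b) =
  case (simplify a, simplify b) of
    (Identity, b') -> b'
    (a', Identity) -> a'
    (a', b')       -> Seq a' b'
simplify d = d

-- the denotation confirms Identity really is a no-op
render :: Doc -> String
render Identity  = ""
render (Text s)  = s
render (Seq a b) = render a ++ render b
```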

- Free constructions

When it's difficult to separate a model from an implementation, when simple interfaces are not enough because the model and implementation are substantially complex, multilayered systems on their own, often the answer is to use a free construction.

Free constructions are a killer tool for allowing the late-binding of rather complex concerns, while allowing for reasoning, transformation, and testing of the underlying components without worrying about that late-bound behavior. "Late-bound" here does not have to mean OO-style dynamic dispatch -- in fact now you have more expressive power to fuse some behaviors together in a staged manner with no runtime indirection, or to make other behaviors legitimately dynamic, differing from one call to the next. Tradeoffs in the flexibility/performance space are easier to make, to change, and to self-document.

This is particularly useful when you need to start addressing "meta" concerns as actual user-facing concerns -- when your project matures from a simple application that does Foo into a suite of tools for manipulating and reasoning about Foo and Fooness. Your Foo engine currently just does Foo, but now you want it to not just do Foo, but return a description of all the sub-Foo tasks it performed -- but only if the configuration has requested those details, because sometimes the user just wants to Foo. So you embed the basic, non-meta-level entities representing the actual component tasks in a free construction, compose them within that free construction, and implement the configurable details of that composition elsewhere.
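A toy sketch of that shape in Python (all names hypothetical): the component tasks are composed as plain data first, roughly a free monoid on tasks, and the meta-level concern, whether to record a trace, is late-bound in a separate interpreter.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    name: str
    run: Callable[[int], int]

# The "free" part: a program is just a list of tasks, composed without
# committing to any meta-level behavior.
def interpret(program: List[Task], x: int, trace: bool = False):
    """Run the tasks; optionally describe the sub-tasks performed."""
    log: List[str] = []
    for task in program:
        x = task.run(x)
        if trace:                      # late-bound "meta" concern
            log.append(task.name)
    return (x, log) if trace else x

program = [Task("double", lambda n: n * 2), Task("inc", lambda n: n + 1)]
```

The same program value can be run plainly, run with tracing, transformed, or tested, because composition was recorded as data rather than executed eagerly.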

- "Sameness"

Making clear distinctions between different notions of sameness is important. Identity, equality, isomorphism, adjointness -- sure, you might never explicitly implement an adjunction, but these are important notions to keep separate, at least conceptually if not concretely in source code entities, as a piece of software grows more complex. If you treat two things as fundamentally the same in more ways than they really are, you reach a point where you can no longer recover crucial information in a later phase of computation.

In an indexed/fibered construction, this X may be basically the same as that X, but this X, viewed as the image of a transformation applied to Y, is quite different from that X, viewed as the image of a transformation applied to Z, although the images are equal. So can I just pass my X's around everywhere, or do I need to pass, or at least store somewhere, the references to Y and Z? What if computationally I can only get an X as the result of applying a function to Y, but conceptually the functional dependency (that is, as a mathematical function) points the other way around, from X to Y?

Paying attention to these easy-to-miss distinctions can save you from code-architecting yourself into a corner. You may discover that the true reserve currency of the problem domain is Y when it looks like X, or vice versa.


Knowing category theory helps you better design interfaces. Exactly like the interfaces in OOP.

A lot of the generic interfaces you design in OOP end up being useless: never reused and pointless. You never needed to make those interfaces in the first place.

In category theory, you will be able to create and identify interfaces that are universal, general, and widely used throughout your code. Category theory allows you to organize your code efficiently via interfaces and thus enables maximum reuse of logic.

When creating an interface, you largely use your gut. Category theory allows you to draw from established interfaces that are more fundamental: basically, type classes in Haskell.
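As one example of an interface drawn from algebra rather than gut feeling, here is a Python sketch (function names are my own): any type with an associative combining operation, a semigroup, can be reduced the same way, so one interface covers many concrete instances.

```python
from functools import reduce
from typing import Callable, List, TypeVar

T = TypeVar("T")

def sconcat(combine: Callable[[T, T], T], items: List[T]) -> T:
    """Reduce a non-empty list with any associative operation."""
    return reduce(combine, items)

# The same interface covers many instances:
total = sconcat(lambda a, b: a + b, [1, 2, 3])   # ints under addition
longest = sconcat(max, ["a", "abc", "ab"])       # strings under max
joined = sconcat(lambda a, b: a + b, ["x", "y"]) # strings under concatenation
```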

If you want to see the benefit of category theory, you really need to use Haskell. Monads, for example, are a pattern from category theory.
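The flavor of the monad pattern can be hinted at even in Python (a rough sketch of Haskell's Maybe, with names of my own): chaining computations that may fail, without nested None checks.

```python
from typing import Callable, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def bind(x: Optional[A], f: Callable[[A], Optional[B]]) -> Optional[B]:
    """Monadic bind for the Maybe/Optional pattern: short-circuit on None."""
    return None if x is None else f(x)

def parse_int(s: str) -> Optional[int]:
    try:
        return int(s)
    except ValueError:
        return None

def recip(n: int) -> Optional[float]:
    return None if n == 0 else 1.0 / n

# Each step may fail; bind threads the failure through automatically.
result = bind(bind(parse_int("4"), recip), lambda r: r * 10)
```

In Haskell the same chaining is written with `>>=` or do-notation, and the type system enforces the pattern rather than leaving it as a convention.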

Also a lot of what people are talking about in this thread is exactly what I'm saying^^. I'm just saying it in less complicated jargon.


> you really need to use haskell.

Or Scala. They're the only two common-use languages I'm aware of whose type systems are expressive enough to define what a monad (or a category, or a ...) is internally. Of course you can identify particular monads (or ...) in any language, but you can't talk about them inside most languages.


C++ fits the bill as well (!)

I wager that you can get pretty far with compile-time macros, even ones as rudimentary as C's, to encode a fair bit of generic monad machinery.



