Functors, Applicatives, and Monads in Pictures (2013) (adit.io)
156 points by ingve on Oct 21, 2021 | hide | past | favorite | 112 comments



I would complement it with:

- Scott Wlaschin's Railway Oriented Programming (ROP): https://fsharpforfunandprofit.com/rop/ and https://fsharpforfunandprofit.com/posts/recipe-part2/

- Scott Wlaschin's The "Map and Bind and Apply, Oh my!" series at https://fsharpforfunandprofit.com/series/map-and-bind-and-ap... and its concept of an "elevated world".

- And for Elixir programmers, Zohaib Rauf's "Railway Oriented Programming in Elixir" at https://zohaib.me/railway-programming-pattern-in-elixir/ based on Scott Wlaschin's work.

- https://wiki.haskell.org/All_About_Monads. The introductory part (the first 7 pages of the PDF).

- Brian Beckman: Don't fear the Monad https://www.youtube.com/watch?v=ZhuHCtR3xq8

- Philip Wadler's "Monads for Functional Programming" at http://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/ba...

- HN comment: https://news.ycombinator.com/item?id=16446139

ROP is a good way to introduce the Either monad without ever talking about monads, functors and co. Moreover it is highly useful.
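
For anyone who wants to see the railway idea in code rather than diagrams, here's a minimal Haskell sketch (the validation functions are made up for illustration): each step either stays on the success track (Right) or switches to the failure track (Left), and (>>=) does the routing.

    validateAge :: Int -> Either String Int
    validateAge n
      | n < 0     = Left "age must be non-negative"
      | n > 150   = Left "age is implausibly large"
      | otherwise = Right n

    halveAge :: Int -> Either String Int
    halveAge n
      | even n    = Right (n `div` 2)
      | otherwise = Left "age is odd, cannot halve exactly"

    -- (>>=) threads the value along the success track and short-circuits on the first Left
    pipeline :: Int -> Either String Int
    pipeline n = validateAge n >>= halveAge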


My takeaway from all this is that monads are kinda hard to explain. If something is hard to explain, that further suggests to me that it's probably the wrong abstraction.

(Personally I find the railway abstraction makes a lot of sense, but maybe that's just me.)


Quoted from Bartosz Milewski's 'Category Theory for Programmers', chapter 20:

Let me set the record straight: The whole mysticism around the monad is the result of a misunderstanding. The monad is a very simple concept. It’s the diversity of applications of the monad that causes the confusion. As part of research for this post I looked up duct tape (a.k.a., duck tape) and its applications. Here’s a little sample of things that you can do with it:

- sealing ducts

- fixing CO2 scrubbers on board Apollo 13

- wart treatment

- fixing Apple’s iPhone 4 dropped call issue

- making a prom dress

- building a suspension bridge

Now imagine that you didn’t know what duct tape was and you were trying to figure it out based on this list. Good luck! So I’d like to add one more item to the collection of “the monad is like…” clichés: The monad is like duct tape. Its applications are widely diverse, but its principle is very simple: it glues things together. More precisely, it composes things.

===

Also would like to recommend: Functors and Monads For People Who Have Read Too Many "Tutorials" http://www.jerf.org/iri/post/2958

I get the feeling that the monad has no good metaphor among real-life objects. Trying to make up an analogy just makes things worse.
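
That said, the "composes things" point from the quote can be made concrete without any metaphor. A minimal Haskell sketch (the lookup functions are hypothetical): Kleisli composition (>=>) glues two effect-returning steps into one, the way (.) does for plain functions.

    import Control.Monad ((>=>))

    lookupUserId :: String -> Maybe Int
    lookupUserId name = lookup name [("alice", 1), ("bob", 2)]

    lookupScore :: Int -> Maybe Double
    lookupScore uid = lookup uid [(1, 9.5), (2, 7.0)]

    scoreFor :: String -> Maybe Double
    scoreFor = lookupUserId >=> lookupScore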


When I realised that everything following a couple of simple laws was a monad, I started to notice them a lot and was also able to implement them. The tutorials with pictures or analogies like containers only confused me.


Monads are very easy to explain to anyone used to reading mathematical definitions. It’s the people without a math background that struggle so much. They insist on all of these real world analogies. The problem with that stipulation is that monads are too abstract and general to have a real world analogue.

They are a fantastic and incredibly powerful abstraction though. Their mathematical properties, termed the monad laws, make sure of that.


I have a strong mathematical background, I get the definitions, I understand very well the typical examples like the IO monad, yet I frequently have a hard time following more complex monad use examples. Especially transformers. So, I suppose it is not enough to understand the definitions, you really need to think of them as some form of a language.
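
To give a concrete idea of what I mean by transformers, a minimal sketch using the standard transformers package (the prompt function is made up): MaybeT layers Maybe's short-circuiting on top of IO, and the whole do-block aborts on the first failure.

    import Control.Monad.Trans.Class (lift)
    import Control.Monad.Trans.Maybe (MaybeT(..))

    promptInt :: String -> MaybeT IO Int
    promptInt msg = do
      lift (putStrLn msg)
      s <- lift getLine
      case reads s of
        [(n, "")] -> return n
        _         -> MaybeT (return Nothing)  -- a parse failure aborts the whole block

    addTwo :: IO (Maybe Int)
    addTwo = runMaybeT $ do
      a <- promptInt "first number:"
      b <- promptInt "second number:"
      return (a + b)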


I think that’s the case with any concept in computer science or math. It’s easy enough to explain integration to someone using simple examples like f(x) = x^2 but it gets a lot more difficult when you introduce Stokes’s theorem and differential forms.

Most of the people complaining about monads are not in your situation. They are struggling to grasp even the most basic examples because they understand concepts only by analogy to other things they already know and monads don’t map to anything they know.


> It’s the people without a math background that struggle so much.

You mean like most programmers?


You can also explain them as interfaces, although as you said it gets harder when you get to the "why".


Monads are simple enough to explain them in a single blog post; the problem is, nobody can agree on which blog post.


There is an element of truth to this, but they are easier to use than explain.

Part of the hardness comes from people wanting to use monads as an entry point to category theory.


People get hung up on the terminology, but I’ve found if you don’t use the words functor and monad and just focus on the actual use case, second year college students learning functional programming for the first time can grok this concept.

The way I have found success is to have them write a program first where a monad would be useful, and then ask them to implement it without monads. They will identify the need for a monad themselves, and then you can show them the tool they want. Then they use it to solve the problem, and at that point you start giving them a more rigorous theory.


This is roughly Socratic programming teaching! It's for this reason that I think one of the better monad explanations is this one:

http://blog.sigfpe.com/2006/08/you-could-have-invented-monad...


Quantum mechanics is a bit hard to explain, but it turns out to be quite a good model of our physical observations.

Monads are hard to explain, but they are _not_ hard to use, and can be used to great effect. Option#{map, flatMap} changed how I approached programming, and did not require a full understanding of why they existed or what they were a part of.


I assume I'm displaying my ignorance, but it feels almost tautological to me.


Oh, well, being easy to explain is a feature. Monads lack this one (so do pointers, mutable variables, recursive functions, macros, Boolean logic, Boolean values, metadata...).

Still, that's far from the most important feature for anything you will use daily.


I disagree that anything on your "so do" list is hard to explain. At least, I feel like I understand all of them, but I still don't understand Monads. And I've probably spent 10x the amount of time reading (and still not comprehending) Monad tutorials than I have all of the other concepts combined.


I'm curious how much time you've spent "doing monads" compared to reading about monads. The reason that I bring this up is that monads didn't click for me until I had hands-on experience with a few different monad instances and could see the abstraction take shape in different ways. If that doesn't apply to you, that's ok too. It's fun to understand other people's methods of conceptualization.


I have no idea how much time I’ve spent doing monads because I still don’t know what they are. It’s possible I’ve done “monadic” things in the code I already write, but I have no idea.

I don’t know any Haskell at all, and it seems to be the only language out there where people think monads are a thing you need to learn, so I definitely have no motivation to learn them outside of curiosity.

But I am curious about them, enough to read every tutorial I come across about them. But I still have no idea what they are.


It's definitely possible you've already done monadic things in code. If it's any consolation, I would never have been able to grasp monads by reading tutorials alone; the monad laws are simply too abstract for me to have envisioned a concrete usage. I learned without touching Haskell at all, and my opinion is that Scala + Cats is a much easier onramp to monads than Haskell.

You sound like a kind and curious person. If you ever want a one-on-one walkthrough please reach out to me (email in bio). Everyone deserves the chance to understand something if they want to.


Have you tried to explain them to somebody learning to program?

Explaining things to people who already know them is always easy.


Yes I have, I helped teach some CS courses in college, plus I myself learned them in my very earliest days of programming. It's pretty common for those to be among the first concepts taught.

I'd say that your list is full of things that have practical examples when you consider how real, physical computers work. Boolean logic can be demonstrated with physical transistor circuits, and I'd argue that people can understand them quite easily. Pointers/variables can be shown by explaining how memory addressing works on real machines, and which CPU instructions correspond to the actual code you're writing.

Monads however, seem to be this purely abstract mathematical concept that comes from CS-as-a-math-discipline, which, although useful, requires you to understand a ton more theoretical background before you can approach it. At least that's my experience -- I still don't understand them, and every tutorial I've read either makes them look like magic (i.e. doesn't actually explain them) or takes a mathematically rigorous approach and I get lost in the weeds.

I know intuitiveness is very subjective, but I'd wager that by numbers, many more people are able to understand pointers/mutable variables/macros/boolean logic/etc more easily than they can Monads.


You don’t need much of any theoretical mathematical knowledge to understand monads. What you need is comfort and experience reading mathematical definitions. For most people, that means they’ve studied a lot of math. The famous John von Neumann quote is bang on here:

Young man, in mathematics you don't understand things. You just get used to them.

—- John von Neumann


I don’t mean to stir the pot here, but that’s a pretty unhelpful comment.

You could basically rephrase it as “you don’t need a lot of math, you just need a lot of math”.


I agree. Almost all of the difficulty is just not getting how mathematicians think or communicate.

Honestly I think a lot of effort is wasted starting with this stuff rather than more motivated examples... But the interest is the harder thing to come by, so at least they're learning something.


My advice: Try these two tutorials:

http://www.jerf.org/iri/post/2958

https://philipnilsson.github.io/Badness10k/escaping-hell-wit...

The first one I think cuts through all of the crap surrounding them. They're just a design pattern. And the second one helps motivate them, showing the many contexts where the design pattern may apply.


If you're still confused after having gone through these, and want to understand, shoot me a message (email in bio) and we can hash it out.


> Monads however, seem to be this purely abstract mathematical concept that comes from CS-as-a-math-discipline

Well, if they were, all those people wouldn't be trying to understand them.

They are applied in Haskell (and JavaScript, Rust, C#... but they are way more visible in Haskell). Write some Haskell and you will get them.


What would be my motivation for learning Haskell other than learning monads? Learning an entire programming language (one that expressly states practicality as one of its non-goals and ostensibly only exists to advance type theory) just so that I can learn a concept that only appears useful in that language seems like a waste of my limited time.


It's "write some Haskell", not "learn Haskell". It's much harder to learn Haskell than understanding monads, you just need to see how some look like on practice.

But, well, I don't have any reason for you to do that. You just asked how, and I gave you a way. If you don't have any interest in learning it, there's absolutely no problem, but posting "I never really tried, but that thing is bad because I haven't learned it already" FUD is pretty bad.


My contention is that people are learning monads because they’re on a journey to learn Haskell, then when they finally grok them, they turn around and say they’re this big important thing that all programmers should understand, because Haskell showed them how important they are.

But so far, no monad tutorial has made a good case for why they’re actually useful, other than for learning Haskell. And most advice for how to learn them sounds exactly like what you said: “oh if you still don’t get it, you just need to write more Haskell.” You can see why that’s a bit of a disappointment, right?

Basically, you have people making this argument:

    - monads are important outside of Haskell
    - to understand why, you have to learn them
    - to learn them, you should write Haskell
Which I’m sure you can understand comes off as a bit unconvincing.


Oh, ok. If you want to design a language nowadays, knowing monads (and other abstractions) is absolutely necessary, even if just to reject them in an informed fashion.

If you want to use a language to program, knowing them will help you with approximately nothing, because most languages either have awful ergonomics for that kind of abstraction or bring them in much more limited ways, so you are better off learning how they specifically apply to the language than the general concept. If you learn the general concept, the specific cases will be easier, but unless you plan on learning several languages, it's not worth it.

Anyway, one large exception is if you plan on doing heavily abstract C# libraries. The C# support for monads is almost as general as Haskell's.


> so do pointers, mutable variables, recursive functions, macros, Boolean logic, Boolean values

I disagree -- these all seem pretty easy to explain to me.


I cringed a little when, in the example of fmap (+3) (Just 2), they call (Just 2) a functor. After just having explained that a functor is a typeclass. So Maybe is a functor, but it feels weird to call values of type (Maybe a) functors as well.

Monad tutorials often avoid this problem by talking about monadic values, so perhaps in this example it could be called a functoric value?!


> it feels weird to call values of type (Maybe a) functors as well.

Yes, that is bogus. If you're tolerant of math jargon, I found this article demystified the topic more than anything else did:

https://en.wikibooks.org/wiki/Haskell/Category_theory

And this explains where the monad laws come from:

https://www.haskellforall.com/2012/08/the-category-design-pa...


I think "functorial" may be the right word - try https://en.m.wikipedia.org/wiki/Functoriality


Would that make "monadial" the right word for monadic values?!


I usually see "monadic" - I mean, I don't think that spelling of words in general tends to be tied to their meaning in specific contexts.


Monadic/functorial are the adjective forms of monad/functor. You might say that "List has a monadic nature". This is hinting at the idea that List by itself isn't really a Monad, but instead (List, pure, bind) is.

The OP is talking about something different, though: what is the term for List Int, a concrete type rather than a type constructor? We cannot call List Int a Monad; we cannot even call (List Int, pure_Int, bind_Int) a Monad. The definition really requires polymorphism. But, obviously, List Int is closely related to the List-Monad.

I've also heard "monadic value" used. I tend to just use "monad" even though I know it's an even greater abuse of language.


Does it feel weird to call spaghetti a meal?


A more accurate translation to food would be something like “does it feel weird to call a physical plate of spaghetti a recipe?”


Honest beginner question: Why wouldn't I just unwrap the values first and then do computation with the unwrapped values?

I get how Maybe types can be useful for parsing something or doing IO, but once you get your Maybe value, wouldn't the first action be to unwrap it? If it is Nothing you can either provide a default value or just stop right there. I don't see the sense of dragging the Nothing further down with you and having to deal with potential nothingness further down the stack, even if you have the tools to make it look nice.

Maybe there are other functors where this makes more practical sense?


> I get how Maybe types can be useful for parsing something or doing IO, but once you get your Maybe value, wouldn't the first action be to unwrap it? If it is Nothing you can either provide a default value or just stop right there. I don't see the sense of dragging the Nothing further …

I agree that this technique works well — as long as you need to use only one Maybe at a time. As soon as you want to combine multiple computations, each of which returns a Maybe, this quickly becomes horrible and cumbersome:

    case parseNumber str0 of
      Nothing -> Left "parse error"
      Just str1 ->
        case parseChar '=' str1 of
          Nothing -> Left "parse error"
          Just str2 ->
            case parseNumber str2 of
              Nothing -> Left "parse error"
              Just str3 -> …
Monads are, quite simply, a way of abstracting this repeated structure.
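
For example (a minimal sketch with toy parsers that each consume a prefix of the input and return the rest), the same chain collapses into a single do-block in the Maybe monad:

    import Data.Char (isDigit)

    parseNumber :: String -> Maybe String
    parseNumber s = case span isDigit s of
      ("", _)   -> Nothing
      (_, rest) -> Just rest

    parseChar :: Char -> String -> Maybe String
    parseChar c (x:rest) | x == c = Just rest
    parseChar _ _                 = Nothing

    parseAssignment :: String -> Either String String
    parseAssignment str0 =
      maybe (Left "parse error") Right $ do
        str1 <- parseNumber str0
        str2 <- parseChar '=' str1
        parseNumber str2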

And of course, then you get some data structures which can’t be unwrapped at all. The standard example is Promises as seen in e.g. JavaScript, but there are others.


"Monads are, quite simply, a way of abstracting this repeated structure."

More precisely, the monad interface allows abstracting this particular repeated structure in much the same way an iterator allows abstracting the repeated process of "get the next thing", but the monad interface does not exist just for Maybe, just as iterators do not exist just for arrays.

It's easy to make the mistake of focusing on the Maybe portion of that repeated code snippet, but the value of the monad interface in that code snippet is everything other than the Maybe portion, that pattern of descending into a computation over and over again with a closure, that fits into a number of data structures and an even larger number of data structures that can be coerced into fitting into that pattern without much effort.

Moreover, Maybe is a bad way of understanding the monad interface and pattern because it does not fully use the monad interface. It doesn't take advantage of the way the Maybe value is passed through to thread any additional values through. While a valid example of a monad interface usage, it is also a degenerate one that makes for a difficult lens to understand the interface.


I'm a big fan of trees and promises as more complete representatives of monads. Trees give us free monads, where `join` grafts the child tree contained in a leaf onto the parent tree; and promises are a bit more viscerally monadic, as you really can't get at the value in a promise, since it probably isn't even there yet.


Too true. I used Maybe as an example only because it had already been mentioned in the parent comment, but I agree that it doesn’t get across how incredibly useful monads are.


One benefit to keeping your value wrapped in a Maybe is that, as you transform and manipulate the value and pass it around in your system, you leave it up to the last place in your system that uses the value to define the fallback value in the case of a None, rather than defining a fallback value partway through and establishing a convention that the fallback value means nothing was found at some other part of your system.

Another benefit to using Maybes is that you avoid the rigamarole of null checks at every call site where you want to use the value. If you have a function that returns null or a value, whenever you call that function you'll always have to add an if guard to validate it's not null. If it is, that function itself may return null, and callers to it will again have to implement the same check.

I wrote a JS implementation of the Option object and the readme has lots of specific examples about these benefits: https://github.com/sbernheim4/excoptional and a few blog posts on these ideas[0][1]

[0] https://sambernheim.com/#/blog/the-engineers-schrodingers-ca... [1] https://sambernheim.com/#/blog/building-a-monad


The unwrapping is implicit, so you can pretend it's not there until you absolutely need it. To take a related example from C# (please excuse my pseudo C#, I'm rusty), you can use the ?. operator to say:

    return myList?.deleteFirst?.takeFirst?.name
you could also say:

    if (myList != null){
        firstDeleted = myList.deleteFirst;
        if (firstDeleted != null){
            newFirst = firstDeleted.takeFirst;
            if (newFirst != null){
                return newFirst.name;
            }
        }
    }
but I find the first one much easier to think about.


> Why wouldn't I just unwrap the values first and then do computation with the unwrapped values?

Same reason why you iterate over an array instead of assigning each value to a temporary variable: you have less code.


Plus you can also do it for indeterminately sized iterators, or values that aren't there yet (promises, etc.)...


You can unwrap. But a monad allows the monad implementor to treat the sequence of transformations applied to a monad as a first-class object that can be manipulated.

For example, for the list monad the transformations could be applied in parallel, or not at all if the list is only partially consumed; for the IO monad, you could perform every operation asynchronously in an event loop instead of blocking.


In general, the monad use case doesn't assume that there's the possibility of "unwrapping". For Maybe it's fine and can be used to replicate monadic functionality. But there are other functors/monads, as you suggest, which make less sense.

Here's a pretty odd one:

    data Needs r a = Needs { execute :: (a -> r) -> r }
In other words, a value x :: Needs Int String is a computation which can return an Int if you provide it a means to convert a String into an Int. You might conceptually say that it will give us a "complex" or composite "view" into values of type String given a "simple" or "atomic" view. For instance, length

    (execute x) length :: Int
This type seems as though it almost certainly "contains" a value of type a (or String in the concrete example above). It might actually contain many such values. Here are three examples, the first two are the cases mentioned above. The third is similar to the None case with Maybes.

    ex1 = Needs (\view -> view "Hello, world")           -- contains one string
    ex2 = Needs (\view -> view "Hello, " + view "world") -- contains two strings
    ex3 = Needs (\view -> 0)                             -- contains no strings
We could also imagine more exotic manipulations occurring inside of the Needs. For example

    ex4 = Needs (\view -> let n = view "Hello" in view (repeat n "world, ") * n + 10)
It should also be clear that there's no way to "unwrap" a value of type Needs R A. This is obviously true if R and A are different types. You actually could imagine using mutable state, like a mutable vector v, to view all the As stored inside of a Needs

    (execute ex4) (\a -> { push v a; () }) -- not really Haskell syntax
But this throws away all of that repeat and * and + nonsense in ex4. It's not really the same as a true "unwrap".

But Needs is still a monad.

    pure :: a -> Needs r a
    pure a = Needs (\view -> view a)

    (>>=) :: Needs r a -> (a -> Needs r b) -> Needs r b
    m >>= f = Needs (\view -> execute m (\a -> execute (f a) view))
So, Needs is an example of a Monad which cannot in general be unwrapped. Working with it is a little complex in most cases, but the monad operations help to consolidate a couple very common, useful operations on Needs.

Finally, I'll give up the ghost and note that Needs is known as the Continuation-Passing Monad. It helps you write code in CPS style without having to actually, deliberately pass continuations all around. This can be really useful for a few dumb-programmer tricks, but it's also a pretty useful framework for building embedded domain-specific languages.


This absolutely takes me back -- this is one of the best easily-approachable guides to Monads I've ever seen. Similar to Learn You A Haskell For Great Good (colloquially "LYAH"), even if it's not the most in-depth or complete, it's the first thing I often send.


I recently wrote an article[0] about implementing a new Monad. It starts off with a simple premise and solution, then slowly builds up the use case to the point where something more complex is needed, both to satisfy additional capabilities and to provide a more ergonomic system, which eventually leads into creating a new type of monad.

I find this approach can make understanding the intuition behind monads more approachable.

[0] https://sambernheim.com/#/blog/building-a-monad


    (a -> b) -> T a -> T b
  T (a -> b) -> T a -> T b
  (a -> T b) -> T a -> T b


I would classify this as a pun, not an explanation.

That said, I prefer a presentation using `times` and `join` rather than `ap` and `bind`:

   map   :: (a -> b) -> (f a -> f b)
   times :: (f a, f b) -> f (a, b)
   join  :: f (f a) -> f a
Reason being, `ap` and `bind` are just `times` and `join` composed with `map`; it's easier (for me) to think about what `ap` and `bind` bring to the table separately from the behavior of `map`.
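
A minimal sketch of that claim, as a toy class rather than Haskell's real Functor/Applicative/Monad hierarchy:

    class MyMonad f where
      myMap   :: (a -> b) -> f a -> f b
      myTimes :: (f a, f b) -> f (a, b)
      myJoin  :: f (f a) -> f a

    -- ap is times followed by map (apply each paired-up function to its argument)
    myAp :: MyMonad f => f (a -> b) -> f a -> f b
    myAp ff fa = myMap (\(g, a) -> g a) (myTimes (ff, fa))

    -- bind is map followed by join (map in the next stage, then flatten)
    myBind :: MyMonad f => f a -> (a -> f b) -> f b
    myBind fa k = myJoin (myMap k fa)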

`times` lets you take a pair of sources and turn it into a source of pairs. This lets you build pipelines that flow together from multiple sources into a single result; with just `map` you can only assemble a unary pipeline with one source.

`join` lets you take a source of sources and meld the layered structures together into a single source. This lets you create pipelines whose contents dynamically extend the pipeline. Without `join`, you can only construct pipelines whose stages are fixed up-front.

(As a bonus, the presentation with `times` leads to the idea that an "applicative functor" is a monoidal functor. If you don't know what a monoid is, maybe it's a toss-up in terminology, but the term "monoid" is vastly more common than the term "applicative".)


Thank you for that, the explanation about `times` makes a lot of sense. However I can't really understand your explanation for `join`. Do you have any example or longer explanation about "pipelines whose contents dynamically extend the pipeline" and how they are different from "pipelines whose stages are fixed up-front"?


Are you familiar with promises, particularly in the JavaScript ecosystem? A Promise<A> represents a pending interaction; you'll get an A when the interaction has finished, but you don't have an A now. But even while you don't have the A, you can prepare a pipeline now that will process it later.

Promise<_> is a functor because we can write a function `map : (Promise<A>, A -> B) -> Promise<B>` that takes a pipeline stage to run, and promises to run it on the A when it arrives. That stage produces a B, but we don't have it yet, either (since it needs that A). Just having a `Promise<B>` doesn't tell us how many pipeline stages it's built -- the fact that we called `map` isn't evidenced in the type itself -- but we know that there's a fixed number, and that all of the stages have already been given to it. After all, we call `map` in the current timestep; the stages are known long before they get to run.

Promise<_> is a monoidal functor because we can write a function `times : (Promise<A>, Promise<B>) -> Promise<(A, B)>` (but see note^). We don't necessarily have either the A or the B, and the interactions that produce them may not complete at the same time; but eventually we will have both of them, so we can wait until then and pair them together. Because we can wait for two pending interactions at once, we can wait for any number of them at once; think about how to write a function `[Promise<A>] -> Promise<[A]>` using `times`.

With `times`, we're just combining all of the stages in the existing promises together. Unlike `map`, which applies stages sequentially, `times` applies them concurrently -- like a branching series of pipes with Y-bends where `times` is used. With this, a `Promise<A>` still has a fixed number of stages known up-front -- all stages are still provided through `map,` which we still call before we ever have an actual `A` to run them with -- but we also now have a fixed number of forks/branches, where before it was a straight-line series of stages.

Promise<_> is a monad because we can write a function `join : Promise<Promise<A>> -> Promise<A>`. The nested promise means that we need to wait for one pending interaction, and then wait for another pending interaction, before we can get an A. This often happens because we interact sequentially with a backend service; we need to log in before we query the system, so if we try to put together pipeline stages like `(Username, Password) -> Promise<Session>` and `Session -> Promise<SensitiveInfo>`, we'll end up with a `Promise<Promise<SensitiveInfo>>`. This type only lets us make up to two interactions; if we can't get `SensitiveInfo` within two interactions, we simply can't write the program at all.

The `join` function lets us collapse two interactions within a single Promise; and just like `times`, if we can collapse two interactions, we can collapse any number of them. With `join`, a type like `Promise<SensitiveInfo>` may take an arbitrary number of interactions before we finally get the result. However, this is a lot more powerful than you might expect: those inner promises don't exist yet! They will exist, as each interaction completes; but now a single `Promise<A>` includes both the stages we've set now, and any stages that future interactions may generate. We can no longer tell how many stages it will take until the final `A` is computed, nor how many interactions it will take. We'll only know when it actually happens.

(^) Well, we also need a function `pure : A -> Promise<A>`, which lets us pretend we don't have an `A` yet. Or, put differently, it puts an A behind a "no-op" interaction that completes immediately and makes the A available to any downstream stages as soon as it's needed. Something like an echo server.


Thanks a lot again, that was a great explanation. I see what you mean now by "pipelines whose contents dynamically extend the pipeline".


Only Haskell programmers would consider the type signature a replacement for an explanation. I don't mean to be rude to you personally - this is endemic to the culture of Haskell.

Possibly, once you're immersed in the world and up to speed with Haskell, type signatures are all you need. Unfortunately, people coming to the language need more context to fit the pieces together, and the vast majority of documentation I found lacked that context.


The type signatures and the laws are all that functors, applicatives and monads are. It's just that these type signatures happen often "in the wild", so we give them names to make reasoning easier.

To take another example, an iterator could be something like:

    next    :: T a -> a
    hasNext :: T a -> bool
That's all an iterator is. The cool thing is that if you implement next and hasNext for a list, an array, a tree and a graph, you can all use them with the same code. That's what monads, functors and applicatives are.

As for the culture part, I think Rust is the sweet spot. Types are here, but you also have explanations. I personally find explanations without types worse than types without explanations, but ideally you should have both. Of course that's assuming meaningful types.


Strictly speaking, in an immutable world, your iterator only gives you access to the head of your collection.

I think this is a fairly standard functional representation of iterable entities:

  next :: T a -> Maybe a
  rest :: T a -> Maybe (T a)
We could (and you should) argue that this is correct by intuition, but it's also possible to sketch a proof that these structures all "behave" the same as lists would. We can first combine `next` and `rest` into a single function -- note that we can always interconvert between (next, rest) and step:

  step :: T a -> Maybe (a, T a)
This has the form of something called a "co-algebra", which consists of (1) a functor `F`, (2) a type `k`, and (3) a function `k -> F k`. In this case, `k = T a` and `F = Maybe (a, -)`.

The interesting thing is that the functor `Maybe (a, -)` can have many co-algebras, depending on which type and step function you pick. The type [a] -- that is, picking T to be [] -- gives a particularly special co-algebra, called the "terminal" coalgebra for `Maybe (a, -)`. In short, while any other choice of coalgebra may support additional operations, you can convert any one of them to a list without disturbing a process that only makes use of the nominated `step` function. They're observationally equivalent to lists.
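
Concretely, "running out" any coalgebra into a list is just an unfold (a minimal sketch):

    -- any step function can be played out into a list, which is what makes []
    -- (with its usual uncons) the terminal coalgebra for Maybe (a, -)
    toList :: (t -> Maybe (a, t)) -> t -> [a]
    toList step t = case step t of
      Nothing      -> []
      Just (a, t') -> a : toList step t'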


> Strictly speaking, in an immutable world, your iterator only gives you access to the head of your collection.

That's true, but I was talking about iterators in general. My point was that functors, applicatives and monads may seem weird and special, but we use the same thing in everyday life. Same thing about the signatures: an iterator is just that, a signature, but nobody complains about it.

Your explanation about functional iterators is great, I think it shows really well how iterators are about treating any data structure as if it were the simplest data structure. In an imperative language, that's the array; in a functional language, it's the list.


> The type signatures and the laws are all that functors, applicatives and monads are

Yes, and the parent I was replying to had only provided the type signatures.

Even more than that though, describing what something is is usually not sufficient. You also have to describe why it's useful, or rather what it's for.

This is analogous with comments in code - in well written code you don't need many comments telling you what the code is doing, it's often fairly clear. They're also for convenience not necessity because you can, with enough time, figure out what the code is doing because you have all the information. What are irreplaceable are "why" comments telling you why something is done. E.g. "this prints two newline characters at the end of the document" is inferrable, but "print two newline characters because X program is unreliable unless we end with a blank line" is invaluable.


> Even more than that though, describing what something is is usually not sufficient. You also have to describe why it's useful, or rather what it's for.

The thing is, these things are very abstract. The signature is what they are and why they are useful. A way to think about them is that they were discovered. People wrote a lot of code, saw the same pattern come up over and over again, and named it so they can abstract over it and reason about it.

> This is analogous with comments in code - in well written code you don't need many comments telling you what the code is doing, it's often fairly clear. They're also for convenience not necessity because you can, with enough time, figure out what the code is doing because you have all the information. What are irreplaceable are "why" comments telling you why something is done. E.g. "this prints two newline characters at the end of the document" is inferrable, but "print two newline characters because X program is unreliable unless we end with a blank line" is invaluable.

I agree with you here. The problem is that it's very hard to describe why they are useful if you didn't encounter the problem they solve yourself.


Yes and no, there are tons of folks who are part of the haskell culture who recognize the need for more context (myself included).

But I think GP comment summarized it pretty well, it shows how they all relate to each other in a concise way. If its not enough of an explanation, ok, look for another.

Of course, there are lots of libraries that only have type signatures as documentation. I think most people think that's less than optimal, but the haskell community is much smaller than many others. So what you'll want to do is look for a blog post or something that explains it in more detail.


This is not an explanation. But it's handy to keep on a post-it note next to your monitor after you already understand the concepts but are still getting comfortable with them.


I'm not a Haskell programmer. I barely understand Haskell type signatures. But it worked for me. It is not a substitute for an explanation, but as a very condensed summary (as in: if you forget everything, remember at least this) it works great.


It's not a "replacement", it's an alternative. Many different things can simultaneously exist.


I wonder why auto-curried function syntax has become so ubiquitous in explaining this. For anyone coming from a more traditional CS background, it's just extra cognitive overhead in understanding the meat of the argument.

For example,

  (a -> b, T a) -> T b
  (a -> b, T<a>) -> T<b>
Is just as good at explaining what map means as the curried version, but more familiar to the vast majority of programmers and CS graduates.

Note: I do understand why auto-currying is extremely useful for doing functional programming, and I have no objections to teaching it. I just don't think it's important for teaching functors, applicatives and monads.

In a separate nit-pick, these three function definitions don't mean much on their own, and there are many pure functions that satisfy them without forming a functor/applicative/monad.


> I wonder why auto-curried function syntax has become so ubiquitous in explaining this.

Probably because Haskell (and related languages) use it.

> For anyone coming from a more traditional CS background, it's just extra cognitive overhead in understanding the meat of the argument

I find that curried function signatures are slightly lower cognitive overhead after a bit of exposure, and while I can see that it's the kind of thing that is likely to vary from person to person a bit, I can't see that it's likely to be a big thing either way.


> I wonder why auto-curried function syntax has become so ubiquitous in explaining this.

I think it's primarily because this is the form you use them in in a language like Haskell. Most explanations are exported from Haskell in one way or another, so they're deeply colored by Haskellisms.

It's the same reason why `ap` and `bind` are generally preferred over `times` and `join` -- in Haskell you can even use `f <$> a <*> b`, which is a deeply clever use of infix `fmap` and `ap` that drives a syntactic analogy with regular function application, `f a b`. Try doing that in another language and you'll barf. And do-notation desugars most cleanly into >>=, so that's where Haskell-oriented explanations start from.

If you implement these in Java (which you can, you just can't abstract over T), it looks like this:

    class List<A> {
      <B> List<B> map(Function<A, B> f) { ... }
      
      <B, C> List<C> map2(List<B> other, BiFunction<A, B, C> f) { ... }

      <B> List<B> flatMap(Function<A, List<B>> f) { ... }

      // bonus; note that these make more sense static
      static <A, B> List<Pair<A, B>> times(List<A> as, List<B> bs) { ... }

      // in fact, this one has to be static
      static <A> List<A> join(List<List<A>> aas) { ... }
    }
That is, the second argument `T a` in the top poster's comment becomes the implicit `this` parameter for instance methods. I find the `static` alternatives often make more syntactic sense in context; just goes to show that one language's norms don't always transfer.


You can also look at the Scala implementations if you like. Scala has HKTs so you can write it all directly.

   trait Functor[F[_]] {
     def map[A, B](f: A => B)(x: F[A]): F[B]
   }

   trait Applicative[F[_]] extends Functor[F] {
     def pure[A](a: A): F[A]
     def apply[A, B](f: F[A => B])(x: F[A]): F[B]
     def product[A, B](xa: F[A], xb: F[B]): F[(A, B)]
   }

   trait Monad[F[_]] extends Applicative[F] {
     def bind[A, B](f: A => F[B])(x: F[A]): F[B]
     def join[A](x: F[F[A]]): F[A]
   }


That version is extremely noisy, imo.


I once had to add logging to a large set of methods in C#, so I thought, well, let's make a function decorator like Python or Haskell, there's no reason it wouldn't work on C#.

Well, it indeed worked perfectly well. But the amount of noise added to satisfy the type checker was larger than reimplementing the log procedure at each place.


The version in Java? Yes, because it's Java.


I think it's clearer to think of map as a function

    (a -> b) -> (f a -> f b)
i.e. one which lifts function application over values of the functor. This is then analogous to the role of `f` for mapping types, and corresponds to the concept of a functor in the category of types. This relationship is harder to see in the tupled version. There's a similar argument for e.g.

    =< :: (a -> m b) -> (m a -> m b)


Typo: =<<


> it's just extra cognitive overhead in understanding the meat of the argument.

Compared to other versions I have seen this Haskell-style is the most elegant and understandable.


That’s a very pretty way to lay it out. I’ve never thought of reversing the order of arguments in the monad before, but it makes sense when you see it next to the others.

What about this? Is this even possible?

(T a -> b) -> T a -> T b

I guess to complete the set you can also add this one which is not very interesting, it’s just “apply the function”

(T a -> T b) -> T a -> T b


> What about this? Is this even possible?

> (T a -> b) -> T a -> T b

Yep; that's the signature for the "extend" method of a comonad.

https://hackage.haskell.org/package/comonad-5.0.8/docs/Contr...

Though I prefer the presentation with `duplicate :: w a -> w (w a)`, which is just the signature of Monad's `join` but flipped around.


> (T a -> b) -> T a -> T b

That’s just a normal function application followed by return.


Not at all. There are sensible implementations which aren’t like that. (You might as well say that (>>=) is just function application preceded by ‘unwrap’.) My favourite example is cellular automata: if you have a data structure containing a grid along with a position on that grid, then that function signature corresponds precisely to running a cellular automaton on that grid. The idea is that the initial ‘T a -> b’ function is run for every single position, then all the results are combined into a single ‘T b’ grid.
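
A minimal sketch of that idea with a 1D list zipper (the names here are made up, not a library API):

    data Z a = Z [a] a [a]  -- cells to the left, the focused cell, cells to the right

    extract :: Z a -> a
    extract (Z _ x _) = x

    left, right :: Z a -> Maybe (Z a)
    left  (Z (l:ls) x rs) = Just (Z ls l (x:rs))
    left  _               = Nothing
    right (Z ls x (r:rs)) = Just (Z (x:ls) r rs)
    right _               = Nothing

    -- (T a -> b) -> T a -> T b: run a local rule at every position and collect
    -- the results back into a grid of the same shape
    extend :: (Z a -> b) -> Z a -> Z b
    extend f z = Z (map f (walk left z)) (f z) (map f (walk right z))
      where
        walk step x = case step x of
          Nothing -> []
          Just x' -> x' : walk step x'

    -- toy rule: each cell copies its left neighbour (False at the edge)
    rule :: Z Bool -> Bool
    rule = maybe False extract . left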


Except of course there is no unwrap in the actual definition while both application and return are sure to be there. It’s not like any of this is relevant anyway. The usual abstractions liked by Haskellers are utterly pointless as this whole discussion nicely demonstrates.


> Except of course there is no unwrap in the actual definition while both application and return are sure to be there.

The actual definition of what? `return` is not part of the Comonad class, while `unwrap` (alternatively called `extract`) is not part of the Monad class. No matter what your preferred way of designing abstractions, it seems a strange complaint to say "feature X is useless with abstraction Y" if you don't have a Y in the first place.

> It’s not like any of this is relevant anyway. The usual abstractions liked by Haskellers are utterly pointless as this whole discussion nicely demonstrates.

Well, okay then.

You may not find value in this way of thinking, but other software developers -- many of whom, yes, ship actual software with actual business value -- find these ideas extremely valuable in organizing their thoughts about the problem domain.

I'm open to sharing that mindset with anyone of a curious mind, but most of us aren't going to shove it down your throat. You're welcome to do what works for you.


The important missing context is that a monad has `(return, bind)`, while a comonad has `(extract, extend)`, and both have their own respective laws letting you reason about combinations of those operations.

You can certainly do what you describe if you have a monad, but since you can't even try to unwrap a value if you don't know what the monad is concretely, you have no way to write a function `T a -> b` that's generic over T. Functions `T a -> b` only really make sense for comonads, which give you the `extract` and `extend` functions as primitive.


You might as well say that `(a -> T b) -> T a -> T b` is extract followed by normal function application.


Huh, interesting how we came up with nearly identical comments at the same time! (Though I somehow managed to get the function name wrong…)


I wish people would say map instead of fmap, as fmap is Haskell-specific, and map exists in lots of other languages.


If the examples are in Haskell, it should definitely use the Haskell term, so that people can then run the code.

An alternative might be to have examples in multiple languages, colour-coded.


I think they should use map in the text and fmap in the Haskell examples, while explaining quickly why Haskell uses fmap. You can translate an example with map in Python, JS, Java, etc. fmap is Haskell-specific. I also think we shouldn't tie "functional design patterns" to Haskell so much, but that's a bigger issue.


Seems like this would be a bit confusing though? `fmap` (flatMap) corresponds to `bind` or `chain` (Monad) rather than `map` (Functor) right?


Nope, `fmap` is indeed short for functor map. It's a historical accident in Haskell: `map` was specialised to lists (i.e. `[]`) from the start, and later, when mapping was generalised to all things Functorial, the general version needed a new name so as to not break everything, hence `fmap`.


Awesome thank you for the clarification. I extra agree with the confusion then since `fmap` sounded exactly like `flatMap` to me!


No, "fmap" is actually the name in Haskell given to the function defined by the Functor typeclass. It was named "fmap" ("f"unctor map) because "map" was already taken by the map specialized to lists.


`flatMap` is actually called `concatMap` in Haskell, or more generically the monadic bind operator `>>=`.
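
A quick GHCi session contrasting the three, for anyone tripped up by the names:

    ghci> fmap (+1) [1,2,3]
    [2,3,4]
    ghci> concatMap (\x -> [x,x]) [1,2,3]
    [1,1,2,2,3,3]
    ghci> [1,2,3] >>= \x -> [x,x]
    [1,1,2,2,3,3]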


This is basically exactly how I understand functor/applicative/monad. I wonder how well it explains things to someone who's not as familiar.


I'm someone who doesn't understand monads, and the tutorial basically lost me at "Just what is a Functor, really? Functor is a typeclass. Here's the definition"

Where typeclass links to this other massive tutorial and then my eyes started to glaze over.

(Side note, this is probably the 50th or so Monad tutorial I've read and I still don't understand them. For context, I've been programming professionally for nearly 20 years now. My only conclusion at this point is either Monads aren't actually that important, or I actually do understand some useful subset of what Monads are and I just don't know it, and I just never had a need to use them "fully".)

Edited to add: I think my problem with Monad tutorials is that they all tend to explain the concepts using Haskell, and I can't read Haskell. Its syntax is utterly impenetrable to me and I have no idea what any of the silly symbols mean. And any tutorial that tries to teach Monads using other languages ultimately feels like it's teaching me something I don't need to know... (how to do side effects in a pure functional language, when I don't code in pure functional languages and likely never will.)


There are a good deal of resources explaining these concepts in non-Haskell languages. This one is quite good and in javascript: https://github.com/MostlyAdequate/mostly-adequate-guide

As far as functors go, this could be rephrased more comprehensibly for those unfamiliar with Haskell as "What is a Functor, really? Functor is an interface. Here are some definitions:"

(typescript)

  interface ArrayFunctor<A, B> {
      map: (f: (a: A) => B, arr: Array<A>) => Array<B>
  }

  interface PromiseFunctor<A, B> {
      map: (f: (a: A) => B, arr: Promise<A>) => Promise<B>
  }

  interface ObservableFunctor<A, B> {
      map: (f: (a: A) => B, arr: Observable<A>) => Observable<B>
  }

"definitions" is plural here because typescript doesn't support higher-kinded types or else it might look like this:

  interface Functor<F, A, B> {
      map: (f: (a: A) => B, arr: F<A>) => F<B>
  }

(note, there are ways around this used in the fp-ts library, see here: https://www.matechs.com/blog/encoding-hkts-in-ts4-1)

> I actually do understand some useful subset of what Monads are and I just don't know it

This is certainly the case if you have ever used flatMap on an array or chained Promises (or other async structures like futures in other langs).



I read this example years ago, and it didn't clear anything up for me.

After half a year of trying to understand monads, I saw a video by MPJ/Funfunfunction [0] and it finally made sense.

[0] https://www.youtube.com/watch?v=9QveBbn7t_c


I don't think I understand how this concept has acquired such gravity... but I think I'm coming around to appreciating that it does.

Almost anything that inspires this many people to be interested in learning, teaching, and talking about math can't be a bad thing.


Monads aren't really about math, just like OO isn't really about objects in real life; it's just inspired by them. Learning more about objects in real life doesn't help much when doing OO, just like learning more about math doesn't help much when using monads in programming.


I agree that in principle one ought to be able to do without thinking explicitly about logic, algebra, etc... but is that how it really goes in practice?


It's not, and I think that's why some people take a long time "getting" monads. Lots of tutorials online spend a lot of time talking about category theory to introduce monads. You can see the opposite view in things like this: http://dev.stephendiehl.com/hask/#eightfold-path-to-monad-sa....


So in the last illustration with all three concepts in a row, why does the monad change the wrapped 2 to a 3 that gets shoved into the function? What makes the 2 turn into a 3?


That looks like a... whatever the graphical analogue of a typo is. It seems they mostly reused assets from earlier in the page for that summary.


Ahh, thanks for the reassurance!


Is this a mistake/typo in the article?

  > Just 20 >>= half >>= half >>= half
  Nothing


No: it goes Just 20 → Just 10 → Just 5 → Nothing, because 5/2 can’t be represented as an integer.
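
The article's half is (roughly):

    half :: Int -> Maybe Int
    half x = if even x then Just (x `div` 2) else Nothing

so half 5 is Nothing, and the Nothing then propagates through the rest of the chain.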


Ah, it's integral division.

I'd expect some kind of exception in that case.

This sounds like a dangerous example of monads (since you don't know if you had a Nothing value up in the chain or a valid value that can't be represented)!


> I'd expect some kind of exception in that case.

Sorry, what exactly do you mean by ‘exception’ here?

> This sounds like a dangerous example of monads (since you don't know if you had a Nothing value up in the chain or a valid value that can't be represented)!

Well, this particular case is of course oversimplified. There’s plenty of ways of dealing with different kinds of errors. The easiest is to define a sum type of errors:

    data DivisionError = ValueMissing | NonIntegralValue
Then redefine the function to return an ‘Either DivisionError Int’:

    half x
        | even x    = Right (x `div` 2)
        | otherwise = Left NonIntegralValue
Now chaining several calls to ‘half’ will abort at the first error.
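
A quick check of that chaining behaviour (with a Show instance derived for DivisionError so GHCi can print it):

    ghci> half 20 >>= half >>= half
    Left NonIntegralValue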



