Inventing Monads (stopa.io)
128 points by stopachka on Aug 29, 2020 | hide | past | favorite | 122 comments


> This begins to get us to the fundamental abstraction of a monad: a box, with an interface for map, and flatMap

Before I understood monads, I read variations of the above sentence a million times, always got stuck here: https://i.imgur.com/McThkuh.png

How does this abstraction let us perform IO, do in-place destructive updates, etc? The answer is that it doesn't. These must be primitives supplied by the runtime. A monad like IO has its two functions (return+bind), and a bunch of other magic functions. It's obvious now but that fact is hardly ever stated.

I wonder if anyone has tried explaining from the other direction? Rather than building up to Maybe, try building down from a desire to print Hello World.


What you say is correct, but let me try to explain a little further. The Haskell function `getChar` is used to get a character from stdin. It has the type `IO Char` and is totally, 100% pure. In fact, it isn't even a function, it's just a value.

You can write `getChar` as many times as you want, and you will get the same result every time, and it will have nothing to do with whatever character is being sent to stdin. Instead, what you will get is an instruction, saying "please get a character from stdin".

What the IO monad allows you to do is to compose instructions together. For example, you could write `getChar >> getChar`. This makes a new instruction that says "please get a character from stdin, discard it, then get a character from stdin". The `>>` operator means "follow the instruction on the left, discard the result, then follow the instruction on the right".

Your entire program is made by composing instructions like this, into one new huge instruction that specifies your entire program's behaviour. You assign that to the special name `main`. At runtime, the instruction is executed.

The only bit of compiler magic that's necessary is the code for interpreting the instructions and actually executing them. In theory, although Haskell doesn't let you do this, you could write `getChar` yourself, returning the same value that the built-in `getChar` does, and it would behave identically.
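The "instructions interpreted by the runtime" idea can be sketched in JavaScript (a toy model with made-up names, not how GHC actually implements IO; stdin is simulated by a string so the sketch stays pure):

```javascript
// Toy model: an IO action is plain data. `getChar` really is just a value.
const getChar = { tag: "GetChar" };            // "please get a character"
const pure = (x) => ({ tag: "Pure", value: x });

// bind (>>=): sequence an action with a function producing the next action
const bind = (io, next) => ({ tag: "Bind", io, next });
// then (>>): do the left action, discard its result, do the right action
const then = (a, b) => bind(a, () => b);

// The interpreter is the only place anything "happens" -- it stands in
// for the Haskell runtime. `input` simulates the stdin stream.
function run(io, input) {
  switch (io.tag) {
    case "Pure":    return { value: io.value, rest: input };
    case "GetChar": return { value: input[0], rest: input.slice(1) };
    case "Bind": {
      const r = run(io.io, input);           // run the first instruction
      return run(io.next(r.value), r.rest);  // feed its result onward
    }
  }
}

// getChar >> getChar: read a character, discard it, read another
const main = then(getChar, getChar);
// run(main, "ab").value === "b"
```

Composing `getChar` any number of times just builds a bigger data structure; nothing is read until `run` walks it.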

This, I feel, is the simplest explanation for why monads are necessary in Haskell. They allow you to conveniently specify what you want your program to do, using only pure functions and values. Why would you want all values to be pure? Many don't, because it's kind of a pain, but Haskell exists in part to see if a lazy, pure functional language can be fun to use, and many find that it is. Hope this helps someone!


Great description of IO, but I think it doesn't actually motivate monads. Your example of `getChar >> getChar` could be simply `[getChar, getChar]`. Why can't we just compose instructions into an ordinary list?

The answer is that we can, and Haskell used to work this way [1]! So, as 'chowells' observed, the monad part of IO is almost incidental.

1: https://stackoverflow.com/questions/17002119/haskell-pre-mon...


> Your example of `getChar >> getChar` could be simply `[getChar, getChar]`. Why can't we just compose instructions into an ordinary list?

Because you can't pass results from an earlier list element into the computation of a later element. You'd need a list with existentially-quantified element types such as:

  {-# LANGUAGE ExistentialQuantification, GADTs #-}
  foreign import ccall "exit" exit :: Int -> IO ()
  infixr 1 :>

  data IOList a = forall b. (:>) (a -> Act b) (IOList b)
  data Act b where
    Getchar :: Act Char
    Print :: String -> Act ()
    Exit :: Int -> Act ()
  
  main' :: IOList ()
  main' = (\_->Getchar)
       :> (\c->Print ("Got '"++[c]++"'\n"))
       :> (\_ -> Exit 0)
       :> undefined
  
  runio :: a -> IOList a -> IO ()
  runio a (f:>fs) = flip runio fs =<< case f a of
    Getchar -> getChar
    Print s -> putStr s
    Exit n -> exit n
  
  main = runio () main'
where f :> g :> h :> ... is your list of instructions (rather than f : g : h : ..., which doesn't allow the response type of f to be related to the argument type of g).


Ah, thanks - this is a really nice explanation. From reading a few "what is a monad" posts in the past, I get how a monad can chain functions in sequence for imperative-style execution, but how the information from `getChar` actually got in was never clear. It all comes down to the runtime, I guess.

I'm favoriting this comment so I can come back the next time I forget how all this works :)


Although true, I'm not sure much of that provides clarity. E.g., "Your entire program is ... one new huge instruction" - that is true of any program, and it doesn't sound like a useful abstraction for reasoning about programs. The idea that getChar returns an instruction rather than a character is similarly not an obvious improvement over getChar() returning a character. It is quite a subtle complaint that getChar() returning a char is suboptimal.

Everything I've seen suggests monads are confusing because they solve a problem imperative languages don't think they have. As the article points out,

  function getDisplayPictureFromId(id) { 
    const user = getUser(id)
    if (!user) return
    const profile = getProfile(user)
    if (!profile) return
    return getDisplayPicture(profile)
  }
is totally fine and idiomatic in most imperative languages. Once a programmer starts chaining functions together (as functional programmers are wont to do), the "if (!user) return" becomes a substantial blocker that needs a solution - which leads very quickly to monads and abstracting away IO.


That's not fine and idiomatic in any language I know.

This would be:

    function getDisplayPictureFromId(id) {
        return getUser(id)?.profile?.displayPicture
    }
If the language does not have the null-safe operator '?.' you just write it:

    function nullSafe(obj, callback) {
        if (obj) {
            return callback(obj)
        }
        return null
    }
    function getDisplayPictureFromId(id) {
        return nullSafe(getUser(id),
            (user) => nullSafe(user.profile,
                (profile) => profile.displayPicture)
        )
    }
Not as nice in this case, but with a few similar functions for composing other functions, you achieve similar benefits to Monads without having the complexity of higher-kinded types.
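For example, a sketch of a Maybe-like wrapper (hypothetical names) whose flatMap plays the role of nullSafe:

```javascript
// Maybe-like wrapper: flatMap short-circuits on null, so the chain
// reads top-to-bottom with no explicit null checks.
const Maybe = (value) => ({
  value,
  flatMap: (fn) => (value == null ? Maybe(null) : fn(value)),
});

// Assumed lookup helper, for illustration only:
const getUser = (id) =>
  id === 1 ? { profile: { displayPicture: "cat.png" } } : null;

function getDisplayPictureFromId(id) {
  return Maybe(getUser(id))
    .flatMap((user) => Maybe(user.profile))
    .flatMap((profile) => Maybe(profile.displayPicture))
    .value;
}

// getDisplayPictureFromId(1) === "cat.png"
// getDisplayPictureFromId(2) === null
```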


> In fact, it isn't even a function, it's just a value.

What's the difference?


Functions are values; values that are not functions are simply those that cannot be applied to an argument.


I think most of this happens because lots of people say things like "the IO monad" when they actually just mean "IO". It misleads people into thinking the monad part is important. It isn't. IO is the tough concept. Monad is almost trivial.


Good point, I agree. But you must learn it to get anything done. Grappling with IO + monad + do-notation all at once is hard!

I once spent a day thinking I had misunderstood something fundamental, but really I just had an errant tab instead of a space.


Ok, yes. Monads as a concept are almost trivial in comparison to thinking in terms of the IO type. But that doesn't mean they are easy and don't add more hurdles to get over. I think I undersold how daunting it can be to face all the new things at once, and you're right about it.


As a recent Haskell learner, I found the indentation part of Haskell hard to learn and a little undiscoverable/hard to google.

The number of times I've used curly braces, single lines or expression substitution to bail me out is many...


You might like hinc, it's Haskell but with C-style syntax: https://github.com/serras/hinc/blob/master/why.md


Thanks for the suggestion but Haskell would not be Haskell without its ML-yness.

As confusing as operator precedence can get, I do see the beauty in "programmable whitespace", but to get anywhere close in other language syntaxes you need to do even more perverse things, like here [1].

If I did have to choose a different syntax for Haskell, it would be S-expressions rather than Algol, so perhaps Hackett from Lexi Lambda.

[1] https://stackoverflow.com/a/48553568


I like it.

"This is not by coincidence: people learning Haskell usually have problems with point-free style not because of the composition operator per se, but because code is suddenly "reversed". Other communities such as F# have adopted the "pipe forward" operator instead of composition as the default style."

That's why I prefer the reverse application operator `&` to `.`


+ type inference + return-type polymorphism!


Yeah, this is like saying objects let us perform IO in Java, and then going deep into explanations of what objects are and what rules they obey (Liskov substitution principle, etc.) - instead of just explaining that you call println() to write "hello world".


We often forget how much assumed knowledge (and accepted knowledge) went into learning OOP.

Turing Machines and the Lambda Calculus are two entry points into computational thinking; is it really so alarming that, if you've only learned one, trying to learn the other will make you feel like a beginner all over again?


Y'know what, maybe looking at it like that is actually one of the problems: We don't learn our first imperative language(s) by way of Turing machines. That theory-level understanding comes much later, after we have a handle on the language itself.

When first learning, the most common abstractions are variations on "a list of steps" - like an instruction manual or To-do list - without touching on memory.

I wonder if there's some sort of similar abstraction here that could be used instead, for the just-getting-started stage...


Excel function pipelines which take inputs to outputs. People build wonderfully useful things in Excel using its standard library of spreadsheet functions. They are all referentially transparent.

In imperative steps of code, time flows down your code.

In functional code your flow of time is each function application.


This is what I call haskellsplaining - explaining a concept from first principles and mathematical definitions rather than from how it is used and what it is useful for.


I think the sticky wicket with Haskell is less the terse syntax and more the execution model.

Why not start enumerating the lines of a file, and show how we're building up a list of thinks, and there is no actual variable representing the line numbers because state is for Statists?


Thunks


Thanks


to me the issue is twofold:

- cultural reliance on lambdas as the universal force (nothing against lambdas, but outside the fp world, there will be too much culture shock)

- representation as computation, the monad is only kinda building a graph to be evaluated, and each node can be wrapped with rules as you see fit


> the monad is only kinda building a graph to be evaluated

That's what the free monad does. Not every monad is a free monad, IO certainly isn't.


Why isn't IO free?


Come to think of it, it could be. That's just not how it's implemented in any Haskell implementation I know of.


Hmm. Your last line makes me think that they're, essentially, using monads to represent S-expressions. That seems... somewhat clumsier than just using S-expressions. You gain a really good type system, though.


I'm so not an expert, I hope people don't read me too seriously. It's just my understanding of the FP world. A lot of the time they will encode ideas in threaded closures that do nothing until they're consumed (the programmable semicolon doing the 'interpretative' part), which is a bit like a DSL tree and not so removed from a ~lisp.


Hopefully Java programmers are used to lambdas since Java 8


It's not the same context, really. A lot of languages have lambdas, and had them before Java did (JS, Python, ...), but in those languages it was an obscure feature.

In lisp/FP it was the "only" feature. People were thinking in terms of how to encode anything with lambdas and not with the usual imperative traits (prog, do, loop). Consider that in '75, the first Scheme paper was already talking about CPS (aka pre-monadic style).

I'm not making a value judgement; it's just that using parameterized code blocks in LINQ select/where, Java streams, or Python map/reduce is only the very first step in a long series of stairs. And monads are on the 3rd floor, kinda.

So to the fp crowd it's business as usual, but for the rest of the world it's a twisted idiom at first.


A monad really is just a wrapper around a value, with flatMap and return.

For some kinds of monads, like maybe, that’s all you need.

But, some other kinds of monads need more logic. List is one, IO is another.

In OOP languages, sometimes all you need is the parent class. But sometimes you need to extend the parent and add more.
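JavaScript arrays are a handy illustration here, since Array.prototype.flatMap is built in: wrapping a value plays the role of `return`, and flatMap supplies the list monad's extra logic of flattening:

```javascript
// The list monad in JavaScript: `unit` wraps a value in a singleton
// array, and the built-in Array.prototype.flatMap is `bind`.
const unit = (x) => [x];

// A function producing "many results": each number and its negation
const withNegation = (n) => [n, -n];

// flatMap applies the function to each element, then flattens one level
const result = [1, 2].flatMap(withNegation);
// result is [1, -1, 2, -2]

// unit(5).flatMap(withNegation) is [5, -5]
```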


It's actually more like an abstract class! Even the Maybe monad has some implementation, it's just super simple :)

https://hackage.haskell.org/package/base-4.14.0.0/docs/src/G...


Totally agree, that’s a good point.


In various places on the web one can find the quote that 'Dependency injection is a 25-dollar term for a 5-cent concept'.

I feel it is the same with monads including all the false suggestions that one might need to understand category theory and similar such nonsense.


> suggestions that one might need to understand category theory and similar such nonsense

It’s definitely unnecessary to understand category theory to work with monads. What is helpful, however, is having some comfort with math.

What does that mean exactly? It’s a level of comfort in working with definitions, properties, operations, special elements, proofs. People can get so frustrated because they think they don’t understand what a monad is. Like they want to hold it in their hand the way they would an apple or a tennis ball.

When you’re comfortable with math you kind of lose that need to think about an object concretely. You start to only care about the definitions, properties, axioms, laws, theorems, etc that concern a particular object. Then you just play around with a few examples and see the implications of these things. That’s all there is to it. The power comes from the abstraction. It can take time to become comfortable with abstract concepts though.


Another thing that is often missed is that we think we "understand" something when in fact we have just gotten used to it. Even such a simple concept as "number" would probably be very difficult to explain to someone who either doesn't know what a number is or wishes to "really understand" it.


Oh yeah. The history of numbers is long and complicated. I think today we take for granted the idea that numbers are objects (in some abstract sense). In the past, there was simply no concept of number as a thing. Numbers were used for counting or measuring, so they existed only as adjectives attached to their objects of counting/measuring, not nouns in their own right.


Well, the natural numbers are the decategorification [1] of the category of finite sets. I won't speak to other kinds of numbers, but it's funny how category theory helps answer that question as well.

1: https://math.ucr.edu/home/baez/week121.html


If only DI had a clear definition like "monad" does, I might be able to understand what it is.


It just means passing a dependency as an argument to a method or constructor. Seriously, it is just that!
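A minimal sketch in JavaScript (hypothetical names):

```javascript
// Without DI, greet would reach for console.log itself. With DI, the
// dependency (the logger) is just a parameter, so callers can swap it.
function greet(logger, name) {
  logger(`hi ${name}`);
}

// Production: greet(console.log, "Ada")

// Test: inject a logger that captures output instead of printing it
const lines = [];
greet((msg) => lines.push(msg), "Ada");
// lines is ["hi Ada"]
```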


Which, in FP, is all but invisible: the arguments to a function literally are its dependencies.

I think OOP has more primitive concepts (and more mutation) than FP, so dependency injection in OOP also includes object construction and often mocking effectful operations. That's why it gets its own name in OOP, while being more of an ambient idea in FP.


I don't think I agree with that. In Scheme you can write (display "hello world") inside a function and this is directed to some globally configured port. If either the port or the display function were passed as a parameter to the function, then it would be dependency injection.


On its face, you're right. You can formalize this approach using dynamically-scoped variables, which is a step on the road toward coeffect systems in typed functional languages. But in a coeffect system, the type of `display` would explicitly call out that its environment must provide the necessary dynamic variables, which brings us back to dependency injection. `display`'s dependencies would be injected via dynamic scope, which you can override in the caller by defining a dynamic binding.

As an alternative, I would suggest that `display` is special, and that instead of thinking of passing an extra parameter to `display`, that the module that calls it should instead have `display` itself injected.


Yes that is actually what I meant, I guess I was not being clear. The function calling "display" could have "display" passed as an argument rather than calling it as a globally defined function.

Relying on dynamic variables would probably not be considered dependency injection. A major purpose of DI is that dependencies should be declared in the signature, making it explicit which dependencies a function or object depends on.


> A major purpose of DI is that dependencies should be declared in the signature

Yes, and a coeffect system would cause dependencies on dynamic variables to be declared statically, even if they're provided by "the environment" at runtime.

Tomas Petricek's PhD project page is a good introduction to coeffect systems, and it illustrates dynamic variables as an example. http://tomasp.net/coeffects/


I used to think DI just meant dynamic binding, but someone told me that for business code, having a special category of arguments to be tweaked as you see fit is useful. It's a variable at the system level, maybe?


Imagine a function, that accepts another function as an argument.

Badabing badaboom, the essence of dependency injection


Isn't that Inversion Of Control, not Dependency Injection? I've always thought of Dependency Injection as about "declaring a class in terms of its dependencies' interfaces, but allowing a framework to take responsibility for instantiating the actual dependency objects" - whereas "a function that accepts a function" is IoC (e.g. https://kentcdodds.com/blog/inversion-of-control/)

They're very closely related concepts - both abstractly saying "let me define a unit of logic in terms of how it composes passed-in units of logic" - but the distinction between "objects that need to be instantiated and injected at construction time" and "functions that are passed in at runtime" is pretty large.


I think the distinction may blur a bit if you consider classes as a fancy way of writing higher-order functions.

i.e:

You can think of a class as a higher order function, that takes in a list of arguments (constructor), and returns a list of functions, that are defined within the closure of those arguments

--

Hence, to me, the essence of these things is the same - though you are right that when people talk about dependency injection, they are also implying a specific way that these dependencies are provided (by the framework, usually magically)
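The class-as-higher-order-function idea can be sketched like this (hypothetical example):

```javascript
// A "class" written as a higher-order function: the constructor
// arguments live in the closure, and the returned object of functions
// plays the role of the instance's methods.
function makeCounter(start) {          // the "constructor"
  let count = start;                   // a private field, closed over
  return {
    increment: () => { count += 1; return count; },
    current: () => count,
  };
}

const c = makeCounter(10);             // "new Counter(10)"
c.increment();
// c.current() === 11
```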


> You can think of a class as a higher order function, that takes in a list of arguments (constructor), and returns a list of functions, that are defined within the closure of those arguments

That's a really great definition of a class (when used as a "service" class, rather than a data class or an enum class, for example). And to me, it shows that FP and OOP are really two sides of the same coin. Limit OOP to immutable data structures, and you get something extremely close to FP.


> Isn't that Inversion Of Control, not Dependency Injection?

Dependency injection is one of the techniques used to implement inversion of control.

> I've always thought of Dependency Injection as about "declaring a class in terms of its dependencies' interfaces, but allowing a framework to take responsibility for instantiating the actual dependency objects" - whereas "a function that accepts a function" is IoC (e.g. https://kentcdodds.com/blog/inversion-of-control/)

Nope, dependency injection is about passing dependencies to the objects that depend on them (i.e., injecting the dependency), instead of letting the object itself instantiate them directly or actively request access to them.

You may or may not depend on a framework to manage dependencies (service locator) and pass them to instances (service injector), but those are just helper components that assist with the whole dependency injection workflow. The key point is that dependencies are passed to the objects that depend on them.


There's also the classic http://blog.sigfpe.com/2006/08/you-could-have-invented-monad...

Written by a 3-time Oscar winner, no less!


Looks like there has been only one small discussion from 2009: https://news.ycombinator.com/item?id=958789


> Written by a 3-time Oscar winner, no less!

Sorry, what? Their About page is blank.



2015 - Technical Achievement Award, shared with Kim Libreri, George Borshukov: For their pioneering work in the development of Universal Capture at ESC Entertainment.

2014 - Technical Achievement Award, shared with Olivier Maury, Ian Sachs: For the creation of the ILM Plume system that simulates and renders fire, smoke and explosions for motion picture visual effects.

2001 - Technical Achievement Award, shared with George Borshukov, Kim Libreri: For the development of a system for image-based rendering allowing choreographed camera movements through computer graphic reconstructed sets.


To understand monads, I think it helps to know a language where expressing such a concept is more natural. Although I do not claim to know 100% what monads are yet, coming across these features made it easier for me to understand the concept a little bit.

- Algebraic data types

- OCaml's let expressions

- F#'s computation expressions

I also think that it is important to write something that naturally requires the use of monads. One thing that springs to mind is Parser Combinators.

- https://fsharpforfunandprofit.com/posts/understanding-parser...

- https://www.youtube.com/watch?v=N9RUqGYuGfw


I really like how this article builds up from a simple real life use case.

Shameless plug for my own article that did something similar about a week ago: https://medium.com/@ameltzer91/an-easy-to-understand-monad-g...


Nice article, I linked to it from my tumblelog https://plurrrr.com/archive/2020/08/19.html 10 days ago. Thanks for writing this.


Good article! I enjoyed that you used flatMap. And the FinancialRegulatorMonad is a great way to show that monads are not just IO, State or Maybe. :)


Thank you for the article. I had an idea of what Monads were and even played with them a bit in Rust, but I didn't know they had applications more specific than Result/Option types.


Thank you for the kind words, and nice article! : )


I think the hard part for most people is moving beyond Maybe/Either. It’s hard to find motivating examples when most languages lack denotational semantics and everything implicitly happens inside IO all the time.


That is something I've struggled with as well. Do you have any suggestions for resources that could help overcome that challenge?


Nothing off the top of my head. The way I really got monads was just writing Haskell code. It's one of the very few programming languages with a clear denotational semantics for everything (as a result of being pure), and you end up with lots of simple problems like "how do I keep this value around and 'mutate' it" that are conveniently addressed by a monad like the State monad.


If you come from a C# background, this book is excellent: https://www.manning.com/books/functional-programming-in-c-sh...

It does have some minor shortcomings in my opinion (e.g. if you haven't already started on the path of reinventing monads yourself you may struggle to immediately understand why this is a big deal and how it will save your life, the author should have used LanguageExt instead of their own library as LanguageExt is actively maintained and extremely well thought out, the book stops just short of becoming practical in the sense of "here's how to start a new C# project while thinking in functions", etc.).

You might also consider the LanguageExt guide itself: https://github.com/louthy/language-ext/wiki/Thinking-Functio...


I actually found the LangExt package a few months ago, when I started a new job and had to use C# as the default language (I've primarily been using Python, Typescript, and Rust in my previous jobs)

I spent some time explaining the advantages of an FP approach to my colleagues and LangExt has started to pop up in their PRs, which I'm very happy about

It's definitely a great library and I'm really impressed with the effort that's being put into it!

I'll give the book you're suggesting a read. I think it might help cover some of the gaps in my knowledge, which is exactly what I'm looking for, so thank you for the suggestion


As someone who isn't familiar with functional programming, what benefit does this give us over throwing an error when trying to access a resource that doesn't exist?


WRT this specific post: Graceful error propagation with zero boilerplate code for passing through errors. If all you need to know is "nope" at the end of some computation, you won't have to handle all different "nopes" you encounter en route. Or the other way round: You can write lots of code without checking input values (for null, in this case) because you have a guarantee it won't be called if the input value is invalid (null here).

EDIT: I have serious problems with the post, because a) it claims that discovering one application of monads is understanding monads, b) for me the true strength of monads shines in strictly typed languages, everything else is just an approximation of the concept.


A monad's special function application lets you write much simpler code in certain situations.

Say you're working with some data structure that contains/emits numbers: a pointer to a resource containing a number. A list of numbers. A function that returns a number. An optional number (or null).

A common operation is unpacking that structure to get a number, applying a function to the number, and packing it back up: Reading from the pointer, applying the function, and returning a pointer of the result. Applying the function to each element of the list and returning a list of the results. Composing a function with another function. Applying a function to the optional number or just returning the null.

When you're writing code like this, it's error-prone to do the unpacking, application, and repacking. It's much simpler if you can write code that looks like `def f(x): return exp(x)/x + 23`. Much more testable, too. If you have two or more of these structured things, it gets even more error-prone. It's much easier to write code that takes three integers and does stuff, instead of writing code that takes three pointers/lists/functions/optionals.
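For instance (a JavaScript sketch with made-up names), here is the by-hand version next to the generic-map version of the same update:

```javascript
const f = (x) => x + 23;   // the simple code we actually want to write

// By hand: unpack, check, apply, repack -- repeated at every call site
function applyToOptional(maybeX) {
  if (maybeX == null) return null;
  return f(maybeX);
}

// With a generic map per structure, f itself never changes:
const mapOptional = (fn, x) => (x == null ? null : fn(x));
const mapList = (fn, xs) => xs.map(fn);

// mapOptional(f, 2) === 25, mapOptional(f, null) === null
// mapList(f, [1, 2]) is [24, 25]
```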

Monads are part of a hierarchy that abstracts that. Anything that defines that sort of function application in a particularly convenient way is a monad. There's more to it, but that's why it's useful.

It lets you write code dealing with the things in your data structure, letting you mostly ignore the structure itself.

-------

In this specific situation, say you want to replace your error handling with something else. Maybe it writes to a log file then errors. Or maybe it does something fancier. Or maybe you even change the way you get the resource as well as the erroring to something fancy. As you swap out the "structure" code, with a monad it's just switching to a new monad, rather than refactoring the business logic related code. It's a nice separation of concerns.


> Anything that defines that sort of function application in a particularly convenient way is a monad. There's more to it, but that's why it's useful.

Aaand with this statement you skipped over what IMO is the most important missing piece, because everything above it fits higher-order functions such as map(), which as far as I understand aren't monads.


I disagree. Map (or a functor) alone can't do everything I've said. There are a few reasons. I assume you're already familiar with the subject matter, so I'm going to cross my fingers that you know it in haskell and use that notation and terminology to save us time.

tl;dr map works for applying the simplest functions to "containers." To apply more interesting functions, you need bind, pure/return, and/or whatever applicative's <*> is called.

Let's say you're working in something like Maybe. You might want to write some code like

    f :: Float -> Float
    f x = 5 + x
In that case map is fine. fmap lets you focus on simple code like that. Or maybe you want to write

    f :: Float -> Maybe Float
    f x = if x == 0
          then Nothing
          else Just (5 / x)
In that case you need more than map, because you don't want to deal with what map would give you: a Maybe (Maybe Float). (>>=) lets you still focus on simple code like this, since you don't have to deal with any unpacking/flattening, which is what I'm saying makes monads so useful.

More importantly, you need more of the FAM hierarchy than map if you care about multivariate functions, which I'd say is most code. Let's say we're working with

    f :: Float -> Float -> Float
    f x y = x + y
We want to write code like that and use some version of function application (like map). If we just use map, we get the following

    f <$> maybeX :: Maybe (Float -> Float)
which isn't at all what we want, because we can't apply it to a maybeY (or even to a float y, which `pure`/`return` lets us treat as a maybeY). If we define an additional way to apply that Maybe (Float -> Float) to a Maybe Float, we've defined Applicative, forcing us to go beyond a functor.

The motivation and usefulness is the same throughout: we just want to write code and apply functions that don't care about the structures emitting/containing our inputs and outputs. It just turns out that there are three cases depending on the kinds of functions we're writing and applying

   f :: a -> b  -- functor is sufficient, like you say.
   f :: a -> b -> c -- functor isn't sufficient. applicative is.
   f :: a -> m b -- functor and applicative aren't sufficient. monad is.
I wrote a series of posts ages ago deriving all these from that one motivation (in a more fleshed out manner) http://imh.github.io/2016/05/26/why-monads.html


> I assume you're already familiar with the subject matter

Nope. I think this is why you don't see where the previous post falls short, too much familiarity so you don't realize you're skipping over important aspects.

> so I'm going to cross my fingers that you know it in haskell and use that notation and terminology to save us time.

I get just enough Haskell to understand the first 3 code blocks, and can guess what the 4th is depicting, but am not sure.


Sorry, I thought when you said missing piece in your previous message, that you meant the missing piece of what makes monads unique (arguing from a place of knowledge about “no THIS is what makes monads important”), rather than the missing piece of the explanation. Either way, hopefully the posts I wrote and linked are written assuming nothing more than the basic Haskell syntax, so if you’re curious about the rest of the explanation, hopefully it’s clearer there. If it’s not clear there, I’d appreciate any feedback.


Well, nothing stops you from implementing an equivalent wrapper using error mechanics (that could actually make it faster in some cases), but turning this idea into a first-class value allows you to abstract over it easily. E.g. in Haskell, the base library comes with lots of functions for manipulating monadic wrappers in generic ways: mapping, sequencing, and threading through them in a common way. Once you start using them, you realise that a lot of business logic that looks perfectly reasonable in common languages ends up being boilerplate that can be avoided easily with a simple combinator.

A few of them are actually bound to syntactic sugar known as "do-notation", which lets you write the sequenced code from the post as if you were binding simple variables, adding branching, effectful statements, or auxiliary definitions along the way. This really pays off when you start turning simple monads into so-called "monad transformers", which let you stack multiple behaviours/wrappers on top of each other while keeping the same pretty do-notation untouched.
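As a minimal sketch of that sugar: the two definitions below are equivalent, since the do-block desugars into nested (>>=) calls.

```haskell
-- do-notation version: reads like binding simple variables.
pipelineDo :: Maybe Int
pipelineDo = do
  x <- Just 10
  y <- Just 32
  pure (x + y)

-- What it desugars to: explicit (>>=) with nested lambdas.
pipelineBind :: Maybe Int
pipelineBind = Just 10 >>= \x -> Just 32 >>= \y -> pure (x + y)

main :: IO ()
main = print (pipelineDo, pipelineBind)  -- (Just 42,Just 42)
```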


Throwing an error implicitly makes a decision: that there's no way to recover the program to a working state.

A lot of the time that's a perfectly fine decision, but six months down the road, when the code has grown a lot, you might find an alternative way to get the resource after a failure.

If you made the decision to throw an exception immediately after failing to get the resource, you now have to either rewrite the logic, which can be very expensive, or catch the error, which bloats the code (the throw is now redundant, and it's "fixed" by adding code that catches the exception).

By instead putting the return value in an appropriate monad, you can postpone throwing the exception until you're sure that there's no way to recover.

Throwing an exception is still occasionally necessary, but it should not be done until there's no possible way to recover, and it should be done in a way that's easy to rewrite if a way to recover becomes available at some future point in time.
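A small sketch of that idea, with hypothetical Either-returning fetchers (getFromCache and getFromDisk are made-up names): because the failure stays a plain value, adding a recovery path later is a local change, with no try/catch plumbing to rewrite.

```haskell
-- Hypothetical resource fetchers that report failure as a value
-- instead of throwing immediately.
getFromCache :: String -> Either String Int
getFromCache _ = Left "cache miss"

getFromDisk :: String -> Either String Int
getFromDisk _ = Right 42

-- The fallback added "six months later": a purely local change.
getResource :: String -> Either String Int
getResource key = case getFromCache key of
  Right v -> Right v
  Left _  -> getFromDisk key

main :: IO ()
main = print (getResource "user:1")  -- Right 42
```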


One is just plain code that you write in a library. The other is a special construct in the compiler that solves only this specific use case. Your question is the wrong way around: you should be asking why you need specialized compiler support for just that use case.

Notice that that short introductory article already has examples of two different monads. People use many more of them.


It’s not good if each library/component/framework or other unit of independently developed code uses a different error handling scheme.

So the ability to define this independently isn’t useful. You want to create a standard. Whether that goes into the standard library, or gets a little language syntax support as well, is something you can argue about, but is pretty arbitrary and probably just comes down to the style of the language.

The burden on the developer is equal: the hard part is learning the conceptual patterns, how to compose solutions in terms of them, and how other libraries you use expect you to use them.


> Whether that goes into the standard library, or gets a little language syntax support as well, is something you can argue about, but is pretty arbitrary and probably just comes down to the style of the language.

Not really. That's maybe the impression you might get, but it's not true. The "language syntax support" isn't really syntax; it is a deep conceptual change to the language. Instead of functions returning one value, they _can always return 2 different types_, and you don't know whether they really do unless you know the implementation (which sometimes you can't).

That not only causes pain on a daily basis for most developers, it especially causes pain for library authors (which you are maybe not, so it is not as visible to you), and it especially causes trouble down the road with other concepts.

As an example, check out Java's try-with-resources. It is a bandage over a bandage, and while it improves things, it is difficult to get right and has a lot of corner cases and really strange behaviour, especially in combination with constructors.

With plain simple language features, something like that does not happen.


> a different error handling scheme

Monads aren't about handling errors. That's just one of the two examples in the article (and yes, that one is in the standard library), and it's already used for more than errors.

> You want to create a standard.

Monads are a standard. That's basically all there is to them. If you do it right, most of your monadic code won't even know which monad it's running in.
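For instance, a helper written against only the Monad interface runs unchanged in Maybe, lists, IO, and so on (pairUp is an illustrative name):

```haskell
-- This helper uses only the Monad interface; it has no idea which
-- monad it runs in.
pairUp :: Monad m => m a -> m b -> m (a, b)
pairUp ma mb = do
  a <- ma
  b <- mb
  pure (a, b)

main :: IO ()
main = do
  print (pairUp (Just 1) (Just 2))  -- Maybe: Just (1,2)
  print (pairUp [1, 2] "ab")        -- lists: [(1,'a'),(1,'b'),(2,'a'),(2,'b')]
```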


This is one example of a monad, probably not the most compelling if your language supports exceptions already (apart from having more explicit types).

But there are many other examples that are useful in practice (IO, streams, parsers, lists, operations in context, futures, etc as mentioned in other threads). A monad is the interface you need to implement for each of these to compose nicely.

Then one day you'll need to compose Foo's, so you'll ask yourself "is there a monad for Foo's?" and if there is the code generally writes itself.


You need a special language feature to "throw an error", which can break your reasoning about code. A seemingly harmless refactor like swapping two lines might completely change your behaviour because one of those lines actually threw an error. It becomes very difficult to do things like manage a resource properly (ensuring it's always released), to the point that you probably end up adding more special language features to handle that.


What if you need to access 5 resources in a row, any of which can throw exceptions? A monad centralizes the repeated logic in its flatMap function.
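A sketch of that, with hypothetical Maybe-returning lookups (the names mirror the example elsewhere in the thread): (>>=) performs the "did the previous step succeed?" check once, in the monad, instead of repeating it at every call site.

```haskell
-- Hypothetical lookups; each may fail, returning Nothing.
getUser :: Int -> Maybe String
getUser 1 = Just "alice"
getUser _ = Nothing

getProfile :: String -> Maybe String
getProfile "alice" = Just "profile:alice"
getProfile _       = Nothing

-- (>>=) short-circuits on the first Nothing, so the failure check
-- is written once, inside the Maybe monad.
thumbnail :: Int -> Maybe String
thumbnail uid = getUser uid >>= getProfile

main :: IO ()
main = print (thumbnail 1, thumbnail 2)  -- (Just "profile:alice",Nothing)
```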


I think the comparison is to something like this

  try:
    user = getUser()
    profile = getProfile(user)
    pic = getProfilePicture(profile)
    thumb = getThumbnail(pic)
    return thumb

  except Missing:
    return None


That example seems a bit odd as a design. You're throwing an exception to represent "no result", but then suppressing it to convert the exception into a None. Monads give you the convenience while staying consistent.


Well, the problem here is that using exceptions to implement the program’s logic is considered bad practice.


It's the structure you need to be able to throw errors.


That code that "is getting pretty ugly" seemed ok to me.


Sure, that example wasn't that bad. But if your specific monad is somewhat more complicated (e.g. state, either, promise) and you're using it all over the place, all that boilerplate is going to obscure your domain logic pretty quickly.

Monads can help readability by focusing on the composition of your domain logic instead of writing out the same low-level boilerplate over and over again.


I had a similar example today in the functional programming slack.

> hello, quick question: i feel like there's a more elegant way to express this, but I'm struggling to come up with one:

    foo :: (a -> Bool) -> (a -> Bool) -> a -> Bool
    foo f g x = f x && g x
For those who aren't as familiar with Haskell, that takes a function f and a function g, passes the argument x to both, and then ANDs their boolean results.

One answer suggested:

> You can get fancy, but the “simple” version is almost always more readable:

    (not . null . f $ x) && (not . null . g $ x)
Which I find myself agreeing with in many ways, but for some reason leaves me desiring more.

The answer to that desire complements this article I think:

> if f and g returned Maybe you could have:

    > f _ = Nothing
    > g _ = Just ()
    > x = ()
    >  (f x) <*> (g x)
    Nothing
    > import Data.Maybe
    >  (f x) <*> (g x) & isJust
    False
    > -- or to avoid <*> you can do
    > liftA f g x
    Nothing
If that's not clear, let me know what's confusing and I'll try to explain further


So, I'm a bit rusty on Haskell, but I have some notes on a similar concept. Essentially, fmap with a twist: instead of applying the same function to a list of values, you have a list of functions that you want to evaluate on the same value. "fpam". In this case, we're dealing with a list of size 2.

  fpam :: [(a -> b)] -> a -> [b]
  fpam fns v = fns <*> pure v
After that, fold the list with boolean &&:

  foo :: (a -> Bool) -> (a -> Bool) -> a -> Bool
  foo f g x = foldr1 (&&) ( fpam [f,g] x)
or alternatively with no helper functions

  foo2 :: (a -> Bool) -> (a -> Bool) -> a -> Bool
  foo2 f g x = foldr1 (&&) ( [f,g] <*> pure x)
for example

  Prelude> foldr1  (&&) ( [ (==3), (==4) ]  <*> pure 3 )
  False
Alternate implementations:

    import Data.List
    import Data.Function
    import Control.Monad.State
    import Data.Foldable
    import Control.Arrow((>>>))
    import Control.Monad.Reader
    import Control.Monad.List

    applyList :: [(a -> a)] -> a -> a
    applyList list = execState $ for_ list modify

    applyList2 :: [(a -> a)] -> a -> a
    applyList2 = foldr1 (>>>) 

    fpam :: [(a -> b)] -> a -> [b]
    fpam fns v = fns <*> pure v

    fpam2 :: [(a -> b)] -> a -> [b]
    fpam2 fns = runReader $ forM fns reader

    fpam3 :: [(a -> b)] -> a -> [b]
    fpam3 fns v = fmap (\f -> f v) fns

    fpam4 :: [(a -> b)] -> a -> [b]
    fpam4 fns = runReaderT $ do
      fn <- lift fns
      reader fn



Catching up on comments and kind of fuzzy, but this is pretty cool :)

    foldr1  (&&) ( [ (==3), (==4) ]  <*> pure 3 )


I'll occasionally hide the monomorphic && and define something like:

    class Predicate a where
        (&&) :: a -> a -> a
        not :: a -> a
        etc :: ...
and then define the obvious instances for `Bool` and `Predicate b => a -> b`

It's really nice to be able to just write `(> 6) && even` or `isAlpha || (== '_')`
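A completed, runnable version of that sketch might look like this; the instances below are my guess at the "obvious" ones mentioned above.

```haskell
import Prelude hiding (not, (&&))
import qualified Prelude as P

-- A class that generalises the boolean operations.
class Predicate a where
  (&&) :: a -> a -> a
  not  :: a -> a

-- The base instance: plain Bool, delegating to the Prelude.
instance Predicate Bool where
  (&&) = (P.&&)
  not  = P.not

-- The pointwise instance: combine predicates by combining their results.
instance Predicate b => Predicate (a -> b) where
  (f && g) x = f x && g x
  not f x    = not (f x)

main :: IO ()
main = print (map ((> 6) && even) [4, 8, 7 :: Int])  -- [False,True,False]
```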


`on` runs a binary function by first running a unary function on each argument.

  Prelude> import Data.Function (on)
  Prelude Data.Function> :t on
  on :: (b -> b -> c) -> (a -> b) -> a -> a -> c
  Prelude Data.Function> f = undefined :: Int -> [Int]
  Prelude Data.Function> g = undefined :: Int -> [Int]
  Prelude Data.Function> :t ((&&) `on` (not . null . ($ x))) f g
  ((&&) `on` (not . null . ($ x))) f g :: Bool
Alternatively in Control.Arrow there's a 'fanout' operator &&&:

  Prelude Control.Arrow> f = undefined :: Int -> Bool
  Prelude Control.Arrow> g = undefined :: Int -> Bool
  Prelude Control.Arrow> :t uncurry (&&) . (f &&& g)
  uncurry (&&) . (f &&& g) :: Int -> Bool
Of course it's still probably simpler to just write it out in this case.



How many Monad explanations need to be written before we admit that the Monad is an unsatisfactory abstraction for practical purposes?


These things are monadic:

Functions, continuations, state, IO, parsers, futures, streams, lists, transactions, LINQ, observables.

I can only think of one abstraction more useful - the function.


I didn't say that monads aren't useful; I said that the abstraction is unsatisfactory. The world needs a new clearer concept to use instead of the monad, just as raft can be used instead of paxos.


Monads are literally the simplest/smallest abstraction that makes all these examples compose nicely. It will be hard to come up with something clearer I think.

What is so unsatisfactory about them?


Why would we judge them by that standard? Monads are useful models for plenty of things. That has nothing to do with how many monad tutorials are out there.


> How many Monad explanations need to be written before we admit that the Monad is an unsatisfactory abstraction for practical purposes?

That's really not true, is it? I mean, you only need to look at how monads are used to express results as return types to understand the practical usefulness of monads.


I wrote an article with the same name! [1] The problem solving based approach seems to be the most relatable and easy for people to get an idea of what monads are. Nice work.

[1] https://blog.kabir.sh/inventing-monads


Great work Kabir! : )


I think the main benefit of FP is also its main problem.

Patterns like the monad are so abstract that, on their own, nobody knows what to do with them; but that is exactly what makes them so powerful.

OOP patterns, on the other hand, come from a more inductive source. They are more concrete, but also not as powerful. Easier to grasp, but less concise.

We need a step (or more) between the definition of FP concepts and their application, to make all this more approachable for the average programmer.


You mean simple and concise patterns such as AbstractAbstractVisitorBeanFactoryStrategySingleton? Honestly, I find much of FP easier to understand than Java-style "OOP".


I said they are less concise.

But anyway, I didn't mean that Java garbage. I meant the GoF (Gang of Four) patterns; they are more concrete than monads or lenses.


I thought I understood monads after reading the article, but then I read the comments, and now I'm confused about what a monad is (again).


I recommend reading the original "Monads for Functional Programming" paper by Phil Wadler, it's in a tutorial style and IMHO still better than any blog post explanation I have seen:

https://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/b...


I wonder why it isn't ever described as a design pattern? For me, it is one..


I would call the State Monad a design pattern, I wouldn't call the List Monad a design pattern.


[Edited away]


So do you need monads in Haskell because that's the only way you could do some of the things that are trivial in an imperative language (albeit verbose and arguably inelegant)?


This is the paper that introduced them to FP, if you’re interested in the original reasoning.

https://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/b...


Not exactly. For example, IO actually existed in Haskell before the IO monad. It essentially used lists injected into and returned from the main function, plus the fact that Haskell is lazily evaluated (so those IO commands in the list wouldn't be evaluated until main is run): https://stackoverflow.com/questions/17002119/haskell-pre-mon...
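For the curious, here is a toy reconstruction of that stream-based style; the Request/Response constructors and the mini simulator below are illustrative, not the historical Haskell 1.x API.

```haskell
-- A toy reconstruction of pre-monadic, stream-based IO.
data Request  = GetLine | PutLine String
data Response = Ok | Line String

-- A program maps the lazy stream of responses to a stream of requests;
-- laziness lets it emit a request before its response exists.
type Dialogue = [Response] -> [Request]

echo :: Dialogue
echo resps = [GetLine, PutLine ("you said: " ++ s)]
  where Line s = head resps  -- lazily inspected after GetLine is issued

-- A pure simulator: feed canned input, collect the PutLine output.
run :: [String] -> Dialogue -> [String]
run input prog = [s | PutLine s <- requests]
  where
    requests  = prog responses
    responses = answer input requests
    answer ins (GetLine   : rs) = Line (head ins) : answer (tail ins) rs
    answer ins (PutLine _ : rs) = Ok              : answer ins rs
    answer _   []               = []

main :: IO ()
main = print (run ["hi"] echo)  -- ["you said: hi"]
```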


Go is imperative and could really use some monads, especially for error handling. It has nothing to do with a language being pure or not really.


Yes. (For example, ML, being impure, does not need this.)


Haskell only "needs" monads for IO (though there are other ways to implement IO in a pure language, too). The reason they are popular in Haskell is that they are convenient; the reason they are not popular in ML is that ML lacks higher-kinded types and things like typeclasses.


No



