Monads Are Not Metaphors (2010) (codecommit.com)
71 points by ckarmann on Sept 10, 2015 | 45 comments



Did bad high-school mathematics education condition us all to hate everything related to mathematics?

Label a concept "this is just engineering practice" or "this is just physics. this equation describes how the universe works" and people will happily accept it as is.

But label it "mathematics" and explanations are in order because maths is hard to understand...

I don't see any posts where authors concern themselves with metaphors for Maxwell's equations (or anything else in physics; maybe I'm hanging out in the wrong circles though).

What is the process of understanding a mathematical equation? Stare at it until you grok it? Use it to solve problems to get a feel for what it means? Write proofs about it so you can state some properties with certainty?

I don't know what the best/structured process is, but the core is "take this equation and do something with it".

Do the same with monads! I think the people having problems understanding monads are those who defer writing any code until they feel "they get it". Instead of "now I get it, monads are burritos, time to write some code", one should write some code in order to understand monads.


I think it has much to do with the abstractness of mathematics (especially category theory). Some just have a hard time anchoring the ideas to something concrete, and they need that anchoring to make the ideas stick. With engineering and physics equations there is always something physical to relate them to (at least somewhere in the universe(s)) -- not to say there aren't harder equations to grasp in those fields too, but those typically trend into the more abstract (less macro) areas of the field. The trouble with math is sometimes you just have to work through equations enough until you form some intuition about how they work. Some folks just want to skip that step, and others oblige them by assisting their mental model with metaphors, which never seem to be universal enough -- if they were, they would just be math.


Just thought of something that I think is analogous to explaining monads. I remember reading a few articles about "Common Core" math in the States, and they had example problems related to addition, subtraction, and multiplication.

The complaint was that those problems were incomprehensible gibberish, and kids and adults alike had a really hard time figuring them out.

From what I recall, the core of the issue was that the question authors had some specific mental model of how addition and multiplication work, and the questions were structured in a way that required figuring out what that model was (otherwise it was very hard to understand what was being asked).

Why does 2 + 3 = 5? Well, because it does. I don't even know what my mental model of addition is. I just do it. I stared at it long enough as a kid and I developed neural circuits that compute the result.

Instead, if I asked:

I have two stars in a box. How many squares do I have to put in the box to have 5 shapes?

    | * * | + | ??? | = | * * # # # |
That is how I recall those weird maths questions were structured.

Now you have a (terrible, IMHO) model of addition. That's what I think those monad tutorials do for you.


When first learning how to add, examples with apples and bananas are helpful. Learning happens by relating new concepts to something you already know; that also applies to monads.

Those monad metaphors provide the same benefit. Even if some intuitions are wrong and you still need to learn the math to get the whole picture, at least you get a sense of (partial) understanding and purpose during the learning process thanks to the metaphor, instead of feeling lost the whole time.


"Did bad high-school mathematics education condition us all to hate everything related to mathematics?"

Yes.

https://www.maa.org/external_archive/devlin/LockhartsLament....

(A frequent HN link, but, well, for good reason.)


While I think Lockhart correctly points out there is something wrong with math education, I disagree with the solution he proposes to fix it.

Read page 16+17 and tell me who you agree with more, SIMPLICIO or SALVIATI


I definitely root for SALVIATI. I learned basic set theory -- including bijection, injection, and surjection -- in primary school, and never understood why mathematicians cared about defining things in such a way until I studied formal semantics in college.


I'm Team Salviati.


I assume you also disapprove of Zed Shaw's "Learn X the Hard Way" books (which, btw, have many adherents and much praise), which he explains partly here? https://news.ycombinator.com/item?id=7207283

The irony is that even in painting, there is a fair amount of "rote" or repetition needed before one's "creativity" can be explored.

This may be somewhat tangential/distracting, but I find the whole notion of 'creativity' problematic - many artists also think so (“Amateurs look for inspiration; the rest of us just get up and go to work.” ― Chuck Close).


I tend to think that a certain amount of rote arithmetic is required, and, well, if the student doesn't like it, I'm not sure what to do about that. Reading and spelling are the same way... if you want to be a fluent reader, you need to put the time in.

What I learned in elementary school is OK. (There my objections would center more around the cohort system being terrible, but that's another objection. Also, I have to qualify that because the education system considers screwing with elementary math education one of their primary mandates.) But math education flings itself off the rails when it starts doing symbolic manipulation. (And if anything, every time they screw with it they're flinging themselves off the rails harder and sooner.)


> Did bad high-school mathematics education condition us all to hate everything related to mathematics? Label a concept "this is just engineering practice" or "this is just physics. this equation describes how the universe works" and people will happily accept it as is.

I couldn't say it better than this post![1] It's really too bad that people in general struggle with mathematics, whether because of the problems themselves, because they don't have a good environment to learn in, or other things like that.

The thing is, when someone starts to learn Mathematics, the beginning is really tough. You have a new language to learn, you have to do everything the "hard way", you have to learn the "handshakes", and your mind is your only compiler/error-checker. But the longer you train yourself, the better you get, until you have enough basic knowledge to read and learn anything. In the end it is so gratifying to overcome that, and so useful, because Mathematics is such a powerful tool.

[1] http://jeremykun.com/2013/02/08/why-there-is-no-hitchhikers-...


I've written code with monads plenty of times. I've even made my own. I'm still not sure I know what a monad is in the mathematical sense of the word. I don't have any idea how it interacts with other mathematical constructs or fits into Category Theory.

Using Monads in a programming context teaches you how to use monads in a programming context.


Which, if I'm trying to program, is good enough. I'm not trying to write a paper in category theory...


>I don't see any posts where authors concern themselves with metaphors for Maxwell's equations (or anything else in physics; maybe I'm hanging out in the wrong circles though).

This is quantum mechanics through and through. While the equations are agreed on, interpretations vary and you get weird varied metaphors all over the place.


High-school mathematics education taught us to hand-execute algorithms to manipulate symbols into equivalent forms. The rules of arithmetic, then the rules of algebra, properties of trigonometry, more algebra, then the obscure edge-case tricks of integration and differentiation. That is 100% of "math" to any high school graduate. Nothing remotely related to reasoning about monads.

There is no reason at all that a straight-A high school math student should be expected to be even slightly good at writing proofs.


I loved school math classes, but a decade later I do have a hard time understanding monads and category theory.

Compared to programming (even functional), "math" is a completely different way of thinking. Abstract properties are stated, from which some arbitrary (partial) structure is implied. Whereas with programming, the structure of "doing" is always concrete, regardless of paradigm (algorithmic complexity is ever-present, even when you're ignoring it). Compared to abstract math, school classes are more akin to programming in that you're mostly following algorithms and applying patterns with a slight intuition, even when you get into algebra and calculus.

My biggest hurdle to understanding is the use of completely different terminology for concepts that are the same, at least intuition-wise. My thought process is based on intuitions first, rather than manipulating symbols in the abstract (which seems to be more common? or maybe it just seems this way?). So when confronted with terms like 'conjunction' and 'disjunction', it just throws me off that 1. they're the "same" as AND/OR, and 2. the nuance of difference is rarely stated. So the way I cope is by directly thinking AND/OR, while being painfully aware that there is some distinction I'm not aware of. This leads to a lot of reading that's completely disconnected from anything until I find the gem that illustrates the actual difference.

I'd learned the workings of practical Haskell monads some time ago, but what made monads/category theory finally click is reading Moggi's "Notions of computation and monads". Going back to the "source" let me see the specific motivation and actually understand how Haskell objects differ from perfect ones from category theory. Apparently it's just non-termination, but I had to do a lot of searching to find where that was actually stated, rather than assumed and vaguely referenced. It feels like to someone who thinks "more mathematically" this is just a small detail, since they deal with each type of structure in isolation. But until I can relate them, I'm just out in the weeds.

Of course now that I understand this it seems like quite a simple concept. But I had to do an awful lot of work to get to the point where my thought gamut was "expanded" to be able to include this one concept.

So I don't really know what my exact point is. But to connect to something you said, it seems like experience from writing code and desire to understand abstract concepts ("monads are burritos") come from two different places, and the latter isn't necessarily served by increasingly outlandish metaphors, but by making the details accessibly explicit.


This comes from the "you don't really understand a topic until you learn its maths" school. I don't mind that position, as there's some truth to it, up until they make the mistake of considering metaphors harmful and an obstacle.

I greatly benefited from the "conveyor belt" metaphor for learning monads; it made me grasp instantly something that I would have never understood from a pure theoretical explanation - namely why they are so useful and widespread in functional programming, as a building tool to distribute logic among several composable functions over a data type. Sentences like "We start with one thing and use its value to compute a new thing" and "Monads are an abstract, mathematical label affixed to a pattern found in almost all code" will never convey information about how it's intended to be used in the same vivid way as the metaphor.
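The belt picture can be sketched concretely. Here is a minimal Python illustration (the names and helper functions are made up for illustration; None plays the role the metaphor's "empty belt" does): each step takes the previous value and produces the next thing, and the chaining machinery handles the plumbing.

```python
# A minimal "conveyor belt" sketch: each step takes the previous value
# and produces the next one; bind keeps the belt moving (or stops it).

def bind(value, step):
    """Feed value into step, unless the belt is already empty (None)."""
    return None if value is None else step(value)

def parse_int(s):
    # Produces a number, or stops the belt on bad input.
    return int(s) if s.isdigit() else None

def half(n):
    # Produces half the number, or stops the belt on odd input.
    return n // 2 if n % 2 == 0 else None

result = bind(bind("42", parse_int), half)   # "42" -> 42 -> 21
```

Each function only worries about its own step; the belt (bind) distributes the logic among small composable functions, which is the usefulness the metaphor conveys.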

I know for a fact that the pure mathematical approach leaves me hanging. I learned linear algebra in the theorem-proof style, and to this day I still don't know when it's an adequate technique to use. I can't tell what matrix ranks, kernels, or tensors are good for, even though I can calculate their values very precisely.


...the "conveyor belt" metaphor for learning monads;

Thanks for that! Here's a link:

http://web.archive.org/web/20100910074354/http://www.haskell...



Anybody who didn't know what monads were and now reached an epiphany thanks to this article please comment.

Also, is there anyone here who understands what monads are but has never programmed in Haskell or a functional language?


The problem with this article is that it assumes you understand Scala. There's a lot of line noise going on there, which I was later able to infer as "this is lambda notation". Following that is the immediate (apologetic) use of `andThen` with no real explanation of what it is or what it does. Again, I can eventually infer the basics, but this interferes badly with understanding the underlying concepts.

If the OP really wanted to make this a better introduction, they really needed to slow down and use pseudocode, with lots of explanation of the concept of "and then".
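For what it's worth, Scala's `andThen` is just sequential function composition, which is easy to render in near-pseudocode Python (illustrative names):

```python
# Scala's `andThen` is sequential composition:
# (f andThen g)(x) == g(f(x)) -- "do f, and then do g".

def and_then(f, g):
    return lambda x: g(f(x))

increment = lambda x: x + 1
double = lambda x: x * 2

pipeline = and_then(increment, double)  # first add 1, then double
```

So `pipeline(3)` is `double(increment(3))`, i.e. 8 -- the entire concept the article leans on.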


...immediate (apologetic) use of `andThen`...

That is hilarious:

note: the andThen method isn’t defined for functions of 0-arity, but we’re going to pretend that it is and that it works the same as it does for functions of one argument.

Here's this thing that no one who doesn't already understand monads has ever seen, and that actually doesn't even work in this situation, but let's pretend it has some other definition that would work, if that were actually possible, which isn't the case. Why is everyone running, screaming, back to metaphor-based tutorials?


I don't believe monads are worth the bother without some syntactic sugar. Without sugar, it is just a sequence of operations converted into a chain of nested lambdas, with one lambda for each operation. Cool, but far too convoluted to write for any real-world program.

The attempts to introduce monads in other languages like Python, Ruby, etc. miss what makes them a useful tool in Haskell.
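To illustrate the complaint: without sugar, the Maybe pattern really is one nested lambda per step. A rough Python sketch (None standing in for Nothing; names invented here):

```python
# Desugared monadic sequencing: one nested lambda per operation.

def bind(m, f):
    return None if m is None else f(m)

def lookup(d, k):
    return d.get(k)

env = {"a": 1, "b": 2}

total = bind(lookup(env, "a"),
             lambda x: bind(lookup(env, "b"),
                            lambda y: x + y))

# What Haskell's do-notation would let you write instead (pseudocode):
#   do x <- lookup env "a"
#      y <- lookup env "b"
#      return (x + y)
```

Two steps is already awkward; ten steps of nesting is what the sugar exists to flatten.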


I've used Async and LWT in OCaml—libraries for monadic asynchronous programming—and the abstraction can still be useful without syntax sugar.

Hell, JavaScript promises are basically a bastardized and slightly inconsistent monad, and they're still useful even without syntax sugar. If we could generalize over the API and use them for other things it would be even better, but that's not the JavaScript way.


How are JS promises monads?


They actually operate on more or less the same principle as Async and LWT, similar to Haskell's Cont.

The .then method is bind. Or it would be if they didn't automatically flatten nested promises. Since they do, then is a hybrid of map and bind that doesn't quite cover the use cases for both.
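That hybrid behavior is easy to demonstrate with a toy, synchronous stand-in (a sketch only -- no asynchrony or error handling, nothing like real promise semantics):

```python
# A toy "promise" showing the flattening described above: .then()
# unwraps a returned Box automatically, so it acts as map and bind at once.

class Box:
    def __init__(self, value):
        self.value = value

    def then(self, f):
        result = f(self.value)
        # Auto-flatten, the way JS promises flatten nested promises:
        return result if isinstance(result, Box) else Box(result)

a = Box(1).then(lambda x: x + 1)        # callback returns a plain value: acts like map
b = Box(1).then(lambda x: Box(x + 1))   # callback returns a Box: acts like bind
```

Both `a` and `b` end up as a Box holding 2 -- you can never hold a Box of a Box, which is exactly why `then` doesn't quite cover the use cases of either map or bind separately.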


> The attempts to introduce monads in other languages like Python, Ruby, etc. miss what makes them a useful tool in Haskell.

While I agree that the syntactic sugar of the "do" notation in Haskell makes Monads vastly more useful, there are examples where Monads have been applied in other programming languages without any syntactic sugar.

For example, there are parser combinator libraries (influenced by Parsec) in many languages. E.g. Python's pyparsec internally uses monads that are defined just as they are in Haskell.

Monads are a great idea for many practical tasks in programming; their use in Haskell is often emphasized because they're required for IO (which is not the case in imperative languages). For many of the non-IO applications of monads, they're just as useful in other languages too.


I studied monads in a Categorical context in college (Domain Theory, if interested).

Monads can be understood very simply in Category Theory if you don't care so much about the details: pick a couple of your favorite categories (which is just a bunch of objects with morphisms) and find a functor between them. Then find a functor back the other way. Composing those two functors gives a Monad - a functor from a category back into itself (an endofunctor). Not just any endofunctor will be a Monad though, so if you start with an endofunctor (instead of two functors) you need to also have two other "natural transformations". (That endofunctor + the two natural transformations is why monads are sometimes called triples).

The application of monads to programming was not obvious to me, even having dealt with them a bit mathematically. I started looking at Haskell earlier this year, and it turns out that a Monad in Haskell is defined on the Hask category, where objects are types and morphisms are functions between them. Even that wasn't enough for me to get why they're useful, and that seems to be because, as far as I can tell, they're only used for "threading" state through a series of functions... but perhaps I'm still missing something?
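The endofunctor-plus-two-natural-transformations description can be made concrete with Python lists playing the functor (a sketch; `unit` and `join` here are the two natural transformations, and `bind` falls out as join composed with map):

```python
# Lists as an endofunctor on types: fmap is the functor action,
# unit (eta) and join (mu) are the two natural transformations,
# and bind is derived as join(fmap(f, xs)).

def fmap(f, xs):
    return [f(x) for x in xs]

def unit(x):            # eta: a -> [a]
    return [x]

def join(xss):          # mu: [[a]] -> [a]
    return [x for xs in xss for x in xs]

def bind(xs, f):        # bind = join . fmap f
    return join(fmap(f, xs))

# Nondeterministic pairing: every combination of the two lists.
pairs = bind([1, 2], lambda x: bind(["a", "b"], lambda y: unit((x, y))))
```

The list monad also hints at uses beyond threading state: here bind sequences nondeterministic choices, not stateful steps.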


> as far as I can tell they're only used for "threading" state through a series of functions

That is why people bump up against them when learning Haskell, but monads run much deeper in functional programming. Even "pure" functions in Haskell are monadic with respect to the non-terminating "bottom".

I said it elsewhere in this topic, but check out Moggi's "Notions of computation and monads".


Monads come from category theory. You can certainly know what they are without ever having programmed. "Basic Category Theory for Computer Scientists" and "Category Theory (Oxford Logic Guides)" are good places to start.


Rightly so, but look at the replies to my post. Essentially nobody responded to my query. It looks like mostly nobody knows what monads are without learning functional programming, AND this article totally failed to help anyone understand what monads are.


This article seems a bit heavy on the computational monads.

Monads don't necessarily have anything to do with sequencing or computation, and the author is objectively incorrect to say "monads are a pattern, not a specific type". In fact, monads are an algebraic structure, defined only by the types of their operations and the laws those operations obey.

That's not to say that concrete examples like the one here aren't useful, but the author is not covering the full scope of what monads are.
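For the record, the laws in question are the three monad laws, which can be checked concretely for, say, the list monad (a Python sketch with illustrative functions):

```python
# The three monad laws, checked for the list monad.

def unit(x):
    return [x]

def bind(xs, f):
    return [y for x in xs for y in f(x)]

f = lambda x: [x, x + 1]
g = lambda x: [x * 10]
m = [1, 2]

# Left identity: unit(a) bound to f is just f(a).
assert bind(unit(3), f) == f(3)
# Right identity: binding to unit changes nothing.
assert bind(m, unit) == m
# Associativity: how you group binds doesn't matter.
assert bind(bind(m, f), g) == bind(m, lambda x: bind(f(x), g))
```

Any type with a `unit` and `bind` satisfying these three equations is a monad; nothing about sequencing or effects appears in the definition.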


Brushing aside "you can do anything in one language that you can in another," what do monads allow me to do better that I can only approach without them?

I've tried reading (but not coding), and all I hear is sequence of operations, chaining, and similar. I know I'm missing something.

Basically I haven't seen or synthesized the tl;dr that makes me want to learn a language that has them.


If you're in a pure functional language, monads are the way you do sequencing while still being purely functional. If you regard pure functional as the "better" (for reasons of, say, controlling the state space explosion of your code, or reasons of understandability and therefore maintainability), then monads are part of the necessary tooling to get you there (if your problems have any parts at all that are necessarily sequential).

As far as other uses: the best explanation that I've seen is that they let you hide the parts that you need to do, but that aren't the real thing you're doing. Logging, for instance. You can add logging to something, and just string functions together as if there were no logging going on. (Of course, if you were doing non-pure-functional programming, you could just add logging statements wherever you wanted, and string your functions together just the same...)
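The logging arrangement described above is essentially Haskell's Writer monad. A minimal Python sketch (invented names; logs are plain lists): each value travels with a log, and bind threads the logs invisibly so the "real" functions stay clean.

```python
# Writer-style logging: values carry a log; bind accumulates logs
# behind the scenes while the functions just do their real work.

def unit(x):
    return (x, [])                      # value plus empty log

def bind(m, f):
    value, log = m
    value2, log2 = f(value)
    return (value2, log + log2)         # logs concatenate invisibly

def double(x):
    return (x * 2, [f"doubled {x}"])

def inc(x):
    return (x + 1, [f"incremented {x}"])

result = bind(bind(unit(5), double), inc)
# result == (11, ['doubled 5', 'incremented 10'])
```

`double` and `inc` never mention the accumulated log; the hiding of that bookkeeping is exactly the "parts you need but aren't the real thing" point.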


Haskell per se doesn't really have monads, except in the sense that there's an opaque `IO` type which is only useful for "doing IO" because it is an instance of `Monad` (and `Functor`), and in the sense that there's some very thin syntactic sugar for composing monadic types. So haskell-the-language has type classes (like `Monad` or `Functor` or `Show`), and more importantly strong static typing. The latter means you can create abstractions (like Monad) that are not brittle.

You can check out this module:

http://hackage.haskell.org/package/base-4.8.1.0/docs/Control...

You can see all the varied sorts of things which are "instances of Monad", then scroll beyond and see all these different useful operations which behave sensibly for all those different instances (and from which you can build your own more complex operations which work for any Monad, and which also behave sensibly).


All useful languages have monads. It's a pattern that is nearly impossible to avoid. Where Haskell is different than most languages is that it lets you write generic code that operates on any monad.


"It's a pattern."

"All useful languages have monads ... nearly impossible to avoid."

So here's a post on Python (hopefully useful) and monads; it appears to be implementing a pattern, although I don't think I would have ever stumbled across it naturally.

http://www.valuedlessons.com/2008/01/monads-in-python-with-n...

And here's a library implementing monads and other stuff:

https://pypi.python.org/pypi/PyMonad/

It still looks like I'm going to have to try using them before I can figure out if it's worth trying to use them.


Meaning that any language that doesn't have monads, you're defining to not be useful? Or meaning that monad-the-pattern is everywhere, in every even barely useful language, even if there's no explicit monads in the language?


The monad pattern is everywhere; most languages just don't explicitly recognize it. Any non-trivial program will include many different monads, even though they're never named as such.

For instance in C#, there is a monad with bind=?? and return=no-op (similar in use to the Maybe monad in Haskell). There's another monad with bind=; and return=no-op (similar in use to the IO monad in Haskell). C# just doesn't explicitly recognize those as monads.
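In Python terms, the same unrecognized Maybe-ish pattern shows up as chains of None checks. A sketch (names invented here) that makes the bind explicit:

```python
# Everyday None-checking is an unnamed Maybe monad:
# bind short-circuits on a missing value, like C#'s ?? chains.

def bind(value, f):
    return None if value is None else f(value)

config = {"db": {"host": "localhost"}}

# The implicit version programmers write every day:
db = config.get("db")
host = db.get("host") if db is not None else None

# The same logic with the pattern named explicitly:
host2 = bind(config.get("db"), lambda db: db.get("host"))
```

Both spellings compute the same thing; the monadic one just acknowledges the pattern so it can be reused generically.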


I like this comment because it helps illustrate the subtlety of monads as they apply to programming.

I think most C# programmers would likely think of ';' and '??' as syntax -- and who is thinking of 'no-op' as anything!?

But if you were in Haskell this would not be syntax, but just functions on an instance of the Monad type class.

Haskell could easily add some sugar to make these constructs syntax as well, but it would just be sugar -- and that, I think, is where both the power and the confusion arise when most programmers think about monads.


Metaphors are tricky. This reminds me of the many attempts to explain objects and classes in terms of real-world objects (cars, fruits, persons, etc.). These explanations are often more confusing than enlightening, since objects and classes are actually not like real-world objects at all.



> These explanations are often more confusing than enlightening, since objects and classes are actually not like real-world objects at all.

I disagree. While it's true that metaphors can be over-stretched and cause problems later, a well-placed metaphor at the beginning of the learning process can do wonders to quickstart understanding of a topic about which you know nothing at all.

At the very least, the metaphor will provide meaning to the very act of learning, explaining why you should bother with that topic at all. Experts explaining their subject often forget what it's like not to know how two concepts in their domain relate to each other, and the uncertainty of reading a sentence that connects them in a way you haven't seen before.



Could someone translate this to Python?



