Deconstructing Functional Programming [video] (infoq.com)
106 points by newgame on Dec 20, 2013 | 109 comments



Gilad Bracha sounds like he hasn't used a typed language long enough to stop struggling with basic type errors.

As such, it is from a position of extreme ignorance that he speaks of the uselessness of type checking and inference.

Claiming Smalltalk has the best closure syntax shows he doesn't understand call by need. Haskell defines easier-to-use control structures than Smalltalk.

Claiming patterns don't give exhaustiveness checking, while ignoring the extra safety they provide, shows Gilad doesn't understand patterns.

Claiming monads are about particular instances having the two monad methods, when they are about abstracting over the interface, shows Gilad doesn't understand monads.

Claiming single argument functions have the inflexibility of identical Lego bricks shows he doesn't understand the richness of function types and combinators.

In short, Gilad sounds to me very much like a charlatan who'd benefit greatly from going through LYAH (Learn You a Haskell).


I found Bracha's talk poor. That guy really has a chip on his shoulder vis-a-vis functional programming. A lot of things he said were not well thought out. Here are some examples.

- He claimed that tail recursion could be seen as the essence of functional programming. How so?

- He complained that tail recursion has problems with debugging. Well, tail recursion throws away stack information, so it should not be a surprise. You don't get better debug information in while loops either. And you can use a 'debug' flag to get the compiler to retain the debug information (at the cost of slower execution).

- His remarks about Hindley-Milner being bad are bizarre. Exactly what is his argument?

- His claims about pattern-matching are equally poor. Yes, pattern matching does some dynamic checks, and in that sense is similar to reflection. But the types constrain what you can do, removing large classes of error possibilities. Moreover, typing of patterns can give you compile-time exhaustiveness checks (see the sketch after this list). Pattern matching has various other advantages, such as locally scoped names for subcomponents of the thing you are matching against, and compile-time optimisation of matching strategies.

- He also repeatedly made fun of Milner's "well-typed programs do not go wrong", implying that Milner's statement is obvious nonsense. Had he studied Milner's "A Theory of Type Polymorphism in Programming", where the statement originated, Bracha would have learned that Milner uses a particular, technical notion of "going wrong" which does not mean the complete absence of any errors whatsoever. In Milner's sense, well-typed programs do indeed not go wrong.

- He also criticises patterns for not being first-class citizens. Of course first-class patterns are nice, and some languages have them, but there are performance implications of having them.

- His critique of monads was focussed on something superficial, how they are named in Haskell. But the interesting question is: are monads a good abstraction to provide in a programming language? Most languages provide special cases: C has the state monad, Java has the state and exception monad etc. There are good reasons for that.

- And yes, normal programmers could have invented monads. But they didn't. Maybe there's a message in this failure?
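A minimal Haskell sketch of the exhaustiveness and scoped-names points above (the Shape type is hypothetical):

    data Shape = Circle Double | Rect Double Double

    -- The match binds locally scoped names (r, w, h) for the subcomponents;
    -- deleting either clause makes GHC's -Wincomplete-patterns warn at
    -- compile time that a case is unhandled.
    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Rect w h) = w * h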


Indeed, I found his talk pretty poor as well. A lot of it comes down to not wanting to learn new terminology, and forgetting that a lot of "common sense" terminology from, say, Java, is also learned. I don't get more insight from "FlatMappable" than from "Monad"; in both cases I must learn about them first, and neither is intuitive without prior knowledge.

It is instructive to read Bracha's blog too, mostly for the comments where readers refute a lot of what he claims.

His argument against Hindley-Milner seems to be that "he hates it", and that type errors are sometimes hard to understand. It is true IMO that they are hard to understand (even though, like everything in programming, you get better with practice), but what is the alternative? Debugging runtime errors while in production?

He also presents Scala as a successful marriage between OOP and FP, but in reality this is a controversial issue. Some of the resistance to Scala (witnessed here in Hacker News, for example) is due to it trying to be a jack of all trades and master of none. Scala's syntax is arguably _harder to read_ than that of other FP languages.

Some of his "funny" remarks sounded mean-spirited to me. Nobody in his right mind claims that FP invented map or reduce, for example.

The only point of his talk I somewhat agree with is that language evangelists are annoying. Oh, and that "return" is poorly named.


> His argument against Hindley-Milner seems to be that "he hates it", and that type errors are sometimes hard to understand. It is true IMO that they are hard to understand (even though, like everything in programming, you get better with practice), but what is the alternative? Debugging runtime errors while in production?

He pointed out that a more nominal type system is a solution: when you give meaningful names to your types, the error messages become clearer and are not full of long, inferred types that reveal potentially confusing or unimportant implementation details.


Most programming languages with Damas-Hindley-Milner do not prevent you from using explicit type annotations and inventing semantically meaningful type names.

More importantly, I think the reason why error messages are sparse and not meaningful in languages with Damas-Hindley-Milner is that nobody bothered to improve the situation. And the reason why nobody bothers is that it's simply not a problem in practice. Any even moderately experienced programmer can easily detect and fix typing errors as they are given in Haskell, OCaml, F#, Scala etc.
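As a minimal Haskell sketch of that nominal-naming point (UserId and lookupUser are hypothetical names):

    newtype UserId = UserId Int deriving (Eq, Show)

    lookupUser :: UserId -> String
    lookupUser (UserId n) = "user #" ++ show n

    -- Passing a bare Int now yields an error that names UserId, the
    -- domain concept, instead of a long anonymous inferred type.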


Recursion and iteration are equivalent, so it's not much of a stretch to call it a core concept (for more, see SICP & the lambda calculus). It is the only form of iteration available in functional programming.
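To make the equivalence concrete, here is a minimal Haskell sketch (function names made up) of the imperative loop "total = 0; while (n > 0) { total += n; n-- }" written as tail recursion:

    sumTo :: Int -> Int
    sumTo = go 0
      where
        go acc 0 = acc
        go acc n = go (acc + n) (n - 1)  -- tail call: the loop's jump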

Monads are not only a good abstraction, they are essential* if we are to move away from haphazard construction. Normal programmers have invented them many times; as the truism goes, you may well have "invented" them yourself.

Remember when function pointers seemed tricky and unnecessary? Remember when closures seemed tricky and unnecessary? Yeah. One day you're going to see monads in the same way.

* - Functional programming => programming with pure functions.


First, thanks for all involved in getting this posted!

I'm somewhat curious why the industry has such an aversion to simulating things in our mind. Especially since this seems to be one of the arguments employed against monads in this talk: that it basically couches something known in an odd name that is not known. Isn't this just stating that it is bad because it confuses the simulator that is the reader?

That said, the live coding aspect is something that I am just now learning from Lisp with Emacs. Being able to evaluate a function inline is rather nice. It is somewhat sad, as I still wish I could get a better vote in for literate programming. (Betraying my appeal to the human factor more so than the mechanical one.)


Monads have nothing to do with simulating anything. They are just a commonly recurring pattern of computational contexts (more precisely, functors) that also provide two basic operations:

1. entering the context (pure :: a -> m a)

2. collapsing nested contexts into one (join :: m (m a) -> m a)

Together with some coherence laws that ensure that these operations do exactly that, no more and no less: enter the context and collapse nested instances of it.
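A tiny sketch specializing both operations to Maybe (the names enter and collapse are made up for illustration):

    import Control.Monad (join)

    enter :: a -> Maybe a
    enter = pure                    -- pure :: a -> m a

    collapse :: Maybe (Maybe a) -> Maybe a
    collapse = join                 -- join :: m (m a) -> m a

    -- collapse (Just (Just 3)) == Just 3
    -- collapse (Just Nothing)  == Nothing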


Did you watch the video? I'm not referring to monads simulating something. I'm referring to the observation that when reading code you are simulating its execution. My understanding of the video's complaint against monads is that the signature of monads is actually quite simple and well understood in different contexts by different names.

The video goes on to display an environment where you do not have to simulate the code in your head.

This progression seems somewhat interesting to me. As does the desire to not have to simulate code in your head.


>This progression seems somewhat interesting to me. As does the desire to not have to simulate code in your head.

But none of that has anything to do with monads.


Ok... I think I'm getting trolled at this point.

I am taking issue with the video's critique of monads. Wherein it is claimed that monads manage to take a common and understandable behavior and make it laughably impossible to explain to people by giving it a weird name. Essentially, the problem with monads is one of it being difficult to "simulate" under the name "monad" for many individuals.

This part, I actually feel makes sense and resonates well. Simply follow the progression in the video and see how "FlatMappable" becomes less and less intuitive as it is given worse and worse names.

The part that is interesting to me is how this then progresses into a point about how programmers should not have to simulate the code in their head. Now, I realize there is a big difference between "should not have to" and "is difficult to do intuitively". Still seems an odd progression, though.


>Ok... I think I'm getting trolled at this point.

If you don't want to discuss something, then don't post. You are not making any sense, and calling people trolls does not help at all.


I should have put a smiley on that, then. While feeling trolled, I highly suspect this is just a rather amusing case of poor communication.

At no point was I trying to describe or discuss monads. That is something a response to me assumed I was trying to do. When referring to "simulating" a system, I was referring to where the video refers to the process of reading "dead code" in a text editor. There is a large rant on monads in the video where the argument appears to be that the problem is strictly with the name. The reason given is that it takes something understood and hides it behind non-obvious names. I extrapolated this to mean that it makes the program and the idea "hard to simulate" for the coder reading the code.


Great talk. Particularly the bit on the value of naming things - I rather wish he'd flogged that a bit harder.

As time goes on I'm finding it more and more frustrating to try and maintain code that relies entirely on anonymous and structural constructs without any nominal component. Yes, I do feel super-powerful when I can bang out a bunch of code really quickly by just stacking a bunch of more-or-less purely mathematical constructs on top of each other. . . but as the story of the Mars Climate Orbiter should teach us rather poignantly, when you're trying to engineer larger, more complex systems it turns out that meta-information is actually really useful stuff.


I'd say static typing and purity as advocated by FP are some of the tools one wants when trying to engineer larger, more complex systems.

I wasn't familiar with the Mars Climate Orbiter case, but a cursory reading suggests one of the causes was a type error (confusing newtons with pound-force).


As advocated widely in the FP blogosphere. . . not necessarily as commonly practiced in FP programming culture, or supported by many FP languages.

For example, I strongly prefer F# to its cousin OCaml largely because F# uses nominal typing and OCaml uses structural typing. I've also got some misgivings about being overly reliant on type inference. Both structural typing and advanced type inference are admittedly incredibly convenient. What worries me is that they also seem to be incredibly convenient as ways to obfuscate the programmer's intent w/r/t types and their semantics.


I'd say not so much as advocated by the blogosphere (which can be annoying, as fans of almost anything often are), but by the people actually designing and using FP languages.

In any case, there is certainly valid criticism of FP, but Bracha's just isn't it. My impression is that the guy -- as clever as he may be in other areas -- barely understands FP, and makes disparaging remarks about things he isn't familiar with. Read his blog; every assertion he makes is shown to be incorrect or misleading by people who do understand FP, like Tony Morris or (very politely) Philip Wadler himself.


I'm just learning functional programming with Haskell, and it was great to hear him explain that learning Haskell is really hard because of the terminology. I feel a little (just a little) less stupid.

That said, he's a terrible presenter. His smarmy style was really off-putting, and his motives a little sketchy. He spends a good portion of the talk slamming just about every language in existence except for the two he works on (Dart and Newspeak). It seemed very disingenuous and I don't need another ranting nerd spouting venom about why something's not very good in that holier-than-thou tone. I would have rather had a straightforward talk showing the strengths and weaknesses than the bitter tone this had.


This is a brilliant talk. It's getting far too easy to annoy the FP cult(ure).

As an aside, Scala is not unique in marrying an FP approach with an OO system. CL has had CLOS, IMO one of the better implementations of "OO" outside of Smalltalk, for much longer than Scala.

Definitely watch this!


Scala and Common Lisp are not particularly functional languages. Functional programming in Scala is doable, although it takes a nontrivial amount of effort (see: scalaz), and it is outright impractical in Common Lisp.

As an aside, CLOS multimethods resemble Haskell's multiparameter type classes (except CLOS is dumber: you cannot provide any guarantee that the same types will provide two or more common operations) more than they resemble anything else also called "object-oriented".


It is a common mistake I've heard from many CL newbies who believe CL is an "FP" language.

The best descriptor I can find to date (of CL) is "programmable programming language," which allows it to encompass almost every desired feature one may need, including many that fall under the FP umbrella, which may be where the confusion stems from.

However, one of the opening points of the talk was that "FP" is not a rigorously defined term and is subject to interpretation, which leads to bikeshedding over language features and a lot of hype.

I believe it also leads to a lot of misplaced faith in the purity and completeness of mathematics (it's almost as if the popular notion of FP is being reborn as a modern Principia Mathematica).

CL obviously cannot be called an "FP" language, since its inception seems to predate the popular notion of the term. Scala may suffer in the same way due to its reliance on the JVM and the expression semantics it has carried over from Java. However, many of the features one tends to associate with modern FP languages (though not all) are present in both languages.

As for your aside, how so? Perhaps a discussion we can have over email if you're interested. You sound smart. However I don't understand your statement and would like to know more.


> As for your aside, how so?

CLOS multimethods do not "belong" to an object or even to a class declaration. Particular implementations of generic methods are declared globally, just like Haskell type class instances. Although, as Peaker noted, type classes can dispatch on any part of the type signature. It is impossible to make a CLOS multimethod with signature:

    (SomeClass a b) => String -> (a, b)
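By contrast, a minimal Haskell sketch (class and method names hypothetical) where the instance is selected purely by the result type:

    {-# LANGUAGE MultiParamTypeClasses #-}

    class SomeClass a b where
      build :: String -> (a, b)  -- dispatch happens on the result type

    instance SomeClass Int Bool where
      build s = (length s, null s)

    -- Which instance runs depends only on the (a, b) the caller expects;
    -- CLOS cannot express this, since it dispatches on argument values.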
> Perhaps a discussion we can have over email if you're interested.

Sorry, I never check email. But I am almost always on Freenode. My nick is pyon.

> You sound smart.

Not really. The regulars in #haskell - now they are frigging smart.


I think the comparison to type classes is specious and ends there. They look similar but they tackle very different problems. You've actually explained why rather well.

> Not really. The regulars in #haskell - now they are frigging smart.

Don't sell yourself short.


It seems to me type classes have a superset of the features of CL multimethods. Why not compare them?


Multimethods are not quite as powerful as type-classes. Type-classes can dispatch on any part of the type signature, whether it is an argument, result type, parameter to a type, etc.


Agreed there. But give me a little break, I only said "resemble", not "are the same as". :-)


CLOS and Scala have very little in common, both on the functional side and the OO side. OCaml and F# are better examples. Can I ask what you think made this a brilliant talk? It seemed like the standard "I don't want to have to learn so I will pretend there's no reason to learn" nonsense we hear all the time.


wrt. CLOS/Scala: indeed, very little in common, and I didn't intend to suggest they were similar. In recent articles that mention this idea, Scala is often mentioned in the same breath as if it has exclusive domain over it. I simply meant to debunk that claim if it exists.

I thought it was brilliant because Gilad provides a humble deconstruction of common myths and claims of the FP culture. He is skeptical and I didn't find any of his conclusions to be dismissive: he walks through the reasoning behind his opinions. I certainly didn't find any point where I thought he was ignorant of the subject of which he was speaking. And if you listen to his opening remarks about "deconstruction," and his conclusion do note that he points out some FP concepts that are useful and should be exploited more. He was there to break through the hype and I think he was successful.


>Gilad provides a humble deconstruction of common myths and claims of the FP culture

He argued with a joke from a comic and lost. Even he would laugh in your face at the notion that there was anything humble about his talk.

>He is skeptical and I didn't find any of his conclusions to be dismissive

That is precisely the opposite of reality. He doesn't even understand functional programming, he is thus not skeptical, he is dismissive.

>He was there to break through the hype and I think he was successful.

The fact that both he and you believe there is "hype" is indicative of the problem. "Hey, you should learn things and improve your skills" is not hype.

Most of what he says is outright wrong. He talks about Smalltalk inventing all of this FP stuff that was in ML before Smalltalk-76 "invented" it. He pretends Smalltalk predates FP, except again, ML predates Smalltalk-76, and Smalltalk-72 didn't have the stuff he is talking about. He talks about things "FP languages can't do" that I do all the time in Haskell with no issues. He repeats the oldest, most worn-out fallacious arguments that have been debunked over and over, and pretends that since nobody is allowed to interrupt the talk to correct him, his arguments are correct. Everything about his talk is an example of the exact opposite of what you suggest it is. If you want someone to convincingly lie to you about how FP isn't all that, look to Erik Meijer. Gilad sucks at it.


Interesting talk! Bracha has some good arguments against features that I generally enjoy in programming languages, like Damas–Hindley–Milner type inference and pattern matching.

Regarding Haskell: The points he makes against obtuse names based in category theory are valid, but then again, Haskell has its roots in research programming languages. Math-based terminology makes more sense for an academic audience.


>The points he makes against obtuse names based in category theory are valid

No, they aren't. When you have a class of "things" that doesn't have a name most people are familiar with, you are left with two options. Either choose a name people are familiar with, but which is wrong and misleading. Or choose the correct name and people have to learn a name. Are we seriously so pathetic as an industry that learning 3 new technical terms is a problem?


To an extent, I think it's a valid criticism. There are two main problems with the mathy names that many concepts in Haskell have.

The first is that they hide the meaning. For example, "Monoid" is a really scary term, and explaining it further as "something with an identity and an associative operation" really doesn't help much either. Calling it instead "Addable" or "Joinable", and explaining it instead as "things with a default 'zero' version, and which have a way to add two of them together", while perhaps not a perfect definition, would be much more intuitive for the majority of people.

That brings me to the second problem I see, which is that the esoteric terminology in Haskell creates a barrier between those who understand it, and those who don't, and contribute to a sense of Haskell culture being exclusionary and cult-like, which discourages cross-talk.

Criticizing Hindley-Milner, on the other hand, I'm confused by. It's such a useful and powerful system. I suppose it can make compiler errors more obscure at times, but you get used to reading them and they aren't so bad. Hindley-Milner isn't just a type inference system; it's a typing system which allows the most general typing to always be used, so that the functions one writes are as general as possible, encouraging modularity and code reuse.
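As a small illustration of that generality, a sketch mirroring the Prelude's map (primed name to avoid a clash):

    -- With no annotation at all, Hindley-Milner infers the most general
    -- type, map' :: (a -> b) -> [a] -> [b], so the function is reusable
    -- for any element types a and b.
    map' _ []     = []
    map' f (x:xs) = f x : map' f xs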


"Addable" will not actually be more informative than "Monoid", to someone who doesn't know "Monoid".

"Monoid" will be very informative to anyone who learned it from mathematics.

A "Monoid" is a type which supports an associative operation (`m -> m -> m`) and a neutral element (`m`) which forms its identity element.

"Addable" suggests it is an "addition". Does this mean it is commutative? For the sake of preciseness, I'd hope so! (Monoids aren't commutative). Does this mean it has a negation? No. So it is not "addition", why use a misleading name for the sake of some false sense of "intuition"?

The actual explanation of what a Monoid is precisely is so short and simple, it makes no sense to try to appeal to inaccurate intuitions.
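A sketch of why a single "intuitive" name undersells the abstraction, using three stock instances from Data.Monoid (example values chosen arbitrarily; (<>) is in the Prelude in recent GHC):

    import Data.Monoid (Sum(..), Product(..))

    listExample :: [Int]
    listExample = [1, 2] <> [3]              -- [1,2,3]   (identity: [])

    sumExample :: Sum Int
    sumExample = Sum 2 <> Sum 3              -- Sum 5     (identity: Sum 0)

    productExample :: Product Int
    productExample = Product 2 <> Product 3  -- Product 6 (identity: Product 1)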


That's a completely valid point of view. You're not wrong at all. I'm guessing, though, that you had learned it before from mathematics. My point is one of pragmatic, not theoretical, distinction. To those without a mathematical background (most people are not going to learn monoid unless they've studied abstract algebra), or who are less interested in mathematics in general, an obscure term like that is discouraging. I know that the Haskell community is heavily mathematical, and have little interest in "dumbing down" the language for the sake of those who are put off by theory, but it is a real tradeoff and one of the things that is likely to impede the introduction of Haskell into the mainstream.


I've learned Monoid in Haskell, not maths. It's just so simple and easy that there's really no dumbing down necessary.

Monad is simple and hard, but Monoid is simple and easy.


With respect to monoid, you're right. It's really quite simple when you get down to it. I don't have any arguments there. In fact, the fact that monoids are really so simple is kind of my point. In almost any other language, were such a thing to exist, monoids would not be called monoids but by some descriptive term which conveyed an intuitive sense of their meaning and use; it would be the purview of the mathematically inclined to write articles explaining how "actually, what we call the Joinable type class is known in abstract algebra as a Monoid, and its use extends beyond just joining things; for example..."

My point isn't really specifically about monoids; they're just an example of what often goes on in Haskell, which is that people put theory before practicality and mathematical (and hence often esoteric) definitions before practical, real-world definitions. Like I've said a few times, this isn't incorrect at all. Nor is it surprising given Haskell's origins, nor is it without purpose since it deepens your understanding of what's going on in the language. It's just a simple fact that the mathematical jargon is a turn-off to newcomers and those who don't feel they want to be forced to learn math while they're programming, or might think they're incapable of doing so.

As it turns out, I'm not one of those people; I love the mathematical side of Haskell and I love that I've learned what a Monoid is and developed an interest in type theory, category theory and all kinds of other things. But not everyone is like that, and that's the point I'm making.


Well, yeah, but... the term "monoid" already exists, and has a definite meaning. A different name might give people an intuition for it--but it will be a wrong one that they'll have to unlearn later, like the infamous burrito (not that you or anyone has suggested that monads be renamed burritos, I am happy to say!).


We must have different ideas about what "practicality" is.


>In almost any other language, were such a thing to exist, monoids would not be called monoids but by some descriptive term which conveyed an intuitive sense of their meaning and use

There is no such term, that is the point. Offering up misleading terms that do not convey a sense of their meaning is much worse than a word that is unfamiliar.

>My point isn't really specifically about monoids; they're just an example of what often goes on in Haskell, which is that people put theory before practicality and mathematical (and hence often esoteric) definitions before practical, real-world definitions.

But it isn't an example of that. It is quite bizarre to see people insist that this goes on, and give examples that do not support that claim, while being fully convinced in their proof.

>As it turns out, I'm not one of those people; I love the mathematical side of Haskell and I love that I've learned what a Monoid is and developed an interest in type theory, category theory and all kinds of other things. But not everyone is like that, and that's the point I'm making.

You don't need to be like that, that is the point we're making. I am not a math person. I am not a CS person. I am a high school drop out who taught himself to code in PHP and C. I learned haskell just fine. I learned monoids and functors and monads just fine. I am no more mathematically inclined now than I was before. I know nothing of category theory, and care nothing of it. They are very general abstractions that do not reflect a narrow, specific use case, and thus do not benefit from a word that describes some narrow, specific use case.


I sort of hate to say this, because it is going to sound snobby, and I do not believe in being snobby. But the undeniable truth is, there's a fraction of the world's population of programmers who simply do not have the particular mental traits that would allow them to ever be completely comfortable and confident with a concept as abstract as monad.

Some people will call me Satan now, and others will jump on what I said and say, "Hell yeah, the world is full of dumb blub programmers." But both those groups are misunderstanding me.

I think a programmer who cannot understand this level of abstraction can still do plenty of valuable things as a programmer. I would not call them dumb. They may be -- and many are -- fabulously creative, driven, capable and highly productive.

There's a certain type of programmer who is more comfortable with abstraction and whose brain is more wired to deal with these amorphous, unnamed concepts. The same kind of brain wiring is needed to go far with mathematics.

But as FP becomes more prominent, this is going to become a dividing issue. Some will not make the transition, or will do so only partially. I think it's great to try to communicate better where possible, but even the best communication is not going to completely erase the issue.


Isaac Asimov, in one of his essays, gave an analogy for those unfamiliar with, or perhaps frightened by the scary name of, the "complex" numbers: street addresses. Should programming languages get rid of that scary name and refer to complex numbers as "addressable"?


Hence why Monads have been named "Warm Fuzzy Things" in some Haskell papers about outreach.


> explaining it further as "something with an identity and an associative operation" really doesn't help much either

My 3rd grade daughter learns about associativity and identity. Is it too much to expect adults to not get all defensive over 3rd grade terminology?


What is so scary about monoid? A classical monoid is precisely a category with one object, hence the name. "Addable" and "Joinable" do not quite cut it - not all monoids are defined on numbers (or generalizations of them such as vectors or matrices) or sets (or generalizations of them such as categories or topological spaces).


Like I said, it's a scary term, because hearing the word "monoid" conveys exactly zilch about what it is, and it sounds strange and abstract. And like I said, the definition I gave is not a precise one, but it's an intuitive one. Once you have an intuitive understanding as a starting point, you can abstract to other things.

This is just my opinion, of course.


Most of what you learn in CS is a bunch of words whose meaning you have no idea of until someone gives you a precise definition. Deterministic finite automata, regular expressions, static typing, serialization, compilation, the singleton pattern, etc. are all terms we use every day in our profession, but it's not clear from the words alone what they mean. We read the definition, forget it, someone reminds us, we forget it again, we implement it and we remember. Same with monoids, functors or monads: we need to take some time to learn what they mean and then we can include them in our vocabulary.


Hearing the word "dog" conveys exactly zilch about what it is, and it sounds strange and abstract. Maybe we should call these animals "barkables".


Except that it doesn't sound strange and abstract, because it's a word that everyone is familiar with. My point is about accessibility, not theoretical correctness.


My point is that a monoid is a monoid. It's an abstraction that's so basic that it cannot be broken down. It's a concept that you learn, like how you learn what a dog is or what integers, loops, functions, sets, hashmaps, etc are.

A very large part of our job is to apply abstractions. I don't often hear lawyers complaining about how accessible the name of some law is, or from doctors about how accessible the name of some disease is. I've never heard an American football player say "We should call the pistol formation something else. Calling it pistol is potentially confusing". They just learn what a pistol formation is and carry on.

As programmers, abstraction is a very large part of our job. We owe it to ourselves to learn the basics and to improve our abilities with respect to our craft, even though sometimes it's hard.


Welcome to engineering. We use specialized jargon to talk about concepts that laymen might not find obvious, but are indispensable for us to get our work done.


I think some people should at least try to get over their math-phobia. I am trying to myself, and math is an uphill battle for me even without that kind of fear. The sloppy, opaque, inconsistent and often overloaded notation is an impediment, the 'let the reader infer most of this' proofs are an impediment... but if people are turned off to such a degree over a few names, they wouldn't last long in something which is such a mathy language anyway, relatable names or not.


You are using the exact reasoning I was talking about. Monoids are monoids. That is what they are. 99% of programmers are not familiar with them. If you call it "addable" or "joinable" or "appendable" then you are just making people think that one subset of some monoids is the definition of monoids when it isn't. They still don't know what monoids are; now they just also don't know what they are called. You are literally giving it an incorrect and misleading name. All that does is confuse people. You have to learn what monoids actually are even if you call them "addable"s. Rather than learning a misleading name for them, it is quite simple to learn a new term like "monoid". Considering there are really only three that people need to learn (monoid, functor, monad), this is not an overwhelming burden.


Check yourself before you try to say that "appendable" is an inappropriate name for Monoid.

http://hackage.haskell.org/package/base-4.6.0.1/docs/Data-Mo...

    Methods
      mempty 
      mappend
      mconcat

Haskell people like their mathy terms. Mathy terms aren't universally unambiguous ("group"? "ring"? "field"?), but they are mostly unambiguous within math. Haskell people tend to pretend Haskell is the same as math, ignoring the programming part of its heritage.


In practical Haskell, people rarely use mappend, eschewing it for the more generic (<>) operator. Personally I think it's exactly for the reason stated above: monoid is far more general than "appending".

In particular, it's easy to define a reverse monoid for any (non-commutative) monoid such that append becomes prepend. It's easy to construct monoids which have different spatial properties, like Diagrams' "stacking" monoid (they have many others, too; see this entire paper: http://www.cis.upenn.edu/~byorgey/pub/monoid-pearl.pdf). It's also easy to construct monoids which don't have any spatial sense at all, like set union.
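For what it's worth, the reverse monoid mentioned above ships as Dual in Data.Monoid; a minimal sketch:

    import Data.Monoid (Dual(..))

    -- Dual flips the arguments of (<>), turning append into prepend:
    prepended :: String
    prepended = getDual (Dual "world" <> Dual "hello")  -- "helloworld"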


Everyone hates the name mappend but it's still more commonly used than <> which iirc is relatively recent.


For the record, I agree. There's a lot of older code where `mappend` is used commonly. More accurately, I should have said that (<>) has taken modern coding style by storm.


Probably because 'mappend' was defined while thinking about usage for lists, but it is hardly appropriate for the general case. So, 'appending' 5 to 3 gives 8; it could be better.


The choices are to either abuse an existing word, or make up a new one (possibly a homograph).

What is '+'? Addition? Modular addition? Logical OR? Concatenation? Sometimes, any of these.

There will always be more concepts than distinct labels, since the space of concepts grows combinatorially (like a power set) in the number of words.


I didn't say it had to be plus.

I don't have the perfect answer, but append certainly seems like choosing a specific concept, rather than trying to come up with a more general name.


I am fully aware of what the typeclass defines. It is an inappropriate name. I don't think there is anyone who likes those names. Everyone uses <> instead of mappend. Append is inappropriate because you do not append lots of monoids, like Product and Sum for example.


> Are we seriously so pathetic as an industry that learning 3 new technical terms is a problem?

We are even more pathetic than that. If the underlying concepts are misleading but evoke a warm and fuzzy sense of familiarity (objects), we will accept them wholeheartedly. If the underlying concepts are mathematical, we will reject them as disconnected with our everyday needs.


Most forum debates about computer science can be replaced by pointers to Edsger Dijkstra's writings.

http://en.wikipedia.org/wiki/On_the_Cruelty_of_Really_Teachi...

http://www.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/EW...

example:

"""My next linguistical suggestion is more rigorous. It is to fight the "if-this-guy-wants-to-talk-to-that-guy" syndrome: never refer to parts of programs or pieces of equipment in an anthropomorphic terminology, nor allow your students to do so. ..

I have now encountered programs wanting things, knowing things, expecting things, believing things, etc., and each time that gave rise to avoidable confusions."""


>Are we seriously so pathetic as an industry that learning 3 new technical terms is a problem?

For most people, yeah, I think monads are a big hurdle. They look intimidating to outsiders.

I still have to admit that I like the approach Haskell has taken. Sure, it's harder to grasp the concepts if you don't have a background in math, but it's not like monads, monoids, arrows, and functors were thrown in there just to be pretentious. There's a whole lot of useful theory surrounding those concepts that can be used to the programmer's advantage.


Part of his critique is that they do use terms that people are familiar with but which are misleading, like "return".


So call it "pure". I agree that "return" is not the most fortunate term.


This is a very valid critique, but I don't remember ever hearing that (except from Haskellers!)


It's a fairly superficial matter, not worthy of a lengthy diatribe. One gets used to names.

After all, compilers don't compile; they translate.


[deleted]


As someone who went through the same thing, my best advice is, don't read monad tutorials; just write monadic code. Reading too much can just be confusing and might make you feel like an idiot for not understanding it yet. In the beginning it might be weird, and you'll no doubt spend a great deal of time puzzling over obscure type errors, but it will eventually become intuitive, if you're actually writing code and working through it. Haskell is a theory-heavy language, but it's still a programming language, which is actually meant to do things. There's no substitute for experience.

Perhaps try going through "Write Yourself a Scheme" which uses monads from the outset, or look at "Monad Transformers Step-by-Step" (be warned though, it starts off mostly simple but then makes a sudden and somewhat jarring leap forward). Try to implement a stack with "push" and "pop" monadic operations (use this as a starting point: http://brandon.si/code/the-state-monad-a-tutorial-for-the-co... but keep in mind it's much more important to WRITE the code, and play with it, than to try to understand how it's all working just via explanations).

For what it's worth, here's my ten cent explanation of monads:

A monad is an interface for containers. A type which implements this interface must have two methods: `return`, which inserts an object into a container, and `bind`, which says what should happen when we use the value in one container to create a new container.
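A minimal sketch of that reading, using Maybe as the container (safeDiv is a made-up example):

    -- return puts a value into the container; (>>=), i.e. bind, uses the
    -- value in one container, if there is one, to build a new container.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    calc :: Maybe Int
    calc = return 10 >>= \a -> safeDiv a 2 >>= \b -> safeDiv b 0
    -- Just 10, then Just 5, then Nothing: the first failure short-circuits.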


I know it's a running joke how many monad tutorials and analogies there are. And I get the humor and very much enjoy the joke.

However, I was able to gain a solid understanding of monads by working through a number of those tutorials. Over that period of time, I came to suspect, and then eventually confirm, that I had previously created my own monad for a particular purpose in C# (my case was checking the value of something in an XML tree, an attribute of an element of an element of an element, where the attribute might be missing, or its parent element might be missing, or its parent, and so on. So it was much like Bracha's ".?" sugar example).

So, monad tutorials actually (eventually) made the concept clear to me. It's fun to make fun of them, and they deserve to take a little heat, but we have to give them credit where credit is due too.


The container analogy does break down in some instances. For example, using monads to model a workflow. I've come to accept 'Computational Context' as the best descriptor so far.


I recommend: http://blog.sigfpe.com/2006/08/you-could-have-invented-monad...

But in the end I think for most people 'understanding' the concept of monads is just something that is not to be had within a couple of hours. It takes a little bit of patience thinking about them and using them for a while.


My first non-Haskell monad tutorial was this: dorophone.blogspot.com/2011/04/deep-emacs-part-1.html

Very 'operational', zero magic, good for getting your hands dirty.

Dan's tutorial you linked is one of my favorites too.



The core problem seems to be that we just don't (yet) have good words for this particular space of abstractions. Many programmers have thought about or solved the same problems monads address, but they haven't mentally labelled the concept, and we certainly haven't agreed as an industry on these labels. So we're all struggling to talk about things we don't have terms for.


"The core problem seems to be that we just don't (yet) have good words for this particular space of abstractions"

How about "functor," "monad," "applicative" etc.?


You've probably read too many monad tutorials that make terrible analogies. Try reading this instead:

http://dev.stephendiehl.com/hask/


Yes, I can comfortably say "any doc by Stephen Diehl is worth reading" :)


[deleted]


Why isn't it helpful to you?


Have you read "Monads are Elephants"? That helped me a great deal.

http://james-iry.blogspot.com/2007/09/monads-are-elephants-p...


In his talk, when discussing currying, he mentioned that relying on the type system is not a good thing. Does anyone know the reasons behind his view?


Currying can obfuscate what is applied to what. Consider in any ML language "a b c d" – we can see that "a" is a function, but we have no idea of its arity. Uncurried, it could be: "a(b, c, d)", "a(b, c)(d)", "a(b)(c, d)", "a(b)(c)(d)" (oh, that's the curried form again). Especially when function definitions are implied through pattern matching, it is hard to understand the contract of a function at a glance.

As a reader of that code cannot easily understand whether the number and type of arguments is correct, one has to rely on the type checker that everything will work out.

However, this is more of a criticism of ML syntax than of currying – all things are good in moderation.


It's actually simpler in some ways, because we know that "a" must have arity 1. What we know is that "a" should be a function which takes a "b", that "a b" should be a function which takes a "c", and "a b c" should take a "d".

As a practical consideration, this rarely if ever becomes an issue, and if it does, the type checker will tell you straight away.

Type annotations can make clear what isn't intuitively clear with a function's signature, and since the correctness of the type checker is rigorously proven, I don't see anything particularly wrong with "relying" on the type checker.


The argument that every function has arity 1 is technically true (this is the whole point of currying) but is not useful when definitions like "let a b c = ..." suggest other semantics. It's possible you've had a different experience with this, but I tend to get confused when the semantic argument list isn't delimited.

There is nothing wrong with relying on the type checker, except that it tends to add cognitive overhead.


In my experience, the more you use currying, the more intuitive it becomes (surprise, surprise). In any case, you very quickly develop an understanding that `let foo bar baz = qux` is just syntactic sugar for `let foo = \bar -> \baz -> qux`. Of course, if you want to simulate higher-arity functions, you could just use tuples. It's perfectly acceptable to write `let foo(bar, baz) = qux`.
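A short sketch of the two styles side by side (names made up):

    -- Curried: partial application falls out for free.
    add :: Int -> Int -> Int
    add x y = x + y

    addFive :: Int -> Int
    addFive = add 5

    -- Tupled: a genuine single argument, simulating arity 2.
    addT :: (Int, Int) -> Int
    addT (x, y) = x + y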


Every function having arity 1 is reducing complexity. It's extremely uniform, and let a b c = ... is merely syntactic sugar for let a = (lambda b. (lambda c. ...)).

It's very natural, and as Lisp/Scheme/Racket shows, it's perfectly fine in a dynamically typed context as well.


"let a b c = ..." doesn't suggest other semantics. It's saying: "a applied to b, and then applied to c, equals ...". Note Haskell makes an effort to have the LHS of definitions imitate the exact syntax of function application. Patterns use the same syntax as data constructor applications.


I'm not following you here. In ML-like languages, a b c d is clear-cut: it means (((a b) c) d). No ambiguity whatsoever.

Bracha's critique, as usual, is missing the point.


Agreed. a b c d is a function applied to three arguments that returns a value. Functions are values, that's the whole point.

For some reason it seems the parent post would like to specifically indicate the case where a function application results in a value that is specifically not a function? Seems quite strange to me.


That's not true. At least in SML, every function only takes one argument. If a function has an arity higher than 1, it is because it takes a single tuple as an argument. But you can't use the sugar for currying and tuple arguments interchangeably.


It's a problem with ML syntax, but one that can easily be overcome with parentheses. Sort of like how a circumspect C programmer uses parentheses in complex mathematical expressions rather than relying on everyone being able to correctly remember complex order of operation rules.

OTOH, pipeline operators make a good case for currying. There really is something nice about being able to write

  sliceOfBread
  |> smearWith peanutButter
  |> smearWith jelly
  |> topWith sliceOfBread
  |> cutInHalf
  |> eat
instead of

  eat(cutInHalf(topWith(sliceOfBread, smearWith(jelly, smearWith(peanutButter, sliceOfBread)))))


I just want to point out that the pipeline operator (or, more accurately, the forward application operator), is not provided by many ML implementations, but it's trivial to define it yourself. Here it is in Standard ML:

    infix |>;
    fun x |> f = f x;
The definition is also similar in Haskell:

    x |> f = f x
And in OCaml:

    let (|>) x f = f x;;
F# and Elm provide this operator out of the box.


Why isn't it used more in Haskell?


For one thing, nobody could agree what it should be named...

http://thread.gmane.org/gmane.comp.lang.haskell.libraries/18...


> more of a criticism of ML syntax than of currying – all things are good in moderation.

I don't follow this. My understanding is that currying is pure syntactic sugar: it's a cheap way to express partial application.

What am I missing?


It is not syntactic sugar. It is one of two demonstrably equivalent ways to emulate functions of two arguments. The demonstrable equivalence comes from the equational theory of cartesian closed categories. The need to emulate functions of more than one variable comes from the fact that only functions of one variable are a native concept.
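That equivalence is witnessed directly by curry and uncurry, re-derived here with primed names to avoid clashing with the Prelude:

    curry' :: ((a, b) -> c) -> (a -> b -> c)
    curry' f x y = f (x, y)

    uncurry' :: (a -> b -> c) -> ((a, b) -> c)
    uncurry' f (x, y) = f x y

    -- The two are mutually inverse, which is the equational content of
    -- the cartesian closed structure the parent comment mentions.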


The word is encode, not emulate. Native support isn't any more concise, in fact it is more verbose when partial application is involved.

So why complicate the language with native support for a feature whose encoding on top of one argument functions is concise, elegant, and works well?


> Native support isn't any more concise, in fact it is more verbose when partial application is involved.

I only know too well. Everytime I write stuff like

    using namespace std::placeholders;
    std::bind(foo, bar, _1, _2, baz, _3, _4);
I wish I were using an applicative language instead.


This is not a problem in Haskell, as it makes a distinction between the types of all these different functions. The uncurried forms take tuples (a distinct type) as arguments whereas the curried form does not.


In ML, they are all different functions as well.


> all things are good in moderation

What about poison? Rabies? Rabies in moderation actually sounds quite appealing.


Two words: medicines and vaccines. Of everything there is a “too little” and a “too much”, which is especially true with programming paradigms. Some people write procedural code where OOP should be used, others abuse OOP for something that should have been done in a functional manner, and sometimes functional programs should rather be expressed with procedural code.


Thank you for your explanation.


Can we get an [audio] indicator?


YES, I've been waiting for this! Thanks so much! :)


TL;DR FP hater talks about FP.


I'm going to save these HN comments for 5 years' time, when the hype on functional programming has died down a bit. It will be very humorous to read this again then.


No chance. The hype will recurse forever. Even on stackoverflow.


Deploy the canaries!


You're silly.



