
Currying is one of those cases where the code is the explanation

I think in many cases this isn't right, e.g. monads. The reason flatMap() "flattens" is that "flattening" is really just sequencing; denesting the type using a function requires a sequenced function call: f(g(..))

This applies to many of these "functional design patterns"... they're just ways of expressing often trivial ideas (such as sequencing) under some constraints.



While I _think_ I understand monads on some rudimentary level through how join and bind operate, your "just sequencing" doesn't tell me anything. And this is a problem with a lot of these texts. Maybe that's trivial to the writers, but it makes me feel even dumber when I cannot understand a concept I already understand, lol.


This is one of those things where looking at the type signature hard enough eventually gives the game away, but most writing on it sucks:

    bind :: m a -> (a -> m b) -> m b
Because that function in the middle takes an `a`, your implementation of `bind` needs to be able to take an `m a` and pull an `a` out of it, which means it also has to evaluate however much of `m` is needed to actually get to that `a`.

Because that function in the middle returns an `m b`, binding again with a function `b -> m c` requires you to pull `b` out of `m b`, which in turn forces pulling an `a` out of `m a` to make progress. This is where you force sequentiality — you can only evaluate the `m` in `m b` after you've evaluated the `m` in `m a`.


Caveat: there is the trivial case of calling the middle function zero times. Which does happen (in the case of a Maybe type) but presumably any useful code needs to be able to call the function sometimes.
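Both points can be made concrete with Maybe. A small sketch (`safeDiv` is an invented helper for illustration):

```haskell
-- Sketch: bind on Maybe forces step-by-step sequencing.
-- safeDiv is an invented helper for illustration.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- The second division can only run after the first produces an `a`:
ok :: Maybe Int
ok = safeDiv 100 5 >>= \a -> safeDiv a 2    -- Just 10

-- The zero-call case: bind never calls the function at all when it
-- can't pull an `a` out of Nothing, so the whole chain short-circuits.
bad :: Maybe Int
bad = safeDiv 100 0 >>= \a -> safeDiv a 2   -- Nothing
```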


This makes a lot more sense than anything I've read about this in the past, thanks for the explanation!


Thanks, that's reassuring. I've long been frustrated by just how bad monad posts are in general, and been meaning to write something up about it that makes it clearer. It's good to know that I'm at least going in the right direction :)


A big problem with "just sequencing" is that there are a lot of ways of sequencing that either don't require the complexity of a monad or for which the monad interface is insufficient.


The former is probably uncontroversial but I'd love to see some examples of the latter since my understanding was that monads are essentially the generalization of sequentiality.


One obvious place to look is at things that are almost a monad but can't be lawful, and which might nonetheless be useful. An example that comes to mind is trying to use Set as a monad (for nondeterminism like the list monad, but with deduplication). You can't do that for some potentially avoidable technical reasons (it would have to be restricted to things you can Ord), but more importantly, Set can't be a monad because Set can't be a functor: `x == y` does not actually imply that `f x == f y` (even for reasonable definitions of Eq), so `fmap (f . g)` might not be equivalent to `fmap f . fmap g`.


I thought it was the case that you can't implement the Monad type class using the built in Data.Set implementation of Sets but there's no reason in principle why you couldn't build a Set Monad that works that way, right? Perhaps I am mistaken so I will investigate further.


There is a reason in principle that you can't build a Set Monad that works that way: it violates the functor laws.

In order to be a functor, you need `fmap (f . g) == fmap f . fmap g`.

The problem is that some functions (for instance, Data.Set.showTree) can produce different results for two values that compare equal.

For instance, imagine we have a set containing (only) the lists `[1,2]` and `[2,1]`; let's call it s. How many elements are in `fmap (Data.Set.showTree . Data.Set.fromList) s`? Two, because the result of fromList winds up being order-dependent in the structure of its internal tree, even though both Sets represent the same set of values. On the other hand, how many elements are in `(fmap showTree . fmap fromList) s`? Just one, because the Set has a chance to "see" the output of fromList and notice that the two results are equal according to Eq.

Does this violation of the functor laws matter? Maybe; likely not; it depends on your use case.

But mathematically it means that the thing we are working with is not a functor, and so the thing we are working with is also not a monad (as all monads are functors).
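A self-contained sketch of the same failure, using an invented case-insensitive string type in place of showTree (the essential ingredient is any function that distinguishes Eq-equal values):

```haskell
import qualified Data.Set as Set
import Data.Char (toLower)

-- CI: strings that compare equal case-insensitively, but whose
-- representations remain distinguishable (like Sets under showTree).
newtype CI = CI String deriving Show
instance Eq CI  where CI a == CI b = map toLower a == map toLower b
instance Ord CI where compare (CI a) (CI b) = compare (map toLower a) (map toLower b)

reveal :: CI -> String   -- plays the role of showTree: observes the representation
reveal (CI str) = str

s :: Set.Set String
s = Set.fromList ["AB", "ab"]

composed, separate :: Set.Set String
composed = Set.map (reveal . CI) s        -- both raw strings survive: 2 elements
separate = Set.map reveal (Set.map CI s)  -- Set.map CI deduplicates first: 1 element
```

Since `composed` and `separate` differ, mapping with the composition is not the same as composing the maps, which is exactly the functor law violation.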


I prefer to say that showTree violates the abstraction. You can have s1 = s2 but not f(s1) = f(s2). So f (here, showTree) is not a real function (or = is not a real equality). But showTree is a debug function, so it doesn't count. It's like saying in C that a = 13 and b = 13 isn't a real equality because &a and &b are not equal. There is the underlying structure and the structure after the equivalence. You have to know what you're talking about.


I'm not unsympathetic to that position, but I think it's ill advised to take it as far as "Functor means functor up to Eq". I grant that showTree in particular is a debug function that will usually be called at the end of any meaningful computation, and handwaving it in particular away is probably fine. I don't think we can say the same of something like splitRoot, which instead breaks the abstraction for the sake of efficiency.

> You have to know what you're talking about.

And the language (/ecosystem) gives you no way of specifying, so we have to be somewhat conservative. I've actually been thinking (partly driven by trains of thought such as under discussion here) that it might be very interesting to have a language that let you reason explicitly about different sorts of equality (tentatively defined, in the context of the language, as indistinguishability relative to some interface).


It's called math.



We need an explanation of why we should care about support for currying. If it is really just there to support variadic argument lists, it is a big hammer for a little problem.


And precisely because of the code sample, it should be obvious that currying is not the same as variadic function arguments.

Instead, it allows for very concise partial function application, as demonstrated by the code. I can just call any function with a subset of its arguments, and it automatically returns a “new function” which you can call with the remainder of the arguments. It allows you to reason about calling functions in a different way.
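For example, in Haskell (toy function names):

```haskell
add :: Int -> Int -> Int
add x y = x + y

-- Supplying a subset of the arguments yields a "new function":
addTen :: Int -> Int
addTen = add 10

-- Which makes partial application at call sites very concise:
results :: [Int]
results = map (add 10) [1, 2, 3]   -- [11, 12, 13]
```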

Currying is one of those concepts done very well in languages such as Haskell (not a coincidence, as both derive their name from Haskell Curry).

But saying it’s a “big hammer to implement variadic function arguments” is like saying that pattern matching is a big hammer to implement a switch statement. Yes, it provides that functionality, but also much more in a very elegant manner.


Saying "it allows for very concise partial function application" does nothing but repeat the definition. It does not, in particular, offer any reason to want that. What is so special about the first argument, that I want to fix it? Why not the third? Why is what I do to fix the third not just as good for the first?

Pattern matching is a good example of a language feature included because they could not figure out how to provide features expressive enough to be composed to implement the feature in a library.


In a language like Haskell, pattern matching was explicitly chosen to be a primitive operation. It's not that there's no way to put it in a library; it's that it was chosen to be one of the small set of ideas everything else is described in terms of. Along with allocation and function application, you've got the entirety of Haskell's evaluation model. (Note: not execution model. That needs a bit more.) Having such a small evaluation model probably should be taken as evidence the primitives were chosen well.


> Note: not execution model. That needs a bit more.

Actually... not really? You need the foreign function interface to have anything useful to execute, but (unless you're talking about something else?) the execution model is basically just a State monad carrying a unique magic token, built on top of the same evaluation model as everything else.


The execution model needs something that actually drives execution. There needs to be something that explains how IO actions are actually run. The state approach kind of works until you need to explain multiple threads, then it falls over.

Edward Kmett did some work to describe IO as an external interpreter working through a free monad. That approach provides very neat semantics that include multiple threads and FFI easily.

But IO needs something to make it go, and that something needs capabilities that aren't necessary for the evaluation model.


> The execution model needs something that actually drives execution.

Er, yes: specifically, execution is driven by trying to pull the final RealWorld token out of an expression along the lines of `runState (RealWorld#) (main)`, which requires the evaluation of (the internal `RealWorld -> (RealWorld, a)` functions of) IO actions provided by FFI functions (hence why you need FFI to have anything useful (or more accurately, side-effectful) to execute).
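A toy model of that idea (not GHC's actual definitions, which use an unboxed State# RealWorld token):

```haskell
-- Toy model of IO as state-passing; not GHC's real implementation.
newtype RealWorld = RealWorld ()   -- opaque token, never inspected

newtype IO' a = IO' (RealWorld -> (RealWorld, a))

returnIO :: a -> IO' a
returnIO a = IO' (\w -> (w, a))

-- Threading the world token through bind is what forces each action
-- to be evaluated before the next one can run.
bindIO :: IO' a -> (a -> IO' b) -> IO' b
bindIO (IO' m) k = IO' $ \w ->
  let (w', a) = m w
      IO' m'  = k a
  in m' w'

runIO :: IO' a -> a
runIO (IO' m) = snd (m (RealWorld ()))
```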

> until you need to explain multiple threads

I don't. It's a terrible idea that's effectively the apotheosis of premature (and ill-targeted) optimization as the root of all evil.


I'm not trying to dodge your question, but the answer to your question is that until you work with it for a while you're not going to understand it. Any blog-sized, bite-sized snippet isn't impressive. You have to work with it for a while.

I speak many computer languages, and one of the ways I measure them is, "what do I miss from X when using Y?" The answers will often surprise you; the thing you'd swear up and down you'd miss from X you may never think of again if you leave the language, and some feature you hardly consider while you're in the thick of programming in X may turn out to be the thing you miss in every other language for the rest of your life. For me, one of the things I deeply miss from Haskell when programming elsewhere is the fluidity of the use of partial function application through the currying syntax. I see so many people trying to force it out of Haskell into some other language but there just isn't any comparison to the fluidity of

    map someFunc . filter other . map (makeMap x y) $ userList
Currying enables that syntax. If you don't see how, I'm not surprised. Sit down and try to write a syntax extension for an Algol-descended language that makes it work that well. Be honest about it. Run some non-trivial expressions through it. What I show above you should consider the minimum level of expression to play with, not the maximum. If you pick something like Python to work in, be sure to consider the interaction with all the types of arguments, like positional, keyword, default, etc. It can absolutely be done, but short of re-importing currying through the back door I guarantee the result is a lot less fluid.
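To make that pipeline concrete, here it is with invented stand-ins for the names above (makeScale playing the role of makeMap):

```haskell
-- Invented stand-ins for the names in the pipeline above:
userList :: [Int]
userList = [1 .. 10]

other :: Int -> Bool
other = even

someFunc :: Int -> Int
someFunc = (* 2)

makeScale :: Int -> Int -> Int -> Int   -- playing the role of makeMap x y
makeScale x y n = n * x + y

-- The partially applied (makeScale 3 1) slots straight into the pipeline:
result :: [Int]
result = map someFunc . filter other . map (makeScale 3 1) $ userList
-- [8,20,32,44,56]
```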

The question about "what is special about the first argument" is backwards. The utility of the first argument is something you choose at writing time, not use time. The reason why map's first argument is the function to map on and the second is the list to map over is nothing more and nothing less than most of the time, users of map use it as I show above. There's no deep mathematical or group theory reason. If you don't like it there are simple functions to change the orders around, and the goal of picking function argument orders is just to minimize the amount of such functions in the code. Nothing more, nothing less.


OK, thank you. The key seems to be that argument lists for functions in a language with easy currying are carefully designed so as to make currying maximally useful.


That's an example of why Haskell code is often hard to read, and why I'm glad this isn't available in other languages. Why not write out the callback given to map using a let block? Then you can give meaningful names to intermediate values so it's easier to see how the pipeline works.

(Though, functional programmers would probably pass up the opportunity and use one-letter variable names.)


"It depends". My long pipelines were still usually nice to read because I had context-relevant names and I didn't tend to name them with single letters. You could still read off from them what they were doing. However honesty compels me to admit that by Haskell standards I tended to write long variable names. Not everywhere; there's a lot of "f" in Haskell for "functions that I only know about them that they are funcitons" for the same reason one you aren't winning in C by insisting that every for loop has to use a descriptive variable rather than "i". But I tended towards the longer.

In real code I probably would have named it, but example code is always silly.


Is this what you mean?

    let f = makeMap x y
    in map someFunc . filter other . map f $ userList
I suspect not as it certainly doesn't seem to improve the situation. Perhaps you could elaborate?

Edit to add: I was focused on the "write out the callback given to map using a let block" part and not the meaningful name part, but that's perhaps what's confusing about this to me, as (makeMap x y) seems like the most clear name possible here, what else could you do? This?

    let mapOfXy = makeMap x y
    in map someFunc . filter other . map mapOfXy $ userList
It only obscures things.


The feature is not called currying but partial application; it is very nice and intuitive.


Couldn't I "curry" a function

    f(x,y)
to "partially apply" it on x=3 with just:

    function(y) { return f(3,y); }
?

And then I've "partially applied" f. Is this what currying is?

The syntax I posted is agnostic about which arguments you're partially applying; isn't that superior to only being able to provide the first argument?


Currying is turning a function with arity n into a sequence of unary functions.

  f(x,y) = ... // arity 2
  f = \x -> \y -> ... // sequence of two unary functions
  f x y = ... // more compact representation of the above
Regarding partial application, your JS (?) example is basically what happens under the hood with Haskell, but without the ceremony:

  f x y = ...
  fByThree = f 3
  fByThree y = f 3 y
Those last two are equivalent, but you don't have to include the y parameter explicitly, because f 3 returns a function that expects another parameter (like the function you wrote out explicitly).

And the Haskell version is more general, since it doesn't require a unique function to be written for each possible partial application. Of course, you can do this in languages with closures:

  function fByX(x) {
    return y => f(x,y);
  }
  fByThree = fByX(3);
But in Haskell that extra function isn't needed, it's just already there and called f. Regarding your last statement, there are also combinators in Haskell that allow you to do things like this:

  fXByThree = flip f 3
  // equivalent to:
  fXByThree x = f x 3
  // JS
  function fXByThree(x) { return f(x,3); }
  // or
  function fXByFixedY(y) { return x => f(x,y); }
  fXByThree = fXByFixedY(3);
So I'm not sure it's strictly superior, it is more explicit though.
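Putting the flip example together as runnable Haskell (f is a toy function invented for illustration):

```haskell
-- Toy function for illustration.
f :: Int -> Int -> Int
f x y = x * 10 + y

fByThree :: Int -> Int
fByThree = f 3          -- fixes the first argument: fByThree y == f 3 y

fXByThree :: Int -> Int
fXByThree = flip f 3    -- fixes the second argument: fXByThree x == f x 3
```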


Thanks!


This is why I said to be sure to consider all the things you can do with function arguments. You can't in general assume that f(3) for a two-argument function can be read by the compiler as a curry, because the second argument may have a default. You also have to watch out in dynamic languages because the resulting function, instead of a value, may propagate a long way before being caught, making refactoring a challenge. (In Haskell it'll always blow up because the type system is strong, although you may not get the nicest error.) Imagine you have a whole bunch of code using this approach, and then you set a default value for the second argument. Imagine you're applying it to a whole bunch of code that wasn't written to optimize the first parameter as the one you want to partially apply.

The end result is you need a lot more noise in the partial call to indicate what it is you are doing, no matter how you slice it, and it interacts poorly with default parameters (which almost every language has nowadays) anyhow.


Currying support is orthogonal to how function arguments work. Some languages (e.g. OCaml according to this question[1]) combine named parameters with currying, allowing partial application with any of the function's arguments.

[1] https://stackoverflow.com/questions/3015019/is-there-a-progr...



