> whereas it's not at all clear what ordering applies to the !s - it's generally "evaluation order"
But the evaluation rule for the ordering of effects is fairly simple: left-to-right (taking paren nesting into account) and then top-to-bottom. It's exactly the same as with the `<-`s, just with an additional left-to-right dimension.
I brought up "evaluation order" in languages because I wanted to drive home that this left-to-right, top-to-bottom order is natural for developers (C/C++ is a well-known outlier among imperative languages here, with a full stable of unspecified and undefined behaviors, but Java, Python, JavaScript, Rust, etc. all evaluate in that order). But it's not actually essential that any of the non-monadic code obey that evaluation order. You don't rely on evaluation order any more than on the sequencing provided by monads. All the expressions in the middle could be lazily or eagerly evaluated; it doesn't matter.
You bring up a good point with closures in the form of anonymous functions/objects, but that is also the only place where I would argue it's confusing for a developer coming from an imperative background (some of whom might expect it to be an exception to the left-to-right, top-to-bottom rule, as it is with async-await in JavaScript, where the effect runs when the closure is invoked rather than where it is created). But you could just disallow `!` there, or require additional annotation of the anonymous function, as happens in most languages, e.g. Rust and JS. Even there the left-to-right, top-to-bottom rule could still apply; it might just confuse some developers who are used to anonymous functions being an exception.
Everywhere else, left-to-right, top-to-bottom not only gives a concise, evaluation-independent (up to the sequencing constraints imposed by the monad) description of how the effects will be executed, a description that can be made universally true if you're willing to tolerate it for closures or simply disallow `!`s in anonymous function bodies; it also agrees with what developers coming from other languages would expect.
Put another way, I don't think most developers will be confused by the execution order of effects in `!` code that doesn't involve anonymous functions.
I think you could ask any developer, even one unfamiliar with FP, in what order they think the effects preceded by a `!` will run in the following expression, and the overwhelming majority would get it right.
    MyMonad {
      x = !effectfulOp0
      someFunction x !(effectfulOp1 x !effectfulOp2) !effectfulOp3
    }
Again you don't need to follow what the evaluation is doing operationally. You just need to do a syntactic scan. No matter what evaluation strategy is used, the order of effects will always be `effectfulOp0`, `effectfulOp2`, `effectfulOp1`, and then `effectfulOp3`. There is no operational reasoning here at all; it's purely syntactic and local. This is true even for monads that seem to have a weird order of effects, such as the reverse state monad: you just have to be very specific about what the effect actually is (the addition of thunks, in the case of the reverse state monad). The same caveat applies to do-notation, which is why lazy IO is so confusing even though technically the monadic sequencing of effects is still being followed.
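To make that concrete, here's a minimal sketch of how the block above could desugar into ordinary Haskell do-notation (the types and this particular desugaring are illustrative assumptions on my part; the point is that the binds read straight off the syntactic scan):

    -- Hypothetical desugaring of the MyMonad block into plain do-notation.
    example
      :: Monad m
      => m a                  -- effectfulOp0
      -> (a -> b -> m c)      -- effectfulOp1
      -> m b                  -- effectfulOp2
      -> m d                  -- effectfulOp3
      -> (a -> c -> d -> m e) -- someFunction (assumed monadic result)
      -> m e
    example effectfulOp0 effectfulOp1 effectfulOp2 effectfulOp3 someFunction = do
      x  <- effectfulOp0
      r2 <- effectfulOp2      -- innermost ! inside the parens binds first
      r1 <- effectfulOp1 x r2
      r3 <- effectfulOp3
      someFunction x r1 r3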
Indeed this distinction between operational and denotational reasoning has nothing to do with syntax (as it shouldn't, since those are semantic questions), but rather with your choice of monad. As Conal Elliott rightly points out, monadic IO is intrinsically a dismal failure if you want denotational rather than operational semantics to begin with, even if you were to write everything out with bind syntax. The problem is not with the syntax but with the monad itself (Conal would probably prefer scare quotes around the word monad when referring to monadic IO).
At the end of the day I view the differences between async-await syntax and do-notation as a difference in degree rather than kind (and indeed this is usually how I explain do-notation to developers familiar with async-await), whereas I think there's a quantum jump from normal bind/flatMap nesting to do-notation. However, I view the syntactic affordance given by async-await syntax as a much larger win, especially if your language already starts from a block-inspired syntax with variable declaration statements rather than a lambda calculus-inspired syntax with let expressions.
> I brought up "evaluation order" in languages because I wanted to drive home that this left-to-right, top-to-bottom order is natural for developers (C/C++ is a well-known outlier among imperative languages here, with a full stable of unspecified and undefined behaviors, but Java, Python, JavaScript, Rust, etc. all evaluate in that order). But it's not actually essential that any of the non-monadic code obey that evaluation order. You don't rely on evaluation order any more than on the sequencing provided by monads. All the expressions in the middle could be lazily or eagerly evaluated; it doesn't matter.
My point is that you are in much the same place (and have the same difficulties in understanding and refactoring) as if you were reading code in a language with pervasive side effects. In an imperative language all the effects happen left-to-right, top-to-bottom - but if you're using monads at all it's presumably because you find that kind of code difficult to understand or maintain.
> You bring up a good point with closures in the form of anonymous functions/objects, but that is also the only place where I would argue it's confusing for a developer coming from an imperative background (some of whom might expect it to be an exception to the left-to-right, top-to-bottom rule, as it is with async-await in JavaScript, where the effect runs when the closure is invoked rather than where it is created). But you could just disallow `!` there, or require additional annotation of the anonymous function, as happens in most languages, e.g. Rust and JS. Even there the left-to-right, top-to-bottom rule could still apply; it might just confuse some developers who are used to anonymous functions being an exception.
It happens anywhere you have control flow - I thought of closures first because in functional style that's the most natural way to have control flow, but if you're looking to make this something that works the way imperative programmers expect then they'll expect to be able to use if/then/while/... . Even if you only allow expressions, many programmers would expect && and || to have short-circuiting behaviour that affects any effects in their operands.
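For instance, here's a sketch in Haskell (with hypothetical isCached/expensiveLookup stand-ins of my own) of how a naive left-to-right reading of `!a && !b` diverges from the short-circuiting an imperative programmer expects:

    -- Hypothetical effectful predicates, purely for illustration.
    isCached, expensiveLookup :: String -> IO Bool
    isCached _ = pure False
    expensiveLookup k = putStrLn ("looking up " ++ k) >> pure True

    -- A naive left-to-right desugaring of `!isCached k && !expensiveLookup k`
    -- sequences both effects before applying (&&), so expensiveLookup runs
    -- even when isCached is False:
    naive :: String -> IO Bool
    naive k = do
      a <- isCached k
      b <- expensiveLookup k
      pure (a && b)

    -- The short-circuiting an imperative programmer expects:
    shortCircuiting :: String -> IO Bool
    shortCircuiting k = do
      a <- isCached k
      if a then expensiveLookup k else pure False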
Yes, simple, straight-through code is easy to understand in this style (although even then, I'd argue that having to scan every full line for !s slows you down a lot compared to being able to scan down a single column for <-s) - but simple, straight-through code is easy to understand in any style. The measure of a programming style is how well it handles complex, messy code.
> Indeed this distinction between operational and denotational reasoning has nothing to do with syntax (as it shouldn't, since those are semantic questions), but rather with your choice of monad.
Disagree. Whether something is a statement or an expression is very much a question about syntax, but also very much a question of whether you reason operationally or denotationally.
> As Conal Elliott rightly points out, monadic IO is intrinsically a dismal failure if you want denotational rather than operational semantics to begin with, even if you were to write everything out with bind syntax. The problem is not with the syntax but with the monad itself (Conal would probably prefer scare quotes around the word monad when referring to monadic IO).
IO values (or rather "values") have extremely weak denotational semantics (and almost no algebraic properties), no argument there. But I'd actually say that that makes it even more important to put the monad laws front and center when working with IO, since they're pretty much the only laws that IO has. If you want to change the sequence of IO actions that your program performs, you can only reason about that operationally, but if you want the program to perform the same (opaque) sequence of IO actions, you can do that denotationally, and it turns out that's enough for a lot of the refactorings you would want to perform.
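For example (a small sketch): associativity is exactly what licenses extracting a run of IO actions into a named helper while guaranteeing the program still performs the same opaque sequence of effects:

    -- Associativity: (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)
    before :: IO ()
    before = do
      name <- getLine
      putStrLn ("hello " ++ name)
      putStrLn "bye"

    greet :: IO ()            -- extracted helper
    greet = do
      name <- getLine
      putStrLn ("hello " ++ name)

    after :: IO ()            -- same sequence of IO actions as 'before'
    after = do
      greet
      putStrLn "bye"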
All that said, IO is not remotely the most important use case for monads, and for many applications I'd actually be ok with using something like monadless solely for IO, if the codebase stuck to traditional for/yield/<- style for more semantically important monads.
> but if you're looking to make this something that works the way imperative programmers expect
I think you're misunderstanding me, and I regret bringing up the appeal to "imperative programmers." That was simply to illustrate that the semantics of the syntax are immediately accessible to the majority of programmers who have never encountered denotational semantics. But if denotational semantics are what's important to you, let me back up, because the appeal to "imperative programmers" is not the purpose of `!`; the purpose is rather what I was getting at when I wrote:
> having to contort perfectly fine code into a for-comprehension when I introduce some effects simply because I didn't write my pure code with the Identity monad the first time around.
The perspective I'm coming from is based on Moggi's original paper, which is to realize that, semantically, monadic code is simply a generalization of "pure" code (an underspecified term here, but I think we both understand what it's meant to refer to), and to have the syntax reflect that generalization rather than draw the sharp divide between monadic code and pure code that `do`-notation introduces (most noticeably by forcing A-normal form). You still get that division at the type level and through various fencing syntax, which is good for making sure you don't lose purity guarantees, but it's important to realize that these things share the same fundamentals (this is why e.g. Haskell can claim to be pure even in the presence of effects, which is confusing to newcomers who view do-notation as a completely different part of Haskell).
Indeed, I'll make a perhaps bold claim: this `!`/async-await syntax is a far more faithful representation of the denotational semantics laid out in Moggi's original paper on the semantics of monads and computations than do-notation is.
So since we're on the subject of denotational semantics, let's start with the denotational semantics of monads. When considering denotational semantics, what is a monad? We can appeal to Moggi's original presentation using Kleisli triples, most notably the Kleisli extension operator and the unit natural transformation from the identity endofunctor (for Haskell programmers these are the categorical versions of the reverse bind operator and pure). In particular, Moggi gives a compelling overview of how to achieve a meaningful denotational semantic mapping, given a well-defined monad. This doesn't come for free in syntax inspired by the lambda calculus, because the categorical representation translates function application to composition (as it does everything; see e.g. the categorical presentation of monoids), but we can get there by thinking of "expression sections," by analogy to tuple sections. That is, roughly speaking, you can think of `(_ subExpr0 subExpr1)` as being the function `x -> (x subExpr0 subExpr1)`, and more generally of parenthesized expressions as a series of compositions of expression sections (with atomic values such as literals being the usual morphisms from the terminal object; since we're in the lambda calculus, and therefore in a Cartesian closed category, we're guaranteed these exist).
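To make "expression sections" concrete (a sketch; the underscore/hole syntax is hypothetical, so I'll just name the function it denotes):

    -- (_ subExpr0 subExpr1) denotes \x -> x subExpr0 subExpr1:
    -- the hole is in function position.
    exprSection :: b -> c -> ((b -> c -> d) -> d)
    exprSection subExpr0 subExpr1 = \x -> x subExpr0 subExpr1

    -- e.g. exprSection 2 3 (+) == 5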
Fundamentally Moggi's insight is that the denotational semantics behind
f x
can be augmented via a monad to be
f* x*
where `*` here is the Kleisli extension operator (what Moggi refers to as `_*`). In Haskell terms this is
(f =<<) (pure x)
The fact that we're using reverse-bind in one case and pure in the other case is a quirk of function application in the lambda calculus; again at the categorical level where everything including function application is just composition, these are both just applications of the Kleisli extension operator.
So what is `!` then? Well `!` is simply the absence of `*`!
f !x = f* x
Denotationally that's all `!` is! Indeed we can choose to think of `!` as syntactic sugar, but that's optional: you can instead give it a direct denotation without having to desugar anything first. The reason we use `!` to denote the absence of Kleisli extension rather than its presence is that most of the time you care only about its absence; otherwise your code would be absolutely filled with `!`s.
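To pin this down in Haskell (a sketch; `star` is just my name for Moggi's `_*`, i.e. the left section of reverse bind):

    -- Moggi's Kleisli extension _* as a combinator.
    star :: Monad m => (a -> m b) -> (m a -> m b)
    star f = (f =<<)

    -- "f x" lifted wholesale, i.e. f* x*: both sides starred.
    appStarred :: Monad m => (a -> m b) -> a -> m b
    appStarred f x = star f (pure x)   -- == pure x >>= f

    -- "f !x": the ! marks that the pure-lifting of x is omitted,
    -- because x is already a monadic value whose effect we want.
    appBanged :: Monad m => (a -> m b) -> m a -> m b
    appBanged f mx = star f mx         -- f* x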
So that's the denotational semantics of what's going on here. There was a lot there, but it's all in service of a single point: if you care about denotational semantics, `!`/monadless syntax has a much stronger story than do-notation (e.g. what exactly is the denotational semantics of `<-`?). To describe the denotational semantics of do-notation you essentially have to perform the syntactic desugaring first and then fall back on the non-monadic denotational semantics of the desugared code. This completely ignores Moggi's insight!
So from the perspective of denotational semantics, do-notation is absolutely no help! We have to actively ignore our monadic context and "undo" the do-notation and resort to the underlying original denotational semantics instead of getting any semantic help from our monad at all.
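Concretely: `<-` only acquires a meaning once desugaring has reintroduced the lambda (a minimal sketch):

    -- do { x <- m; k }   desugars to   m >>= \x -> k
    prog :: Monad m => m Int -> m Int
    prog m = do
      x <- m
      pure (x + 1)

    -- The form you actually assign a denotation to is the desugared one:
    progDesugared :: Monad m => m Int -> m Int
    progDesugared m = m >>= \x -> pure (x + 1)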
But of course most programmers have not heard of denotational semantics and don't care about denotational semantics. So for them, what is the point of `!`? The point is not to force-fit imperative code. Indeed in that case, as you point out, the natural evolution is to throw in all sorts of different kinds of if-statements, while-statements, exceptions and the like, which is not what I'm advocating. The point is to realize that all that is just a reflection of the same thing! You just need to assign a different semantic mapping is all!
[Identity| f x |]
is one mapping where everything is "pure" in the way we normally think of it.
[Maybe| f x |]
is one with short-circuiting failure.
[List| f x |]
is non-determinism and so on.
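In Haskell terms (a sketch; the brackets above are hypothetical McBride-style syntax, but for a one-argument application each one denotes the same `pure f <*> mx` shape, with the chosen functor supplying the meaning):

    f :: Int -> Int
    f = (+ 1)

    -- [Maybe| f x |]: short-circuiting failure.
    maybeEx :: Maybe Int
    maybeEx = pure f <*> Just 41       -- Just 42; a Nothing would propagate

    -- [List| f x |]: non-determinism, f applied to every choice of x.
    listEx :: [Int]
    listEx = pure f <*> [1, 2, 3]      -- [2,3,4]

    -- [Identity| f x |] would be ordinary application:
    -- runIdentity (pure f <*> pure 41) == f 41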
This desire has appeared many times in the FP community and has given birth to many proposals (notably, I'm basically stealing syntax from McBride's idiom brackets here; the `!` can essentially be thought of as an extension that makes idiom brackets work for monads instead of just applicatives, and indeed `!`-notation itself has already been implemented in Idris, although in a slightly different fashion than how I've presented it).
The point of `!` is to expose the fact that the monadic semantics are exactly what matters, and the only thing that matters!
That we cannot seamlessly express this relationship syntactically by just appending brackets, and instead must do so through contortion into an entirely different syntax in the form of do-notation, represents to me a failure of denotational reasoning! Indeed, while e.g. Idris describes this notation in terms of do-notation (http://docs.idris-lang.org/en/latest/tutorial/interfaces.htm...), as I've outlined above, I believe it has a far stronger standalone semantic interpretation (one thing I like about monadless, and why I use it as my go-to example, is that it doesn't define itself in terms of for-comprehensions).
Meh - I find the idea of overloading what function application means very counterproductive. You can say that function application is just a special case of Kleisli composition and the syntax should really mean Kleisli composition, but really that's just the statement that a lot of things behave as composition in some sense (which is why category theory works at all).
> Denotationally that's all `!` is! Indeed we can choose to think of `!` as syntactic sugar, but that's optional: you can instead give it a direct denotation without having to desugar anything first. The reason we use `!` to denote the absence of Kleisli extension rather than its presence is that most of the time you care only about its absence; otherwise your code would be absolutely filled with `!`s.
I think it's worth tugging at this thread. Even within our effectful/imperative DSL block (which is what I see do notation as), most of the time we don't actually want to do effectful Kleisli composition. Most of the time we want our function application syntax to denote regular function application. For the same reason, I don't find Moggi's approach compelling - in theory you could understand the denotation of a do block by understanding the whole thing as Kleisli composition and then algebraically simplifying, but in practice that would be far too complex for humans. (In the same way, I felt Frank was completely the wrong approach, because I don't want to push a maximal bundle of effects in, I want to pull a minimal bundle of effects out).
In many cases I'm skeptical about using do notation at all - often using the applicative or arrow combinators expresses the denotation more clearly, putting us in a position where it's much easier to apply algebraic laws - but where I think it does succeed is in letting you mix a minimal piece of operationally-understood effect into a procedure the bulk of which is still denotationally-understood, and, ideally, lets you view that piece of code from both perspectives at the same time. So the way I'd approach it is "what is the simplest/most restricted possible set of effectful-compositions that allows us to express everything we want", and current do notation mostly fits the bill.
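For instance (a sketch with hypothetical askName/askAge actions of my own): when two effects are independent, applicative style states that independence directly instead of encoding it as spurious sequencing:

    import Control.Applicative (liftA2)

    data User = User String Int deriving Show

    askName :: IO String   -- hypothetical effectful inputs
    askName = getLine

    askAge :: IO Int
    askAge = readLn

    -- do-notation forces an order that carries no meaning here:
    mkUserDo :: IO User
    mkUserDo = do
      n <- askName
      a <- askAge
      pure (User n a)

    -- applicative style says "combine two independent effects",
    -- and the applicative laws apply without unpicking a do block:
    mkUserAp :: IO User
    mkUserAp = liftA2 User askName askAge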
As you say, there have been a lot of proposals and efforts to relax the constraints of do notation and offer a more expressive way of working with effects, but I've always found they lead to code that's confusing to maintain. So in the end I bite the bullet and say that yes, I will refactor the whole call chain if it turns out it needs the current time somewhere at the bottom (or else I'll decide that accessing the current time is not an effect that I care to track explicitly at all), because although it's more work up-front, it's ultimately the only way to avoid ending up with code that's confusing to read.