Can anyone explain how syntax elements such as { } are a net win? This looks like they've tuned the syntax to not scare anyone off in the first week, at the expense of an economy of expression later.
Rob Pike once justified C syntax for its ability to survive pipes that mangled whitespace. That was then, this is now. To this old C programmer, anything that unnecessarily resembles C is ground glass in the eye.
My understanding (from reading arguments from language designers) is that curlies introduce error-correction redundancy to a language. So if, for example, your editor decides to be stupid and mangle indentation of some copy-pasted snippet, the curlies provide the developer with semantic hints to deduce/reconstruct the original program.
This property is particularly important for readability of "embeddable" languages ("embeddable" in the sense of a language source code existing within another, for example JS within HTML script tags, or bash in a Dockerfile, or YAML string concatenation).
Hmm. I understand your rejection of `{}`, but I really think the end result feels nothing like the Java experience. Can I have a shot at defending it? :) It will be a long answer including some of my reasoning, and by all means feel free to disagree if you think I'm wrong about something.
Kind is extremely permissive and terse in general, and a lot of care and love has gone into making it very practical. While `{}` looks like noise in some places, it allows others to be less verbose, resulting in some snippets that look almost like Python, even though Kind isn't indentation-aware (indentation-aware syntax is, IMO, very annoying). For example, commas and semicolons are always optional, so you can actually write lists as just `[1 2 3]`, maps as just `{"foo":1 "bar":2}` and function calls as just `f(1 2 3)`. That wouldn't work without `{}` on case-of.
There are many places where the Haskell-like syntax breaks down. We aim to fix every one of these. For example, the following Haskell function:
blocks :: Entity -> Bool -> Bool -> Bool
blocks entity floats True = True
blocks entity floats False = case entity of
  Creature name hp age items pos -> True
  Tile pos effect isWater -> not isWater || floats
  Wall pos -> True
  Fire pos damage kind -> False
Both do the same thing. Which one is more readable? The key insight is that we avoid repetition of variable names (a consequence of Haskell's equational notation), and the need to re-name every field of every variant every time you pattern-match (by naming fields as `matched_name.field_name` by default). That sounds like a small thing, but pattern-matching is the bread and butter of functional programming. Having such a terse syntax for case-of makes the whole language feel so much more refreshing.
Just like that, there are dozens of small little things that we do to make the syntax just work in practice. For example, working with Maybe is often frustrating in functional languages. In Kind, we have a default operator:
let x = some(3) <> 5 // x is 3
let y = none <> 5 // y is 5
An "abort" operator, that flattens your code:
sum_first_3(list: List<Nat>): Nat
  x = list[0] abort 0 // returns 0 if list[0] is none
  y = list[1] abort 0
  z = list[2] abort 0
  x + y + z
And, of course, Maybe monad:
sum_first_3(list: List<Nat>): Nat
  Maybe {
    get x = list[0]
    get y = list[1]
    get z = list[2]
    return x + y + z
  } <> 0
And the same attention we gave to Maybe, we gave to so many other small things. For another example, we HATE indentation. Pattern-matching a structure with only one constructor causes a needless indentation. In Kind, you can use "open", which flattens it too:
open vec
vec.x + vec.y + vec.z
Another source of frustration in Haskell is setting/getting values from lists, maps, records. Things are so inconsistent. `Map.lookup`, `List.index`, `!!`. And lenses, while amazing, are currently really messy, hard to learn and read, full of dependencies, and impact performance. Kind has simple and obvious answers for all of these. For example:
name = my_list[7]
Works just as you expect, returning the 7th element of `my_list : List<A>` as a `Maybe<A>`.
name = my_map{"foo"}
Returns the "foo" entry of `my_map : Map<A>` as a `Maybe<A>`. And:
name = my_record@foo
Returns the `foo` field of `my_record` as `A`. Don't need a maybe? Just default it:
name = my_list[7] <> "anonymous"
Wanna set it?
name = my_list[7] <- "Eric"
That gives the user the convenience of working on JavaScript lists, in a pure functional language.
Something else I noticed in Haskell is that code tends to become really messy when the function body becomes big. That is, when you need to chain many intermediate values. There isn't an obvious answer as to where to stop growing horizontally and start adding lets vertically, and when to use `let` or `where`. Sometimes the computation flow becomes upside down, sometimes not. Also, foldr/foldl tends to become really messy when the function body is large. We have one straightforward answer to all these situations in Kind. Our let and "pure for" syntax allows for some pretty Python-looking functions:
age_factorial(birth_year: Nat, names: List<String>): String
  // Concatenates all names into a string
  name = ""
  name = for x in names:
    name | " " | x
  // Computes the age
  age = current_year - birth_year
  // Computes the factorial of the age
  fact = 1
  fact = for n from 1 to age:
    fact * n
  if age <? 18 then
    "You're too young to use this program"
  else
    "Hello, " | name | ", the factorial of your age is " | fact
Obviously, pure fors just desugar to folds. Also, maps are well thought out. In JavaScript, `{foo: 3}` and `{"foo": 3}` both are the same map. In Kind, if you don't write `""`, then it is a variable, which makes a lot more sense. For example:
key = "foo"
map = {key: 3}
Just a small thing though. This answer is already big, and it doesn't even cover things like the built-in HTML syntax, the `sigma` sugars (`[x: Nat] x <? 50` is the type of Nats that are smaller than `50`), other sugars for theorem proving, like `rewrite`...
Basically, what I'm saying is that there are so many nice things behind the initial impression; perhaps you're judging prematurely because of your distaste for `{}`? Give it a chance and I think you'll love it!
I really like the language, but you're exaggerating the verbosity of the Haskell code by not comparing them in an apples-to-apples way.
Here's the same Haskell code written in the same way you've written your Kind code.
blocks entity floats ghost = case ghost of
  True -> True
  False -> case entity of
    Creature{..} -> True
    Tile{..} -> not isWater || floats
    Wall{..} -> True
    Fire{..} -> False
The `{..}` is from `RecordWildCards`, a common extension.
I do like the omission of `let` and `where` in favor of just variable definitions. For monadic notation, instead of `get` and `return` I'd prefer something like https://github.com/monadless/monadless
The main problem with `do`-inspired monadic notation is that it forces monadic code to always be in A-normal form, which is pretty annoying. It'd be far nicer to allow for monadic code to look the same as normal code, just with say an extra brace (such as "IO { }") to indicate that you're inside a monad.
Fair point! That is a recent update of Haskell, though (one I wasn't aware of when designing the case syntax). It does agree with my point, though, right?
> The main problem with `do`-inspired monadic notation is that it forces monadic code to always be in A-normal form, which is pretty annoying. It'd be far nicer to allow for monadic code to look the same as normal code, just with say an extra brace (such as "IO { }") to indicate that you're inside a monad.
I agree with that. Not sure how that could work, though. Idris had an interesting syntax, but IIRC it wasn't general.
Well, `RecordWildCards` has been around for 14 years... but even without it, instead of `{..}` you'd just have `_`s. The main thing that is different is that your Kind example had nested case statements while your Haskell example tried to match everything in one shot, which makes for a non-equivalent comparison, and you purposefully repeated field names on both sides of `->`, which is not necessary even in vanilla Haskell when you work with records.
> Not sure how that could work, though. Idris had an interesting syntax, but IIRC it wasn't general.
RE Idris I assume you're talking about idiom brackets for applicatives? The general syntax is given in something like https://github.com/monadless/monadless. The idea is to basically take async-await syntax and generalize it to any monad.
So e.g. your `Maybe` example (using `!` for the equivalent of `await` for concision) would look like
Maybe {
  x = !list[0]
  y = !list[1]
  z = !list[2]
  x + y + z
}
but because it doesn't have to be in A-normal form anymore you can compress it all into a single line.
Maybe {
  !list[0] + !list[1] + !list[2]
}
So you can take any normal code you have and put in a monadic context simply by inserting brackets around the whole thing and then sprinkling in `!`s as necessary.
EDIT: Or to think of it another way, just move the `get` from the left-hand side of `=` to the right-hand side. I'd recommend renaming it to something to indicate that this is a syntactic transformation, rather than a normal function (which is why I like something like `!` or anything else that has a symbol in it to indicate this is essentially a macro).
Oh, ok, then I was ignorant about {..} - sorry! You still need to rename every field you need, though. So Haskell is, at best, comparably verbose, and, at worst, much more. It also kind of encourages one-letter variable names.
About the monadic syntax, that is really clever, and makes a lot of sense to me. I want this implemented, now. Will keep that in mind. (And accept PRs!)
Having used various monadless-style implementations I think they're a mistake. The whole point of using a monad is because you want to explicitly sequence some (generally noncommutative) effect and be sure that refactoring won't change the behaviour (whereas if you reorder two `<-` lines then that is a behavioural change that you need to test). If you allow ! at arbitrary points in an expression then you end up with something more akin to checked exceptions where you know that an effect is happening somewhere but not what all the possible code paths are, and at that point you might as well just use side-effecting code.
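To make the reordering point concrete, here is a minimal Haskell sketch using mtl's State monad (names are illustrative):

import Control.Monad.State (State, state, evalState)

-- each call returns the current counter and then increments it
tick :: State Int Int
tick = state (\n -> (n, n + 1))

-- swapping the two <- lines is a genuine behavioural change:
ordered, swapped :: State Int (Int, Int)
ordered = do { x <- tick; y <- tick; pure (x, y) }
swapped = do { y <- tick; x <- tick; pure (x, y) }

main :: IO ()
main = print (evalState ordered 0, evalState swapped 0)  -- ((0,1),(1,0))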
Yeah this was the same discussion that happened when monadless was released in the Scala community. I personally don't see much of a difference between reordering `<-` lines and reordering !-ed expressions, especially in a strict language (such as Kind). The arguments that rearranging different !-ed expressions can result in different behavior whereas non-!-ed expressions can be reordered with no change ring, to me, exactly analogous to the arguments for `<-` statements vs the usual `=` in a let expression.
The fact that
val x = something
val y = somethingElse
doThis(x, y)
allows for interchanging x and y if `something` and `somethingElse` are pure but
for {
  x <- something
  y <- somethingElse
} yield doThis(x, y)
doesn't allow for interchanging `x` and `y` feels approximately just as confusing (or not-confusing) as
x = something
y = somethingElse
doThis x y
having interchangeable `x`s and `y`s but
MyMonad {
  x = !something
  y = !somethingElse
  doThis x y
}
not having interchangeable `x`s and `y`s.
I suppose what you're really getting at is what happens when you combine it all into a single line a la
MyMonad { doThis !something !somethingElse }
but I still don't find it confusing for a strict language (and besides legions of developers have had a chance to familiarize themselves with this via async-await). The rules in either case are the same. If you move around `<-`s behavior might change. Likewise if you move around `!`s behavior might change.
In a lazy language such as Haskell I agree that it can be rather weird to have to think about how inserting a pair of brackets and some `!`s around suddenly causes sequencing that is different from how a normal statement is evaluated (although the bangs are quite suggestive, especially given their lineage in Haskell, indeed given a `Lazy` monad these bangs just become the same thing as bangs in Haskell, so another way you can view this is as a generalization of Haskell bang patterns). But Kind is a strict language AFAICT and in a strict language evaluation from left to right and top to bottom is the norm. And that left-to-right, top-to-bottom order immediately tells you what the code paths are.
Even in a lazy context where the order of evaluation might be a bit more confusing, this async-await style of writing code is still valuable for delineating which code blocks are side-effectful and which aren't. Checked exceptions/checked effects are still very valuable! It's valuable enough that e.g. the Haskell community is perfectly willing to compromise on knowing what all the code paths are with stuff like ApplicativeDo.
Having spent a considerable time with cats-effects in Scala and a little time with Kotlin's Arrow Fx there are times I still prefer Arrow Fx despite the general clunkiness of Kotlin's Arrow ecosystem over cats-effect precisely because I just need to stick in a `suspend` instead of having to contort perfectly fine code into a for-comprehension when I introduce some effects simply because I didn't write my pure code with the Identity monad the first time around.
> but I still don't find it confusing for a strict language (and besides legions of developers have had a chance to familiarize themselves with this via async-await). The rules in either case are the same. If you move around `<-`s behavior might change. Likewise if you move around `!`s behavior might change.
The <-s are always in the same place on the line and always ordered top-to-bottom, whereas it's not at all clear what ordering applies to the !s - it's generally "evaluation order", but the whole point of using monads is to get away from relying on evaluation order! Where it gets confusing is when you're doing something that has a nontrivial evaluation order in the expressions, e.g. when a `!` ends up inside a closure or a short-circuiting operator.
Even left-to-right evaluation of function arguments isn't always obvious - for some functions it would be more natural to evaluate in a different order, and not all strict languages evaluate left-to-right (I believe in C/C++ the order is undefined).
I agree that it's not so different from async/await or ApplicativeDo, but I find both those things confusing, and I hate dealing with arrow FX precisely because I can't follow what the code is doing except by thinking through what all the evaluation is doing operationally.
> whereas it's not at all clear what ordering applies to the !s - it's generally "evaluation order"
But the evaluation rule for ordering of effects is fairly simple. Left-to-right (taking into account paren nesting) and then top-to-bottom. It's the exact same as the `<-`s, just with an additional left-to-right dimension.
I brought up "evaluation order" in languages because I wanted to drive home that this left-to-right, top-to-bottom order is natural for developers (and C/C++ is a well-known outlier among imperative languages in this regard with a full stable of unspecified and undefined things, but Java, Python, Javascript, Rust, etc. all evaluate in that order). But it's not actually essential that any of the non-monadic code obey that evaluation order. You don't actually rely on evaluation order anymore than the sequencing provided by monads. All the expressions in the middle could be lazily or eagerly evaluated, it doesn't matter.
You bring up a good point with closures in the form of anonymous functions/objects, but that is also the only place that I would argue it's confusing for a developer coming from an imperative background (some who might expect this to be an exception to the left-to-right, top-to-bottom rule as it is e.g. with async-await in Javascript where a closure is not created and the Promise is regenerated where it is created), but you could just disallow it there or require additional annotation of the anonymous function, as e.g. happens in most languages such as Rust and JS. Even there the left-right, top-bottom rule could apply, it might just confuse some developers, who are used to it having an exception in anonymous functions.
Everywhere else not only does left-right, top-bottom give a concise and evaluation-independent (up to the sequencing constraints imposed by the monad) description of how the effects will be executed (which can be made universally true if you're willing to tolerate it for closures or you can just disallow `!`s in anonymous function bodies), this ordering also agrees with what developers coming from other languages would expect.
Put another way, I don't think most developers will be confused by the execution order of `!` effects in code that doesn't have anonymous functions.
I think you could ask any developer, even those unfamiliar with FP, what order they think the effects preceded by a `!` will evaluate in the following expression and the overwhelming majority of them would get it right.
MyMonad {
  x = !effectfulOp0
  someFunction x !(effectfulOp1 x !effectfulOp2) !effectfulOp3
}
Again you don't need to follow what the evaluation is doing operationally. You just need to do a syntactic scan. No matter what evaluation strategy is used, the order of effects will always be `effectfulOp0`, `effectfulOp2`, `effectfulOp1`, and then `effectfulOp3`. There is no operational reasoning here at all. It's purely syntactic and local. This is true even for monads that seem to have a weird order of effects such as the reverse state monad, you just have to be very specific about what the effect actually is (the addition of thunks in the case of the reverse state monad), which is the same case as with do-notation, which is why lazy IO is so confusing, even though technically the monadic sequencing of effects is still being followed.
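For reference, here is a hedged sketch of what that block amounts to in ordinary do-notation, with the claimed effect order made explicit (the effectfulOp definitions are placeholders just to make the sketch runnable, not anything from the proposal):

effectfulOp0 :: IO Int
effectfulOp0 = putStrLn "op0" >> pure 1

effectfulOp1 :: Int -> Int -> IO Int
effectfulOp1 a b = putStrLn "op1" >> pure (a + b)

effectfulOp2 :: IO Int
effectfulOp2 = putStrLn "op2" >> pure 2

effectfulOp3 :: IO Int
effectfulOp3 = putStrLn "op3" >> pure 3

someFunction :: Int -> Int -> Int -> Int
someFunction a b c = a + b + c

-- the MyMonad block above, desugared; prints op0, op2, op1, op3
main :: IO ()
main = do
  x  <- effectfulOp0
  v2 <- effectfulOp2
  v1 <- effectfulOp1 x v2
  v3 <- effectfulOp3
  print (someFunction x v1 v3)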
Indeed this distinction between operational and denotational reasoning has nothing to do with syntax (as it shouldn't since those are semantic questions), but rather your choice of monad. As Conal Elliott rightfully points out monadic IO is intrinsically a dismal failure if you want denotational semantics instead of operational semantics to begin with, even were you to write out everything with bind syntax. The problem is not with the syntax but with the monad itself (Conal would probably prefer the word monad here to have scare quotes around it when referring to monadic IO).
At the end of the day I view the differences between async-await syntax and do-notation as a difference in degree rather than kind (and indeed this is usually how I explain do-notation to developers familiar with async-await), whereas I think there's a quantum jump from normal bind/flatMap nesting and do-notation. However, I view the syntactic affordance given by async-await syntax to be a much larger win, especially if your language already begins with a block-inspired syntax with variable declaration statements rather than a lambda calculus-inspired syntax with let expressions.
> I brought up "evaluation order" in languages because I wanted to drive home that this left-to-right, top-to-bottom order is natural for developers (and C/C++ is a well-known outlier among imperative languages in this regard with a full stable of unspecified and undefined things, but Java, Python, Javascript, Rust, etc. all evaluate in that order). But it's not actually essential that any of the non-monadic code obey that evaluation order. You don't actually rely on evaluation order anymore than the sequencing provided by monads. All the expressions in the middle could be lazily or eagerly evaluated, it doesn't matter.
My point is that you are in much the same place (and have the same difficulties in understanding and refactoring) as if you were reading code in a language with pervasive side effects. In an imperative language all the effects happen left-to-right, top-to-bottom - but if you're using monads at all it's presumably because you find that kind of code difficult to understand or maintain.
> You bring up a good point with closures in the form of anonymous functions/objects, but that is also the only place that I would argue it's confusing for a developer coming from an imperative background (some who might expect this to be an exception to the left-to-right, top-to-bottom rule as it is e.g. with async-await in Javascript where a closure is not created and the Promise is regenerated where it is created), but you could just disallow it there or require additional annotation of the anonymous function, as e.g. happens in most languages such as Rust and JS. Even there the left-right, top-bottom rule could apply, it might just confuse some developers, who are used to it having an exception in anonymous functions.
It happens anywhere you have control flow - I thought of closures first because in functional style that's the most natural way to have control flow, but if you're looking to make this something that works the way imperative programmers expect then they'll expect to be able to use if/then/while/... . Even if you only allow expressions, many programmers would expect && and || to have short-circuiting behaviour that affects any effects in their operands.
Yes, simple, straight-through code is easy to understand in this style (although even then, I'd argue that having to scan every full line for !s slows you down a lot compared to being able to scan down a single column for <-s) - but simple, straight-through code is easy to understand in any style. The measure of a programming style is how well it handles complex, messy code.
> Indeed this distinction between operational and denotational reasoning has nothing to do with syntax (as it shouldn't since those are semantic questions), but rather your choice of monad.
Disagree. Whether something is a statement or an expression is very much a question about syntax, but also very much a question of whether you reason operationally or denotationally.
> As Conal Elliott rightfully points out monadic IO is intrinsically a dismal failure if you want denotational semantics instead of operational semantics to begin with, even were you to write out everything with bind syntax. The problem is not with the syntax but with the monad itself (Conal would probably prefer the word monad here to have scare quotes around it when referring to monadic IO).
IO values (or rather "values") have extremely weak denotational semantics (and almost no algebraic properties), no argument there. But I'd actually say that that makes it even more important to put the monad laws front and center when working with IO, since they're pretty much the only laws that IO has. If you want to change the sequence of IO actions that your program performs, you can only reason about that operationally, but if you want the program to perform the same (opaque) sequence of IO actions, you can do that denotationally, and it turns out that's enough for a lot of the refactorings you would want to perform.
All that said, IO is not remotely the most important use case for monads, and for many applications I'd actually be ok with using something like monadless solely for IO, if the codebase stuck to traditional for/yield/<- style for more semantically important monads.
> but if you're looking to make this something that works the way imperative programmers expect
I think you're misunderstanding me and I regret bringing up the appeal to "imperative programmers." That was simply to illustrate that the semantics of the syntax are immediately accessible to the majority of programmers who have never encountered denotational semantics. But if denotational semantics are what are important to you, let me back up because the whole appeal to "imperative programmers" is not the purpose of `!`, the purpose is rather more meant here in what I wrote:
> having to contort perfectly fine code into a for-comprehension when I introduce some effects simply because I didn't write my pure code with the Identity monad the first time around.
The perspective I'm coming from is based on Moggi's original paper, which is to realize that semantically monadic code is simply a generalization of "pure" (this is kind of an underspecified term here, but I think we both understand what this is meant to refer to) code, and to have the syntax reflect that generalization, rather than draw a sharp divide between monadic code and pure code that `do`-notation introduces (most noticeably by forcing A-normal form). You still get that division at the type level and through various fencing syntax, which is good for making sure you don't lose purity guarantees, but it's important to realize that these things share the same fundamentals (this is why e.g. Haskell can claim to be pure even in the presence of effects, which is confusing to newcomers who view do-notation as a completely different part of Haskell).
Indeed I'll make a perhaps bold claim. This `!` or async-await syntax is a far more faithful representation of the denotational semantics laid out in Moggi's original paper describing the semantics of monads and computations than do-notation is.
So since we're on the subject of denotational semantics, let's start with the denotational semantics of monads. When considering denotational semantics, what is a monad? We can appeal to Moggi's original presentation of this using Kleisli triples, most notably the Kleisli extension operator and the component of the natural transformation on the identity endofunctor (for Haskell programmers this is the categorical version of the reverse bind operator and pure). In particular, Moggi gives a compelling overview of how to achieve a meaningful denotational semantic mapping here, given a well-defined monad. This doesn't come for free in syntax inspired by the lambda calculus because the categorical representation translates function application to composition (as it does everything; see e.g. the categorical representation of monoids), but we can get there by thinking of "expression sections," by analogy to tuple sections. That is, roughly speaking, you can think of `(_ subExpr0 subExpr1)` as being the function `x -> (x subExpr0 subExpr1)`, and more generally of parenthesized expressions as a series of compositions of expression sections (with atomic values such as literals being the usual morphisms from terminal objects in the category; given that we're in the lambda calculus and therefore in a Cartesian closed category, we're guaranteed these exist).
Fundamentally Moggi's insight is that the denotational semantics behind
f x
can be augmented via a monad to be
f* x*
where `*` here is the Kleisli extension operator (what Moggi refers to as `_*`). In Haskell terms this is
(f =<<) (pure x)
The fact that we're using reverse-bind in one case and pure in the other case is a quirk of function application in the lambda calculus; again at the categorical level where everything including function application is just composition, these are both just applications of the Kleisli extension operator.
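Spelled out as a small Haskell sketch (with `star` standing in for Moggi's `_*`; the names are mine):

-- Kleisli extension: lift a Kleisli arrow to a function on monadic values
star :: Monad m => (a -> m b) -> (m a -> m b)
star f = (f =<<)

-- "f* x*" for an ordinary x; by the left identity law this equals f x
applyM :: Monad m => (a -> m b) -> a -> m b
applyM f x = star f (pure x)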
So what is `!` then? Well `!` is simply the absence of `*`!
f !x = f* x
Denotationally that's all `!` is! Indeed we can choose to think of `!` as syntactic sugar, but that's optional. You can instead give it an immediate denotation instead of having to desugar things first. The reason we use `!` to denote the absence of Kleisli extension rather than its presence is because most of the time you care only about its absence. Otherwise your code would be absolutely filled with `!`s.
So that's the denotational semantics of what's going on here. There was a lot there, but it's all in service of a single point: if you care about denotational semantics `!`/monadless syntax has a much stronger story than do-notation (e.g. what exactly is the denotational semantic of `<-`?). To describe the denotational semantics of do-notation you have to essentially perform the syntax desugaring first before resorting to the non-monadic denotational semantics of the desugared code. This is completely ignoring Moggi's insight!
So from the perspective of denotational semantics, do-notation is absolutely no help! We have to actively ignore our monadic context and "undo" the do-notation and resort to the underlying original denotational semantics instead of getting any semantic help from our monad at all.
But of course most programmers have not heard of denotational semantics and don't care about denotational semantics. So for them, what is the point of `!`? The point is not to force-fit imperative code. Indeed in that case, as you point out, the natural evolution is to throw in all sorts of different kinds of if-statements, while-statements, exceptions and the like, which is not what I'm advocating. The point is to realize that all that is just a reflection of the same thing! You just need to assign a different semantic mapping is all!
[Identity| f x |]
is one mapping where everything is "pure" in the way we normally think of it
[Maybe| f x |]
is one with short-circuiting failure.
[List| f x |]
is non-determinism and so on.
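A hedged Haskell rendering of those three mappings, using ordinary monadic application rather than the (hypothetical) bracket syntax:

import Data.Functor.Identity (Identity (..))

-- the same applicative shape under three different monads:
pureVersion :: Identity Int
pureVersion = (+ 1) <$> Identity 2    -- plain application: Identity 3

maybeVersion :: Maybe Int
maybeVersion = (+ 1) <$> Nothing      -- short-circuiting failure: Nothing

listVersion :: [Int]
listVersion = (+ 1) <$> [1, 2, 3]     -- non-determinism: [2,3,4]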
This desire has appeared many times in the FP community and given birth to many proposals (notably I'm basically stealing syntax from McBride's idiom brackets here, the `!` addition here essentially can be thought of as an addition to make idiom brackets work for monads instead of just applicatives, and indeed `!`-notation itself has already been implemented in Idris although in a slightly different fashion than how I've presented it).
The point of `!` is to expose the fact that monadic semantics are exactly what matters and the only thing that matters!
That we cannot seamlessly express this relationship syntactically by just appending brackets and instead must do so through contortion to an entirely different syntax in the form of do-notation represents to me a failure of denotational reasoning! Indeed while e.g. Idris describes this notation in terms of do-notation (http://docs.idris-lang.org/en/latest/tutorial/interfaces.htm...) (one thing I like about monadless and why I use it as my go-to example is because it doesn't define itself in terms of for-comprehensions), as I've outlined above, I believe it has a far stronger standalone semantic interpretation.
Meh - I find the idea of overloading what function application means very counterproductive. You can say that function application is just a special case of Kleisli composition and the syntax should really mean Kleisli composition, but really that's just the statement that a lot of things behave as composition in some sense (which is why category theory works at all).
> Denotationally that's all `!` is! Indeed we can choose to think of `!` as syntactic sugar, but that's optional. You can instead give it an immediate denotation instead of having to desugar things first. The reason we use `!` to denote the absence of Kleisli extension rather than its presence is because most of the time you care only about its absence. Otherwise your code would be absolutely filled with `!`s.
I think it's worth tugging at this thread. Even within our effectful/imperative DSL block (which is what I see do notation as), most of the time we don't actually want to do effectful Kleisli composition. Most of the time we want our function application syntax to denote regular function application. For the same reason, I don't find Moggi's approach compelling - in theory you could understand the denotation of a do block by understanding the whole thing as Kleisli composition and then algebraically simplifying, but in practice that would be far too complex for humans. (In the same way, I felt Frank was completely the wrong approach, because I don't want to push a maximal bundle of effects in, I want to pull a minimal bundle of effects out).
In many cases I'm skeptical about using do notation at all - often using the applicative or arrow combinators expresses the denotation more clearly, putting us in a position where it's much easier to apply algebraic laws - but where I think it does succeed is in letting you mix a minimal piece of operationally-understood effect into a procedure the bulk of which is still denotationally-understood, and, ideally, lets you view that piece of code from both perspectives at the same time. So the way I'd approach it is "what is the simplest/most restricted possible set of effectful-compositions that allows us to express everything we want", and current do notation mostly fits the bill.
As you say, there's a lot of proposals and efforts to relax the restraints of do notation and offer a more expressive way of working with effects, but I've always found they lead to code that was confusing to maintain. So in the end I bite the bullet and say that yes, I will refactor the whole call chain if it turns out it needs the current time somewhere at the bottom (or else I'll decide that accessing the current time is not an effect that I care to track explicitly at all), because although it's more work up-front, it's ultimately the only way to avoid ending up with code that's confusing to read.
> For example, working with Maybe is often frustrating in functional languages. In Kind, we have a default operator: [<>]
While (<>) in Haskell already means a monoidal operator, something similar is frequently defined, such as:
(?:) :: Maybe a -> a -> a
(?:) = flip Data.Maybe.fromMaybe
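Used the same way as Kind's `<>` examples above, e.g.:

x :: Int
x = Just 3 ?: 5     -- x is 3

y :: Int
y = Nothing ?: 5    -- y is 5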
(Making an alternative or supplemental Prelude out of functions like this is a common step in the evolution of a Haskell programmer, like writing a monad tutorial.)
> An "abort" operator, that flattens your code:
That particular function can be written even more concisely in Haskell:
sumFirst3 :: [Natural] -> Natural
sumFirst3 list
  | (x : y : z : _) <- list = x + y + z
  | otherwise = 0
And in other cases, the Either monad provides the same logic:
sumFirst3' :: [Natural] -> Natural
sumFirst3' list = fromEither do
  x <- list !!? 0 `abort` 0
  y <- list !!? 1 `abort` 0
  z <- list !!? 2 `abort` 0
  return $ x + y + z

abort :: Maybe a -> e -> Either e a
abort (Just a) _ = Right a
abort Nothing e = Left e
(fromEither and (!!?) are utility functions like (?:). For that matter, abort is just maybeToRight with its arguments flipped.)
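For completeness, plausible definitions of those helpers (my guesses at what was meant, not necessarily the parent's actual code):

fromEither :: Either a a -> a
fromEither = either id id

(!!?) :: [a] -> Int -> Maybe a
xs !!? n
  | n < 0     = Nothing
  | otherwise = case drop n xs of
      (x : _) -> Just x
      []      -> Nothing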
> And, of course, Maybe monad:
As it happens, you can write Haskell with braces if you really want to! [0]
You're forced to be in the Either monad to use your `abort` though, which kinda defeats the point (you could just use the Maybe monad...). It is refreshing to be able to just throw an abort literally anywhere in your code, and get rid of a `Maybe` without changing the structure of your code, resting assured that the `none` case is dealt with properly.
Isn't that the whole point of MTL-style typeclasses? You use e.g. throwError and can be confident that it'll be plumbed appropriately for whichever context you're currently in. Maybe shouldn't be a special case.
I don't spend much time with Haskell, only sometimes I'll read a blog post using it. But still, to me it's one of the most readable languages out there. Day-to-day I work with C++ and Rust and I'm really annoyed with the amount of irrelevant punctuation they force on the programmer. I look at the code, and instead of seeing what it expresses, first I need to peel off the noise, curly braces, colons in weird places, commas, just noise. Scheme and, to a greater extent, SML are really good here.
Similarly, even before dwohnitmok's version, I can read your Haskell example much faster than the curly example.
Wow. Thank you for that thoughtful response. C aside, Haskell is my primary recent experience, so everything you say resonates. Like Idris, you're doing a lot to clean up Haskell.
They are only used in case-of and monadic blocks, and they allow avoiding commas and semicolons everywhere else without the annoying issues of meaningful whitespace! I totally understand your initial impression, but consider reading my thoughts below before judging prematurely. (Guess I'll end up adding a bracket-less version of case-of to satisfy the hyper-demanding crowd of Hacker News...)
Nooo don't compromise your vision for hacker news commenters!
Great product design is a mix of flexibility and opinionatedness. You have to find your "users" and then build the thing they want. If HN commenters give you crap about it they're not your users, at least not yet, and that's ok. Your product doesn't have to appeal to everyone at the outset.
I've recently started using TLA+, and the mathematical syntax was a big turnoff initially. Took two weeks before the entire approach "clicked", and now it's second nature and I like it a lot.
If you need to think differently (as in TLA+), I think having a different syntax that reflects the different thinking is a good idea.
I don't know, for me this doesn't look enough like C (or anything familiar).
I would love a language like C# or Python where you could just put in preconditions using the expression syntax of the language, and it would try to prove it. If it can't prove it, it might ask for more facts, give a warning, or add a runtime assertion. As far as I know the only language that did this was the experimental Spec#.
I would even go further and have complex preconditions as part of the types, so you could say, for example, that a Person whose "age" field is < 18 is of type Minor. (I know this is tricky. The objects would be immutable, or you'd only be able to specify Person|Minor and only narrow it with an assertion, for example). Nim has something a little bit in this direction with distinct types and integer range types.
That's exactly what Ada2012 and SPARK2014 did. You write your contracts (preconditions, postconditions, assertions, contract-cases, invariants, type predicates) in Ada syntax and you get both runtime assertions and proof depending on which tool you call (compiler or prover). You give more facts when proof fails. You can tag code as 'ghost' and it's only available for proof tools...
To be able to be a bit more 'terse' in assertions, they added quantifiers, expression functions, and case- and if-expressions. Lots of small improvements to support contracts, which turn out to be a boon for writing simpler and clearer code.
Everything you're asking for here is available in Ada2012 and SPARK2014.
I think there is danger in allowing 'assume' without at least trying to prove it and telling you when you're wrong. I think in SPARK 'assume' is a guide, but it is checked, and that is great, because footguns.
But honestly it's not that great of a user experience, though it does seem to have been improved at least a little bit since the last time I used it in ~2016 or so.
To me, in terms of the expectations of modern programming, the weird thing about proof languages is that they’re languages in the first place.
AFAIK, a lot of proof-language code is machine-generated. And even when it isn’t, the syntax and semantics of these provers seem to be stuck in the era of “one runtime, one language”, where switching languages forces you to switch runtimes, and proof libraries written for different proof languages can’t be used together in the same project because they have different core runtime semantics.
Why isn’t there instead a “proof-checker VM” that accepts some formally-specced “proof bytecode” with a formally-specced standard semantics, that 1. is easy for machines to generate, and 2. where these different proof-languages would then be compilers emitting said bytecode?
Isabelle is a generic proof engine, can be combined with any frontend which satisfies its API (it's a bit more complicated), so yes, such a thing exists.
Proof assistants aren’t whimsical like cottage industry of programming languages. Proof assistants are divided into 3 parts, the core or foundation (F) which is a very small language at the heart of the prover, the vernacular (V) the language in which humans write the code and the meta language for proof search and meta programming (M).
The V is translated into F and this translation is often proven to be correct. The de Bruijn criterion says that F has to be small and should be independently checkable (mostly through inspection or mechanization in another framework). In Coq the proof derivation is again checked from the ground up by F when you write Qed.
Idris works by compiling the language to a fully specified intermediate representation that the checker acts on. The IR is described in a well-written publication and could fulfill the role of your suggested general checker. However, since proof checkers are relatively easy to write, most of the work being in the standard library, people don't seem to mind remaking them every time they design a new language.
The biggest advantage would be gained from increased interoperability between dependently typed language so that they could use each other's standard libraries, but the field may not yet be well-developed enough to standardize on basic definitions.
I'm philosophically aligned with you. Basically, I think this is because of where resources are invested - there's a tremendous amount of work on programming languages to make them ergonomic and enable composition of pieces. Much of that work is at its essence processing friendly syntax into some form of typed lambda calculus. Since dependently typed lambda calculus is a very handy way to encode proofs, it's easier to just adapt that all that mechanism than to develop infrastructure just for proofs.
Interop is hard, because all the axiom systems tend to be subtly different in ways that are hard for normal people to appreciate, though there is some work on it, including MM0.
> Interop is hard, because all the axiom systems tend to be subtly different in ways that are hard for normal people to appreciate, though there is some work on it, including MM0.
Would it be tenable to build a parameterized axiom-system-evaluation-engine for which the semantics of a specific proof-assistant would consist of a configuration for this general engine, rather than patches to its code?
If so, would it then be possible for interop to happen "between" proofs in slightly-different axiom systems, the way that regular hand-written "translational" proofs work in mathematics — i.e. "take this property we've proved, translate it into domain model X, derive a property Y, then translate property Y back out of domain model X and into the domain model where we actually want to use it?" — but all under-the-covers, as the (proven) equivalent of FFI, autogenerated as glue proof-code by analyzing the differences between two "configurations" to the engine?
For the same reason that there isn't a universal ABI for normal programming languages, nobody can really agree on what the foundation should be and interoperability between different foundations is very hard.
You're touching on something that's boosted the progress of languages with proof capabilities such as Spark and Frama-C. They standardized around the use of why3, and use it as a lever to harness state of the art SMT engines (z3, cvc4, alt-ergo) but also advanced specific provers on specific theories (floating point in particular, I'm thinking of gappa, the amazing work of the toccata lab, and Altran UK making incredible strides in the last 5 years). As soon as a new proving tool is plugged into Why3, you can harness it. And they built bridges for interactive proof with coq and Isabelle, but also with static analysis tools like codepeer.
Why3 is a big success in the way you're asking. Just not adapted to pure maths proof like lean.
That's a great idea. Kind for example compiles to a small core language called FormCore which can be typechecked with 700 lines of javascript. In theory other languages could compile to FormCore and have good interoperability with Kind.
For the HOL family (HOL-light, HOL4, Isabelle/HOL) at least, there is such a VM: OpenTheory (http://opentheory.gilith.com/). It's a stack machine with a very simple input format (line based).
"To me, in terms of the expectations of modern programming, the weird thing about proof languages is that they’re languages in the first place."
I've got a similar, but unrelated problem.
Most proof and formal-methods language development has involved dependently typed languages (modulo Frama-C and SPARK), which are dandy. However, the formula definitions are all in low-level, function-application-ish form, while the proof techniques are in high-level tactics. The result is that plain ol' simple formal logic is not directly applicable to using the formal techniques, and the proofs are completely unreadable.
Why aren't the formal techniques syntax-sugared into something understandable?
I think the closest thing in usage is Liquid Haskell. Although not a standard proof, the ‘refinement’ types serve as a linearly solvable language of predicates on types. The syntax is relatively standard (in the way of set comprehensions), but it is limited to a subset of what can be captured with ‘full spectrum’ dependent types.
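For a flavour of what that looks like, a minimal Liquid Haskell sketch (`safeDiv` is an illustrative name; the refinement annotation is checked by the external LiquidHaskell tool via an SMT solver):

-- the precondition "divisor is non-zero" written as a refinement on the type
{-@ safeDiv :: Int -> {d:Int | d /= 0} -> Int @-}
safeDiv :: Int -> Int -> Int
safeDiv n d = n `div` d

ok :: Int
ok = safeDiv 10 2       -- accepted

-- bad :: Int
-- bad = safeDiv 10 0   -- rejected by the refinement checker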
I thought SMT2 (what you can feed to z3 -smt2) is an s-expression language that is the lowest-level ("bytecode") thing. If you look at Dafny/Boogie, Boogie is a much more human-readable IR. I'm trying to build a pythonic variant of Boogie.
I think Kind-Core should be that language. It is the simplest proof kernel, after all, by far :) And it has nothing human or inventive. Basically just the lambda calculus with the self dependent function type. But I guess it ultimately is this old XKCD kind of problem: https://xkcd.com/927/
isn't system F (with some extensions) exactly this kind of VM? some functional languages (more or less) use (something like it) as the last functional intermediate language for compilation
This is only tangentially connected to this specific project, but I'd make the general point of, if you want something to become popular (or expect it to become popular), naming your project is important, and using names that are also general words is possibly a bad idea.
Googling for things that have multiple meanings is a nightmare, as it can clash with the other meanings. I use Kind the Kubernetes project and find searching for issues related to it tricky for this reason.
FWIW: That project was started as an internal tool with a playful acronym and then went from obscure to well-known enough to making renaming a somewhat regrettable concept rather quickly just due to momentum. I think we'd have named it differently if we were aiming for widespread usage :-)
We had no reason to worry about making it popular, the effort was staffed to improve developing Kubernetes itself.
Is that really the case, though? Python, C, Rust, Ruby, Go, R, Swift, Perl. One is a type of snake, another is a type of stone, and then there is a freaking letter. It seems like the less googlable the name is, the more popular the programming language becomes. Or, at least, it doesn't matter that much. I swear I considered that when naming the language Kind. That could have been a terrible move, though.
I think you should be fine if you call it "kindlang" or "kind lang" everywhere.
You have to understand that the languages Python, C, and even Ruby have been around for a long time, and I would wager SEO mattered much less when they were being created. This is obviously true for C, and arguably true for the others. To boot, many of them are often searched for with "(programming) lang(uage)" as an associated query.
Similar arguments follow for the rest (Go/Swift/Rust are supported by big companies, Perl is a unique string, etc.).
What is unfortunate for your language is that kinds are something associated with PL theory, AND it also seems that many people are interested in searching for "what kinds of programming languages are there?" ("kind" in the sense of "type of language").
If I were you, I wouldn't let it get to me – I think this project is cool regardless of name. But if popularity matters to you, maybe try and emphasize "kindlang".
it's not that the name means other things, it's that it's a very generic word that is used in lots of other contexts, several of which are programming and even language related. I had the same issues searching for "Nice" and "Clean" back in the day.
Of the things I'm familiar with that are out there, this looks most familiar to Idris 2. Also, while Lean is primarily used for mathematical proofs, Lean 4 is beefing up its capabilities to be used as a regular programming language as well. Can anyone speak to the differences between this and those?
I am still very new to proof theory. So please take this with a mountain of salt.
At the core of all these proof assistants is a dependent, total, functional programming language.
Total means all functions must be obviously terminating.
Dependent means types can depend on values, or more generally you can have quantifiers (sigma and pi).
Both Coq and Lean have variants of a dependent type theory called “Calculus of Inductive Constructions”.
Coq and Lean have tactics, which can be thought of as an embedded "proof language". It's a useful fiction because underneath it is just functions and inductive definitions.
Agda and Idris don't have tactics, so the proofs have to be done through computation (by defining and applying functions and data types).
Lean and Coq differ in how they encode mathematical objects. Lean makes it much easier to make use of classical axioms. (Defining axioms is easy in both Lean and Coq)
From a more practical POV, Coq has the best documentation, books and development tools.
Lean has the WORST development tools of all 4. However, it has extensive libraries for mathematics. (It is very painful to use.)
Agda and Idris are about the same (setup is a pain, otherwise about as good as Haskell development).
Even though Idris is marketed for “practical programming” it doesn’t have partial functions!!
Total means that the function must always return a value. In the presence of coinductive types and corecursive functions, a total function can run forever but must always be producing values when asked. e.g. an infinite stream of prime numbers can be produced by a corecursive, total function. You can use this to embed Turing complete computations in languages like Agda or Coq[1], they just must be declared as such up front. Much like IO must be declared up front in (idiomatic) Haskell. And much like you can unsafePerformIO in Haskell, Idris and Agda have their own escape hatches as the sibling comment mentions.
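A sketch of the "infinite stream of primes" idea in plain Haskell, where laziness plays the role that codata/corecursion plays in a total language (trial-division sieve, just for illustration):

primes :: [Integer]
primes = sieve [2 ..]
  where
    -- keep the head, then filter its multiples out of the rest, forever
    sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

main :: IO ()
main = print (take 10 primes)  -- [2,3,5,7,11,13,17,19,23,29]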
I have heard multiple authors and speakers (while watching various lectures and in papers) say emphatically that no one wants to deal with coinduction in a type theory unless they ABSOLUTELY must. I'm not sure I've ever seen a concise explanation of the complications that arise by inserting coinductive types into a type theory, but I'll take it as an article of faith that the issues exist. But, the concept is certainly intriguing. Somehow, in my reading on type theory there has been a lack of exploration of instantiation of coinductive types; hopefully the paper by McBride you linked will give me some references to chase down.
Not all proof assistants are based on dependent type theory. The HOL family is based on simple type theory + classical axioms: you get choice, excluded middle, extensionality etc. in exchange for a less powerful type system. Maybe not coincidentally, Isabelle/HOL also has arguably the most powerful automation thanks to its ability to call automatic provers, almost all of which are based on non-dependent classical logic.
Other non-dependent type theory systems are Mizar, the B method, TLAps…
Note that Kind is NOT terminating by default. It takes the opposite stand. Expressivity is the default, so you can write recursions and loops freely, as in Haskell or any other conventional language. Consistency is a planned opt-in, which you may use to ask Kind's compiler to check for termination of mathematical proofs specifically.
Do you have a link? That might mean a set of different things, ranging from very hard to impossible; strictly speaking, this seems to go against Gödel's theorems, though there are standard workarounds.
Any interval type with decidable equality is degenerate (everything ends up with a path to everything else). Having i0 and i1 be constant makes me suspect that the interval type likely has a decidable equality. This of course being on top of it missing a critical requirement to even call it an interval (transport).
I don't really understand why that section or the HIT talk is in one of the blog posts. All it seems to amount to is "yeah we can't do this." I suspect you will never be able to! There is a reason Cubical Type Theory adds the interval as a pretype with custom syntax, it is because the equality must become relevant and track exactly how paths apply. Without this relevant information I seriously doubt you will ever have any of those "possibilities."
Yes, that is the gist of it: we can't do this. Negative results need to be communicated, too! But that's not all there is to it. The fact that we do present a type almost identical to Path and Interval with self-types, and the fact that such a type has almost identical proofs of relevant theorems (like `funext`), yet no proof of the very important `transp`, strongly hints that further additions to the core are necessary. It also gives us a direction, and even some hints of how the implementation could be simplified. Keep in mind that cubical type theories are somewhat complex (in sheer amount of code), but there is no evidence they can't be simplified further. Kind has a goal to keep its core as simple as possible. Being able to express all the Coq inductive proofs in a 700-LOC core is a miracle, isn't it?
> I suspect you will never be able to!
That makes no sense! In the worst case we could just go ahead and implement CubicalTT on Kind's Core. We just don't want to commit to that until we're sure there isn't a way to simplify it, even if slightly.
Does the 700 LoC core include a termination checker? If I understood your other comments correctly, that's planned for later, no? If so, I don't think this is a very apples-to-apples comparison.
Yep, there is that, too. You lose the ability to erase types when compiling, so you must have a representation of types at runtime, and you need to pattern-match on them. That inefficiency won't show up in normal programs, though, but if, for some reason, your program depends on a program that depends on a program (...) that uses Paths and `transp`, then you'll have to pay. This is kind of unavoidable and not that bad, though. We're more worried about the complexity of the built-in `transp` that must be part of Core. Currently, our core has only two computational primitives: beta-reduction and dereference. These are orders of magnitude simpler than `transp`, which looks extremely artificial. I'm very confident there are simpler primitives that would allow `transp` to be derived.
> No, theorems are not scary things mathematicians do. For programmers, they’re more like unit tests, except they can involve symbols, allowing you to cover infinitely many test cases. If you like unit tests, you’ll love theorems.
It's a little more complicated than that. On the one hand, it is true that a theorem can be more exhaustive than a test, but on the other hand, a test can be black box on any chosen scale while a proof cannot. While a test could assert that foo(3) = 5 and a theorem could assert that ∀ n ∈ Nat . foo(n) = 2n-1, the test case is the same regardless of how complex foo is, while proving the theorem requires examining foo's inner workings (and changing the proof when the implementation changes), and the effort does not scale kindly with the size of foo [1].
If proving theorems were really like unit tests only stronger, we'd prove our programs all the time, where in reality, the only programs ever proven correct (with respect to some specification) using proof assistants corresponded to fewer than 10KLOC of C, took years of effort by well-trained experts, and even then required extreme simplification and a limited choice of algorithms to keep proofs manageable. Research into formal methods -- using deductive proofs written with proof assistants or many other more automated methods -- is largely if not mostly about managing what you want to know about your program with how costly it is to know it, and practitioners also make similar choices on their end.
[1]: Proofs of some properties do scale well with program size, but they are special properties, called inductive invariants or composable properties (an example is memory safety, which is why it's relatively easy to show that programs written in memory-safe languages, like most high-level languages these days, are memory-safe).
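To make the test-vs-theorem contrast concrete, a minimal Haskell sketch (QuickCheck only samples the quantified claim; actually proving it in a proof assistant means reasoning about foo's definition):

import Test.QuickCheck (quickCheck)

foo :: Integer -> Integer
foo n = 2 * n - 1            -- illustrative definition

-- the unit test: one concrete, black-box check
unitTest :: Bool
unitTest = foo 3 == 5

-- the theorem's statement as a property over all n
prop_foo :: Integer -> Bool
prop_foo n = foo n == 2 * n - 1

main :: IO ()
main = do
  print unitTest
  quickCheck prop_foo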
I find this really interesting. The theoretical basis of the language and type theory seems sound on a brief inspection and the implementation also seems to have taken great steps toward a 'friendly' syntax with convenience features ready to be used. I am wondering if anyone with more familiarity with the implementation knows where they are handling (or if they are) proof-irrelevance and erasure issues? The language seems pretty 'flat' at the type level (no explicit Universes), but clearly with the emphasis on proof solving the theory can not be Type:Type.
TL;DR -- I think the language looks nice, and the compile to JS (from what I read of the Formcore source) looks to be well done. Also, the docs that are present are well presented in a non-academic way that I find pretty readable.
Kind has a "how I learned to stop worrying and love the `Type:Type`" vibe. That doesn't make it invalid as a proof language. It just inverts the priority: instead of consistency being the default and expressivity being opt-in (as in Agda, with the `type-in-type` pragma), it is expressive by default, and consistency is an opt-in. I strongly believe that is the right way. It is still not implemented, but the language is completely ready for that. About Type in Type specifically, keep in mind that there are consistent, interesting type theories that feature it. So it isn't problematic in itself, and removing it seems wrong.
About erasure: you can flag an argument as computationally irrelevant by writing `<x: A>` instead of `(x: A)`. So, for example, in the `Vector.concat.kind` [1] file, `A`, `n` and `m` are erased. As such, the length of the vector doesn't affect the runtime. As a good practice, you may also write `f<x>` instead of `f(x)` for erased arguments, but that is optional.
> TL;DR -- I think the language looks nice, and the compile to JS (from what I read of the Formcore source) looks to be well done. Also, the docs that are present are well presented in a non-academic way that I find pretty readable.
Thanks for the kind words. We put a lot of effort into the compilers and, while there is still a lot to improve, I'm confident they're far ahead of similar languages.
> It just inverts the priority: instead of consistency being the default and expressivity being opt-in (as in Agda, with the `type-in-type` pragma), it is expressive by default, and consistency is an opt-in. I strongly believe that is the right way.
I'm cautiously optimistic, but skeptical. Type:Type typically makes type-checking non-terminating, which is not great. Is my program just taking a long time to compile, or is it stuck in a loop?
Also, I'm a strong believer in the adage, "you cannot add security, you must remove insecurity". A similar sentiment would seem to apply to "inconsistency".
I want to offer explicit congratulations on the FmcToJs.js source: it is very, very well done, and the emphasis on translating to a reasonably efficient JS program is evident from that file alone. I may even try to build a translation layer from my personal programming language to Fmc as a fun weekend project.
As a quick question: using self types as the primitive for induction seems to resemble the general mechanism Robert Harper exposes in PFPL, i.e. a recursive construct over a sum. If that is not the inspiration, do you have any publications (or blog posts, etc.) that talk in more depth about the induction principle in Kind?
Thanks for the compliment on FmcToJS! It is probably the messiest part of the whole project, though. We can't wait to port it entirely to [JavaScript.kind](https://github.com/uwu-tech/Kind/blob/master/base/Kind/Comp/...). That said, we put a lot of work into it, and it does produce some satisfying JavaScript code.
I'm not familiar with that mechanism; do you have a link? The dependent function type, `∀ (x : A) -> B(x)`, allows the type returned by a function call, `f(x)`, to depend on the value of the argument, `x`. The self-dependent function type, `∀ f(x : A) -> B(f,x)`, allows the type returned by a function call, `f(x)`, to also depend on the value of the function, `f`, itself. That is sufficient to encode all the inductive datatypes and proofs present in traditional proof languages, as well as many other things.
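As a rough illustration of what that buys (this is my own sketch of the idea, not necessarily Kind's exact notation): with a self type, a lambda-encoded Bool can carry its own induction principle, roughly

Bool ≈ ∀ self(P: Bool -> Type) -> P(true) -> P(false) -> P(self)

so that eliminating some `b: Bool` at type `P(b)` is just applying `b` to a motive and two branches. That is what lets inductive datatypes be encoded with plain functions instead of being built into the core.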
I think they're referring to the least fixpoint operator μ, whose interface you can find in section 15.2.1 of PFPL.[0] The idea and consequences superficially seem to be very similar.
That is the correct reference. I don't know if I'd say the similarities are only superficial, however. The blog post on super-inductive types seems to bear out at least a strong isomorphism modulo syntax.
Agreed: I meant that there were superficial similarities, not merely superficial similarities. Here's the original paper (I think?) on self types; maybe there's an obvious link in there.
I've been following this project for a while. Good work, but a haphazard ranting style of promotion. But hey it's a proofs language, it's gonna get weird.
I think it was great. As someone who has tried Haskell a few times and been disappointed, it rang true. I've been working on my own pet language for a bit and wondering where to go with it, but Kind may be roughly where it was going to end up.
I love the fact that this language has a short and readable interpreter. However, pure functional languages are hard for many programmers to use. I've been looking for a language with a short and readable interpreter that supports some minimal form of object orientation. Something like a mini OCaml. Does anyone know of such a language?
> If you like unit tests, you’ll love theorems. To learn more, check THEOREMS.md.
Can someone ELI5...
I quite like unit tests, but it's hard to connect proving bits of elementary maths like `1 != 0` to anything I do as a software developer. It's not really clear why I should have to, or want to.
Unit tests are theorems too! When you compute `Test(x)` and check that it's `true`, you have proven the theorem `Test(x) == true`. However, that's only about one specific choice of `x`. Theorems allow you to use math to cover all choices of `x`, handling infinitely many test cases at once.
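For instance, here's a toy Haskell property of the kind developers actually care about (my own example, not from Kind's docs):

-- Holds for every list, not just the ones you happened to test.
prop_reverse_twice :: [Int] -> Bool
prop_reverse_twice xs = reverse (reverse xs) == xs

A test suite can only run this on the inputs you feed it; a theorem proves it for every possible list at once.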
In theory, you could even write a unit test and offer a bounty to the first person to write a function that's proven to pass it. Then you wouldn't need to write it yourself or review the result for correctness.
This is much cooler than I thought it would be and calling it a proof language seems to undersell it. It seems to be more of a general purpose FP language that can compile to multiple backends and supports UI development out of the box (!!)
Yes! We're one file away from supporting a Python (or any other) back-end. For example, [this](https://github.com/uwu-tech/Kind/blob/master/base/Kind/Comp/...) is the simplified JS back-end. If you copy/paste that file and edit it to emit Python syntax instead, we'll have a Python back-end. That kind of contribution is always welcome.
Love the ideas! I'm coming at it from the other side - I worked on a transpiler that targets 7+ languages already.
Also started with Python, to avoid the JavaScript warts that TypeScript carries forward for compatibility reasons. But hey, it's the most popular programming language out there, so you'll see plenty of people wanting to write code this way.
The higher-order bit is that we both want to bring transpilation, and after that proof tech, to mainstream languages.