Hacker News
Effective Programs: 10 Years of Clojure (github.com/matthiasn)
215 points by kimi on Oct 31, 2017 | 200 comments



Very good slide here:

https://github.com/matthiasn/talk-transcripts/raw/master/Hic...

Caption: "The problems of programming"

And you can call them problems, and I'm going to call them the problems of programming. And I've ordered them here -- I hope you can read that. Can you read it? Yeah, ok. I've ordered them here in terms of severity. And severity manifests itself in a couple of ways. Most important, cost. What's the cost of getting this wrong? At the very top you have the domain complexity, about which you could do nothing. This is just the world. It's as complex as it is.

But the very next level is where we start programming, right? We look at the world and say, "I've got an idea about how this is and how it's supposed to be and how, you know, my program can be effective about addressing it". And the problem is, if you don't have a good idea about how the world is, or you can't map that well to a solution, everything downstream from that is going to fail. There's no surviving this misconception problem. And the cost of dealing with misconceptions is incredibly high.

I'd suggest considering the diagram cited above; it's a good representation of where the real difficulties lie in programming.


I have some doubts about this one, despite my love of and ongoing professional work with Clojure. For overall productivity, I find the issue of typos more significant than this slide leads us to believe, especially in a language like Clojure, where the compiler misses nearly 90% of my typos. It's very easy to have a nil running through your program in Clojure because you mistyped a keyword, or just picked the wrong one when pulling something out of a map, for example. But that is just one example; I find that I spend a fair amount of my work day tracking down silly typos at run time, which frankly disappoints me tremendously. I really like Clojure, but tackling this problem has been non-trivial.
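
For instance, a minimal sketch of the failure mode (hypothetical keys):

    (def order {:resource-id 42})

    (:resource-id order)  ;=> 42
    (:resuorce-id order)  ;=> nil -- the typo is perfectly legal, and the
                          ;   nil flows silently into downstream code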


I think the mistake he's making is this: while his hierarchy is correct in that one bug from a category further up is far worse than any single bug in a category below it, if you have many more bugs from lower categories they can start to become problems on the scale of a few bugs higher up the chain. Constantly dealing with typos is a great example: one or two would not be a big deal, but they can add up to significant mental effort and time lost.


IntelliJ + Cursive makes keyword typos and most other typos a non-problem (or very easily spottable problem) for me.


How does Cursive know if you're pulling out a valid keyword in a map?


It doesn't know if :resource_id is in the map, but that's not a typo in that case.

What OP meant is that if you type :resuorce_id it won't autocomplete correctly, so you're more likely to notice.


Ah, well I get that same behavior in Emacs. Doesn't always help though.


Also, if you type resuorce_id when referencing it, it will tell you that it cannot resolve the local. Of course, if you BOTH destructure resuorce_id AND reference resuorce_id, then you're out of luck :). That happens rarely, though. One recent IntelliJ addition, which is sometimes annoying but often very helpful in cases like this, is flagging spelling typos in identifiers: as long as you're not using obscure one-off abbreviations, it will most likely catch "resuorce_id", "widht", and similar typos, but it might also complain about valid words or abbreviations in some niche domains.


Hum, the REPL catches most of those for me almost as instantly as Eclipse would highlight them in Java. And the ones that are left are annoying, but really not a massive amount of wasted time. Possibly a lot less time than I would spend wrestling with complex types, or having to build generic structures so I can group disparate types together.


I'm with you here; they usually come out in the REPL. There's always spec too, if it really turns into a problem.
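
For instance, a minimal clojure.spec sketch (hypothetical keys) of catching a mistyped key at a validation boundary:

    (require '[clojure.spec.alpha :as s])

    (s/def ::resource-id int?)
    (s/def ::order (s/keys :req [::resource-id]))

    ;; the mistyped key means the required one is missing:
    (s/valid? ::order {::resuorce-id 42})   ;=> false
    (s/explain ::order {::resuorce-id 42})  ; prints which key is missing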

Seems like we could have some tooling that raises a flag when keywords within a short Levenshtein distance of each other are detected.
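
Something like this rough sketch, say (a hypothetical linter, not existing tooling; plain Levenshtein, so a threshold of 2 also catches transpositions like :resuorce-id):

    (def levenshtein
      (memoize
       (fn [a b]
         (cond
           (empty? a) (count b)
           (empty? b) (count a)
           :else (min (inc (levenshtein (rest a) b))
                      (inc (levenshtein a (rest b)))
                      (+ (levenshtein (rest a) (rest b))
                         (if (= (first a) (first b)) 0 1)))))))

    (defn suspicious-pairs
      "Pairs of keywords close enough in edit distance to be likely typos."
      [kws]
      (for [[i a] (map-indexed vector kws)
            b     (drop (inc i) kws)
            :when (<= (levenshtein (name a) (name b)) 2)]
        [a b]))

    (suspicious-pairs [:resource-id :resuorce-id :user-name])
    ;=> ([:resource-id :resuorce-id])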


Ya, I'm not trying to make excuses. I welcome better tooling, or even language features that help. The problem is there, but I agree with Rich Hickey that it's not the highest on my list.


I agree: one of the things I do appreciate in Java is that the compiler does catch those silly things, and I don't really have to think about them any more. As a side effect, refactoring is very safe and you can depend on it. Clojure - which I like quite a bit - does not help me much along these lines, and this is an annoyance.


Common Lisp OTOH uses defined arglists with keyword parameters, where the compiler can check mistyped/wrong keywords.


I agree with this sentiment, but I find that doing small check-ins to git helps, with a good git UI.


Ugh...

And people complain about Java's NullPointerException, but this is way worse.


But we still have teams spending endless amounts of time debating which flavor of JavaScript to use, where the braces go, or which NPM module is the most effective.


...and which programming language to use. One of the things Rich says more than once in this talk is that he doubts even that programming languages (including his own) are the most important aspect of doing our programming work well. And yet we talk about PLs as if they are everything.

There's a term for this: bike-shedding[1].

[1] https://en.wikipedia.org/wiki/Law_of_triviality


They may not be the most important, but they have a big impact on our ability to map the domain/problem into a solution, and on the ability to maintain the solution when the requirements change (the domain has changed, the users need a solution to a slightly different problem, etc.).

Being able to understand the domain and map it into a solution for a particular problem has to be taken on a case-by-case basis. That's up to the individual company/team/developer to handle.

Programming languages are useful in the sense that once a solution has been decided on, they can help to express it in a clear way.

In fact, look at Clojure as an example. Even if you have a fantastic understanding of the problem, domain, and (high-level) solution, implementing it in Java can still be problematic, as Rich discovered. Getting concurrency right without running into race conditions and deadlocks can consume enormous amounts of time. Having a language that makes concurrency easier (Clojure) means that you can spend more time focusing on the actual problem to be solved than on low-level implementation details.
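
To make that concrete - a minimal sketch, not an example from the talk - Clojure's atom turns a racy read-modify-write into a lock-free retry loop:

    (def counter (atom 0))

    ;; ten threads incrementing concurrently; swap! is atomic and retries
    ;; on contention, so there are no locks or deadlocks to reason about
    (run! deref (doall (for [_ (range 10)]
                         (future (dotimes [_ 1000] (swap! counter inc))))))

    @counter  ;=> 10000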

Also, another thing to keep in mind is that oftentimes a good way to understand a problem and its domain is to try to write a solution, see where it falls short, rewrite it, and so on. This prototyping phase can be helped by good languages.

So maybe programming languages aren't the most important aspect of programming, but they're definitely important enough that choosing one shouldn't be considered bike-shedding.


Yeah, I heard him say that, and to be perfectly honest I don't know if I believe he truly thinks that. This is a man who devoted years of his life to creating a new programming language because he found existing languages difficult and frustrating to use. In this talk and many others he talks about how he believes Clojure's features (like dynamic typing, dynamic dispatch, and the whole keyword system) are really key to writing good software, and how many other features (like static typing) are irrelevant or harmful... and now here he's saying programming languages don't matter much?


But notice that his arguments are always focused on the tool/feature.

You can look at Clojure as a "reference implementation" for many of Hickey's ideas. (A production-ready reference implementation, of course.)

But all of Rich's ideas/"features" can be applied in your language of choice. (Immutability, functional programming, avoiding ADTs, how one should do versioning, etc.) I've employed many of these and other of Rich's ideas effectively in Java. The ergonomics would be better in Clojure, of course, but for (probably good) political reasons I have to use Java at work.

Just notice that Rich's rhetoric is nearly always focused on specific problems and solutions that transcend Clojure.


> But all of Rich's ideas/"features" can be applied in your language of choice.

Not just can, but have been and are. Clojure is one of those languages (like Smalltalk) where, even if it's not the most "successful" or "popular" language in the long run, it has had an oversized impact on how we program in general.


Genuinely curious...what are these over-sized impacts?


I think Clojure contributed to the functional and meta programming resurgence. It didn't innovate on any one feature, but it brought a whole lot of old forgotten ideas back together, and showed their value.


Immutable data structures?


Like... Singly linked lists?

I even remember using VLists (work by Phil Bagwell that probably inspired the data structures in Clojure) in 2006, and I am not even a programmer.


Yea, not VLists exactly, but similar, and also pioneered by Phil Bagwell. They're implemented as tries under the hood, and provide list, set, stack, queue, or map-like interfaces. You cannot mutate the value at any node; instead, edits are diffs in the tree, a bit like git commits.

So they end up being almost as fast as their normal mutable counterparts, and with similar and sometimes better memory consumption, because structure is shared rather than copied under the hood.

This allows them to be good enough to be made the default data structures. Clojure was the first to make this gamble, and to show that the state-of-the-art immutable data structures mostly pioneered by Bagwell were practical and could replace mutable ones for 99% of use cases.
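
A tiny illustration of that structural sharing (sketch):

    (def v1 [1 2 3])
    (def v2 (conj v1 4))  ; a new vector that shares v1's tree nodes

    v1  ;=> [1 2 3]   -- the original is untouched
    v2  ;=> [1 2 3 4]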


No: the idea that immutable is a sensible default, and active enforcement of immutability. In Java, e.g., a linked list's cell might be immutable, but its contents are most likely mutable - so back to page one. It is definitely possible to write code for structures that are immutable by interface or convention, but the environment does not help you do so.


Clojure did not invent immutable data structures!!


In a way, a language is only an attempt at incarnating principles. It also evolves to get closer to them.


He mentioned he liked Common Lisp but they didn't let him use it in production, so he created a Lisp that runs on an industrial runtime.


Which is a little weird, because (as I mention elsewhere) there's Armed Bear Common Lisp, a Common Lisp that runs on the JVM. Seems to me that he could have saved a lot of time and effort by just sticking with Common Lisp.

He'll argue, of course, that the time & effort are worth it for immutable data structures. I'm unconvinced, but that's the great thing: we don't all have to agree! I'm glad he's having fun.


Immutable data structures were part of it, but not the whole story. From the article:

"But Lisp had a bunch of things that needed to be fixed in my opinion. It was built on concretions, you know, a lot of the design of more abstractions and CLOS and stuff like that came after the underpinnings. The underpinnings didn't take advantage of them so if you want polymorphism at the bottom, you have to retrofit it. If you want immutability at the core, you just need to, you need something different to, you know, from the ground up. And that's why Clojure was worth doing as opposed to trying to do Clojure as a library for Common Lisp. The Lisps were functional kind of, mostly by convention. But the other data structures were not, you had to switch gears to go from, you know, assoc with lists to, you know, a proper hash table. And lists are crappy data structures, sorry, they just are. They're very weak, and there's no reason to use them as a fundamental primitive for programming. Also packages and interning were very complex there."


Yeah, I understand that. But if he really didn't care much about programming languages, why wouldn't he just use C++, which he was allowed to use in production?


Rich taught an advanced C++ class (at NYU, I think) in the late 90s, and wrote a very popular paper on how to get killer performance out of that language. He's been around the block and knows a thing or two about these things.


I don't understand how this is a reply to my comment.


He talks about that quite a few times in the video.


Yes, and the things he says about it indicate he does think programming languages are important.


I think there may be a bit of people talking past each other here. One can reasonably say that which language one uses isn't as important as which features that language has. Also, the comment you originally responded to says he doubts "that programming languages (including his own) are the most important aspect of doing our programming work well." That doesn't mean that the language or its features aren't important. They're saying that it's not the most important thing.

The hierarchy Rich refers to[0] puts domain complexity and misconception as 10× more important than anything that's language specific. I believe this is the basis of the quote I pulled. I think this is compatible with thinking that language features are important, but domain complexity and misconception are more important issues when tackling a problem with code.

[0]: https://github.com/matthiasn/talk-transcripts/blob/master/Hi...


Rich comes down hard here on statically typed languages, but before the static typers get too offended, his main attack is on the overuse of ADTs/classes to build information processing systems (i.e., what a majority of us industry programmers are doing; he takes pains to define this arena...see the slides/discussion on what Rich calls "Situated Programs").

So before the static vs dynamic typing debate gets too heated up here, I think we should focus on this point first[1].

For those of you wary of his point here, I think it's worthwhile to go through an exercise, if you have the time to explore this. Write a simple information processing app using two different paradigms:

   - Write it using a statically typed language and a relational database.
   - Then write the same app using some NoSQL store and avoid using ADTs (prefer using just maps of facts). Ideally do this in a dynamic language so the ergonomics are better for you.
And then ask yourself: Which one of these felt safer? More familiar? Which one of these offered more freedom, choice, and speed?

1. I do think that once you come to terms with the problems of ADTs, you are on your way to not needing all the static type verification, but leave that to later.


But at its best (as in Haskell), static typing just means that the compiler is enforcing the requirements of your interfaces. Why would you not want to know about these bugs immediately and fix them early in the development process when it's cheap to do? Why wouldn't you want the compiler to recheck everything automatically every time you change your interfaces?

Everybody's always fixing bugs. Static typing makes it more likely that the bugs you're working on are closer to the problem domain, instead of being crap work that should have been caught at compilation time with better languages and compilers.


> Why would you not want to know about these bugs immediately and fix them early in the development process?

That's a good question. But contorting your codebase into a Rubik's cube that you have to twist and turn until the static type checker says yes is not trivial. It's a tax.

The real question is whether that tax is worth it when your goal is purely business delivery and not something else (e.g., intellectual stimulation). Is it?

Writing unit tests and code review are other means to root out bugs. But they also come with a tax. When are they worth the tax and when are they not worth the tax?

In some cases we judge they are. In other cases not.

A statically typed language typically asks us to turn off our brain and apply type verification everywhere. In this way, statically typed languages wield type verification more like a religion than a tool.


Honestly, I've never understood this argument that choosing types is too complicated. Most often I've seen it come from people without much experience in statically typed languages, who imagine that having to determine the type will be difficult.

After becoming familiar with the code base or library, particularly with modern IDEs, it rarely takes more than 10 seconds to choose a type. When reading another person's code, it does take some time to map out the relationships between the classes, but unless your code is in one giant file/function/object you're going to have to figure that out anyway.


I've programmed professionally in statically typed languages for > 15 years. (C#, various VBs, C++, F# (which is rad), and lately TypeScript) I've also dabbled in Haskell, OCaml, Rust, and a whole lot more.

In fact, outside of Clojure, I've never liked a dynamic language (and I've used a few professionally).

So, I'm someone with plenty of static-language experience, and I am in the boat that types are more complicated, in some ways. Just today, I had to figure out how to wrangle TypeScript into a shape I needed (and wound up with a solution that I didn't like but which got me past the compiler errors and on to more important work). There is a cost. There is also a benefit. So the question is, what does the cost/benefit analysis yield?

Regardless, Clojure is worth learning. It's an amazing language.


> Just today, I had to figure out how to wrangle TypeScript into a shape I needed (and wound up with a solution that I didn't like but which got me past the compiler errors and on to more important work).

It would be really interesting, and indeed probably the most valuable contribution to this whole discussion, if you could write up what the problem was and why it wouldn't have occurred in Clojure!


From Rich's talk: "Now at this point I was an expert C++ user and really loved C++, for some value of love."

Rich has had many years of experience in statically typed languages.


Please don't use C++ as an exemplar of static typing. There are much more powerful and congenial alternatives.

It would be just as unfair if one were to claim that dynamic typing is bad because assembly language tends to be buggy and expensive to maintain.


The type system of something like C++ and Java are so far removed from the type systems of Haskell or Idris, that it is silly to compare these two.


You can fix your interface bugs early in the development process with the aid of a compiler, or you can fix them later manually at much greater expense. When the "tax" is so much cheaper than the consequences, not paying the tax is crazy.


But "interface bugs" are the least of our problems. As evidenced by most bug queues that I've peeked at.


Not true. I have to try to reverse-engineer types whenever I use JSON APIs. The documentation provides only "samples". I literally have to experiment to find out what works, with no guarantee it will keep working.


I don't know about that. How about the simple interface of "shouldn't be null".


If null pointer exceptions are the biggest of your problems, then I think you're doing pretty well.

In my experience NPEs are the easiest bugs to root out and fix.


The point is that NPEs are a solved problem, just not in Java and Clojure. I am better able to stay focused on the real problems, as I never have to root out and fix bugs like this.


Not to mention that the ease of fixing the bug has little to do with the cost of the bug showing up in production.


I can do all kinds of things to prevent bugs showing up in production. I once worked on a mission-critical piece of software in which the entire team was brought in to do a page-by-page code review for all newly written code.

Of course, for most industry software, which can afford a few bugs, the extreme expense of (this kind of) code review outweighs the benefit.

Now the question is: are the static verification methods you're using (null-checking, type-checking, range-checking, what-have-you) worth the effort it takes to apply them over time?

And if you say yes, do you believe they are so worth it that they should be applied across your codebase always, without discretion? Most static programming languages force you (or make it hard for you not) to do this verification across the board.


> And if you say yes, do you believe they are so worth it that they should be applied across your codebase always, without discretion? Most static programming languages force you (or make it hard for you not) to do this verification across the board.

Yes, actually, I don't find these things to be a particularly high burden, and the benefit, in my experience, easily outweighs it.

When you're writing code, you have to think about what kinds of things will be passed to a given function, whether the compiler checks it or not. So I find having the compiler check mundane things like this lowers my cognitive load, because I don't have to worry so much about what to do if I get a null (fail, handle it in some way, etc.).

I will agree, though, that the style of programming with maps so prevalent in Clojure makes a lot of sense in some programs. However, even then, there is a typing discipline that fits (row types).


> Now the question is: are the static verification methods you're using (null-checking, type-checking, range-checking, what-have-you) worth the effort it takes to apply them over time?

Yes!

> And if you say yes, do you believe they are so worth it that they should be applied across your codebase always, without discretion?

Oh god yes! Simple Hindley-Milner typing is so cheap to use it's almost free!

> Most static programming languages force you (or make it hard for you not) to do this verification across the board.

Well, not really. In Haskell, for better or for worse, you can still use `fromJust` even if it is considered rather naughty.


Honest question: have you ever written a large computer program in a programming language descended from ML (SML, OCaml, Haskell, &c.)?


No I haven't. And, depending on how long life turns out to be, maybe I'll get to.

But.

I have built plenty of large software systems. And the source of cost in those systems has always been -- by at least a factor of 100 -- coupling and imprecise semantics.

These two problems are a function of the experience and training (and value system) of the programmer building them, and programming languages can do very little to save a system from these plights. (It might be that a static programming language can help here inasmuch as it slows down the programmer from producing too much code, but I realize that's arguable and also that I'm making a wicked joke.)

Still.

All computers ask is for semantic precision and you don't need a static type verification to get precision. So clearly static type verification is unnecessary for producing programs that work. And clearly statically-typed everywhere PLs are asking the programmer to do extra work. That's prima facie true. So the burden is really on the MLer/Haskeller to prove that that extra work is giving overall delivery throughput to the programming team. Maybe it is, maybe it isn't. But I'm waiting for the clearly thought out justification. Haven't heard it yet.


> And clearly statically-typed everywhere PLs are asking the programmer to do extra work.

I'm not convinced this is entirely true. The kind of information that you encode in types is the kind of information that's useful to anyone reading the code. And if that information isn't written down somewhere, then the person reading the code (maybe you, sometime later) has to reconstruct it themselves. And if it's useful to write that kind of information down, why not have the computer check that it's consistent?


> All computers ask is for semantic precision and you don't need a static type verification to get precision. So clearly static type verification is unnecessary for producing programs that work

That's a bit contrived. No matter which language you write in, at some step a type check will happen. For dynamic types it's at runtime, and for static types it's at compile time.

> And clearly statically-typed everywhere PLs are asking the programmer to do extra work. That's prima facie true. So the burden is really on the MLer/Haskeller to prove that that extra work is giving overall delivery throughput to the programming team. Maybe it is, maybe it isn't. But I'm waiting for the clearly thought out justification. Haven't heard it yet.

Here are just a few of the real-world accounts of using Haskell in production that you can check out:

- The Joy and Agony of Haskell in Production: http://www.stephendiehl.com/posts/production.html

- Haskell is Not For Production and Other Tales: https://www.youtube.com/watch?v=mlTO510zO78

- Production Haskell - Reid Draper: https://www.youtube.com/watch?v=AZQLkkDXy68

If these didn't convince you, there are tons of Haskellers who will attest that, in the long run, the type system has quite substantial benefits.


> I have built plenty of large software systems. And the source of cost in those systems has always been -- by at least a factor of 100 -- coupling and imprecise semantics.

This is a wonderfully, astonishingly, gobsmackingly interesting comment for me to read! In my experience of large systems (Haskell-style) types are exactly the thing you need to tame coupling and imprecise semantics.

To take one arbitrary but notable example: in a previous job we found several bugs in a large industry XML schema^ by converting the schema to Haskell types and noticing, via type errors, that some things didn't match up.

^I forget if the bugs were in the schema itself or in the Java implementation that another team was using. I think a little bit of the former and a lot of the latter.


> I have built plenty of large software systems. And the source of cost in those systems has always been -- by at least a factor of 100 -- coupling and imprecise semantics.

One thing I find interesting about this is that a good type system, combined with something like OCaml's module system, lets you force decoupling. In OCaml, you can create an abstract type, which cannot be used in any way that isn't specified through the interface. You don't have to wrap the runtime value in anything; it's purely a compile-time abstraction.


> A statically typed language typically asks us to turn off our brain and apply type verification everywhere

No, it doesn't.

People are not using Haskell because they want to turn off their brain.


Let me rephrase:

A statically typed language typically asks us to redirect our brains from asking the question, "Is type verification really needed in this particular instance to deliver my business value?" to instead solving the type-checker puzzle du jour.


Plenty of modern statically-typed languages have dynamic escape valves (and plenty of dynamically typed languages have opt-in static validation.)

It's true that choosing to use a static language without a dynamic escape valve for a component constitutes deciding that, for the pieces of that component, dynamic freedom isn't an option that needs further consideration.


This is effortless once you have experience[1]. Besides, understanding the type of your data is beneficial, and so is being able to use types as an exploratory tool in a codebase you don't know.

[1] Just like editor shortcuts: you bear an initial cost to become more productive forever...


Which part is effortless? Solving the type checker puzzle, or asking whether type verification is truly needed in a particular given instance?


Thinking in types. 95% of the time, the type checker silently validates my input and the remaining 5% of the time, the trouble I get is understanding valuable information.

There is no such thing as a type checker puzzle once you have learned types, just like you don't stop on each word once you're literate.


Lots of valid code won't pass the type checker. That is the problem with static type checkers. You have to adapt your code to satisfy the type checker even when you know your logic is right.

Static type checking also significantly complicates metaprogramming, macros, and code generation.

But hey, I like statically typed languages too. I'm a big fan of Swift, which has borrowed many functional ideas. I just don't think static typing is the silver bullet that many of its proponents think.

It works well for some problems, but not all. E.g. I can't think of any statically typed language which can compete with Julia for numerical work and data science.

Haskell e.g. is just terrible at doing numerical work, as it is poor at mutation. You can't work effectively with lots of big matrices if you don't have easy access to mutation.
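
For contrast, here is the kind of metaprogramming that stays trivial in a dynamically typed Lisp (a sketch, not the poster's code):

    ;; a macro that times any expression -- no type gymnastics required
    (defmacro timed [expr]
      `(let [start#  (System/nanoTime)
             result# ~expr]
         (println "elapsed ms:" (/ (- (System/nanoTime) start#) 1e6))
         result#))

    (timed (reduce + (range 1000000)))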


> Lots of valid code won't pass the type checker

Perhaps you could give us an example of code you want to write that doesn't pass the type checker.

We already know about the case of maps, but it's been pointed out that is solved by row types. A different example would be good.


> Lots of valid code won't pass the type checker. That is the problem with static type checkers.

I fail to see this as a problem. It's pretty much a feature, just as you don't expect object oriented languages to tolerate any shit you throw at them when you use the OO model.

> I just don't think static typing is the silver bullet that many of its proponents think.

You're the one making the case here, before answering it.

> Haskell e.g. is just terrible at doing numerical work

I've had good experience with https://wiki.haskell.org/Numeric_Haskell:_A_Repa_Tutorial


> There is no such thing as a type checker puzzle once you have learned types, just like you don't stop on each word once you're literate.

Your analogy doesn't apply.

I can read a book in O(n) time, where n is the number of words in the book. I can read a single word in O(1) time.

When I get a type error, that is not a O(1) fix. The type error is connected to my program in the large and I have to understand/process more than just the single line where the error is called out. The worst case, obviously, is O(n).


Using a static type checker means that an error doesn't creep across all my code. The worst case you describe pretty much never happens, and the smaller instances I get are reified to a generic form I am trained to analyze and solve quickly.

> Your analogy doesn't apply.

One would think I am well placed to describe my own experience of interacting with a type system.


The worst case for the number of places I have to go fix is O(n). I.e., the verification half is O(n). The solution search side ("solving the puzzle") may be worse, because I have to generate/find the code that fits the type checker.


> turn off our brain and apply type verification everywhere ... type verification more like a religion than a tool.

A statically typed language can allow us to use dynamic typing if we want. For example, using a map of variants as a query result. Programming with maps of variants is even perfectly possible and pleasant in Haskell. Dynamic typing is not unique to dynamic languages!


I see the value of statically typed languages as I've used Swift quite a lot and quite like it. It is way more strict than say Java and C++. However when I compare my coding with Swift to the dynamic language Julia, it is way more fun and fast to write Julia code. Up to a certain point of course. As I pass about 1000 lines of code I start feeling a bit uncomfortable about the lack of static type checking to catch my changes.

However this is mostly down to me being lazy and not bothering to write tests.

I've spent some time with Haskell, and tried writing some code for a simple geometry library in it. Quite a cool language, but I could write the same code in Julia in far fewer lines, and much faster.

Haskell has just too much mental overhead. It requires far more investment than other languages to learn and I've learned plenty. Nothing has required as much effort as Haskell. Despite the effort I never felt like I could write any real and practical programs with it.

I honestly think the Haskell fan club is slightly deluded. If Haskell was truly as amazing as they think, we ought to have seen a lot more amazing software being made in Haskell. We don't. Far more interesting things seem to have been done in newer languages such as Go, Rust, Swift and Julia. That ought to be a hint that Haskell isn't quite the silver bullet that the fan club thinks it is.


> Up to a certain point of course. As I pass about 1000 lines of code I start feeling a bit uncomfortable about the lack of static type checking to catch my changes.

This is indeed where statically typed languages shine - refactoring and maintenance. With a powerful type system you can be much more assured that your refactors, or small/big changes here and there, don't break the whole system or unexercised parts of it.

> It requires far more investment than other languages to learn and I've learned plenty. Nothing has required as much effort as Haskell. Despite the effort I never felt like I could write any real and practical programs with it.

That's often attributable to the fact that Haskell isn't just another syntax slapped on an Algol-like language. It's not just different syntax (ML-style), but an entirely different approach to programming. This often leaves experienced programmers frustrated by the feeling that they are starting from scratch (without realising that that is exactly what they are doing).

Some nice advice is given by Gabriel Gonzalez here: http://www.haskellforall.com/2017/10/advice-for-haskell-begi....

> I honestly think the Haskell fan club is slightly deluded. If Haskell was truly as amazing as they think, we ought to have seen a lot more amazing software being made in Haskell

We already have a lot of amazing software in Haskell, e.g. Facebook's Haxl, Pandoc, QuickCheck, Yesod, Servant, and many more.

The argument that Haskell isn't doing stuff right because we don't see it everywhere is quite silly. By that metric PHP would be the best designed language ever...


Can I just touch on the relational vs NoSQL comment? I'd like to point out that data almost always has a schema, and it's almost always relational. Most of the time when people use NoSQL with maps of maps, they are simply refusing to recognize and formalize the structure of the data that is already there. This absolutely leads to errors and performance problems. There is a small class of problems where specialized databases do excel, but it's definitely not the common case.


"and it's almost always relational"

this is a bit backward, and not very accurate

there is no such thing as relational data and non relational data, its the other way around, there is a relational model, which promises to model any set of related data

so you can fit, any set of related data into a relational model, and this is a feature of the model, not the data

the key value, of relational models, is the promise of integrity, once you model your data into a relational model, and then only access your data through it, you are guaranteed integrity

the key issue with relational models is that they are static, so if your data change, your model need to change, and once your model changed, now you have to deal with two models .. and this is where things start to get hard .. and this is when people start to use other models .. it is not because they cannot fit their data into a relational model anymore, its because they dont want or dont know how to deal with two relational models


You are conflating two separate things. First, your comment on changing data relates to schema, not the relational model. Using a nosql database where you don't bother writing the schema down certainly means that you don't have to worry about updating your schema as the data changes. But you still need to worry about updating your code, and since you have no schema written down you run the risk of data inconsistency as you write updated code.

As for the relational model, you are correct in that it's a model for storing data. However, I will go further and also argue that it can be a property of the data itself. Consider a simple case of users and posts. This data is fundamentally 'relational' in the sense that there are two related items. No matter how you try to store this data, the relationship between users and posts will _always_ be there. You can either recognize it explicitly via the relational model or otherwise, or you can pretend it's not there which is what many people using nosql do.

Say you stored users and posts each as maps in a key value store or document store. One key per user and one key per post. Now, imagine getting all the users and their posts. In a relational database you'd do a join. In the nosql scenario you need to do multiple queries. The latter is essentially doing what the relational database is doing, but you are doing it manually via a for loop.
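
To illustrate with a sketch (hypothetical data), here is the hand-rolled "join" the key-value layout forces on you:

    (def users {1 {:id 1 :name "Ann"}})
    (def posts {10 {:id 10 :user-id 1 :title "Hello"}
                11 {:id 11 :user-id 1 :title "Again"}})

    ;; re-implementing what SELECT ... JOIN gives you for free:
    (defn user-with-posts [user-id]
      (assoc (get users user-id)
             :posts (filterv #(= user-id (:user-id %)) (vals posts))))

    (user-with-posts 1)  ;=> {:id 1, :name "Ann", :posts [...]}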

In short, what I'm arguing is that relationships in data are a property of the data.


I think Rich is talking more about data structures than data abstractions here. In particular, he's saying that he favors sound data models (RDF, datalog, relational) to manipulate information in our programs over specific data structures.

To me it is related to Alan Perlis's saying on the power you get from uniformity:

"It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures."

I totally agree with him on this point and I also agree that for some of us a lot of our programs are about manipulating static information structures. But let's not reduce programming to that.


Aren't statically typed languages "sound models"? Aren't ADTs a form of data structure?

I think his point is much more in the realm of abstraction. RH always seems to be trying to minimize the delta between thinking and coding/problem-solving. ADTs are in most cases cumbersome. Static languages introduce more impedance and coupling than benefit to the industry business problems we're solving via code. These are arguments against certain attempts at abstraction, it seems to me. But I may be missing what you're saying.


Actually, this talk mostly convinced me that dynamic typing is not just an anachronism and really is the right choice for some domains.

Then again, now I wonder if statically typed languages could get the benefits Hickey talks about just by having convenient Map and Keyword types like in Clojure and using them where appropriate.


An interesting thought I got out of this talk is the reminder that most (nearly all?) statically typed languages are really just "mandated type verification" (not RH's terminology, btw). But, in the talk, Rich isn't anti- type verification. He says, like any tool, type verification should be used consciously, it should be a la carte (his term); that is what clojure.spec is all about, as I understand it.


>I do think that once you come to terms with the problems of ADTs, you are on your way to not needing all the static type verification, but leave that to later.

Excellent thought.

Also, as a side note, people who criticize Clojure for its lack of static type enforcement forget that Rich Hickey had solid prior experience in C++, so it's not as if he hadn't tried a statically typed language before.


While that is true, it's not like C++ and Java are the torchbearers of static typing.

Much better type systems exist than those of C++ and Java and basing your worldview of static typing just on the experience with these two is shortsighted.


Right, but most of the statically typed code in the world is written in C++, Java, etc. If you're presenting Clojure to your organization, the choice isn't going to be Clojure vs Haskell/Idris, it's going to be Clojure vs Java. The sort of boosters for static typing in this domain are the AbstractSingletonProxyFactoryBean people. I think defenders of static typing have at least some burden to defend "actual existing type systems" rather than immediately falling back to "well, in Haskell..."


Even if we assume that your point is true, Hickey wasn't comparing Clojure's type system specifically with only C++/Java. He was making a general point about static vs dynamic typing. He even mentioned Haskell and C++ in the same sentence to make a point about statically typed languages.

When you are comparing the best of dynamic typing, you should compare it with the best of static typing, else the competition is lopsided by specification.

And it's not a fallback on "well, in Haskell". It's not like more "hip" languages can't adopt some of the best ideas from Haskell and OCaml to make their type systems better. Modern languages have already started doing this - Rust for example.

It's about learning from research and making the consumer product better. It's a fact that Haskell/OCaml will never be on the top of the TIOBE index, but they can certainly influence those who are.

Making ill-informed (I can only assume ignorance at best here) blanket statements like Hickey did in _this_ particular talk doesn't help the programming language community.


What makes you assume Haskell is somehow more niche than Clojure? I would say it depends on the domain. Haskell is certainly more mature (~30 years) and has a lot more tutorials and literature.


I totally agree that it depends on domain, but I think that's part of the point -- making these comparisons, i.e. Clojure vs Haskell in the abstract aren't particularly interesting, as it seems obvious that no general purpose language is perfectly general for every purpose.


It runs on the Java runtime and interops seamlessly with Java which are two big factors in its favor.


> Haskell is certainly more mature (~30 years) and has a lot more tutorials and literature.

and yet, still, nobody outside the gated community has any idea wtf a monad is. :(


I was surprised to see Haskell in widespread use at several large enterprise companies in Singapore.


Yes 40+ fulltime Haskell devs in London and Singapore at Standard Chartered, with many other users writing Haskell.


Many universities outside of the US teach functional programming and Haskell. The community is open, international and a healthy mix between industry and academia.


C++, Java, and C# if I recall correctly


I do agree that there are problems where the "maps of facts" approach is better suited than ADTs. I'm not sure if those are the majority of problems we're solving. You can still do all that in a typed language, using dictionaries. You can be as precise as you want. In some sense you lose that without a type-checker.

I only have my experience (which is about 1/4 of his) to lean on. And I've only had problems when protocols are not well understood by all parties, never the opposite. Talking at the level of ADTs (or some representation that is equally precise) tends to either clear things up, or uncover problems that were not obvious in informal discussion.


> Talking at the level of ADTs (or some representation that is equally precise)

I will argue that ADTs offer no more precision than first-class properties, and first-class properties are much more aligned with human beings' natural way of thinking.

I.e., "name" is semantically strong/precise; I don't need to enslave it to a Person class. I can give a cat a name, a building a name, etc.

ADTs are trying to bundle properties together when those properties should/could have strong enough semantics on their own and be much more "composable" (RH's term) and freer to use.


Maybe we have to dig into the specifics: there's a lot of things that can be done with Name, regardless of where the name belongs, is that what you mean? If so, most typed languages let you say that Person can be a thing with a Name. I don't have Clojure experience so we might be talking past each other here.


What resonates most with me is that traditionally ADTs make properties second-class citizens of the ADT/class.

First-class properties have strong enough semantics on their own, and bundling them together in an ADT almost always introduces coupling and rigidity with no real benefit.

Why do we need the Person class? Why do we need to taxonomize like that? The answer is: we don't. All the Person class brings to the table is straitjacketing and a false sense of order and security.


I haven't watched the full video, but here are some things that a hypothetical Person class brings to the table that you might be missing:

1) A Person class can guarantee that a Person can not exist unless it is fully specified. E.g. you prevent Persons from being created that don't have a date of birth.

2) A Person class provides a syntactically nice place to put pieces of code that rely on the instance fields. E.g. a function like Person(...).computeAge(DateTime.now)

3) A Person class can be a nice place to put inter-field validations. E.g. Maybe you don't want to allow Persons to be constructed if their last name is the same as their first name

4) A Person class can bring clarity to function signatures that depend on it. Basically resolving the problems that Duck-typing introduces


What's a "first-class property"?


   class Person {
      def age;
   }
There is a second-class "age" property: the only way to specify it or use it is to first declare the ADT/Person class. (In this example, I use a dynamic language; that is, the property can be given any type of value.)

Here is the usage of a first-class age property:

   public boolean isOld(Map<String, Object> entity) {
      return entity.getAs("age", int.class) > 30;
   }
Note that in this example, age's semantics are independent of any Person/ADT. Now, a statically-typed language that would let me define a vocabulary that includes "age" and its type independent of an ADT would be supporting first-class properties.

I could cheat and do this in a typed language like Java like so:

   class Age { int val; } // using class to define a property
   class Entity { Age age; ...and list all other properties here... }
But this is absurdly cumbersome and obviously Java isn't conducive to this.
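
For comparison, a sketch of the Clojure rendering of the same first-class property - the property is just a keyword, usable on any map-shaped entity:

    (defn old? [entity]
      (> (:age entity) 30))

    (old? {:name "Rich" :age 45})       ;=> true
    (old? {:kind :building :age 120})   ;=> true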


> Note that in this example, age's semantics are independent of any Person/ADT. Now, a statically-typed language that would let me define a vocabulary that includes "age" and its type independent of an ADT would be supporting first-class properties.

What you're referring to is known as row polymorphism or structural typing, and it exists in languages such as Elm, PureScript, Scala and Go (and I believe Haskell has these as an extension). For instance, this is valid Scala:

    def isOld(thing: {def age: Int}): Boolean =
      thing.age > 30

    case class Person(name: String, age: Int)
    case class Dog(age: Int)
    println(isOld(Person("Jim", 25)))
    println(isOld(Dog(31)))
And here's Elm:

    isOld : { a | age : Int } -> Bool
    isOld thing = thing.age > 30

    Debug.log "old?" (isOld { name = "Jim", age = 42 })
    Debug.log "old?" (isOld { age = 10 })


This is pretty cool. The Scala version is a little weird, though. Why do I need separate Person and Dog classes? Can't a dog have a name as well? Why do I need to declare that upfront, or ensure that no other part of the code can give a Dog a name? That's the problem with ADTs; there's all this impedance.

The Elm version is much better. Except of course I have to declare my property requirements upfront in the method signature -- while that is also a burden, it is a much lesser burden than having an ADT/taxonomy of Person, Dog, etc.


I think the Scala version does exactly what you want it to do, but you seem to be confused about something that I can't really pin-point.

> Why do I need a separate Person and Dog class?

You don't. It's an example.

> Can't a dog have a name as well?

It can.

> Why do I need to declare that upfront or ensure that no other part of code can give a Dog a name?

I have trouble understanding the meaning of that sentence.


> The Elm version is much better. Except of course I have to declare my property requirements upfront in the method signature -- while that is also a burden, it is a much lesser burden than having an ADT/taxonomy of Person, Dog, etc.

Elm (and Purescript) should be able to fully infer the type of that function w/o forcing you to declare which fields you want to use up-front.


How would it infer:

   isOld(parseJson(json))
When parseJson may or may not produce an entity with the "age" property?


Sorry, I wasn't clear. My point was that you don't have to declare the type of `thing` up front in that example. It would be able to infer that it's a record with an `age` field from the function's definition.

In your example, `parseJson` would have to return a record with an age field, otherwise the compiler would complain. Personally, I think that represents a reasonable tradeoff in ease of use for safety.


> I believe Haskell has these as an extension

Not really ... Haskell has libraries that claim to implement this but we Haskellers have to admit Haskell's support for row types is pretty poor. A Clojurist's objections to Haskell on those grounds are valid (in practice though not in theory).


Does this meet your criteria of "first-class" property? It works on any entity that "has" an "age", at least according to my definition of those things!

    -- Requires MultiParamTypeClasses and FunctionalDependencies.
    -- The class definition can often be entirely elided
    -- with Generics or Template Haskell.  Instance
    -- definitions can generally require minimal or no
    -- implementation either
    class HasAge r age | r -> age where
        getAge :: r -> age
        setAge :: r -> age -> r

    isOld :: (HasAge r age, Ord age, Num age) => r -> Bool
    isOld r = getAge r > 30


First-class generally means something you can put in a variable and compose with higher-order operations. Think first-class functions.


> I do agree that there are problems where the "maps of facts" approach is better suited than ADTs. I'm not sure if those are the majority of problems we're solving.

This also seems like the kind of thing where row-types would allow for statically typed, fully inferred programs that are written in the style Rich is advocating.


Classes and ADTs are both examples of nominal typing, but structural typing would offer a better fit for your example. Structural types can give types to queries without requiring any declaration or naming of the result type. Correctness in terms of the assumed fields can still be verified. Most of the anti-modular and boilerplate criticisms of static types do not consider structural typing, perhaps because industry has yet to discover it.


TypeScript is structurally typed and is pretty popular. The industry has discovered this. It's still more painful than ClojureScript in many ways (and less painful in other ways). To me, having written a fair bit of both, it's not clear which is best. Both are good. ClojureScript is far more elegant.

PureScript is probably the most interesting statically-typed language I've seen in a long time. But for some reason, powerful static typing never seems to gain much traction.

So, if I had to choose between the most popular statically typed languages or Clojure, Clojure wins hands down.


I am a dynamic language fan but I would actually disagree. I prefer the Julia programming language and there I see a lot of benefits from working with types.

I've always had a fascination for LISP, but every time I've tried using it after learning Julia, I've found the lack of type focus limiting and in the way.

Types make it easier to build abstractions. I can write several functions which look the same but act differently depending on the type of the arguments.

The way in which types get in your way in statically typed languages is pretty much gone in Julia as you can so easily and fluidly deal with types as they are just objects at runtime which you can pass around freely.

Julia already has most of the things I like about LISP: everything is an expression, and it is really easy to write macros.


> I've always had a fascination for LISP, but every time I've tried using it after learning Julia, I've found the lack of type focus limiting and in the way.

> Types make it easier to build abstractions. I can write several functions which look the same but act differently depending on the type of the arguments.

You can do this in Lisp:

    CL-USER> (defclass my-class () ())
    #<STANDARD-CLASS COMMON-LISP-USER::MY-CLASS>
    CL-USER> (defmethod identify ((mc my-class)) "This is my class")
    #<STANDARD-METHOD COMMON-LISP-USER::IDENTIFY (MY-CLASS) {1005BDFFB3}>
    CL-USER> (defmethod identify ((n fixnum)) "This is a fixnum")
    #<STANDARD-METHOD COMMON-LISP-USER::IDENTIFY (FIXNUM) {1005C984F3}>
    CL-USER> (identify (make-instance 'my-class))
    "This is my class"
    CL-USER> (identify 3)
    "This is a fixnum"
> The way in which types get in your way in statically typed languages is pretty much gone in Julia as you can so easily and fluidly deal with types as they are just objects at runtime which you can pass around freely.

Did you notice how DEFCLASS returned a class object earlier? It's a first class value in Lisp:

    CL-USER> (defvar *c* (find-class 'my-class))
    *C*
    CL-USER> (type-of *c*)
    STANDARD-CLASS
    CL-USER> (make-instance *c*)
    #<MY-CLASS {1005E38AA3}>
    CL-USER> (typep (make-instance 'my-class) *c*)
    T
    CL-USER> (subtypep (defclass derived-class (my-class) ()) *c*)
    T


I would recommend taking the time to actually watch the talk [0]. It's well worth it.

[0] https://www.youtube.com/watch?v=2V1FtfBDsLU


Thanks for this.


> So fundamentally, what is Clojure about? Can we make programs out of simpler stuff? I mean, that's the problem after 18 years of using, like, C++ and Java, you're exhausted. How many people have been programming for 18 years? Ok. How many for more than 20 years? More than 25? Ok? Fewer than 5? Right (?), so that's really interesting to me. It may be an inditement (?) of Clojure as a beginner's language or may be that Clojure is the language for cranky, tired, old programmers.

That's about right :) Yes, I'm tired, cranky, and old.


I was a Java dev for about a decade, and I was seriously considering a career change before I discovered Clojure. Working with Java nearly convinced me that I did not enjoy programming. Moving to Clojure was pure joy for me. I've been using it for over 8 years now, and I still enjoy working with the language immensely.


I know what you mean. I wish I could find a job that allowed me to use Clojure. I've just accepted a new position... as a Java dev. Oh well, the good tools are for side projects only, I guess ;)


I ended up converting my team to Clojure. I started by making some internal tooling we used for support, and got a few devs interested. We started using it for prototyping, and the prototypes worked so well that we saw no reason to rewrite them in Java. It kind of just went from there. Obviously that won't work everywhere, but we have a few similar stories at our Clojure meetup. It's definitely an approach to consider. :)


I've been coding for forty years and I'm not going to go anywhere near a dynamically typed language ever again for any job bigger than a ten-line bash script.


Which dynamic languages did you use? Like statically typed languages, there is a huge variety. I don't decide whether I like a language based on whether it is static or dynamic.

I hate JavaScript, PHP and Perl, which are all dynamic, while I like Python, Lua and Julia. But I also like static languages such as Swift and Go. I have a certain fascination for Haskell, but I can't be bothered to invest the time to get any good at it, which kind of suggests to me that Haskell is a dead end. If programming language geeks like me can't invest the time, why would the average developer ever bother?

My work language is C++, which I actively hate, but I can stomach it because we work on interesting software.


I've been programming for close to 20 years. I hate dynamic languages. Except Clojure.

If you enjoy programming, and you ignore Clojure, you're missing out on a fun and interesting language.


I love this talk, but it does throw out a lot of complicated ideas somewhat loosely, so I get why reactions and interpretations are all over the place.

I think a good companion to understanding this better is Rich's talk on "The Language of the System" (https://www.youtube.com/watch?v=ROor6_NGIWU&t=2810s).

I interpret his overarching thesis as this:

Most "situated" programs are in the business of processing information according to complex and changing rules, in cooperation with other programs (i.e., a system). Many languages, though, are overly-concerned with how they represent data internally: classes, algebraic types, etc. This "parochialism", he calls it and "concretion" about how data aggregates should work, make them hard to adapt when the rules, data, or other parts of the system change, and make it hard for their programs to work in systems. At some point your Java class or Haskell ADT has to communicate its data to other programs that have no notion of these things. So you end up w/ a ton of surface area in your code with mappers and marshallers totally unrelated to the content of the data and purpose of the program.

The idea behind Clojure is to provide easy access to simple, safe, and relatively universal structures for holding data, and a library of functions to manipulate those structures. Its "big picture" design bits are about providing semantics for multiple "programs" (from threads to services) in a system to operate on data robustly and reasonably (concurrency semantics, time and identity models, pervasive unadorned data, etc.) At some point you're going to be sending this program's data over a wire to another program, and things like "a map of strings and numbers" is pretty straightforward to transport, while a sum type implementing functor with a record data constructor that contains a Maybe SSN is not. It overly couples the underlying data to the language's representation.
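
To make that concrete, here is a minimal Clojure sketch of the "just a map" approach (data hypothetical); the value travels as plain EDN text with no class definitions attached:

    (require '[clojure.edn :as edn])

    ;; Plain data: a map of keywords, strings, and numbers.
    (def employee {:name "Ada" :id 42 :team "core"})

    ;; pr-str emits EDN; any consumer with an EDN reader can use it.
    (def wire (pr-str employee))
    ;; => "{:name \"Ada\", :id 42, :team \"core\"}"

    (edn/read-string wire)
    ;; => {:name "Ada", :id 42, :team "core"}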

The plus side of the typed approach is that the language can check internal consistency for you. The downside is that you're carrying a lot of baggage that you can't take with you over the wire anyway. Communication in systems is also why Rich thinks making "names" for data first class is important. Existing strongly typed languages can sort of accommodate this, but don't really privilege names.

So I think a lot of strong typing advocates are upset because they think Rich is saying types don't have value within programs. I don't think that's right. I think he's saying they have very limited value in open systems, which makes their costs often overwhelm their benefits in the individual programs within those systems.

In general, I feel like the debate has been about examining Rich's claims in the context of programs (is Maybe String good or bad, etc.), whereas he's really interested in what works in systems. I think that's indicated by his focus on the term "parochialism" which I have not seen a lot of folks address.


> Most "situated" programs are in the business of processing information according to complex and changing rules, in cooperation with other programs (i.e., a system). Many languages, though, are overly-concerned with how they represent data internally: classes, algebraic types, etc. This "parochialism", he calls it and "concretion" about how data aggregates should work, make them hard to adapt when processing rules change, and make it hard for their programs to work in systems.

It doesn't if the systems are well-designed systems, composed of loosely-coupled components, something like you'd get if you used 1970s structured analysis and then actually modeled the implementation closely on the DFD, with communication over a message bus using a fairly neutral messaging format.

When you start tightly coupling components (e.g., by using a messaging format tightly bound to an internal representation), using ad-hoc component-to-component integration rather than a common message bus that is abstracted from the individual components, and generally do the system engineering badly, then you have a whole pile of problems, some of which are exacerbated (but not caused) by static typing, sure.

But static typing is not the problem here.


I think type systems that try to "close" aggregates (i.e. saying "an Employee is these fields that have these types and no more") kind of do contribute to the problem. I sort of agree that static types are not "causing" such problems. They don't "cause" bad design, but they tend to make it too easy to set bad designs in stone (and most designs are bad in some way). It's not so much about types causing problems or being bad, but about them having costs, and thinking hard about those costs vs. the benefits. Different folks will, and should, reach different conclusions for their problems.

I read him as exaggerating his critique a bit because types are often oversold (static type people can be really dismissive of dynamic languages). But I think he's mostly making a "no silver bullet" kind of argument.


> I think type systems that try to "close" aggregates (i.e. saying "an Employee is these fields that have these types and no more") kind of do contribute to the problem.

They are part of the problem if such types are shared among components; perhaps because of a design in which messages or data transfer object types are tightly coupled with the working representations in components.

But that's an unnecessary form of coupling.


Agreed that's bad. But then if the transit/messaging/persistence components of your systems are independent of the type system (good), to really use your type system you have to do work pushing things in and out of it, in return for type safety (sometimes not much of it) that only lasts until the border of your program. It's really easy to over-engineer your types because you really want to pin down the representation of your problem in the idioms the language gives you. ORM (ab)use is a good example of this, I think.

I've often made the mistake myself of architecting a too-clever type or class system for my problem, and then been faced with writing tons of crap to wrestle it in and out of protobufs, etc. that needed to be more general than my problem. When my program was running, it was like, woo, I made some illegal states unrepresentable, which felt great! But I could almost never do that in a way that didn't quickly reveal itself as too brittle.

I like types (mostly). I wish gradual/partial typing were a better-solved problem. Clojure's goal is to keep you from over-engineering and tangling up your systems, by having you pass around simple immutable data. If you keep your system nicely decoupled, the types, which are good at finding where I've forgotten a coupling in my code, seem less valuable to me.


> It doesn't if the systems are well-designed systems, composed of loosely-coupled components

I've been going down a similar line of thought, but I went the other direction: perhaps in "poorly designed" systems where there is lots of coupling, static typing at least gives you the "maintenance" benefit that is one of the bigger justifications the static type apologist tends to give. You actually hear this a lot: "in large systems, static typing is a must..."

So it's interesting to me that you're going the other way and saying that actually, in big, messy systems, static types may hurt you. That's not a common position.

When I walk through some of the big problems in "poorly designed" systems it almost always comes down to coupling: I can't touch one part of the system without having an effect on other parts of the system.

Interestingly, Rich Hickey criticizes the common static typer's idioms (like pattern matching and ADTs) as coupling. And he's right. What always surprises me though is that the static typer doesn't disagree -- they look at this coupling as a feature! They usually say something along the lines of "I choose static typing because if I change my Person class, then the compiler reminds me of all the places in my code that I need to go fix." What's remarkable about this is that it's not a reminder...it's an obligation that your choices plus the compiler are burdening you with: you must go update all those places in the code. This is the very definition of coupling.

There is a way to architect code such that you don't have to revisit 100 places in your architecture when some new data model decision is made/discovered. There is a way to build systems wherein you only have to touch one place in your code when some new feature or data information is needed.


> So it's interesting to me that you're going the other way and saying that actually in big, messy systems that static types may hurt you.

In an overly-coupled large system, static typing increases the potential effect of excessive coupling by forcing changes to remote parts of the system when making what seems to be a point change. But that effect, while magnified by static typing, is a product of coupling.

And static typing in that situation, OTOH, mitigates (as you note, static proponents are quick to point out) the chance of missing a change that would produce incorrect behavior.

On the gripping hand, reducing the excessive coupling gets to the root of the problem, while static v. dynamic is just choosing how to allocate pain that could be avoided with better architecture.

But languages are sexier than architecture.


> They usually say something along the lines of "I choose static typing because if I change my Person class, then the compiler reminds me of all the places in my code that I need to go fix." What's remarkable about this is that it's not a reminder...it's an obligation that your choices plus the compiler are burdening you with: you must go update all those places in the code. This is the very definition of coupling.

> There is a way to architect code such that you don't have to revisit 100 places in your architecture when some new data model decision is made/discovered. There is a way to build systems wherein you only have to touch one place in your code when some new feature or data information is needed.

The way to avoid that has nothing to do with static or dynamic typing, though. If you change a protocol then you have to change anything that relied on the old protocol if you want your program to keep working, regardless of your language's type system; in a statically typed language it will tell you where those places are, and in a dynamically typed language it's up to you to find them. If your change doesn't break a protocol that old code relied on, then you won't have to change old code. The only changes dynamic typing "saves" you from making after you break a protocol are bugfixes.

If your code is tightly coupled so that changes ripple through the entire codebase, using a language that doesn't tell you where those changes have to ripple for things to keep working won't solve that.


I don't see the difference between sending a Person instance and sending a map of keywords about a person. The coupling is the same.


If my function only needs to know the "age", then why am I having to fill out my Person class with all the other stuff? Why, if I have facts about a Cat in hand, must I coerce it to a Person? These are hoops you're typically jumping through when you're dealing in ADTs.


If "age" is an important property in your system shared among different kinds of entities then you need to have an Interface or a Protocol to retrieve the age of an entity.

The same way you would create a keyword in Clojure to represent the age of an entity (e.g. ':entity/age') that can be put in a map describing a person or a cat.

In both cases you minimized the interface between your modules and you have less coupling.
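
For example, a minimal Clojure sketch of the keyword version (names hypothetical); the shared "interface" is nothing more than the namespaced key itself:

    ;; Any map carrying :entity/age participates; no taxonomy required.
    (def person {:entity/age 41 :person/name "carol"})
    (def cat    {:entity/age 3  :cat/breed :tabby})

    ;; Keywords act as lookup functions on maps.
    (map :entity/age [person cat])   ; => (41 3)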


Not in OCaml. Using OCaml's structural object types, you can have a function like this:

    let print_age x = print_int x#age

And the compiler will just check that anything passed to print_age has an age method; the inferred type is `< age : int; .. > -> unit`.


Well, in Haskell, this seems like a case where you'd want a typeclass for getting the age out of your type.

More generally, though, it seems like row-types might be a form of static typing that would fit Rich's preferred style of programming.


That doesn't seem to describe any hoops I've ever had to jump through when using Haskell. Can you give concrete examples?


I just did. Having a Person vs. Cat taxonomy. The claim is about ADTs, not Haskell. When a "name" property will do, why do we need to introduce an ADT? Why do we need to taxonomize?


Then you can use a "HasName" typeclass. Admittedly that adds a bit of boilerplate (in one single place).


I think the constant replies of "oh, there's a way to deal with that" miss the point. You should keep asking yourself, "Am I fixing a problem that didn't need to be there?" Sometimes the answer is: no, I do want this structure, and it's worth it overall to write interfaces, etc. to add some polymorphism or dynamism where needed. In lots of cases, though, you're just writing stuff to accommodate the language. In lots of languages I feel like I'm fighting an internal battle between static-ness and dynamism. Start with static types or classes, then add interfaces or typeclasses, oh and overload these functions. Now make sure these other things implement this new interface so they can participate, etc.

Sometimes it feels like a real burden for not much gain over just passing around the basic data (a name, an age) I wanted to deal with to start with. Clojure's proposition is that in many, many cases, not getting fancy with the data or over-engineering your problem representation will lead to simpler programs that are easier to maintain, giving you an alternative route to safety and maintenance instead of type-checking.
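
As a hypothetical sketch of that "just the basic data" style in Clojure, a function can destructure only the facts it needs and ignore everything else:

    ;; Works for a person, a cat, or ad-hoc test data; extra keys are ignored.
    (defn greet [{:keys [name age]}]
      (str "Hello " name ", age " age))

    (greet {:name "carol" :age 41})              ; => "Hello carol, age 41"
    (greet {:name "tom" :age 3 :species :cat})   ; => "Hello tom, age 3"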


> If my function only needs to know the "age", then why am I having to fill out my Person class with all the other stuff? Why, if I have facts about a Cat in hand, must I coerce it to a Person?

If your function only needs to know the age, then why would it take a Person or a Cat at all, instead of just accepting an age parameter? But assuming you have a reason, who says you do need to coerce anything or add any dummy data? You don't even have to go very niche to get that functionality, e.g. in TypeScript:

    class Person {
      age: number
      constructor (age: number) { this.age = age }
    }

    class Cat {
      age: number
      constructor (age: number) { this.age = age }
    }

    const printNextAge = (thing: { age: number }) => {
      console.log(thing.age + 1)
    }

    // These all work
    printNextAge(new Person(12))
    printNextAge(new Cat(23))
    const someRandomObject = { age: 10, colour: 'green', weight: 'heavy' }
    printNextAge(someRandomObject)

    // These don't:

    const lady = { name: 'carol' }
    printNextAge(lady)
    // error TS2345: Argument of type '{ name: string; }' is not assignable to parameter of type '{ age: number; }'.
    //  Property 'age' is missing in type '{ name: string; }'.

    const caveman = { age: 'stone' }
    printNextAge(caveman)
    // error TS2345: Argument of type '{ age: string; }' is not assignable to parameter of type '{ age: number; }'.
    //  Types of property 'age' are incompatible.
    //    Type 'string' is not assignable to type 'number'.
Now, if the function takes a Person, then the reason you need to fill out the rest of the stuff is because it probably wants an entire Person, not just their age. The fact that the function can tell the compiler it needs an entire Person (and not a Cat) and have it ensure that it only gets valid Persons doesn't stop you from doing anything a non-buggy program should do, it just makes the language more expressive.

Even in a wordier language with a less powerful type system like Java, which obviously isn't the gold standard for static typing (and where for some reason your function was still taking an object instead of just an age int and leaving it up to the caller to extract it), it's as simple as saying:

    interface Aged {
        int getAge();
    }
and adding 'implements Aged' to your Person and Cat classes.


> So it's interesting to me that you're going the other way and saying that actually in big, messy systems that static types may hurt you. That's not a common position.

I didn't interpret it that way. I interpreted it as "if you have a big, messy system you can tame it into a nice, loosely coupled system by adding some types".


> At some point your ... Haskell ADT has to communicate its data to other programs that have no notion of these things. So you end up w/ a ton of surface area in your code with mappers and marshallers totally unrelated to the content of the data and purpose of the program.

No it doesn't.

> things like "a map of strings and numbers" is pretty straightforward to transport, while a sum type implementing functor with a record data constructor that contains a Maybe SSN is not.

Yes it is.

Has this guy ever heard of Generics?


This guy was a professional C++ programmer for a couple of decades so he probably came across generics.

I think it's possible you didn't catch the parts where he talks about what he wants from his data structures. There were 2 key pieces:

+ that he can transport them between environments, possibly remotely, possibly written in different programming languages

+ that parts of the system only need to know about parts of the data structure. More, that as the data structure is passed around the system, only the producer and consumer of changes to the structure are affected by the change.

I'm not aware of any static type system, with generics or otherwise, that would meet these goals. At least not without being post-facto and highly artificial.

Whether or not you agree with the priority he gives to these goals is, of course, a different matter.


> that he can transport them between environments, possibly remotely, possibly written in different programming languages

This is pretty easily solved in static languages with "serializable" interfaces, which can usually be automatically derived. E.g., in Rust you can use #[derive(Serialize, Deserialize)], and in OCaml you can use [@@deriving sexp]. This also lets you know at compile time which types can safely be serialized. In Clojure, if you have a type that contains an InputStream of some sort, it's not reasonable to serialize it. But you won't find out until runtime, when you happen to have an instance of that map with the InputStream.
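
A hypothetical Clojure sketch of that failure mode: printing succeeds, and the problem only surfaces when something later tries to read the result back, at runtime:

    (require '[clojure.edn :as edn])

    (def m {:id 1 :body (java.io.ByteArrayInputStream. (byte-array 0))})

    ;; pr-str does not complain; the stream prints as an opaque #object tag.
    (pr-str m)
    ;; => "{:id 1, :body #object[java.io.ByteArrayInputStream 0x... \"...\"]}"

    ;; Only the eventual reader blows up:
    (edn/read-string (pr-str m))
    ;; => RuntimeException: No reader function for tag object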


I'm fairly certain the comment you are replying to isn't talking about "generics" but "Generics" with a capital _G_.

This bit,

> while a sum type implementing functor with a record data constructor that contains a Maybe SSN is not

indicates Hickey was talking about Haskell's type system, where you in fact can derive a Functor for your data type by using Generics.

> + that he can transport them between environments, possibly remotely, possibly written in different programming languages

There exist tools to generate types between languages.

> only the producer and consumer of changes to the structure are affected by the change.

If your function doesn't alter the data type, it needs no info on the structure of it. Perhaps you can expand on what you meant here?

EDIT: Ehrm, why the downvotes? If you disagree with the above, explain what you disagree with.


Generics can make serialization easier in Haskell, but that's not exactly the point. The point is, once your Haskell program is done with that data, it's getting tossed into a message queue or database, or whatever, that doesn't really care or have any concept of what typeclasses it implements, whether one of its constructors is an Either, etc. In open systems you don't really get to decide who consumes your data or how--your program can't communicate anything other than data to them--and so you often don't have a way of enforcing your types on eventual consumers. Haskell has strong opinions about how it thinks data should be represented and aggregated. But in large open systems, as the saying goes, "opinions are like aholes; everybody has one."

When I think about the popular tools for moving data around large open systems: the message queues, key-val stores, pub-subs, etc. --- it seems to me that the idea of moving and communicating types and objects over wires has largely been a dud. Think RMI, OODBs, etc. It's just hard to get other people (tools, services) to care about how you've decided to organize the entities in your program. It's a lot of work, and the benefits over throwing around mostly "plain" data may not be compelling enough.

Again, I keep coming back to his term "parochialism" and why he's focused on it. I think it's an under-appreciated point amongst all the language wars.


I feel like this thread[0] in the discussion sorta delves into that. It's certainly an area with differing opinions, and I can see why some might prefer having it simply be strings the whole way down, but at some point you are going to need to interact with the values you have, and at that point you need to know what type you are dealing with, so I really feel the serialisation argument is a bit weird. In databases you also have types on everything, albeit often less powerful ones. If not caring about types is really what you want, nothing stops you from treating everything as a String in Haskell. Heck, you can even do dynamically typed programming in Haskell with Data.Dynamic and Data.Typeable if you wanted to, but that sorta defeats the whole point of a nice and powerful type system.

I think it's kinda ironic for you to bring up "parochialism" or narrow-mindedness when that is exactly what I was thinking throughout Hickey's talk.

[0] https://www.reddit.com/r/haskell/comments/792nl4/clojure_vs_...


> In open systems you don't really get to decide who consumes your data or how--your program can't communicate anything other than data to them--and so you often don't have a way of enforcing your types on eventual consumers. Haskell has strong opinions about how it thinks data should be represented and aggregated. But in large open systems, as the saying goes, "opinions are like aholes; everybody has one."

True, and edn is just as parochial as any Haskell serialization format. I don't see how Hickey can claim primacy here.


> you often don't have a way of enforcing your types on eventual consumers

A type isn't something you enforce on consumers. It's something you enforce on yourself to help shape your code.

Regardless of how you put something onto the wire you're giving it a specific format that your consumers need to know about. This is the same whether it was serialised from Haskell or Clojure or Coq or assembly.


I didn't really understand his talk; I think he was too abstract.

It is possible I am missing a lot of the background information needed to understand it, but still... the talk wasn't that technical, just too abstract.

I think he could have shown examples, code examples, to make a more concrete statement.


I'd argue this is the type of talk a 10x developer makes when he tries to talk on "his level"; as he mentions, he had 18 years of experience already when starting with Clojure, so that's the level he started at, and it's ten years later now. He could probably make simpler talks, but he doesn't. A lot of the talks he does can be taken and studied (along with all the underlying subjects) as a project that could entertain you for weeks.


I agree it'd be nicer if he could dive down deeper into tangibles. However, that can take a lot of time in a talk like this. Many of the points he makes could almost warrant an entire talk/article of their own. For example, he rails hard on ADTs/classes -- I think he's right here, but this point could be elaborated on at length.

Still, I look at Rich as someone who:

   - has had many years of hard-won practical experience
   - whose value system is ultra-pragmatic[1]
Now that doesn't mean his ideas should be taken without discretion or inspection. Even hardly-mortals like RH are subject to the flaws of man.

But if you're willing to think that he might be speaking some truths, then he gives a lot of threads that we can follow to find out for ourselves.

For example, are ADTs really as horrible as he claims? That is something you can start diving into and asking yourself: what if I got rid of all my ADTs and started programming more with maps and a la carte type validation instead of static types everywhere? That is something we can go put into practice and find out if it is really as liberating as RH says.

[1] This is important. In tech, a lot of value systems are oriented around other things: what is cool, what is fun, what is intellectually stimulating. Versus what most pointedly addresses our business needs.


> I didn't really understand his talk I think he was too abstract

Then you'd better stay away from another talk at the same conference, by Guy Steele [0]!

[0] https://www.youtube.com/watch?v=dCuZkaaou0Q


I did watch half of it... I left when I realized that he was not going to move on to another topic; he was just truly talking about notations, comparing them, and telling stories about how they evolved.

I watched in anticipation that he would use this to make a point about Clojure or programming models or techniques, but he wasn't going to.

It was truly just a speech about notations, and I don't really care about that.


> That was not written in C++, that was around the time I discovered Common Lisp, which was about 8 years into that 15 years. And there was no way the consumer of this would use Common Lisp, so I wrote a Common Lisp program that wrote all the yield management algorithms again out as SQL stored procedures and gave them this database, which was a program.

> Eventually I got back to scheduling and again wrote a new kind of scheduling system in Common Lisp, which again they did not want to run in production. And then I rewrote it in C++. Now at this point I was an expert C++ user and really loved C++, for some value of love

> [Audience laughter]

> that involves no satisfaction at all.

> [Audience laughter]

> But as we'll see later I love the puzzle of C++. So I had to rewrite it in C++ and it took, you know, four times as long to rewrite it as it took to write it in the first place, it yielded five times as much code and it was no faster. And that's when I knew I was doing it wrong.

Seems like a pretty good argument for Common Lisp here.

> So, when I discovered Common Lisp, having used C++, I said that, "I'm pretty sure to the answer to this question is, 'yeah, absolutely'". And can we do that with a lower cognitive load? I also think, "yes, absolutely". And then the question is, "can I make a Lisp I can use instead of Java or C#?". Cuz you just heard my story, and I used Common Lisp a (?) couple of times, every time it got kicked out of production, or just ruled out of production, really not kicked out, it didn't get a chance. So I knew I had to target a runtime that people would accept.

Pity he hadn't heard of Armed Bear Common Lisp (http://abcl.org/), which runs on the JVM.

> And the old Perlis, you know, quip about, you know, "any sufficiently large C or C++ program, you know, has a poorly implemented Common Lisp", is so true.

That's actually Philip Greenspun: https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule

I (unsurprisingly) don't think his list of problems with Lisp is convincing. I don't mind mutable state, since the real world is not functional: real programs are all about side effects. I don't mind that not everything is a list: as he notes, lists aren't the perfect data structure. I think the package system is very clean & understandable.


> Pity he hadn't heard of Armed Bear Common Lisp (http://abcl.org/), which runs on the JVM.

He was certainly aware of SBCL. He had a number of reasons for not wanting to use it. [1]

> I don't mind mutable state, since the real world is not functional:

I hear this argument often, but the "real world" is not linked lists or hash maps or mutable or 0s and 1s either, but they are all useful representations.

> real programs are all about side effects

Clojure has plenty of facilities for side effects, and without having to resort to monads (though you could if you wanted).

[1]: https://www.youtube.com/watch?v=cPNkH-7PRTk


>Pity he hadn't heard of Armed Bear Common Lisp (http://abcl.org/), which runs on the JVM.

The first public release of ABCL seems to have been after the first release of Clojure.


Was it? Ward's Wiki indicates someone was using it in 2004: http://wiki.c2.com/?ArmedBearCommonLisp (as an aside: why is JavaScript required now?), while Clojure dates back to 2007.


> (as an aside: why is JavaScript required now?)

Because it was "improved" (rebuilt) recently.


Was it really about static typing vs dynamic typing, or more about C++ vs Lisp/Clojure?


Not sure it was really about either.

It's really about what experiences led him to make the decisions he made both in deciding a new language was needed and what attributes the language should have.

Clearly the decisions were coloured by his experiences of C++, Java and Common Lisp. But they appear to have been much more affected by his views on information structure, flow and evolution.


I found the discussion on /r/haskell to be much more pleasant and a good take on the points Hickey makes: https://www.reddit.com/r/haskell/comments/792nl4/clojure_vs_... (it's actually a discussion of a piece discussing his talk).

I honestly don't find Hickey's arguments to be very valid, and his take on type systems honestly makes him sound like he has never touched something like Haskell (maybe OCaml), with much more powerful type systems and type inference. He does not sound like someone who invented a language; he sounds more like a blind evangelist for dynamic typing.

> static type systems yield much more heavily coupled systems. And that a big part of that time aspect of the final diagram of what problem we're trying to solve, is dominated by coupling when you're trying to do maintenance, right? Flowing type information is a major source of coupling in programs. Having a de-, you know, pattern matching of a structural representation in a hundred places in your program is coupling, right?

His arguments about coupling seem silly to me, especially since in FP you usually like generalisations and abstractions that allow you to delay specialising to a specific type until quite late. Take all the different typeclasses that exist in Haskell, and things like Monads, Applicatives, etc. that allow such general abstraction.

> Names dominate semantics, a to a, list of a to list of a [talking about `foobar :: [a] -> [a]`], it means nothing, it tells you nothing

It tells so much! It tells us that it takes in a list of any element type, and that the only operations it can perform are ones that alter the structure of the list (replicate elements, drop elements, etc.), with no operations on the elements themselves!

> How many people like UML? How many people have ever used a UML diagramming tool? Right? It's not fun, right? It's like, "no, you can't connect that to that", "oh no, you have to use that kind of arrow", "no, you can't do this", "no, you can't...", it's terrible. OmniGraffle is much better, draw whatever you want. What are you thinking about? Draw that. What's important? Write that down. That's how it should work, right?

I chuckled a bit here, since Hickey had just talked about Simon Peyton Jones in the previous slide, so I'm assuming he at least had Haskell in mind somewhat. If Hickey is seriously comparing Haskell's type system with UML, he needs to go back and take another serious look at Haskell.

> Yes, IntelliSense is much helped by static types and performance optimization, which he didn't list, but I think is one of the biggest benefits. We loved that in C++. And maintenance, I think it's not true. I think that they've created problems that they now use types to solve. Oh, I pattern-matched this thing 500 places and I want to add another thing in the middle. Well thank goodness I have types to find those 500 places. But the fact was that thing I added, nobody should have cared about except the new code that consumed it and if I did that a different way I wouldn't have had to change anything except the producer and the consumer, not everybody else who couldn't possibly know about it, right? It's new.

Honestly this just comes off as incredibly stupid, and Hickey quite clearly shows he's never programmed in a language with a type system more powerful than C++'s or the like. One of THE BIGGEST advantages of a powerful type system is exactly maintenance. He makes the straw man that you have to pattern match every place you lead a type through in your code base, and that you can't "not care" about certain parts of the constructor, but that is just so incorrect. He was half onto the point about the compiler telling you all the places you'd have to refactor, though. And IntelliSense being the only thing he seems to come up with as a valid argument for types... I think that says more about his understanding of types than anything.

I'm gonna leave the rest of the comments to the discussion I posted at the top, else I would just be rehashing it.


As someone who really likes Haskell, I've found the response to the talk from Haskellers, like in that thread, really disappointing. The common refrains indicating that Rich doesn't understand Haskell, types, etc. are patronizing and likely incorrect. (I realize he's trying to hit a few targets, from C++ to Java to Haskell in one go, so it's not always clear which he's complaining about.)

The other response I see is that if he were only aware of feature X (mostly row polymorphism), then that solves his issue. Often feature X is some immature Haskell extension, or exists in research or still niche languages. I don't think switching to Purescript is going to solve more than one of his issues, if even that. And the last thing I've seen is a bunch of folks trying to torture the crap out of Map to prove him wrong (not really) about some offhand point or another.

By and large I've seen a lot of (pedantic) sniping at specific phrasings without much attempt to grapple with the larger points.

The casual dismissiveness of users of dynamic languages (programmers use them because "dynamic types are easy" says one commenter -- about a talk from a guy who has famously thoroughly dissected the notion of "easy.") along with the inability to actually engage with the broader ideas has kinda turned me off the Haskell community.

Lastly, it seemed clear to me from the video and transcript that he was saying performance optimization, not (just) IntelliSense, is a clear win for static over dynamic types. And "IntelliSense" is kinda just shorthand for static analysis generally. Again, the point is not that types don't have benefits, it's that those benefits come with costs. I think Haskellers often underplay the costs associated w/ dealing with the type system, underestimate how little reach the type system has in an open system where data's flying around arbitrary services, and overstate how much preventing internal inconsistency bugs solves all problems.

I like Haskell, OCaml, F#, etc. I think they make a lot of hard things simpler and help me reason about certain programs better. But they're not panaceas.


> As someone who really likes Haskell, I've found the response to the talk from Haskellers, like in that thread, really disappointing. The common refrains indicating that Rich doesn't understand Haskell, types, etc. are patronizing and likely incorrect. (I realize he's trying to hit a few targets, from C++ to Java to Haskell in one go, so it's not always clear which he's complaining about.)

I can see why you would feel that if you hadn't seen Hickey's talk, but having seen it, he was honestly quite patronising himself towards static typing, so no wonder he would risk getting some of the same tone back. That said, I don't think it is far off to say he doesn't understand Haskell; either that or he deliberately ignores the solutions that Haskell offers to the problems he's complaining about. Also, one commenter from the thread mentioned,

    I do know from interviews that he's done (MS Channel9, i think it was) that he has at least a passing familiarity with Haskell as of 5-10 years ago, but that's a completely different beast to what exists now
And he himself mainly focuses on having been a C++ programmer.

> underestimate how little reach the type system has in an open system where data's flying around arbitrary services

There are certainly times when dynamically typed programming is nice and all, but I feel your statement is quite disproven by, e.g., something like Haxl and Facebook's spam filter, which is an incredibly large-scale open system, unless we have different definitions of that.

Finally, I agree Haskell and the like are not panaceas, but when a person goes out with incorrect/invalid points talking down about the effectiveness of a system, when in fact the users of it would agree it is highly effective, I feel like it does no benefit to the community to simply let it stand just because the person speaking is someone kinda famous.


> I feel like it does no benefit to the community to simply let it stand just because the person speaking is someone kinda famous.

That's not really what's happening. As I explained in my comment, I've found the technical rebuttals by Haskellers to be lacking.

Yes, of course Haskell can be and has been used to make large, high quality systems. I can say the same for C++. That doesn't mean there aren't costs associated with those languages, and good reasons why someone might want to make different decisions about how to design a language. This is literally what the talk was about: why Clojure was designed the way it was. Not why Haskell is a bad language (it's a great language), but why Clojure was designed differently. Pretending like the type system has no costs and only benefits is not serving the Haskell community well.

> I can see why you would feel that, if you hadn't seen Hickey's talk

Not sure what this comment is. I've clearly seen the talk. This pattern of assuming someone who disagrees with you must have less information is off-putting.

What I'm taking away from Haskeller rebuttals is: Haskell has solutions to all your problems, if only you're smarter than Rich Hickey. This is not, to me, a compelling sales pitch.


> What I'm taking away from Haskeller rebuttals is: Haskell has solutions to all your problems, if only you're smarter than Rich Hickey. This is not, to me, a compelling sales pitch.

As one of the Haskell rebutters, let me give my point of view.

Firstly, "Haskell has solutions to all your problems" is a perfectly good overstatement of the case. No need to bring Rich Hickey's intelligence into it.

Secondly, Haskell gets you at least quite far towards Hickey's goals. Haskell does have solutions, or at least partial solutions, to all of the problems Hickey raises. Where the Haskell support is particularly weak, for example with row types, we've admitted that. Yet neither he nor his proponents here have shown any understanding of the Haskell (partial) solutions. If he or they had said "I know about all this great stuff, parametric polymorphism, generics, Dynamic, -fdefer-type-errors, ... but it still doesn't get you close to Clojure because of <specific reasons>" then he would be presenting a useful argument. As it is, all we can do is disabuse Clojurists of their basic misunderstandings of how Haskell works.

Am I being hypocritical? After all, I do not know the intricacies of Clojure. But I'm not the one up on stage being filmed making claims about a language I don't seem to be familiar with.


I get why the tone of the talk is ruffling feathers. If I were a serious Haskeller, I think I'd be a little miffed too. But I think it misses the forest for the trees, and I think, as I commented before, it's hard to understand the context of his issues without understanding his interest in systems. So I get why a lot of rebuttals have been focused on his somewhat glib representation of certain features, but it's still a little frustrating, because I don't think it's a particularly interesting debate.

I also don't think he referred to Haskell specifically at any point, and really just spoke about algebraic type systems generally. It wasn't in the scope of the talk, and I don't think it'd be a very interesting talk, to compare Clojure and Haskell features. I bet he thinks Haskell is a great language. Clojure takes a lot of inspiration from Haskell: default immutability, core seq functions that look like Data.List; STM, etc. There probably wouldn't be Clojure without Haskell. His whole point is that types, like any other design feature, come with costs. They can be quite heavy and constraining compared to their benefits in certain contexts, and that may not be worth it.

That being said, I don't think it's a compelling rebuttal to say: "if you use Dynamic and -fdefer-type-errors, Haskell addresses his issues". You'd be run out of town writing Haskell code like that.

Re. parametric polymorphism, he explicitly talks about parametricity, and his take seems to be that he doesn't find parametric types that useful for conveying information or guaranteeing interesting things (to him) about your program. I think he's exaggerating, but I get that it's a response to a lot of breathless advocacy about how informative type signatures are.

Again, regarding his tone in the talk, I get it. But I think this should provide Haskellers a good opportunity to examine how casually dismissive they are of other languages, especially dynamic ones. IME, statically typed FP proponents are much more dismissive of dynamic languages than dynamic language proponents are of types. It's often "Your language is unsound garbage for lazy programmers" vs. "Sometimes the type system becomes an overly-complex constraint on my problem."

As someone who does like types, I'm nonetheless glad that there are folks designing sound dynamic languages and arguing for their usefulness.


Part of the reason why I personally was so upset by the talk was that it felt as if there was no room for discussion or debate on the points raised. In addition there was what felt like a lot of sniping towards features of statically typed languages that felt designed just to get a reaction from the crowd. The fact that there was an entire slide designated to tearing down a series of videos by SPJ felt not only irrelevant, but also disrespectful. There seemed to be a lack of willingness to meet halfway and concede there was anything useful from the other side.

Perhaps most frustratingly, I know that RH is capable of much better, much more informative presentations. There might have been something worthwhile in here, but the tone, style, and majority of the content didn't make it worth digging out in my opinion.

Regarding Haskellers' attitudes, I'll add that I haven't seen anything like what you describe at least on the Haskell subreddit. It could be happening in other forums but by and large it's been a welcoming community even to those that come in skeptical.


It's a keynote talk, not a panel discussion. Most keynotes are expressions of strong opinions.

> The fact that there was an entire slide designated to tearing down a series of videos by SPJ felt not only irrelevant, but also disrespectful.

From the transcript:

"Simon Peyton Jones, in an excellent series of talks, listed these advantages of types." ... "And I really disagree just a lot of this. It's not been my experience."

How is that tearing down or disrespectful? I get there were a lot of glib bits in the talk, but as you point out, he's talked about these issues with more nuance at other times. It's a shame that hyper-focus on a couple of thrown-off jabs at the costs associated with types is distracting folks from the very useful larger point he's making about levels of problems in programming, contexts of programs, and how languages that impose strong opinions about how to aggregate information can be counterproductive.

I think the Haskell community is, overall, very good and welcoming, but smugness does creep in a lot, IME. But if you want to talk about meeting halfway, I find that it's much less common to see static FP folks concede any benefits of dynamic languages (besides that they're "easier" in a kind of condescending way).


> but it's still a little frustrating, because I don't think it's a particularly interesting debate

I agree this level of debate is not very interesting. Hickey did not give concrete examples. He caricatured a strawman Haskell rather than presenting Haskell as it is, that is as a language with at least partial, if not complete, answers to most of his complaints.

I've asked here about a very specific example that could enlighten the debate and I hope I'll get an answer:

https://news.ycombinator.com/item?id=15599451

> I also don't think he referred to Haskell specifically at any point

Ctrl-F "Haskell" finds three places he explicitly mentioned Haskell (and as it happens in all three places he was promoting misconceptions about Haskell). He also quoted Simon Peyton Jones. This talk contained a significant anti-Haskell component and it's clear he doesn't understand Haskell.

> I bet he thinks Haskell is a great language

I don't, given what he said about it. He talks dismissively that you have to "prove" things to the compiler, that pattern matching is a terrible idea, that some programmers think 'that "correct" just means, I don't know, "make the type checker happy"'. I don't think he likes Haskell at all. But that's fine, because he doesn't understand Haskell at all either.

> That being said, I don't think it's a compelling rebuttal to say: "If you use Dynamic and fdefertypeerror, Haskell addresses his issues." You'd be run out of town writing Haskell code like that.

Firstly, I didn't claim that Dynamic and -fdefer-type-errors address his issues. I said that unless he mentions them explicitly he's not demonstrating sufficient familiarity with Haskell's dynamic capabilities to be critiquing them.

Secondly, maybe you'd be run out of town for writing Haskell code like that, maybe not. If that level of dynamism genuinely adds business value as Clojure proponents are claiming in this discussion then why on earth wouldn't you write code like that?

> But I think this should provide Haskellers a good opportunity to examine how casually dismissive they are of other languages, especially dynamic ones. IME, statically typed FP proponents are much more dismissive of dynamic languages than dynamic language proponents are of types. It's often "Your language is unsound garbage for lazy programmers" vs. "Sometimes the type system becomes an overly-complex constraint on my problem."

That's an entire other discussion. Maybe Haskellers need to "examine how casually dismissive they are of other languages". Everyone and every community should be self-reflective. But I hardly think self-reflectivity is inspired by people promoting misconceptions about you. If anything it would be a Haskell apologia that promotes misconceptions about Clojure that should inspire the Haskell community to self-reflect.

> As someone who does like types, I'm nonetheless glad that there are folks designing sound dynamic languages and arguing for their usefulness.

Likewise. I just don't want them to spread disinformation and I'm not surprised when that disinformation is challenged.

Personally I've got a lot out of this discussion. I agree wholeheartedly with Hickey's criticism of "place oriented programming" and I'm a proponent of adding row types to Haskell. I've also opened my mind to reconsidering the benefits of programming with `Dict String Variant` or `Map String Dynamic`. I've worked in a place that used that style and I don't think it worked well! But I will turn it over in my mind a few more times.


Like I said, I get why the talk is ruffling feathers, and I think it's fair to feel he's being glib or exaggerating; he is. I've also seen Rich and others discuss these point with more nuance and detail elsewhere so I think I'm willing to interpret them in that more nuanced context.

Re. the Haskell mentions, they're all relatively offhand and usually combined w/ mention of another language, like C++ or Java. (I think this also doesn't help outside understanding, since he's trying to talk about disparate static type systems in one fell swoop.)

Re. Rich's feelings about Haskell, there are not only the obvious language influences I cited, but also this very explicit response in the Clojure mailing list (https://groups.google.com/forum/#!msg/clojure/DUKeo7sT4qA/TU...)

"There is no purpose to a message like this.

Everyone, please refrain from language wars. Haskell is a tremendously ambitious and inspiring language, for which I have the highest respect. There is zero point in bashing it (or its users) in this forum.

Thanks, Rich"

Again, I see a lot of what looks like "bashing" to you as a response to an over-fetishization of "correctness," and over-hyping of types. I mean, how many times do you think Rich has heard someone complain that Clojure doesn't have types?

It's also nuts to keep saying things like "he doesn't understand Haskell at all."

Re. dynamism in Haskell, I still don't see this as a compelling point. Those things are unidiomatic and not really comfortable to use in Haskell. I mean, you can put type annotations all over your Clojure code, use Typed Clojure, and spec and schema the crap out of your program and get all kinds of safety guarantees. I wouldn't make that an argument that Clojure can solve all your type safety concerns; it's just not a comfortable way to write the language. You can write functional code w/ immutable values in C++ and Java, but nobody does it, because the language makes it hard.
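
For reference, this is roughly what that a-la-carte checking looks like with clojure.spec (a minimal sketch; the spec names are hypothetical):

    (require '[clojure.spec.alpha :as s])

    (s/def :person/name string?)
    (s/def :person/age pos-int?)
    (s/def ::person (s/keys :req [:person/name :person/age]))

    (s/valid? ::person {:person/name "carol" :person/age 41})  ; => true
    (s/valid? ::person {:person/name "carol"})                 ; => false (missing :person/age)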

I raise the point about dynlang dismissiveness from Haskellers because I think, in large part, the tone of the talk is a response to a long history of Clojure being criticized and dismissed for being dynamic. I've used both Haskell and Clojure a lot, been in both communities, and really don't think this talk was the opening salvo of glib dismissals.

I think we both agree the sniping about pattern matching, etc. is not really that interesting, and my sense from the talk is that this is really a very secondary issue. (Indeed I think he makes the point that many features are good if you can get them a la carte, but having the language impose them on all your data can be costly.) It would be great to see Haskellers and Clojurians discussing bigger picture problems that go beyond language features, because the two communities share a lot of values.


I just wanted to say that your tone and approach in this whole discussion have been utterly pleasant, especially considering these are topics that usually get quite a lot of people up out of their seats :) Props for that!


Yes, thanks to everyone for keeping it informative and civilized!


That message to the Clojure mailing list is very reassuring, thanks.

> It would be great to see Haskellers and Clojurians discussing bigger picture problems that go beyond language features, because the two communities share a lot of values.

Yes, it would be great. The Haskell and Clojure paradigms share too much, and their communities are too small, to have infights. We've got the whole mutable OO world to oppose, for example!


The one point about coupling in type systems is the names.

While some languages (TypeScript comes to mind) have structural typing, languages like Haskell tend to have nominal typing. So even if your data _looks like_ {'amount': 100, 'currency': 'USD'}, because you've wrapped it in a "Money" ADT, it's now treated as a different type.

Now sometimes this is valuable! There's a reason we have "newtypes" in a lot of these languages. But other times you end up having to do a lot of wrapping/unwrapping for little reason beyond the type system.

There are a lot of compiler-related reasons for this stuff as well, but if you work a lot in duck-typed languages you can end up getting used to writing stuff that works across data types without any fuss.


I can certainly see how that would bring frustrations, especially if you come from a dynamically-typed language (heck, even if you start out in Haskell it'll annoy you). It's worth noting, though, that row polymorphism/structural typing also exists in Elm and PureScript, and there are some libraries in Haskell[0] that attempt to solve it using various approaches (admittedly not as nice as first-class support).

I guess my larger problem is that he uses it as an argument against static types, whereas it's a solved problem in several existing type systems that have implemented row polymorphism.

[0] https://www.reddit.com/r/haskell/comments/5p307s/row_polymor...


I've worked with PureScript and agree it helps solve this, though I am willing to cut Hickey some slack for not being super aware of row polymorphism.

It's been an academic thing for a while; only fairly recently has it shown up in languages with real use, even very small ones like Elm.

I also feel like a lot of these tools still have some difficulties. There are some conflicts where you can't declare typeclasses on pure Object records in PureScript (instead having to go through a wrapper type).

Still, I agree that forward progress is happening, and it's interesting.


> > Names dominate semantics, a to a, list of a to list of a [talking about `foobar :: [a] -> [a]`], it means nothing, it tells you nothing

> It tells so much! It tells us that it takes in a list of any element type, and that the only operations it can perform are ones that alter the structure of the list (replicate elements, drop elements, etc.), with no operations on the elements themselves!

Yeah, this is really instructive about how little Hickey understands about good type systems.


>I mean, why was I unhappy as a programmer after 18 years and said, "if I can't switch to something like Common Lisp, I'm going to switch careers". Why am I saying that? I'm saying it because I'm frustrated with a bunch of limitations in what I was using.

>So, when I discovered Common Lisp, having used C++, I said that, "I'm pretty sure to the answer to this question is, 'yeah, absolutely'". And can we do that with a lower cognitive load? I also think, "yes, absolutely". And then the question is, "can I make a Lisp I can use instead of Java or C#?". Cuz you just heard my story, and I used Common Lisp a (?) couple of times, every time it got kicked out of production, or just ruled out of production, really not kicked out, it didn't get a chance. So I knew I had to target a runtime that people would accept.

Poor Rich Hickey: he wanted to escape C++, Java and C# and use Common Lisp instead, but his employers wouldn't accept it. Thus he invented Clojure, and now he can't escape Java, because, sadly, Clojure itself and its libraries are very coupled to the Java libraries and the JVM. Clojure makes it really easy to invoke Java classes and methods; thus, rather than creating a comprehensive Clojure standard library, the choice was to mostly just use the Java libraries. As a result, migrating to another VM (like the CLR runtime) means that a large amount of the Clojure code that runs under the JVM no longer works.

Clojure is very coupled to the JVM even in its choice of terms; for example, an "atom" in Lisp has a very different (and traditional, established) meaning than an "atom" in Clojure, which is directly tied to the java.util.concurrent.atomic package.
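
For the unfamiliar, a quick sketch of what Clojure means by atom:

    ;; A Clojure atom is a mutable reference updated atomically
    ;; (backed by java.util.concurrent.atomic on the JVM), not the
    ;; traditional Lisp "atom" (a non-list datum).
    (def counter (atom 0))

    (swap! counter inc)   ; => 1
    @counter              ; => 1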

Mind you, I am glad Clojure exists; next time a customer wants a project that should (or needs to) be heavily integrated with existing Java/JVM code, it would be my first choice; in fact, I pitch Clojure as the #1 alternative to Java. But the coupling to Java and the JVM is a problem.


I'm not a fan of Java, but the JVM is an incredible piece of software that centuries of man-hours have been spent optimizing. And while Java might not be a pleasurable language, the enterprise masochists have written millions of lines of battle-tested code in libraries. The deployment and monitoring model for the JVM is very well understood, which means Clojure is uniquely positioned to be introduced into large organizations.


I agree with you (and I'm surprised at people downvoting my post). Thus, I repeat: Clojure is an excellent (and for me, preferred) alternative to Java on the JVM. However, I would have liked to see it progress into a language where the same code could run without changes on many platforms -- at least the JVM, CLR, LLVM, and if possible the JavaScript engines.


People have tried, and failed, countless times to create thick abstractions over runtimes in order to allow code to easily port across them. They always fail, normally because they become too slow due to the added indirection and lost opportunities to optimize using platform-specific features. Moreover, it's not even that useful; how often do you want to move a large application from the JVM to the CLR? The code-sharing capabilities between Clojure and ClojureScript exemplify a better middle ground: you write .cljc files and use reader conditionals to write platform-specific code when necessary.
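
A minimal sketch of what that looks like in a .cljc file (function name hypothetical):

    ;; now-ms compiles against the host's clock API on each platform.
    (defn now-ms []
      #?(:clj  (System/currentTimeMillis)
         :cljs (.getTime (js/Date.))))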


> Clojure itself and its libraries are very coupled to the Java libraries and the JVM. .... Clojure is very coupled to the JVM

This is not true. ClojureScript has been a very productive language for me and for many others.


I never said Clojure wasn't productive; I'm pointing out that it is too tied to the underlying platform.

ClojureScript is an excellent replacement for JavaScript; however, code is not 100% compatible between Clojure and ClojureScript, for the reasons cited in the paragraphs above (plus the limitations of JavaScript).


You're right that code is not 100% compatible between Clojure/ClojureScript/ClojureCLR. Clojure from the outset has embraced the fact that it's a hosted language and leverages that to provide easy interop with the host. Providing a way of writing portable Clojure* code is the motivation for reader conditionals.

https://clojure.org/guides/reader_conditionals

You may choose to view Clojure as inextricably tied to the JVM, given that a lot of new development on the language starts there and that it consciously breaks from some traditional Lisp terminology and syntax. I don't think that's fair, however, given that these are conscious decisions in an effort to make a more pragmatic language. The project itself wouldn't consider these limitations (e.g., host interop, divergent syntax, using "atom" for a concurrency primitive rather than an aspect of language/syntax), but strengths.


> Providing a way of writing portable Clojure* code is the motivation for reader conditionals.

Yes, this is a great idea. Reader conditionals are available in Common Lisp as well; however, in CL, unlike Clojure, there is a big (some would say huge) standard library available, so when you find reader conditionals in CL, it is usually because the code uses a very specific implementation-dependent feature like sockets or a C foreign-function interface. In Clojure, the language spec and 'standard lib' are small, so Clojure libraries often rely on Java libs for many tasks. This can often prevent Clojure code from running straight away on ClojureCLR.
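To illustrate (a sketch; the namespace is hypothetical): typical library code leans on the JDK directly, so it loads fine on the JVM but not on ClojureCLR, where the equivalent would be System.IO interop instead.

    (ns mylib.io                       ; hypothetical library namespace
      (:import (java.nio.file Files Paths)))

    ;; Reads a file via java.nio -- there is no host-neutral
    ;; equivalent in clojure.core, so this namespace cannot load
    ;; on the CLR without a rewrite.
    (defn file-bytes [path]
      (Files/readAllBytes (Paths/get path (make-array String 0))))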

More than reader conditionals, Clojure needs a "batteries-included" standard library if it is to become a language that can produce portable code.


Certainly, having more portable libraries available is a good thing. With the adoption of reader conditionals, people are increasingly writing portable (granted, mostly clj/cljs) library code. I think it's fair to say that the many years the CL ecosystem has on Clojure contribute, in part, to the larger CL standard library, as do different language goals. You acknowledge that CL also uses reader conditionals for implementation-dependent features: one difference between Clojure and CL is where the line is drawn as to what counts as implementation-dependent.

Clojure is 10 years old. According to Wikipedia, work on CL started in 1981, 36 years ago. It's not my goal to convince you that Clojure is equivalent to Common Lisp. I do think you're being unnecessarily harsh in your characterization of Clojure with respect to Common Lisp, faulting it for features that the designer consciously chose to embrace, not ones that arose through lack of thought. Having a preference for a language is great. Let's try to keep things in perspective, however. I'll sign off now, as this looks increasingly like a language war rather than a reasoned discussion.


No, I'm not trying to start a language war; I just want to point out the direction Clojure should follow to keep improving.


Keeping Clojure close to the host language was a design decision made at its inception. Choosing the JVM and JavaScript as the target runtimes (for lack of a better term) has given Clojure/ClojureScript a huge amount of reach, and I credit these decisions as the primary driver of the languages' success. Saying the languages are too tied to the underlying platform misunderstands their goals. If you want a self-hosted language, there are plenty of options to choose from, and each simultaneously carries the benefits and disadvantages of a platform built from the ground up.


CL has had ABCL and Parenscript to integrate with Java and JavaScript fairly nicely for a long time, so I don't see why Clojure having the same abilities should be the primary driver of its success, except maybe long term. Clojure piggybacking off of Maven definitely helps, though.

To me it was more about finding a nice middle ground of immutability that is better than most languages' defaults and not as strict as Haskell's; having nice lazy, sequence-oriented abilities for data processing; and offering a fresh syntax for traditional Lisp code (even if it's arguable whether it's "better"), with nice (and normal wrt other modern languages) data literals for vectors and maps out of the box. In other words, I would have tried it and probably liked it without all the JVM stuff; I think it hooks a lot of people for similar reasons (i.e. its actual features, not just its runtime platform), and the JVM integration just adds momentum and can propel it into enterprise situations that traditionally reject anything that's not Java or C++. CL has even more great features, but because it's so old it can't draw new users through the "hip new thing" status that languages like Clojure have enjoyed, and if you have native deployment available, chances are you're going to use it and not always consider a JVM option, which in turn leads to never taking Lisp into the enterprise world.
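For anyone who hasn't tried it, a tiny sketch of those hooks -- data literals and lazy sequence processing:

    ;; Literal maps and vectors, immutable by default:
    (def user {:name "Ada" :langs ["clojure" "cl"]})

    ;; Lazy, sequence-oriented processing over an infinite
    ;; sequence; only the five needed values are realized:
    (->> (range)
         (map #(* % %))
         (filter odd?)
         (take 5))        ; => (1 9 25 49 81)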


There's a .NET Clojure too: https://clojure.org/about/clojureclr


Notably, according to GitHub, it's had precisely one substantial contributor in the 8-year life of the project.


True, but he's maintained the heck out of it. It's more or less a 1:1 translation of the Java -> .NET bits. Any pieces implemented in Clojure are just copy-pasteable at that point.


Don't feel sorry for him for his use of the JVM. He seems quite the fan, as it sits neatly with his views on code dynamism.

He's less supportive of the CLR runtime, calling it an example of static thinking, though he doesn't elaborate.


Every time I try to start with Clojure I find the syntax jarring and go back to Lisp. Clojure is not about me though, is it? ;)



