Haskell: The Bad Parts, part 2 (2020) (snoyman.com)
258 points by anuragsoni on Feb 9, 2021 | hide | past | favorite | 170 comments


I'm enjoying the read. Having written a fair deal of Haskell, I'm really grateful for what learning Haskell taught me about programming at large. Snoyman has written amazing libs and astute articles, and I appreciate this series a lot.

My interest, though, is: what's next?

Can Haskell evolve? Can we move to a better standard lib? Here Snoyman has put forward a great effort by releasing his classy-prelude, but iirc he also stopped using it.

So what can be done? Could we come up with a new Haskell-spec version that fixes some of these, flips some pragmas to be on-by-default? I can imagine that laziness is not going out, it's too much at the heart of the language: or am I just assuming that? But besides laziness there is a lot to fix by just setting a new standard. It will help newcomers, and eventually even old codebases may make the jump.

Some part of this article talks about partial functions. To what extent can we do without them? Can we build a checker [1] for that and simply disallow them?

1: https://stackoverflow.com/questions/42151927/what-is-haskell...


Laziness - or more precisely, non-strictness - is the essence of Haskell. Haskell was created because there were a bunch of non-strict languages out there, and folks saw a need for one non-strict language to rule them all.

Often a defining feature of Haskell is thought to be its purity. Purity arose from non-strictness. Monadic IO arose from non-strictness (other solutions to IO were also tried.)

This is best discussed in a paper: https://www.microsoft.com/en-us/research/wp-content/uploads/...

I guess you could make Haskell strict by default...but why? It wouldn't be Haskell anymore. Non-strictness is the defining element of the language. In my view a strict-by-default Haskell is...something else. Maybe it's better for certain things. Maybe it takes things to the next level. But it isn't Haskell. Don't call it that.


I understand the historic context of laziness, and what it "brought" or "taught" us. But is it really that important now, or moving forward? I side with Snoyman on "strict by default would probably be better".

Sure, it's no longer Haskell then. But could there not be a language like Haskell that is strict by default? Possibly even on GHC?


Making Haskell strict-by-default would also break the rewrite engine which is the core of Haskell's high performance with functional combinators.

The whole Haskell project is about allowing programmers to write code that is declarative without making concessions for performance. Laziness makes this possible because it abstracts out evaluation order and thus allows extreme optimization by the compiler.
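
That rewriting is visible in ordinary library code. A sketch of the kind of rule the engine applies (module name made up; base ships similar, more careful rules for its own list functions):

    module Fusion where

    -- Tell GHC it may fuse two list traversals into one. Because laziness
    -- makes evaluation order irrelevant to the result, this rewrite is safe.
    {-# RULES
    "map/map" forall f g xs. map f (map g xs) = map (f . g) xs
      #-}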


I don't see how the rewrite engine would break if Haskell would be strict. Can you expand on that?


Idris for instance is strict "by default" but still supports laziness through the Lazy data type.


Forgot about Idris.

Elm, PureScript, OCaml (Reason/ReScript) and F# did come to mind.


Isn't that basically every other ML variant?


Not exactly. Syntax-aside, MLs in general: have better module systems than Haskell, no typeclasses, and aren’t pure.


I think what’s being proposed around “strict data fields by default” (or, alternatively, “some new concept that evokes strictness where it makes sense”) could let us continue to use Haskell and its laziness.

This isn’t really obvious to me but I think you could build a mixed reality without splitting a community up.


"strict-by-default" isn't really the proposal here.

Only "some things' laziness is more a footgun than a feature, maybe we should consider fixing those".

The foldl/sum/product example in part one is maybe clearer about that.


But once you have purity you don't need laziness - Idris takes that approach.

Implicit, pervasive laziness makes performance non-compositional and more or less impossible to reason about. AIUI the largest industrial deployment of Haskell uses a strict variant for this reason.


IMHO Haskell is missing three features that would really improve programming in the large:

1) better structural types. For example, polymorphic extensible records and variants. These could even be the basis for all algebraic data types. Current encodings have poor syntax and poor type inference (due to non-injective type families). The need to make all records nominal types really gets in the way when dealing with structured data. Python is the main competitor here; and so we could also just use strings and maps, but we can do better.

2) a better module system. Even Miranda had a better module system than Haskell. OOP has first-class modules.

3) a better commitment to backwards compatibility. There have been controversial and breaking changes to Haskell's standard libraries that have done more harm than good. This has likely damaged industrial adoption of Haskell. For example, my employer, a prominent Haskell sponsor, is stuck on a 7-year old version.

Unfortunately the above problems are difficult to retrofit for and so I think a new language may ultimately be needed.


You realize 3 would make 1 and 2 harder or less valuable? And that 3 would also have prevented a gazillion past 1s and 2s from being with us today?

I think breaking changes are highly underrated and people only shy away from them because the tooling expectations in all languages are so rock bottom.


> You realize 3 would make 1 and 2 harder or less valuable?

Yes that is why I think (1) and (2) need a new language.

> And 3 would have also prevented a gazillion 1s and 2s from being still with us?

I disagree. Most new features have been implemented as extensions that must be enabled with language pragmas. This is not the same thing as large breaking changes in the standard libraries.


Isn't constructing a new language the ultimate breaking change :-)


I don’t have a dog in this fight other than an interest in ML-family langs generally, and in Haskell’s pervasive influence everywhere I go outside of Haskell. But...

No not really, if your language is intentionally designed to be a foundation for other languages. So many lisps are trivially host languages for other languages at their core. Even JS is at this point, with the widespread use of Babel and various bundlers. Those languages themselves are seldom very different underneath that.


Making a "language laboratory" for typed languages (as opposed to FFI through lowest common denominators like C or some untyped, garbage-collected language) is an open problem. I think it can be solved, but until it is, it's wishful thinking to pretend a multitude of similar languages doesn't result in tragic fragmentation of a small community.


TypeScript seems to be doing alright? Maybe there’s something I’m missing but TS is basically a type checker on top of everything JS compiles to, including the whole Babel universe and compiler transforms that support macros and arbitrary AST manipulation. Even its standard config offers a multitude of similar languages.


Having a type language on top of an untyped one is fine (except for perf). It's having multiple typed ones that compose well and aren't just reskins of the same basic type theory that's the harder part.


Isn't any breaking change the same as creating a new language?

There will be two similar yet different languages in use until everyone migrates to the new version. This process could happen quickly or not at all.


Yes, but there is much more likely to be tooling to migrate in the same name / same community case.


Only if Haskell stops being maintained, which I very much doubt would happen. Therefore no one is forced into the change.


No one is forced to upgrade to a newer version either.

Now, perhaps the problem in your eyes is the old version isn't being maintained. But I see the opposite: maintaining two versions indefinitely is the problem. The Haskell community is not so big, nor growing so fast, that it wouldn't be catastrophic if e.g. half went to Idris and half stayed.

Making the changes we need to make, and then making it as easy as possible for everyone to migrate, I think is the best way.


I wish more people would realize this!


People have made languages to address these issues (including me), and even put them in critical systems, but it is very difficult to get people to sign on to a new language.


The practical problem is that backwards compatibility and making real improvements to the language are fundamentally at odds. There's just no way around it. If you can't break compatibility, you can't make meaningful improvements except for adding features and capabilities—and most of the problems in this article are about things that need to be taken away, not things that need to be added!


I was responding to "Can Haskell evolve?", I was trying to say "I think it can, but I would prefer it to evolve in a new language". The standard library is riddled with partial functions, like head and tail. We should not take them away and break thousands of projects, research papers, books and blog articles.


> We should not take them away and break thousands of projects

How about the middle ground of marking them as deprecated and not to be used in new code?


And I mean, they're perfectly suitable for futzing around in ghci. I think we need different preludes for different contexts.


Yes, deprecation is fine, as long as they are not removed.


> Can Haskell evolve?

Should it? Haskell has already shown an impressive ability to evolve. Evolution is how it got where it is today. Language design analogues of the blood vessels being on the wrong side of the retina and all.

If the goal remains to have an effective research language with a more-or-less unsurpassed capacity for evolution and pushing the state of the art in interesting directions, I'd say Haskell is still great just as it is. If, on the other hand, the goal is to have a productive industrial programming language that encompasses all of Haskell's great ideas, but without quite so many troublesome language warts, evolutionary vestiges, or {-# LANGUAGE EverythingInterestingThatsHappenedOverThePastThirtyYears #-}, it might be better to make a clean break.


GHC is a production-ready compiler (for Haskell) suited for industry use. I work on a growing, healthy team and we ship an application written in Haskell several times a day. It has decent tooling, which is improving, and the language itself is what makes it possible to tolerate the pace of those developments.

Haskell isn't a pure "research-only" language. I realize people like to point out that its original goal was research... however the nature of that research has changed over time. Take Galois' Crucible project, where they used advanced dependent types to write their security analysis tool [0]. They could have used research-grade dependently-typed languages that are much more expressive and easier to use... but they chose Haskell because they needed an industrial-grade compiler capable of producing fast code to run in a production setting. The kind of research being done in GHC and Haskell these days is bringing innovation and advancement in functional programming into industrial applications.

Every compiler is going to have warts, especially with people using it and depending on it for industrial use. That's a good thing. You could be like Lean 4, which makes no promises of backwards compatibility or consideration for industrial users in the name of staying purely a research language.

Although I agree that Haskell is impressive in its ability to evolve and grow! Linear types just landed in GHC 9.0.1 and many fine folks are improving the compiler to make way for dependent types. It's good stuff!

And to see languages like Java, C#, C++, and others pick up on the low-hanging fruit of FP languages is a sign that the paradigm is gaining popularity and adoption: ADTs, lambdas, type inference, pattern matching... Maybe in 20-30 years we'll see these languages adopting higher-kinded types, rank-n types, GADTs, and more?

[0] https://www.researchgate.net/publication/334751646_Dependent...


Growing, healthy Haskell team? *Checks notes* Oh, I already applied :)


Have you tried Digital Asset?


Yes, general rule is if there’s a Haskell job I’ve at some point applied for it.


I want to work in Haskell, do you have any other suggestions for companies?


Bitnomial, College Vine, Sentenai, Mercury, and Co-star are Haskell companies I've applied to recently. There's a different set if UK or EU is an option.


What options do you know about in the EU and UK?


IOHK, Lumi are two others


Thanks!


Thanks!


Has anyone that you know of started using linear types for anything interesting yet?


Is there anything of any significance written in Haskell outside academia?


Several large financial firms have significant Haskell codebases, plenty of random cryptocurrency startups are Haskell based, some of the Google TPU chips are/were written in Haskell/Clash, Facebook has a big Haskell codebase, Starbucks and Target use it, there are several national security/defense firms extensively using Haskell, etc. etc.

Just because you don't see a bunch of blog posts like "We rewrote our <useless product> in <language du jour>!" doesn't mean it's not being used. In fact, more serious companies tend to lean less heavily on the developer social media publicity circuit.


> Starbucks and ... use it

I have heard of Haskell used in companies mentioned by other comments, but I am surprised, pleasantly, that Starbucks uses Haskell.

Do you have a pointer (job posting, blog post, etc.) to Starbucks using Haskell? It could help me push for using Haskell by my employer, for example.



They also occasionally contribute libraries like Haxl [0] and HsThrift [1]

[0] https://hackage.haskell.org/package/haxl

[1] https://engineering.fb.com/2021/02/05/open-source/hsthrift/



Starbucks uses a system with a Haskell frontend and backend for generating their personalized offers. I have it on good authority that it's probably been responsible for more than a billion dollars in revenue lift for Starbucks.


This is new to me.


Starbucks themselves didn't use Haskell. They used a system built by another company that used Haskell.


Depends on your definition of "significance", but Cardano is written in Haskell, and its main smart contract language is also Haskell.

https://cardano.org/


CircuitHub is apparently built with Haskell. It's the most polished PCB manufacturing quote system I've seen, but "significance" is relative. (Plus I've never actually used their service, since it's not priced for hobbyist manufacture of a small number of boards.)


Pandoc comes to mind.


It might help if you define "of significance".


> Maybe in 20-30 years we'll see these languages adopting higher-kinded types, rank-n types, GADTs, and more?

it's been possible to express those in C++ pretty much ever since templates have existed.


Haskell needs its own Elixir: a language different enough to fix fundamental problems but similar enough to inhabit the same world. I'm imagining a language with its own frontend and some of its own features that could still reuse GHC (maybe from Core down) and share libraries freely in both directions, while also able to address some fundamental issues without being constrained by backwards compatibility.

I personally love Haskell: I think it's in a great place as a language (even if it isn't wildly popular) and on a positive trajectory. But I definitely feel the same pain points Snoyman brings up, and maybe some others. (Template Haskell >.<) I don't see any way of fixing many of the biggest issues if we care about backwards compatibility at all, and a language with 30 years of existing code can never get away from that. I also don't think the community could weather a Python 2 → 3 transformation (much less Perl 5 → 6). So the only path I see towards simultaneously keeping what's great about Haskell and improving all that isn't would be another language that can coexist with it, just like Elixir and Erlang.


PureScript would almost fit :-). It would just need a modern Haskell backend, but having a strict language built on a lazy one is probably not the best idea :-).


You can avoid lazy behaviour entirely in Haskell if you want to, so your PureScript-to-Haskell compiler could just generate (admittedly contrived) Haskell code that doesn't exhibit laziness.


Idris maybe? If Idris had easy interop with the Rust ecosystem, we'd have a great backend (and with wasmer possibly also frontend) option.


I was also thinking of Rust as a backend for an FP language just the other day. I found some OCaml bindings for Rust. It would be interesting to see how that fits.


or like Perl and the Raku Programming Language https://raku.org :-)


GHC already supports strict-by-default

https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/stri...

Obviously making it the default would be backwards incompatible. But, as long as laziness remains supported and simple to use on a per-case basis, it doesn't actually seem like that big of a change to me.

Also, the fact that GHC supports so many extensions actually gives a good path to updating the language. Since extensions are enabled on a per-file basis, in most cases old libraries would just work with new-Haskell as long as the build system knows to build them in old-Haskell mode.
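
For the curious, a minimal sketch of what that per-file opt-in looks like today (module and names made up):

    {-# LANGUAGE Strict #-}
    module Example where

    -- Under Strict, constructor fields and let/where bindings in this
    -- module are strict by default; laziness can be asked back with ~.
    data Pair = Pair Int Int      -- fields behave as if written !Int !Int

    f :: Int -> Int
    f x = let ~lazy = error "never forced" in x + 1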


I worked with Haskell for a few years and have a book on immutable data structures (Okasaki) on my bookshelf here. From what I remember, with immutable data structures strict-by-default is actually a pretty bad default (please correct me if I remember wrong). I can only imagine this is one of the reasons Haskell is lazy by default (seriously, it's more than just that they wanted to try this crazy thing).

Just enabling strictness everywhere in Haskell isn't just inherently better. Laziness is actually quite good (and the OP even says this to an extent). It's more nuanced than it seems.


Data should be strict, logic should be lazy. It's the programmer's job to use judgment to decide which parts of which data structures are data vs logic. One heuristic is that anything recursive should be lazy, and the rest strict

    data Tree = Leaf !Int | Node [Tree]
and if you really need a lazy int in there, delay it yourself using partially applied functions.
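
In a strict-by-default setting, "delay it yourself" might look something like this (a sketch; in today's lazy Haskell the plain field would already be lazy):

    data Tree = Leaf !Int            -- strict data
              | LazyLeaf (() -> Int) -- hand-rolled thunk for the rare lazy case
              | Node [Tree]          -- the recursive spine stays lazy "logic"

    forceLeaf :: Tree -> Maybe Int
    forceLeaf (Leaf n)     = Just n
    forceLeaf (LazyLeaf f) = Just (f ())  -- evaluated only on demand
    forceLeaf (Node _)     = Nothing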


Yea, I agree with this, and it's a good pattern when you know the type is "Int" or some other primitive. But Haskell thrives on type variables, and data structures are usually defined generically, making it more complicated.

From what I remember, to get around this one would use a type class to define a data structure's operations, and then provide specific implementations of that data structure for specific strict data types. But that is more complicated than just having the compiler apply strictness everywhere.


Having just read that book, it's not that laziness is bad or good for data structure performance. It's mostly that you can't use a data structure designed for one paradigm with the other and expect the same performance characteristics. Laziness can make things faster (I think the CatenableQueue was a good example of this), but it can also make things slower (take your pick of examples).

In all I guess I'm not disagreeing with you at all, I just wanted to provide a bit more context and get some more use out of that book.


For small amounts of data, strictness is better than laziness. There is a similar lesson even in non-pure languages: copying small amounts of data is almost always faster than whatever fancy scheme you use to avoid copying.

For large amounts of data in an immutable structure, laziness is pretty much essential. The theory behind strict-by-default is that most of the time, you are not dealing with large amounts of data; even if you have pointers to large data, or pointers to pointers of large data. Even though laziness is vital to a pure language, you only need to use it in a relatively small number of places.


Wait, why is laziness vital to large data structures? Even with strict functions on strict but persistent data structures, you almost never copy the entire structure.

For example a strict function on a strict persistent list that swaps the head of the list doesn't have to copy the entire tail of the list.
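
Concretely (a sketch):

    -- Replacing the head shares the old tail; nothing is copied,
    -- laziness or no laziness.
    swapHead :: a -> [a] -> [a]
    swapHead x (_:rest) = x : rest  -- rest is the original tail, shared
    swapHead x []       = [x]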

The thing that laziness enables is short-circuiting, so e.g. a fold over the list can bail out early if necessary. But this is actually a minority of use cases for large data, since we often bound the size of data structures with something like a `take n` function for some integer n, which only processes n elements in a strict language.


The OP addresses your question, saying he is not advocating for removing laziness, but does have some specific suggestions.

> I'm not advocating for removing laziness in Haskell. In fact I'm not really advocating for much of anything in this series. I'm just complaining, because I like complaining.

> But if I was to advocate some changes:

  * Deprecate partial functions
  * Introduce a naming scheme for partial functions to be more obvious
  * Introduce a compiler warning to note partial function use (with a pragma to turn off specific usages)
  * Warn by default on partial pattern matches
  * Advocate strict data fields by default
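
The first three bullets could arguably be prototyped today with GHC's existing WARNING pragma. A sketch (module and names hypothetical):

    module SaferPrelude where

    {-# WARNING unsafeHead "Partial: crashes on []. Prefer pattern matching or Data.Maybe.listToMaybe." #-}
    unsafeHead :: [a] -> a
    unsafeHead (x:_) = x
    unsafeHead []    = error "unsafeHead: empty list"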


Change all partial functions from returning a to returning Maybe a. Then we rename the fromJust function into the $%&!# operator (the name intended to suggest the exclamation made by the programmer when the function fails).
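
Amusingly, that name is (perhaps surprisingly) legal Haskell, operator symbols being what they are. A sketch:

    import Data.Maybe (fromJust)

    -- ($%&!#) (Just 42) == 42; ($%&!#) Nothing blows up at runtime,
    -- presumably to the accompanying exclamation.
    ($%&!#) :: Maybe a -> a
    ($%&!#) = fromJust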


If StrictData were simply made the new default (with exceptions for lists and some other things in the standard lib), I bet very little code would break. It certainly wouldn't change the flavor of the language. I think it's a good idea.


Keep Haskell as a research language imho, but perhaps create an industrial-oriented subset of the language, similar to Ada’s SPARK [1]. Make it strict if necessary, flip whatever pragmas support an industrial use case, etc., and create a separate brand but which still references Haskell.

[1]:https://en.wikipedia.org/wiki/SPARK_(programming_language)


>Some part of this article talks about partial functions. To what extent can we do without them? Can we build a checker [1] for that and simply disallow them?

You don't need an advanced checker for partial functions - the link you use is discussing proving termination, but total vs partial functions just requires checking whether the function is defined for all arguments, which is easy and something that GHC can already do.
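
For example, GHC's coverage checker (-Wincomplete-patterns, included in -Wall) flags the missing case here (warning wording approximate, function name made up):

    headish :: [a] -> a
    headish (x:_) = x
    -- ghc -Wall: Pattern match(es) are non-exhaustive
    --            In an equation for 'headish': patterns not matched: []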

And to be honest, I'm not sure we can do without. Partial functions and `error` (or `undefined`) are necessary escape hatches for the programmer who knows something the compiler doesn't. Partial functions should probably not end up in any APIs (and definitely not in the Prelude), but they are still an important part of the language.

Only the introduction of dependent typing has the possibility of really eliminating partial functions for good.


The issue is not "undefined" and "error" but accidental partiality.

`head` absolutely doesn't know something the compiler doesn't, because there is no guarantee the user is using it correctly!

The fact that https://doc.rust-lang.org/std/vec/struct.Vec.html#method.pop returns Option<T> really says it all. We're not trying to be perfect, we're just trying to be no worse than Rust, which shouldn't be a high bar at all.

I wrote https://github.com/ghc-proposals/ghc-proposals/pull/351 to address just these sorts of issues, and not the harder ones people confuse them with.


> The issue is not "undefined" and "error" but accidental partiality.

I don't understand what you mean by this.

>`head` absolutely doesn't know something the compiler doesn't, because there is no guarantee the user is using it correctly!

But the user can use `head` when they know that the list they are handling is non-empty (but then they can also opt for `Data.List.NonEmpty`). I never said the partial function itself carried any knowledge - only that it was a necessary escape hatch for cases where the programmer has the knowledge to use it correctly.

>We're not trying to be perfect, we're just trying to be no worse than Rust, which shouldn't be a high bar at all.

I'm not opposed to that - and I don't see why you would think that I am. I'm just arguing that, in response to the question made by the other commenter, we can't "do without" partial functions entirely. That doesn't mean I believe they belong everywhere, and certainly not in most APIs.


> I don't understand what you mean by this.

I think the GP is trying to say that the escape hatches are not the problem. "error" and "undefined" are fine (we can check that undefined is not in production code, and error, well, could be implemented as a "die" or "halt").

But head, though... It should return a Maybe-wrapped value! And as you say, the other option is non-empty lists, a.k.a. "(a, [a])".

> only that it was a necessary escape hatch for cases where the programmer has the knowledge to use it correctly.

Well the docs do not present head as an escape hatch out of Haskell's iron type system! It's documented rather innocently [1] (search for head). Unlike unsafePerformIO [2] and the like, which have whole epistles detailing their danger.

> I'm just arguing that, in response to the question made by the other commenter, we can't "do without" partial functions entirely.

Please consider escape hatches differently from basic stuff like head and you'll see we argue for the same end result. head is plain bad, and should be removed or annotated as the smelly thing it is. maybeHead and non-empty collections are the way forward there, would you agree?

1: http://hackage.haskell.org/package/base-4.3.1.0/docs/Prelude...

2: https://hackage.haskell.org/package/base-4.14.0.0/docs/Syste...


>Please consider escape hatches differently from basic stuff like head and you'll see we argue for the same end result. head is plain bad, and should be removed or annotated as the smelly thing it is. maybeHead and non-empty collections are the way forward there, would you agree?

I don't get what you're trying to say. I never said that head was good. The discussion about head is a sidetrack from the original question, which is "To what extent can we do without [partial functions]?", and to which my answer is "we can't do without". head is not a good example of a partial function, and its existence doesn't conflict with the idea that good partial functions are a necessary part of the language.


Do you think there's any hope of the Haskell community moving to Idris? It's been years since I did either, but I think Idris is just a Better Haskell without several of the issues brought up.


For small programs, but not in general. Haskell has far more language features and libraries.


> Can we move to a better standard lib? Here Snoyman has put forward a great effort by releasing his classy-prelude, but iirc he also stopped using it.

He mentioned https://github.com/commercialhaskell/rio in the 1st article, it's interesting, I wasn't aware of it. (I am using classy-prelude but I might try it out.)


FYI there was a proposal for a "Haskell 2021" set of language extensions (similar to Haskell 98 and 2010), which was approved. This enables a bunch of extensions (and I guess they will become the default): https://github.com/ghc-proposals/ghc-proposals/blob/master/p...

That being said, while there are alternative preludes, I'm not sure how or when (or ever) the current prelude will be changed or replaced.

In terms of language evolution, the community seems to focus on further extensions of the type system (linear types, quick look impredicativity, dependent types), where I think there is a bigger need for better tooling. Of course, there too, lots of things are happening, like haskell-language-server, ghc-debug, etc. But still lots of work to do. Would be great if more people worked on modern tooling and improving GHC. And even better if more companies would fund that =)


I have a proposal in the works (https://github.com/ghc-proposals/ghc-proposals/pull/351) for the stupid partiality issue.

As for the library ones:

1. Rust's std is better now, but will eventually sink to the level of base as idioms improve, because neither language has a good process for making breaking changes to the standard library

2. Rust has the core/std split, which Haskell desperately needs. Vector absolutely should be a separate package, just the way alloc and hashbrown are separate packages in Rust land.

Trying to work on that too.

Now that Michael Snoyman thinks that exceptions in pure code are bad, and likes Rust, I wish he would think that exceptions in IO code are also bad, and we should use more EitherT like Rust.


> neither language has a good process for making breaking changes to the standard library

What language/ecosystem in your opinion does have a good process for making breaking changes? I'm curious as that's what I see one of the main remaining problems of language design, I don't really see any viable solutions but if there are some, I'd love to learn about them!


The solution is simple: Don't have standard libraries!

The compiler repository should just contain the absolute bare minimum of primops and whatnot; regular libraries should do the rest. Yes, there should be a batteries-included starter pack, and it can be bigger than most standard libraries even, but it should be maintained separately.

Ironically, C, with its terrible stdlib, gets this right. Neither GCC nor Clang is in the business of maintaining libcs. C does this for Conway's-law reasons, with kernel vs compiler competing for ownership, but still, it's a good result.

People squirm at the thought of this. I think the issue is not that this is inherently bad UX, but that Cargo/Cabal/etc. are simply too shitty. People's expectations of those tools are too low, and if those are fixed, the problems with this will evaporate.


Could you mention what you're lacking in cargo? I feel like it's not lacking any features, and it's really easy to create plugins for new functionality.

If your issue is that the Rust standard library is too large, that's not a view I've ever heard before. What I hear consistently is that packages that could be in the stdlib like serde, rand, regex are crates pulled in by cargo.


Ok, I understand your proposal. It definitely makes sense from the language evolution perspective, although to some degree it's a cop-out - codebases can still be stuck on an old version (of the non-standard lib) and have difficulties upgrading.

I use Python for my day job, and the experience there is illustrative - it has a stdlib which I will use with very strong preference, except extremely good non-standard libraries (e.g. numpy and requests). The ecosystem can still get stuck on that (2-to-3 transition was painful partially because of ascii/unicode distinction, and partially because of numpy taking time to migrate).

But in principle I agree! Disentangling language/compiler from stdlib is strictly better, as it allows the free market to take over, competition to flourish, and people to "vote with their feet".

> inherently bad UX, but that Cargo/Cabal/etc. are simply too shitty

More strong opinions :) I like that! What are some examples of good package managers in your opinion? Some people praise Node/NPM, and I'm personally quite happy with conda for Python, but then my needs are fairly vanilla and even I can see some pretty obvious improvements...


> I use Python for my day job, and the experience there is illustrative - it has a stdlib which I will use with very strong preference, except extremely good non-standard libraries (e.g. numpy and requests). The ecosystem can still get stuck on that (2-to-3 transition was painful partially because of ascii/unicode distinction, and partially because of numpy taking time to migrate).

I think this is a problem of Python a) having extremely poor dependency management and b) consequently favouring large frameworks. It's hard to migrate something the size of numpy from Python 2 to 3 in one go, but since it's a single giant library they didn't have any choice.


Doesn’t Rust mark deprecated functions as, well, deprecated?

Also, the edition system (with some small extensions) in theory enables removing stuff from future editions

Finally, there is incredible value in sharing common vocabulary for some core objects like basic data structures


> Doesn’t Rust mark deprecated functions as, wel, deprecated?

Yes.

> Also, the edition system (with some small extensions) in theory enables removing stuff from future editions

In the language, but not the standard library.


OCaml does this and it's moderately annoying.


Rust is making breaking changes in the language (called editions), but different versions of the language can use each other as libraries.

They use a common IR format, just like languages targeting the same VM.


Editions sharing std is good, but that also means this doesn't help with the need to change std at all.

You have to design the institutions assuming you will constantly be making mistakes and they will accumulate. Wishing away breaking changes is like pretending buildings don't need vacuuming and dusting. The second law of thermodynamics would like a word with you.


I like Clojure's process: thoughtfully design things to be simple, then commit to never breaking compatibility. It works amazingly well from what I can tell.


It's amazing and I love the language but I also did not see any great innovations around Clojure in the last years, after ClojureScript for example.


That is true for me as well. Every year I quietly hope that Rich Hickey will soon reveal a new breakthrough.

On the other hand, all the “new hot” features in many other languages have mostly been available as libraries in Clojure for a long time — there’s nothing specific so far that I strongly wish existed.


Yeah. Rust's standard library already has cruft (e.g., std::error::Error deprecations), and some potential sore spots (no custom allocators, Pin/Promise unsoundness: https://internals.rust-lang.org/t/unsoundness-in-pin/11311).

I do wonder if there's some way to hide parts of the standard library with the Rust "edition" mechanism. It'll stay there, technically, for backwards compatibility, but anyone who opts in to the new edition wouldn't see deprecated stuff exported from the std prelude.


> Pin/Promise unsoundness

Note that there was only a brief window between the bug there being reported and the fix landing. These days the only way to exploit any unsoundness there is via using unstable features, and as long as there's a soundness exploit those features certainly won't be stabilized.


Thank goodness. I didn't realize that the unsoundness was now contained to unstable features.


> I do wonder if there's some way to hide parts of the standard library with the Rust "edition" mechanism. It'll stay there, technically, for backwards compatibility, but any one who opts in to the new edition wouldn't see deprecated stuff exported from the std prelude.

Maybe in the future one could introduce pub(< ed2018) to indicate that an item is available only before edition 2018


This has been discussed, but it's got pros and cons, and is semi-controversial. We'll see.


Right, that’s what folks said the last time I asked about it. That’s why I put a maybe there :P


Custom allocators are coming soon.


Right, but there was some frustration expressed in the discussions (IIRC) because you can't break backwards compat. So you can't just pass in an allocator to functions that might allocate. I don't have any skin in the game and don't really care how custom allocators will work (I'm glad the feature will exist, though).


> And if I write a "Rust: The Bad Parts", believe me, I'll be mentioning panicking.

This is why `panic=abort` should be the default, and the whole unwind-and-try-to-keep-the-world-sound path should be opt-in. Then panic is truly like `assert` and I'm guessing most of his objections would be gone.

My guess about default-panic behavior being unwind is Rust's origins in the Servo project. When you're part of a very large monolith that should try very hard not to crash (a browser), you will put some work in to try to make this unwinding okay. Yes, tests still want panic to unwind, but you could opt in to this too, or change the default in a `[test]` context, or a bunch of other things I'm sure smarter folks could argue in an RFC. But getting correctness right in prod should be goal #1 IMO, so it should bias toward abort.

For most places Rust is probably actually used today (server-side), crashing is the safer and simpler behavior, and things like lock poisoning are not things you need to reason through.

I know the article is about Haskell, so not trying to derail it, but I have a really similar Haskell -> Rust path in my background, so a lot of the rest of Michael's reactions here are just +1 for me. For example, yes, exactly this about partial functions.

And, IMO, laziness, which he hints at in this section. The default should be the other way. Nothing worse than an `error` that's fired in some unexpected place/time due to lazy evaluation, and some thunk landed somewhere technically correct, but infuriating. Trying to figure out what the heck is going on can be really challenging/frustrating (as of my prod experience in Haskell 8-11 years ago, not sure what's gotten better/worse since then in ghc-land.)

I learned a ton from Haskell, and am so glad I used it in depth for awhile (ditto ML). But these days, to actually build something I want to build with long-term value in mind, either individually or as part of a team, I just use Rust. I get most of what I loved about Haskell without the annoyances.


I'm 80% sure that the OP's problem with panicking isn't to do with the unwind/abort choice. If I had to guess, their actual complaint is about the fact that the index operator panics on out-of-bounds accesses. I say this with confidence because, other than that, the bottom type is extremely useful, partial functions are often extremely useful (sometimes you do just want to shut up the type system), unwinding needs to exist in order to properly support being a guest in a host process, and unwinding-as-default is the correct default for encouraging people to think about unwinding inside of unsafe code blocks.


I think it has more to do with the early language design than Servo per se; it took a lot of inspiration from Erlang. Green threads and “let it fail” were the error handling strategy.


> the whole unwind-and-try-to-keep-the-world-sound path should be opt-in

The unfortunate thing is that if this was the default even less code would be ready for it. The only way to make catching panics have any hope of working is to have it default-enabled.


Speaking as an outsider to Haskell, I have to say that while its core purely functional ideas are a little hard to wrap my head around, what daunts me the most is the incredible number of different ways there are to do everything, some recommended, some relics.

You have to ask so many questions when you start learning Haskell:

Should I use an alternate prelude? What string type should I use? Should I use lens? What package manager and build system should I use? What IDE plug-in works the best? What language extensions should I use? Should I make use of laziness, or try to avoid it? Are linear types a thing yet and should I use them?

And on and on the questions go... These aren’t questions that move the product forward, just an endless list of boring details to figure out.

I’d love a new version of Haskell with all the incredible power of GHC but without the standard library cruft. With all the best extensions picked out and on by default. With a wonderfully thought out stack and set of recommendations, along with a clear guide describing all of this, similar to what the Rust ecosystem has.

In short, Haskell to me seems like a playground of interesting ideas rather than a coherent ecosystem for building software. Which I think is true since it’s a research language, but that’s what stops me from using it.


I’ve been meaning to learn Haskell. I appreciate this post for being honest that there are warts and that there’s a body of “community” knowledge about “the right” and “the wrong” way to do Haskell that might not be immediately evident.

Anyway, a good read, even for an outsider to the ecosystem.


In case you're looking for a resource to learn more about Haskell, I would highly recommend http://dev.stephendiehl.com/hask/. I started recently to learn the language and tooling and found this guide randomly on Twitter, and it's by far the best codex of knowledge on the language I've seen so far. No bullshit, straight to the point, everything in one page (so easy to Ctrl-F around).


Easily the best reference to this I've found. I wish every language had one of these.


Amazing find, thanks for sharing.


This is worth a separate submission.


This is probably just personal taste, but I found the Haskell Wikibook [1] the single most useful resource when first coming to the language from a typical OO background.

It keeps things simple and practical, how to solve typical problems, etc, without getting overrun by theory.

Eg, it talks about 2 types of code, normal declarative and the imperative-like 'do' style, when to use them and how you can make them interact. Just accepting and working with that without having to understand monads etc reduces the cognitive load a lot. Once you're comfortable with it, it then goes through the underlying theory. I appreciated that. YMMV

[1] https://en.wikibooks.org/wiki/Haskell


Definitely check out the part one of "Haskell: The Bad Parts" which is more relevant to the beginner to average Haskeller: https://www.snoyman.com/blog/2020/10/haskell-bad-parts-1/


Coming from a math background, once I had the syntax sorted and could write a few simple file parsers in Haskell, “Category theory for programmers” helped me understand some of what the Haskell type system is trying to accomplish. (Endofunctors of the category of Haskell types). I didn’t and still don’t understand the template C++ in the book but still my favorite Haskell resource. Many of the examples are written in both Haskell and C++.

Caveat: I just started learning this last year.

https://bartoszmilewski.com/2014/10/28/category-theory-for-p...


I struggled for a long time to get past the basics presented in Learn You a Haskell for Great Good. Then someone on HN recommended Graham Hutton's Programming in Haskell, and it definitely cleared some things up for me. I recommend it!


I like this book as an intro: https://haskellbook.com/

(since others were recommending)


I’m a relative newcomer to Haskell. I still haven’t used it for anything serious, but I’d like to.

I had experienced all of these problems. Initially, I was attracted to Haskell for the promise of “if it compiles then it’s probably correct.” I quickly discovered that isn’t true, for the reasons discussed in the article.

But I also had issues with Cabal. I couldn’t get Snap to install. I tried installing it in a container, still didn’t work. I finally figured out something that would let me build with Snap, but for some reason LSP in Emacs couldn’t find the snap libraries, so it couldn’t provide me correct feedback. And then the build times. Wow. I gave up on writing that program in Haskell and wrote it in Go instead.

I think Haskell has a lot to offer. I’d be open to trying it again. Hopefully these shortcomings improve.


Haskell has a real tooling problem.

At this point, I do not allow any Haskell IDE engine to manage project data. Things work much smoother if you call cabal directly.

But that's actually a lowball. I don't let any Java, JavaScript, or Python (except Conda) IDE manage it either. And, of course, I long ago gave up on anything integrated for C, C++ or Perl.


This series of articles should be turned into a linter. I'm only half joking!


I'm not sure I understand the relationship between partial function and exception handling. Aren't partial functions just curried? One or more arguments are bound, but not all? At least in Python, if you partial-ify a function that raises an exception, it still raises. I don't understand if the author likes that behavior or doesn't. Maybe this is some Haskell implementation detail that I'm not aware of.

Last time I wrote any non-trivial Haskell was in 2014, so a long time ago, but I found that my biggest problem with it at the time was the really huge variety of Haskell in the wild. If you're doing simple stuff, you probably stick to the prelude and you'll be happy. But if you're doing anything that's a bit complex, you'll end up seeing hundreds of mini-dialects of Haskell in the wild, so much so that I found it really difficult as a newcomer to understand code on the net. In many cases it's almost like a different language completely, what with the user-defined infix functions, tons of currying everywhere, laziness, and the like, made it very difficult to follow code paths.


> I'm not sure I understand the relationship between partial function and exception handling. Aren't partial functions just curried?

You're thinking "partially applied function", which uses very similar terms but means something completely unrelated.

A partially applied function is a function which is applied to a subset of its formal arguments, yielding a function taking the leftover arguments.

A partial function is contrasted with a total function, and the term is about the relation between inputs and outputs, namely: does every possible input value yield an output? The example of `first` used in the essay is pretty common because it's quite clear: given `first :: [a] -> a`, what happens if you call `first` with an empty list? Well it can't succeed; it can't just give you an `a` out of nowhere because it doesn't have anything to build one from. So despite an empty list being a possible input value, there is no output for it: it is a partial function, it only partially maps its inputs to its outputs.

`first :: [a] -> Maybe a` would be total: in the case of an empty input it returns `None`, otherwise it returns `Some a`.
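
Side by side, the two unrelated notions (a sketch):

    add :: Int -> Int -> Int
    add x y = x + y

    addFive :: Int -> Int   -- partially *applied*: total and harmless
    addFive = add 5

    first :: [a] -> a       -- *partial*: no output for the [] input
    first (x:_) = x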


Ah thanks, I'm used to this being called a "complete function".


`Just a` / `Nothing` in case of Haskell :)


"Partial function" is an overloaded term. In this context it means "a function which does not map its entire domain to an output". This means that there are some inputs which return "Bottom", which happily gets propagated through the system until it is needed as input to some function and then your application explodes.

The downside here is that rather than blowing up your application immediately upon a bug it blows up your application somewhere else depending on your logic.

This can be a useful thing. You can write powerful and elegant algorithms that avoid error management because the bottom values never actually get used. But most people aren't doing that and instead these are time bombs.
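
A tiny example of a bottom that never detonates:

    -- take stops after three elements, so the error is never forced.
    firstThree :: [Int]
    firstThree = take 3 ([1, 2, 3] ++ error "boom")  -- == [1,2,3]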


> This can be a useful thing. You can write powerful and elegant algorithms that avoid error management because the bottom values never actually get used.

Agreed. This would be with some escape hatch function, maybe even from Unsafe.

But having head in Prelude, without a huge warning in the docs, without deprecation warnings, is just, well, not very Haskelly, I'd say.


When bottom is actually reached at that point of evaluation, does Haskell provide any indication of where it came from? A stack trace or something?


Partial functions are not the same thing as "partially applied functions". Partial functions means that not every element of the domain is mapped to an element of the range, for example:

    divTenBy :: Double -> Double
    divTenBy n = 10 / n
If you actually call the above function you get a runtime exception. We really don't like functions that do this; they are called partial.


Am I missing something subtle? Why would you get a runtime exception if you call this function?


The parent missed a part. If you call it with 0 you get an exception, because division by zero obviously.


No, you don't. You get Infinity. That's how floating point works.


In most languages you get an exception


Which languages would those be?


  $ python3
  Python 3.9.1 (default, Feb  3 2021, 07:04:15)
  [Clang 12.0.0 (clang-1200.0.32.29)] on darwin
  Type "help", "copyright", "credits" or "license" for more information.
  >>> 10 / 0
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  ZeroDivisionError: division by zero
  >>>


Although Python seems to be the exception to the rule here, your example isn't exactly right: you're performing integer division, not floating point division. In most languages, integer division by zero is indeed some kind of exception (or UB, which will most likely result in a hardware exception, causing a signal to be sent to the process). However, floating point division by zero is well-defined by floating point arithmetic itself (IEEE 754), and most languages don't throw exceptions in this case. Python indeed seems to be the exception to the rule.


In Python 3, 1/0 is floating point division. 1//0 would be integer division.


Right, Python2 is still the default on Debian, so I didn't notice.


I guess I'm thinking of Python mostly. I thought OCaml did as well but apparently not.


If you call it with argument 0 you get a runtime exception, because it results in a division by zero.

It's "partial" because it's not defined for 0.


If Double is IEEE 754 compatible then it should be perfectly defined for 0, you'd get +/- infinity. There are algorithms which rely on this.


So it should have been: Double -> Maybe Double

That would be the "right way" to fix this.

(or create a type NotNilDouble, lol)
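
A sketch of that Maybe version (name made up):

    safeDivTenBy :: Double -> Maybe Double
    safeDivTenBy 0 = Nothing
    safeDivTenBy n = Just (10 / n)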


Or, controversially, define 1/0 = 0.[1]

[1]: https://www.hillelwayne.com/post/divide-by-zero/


Enough controversy in defining std libs as it is :)


The "right" way for division by zero is controversial, though maybe that would be a solution.

It's clearer for the partial function "head :: [a] -> a", which takes the first element of a list if it exists, and explodes without dignity if the list is empty (this is what makes head partial).

A proposal is "head :: [a] -> Maybe a", so head returns Nothing when the list is empty.
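
The total version is a two-liner (essentially what Data.Maybe.listToMaybe already does):

    safeHead :: [a] -> Maybe a
    safeHead []      = Nothing
    safeHead (x : _) = Just x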




Ah, so partial functions aren't onto (i.e. surjective).


Not exactly, partial functions can be surjective, e.g.

    f :: Int -> Bool
    f 0 = True
    f 1 = False
    f n = f (n + 1)

is surjective onto Bool but also partial (it doesn't return for n > 1). In Haskell we say that when a function doesn't return, the output is bottom, written as ⊥.

You could say that a function f : A -> B is partial if f^{-1}(B \ {⊥}) is not all of A.


No, they are not functions in the mathematical sense. It's not that they don't cover the output space, they don't cover the input space.


>Aren't partial functions just curried? One or more arguments are bound, but not all?

No; a partial function is one that isn't well-defined for the whole of its domain. So, as per the article, head is a partial function because its type signature of [a] -> a implies that all lists have a head value. But head [] does not. It's a partial function.


What approaches and tools do Haskell developers take to guard against this? I assume in the head case a Maybe would be a better return type? But then why doesn't the Haskell core do that in the head function?


In my experience (which is quite dated at this point; my Haskell usage is back at the turn of the millennium), the usual approach was pattern matching: if you knew you were going to use a function that might not be defined, you would write an alternate case.
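
Something like (a sketch):

    -- Handle the case head would crash on, explicitly.
    describe :: [Int] -> String
    describe []      = "empty"
    describe (x : _) = "starts with " ++ show x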

The type system isn't helping you at all there.

The feeling I had was that much of the prelude stuff was there to provide for beautiful, terse examples of functional programming and less to protect a software engineer.


A few interesting things in this comment, thanks! What you say makes sense; it sounds like Haskell can't literally hold my hand for me, which is fair enough.

I've just started my Haskell journey and the undefined paths through partial function implementation caught me by surprise.

Just skimming Wikipedia it looks like I would want to use "total/strong functional programming" but apparently "total functional programming is not Turing-complete"

https://en.wikipedia.org/wiki/Total_functional_programming

Also found this on the Haskell site after some more googling:

https://wiki.haskell.org/Partial_functions


> Just skimming Wikipedia it looks like I would want to use "total/strong functional programming" but apparently "total functional programming is not Turing-complete"

It isn't, but usually Turing completeness isn't what you want. Take a look at Idris, where functions may explicitly be total or not-necessarily-total.


A partial function is a function where all inputs have an output. For example, calling head on an empty list will throw an exception. To make this a total function you’d need to return a Maybe instead.


*not all inputs


Exactly, meant to say that :)


I'm curious about the tweet that says

> True mastery of Haskell comes down to knowing which things in core libraries should be avoided like the plague.

and one of the examples is foldl. foldl is used in Racket and other Lisps; is Haskell's implementation poor? Or are there better alternatives? I admit it's tricky to grok at first

Nevermind

I read further down the page and my question was answered. Should have actually read the article!


Typically, you want to use the strict form of foldl, foldl'. The lazy version is susceptible to collecting a large number of thunks. See here: https://wiki.haskell.org/Foldr_Foldl_Foldl%27
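
A minimal illustration of the difference (a sketch):

    import Data.List (foldl')

    -- foldl builds a nest of unevaluated thunks ((((0+1)+2)+3)+...);
    -- foldl' forces the accumulator at every step.
    leakySum, strictSum :: [Int] -> Int
    leakySum  = foldl  (+) 0   -- can blow the stack on a large list
    strictSum = foldl' (+) 0   -- runs in constant space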


So if I'm a PHP developer that actively uses lazy evaluation patterns in his day to day that means I'm... best of both worlds? (Hint: I am!)

Lazy values are a very powerful tool in the context of responding to requests as an intermediary between a client and a database due to how neatly you can reduce your peak memory usage - a lot of web-stuff follows the basic pattern of

1) Accept request

2) Figure out query to send to DB

3) Send results to the user

Since results aren't actively scanned by the server in many cases, the goal of being able to pass data through without directly exposing any of your internal guts to the client is a noble one to pursue.


Yeah the critique about laziness here lives in a different context. Lazy eval is great, don't worry about it. :)


What does it mean for a type class to be "law-abiding"? I'm coming from Rust so I'm curious what the "issue" is with FromIterator.


I don’t know if there is an “issue” with FromIterator; the comment is just that it is ad hoc: there is no law for it to satisfy.

Law-abiding means that some equation holds, for example a monoid instance should be associative, so ((a <> b) <> c) should always produce the same result as (a <> (b <> c)). There is nothing checking that my implementation of <> abides by this law, but other programmers (and optimisers or compilers, maybe?) can make use of it, and might write incorrect code if my instance does not abide by the law.

Another example would be functors: it should be the case that (fmap f) . (fmap g) is the same as fmap (f . g), for all functions f and g that make sense in the equation (output type of g must match input type of f).
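
The functor law above is easy to spot-check as a property (a QuickCheck-style sketch; concrete functions chosen arbitrarily):

    import Test.QuickCheck (quickCheck)

    -- fmap f . fmap g should equal fmap (f . g) for any law-abiding Functor.
    prop_fmapCompose :: [Int] -> Bool
    prop_fmapCompose xs =
        (fmap (+ 1) . fmap (* 2)) xs == fmap ((+ 1) . (* 2)) xs

    main :: IO ()
    main = quickCheck prop_fmapCompose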


Wow. It's almost like Rust has 20 years of programming language development experience on Haskell.

(Note: Haskell and Rust are different languages, for different uses. Haskell has many advantages over Rust (some of which the author (and I) have probably complained about at some point). But many of the things that Rust does right and Haskell does wrong, Rust gets right because Haskell did them wrong.)


Not written in the same spirit, but this reminds me of the evergreen classic C++ FQA Lite. FQA of course stands for "frequently questioned answers".

https://yosefk.com/c++fqa/


The Good Parts book by Crockford had its focus on the language features (as opposed to prevailing conventions or failures in the standard library). Not sure if this lives up to that spirit 100%. But outside of that, great stuff! (:


As a long time Haskell user (since 1998 - my career started in 1989) and professional Haskell programmer, I should say that Snoyman is quite responsible for what I do not like in the Haskell infrastructure.

In 1998, Haskell was a joy to play with. In 2008, it was still a joy - I really played through ghci (the interpreter) implementing a MIPS core. And, later, implementing an eDSL that could compile a Haskell description of a MIPS interpreter into VHDL.

I also ghci'd my way through a VHDL compiler prototype.

Around 2009 or so there came cabal, a tool that "helps building large applications", which took all the fun away.

I had to manually download and put into local version control all the packages I needed for a project. I also mandated, being the team lead, that we not use cabal packages; we used stable sources of packages downloaded into our version control for easy inspection and problem fixing.

That was my last attempt to bring fun (not "fun" as in "functional", but "fun" as in "joy to play with") into my programming duties.

Then there was stack, by Snoyman. It purported to bring ease back into the cabal-based Haskell world of building applications. The problem is that ease does not equal joy. You can use a Roomba with ease, yet a Roomba mopping the floor gives you any semblance of joy only the first one or two times it works on your command.

That stack thing did not bring back the joy of using Haskell. It did not provide you with ALL the sources (only the interfaces package authors intended) for you to consume, play with and learn from. It was all that boring cabal all over again.

He was able to fix that - he had the power. He just does not care about learning from others' code, I guess.

Me, on the contrary - I do not have power there. But I have my goals set on bringing the joy back into Haskell programming.

And now let me somewhat relate that to the article.

The "partialness" of functions he refers to there quite often can be explored and learned through the REPL, and is often quite useful. I used partiality quite fruitfully in the development of the MIPS core I mentioned above - and yes, I was in full REPL control of all of my code.

For that fruitfulness, one needs not only compiled and installed packages, but sources. To see, to learn, to modify.

This is where stack by Snoyman fails as miserably as the cabal it wanted to replace.

One could see that article as a critique of Haskell. I see a failure to see and acknowledge one's own shortcomings.



