New Haskell Foundation to Foster Haskell Adoption, Raises 200k USD (infoq.com)
160 points by antouank on Dec 28, 2020 | 157 comments



This article is a little bit incorrect. We raised $400k+.


Isn't the best course of action to rewrite popular components in Haskell, and show that it's better? I'm thinking httpd/nginx/jetty, redis/memcached, postgres, and so on. You could even make a text editor. Or a kernel. Or anything, really.

I really like functional programming, but Haskell turns me off and without at least one major example of a beautiful, popular solution written in Haskell (and I don't think pandoc counts), I'm not going to make the effort to push through that barrier.

Other languages that occupy a similar space ("superior but tragically underused") have these examples. Erlang has...well, a lot, but Matrix and RabbitMQ come to mind. Clojure has...well, it has Datomic, but also Jepsen uses it, and heck, I've used it and it's fine.

The real question is: whose fingers itch to write Haskell, and can you please pay them $400k to rewrite nginx in it?


PostgREST is written in Haskell and quite popular: https://postgrest.org/en/v7.0.0/

Shellcheck has become a very important tool to many people writing shell scripts: https://github.com/koalaman/shellcheck

GitHub's semantic analysis is written in Haskell: https://github.com/github/semantic

I'm not sure why Pandoc doesn't count.

I think it is definitely the case that most examples of successful Haskell are technical programs of interest to hackers. I don't think that's much of a problem myself.


I think the key issue is not that successful Haskell programs are technical. I think it's that Haskell is not good for heavy-IO programs. I'm not sure how PostgREST works, but shellcheck and pandoc, for example, are really well suited for "IO on the edges" style: you feed bytes in once, and then at the very end you spit some other bytes out. Everything in the middle is parsing and transforming data.
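
In code terms, the shape is something like this toy sketch (the uppercasing transform is just a stand-in for real parsing/transforming):

    import qualified Data.ByteString.Char8 as BS
    import Data.Char (toUpper)

    -- pure core: all the parsing and transforming lives here
    transform :: BS.ByteString -> BS.ByteString
    transform = BS.map toUpper

    -- IO shell: bytes in once, bytes out once
    main :: IO ()
    main = BS.getContents >>= BS.putStr . transform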

If you're writing some boring CRUD app, or a GUI program that makes a bunch of syscalls, Haskell sounds like a giant pain in the ass.

From that POV, I think that Haskell's relative lack of popularity is perfectly fine. Not every language has to be maximally general.


I used to write a lot of Haskell and tbh I find the idea of restricting IO to the edges of an application to be just as important in traditional imperative programming languages. It's just good design, not a PITA.

It seems to be a common viewpoint that Haskell makes IO difficult. I personally found that it was more powerful because I was being more explicit about it. It also allowed higher-order functions to abstract common patterns in IO. The only difficulty in Haskell was not due to monads or type safety, but rather in getting my brain to understand lazy evaluation by default.
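
As a sketch of what I mean by abstracting IO patterns (the retry helper here is made up, but the shape is the point):

    -- an ordinary higher-order function abstracting a common IO
    -- pattern: retry an action until it succeeds or attempts run out
    retrying :: Int -> IO (Either e a) -> IO (Either e a)
    retrying n act = do
        result <- act
        case result of
            Left _ | n > 0 -> retrying (n - 1) act
            _              -> pure result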


I agree that keeping IO sequestered is generally good advice. And it's something that I also do in imperative and OO languages as a rule of thumb. But having the type system signal that IO is happening doesn't end up being useful in non-lazy languages, where purity doesn't actually help an optimizer.

I found this blog post to be kind of interesting: https://blog.ploeh.dk/2017/02/02/dependency-rejection/

Skim that and then take a look at the back-and-forth in the comments.

Sometimes going out of your way to write mostly pure (or just non-IO) functions can make your code more difficult to reason about. In my experience, you're sometimes left with two choices: either do a bunch of IO upfront that you might end up throwing away (if there is an `if` statement in your pure function that determines whether you need the result of some IO op), or pull out logic into the naughty impure edge of your program.

If you do the former, you're wasting performance. If you do the latter, you're not actually fixing anything that pure functions are supposed to help you with, because you still have logic in impure functions.

In my experience there are these edge cases where finding the cutoff between "the edge where I'm allowed to do IO" and the "pure business logic" is not that clear or easy.

Also, as the old saying goes: monads don't compose well.


Haskell doesn't force you to make the separation either (nor does it trivialize software design). You can write all your pure code in IO if that is what you'd prefer. I look at types as more of a way to communicate to others using my code as "here be dragons, this has side effects" or vice versa. The developer using your code doesn't just have to trust a comment block, they can see it directly in the types.
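
A tiny invented example of that communication:

    data Order = Order { total :: Int }

    -- the type promises: no side effects can happen in here
    validate :: Order -> Either String Order
    validate o = if total o > 0 then Right o else Left "empty order"

    -- "here be dragons": IO in the type warns every caller
    submit :: Order -> IO ()
    submit o = putStrLn ("submitting order totalling " ++ show (total o))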

I agree monad composition isn't amazing. When I was writing a lot of Haskell (I was working on real time 3d reconstruction algorithms from video feeds using camera input and OpenGL output) it wasn't a very large roadblock though. But effect type systems seemed like they might be cleaner.

I don't have time to read through that article right away but I saved it for future reference - thank you for sharing.


IO at the edges also gives your program the Functional core, imperative shell[1] design by default.

[1]: https://www.destroyallsoftware.com/screencasts/catalog/funct...


Most CRUD apps and UIs still have IO on the outside and pure/domain stuff on the inside.

Most CRUD server handlers basically boil down to: parse the incoming HTTP request, decide whether to fetch some external resources (db/cache/service), prepare a response, and reply to the client. That fits the IO-on-the-outside model perfectly.
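
Sketched in Haskell, with every name invented and the db lookup stubbed out:

    data Request     = Request { requestedId :: String }
    newtype Response = Response String deriving Show
    newtype UserRow  = UserRow String

    -- pure middle: every decision lives here
    render :: Maybe UserRow -> Response
    render (Just (UserRow name)) = Response ("hello " ++ name)
    render Nothing               = Response "not found"

    -- stand-in for a real database query
    lookupUser :: String -> IO (Maybe UserRow)
    lookupUser uid = pure (Just (UserRow uid))

    -- IO shell: fetch at the edge, decide purely, reply
    handle :: Request -> IO Response
    handle req = render <$> lookupUser (requestedId req)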

React and friends have shown that UI views are mostly pure functions. Local state and external resources can be fetched in some runtime hook and passed into your pure renderer.


For the most simple CRUD apps, there is almost zero business logic. So your whole application is basically IO. What's the point of Haskell's IO type when every function will have to be IO?

For the less simple apps, IO can be hidden behind interfaces for complex business logic. Here's an imaginary example:

    fun getPriceForShoppingCart(product: Product, user: User, getDiscount: (User) -> Percentage): Price =
        if (user.isPremium) {
            product.price * getDiscount(user)
        } else {
            product.price
        }
How do we put the IO on the edges? Does this function just disappear and we have an `if` statement in our impure outer shell? That smells like business logic in the impure shell to me. Do we go ahead and call `getDiscount` in the outer shell and pass in the discount to the function? That also sucks because what if we made that trip to the database when we didn't need to (because the user is not a premium user)?

I feel like every time I try to be religious about IO at the edges and not using impure dependency injection tricks, I end up running into a bunch of cases like this.

On the other hand, if I stand back and look at a function like this objectively... It's easy to understand, it's still quite easily testable. What's the problem?


This function is receiving a product and a user as arguments. Why would it do any IO?

Anyway, CRUD really does not lead to clear "is IO"/"isn't IO" boundaries, and the business logic isn't a good candidate for separating from IO into a pure core. Instead, it is a great candidate for separating into a "with business data" context that is neither simple IO nor pure.

In the implementation of that "with business data" context you will find plenty of opportunities to push IO to the boundaries of complex code that will become pure. But your intuition is correct; what you are missing is that the code can be broken down along more dimensions than the ones you are assuming.
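
As a sketch (tagless-final style is one hypothetical way to spell such a context; all names invented):

    newtype Percentage = Percentage Double
    newtype Price      = Price Double deriving Show
    data User    = User    { isPremium :: Bool }
    data Product = Product { price :: Price }

    -- the "with business data" context: code running in m can
    -- fetch discounts but do no other IO; neither raw IO nor pure
    class Monad m => HasDiscount m where
        getDiscount :: User -> m Percentage

    applyDiscount :: Price -> Percentage -> Price
    applyDiscount (Price p) (Percentage d) = Price (p * d)

    priceFor :: HasDiscount m => Product -> User -> m Price
    priceFor prod user
        | isPremium user = applyDiscount (price prod) <$> getDiscount user
        | otherwise      = pure (price prod)

The db trip only happens for premium users, and the type still rules out arbitrary side effects.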


> For the most simple CRUD apps, there is almost zero business logic.

Sure, which is why there is no reason to code them as unique applications at all. Instead, the domain isn't the business domain of a particular application but something far more general: you code something like PostgREST in Haskell, where the domain is mapping between database and API calls, solve the problem generally, and define specific applications with nothing but database schemas.

“Domain logic” is only “business logic” when you've decided you need a bespoke app built for a specific business problem, which makes sense when you have a complex business problem, but is simply wasteful when you have a problem for which a general solution already exists. It's like asking who is the best designer to go to for a one-off custom car for perfectly average daily commuting use: you've chosen from the outset an inefficient solution to your problem.


The Integrated Haskell Platform (IHP), open sourced at GitHub.com/digitallyinduced/ihp, is one example of how to do CRUD in Haskell very efficiently.


I think those that you listed are genuine success stories, but “technical programs of interest to hackers” helps very little with broad adoption.

Facebook also wrote parsers, analyzers, and linters in PHP, and those were deployed to production. PHP let you write these things, but also all of the cruddy stuff non-Hackers like to write.

I am by no means advocating the use of PHP (even modern post-fractal PHP), but where’s a boring application that Haskell is a success in? You know, a little game you could download and have fun with? Or a billing system?


Here's a "boring" haskell app: https://dill.network/ It's a vegan product scanner. You can find the story behind the project here: https://twitter.com/larsparsfromage/status/13211859357117358...

Here's a "boring" internal management system: https://twitter.com/fegundersen/status/1325534608079941633

Disclaimer: Founder of digitally induced, the company that helps make boring things with IHP+Haskell :)


This is great! Thanks for sharing.


As a PHP developer who wrote a nice parser to get strict schema and data validation out of XMLs whose structure was potentially fluid over time, I can assure you that PHP does not bring any ergonomic solutions to the table.

It's possible and even very probable that PHP devs... stole plenty of ideas from Haskell just to write "utility" libraries with which they could write those parsers/analyzers/linters you are talking about :)

However, I also can't see how PHP could be used to efficiently reimplement that spam library Facebook developed.

Their whole promise lies in separating the effort into two teams:

* spam engineers write pure code that happily merges data from different sources

* backend engineers write integrations with Facebook systems, plus optimization passes over the code the spam engineers wrote, to make sure the most optimal use of costly I/O is made

PHP brings nothing, literally nothing, to the table here, while Haskell has all the mechanisms already built in.

Your argument about PHP sadly boils down to "it's all Turing complete anyway".


We built an amazing web framework called IHP with Haskell; imagine the productivity of Rails combined with the type safety of Haskell: https://ihp.digitallyinduced.com/

IHP has around 230 monthly active projects at the moment, and it's growing fast. It was released 8 months ago and is already the second biggest Haskell web framework :)

I think Haskell has previously been tragically underused because it was missing good documentation and pragmatic solutions that help you get things done. At least in the web dev space, this has changed this year. Watch our IHP demo video and you'll get it: https://www.youtube.com/watch?v=UbDtS_mUMpI

Disclaimer: Founder of digitally induced, the company that makes IHP.


So if Erlang and Clojure are superior but tragically underused in spite of having some big projects/tools to show, how does it help Haskell to have one as well?


I think I understand the sentiment of the above commenter. I feel it with Lisp too.

There are amazing claims about how Haskell (or Lisp or Erlang or ...) give you these amazing superpowers and allow you to write immensely more correct programs immensely faster. That could be true (and I personally do believe it’s true). But then where are all of these immensely-correct fast-written programs? I don’t buy the pg-inspired secret-weapon lurking-in-the-shadows argument unfortunately.

In C, C++, Java, and Python—even Pascal!—it’s not even funny how many examples there are. It’s nearly limitless. The superpower languages struggle to come up with just a handful of examples.

What I personally observe in the Lisp and Haskell world is that people like to write purportedly useful “reusable” libraries, and nobody likes to write applications. A very select few are successful at these libraries (where success = broad adoption), but most of these libraries are intellectual games and puzzles that benefit almost zero “working programmers”. Lisp has plenty of libraries to do pattern matching (great!) but basically zero for putting up a native GUI.

(As a self-deprecating anecdote, my first instinct writing Haskell many moons ago was to create a library of all of the major abstract algebraic structures found in pure math. You can imagine how many people that library would benefit.)

A lot of this also has to do with not the language, but the tribal nature of programmers and their acceptance of other tools in their chain. The Linux kernel will never accept a patch written in Common Lisp, and a lousy middle manager will never let you write your domain logic in Prolog. Nobody wants to figure out how to integrate GHC into their CI/CD. Most people have little patience to figure out how to write good C code that integrates with Erlang’s FFI.

I’m being a little flippant in my description of the state of affairs, but I think the spirit of it is correct.

I think the commenter is right. If any of these languages will be successful, there have to be many, many successful programs written in said language where the language’s benefits tangibly pay off. Having one or two Hail Mary projects doesn’t really mean anything.


What's funny is that Rust, in its short life so far, is already popping up in important projects. Firefox, obviously. cURL now optionally uses Rust for... something. Big companies are using it: Microsoft, Dropbox, etc. Some Linux libraries (there was an image library for Gnome/GTK, IIRC). Even the Linux kernel may allow certain kinds of modules to be written in Rust.

But everyone still shits on "Rust Evangelists" when they bring it up. You just can't win! :D


I think the biggest reason for that is that Rust is fundamentally a systems programming language that is significantly more productive and safer than C/C++.


I don't think "more productive" is clear (in the sense of building software on less money). Measuring engineering productivity is basically impossible at this point. But it clearly eliminates a whole class of footguns that are tolerated in C++ just because we need a fast systems programming language.

This is easy to measure because it refers to properties of applications (performance and vulnerability-density) rather than properties of the development process.

Rust also seems to reduce the need for a ridiculous guru at your company to help keep programs correct. That is easy to identify and measure. Haskell doesn't seem to do this. If I imagine running a team of 80 people on a Haskell application vs a Java application, do I feel like I'll need fewer Haskell gurus? No.


>> What's funny is that Rust, in its short life so far, is already popping up in important projects.

Rust was sort of co-developed along with Servo with the intent to be used by Mozilla. For all the others using Rust, it solves immediate problems that people have been having for years. When you're going up the learning curve and fighting the borrow checker, you can rest assured that the pain is in exchange for eliminating entire classes of bugs. That is an unusual situation in the world of languages.


> there was an image library for Gnome/GTK, IIRC

librsvg is one.

I think it was originally written in C, and it is apparently approaching being fully Rust.

Rust claims very loudly, repeatedly, and apparently with honesty that "gotta go fast" and safety are language design goals and features. So I'd imagine even a developer who just wants to get their hands dirty with Rust would be willing to do battle with the borrow checker since there's a good chance they'll eventually hit those goals that they share in common with the language itself.

Does Haskell have "gotta go fast" as a central language feature? If not, that'd be a real gamble for an SVG renderer. After all, what's the use of a library that produces the output you'd expect, but also at a speed you'd expect for a kludgy, over-engineered, text-based vector image format?

> But everyone still shits on "Rust Evangelists" when they bring it up.

Yes, but the language forces them to shit on it for reasons other than "not being fast." :)

Edit: clarification


> cURL now optionally uses Rust for... something.

You can optionally choose to use a Rust-based HTTP stack.

https://github.com/curl/curl/wiki/Hyper


Well, yes, to become popular a language has to become popular. It's a tricky knot to cut, but we're working on it.


Even the way you state this I feel is being dismissive of an important aspect of reality. I don’t think popularity is the root issue. It’s not that it has to become popular, it’s that the people who use the language have to write useful things in it.

Almost zero people are writing lens libraries in C; almost no people are writing state-of-the-art asymptotically optimal Cool Tree data structures in C++.

Almost all users of these languages are writing an application that isn’t designed to serve the intellectual needs of the programmer themself.


My impression of Haskell is that (1) people who use Haskell as a hobby often write libraries to show off (2) people who use Haskell professionally write proprietary software.

One of the problems with languages like Haskell, Erlang, and Lisp is that they’re so flexible that you often don’t know what you’re getting into when you read somebody else’s codebase. You often see projects written in these languages with a small core team that understands e.g. SBCL and the various macro libraries they use to get work done, or Haskell and the monad transformer stack they use in their app, and it’s very hard over the long-term to on-board new team members and bring them up to speed at a rate which keeps the team healthy.

Maybe this is an inherent problem with “flexible/powerful” languages.

There are a few Haskell tools around that people do regularly use—Git Annex, although Git LFS seems to have edged it out in popularity, and Pandoc, which I’d say is powerful and irreplaceable.

Maybe zero people are writing lens libraries in C, but C does have a fair number of different string libraries around, exception / error handling libraries, and a bunch of people churning out terrible “single-header” libraries which only exist because average C programmers aren’t good at working with common, mediocre C build systems.


Hmm, well, it seems to me that if a language is not globally popular there will not be many jobs in it, so the things that get written in it will be those that are personally interesting to the authors. I don't think that necessarily speaks to the unsuitability of that language for personally uninteresting things.

I thought your point about bootstrapping to popularity was a good one. It's not easy but I'm convinced that Haskell is good enough that it's possible (in fact I think it's inevitable that either some Haskellish language will become popular or some popular language will become Haskellish).


>But then where are all of these immensely-correct fast-written programs?

Ah, the Fermi paradox of programming languages. I wonder what the appropriate terms should be for the Drake equation of language adoption? I definitely think there is a factor for users originally attracted to shiny new objects, who then discover shiny... Oooooh, Idris!


“Science advances one death at a time.” ;)


Haskell is very hard and enables you to solve hard mathematical problems precisely. When getting results exactly right matters, Haskell shines. Crypto nerds like it, bankers like it, hardware designers like it. Web devs don't like it, despite a plethora of demos showing how it's possible to use Haskell, because being exactly right just doesn't matter there and isn't worth the extra effort that Haskell mandates.


First, I've made no pretense of speaking for Haskell's interests; I am speaking strictly about my own wishes. While I think our interests are aligned, I'll let you be the judge. As a curious developer interested in the language, one who has had a years-long low-level FOMO about it, I would like to be able to wade in and do something with it, as I did with Clojure and Erlang, confident that I could, in the end, make it do stuff (since others had already made it do stuff first).

I do really honestly believe that Haskell needs a "paragon project" like Matrix or Jepsen that has proven itself useful to the world without respect to its Haskellness, and then show that this solution was much easier in Haskell.

I keep getting down-voted on this thread, and I expect it to continue. Friends thought I was nuts to give up on Lost after season 1, or The Hobbit after movie 1, or the GoT books after book 3. After years of hearing much talk and seeing no walk, I'm giving up on Haskell. More than that, I'm going to call out anyone talking it up to give specifics, because literally everyone I've ever spoken to about it holds it in very high regard but has never, ever written a line of it.


> literally everyone I've ever spoken to about it holds it in very high regard but has never, ever written a line of it.

It's fine to make silly hyperbolic statements in a language war, but not when you are trying to present yourself as the voice of reason.


> Isn't the best course of action to rewrite popular components in Haskell, and show that it's better?

No, the best course of action is to solve problems that haven't been solved, not to do marginal reinventions of the wheel.

A killer solution definitely helps a language ecosystem take off, but killer solutions are rarely simple reimplementations of an existing tool. Occasionally they are in a recognizable existing category; sometimes they are category-defining.


Why doesn't pandoc count?


I'm not OP but I have only a basic knowledge of Haskell, and after giving it my best effort, I wrote my own parser rather than modifying Pandoc. If Pandoc didn't sell me (user of many languages over many years, a true believer in functional programming, and someone that writes solo projects for personal use) on Haskell, I don't think it can be done. The Pandoc source is literally just a big blob of uncommented gibberish to someone that's not already a dedicated Haskell programmer.


"show that its better"

I've been thinking a lot about this, and I think it is better addressed by using the same standards as in other fields, namely a scientific study. I imagine an environment like repl.it, with just a simple text editor, a problem description, and a set of comprehensive tests that serve as a spec. Then we measure time to produce a solution and the number of times the solution makes the tests fail (bugs). With enough people doing this, and in enough languages, we might be able to reach some conclusions. Until then, we will be stuck in the dark.


There was a study done by Yale/US Navy/DARPA back in the 90s [0] but it was much more limited than you're describing.

[0]: https://www.cs.yale.edu/publications/techreports/tr1049.pdf


I see a lot of comments on the usability of HS in prod. We @ juspay.in use it to power the bulk of our UPI payment transactions. We also use PureScript to power our payment page SDKs.

API (CRUD/Auth) code written in HS becomes a beauty once you start building experience with HS. I think the biggest advantage of FP comes from how it changes the way you model your solution to a problem. With a strict type system it's easier to anticipate edge cases, and with currying, building abstractions becomes natural.

Having said that, HS is not all sunshine. It took me an inordinate amount of time to set up an IDE-like environment. If I recall correctly, HLS would take > 20GB of RAM within a few hours with our code base. Eventually I had to remove it and extensively use only the editor features to jump around code.


I've been exploring this more for my next project. Do you think using PureScript on both backend and frontend is viable?


The headline is a bit too generic. This organization is being founded with the support of Simon Peyton Jones and has corporate backing. It appears that the intent is to focus on pain points in the Haskell toolchain and libraries.


That is great news. The Haskell toolchain could use some help. I wonder if they'll work on documentation and error messaging too. I think people are scared off when they realize that some of the libraries have no docs and you just have to read the type signatures.


Definitely agree with you that more documentation, especially with examples, would be great.

However, reading just the types of a well-written Haskell API probably gives me greater insight into its usage than most documentation I've read on npm packages (to name an example I'm familiar with).

There are so many very general patterns and idioms in Haskell being used all over the place that reading the signatures in the API docs can be extremely valuable.
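
For instance, traverse's actual signature already tells you most of the story:

    -- walk any structure, run an effect per element, collect results
    traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)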


I program roughly equal amounts in Python and Haskell and every time I read Python documentation I wish it were as good as Haskell's. Having type signatures goes a long, long way to understanding an API. Python replaces that with rambling, unclear, prose. The same goes for Python's standard library and popular libraries like flask, pandas and matplotlib.

(Haskell's direct links to source are also good, but there's no fundamental reason Python couldn't have those. I've never seen it though.)


> It appears that the intent is to focus on pain points in the Haskell toolchain and libraries.

Good. I set myself the challenge of compiling a Haskell program [1] during the Christmas holidays. It was meant to be a "one mince pie" challenge, but after an hour I discovered the VM I used didn't have enough RAM (during compilation we were approaching 4GB), then I ran out of disk space as stack approached 5GB and I had other stuff installed. Once a few hours had gone by (this program isn't fast to compile) I had a working program. I now have to figure out if I can distribute just the resulting binary to other servers, or if it needs other software like GHC installed. Having finished the pack of mince pies, that can wait for another day.

I know when I first started compiling C/C++ software there was a learning curve and it took hours the first time, but I found it easier to get started. With Haskell, the way one version of GHC is installed first and then Stack installs a completely isolated version is confusing; plus the inscrutable error messages (I haven't got it to hand, but one means OOM without saying so - it takes a Google to find the GitHub issue to work that out).

And this is before I try and experiment/decide to learn some Haskell. Apart from the error messages they're not issues with Haskell per se, but they contribute to the experience of it.

1. https://github.com/facebook/duckling


When I started, I took the command I used for compiling c programs and substituted a letter.

    gcc -Wall -O3 -o ./main main.c

    ghc -Wall -O3 -o ./main main.hs
I still compile that way every now and then.


Good to know, thanks. The README says to use stack, so I've stuck with that so far. I'll look for instructions for using ghc.


I upgraded my personal VPS from 1GB to 2GB RAM when I couldn't compile some Haskell standard library modules for a simple project.


To me, the major unsung pain point with Haskell, and any non-strict language, that makes it unsuitable for most applications is its inability to use dynamic libraries as we're all used to.

For a Haskell implementation to have any efficiency, it must reorder execution inside function calls as it sees fit, which it can do thanks to the non-strict semantics. This is fine in theory, but dynamic libraries aren't built on that assumption; they're, naturally, built on the idea that they are a black box: the consumer simply calls the function, obtains the result, and does not care about nor has access to the insides.

A Haskell implementation must have knowledge thereof, thus, the practical effect is that with Haskell it's impossible to, say, fix a security problem in a dynamic library and simply have all consumers seamlessly benefit from that fix — every consumer must be recompiled with the new library's source.

And this problem is very seldom raised in discussions — dynamic libraries are fundamentally designed on the assumption of strict semantics.


I'm afraid I don't get the connection between non-strict semantics and dynamic linking at all. As far as I know they are completely orthogonal. I didn't follow your reasoning. Could you expand?


They're very related.

Due to Haskell's non-strict semantics, if a dynamic library written in Haskell is recompiled, then any Haskell code linked against it must also be recompiled.


Why is this a necessary consequence of lazy execution?


I'm having trouble grasping that. Why would non-strict semantics imply that code must be recompiled if a dynamic library changes?


With non-strict semantics, function execution is re-ordered and turned inside out.

For instance let us say we have pseudocode:

   let list       = [1,2,3,4,5];
   let new_list   = do_something(list);
   let newer_list = do_something_else(new_list);
Both these functions consume and return lists. In a strict language, this process of iterating over the entire list internally in these two functions would happen twice; this would be wasteful.

A non-strict language, however, is permitted to take the side-effect-free code apart and restructure it so that the list is only traversed once, essentially combining these two functions into one and running what each does successively. For this purpose it is permitted to generate a new function that combines what both do, iterating the list only once, and optimizers are typically clever enough to do so.

Suppose that both of these functions were library black boxes, provided by a dynamic library; then the implementation can no longer do this, as it calls both from a library.

The way Haskell is written is completely dependent on this; it would be awfully slow if the compiler weren't allowed to do this.

The result in practice is that updating a dynamic library written in Haskell requires that all its consumers be recompiled against the new version. Or otherwise said: there is no concept of a stable a.b.i. in Haskell.


In practice, Haskell doesn't do this.

    import System.IO.Unsafe
    import Prelude hiding (last)

    last :: [Int] -> Int
    last (x:[]) = x
    last (x:xs) = unsafePerformIO (do
      print x
      return (last xs))

    secondToLast :: [Int] -> Int
    secondToLast (x:y:[]) = x
    secondToLast (x:xs) = unsafePerformIO (do
      print x
      return (secondToLast xs))

    main = let
      list = [1,2,3,4,5,6]
      x = last list
      y = secondToLast list
      in print (x + y)
The above code outputs

    1
    2
    3
    4
    5
    1
    2
    3
    4
    11
indicating that it traversed the list twice, fully completing one traversal before starting the next.


Yes, it doesn't do it there on optimization level 2; however, it does do it here, for instance:

    import System.IO.Unsafe

    debug x = unsafePerformIO $ print x >> pure x

    map1 = map $ debug . (+ 1)
    map2 = map $ debug . (* 2)

    main = let
      list      = [1,2,3,4,5,6]
      newlist   = map1 list
      newerlist = map2 newlist
      in print newerlist
The output of this is:

      2
      4
      3
      6
      4
      8
      5
      10
      6
      12
      7
      14
      [4,6,8,10,12,14]
Showing that it was smart enough to traverse once.

If `map1` and `map2` were exported by some library, it would be close to impossible to speak of maintaining stable a.b.i.s for both these functions without the compiler somehow being able to turn them inside out and traverse only once.


I don't see how strict vs. lazy has anything to do with linkage. I guess you are talking about foreign functions defined in a dynamic library. Calling a foreign function (regardless of linkage) will just evaluate the arguments and pass them according to the calling convention of the foreign ABI. If you link to the library dynamically and it gets updated (in an ABI-compatible way) you don't have to recompile anything; it just works as you would expect.


No, the issue does not apply to the f.f.i., because those calls are made with strict semantics, as C expects.

It's about dynamic libraries written in Haskell itself.

Try compiling a Haskell program in GHC with the `-dynamic` flag, and then update any of the Haskell libraries it is linked against, after this, the program will fail to start with a linker error, and must be recompiled itself.

https://www.reddit.com/r/archlinux/comments/7jtemw/which_pac...

See the discussion there, which explains why, in essence, dynamic linking is not a feasible route with Haskell. Arch Linux currently does this, or at least did at the time of writing, which leads to all Arch Haskell packages having to be recompiled and reinstalled if but a simple library they all use is updated.


Yes, but this doesn't seem to have anything to do with lazy evaluation, rather it is because GHC is quite close to a whole-program optimizer.


Optimizing lazy evaluation requires these kinds of whole-program optimizations.

Non-strict semantics would be prohibitively slow without these optimizations.

You won't find a Haskell compiler that doesn't do this.


Okay, you are right: GHC does not produce a stable ABI with dynamic libraries. But it could do so in principle; there's no problem with laziness itself. It would just be a lot slower without all the cross-module inlining, and so GHC chooses not to.

The Arch Linux Haskell packagers could just ship programs as statically linked binaries and not recompile them on each minor dependency upgrade (except when there is a security upgrade). The binaries would not have any dependency on Haskell packages themselves, you would not be forced to upgrade the whole package graph when some dependency is bumped, and that would also save a lot of download bandwidth. I'm not sure that anyone really needs Haskell packages (besides standalone programs) from the system package manager. No Haskell dev I know builds their programs this way.


> Okay, you are right: GHC does not produce a stable ABI with dynamic libraries. But it could do so in principle; there's no problem with laziness itself. It would just be a lot slower without all the cross-module inlining, and so GHC chooses not to.

It could do so only by sacrificing all but a modicum of performance.

Lazy evaluation is incredibly slow without these. The theory behind lazy evaluation is that it can gain back some of the performance it inhærently loses by allowing these kinds of aggressive whole-program optimizations, because code can freely be re-ordered as no code has side-effects.

Without taking advantage of this re-ordering, one is only met with the pœnalties.

Note that in Haskell very fundamental logical operations such as `||` and `&&` receive no special treatment from the implementation and are ordinary functions (and `if` could just as well be one, since laziness makes user-defined short-circuiting possible). If these were not allowed to be optimized as such, and they were provided by a library, then the code would be prohibitively slow and would essentially require that a thunk be passed at every instance of a normal logical construct.
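
For reference, the Prelude really does define `||` as a plain function, and laziness is what makes the short-circuiting work:

    -- the actual Prelude definition: short-circuiting falls out of
    -- laziness, with no special support from the compiler
    (||) :: Bool -> Bool -> Bool
    True  || _ = True
    False || x = x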

> The Arch Linux Haskell packagers could just ship programs as statically linked binaries and not recompile them on each minor dependency upgrade (except when there is a security upgrade). The binaries would not have any dependency on Haskell packages themselves, you would not be forced to upgrade the whole package graph when some dependency is bumped, and that would also save a lot of download bandwidth. I'm not sure that anyone really needs Haskell packages (besides standalone programs) from the system package manager. No Haskell dev I know builds their programs this way.

Indeed — no one does, because the unsung pain with Haskell is that it's impossible.

With strict languages one has the option of dynamic linking, and dynamic linking seems to have won out due to its convenient “update once; update everywhere” benefits — that is not available to non-strict languages, for which static linking is the only viable means of distribution.


Do you mean no dynamic lib support for Haskell code, separately from C libraries that can be dynamically linked?


There is support for it, but no support for a stable a.b.i.; as such, with every minor change to the library, nothing linked to it can run without a recompilation.

This is due to the non-strict semantics.

Consider that in a C library, or any other strict language, the function loaded in the dynamic library is called by first evaluating all of the arguments, and then calling the function by value with the evaluated arguments.

This is not how Haskell operates, where the arguments are, in effect, placed inside the function body directly and may or may not be evaluated. As such, a function can't be a black box that can arbitrarily change its internal workings so long as the outside a.b.i. remains stable — a program compiled against a dynamic library is fundamentally compiled against the inner workings of functions, rather than simply against the stable interface they proffer to the outside world.


The trend in the last 10 years has been away from dynamic linking and toward static linking; Rust, Go, Nim and Zig all have static linking as the default.


> Rust, [...] have static linking as the default.

Citation needed? Something like 20% of the reason I gave up on Rust was precisely that there was no documented way to force it to generate a static executable.


To clarify for Rust; Rust statically links other Rust code into your program by default. That's the relevant comparison in my post, because Haskell also links dynamically to glibc by default, but it statically links with other Haskell code. Presumably that's what Blikkentrekker meant.

If you want to have a 100% static executable, that's been possible for a while now: https://doc.rust-lang.org/edition-guide/rust-2018/platform-a...
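
For example, via the musl target (following the thread's shell-session convention):

    $ rustup target add x86_64-unknown-linux-musl
    $ cargo build --release --target x86_64-unknown-linux-musl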


> > no documented way to force [the reference implementation, with the standard libraries (in particular libc) that the default system semantics are dependent on] to generate a static executable.

> https://doc.rust-lang.org/edition-guide/rust-2018/platform-a...

Yes, I'm aware of that; the fact that they continued using glibc as their default (aka canonical) OS/syscall interface after discovering that it was so impossible to statically link that people had to retarget code to a whole second libc was one of the final nails in the proverbial coffin for me.

> because Haskell also links dynamically to glibc by default, but it statically links with other Haskell code.

Not particularly relevant, since my inquiry was mainly about Rust, but on my system I get:

  $ echo 'main = pure ()' > foo.hs
  $ ghc foo.hs
  [1 of 1] Compiling Main             ( foo.hs, /tmp/tmp.icUr2pqZXx/Main.o )
  Linking foo ...
  $ file foo
  foo: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, stripped
Admittedly, it's entirely possible that I fixed ghc and/or gcc at some point and forgot (I've fixed other bugs, but I don't recall fixing this one).


Not relevant to what you asked, but relevant to the point that was made higher up in this thread.

I think you've fixed something. I get

    $ file foo
    foo: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=f9e391e2e7da2c80dbfc7820ac87294ef447c57a, not stripped


Previous discussion on the Foundation: https://news.ycombinator.com/item?id=24988454

Also relevant if you read through those comments, and the discussion they include on FP Complete's involvement, is Michael Snoyman's more recent statement: https://www.snoyman.com/blog/2020/12/haskell-foundation

I think Snoyman makes a few good points - namely, it remains to be seen exactly what the Foundation can and will do, and what input the community is going to have in that process. While the criticism that was once made of Facebook/IOHK/FPC applies here (namely, that a large enough force will trump Haskellers who invest in the community), the Foundation has all the power to be much worse. The academic side of the language still carries the most weight.


If you're worried that the Foundation might not live up to your expectations or go awry, keep in mind that it can only function insofar as it has input from the community. Feel free to apply for the board at https://haskell.foundation/board-nominations/, and if you think you can execute HF's technical agenda, try out for the Executive director position at https://haskell.foundation/ed-job-description/.


I appreciate the openness of these invitations, but they are ultimately not valuable to large parts of the community. I (like most people) am not in a position to become a board member - certainly not an Executive Director. There are obstacles of both personal qualification and free time (since I don't see any evidence that the position is paid).

Because of that, I'm much more interested in how the Board will determine how it can best serve the community, and gather community feedback. That question can only really be answered after the board members themselves have been chosen.


> I don't see any evidence that the position is paid

> Salary will be commensurate with the experience, qualities, and location of the individual, and will also reflect the Foundation’s status as a non-profit organisation funded by donations.

https://haskell.foundation/ed-job-description/


And for the position of board member, which is the more accessible of the two?


Board members are board members. They do not have executive function in the same sense as an Executive Director. The latter carries out the strategic plans decided by their Board, and serves as the day-to-day executive function. The Haskell Foundation's Board responsibilities are clearly delineated on the site, and is the more accessible position of the two. It is also unpaid, and not a function (necessarily) of tech experience, unlike the Executive Director, who must necessarily have at least some experience with project delivery and Haskell's ecosystem + community in order to function.


Here is what I imagine being on the board to be like:

"Call to order! We have here before us a motion to assign the task of deciding whether to remove 'Hello World' as a program from the documentation, given the ambiguity of its semantics. We have a second...the ayes have it, motion approved."

Or more like, "Here, let us nibble away at this $400k cheese, and by the time we get through the paperwork decide to make anything we'll hire someone on Fiverr to do a cornflower blue logo, but it will be SVG so they know we're serious.""


I rage quit Haskell when I saw all the promises of application correctness fall like a house of cards upon realizing I could write to a file after it had been closed when using lazy IO.


So, was that on the second page of a tutorial?

Yes, Haskell has those gotchas, like "don't use lazy IO", "there are partial functions in the Prelude", "String is slow", and "code may throw exceptions where you don't expect".
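
The partial-functions one, for example, bites in a single line of GHCi:

    ghci> head ([] :: [Int])
    *** Exception: Prelude.head: empty list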

They are there mostly because of history, and the language would be better without them. Yet they are easily avoided, and there aren't that many (those above are most of them). If you rage quit every toolset that has problems, I have some really bad news for you.


Thank you. You demonstrate very well what the problem is with the Haskell community, and in a thread about Haskell adoption, no less.

For a language to be adopted it must be beginner friendly. And by beginner friendly, I don't mean easy. It can have a steep learning curve as long as it enables the user to be productive, otherwise the beginner will give up quickly.

For the experienced user, things like "don't use lazy IO", "there are partial functions in the Prelude", "String is slow", and "code may throw exceptions where you don't expect" may be obvious, but they are not for the beginner. I don't remember exactly how I ended up using lazy IO, but trust me, it wasn't because I ignored a big fat warning that I should NOT be using it.

It's gotchas like these that prevent the beginner from being productive.

Then the beginner reaches out to the community sharing their experience and gets a defensive and passive-aggressive reply like yours. It's enough to make them not want to touch that language ever again.

You think you are defending your favorite language, but actually you are doing more harm.


Honestly, I'm not much concerned with beginner friendliness. As long as there are enough users to maintain and improve its ecosystem, it will stay a good language. The motto of "avoid success at all costs" was correct, and if beginner friendliness comes at the expense of expert usefulness, I'll be glad the second is chosen.

That said, none of that applies to those gotchas. They are clear problems with the language. None of them are obvious, but all of them are widely discussed, so you will see them early when learning the language. Learning the gotchas of an ecosystem is a central part of learning any tool in informatics, and as gotchas go, one of the strong points of Haskell is that there are very few of them.

You are making a very common complaint, one that comes from ignoring the problems of the tools you know while making a fuss over anything you find in the new one. It's almost certain that whatever languages you are used to have a much larger pile of gotchas than Haskell (because Haskell has an atypically small number of them), but that doesn't stop them from being useful.


> You are making a very common complaint, one that comes from ignoring the problems of the tools you know while making a fuss over anything you find in the new one. It's almost certain that whatever languages you are used to have a much larger pile of gotchas than Haskell (because Haskell has an atypically small number of them), but that doesn't stop them from being useful.

"Yeah well your language isn't good either" isn't a very good argument for why a language is _better_. If the idea is "Haskell works better than other languages as long as you happen to know how to make it so", isn't that just every language that exists? Due to Haskell's branding, I was under the impression that the issues the above user is describing should be impossible. If Haskell doesn't even make good on those guarantees, but instead shifts that guarantee as "Things you should've known not to do" onto the user, how is that different from any other language?

edit And since this thread has a lot of tension and vague hostility, I will add, I am not attacking you or the language you enjoy. I legitimately would like to better understand.


Hum... I do think "Haskell has an unusually low number of gotchas" is an objective argument, and not one that applies to just any language. Also, no, your "Haskell works better than other languages as long as you happen to know how to make it so" can't apply to every language; by construction you can only put one language in it¹, unless you assume that Haskell is the only language the developer knows well, which would make for a really lame argument.

But "you won't be able to use it well if you don't learn how to use it" does indeed apply to all languages. Yet, it's more relevant for some, and Haskell is one of those more affected by it.

It is a fact that Haskell is not beginner friendly. Your first program in it will suck, whatever proficiency you have with other languages (some people argue that it is easier if you know less beforehand). It takes some learning before you are able to get the better safety, high productivity, and easy collaboration people talk so much about (and even then, you won't get those for every problem; the language has limitations too).

1 - The statement is false anyway. There's no such language one can single out without more context.


Code written in Haskell is only safe if the authors of such code take advantage of the features that the language provides to write safe code. Haskell makes it ergonomic to write safe code, not mandatory. It turns out that the authors of Haskell's Prelude 30 years ago didn't get things quite right. We're still dealing with that today and it takes time and effort to fix things.


I understand your point. My question, though, is: for what languages is this statement not true:

" < language > is only safe if the authors of such code take advantage of the features that the language provides to write safe code "


It's not a binary classification of "Haskell safe, everything else unsafe". It's a gradation. Haskell provides more such features and more ergonomically (goes the claim, and it's one I happen to agree with).


Well, while Python still has 2 or 3 gotchas, I think it can be called safe (now the question arises: what do we actually mean by safe?).


Another point is that beginner-friendly languages don't always make expert-friendly languages. Python is great for beginners, but I have seen its lack of types create heaps of brittle code at companies that no one wants to touch.

Some of the "C++ contenders" (besides Rust) like Nim, Zig, and potentially Swift seem to balance this line well: simple to start using, but grow with one's expertise.


I wrote quite a bit of Haskell several years ago and sadly I have to agree! I understand why all the cruft is kept around, for one, backwards compatibility. But they really need a clean break to remove all the cruft and simplify things. There shouldn't be so many ways to do things (especially if some of those ways are just flat-out wrong).


So you went from hearing some promises to discovering gotchas in complex, decades-old technology, decided to stop, and then schooled the Haskell community on topics they are very well aware of.

How very modest of you, if only they could listen.


some people spout answers when they should be asking questions


haskell is a language for experts. quite frankly, it does not need runaway success to be successful. it can remain a top 20 or whatever language popularity-wise and remain best-in-class for experts.

once you are the expert, using other programming languages would be a waste of your time.

also - the post you are replying to isn't condescending. the haskell community just doesn't treat beginners like infants who need coddling. haskellers tend to give the nuanced & truthful answer to questions.


> haskell is a language for experts.

Expert in what? Programming? Lambda calculus? Category theory? Let's not pretend that you simply become an expert in something at random; it takes a lot of practice, mistakes, and actually being a beginner before becoming an expert. Even if you are an expert in all of the above, you are still a beginner when learning Haskell the first time, and you will make mistakes, so having beginner-friendly resources is important.

Quite frankly, it’s this kind of gatekeeping attitude - that Haskell is meant for “us”, and to become one of us you have to be an expert, and we aren’t going to coddle you - that significantly contributes to the lack of good and welcoming beginner resources and causes very few people to actually pick up and stick with the language. You can be a “language for experts” and still have good, beginner-friendly learning resources to welcome more people into the community.


It's a language for experts in Haskell. It isn't optimized for the beginner, and all the proposed fixes to these issues fail for exactly that reason. We can't just remove lazy IO or partial head without significant impact, so we coexist with them - tutorials and all.

There are plenty of beginner-friendly resources already (more & more every year), but they can't erase worse ones from the Internet. The Haskell community lacks the sort of strong-fisted leadership to accomplish that, by design.

That said, acting like a tutorial mentioning lazy IO or partial head/tail is a killer is a little ridiculous. Neither of those things is atrociously problematic, so learning about them early is fine. You may get cut by them, but getting cut is okay if you don't flip the table.

Also, I (and most Haskell posters) do not gatekeep. The community will go well out of its way to respond to any questions beginners have, with blog-post-quality comments in the various forums (mailing list, reddit, irc, slack, github issues, etc.). There is definitely a teaching culture, which is the opposite of gatekeeping.


Is it really so hard to add a flag -safe that rejects or warns on garbage like what's in Prelude (lazy IO, partial functions)?


Well kind of yes and no. As far as GHC is concerned, `Prelude` is just a module like any other, within `base`, a package like any other.

We could special-case it, but on the other hand, there are already multiple other solutions in this space that are more generally useful: alternate Preludes (with compiler support via -XNoImplicitPrelude), and `hlint` can also be customized to do exactly what you're asking for.
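
For example, a module can opt out of the default Prelude entirely (Relude here is just one of the alternatives):

    {-# LANGUAGE NoImplicitPrelude #-}
    module Main where

    -- swap in an alternative prelude; head/tail are total here
    import Relude

    main :: IO ()
    main = putTextLn "no partial head in scope"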


> haskell is a language for experts

That's going to make it very hard for me to run a business, which involves sourcing, hiring, and training a potentially large number of people. Hiring only experts is a tremendous business cost.


That's potentially true, and also fine. Success for a programming language doesn't mean it has to be a good language for a corporation's interests.

Haskell is first and foremost a gigantic personal productivity booster. It allows me to do bigger and more complex projects on my own or with one or two others. These projects are not for corporate interests although that can be for personal profit. I am pretty confident I will be able to stop giving my labor to any corporation within the 10 year mark of my career, and Haskell will be a big reason why.

But I still try to convince corporations to use Haskell, since then we offload the cost of learning onto a corporation instead of an individual. That feels like a worthwhile re-allocation of capital to me! Corporate Haskellers get paid to gain expertise and keep it forever, leaving the corporation with nothing besides their direct labor. Luckily, it's easy to convince a corporation of anything with what amounts to propaganda and politics.


> If you rage quit every toolset that has problems, I have some really bad news for you.

As someone who has tried to learn Haskell twice and failed (and trust me, I went way beyond the second page of a tutorial ;-), I don't think the issue is that Haskell has problems. As you hinted, every language has problems. The issue is that for a lot of people Haskell requires significantly more effort to learn than other languages, and it doesn't have a lot of publicly available software to show for it. So when we learn Haskell and stumble on problems, we are less forgiving than with other languages, because we have spent more effort and we don't have enough examples of useful software made with Haskell to reassure us that our efforts will eventually pay off.

We lose confidence because it seems that the promises of correctness aren't fulfilled, and we don't have enough evidence that we aren't wasting our time. We would need at least to see the light at the end of the tunnel to keep us motivated. The sad thing is that if we decided to spend long hours of our free time learning this language instead of doing other things, it's because we really believe in the ideas behind it. What we have read about the language obviously speaks to us, but at some point we lose motivation because we don't get any tangible results.

I'm pretty sure that a good part of why it clicks for some people and not others is the amount of mathematical literacy. I don't have a strong mathematical background, so the amount of effort for me to learn the language is probably much higher than for someone better at math. I have to invest more, so I expect more in return, which in my personal experience Haskell fails to deliver. To appeal to a wider audience, Haskell needs to be pretty much flawless and also have more public software to showcase.


> We lose confidence because it seems that the promises of correctness aren't fulfilled and we don't have enough evidence that we aren't wasting our time. We would need at least to see the light at the end of the tunnel to keep us motivated.

Sometimes you go into tunnels that are not tunnels but mines, and you go there not to see the light at the end, but to dig diamonds.


Sorry you had a bad experience. Hopefully the Haskell Foundation will be able to put resources into the beginner experience and you will find it easier in the future.


And actually all of those things are problems with the default prelude and not the language core. There are alternative preludes that one can use, and hopefully these warts will be fixed in the future. It takes time because it's a breaking change though.


Here's a nice summary of some of the more popular preludes.

https://guide.aelve.com/haskell/alternative-preludes-zr69k1h...


Are there from-scratch tutorials based on any of these?



Thanks. I meant a recommendable tutorial for Haskell newbies that starts from scratch with modern/sane defaults. I’ve seen webdev examples with quite offputting boilerplate that I’m told could be dealt with by a prelude and I wonder if it’s a problem that people are trying to solve.


> And actually all of those things are problems with the default prelude and not the language core.

This directly translates to "a problem with the language" for any beginner, since it is the default Prelude. A beginner has to somehow learn that there are issues with Prelude, that other preludes exist, find the differences between them, select one, and then reconcile the inevitable differences between that prelude and the official one when reading various docs.

...

And then figure out which of the two dozen language extensions you need to use to even write something useful beyond a "Hello, world".
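To be fair, the header in question tends to look something like this before any "real" code starts (an illustrative pick of common extensions, not a recommendation):

  {-# LANGUAGE OverloadedStrings #-}
  {-# LANGUAGE LambdaCase        #-}
  {-# LANGUAGE DeriveGeneric     #-}
  {-# LANGUAGE RecordWildCards   #-}
  -- ...and so on, accumulated one compile error at a time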


We didn't throw away C++ back when the STL was no good. Why should the situation with Haskell be different? (And Prelude's problems are way less severe than the problems the STL used to have.)


> We didn't throw away C++ back when the STL was no good.

You're not doing Haskell any favors by comparing with "the times when STL was no good" (when was this? 20 years ago? 30?).

> Why should the situation with Haskell be different?

Because the world has moved on and expects languages, their libraries and runtimes to become better.


> You're not doing Haskell any favors by comparing with "the times when STL was no good"

Huh, why not? He's comparing, not equating, after all.

> And Prelude's problems are way less severe than the problems the STL used to have


Um. No, you can't. I don't know what you are talking about. What you are describing is not and cannot be how lazy IO works. Lazy IO is always on the read side.

Lazy IO, like the partial functions in Prelude, is a convenience hack for making some very low-complexity things easy. I don't have a problem with it existing, because I am fine with matching my tool complexity with the problem complexity.

And also, I understand what it does. I'm surprised I'm the first person to point out that your assertion is not an accurate description of anything that actually happens.
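For anyone who hasn't run into it, the classic read-side pitfall looks roughly like this (a minimal sketch; it assumes some input.txt exists):

  import System.IO

  main :: IO ()
  main = do
    h <- openFile "input.txt" ReadMode
    s <- hGetContents h  -- lazy: nothing has actually been read yet
    hClose h             -- closing freezes the still-unread contents
    putStrLn s           -- prints an empty string, not the file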


> I rage quit Haskell when I saw all the promises for application correctness fall like a house of cards by realizing I can write to a file after it has been closed when using lazyIO.

The promises aren't “using Haskell magically causes correctness”, but “Haskell provides powerful tools that most other languages don't that enable correctness”.

But, you know, the people that need to telegraph “rage quitting” a language are usually the people chasing an unreasonable silver bullet and misinterpreting things through that lens to start with.


I'm not a Haskeller, but didn't they add affine/linear types recently that could help?


Linear types don't help here.

The complaint is that with lazyIO, even if your code does the write before closing the file, the actual order the operations occur in could change, putting the close first.

If you look at the lazyIO documentation, you will see:

>Although this module calls unsafeInterleaveIO for you, it cannot take the responsibility from you. Using this module is still as unsafe as calling unsafeInterleaveIO manually. Thus we recommend to wrap the lazy I/O monad into a custom newtype with a restricted set of operations which is considered safe for interleaving I/O actions.

https://hackage.haskell.org/package/lazyio-0.1.0.4/docs/Syst...

TLDR, don't use lazy IO.


The lazyio package is not what people are talking about when they say "lazy IO".


> The lazyio package is not what people are talking about when they say "lazy IO".

I don't know which people you are talking about, but it clearly was in this specific subthread, which started with a complaint about “LazyIO”, not “lazy IO”. The only use of “lazy IO” in a post before yours was in a post where all previous references were to “lazyIO” and its specific package documentation; contextually, the “lazy IO” reference was about the same thing, not something else.


You can write bad code in any language if you try hard enough. No one should use lazyio (or any library) for important work without checking the code or at least the docs to see if the code is intentionally bad (and renaming "unsafeInterleaveIO" to "lazyIO.InterleaveIO" is intentionally bad).

One of Haskell's known weaknesses is that there are no curated catalogs of high quality libraries. The industrial strength stuff sits on package servers alongside the broken toys.

I once got an apology from a major luminary in Haskell, author of dozens of high quality packages, because I used a package he wrote that turned out to be an abandoned broken experiment, but not documented as such.


Well, that's not really covered by the correctness guarantees, nor could it be — what happens does not lead to undefined behavior or memory corruption, and “external correctness” related to writing to files depends on what the outside world does with the file to begin with.

There are other strange things in the IO monad, such as the ability to observe evaluation orders by throwing exceptions.


> There are other strange things in the IO monad, such as the ability to observe evaluation orders by throwing exceptions.

That's not specific to IO; you can more or less always observe evaluation order by throwing exceptions:

  bar = error "bar"
  baz = error "baz"
  foo = (bar,baz) :: (Int,Int)
  main = print foo -- well, I guess you need IO to run anything at all
`bar` is evaluated first (at least on my machine), as evidenced by:

  $ ./test
  test: bar
  CallStack (from HasCallStack):
    error, called at test.hs:1:7 in main:Main


> I guess you need IO to run anything at all

It's a bit more than that. You need to be in IO in order to catch an exception.

That is to say, if you have:

   bam :: (Int, Int) -> String
There is no way to write:

  bam (x,y) = "bar" if x is evaluated first
                     "baz" if y is evaluated first.
However, if you were to instead have:

  bam :: (Int,Int) -> IO String
Such a function is possible to write, as you can now catch the exception and do something with it.
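A rough sketch of such a function, catching whichever error happens to be forced first (just an illustration, not anyone's production code):

  import Control.Exception

  bam :: (Int, Int) -> IO String
  bam (x, y) = do
    r <- try (evaluate (x + y)) :: IO (Either ErrorCall Int)
    pure $ case r of
      Left (ErrorCall msg) -> msg  -- names the component evaluated first
      Right _              -> "no exception"

  main :: IO ()
  main = bam (error "bar", error "baz") >>= putStrLn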


Fair enough [edit: that's a fair distinction to make], but the problem is still the existence of exceptions (at least exceptions that don't always terminate the process, though that's implied by most useful definitions of "exception" anyway), not anything that would be true of a hypothetical alternate version of Haskell that still had IO but didn't have exceptions.


The difference is that with IO, one can catch an exception, and thus write a program that branches based on evaluation order.

In this case the Haskell program itself does not observe the order, as it crashes, but the order is printed as debug information.


Honest question that's going to draw some ire. Why use Haskell when we have rust? I've poked at Haskell and it was not a good experience.

I just don't think you can make an honest business case for it on a greenfield project. That's my opinion, is there a chance I'm wrong?


I'm not sure if an opinion of this sort can really be wrong, but here's an attempt at an honest case:

- If you're working with complex domain objects, Haskell has somewhat better (edit: or at least more powerful) tools for forbidding invalid states at runtime and assisting refactors as your data model changes (see the sketch after this list).

- Haskell seems to have better libraries for writing parsers and compilers than Rust, if that's relevant to your project.

- Some folks prefer ML syntax to Algol-like syntax.

- The GC, immutable-by-default data, and reasonable concurrency model are pretty nice features of the GHC runtime.

- The performance difference between Haskell and Rust probably doesn't matter for most projects.

- Haskell might actually compile faster if you avoid crazy type-level programming shenanigans.

- Haskell has a pretty decent REPL.

Obviously, Rust has a different set of trade-offs, and the bigger cost to either is going to be that both are a bit off the beaten path and require some practice and experience to be productive and comfortable in.
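On the first point, the usual "make invalid states unrepresentable" move is just a sum type. A toy sketch (the names are made up for illustration):

  newtype Socket = Socket Int  -- stand-in for a real handle

  data Connection
    = Disconnected        -- no socket can exist in this state
    | Connected Socket    -- a socket exists only while connected

Any code matching on Connection has to handle both cases, and there is simply no way to hold a Socket while logically disconnected.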


Rust is still fundamentally a systems programming language and Haskell is a higher-level application programming language. You can do both in either, just as you can write anything in any language that is turing complete. But that doesn't mean a given language is always meant for the kind of thing you're using it for.


Is your argument then that rust is less fit for high level programming tasks? Because I disagree with that assessment.


Yes.


Why?

Rust is a nice modern language that's both imperative and functional with great tooling and libraries.

What makes Haskell better in this area in your opinion?


Have you tried Haskell? It has garbage collection and immutability. This entirely removes the need for a borrow checker (while maintaining correctness). Overall development is at least twice as productive in my experience. Of course this comes with an associated slow-down in runtime performance.


If garbage collection is a plus and not a drawback on the project then I would prefer Go. In neither case would Haskell top the list of options for me.


Maybe you could clarify what was not good in your experience poking at Haskell.

Which source materials did you use? What is your background? How comfortable are you with higher level typing systems?

I have very little hands-on experience, but from the little I have messed with both languages, they serve different purposes and are not an apples-to-apples comparison.


It would probably help if you said more about your experiences.


I am confused. I thought Haskell was taking pride in not being used in production (and instead being a testbed for new language features/concepts)


You might be confused by the motto "avoid success at all costs", which is deliberately ambiguous.


I think it's (avoid (success at all costs)) isn't it?

But I agree - why do people seem to go out of their way to hamper themselves with pun mottos that are just begging to be misunderstood, either genuinely, or deliberately and used against them?

Many people don't stick around for the 'ah it actually means...' bit where you unveil your witticism. They just go away with the face-value explanation and don't bother to learn more.

Like 'free software'. 90% of people aren't sticking around to hear the pitch about 'free as in freedom'. You've already missed your opportunity to explain your point of view - you wasted it on making a pun instead of actually communicating! It's madness.


Yes, that's the line I remember (and read the explanation that Haskell is comfy with being niche). What other interpretation do you think is intended with this sentence? (to me it looks very un-ambiguous, more like deliberately un-ambitious)

Edit: I think I read this explanation: https://news.ycombinator.com/item?id=12056169#:~:text=haskel.... "Haskell would prefer to be powerful, safe, efficient, obscure and niche; rather than popular, industry-standard, widely-known, highly compatible, unsafe, insecure, inefficient and restricted"


No, not really, the number of Haskell-in-production shops is small but steadily growing.


"I'm convinced that if only we could get a word in with management, and explain what a monad is, they'd consider using Haskell"

Not an actual quote, but I liked the scene it brought to mind.


I'd like to add that there's also a large Discord community for Haskellers here that's friendly to beginners: https://discord.gg/7meVVxA


And for longer discussions and regular meetups, there's FP Zulip: funprog.zulipchat.com


That's not even that much of a stretch. Consider:

"I'm convinced that if only we could get a word in with management, and explain what [buzzword] is, they'd consider using [technology]"


Haskell is the ultimate shibboleth. Its value for actual programming is secondary.


I’d say that the value of Haskell is in all of the lessons we learn from the research that it enables. Language designers are constantly looking to languages like Haskell for well thought-out / sound features they can add to make programming easier in other languages. I’m not sure why, but Haskell is an amazing hotbed for experimentation with new language features.

You don’t have to use Haskell directly to benefit from it.

Saying that it’s a shibboleth—well, that’s kind of a crass way to dismiss an entire community. It’s rude and not insightful.


If you want vengeance and to show me the error of my rude, crass ways, then write some great software in Haskell.


The error of your rudeness has nothing to do with whether great software exists written in Haskell (which, of course, it does, including but not limited to pandoc and PostgREST).


Allowing for your (silly) premise, I use XMonad and Pandoc all the time and they are great.

Of course, the quality of a language is at best weakly correlated, and at worst anticorrelated, with the amount of useful software that is written using it.


I’m not trying to “get vengeance” or “show you the error of your ways”.

Your comment was dismissive, and I’m pointing it out because if nobody responds to people making rude comments, other people will think it’s okay to make comments like yours.


It used to be a very precise indicator of technical competence/interest, but grifters have caught onto this and now there are several of them in the Haskell world. It's still a good indicator, but seems to be getting less predictive over time.


> Haskell is the ultimate shibboleth.

For...what group? I mean, any language can trivially be seen as a shibboleth for its own community, but that's hardly worth calling out for any one language especially over others.


I agree with you, although I read your statements as disjoint, as opposed to conditional.

I think the “secondary” value it has created is immense. I’m not a Haskell programmer by any stretch, but thinking in terms of concepts evangelized by Haskell (isolating side effects, first class functions, monads) has totally changed the way many people design systems in conventional programming languages.


> concepts popularized by Haskell...

So, this may be a learning opportunity for me, but I thought the idea of separating out side-effects at the process boundary is a very old idea. The same goes for "first class functions". Haskell appeared around 1990. Lisp was 1958, Smalltalk 1972.

Besides, isn't Haskell fundamentally about lazy evaluation and rigorous type safety?


Haskell enforces function purity (side effect isolation) using the type system. This is relatively novel, neither Lisp nor Smalltalk did it or even tried to do it, as far as I can tell. You could put together lazy values in Lisp manually but Haskell made the entire language lazy by default, and as a result, you couldn’t reliably sequence I/O operations if you tried to “cheat” the type system by giving pure type signatures to functions with side effects. The result was (eventually, in Haskell 1.3) the modern IO monad.

If you were using Lisp or ML, you could write pure code in the core of your program but fall back to impure code wherever you liked, because purity was not ever enforced. This explains why nobody really used stuff like monads in Lisp or ML—it wasn’t practically useful, because you could always just write impure code.

> Besides, isn't Haskell fundamentally about about lazy evaluation and rigorous type safety?

I’d say—no, absolutely not. It’s not about either of those things.

“Rigorous type safety” is kind of an ill-defined concept. I’d say Java has rigorous type safety, but it’s very different from Haskell. The lesson from Haskell is that function purity (side effect isolation) is incredibly useful, and laziness is the historical reason why functions in Haskell had to be annotated with the correct types—because a "print :: String -> ()" would not work in Haskell.

In other words, the fact that Haskell uses lazy evaluation forced the development of type system capable of expressing side effects and sequential operations, and now that we’ve discovered how to do that, we could discard the lazy part and make an eager version of Haskell. Some people have done that, there are a couple eager variants of Haskell out there. They are very recognizably Haskell.
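To make the print example concrete, here is a sketch of what goes wrong (badPrint is hypothetical; it cannot actually be written in Haskell):

  -- Suppose badPrint :: String -> () existed and "printed" as a side effect:
  --
  --   let _ = badPrint "hello" in 42   -- result never forced, never prints
  --   (badPrint "a", badPrint "b")     -- forcing order is unspecified
  --
  -- IO instead makes sequencing part of the value:
  main :: IO ()
  main = putStrLn "a" >> putStrLn "b"  -- (>>) guarantees "a" before "b"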

There are a couple other cool things that came out of Haskell’s purity, like software transactional memory. Haskell didn’t invent STM, just like it didn’t invent purity, but Haskell is still the one major success story for STM. The lesson from trying to port STM to other languages is that Haskell’s purity is what made it successful there. Lazy evaluation, in a sense, is just the thing that breaks your program if your functions are not pure, keeping you honest.
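For reference, the STM story looks something like this in practice (a minimal sketch using the stm package):

  import Control.Concurrent.STM

  -- Composable, retryable transfer: safe to retry precisely because an
  -- STM transaction cannot perform arbitrary side effects.
  transfer :: TVar Int -> TVar Int -> Int -> IO ()
  transfer from to n = atomically $ do
    modifyTVar' from (subtract n)
    modifyTVar' to (+ n)

  main :: IO ()
  main = do
    a <- newTVarIO 100
    b <- newTVarIO 0
    transfer a b 40
    readTVarIO a >>= print  -- 60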


Thank you. This was a very informative comment. I knew that Haskell didn't originally have monadic IO but the reasoning behind its adoption never occurred to me.


Yes, the concepts are pretty old. For a lot of people newer to programming, Haskell is the first language that compels them to be understood. That’s what I meant by “popularized”, to clarify. Changed it now to “evangelized”, to make the comment more clear.

I’d typed a whole paragraph on LISP, but deleted it because I’ve seen that whole regurgitation of history play out so many times on HN. :)

My comment was looking at Haskell as the standard bearer for FP, but it’s not of course.

I agree that lazy evaluation and rigorous type safety are what Haskell is best known for nowadays, especially since its less controversial components have found their way into most conventional programming languages by now.


Chance to improve some user experience here: is it possible to use `stack` without installing Xcode on macOS (BigSur)?

I get an error 'xcodebuild requires xcode' and 'C compiler cannot create executables' when running `stack setup`. I have the Xcode Command Line Tools installed. Anyone had similar experience?


If you are on Big Sur then this happens a lot. It happens with MacPorts too, so they have some information in their tickets.
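Not on Big Sur myself, so treat this as hearsay, but the commonly suggested workaround is to reinstall the Command Line Tools and point xcode-select at them:

  sudo rm -rf /Library/Developer/CommandLineTools
  xcode-select --install
  sudo xcode-select --switch /Library/Developer/CommandLineTools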



