
My recommendation: find and cultivate vision, then view the $JOB not as separate from _your_ work towards _your_ vision, but as part of it. It's the part that funds you enough to keep progressing on your own plan.

Here's how I think of it: if I were a painter, I would paint, explore, and experiment in my free time because it's what I want to do. Maybe, as a painter, my vision is to improve the state of the art of some kind of dye or brush or canvas. But! That does not mean that I cannot be commissioned to work on a mural or put on retainer for a museum or something else. The only difference is that in the latter case you are being explicitly paid by a patron to produce something they want. And furthermore I need that work: I work for myself, but I still need projects that bring in money so I can do the work I care about.

I view my software dev the same way. I have a vision of where I want to be, what I want to do, and how I want to contribute to advancing the state of the art of the things I care about. I am unconcerned with the corporate needs around the thing I care about; it's for me and for people like me. My $JOB is just one part of that larger goal and of the path I walk towards it. It's an important part, sure, and I show up and give a good-faith effort and my expert opinion, but it's not the part that enriches me as much as my personal work does. The distinction is that the $JOB is not separate; it's a necessary and important part of my plan to execute on my vision.

Once you have vision I think you'll find it's much easier to find similar people who want to work on the same things you do. And I think you'll find it much easier to tolerate capitalist minutiae, because you will have reduced the things you need from $JOB.


I've interfaced with some AI-generated code, and after several instances of finding subtle yet very wrong bugs, I now digest any code I suspect of coming from AI (or an AI-loving coworker) with much, much more scrutiny than I used to. I've frankly lost trust in some coworkers' care for quality or due diligence.

I see how AI can be useful, but I feel that the practitioners of AI-as-coding-agent are running away from the real work. How can you tell me about the system you say you have created if you don't have the patience to make it or think about it deeply in the first place?


Your coworkers were probably writing subtle bugs before AI too.


Would you rather consume a bowl of soup with a fly in it, or a 50 gallon drum with 1,000 flies in it? In which scenario are you more likely to fish out all the flies before you eat one?


Easier to skim 1000 flies from a single drum than 100 flies from 100 bowls of soup.


Alas, the flies are not floating on the surface. They are deeply mixed in, almost as if the machine that generated the soup wanted desperately to appear to be doing an excellent job making fly-free soup.


… while not having a real distinction between flies and non-fly ingredients.


No, I think it would be far easier to pick 100 flies each from a single bowl of soup than to pick all 1000 flies out of a 50 gallon drum.

You don’t get to fix bugs in code by simply pouring it through a filter.


I think the dynamic is different - before, they were writing and testing the functions and features as they went. Now, (some of) my coworkers just push a PR for the first or second thing copilot suggested. They generate code, test it once, it works that time, and then they ship it. So when I am looking through the PR it's effectively the _first_ time a human has actually looked over the suggested code.

Anecdote: In the 2 months after my org pushed Copilot down to everyone, the number of warnings in the codebase of our main project went from 2 to 65. I eventually cleaned those up and created a GitHub Action that rejects any PR that introduces new warnings, but it created a lot of pushback initially.


Then, when you've taken an hour to be the first person to understand how their code works from top to bottom, and you point out obvious bugs, problems and design improvements (no, I don't think this component needs 8 useEffects added to it which deal exclusively with global state that's only relevant 2 layers down, and which effectively treat React components like an event handling system for data; don't believe people who tell you LLMs are good at React, because if you see a useEffect with an obvious LLM comment above it, it's likely to be buggy or unnecessary), your questions about it are answered with an immediate flurry of commits and it's back to square one.

Who are we speeding up, exactly?


Yep, and if you're lucky they actually paste your comments back into the LLM. A lot of times it seems like they just prompted for some generic changes, and the next revision has tons of changes from the first draft. Your job basically becomes playing reviewer to someone else's interactions with an LLM.

It's about as productive as people who reply to questions with "ChatGPT says <...>" except they're getting paid to do it.


I wonder if there’s a way to measure the cost of such code and associate it with the individuals incurring it. Unless this shows on reports, managers will continue believing LLMs are magic time saving machines writing perfect code.


Agreed. The Haskeller in me screams "You've just implemented the IO monad without language support".


It's not a monad because it doesn't return a description of how to carry out I/O that is performed by a separate system; it does the I/O inside the function before returning. That's a regular old interface, not a monad.


> 1. a description of how to carry out I/O that is performed by a separate system

> 2. does the I/O inside the function before returning

How do you distinguish those two things? To put my cards on the table, I believe Haskell does 2, and I think my Haskell effect system Bluefin makes this abundantly clear. (Zig's `Io` seems to correspond to Bluefin's `IOE`.)

There is a persistent myth in the Haskell world (and beyond) that Haskell does 1. In fact I think it's hard to make it a true meaningful statement, but I can probably just about concede it is with a lot of leeway on what it means for I/O to be "performed by a separate system", and even then only in a way that it's also true and meaningful for every other language with a run time system (which is basically all of them).

The need to believe that Haskell does 1 comes from the insistence that Haskell be considered a "pure" language, the inference that that means it doesn't do I/O, and therefore the need for "something else" to do the I/O. I just prefer not to call Haskell a "pure" language. Instead I call it "referentially transparent", and the problem vanishes. In a Haskell program like

    main :: IO ()
    main = do
       foo
       foo

    foo :: IO ()
    foo = putStrLn "Hello"
I would say that "I/O is done inside `foo` before returning". Simple. No mysteries or contradiction.

https://hackage-content.haskell.org/package/bluefin/docs/Blu...


> I would say that "I/O is done inside `foo` before returning".

It is not. The documentation and the type very clearly shows this:

https://hackage.haskell.org/package/base-4.21.0.0/docs/Prelu...

> A value of type `IO a` is a computation which, when performed, does some I/O before returning a value of type a.

So your function foo does no IO in itself. It returns a "computation" for main to perform. And only main can do this, since the runtime calls main. You can call foo as much as you like, but nothing will be printed until you bind any of the returned IO values.

Comparing it to other languages is a bit misleading since Haskell is lazy. putStrLn isn't even evaluated until the IO value is needed. So even "before returning" is wrong no matter how you choose to define "inside".


I'm also pretty sure that it's immaterial whether Haskell does 1 or not. This is an implementation detail and not at all important to something being a Monad or not.

My understanding is requiring 1 essentially forces you to think of every Monad as being free.
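To make that "free" reading concrete, here is a minimal sketch of a free monad: a structure that genuinely does build a description first and hands it to a separate interpreter to run. The `TeletypeF`, `printLn`, and `run` names are made up for illustration; they are not from the thread above.

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A free monad: computations are *data* (a description), not effects.
data Free f a = Pure a | Roll (f (Free f a)) deriving Functor

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Roll fg <*> x = Roll (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  Pure a  >>= g = g a
  Roll fa >>= g = Roll (fmap (>>= g) fa)

-- One instruction: print a string, then continue with `k`.
data TeletypeF k = PrintLn String k deriving Functor

printLn :: String -> Free TeletypeF ()
printLn s = Roll (PrintLn s (Pure ()))

-- The *separate interpreter* that actually performs the I/O.
run :: Free TeletypeF a -> IO a
run (Pure a)              = pure a
run (Roll (PrintLn s k))  = putStrLn s >> run k

main :: IO ()
main = run (printLn "Hello" >> printLn "World")
```

Requiring interpretation 1 of every monad amounts to requiring this shape everywhere, which is exactly the "every Monad is free" reading.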


Ah! My favourite Haskell discussion. So, consider these two programs, the first in Haskell:

    main :: IO ()
    main = do
      foo
      foo

    foo :: IO ()
    foo = putStrLn "Hello"
and the second in Python:

    def main():
      foo()
      foo()

    def foo():
      print("Hello")
For the Python one I'd say "I/O is done inside `foo` before returning". Would you? If not, why not? And if so, what purpose does it serve to not say the same for the Haskell?


My Haskell is rusty enough that I don’t know the proper syntax for it, but you can make a program that calls foo and then throws away / never uses the IO computation. Because Haskell is lazy, “Hello” will never be printed.


You can do this

    main = do
      let x = foo
      putStrLn "foo was never executed"
but you can also do this

    def main():
      x = foo
      print("foo was never executed")
What's the difference?


So it's the reader monad, then? ;-)


Yes.


Can you explain for those of us less familiar with Haskell (and monads in general)?


A reader is just an interface that allows you to build up a computation that will eventually take an environment as a parameter and return a value.

Here's the magic:

    newtype Reader env a = Reader { runReader :: env -> a }
    
    ask = Reader $ \x -> x
    
    instance Functor (Reader env) where
      fmap f (Reader g) = Reader $ \x -> f (g x)
    
    instance Applicative (Reader env) where
      pure x = Reader (\_ -> x)
      ff <*> fx = Reader $ \x -> (runReader ff x) (runReader fx x)
    
    instance Monad (Reader env) where
      (Reader f) >>= g = Reader $ \x -> runReader (g (f x)) x
That Monad instance might be the scariest bit if you're unfamiliar with Haskell. The (>>=) function takes a Monad (here a Reader) and a continuation to call on its contents. It then threads the environment through both.

Might be used like this:

    calc :: Reader String Int
    calc = do
      input <- ask
      pure $ length input
    
    test :: Int
    test = runReader calc "Test"
    -- returns: 4
Not sure how this compares to Zig!

https://stackoverflow.com/questions/14178889/what-is-the-pur...

Edit: Added Applicative instance so code runs on modern Haskell. Please critique! Also added example.


Here's a minimal python translation of the important bits:

    class Reader:
        def __init__(self, func):
            self.run = func
        @staticmethod
        def pure(x):
            return Reader(lambda _: x)
        def bind(self, f):
            return Reader(lambda env: f(self.run(env)).run(env))

    ask = Reader(lambda env: env)

    def calc():
        return ask.bind(lambda input_str:
            Reader.pure(len(input_str)))

    test = calc().run("test")
    print(test)
Admittedly this is a bit unwieldy in Python. Haskell's `do` notation desugars to repeated binds (and therefore requires something to be a Monad), and does a lot of handiwork.

    -- this:
    calc :: Reader String Int
    calc = do
      input <- ask
      pure $ length input

    -- translates to:
    calc' :: Reader String Int
    calc' = ask >>= (\input -> pure $ length input)


A Monad is a _super_ generic interface that can be implemented for a whole bunch of structures/types. When people talk about "monads", they are usually referring to a specific instance. In this case, the Reader monad is a specific instance that is roughly equivalent to functions that take an argument of a particular type and return a result of any type. That is, any function that looks like this (r -> a) where `r` is fixed to some type, and `a` can be anything.

Functions of that form can actually implement the Monad interface, and can make use of Haskells syntax support for them.

One common use-case for the reader monad pattern is to ship around an interface type (say, a struct with a bunch of functions or other data in it). So, what people are saying here is that passing around the `Io` type as a function argument is just the "reader monad" pattern in Haskell.

And, if you hand-wave a bit, this is actually how Haskell's IO is implemented. There is a RealWorld type, which with a bit of hand waving, seems to pretty much be your `Io` type.

Now, the details of passing around that RealWorld type is hidden in Haskell behind the IO type, So, you don't see the `RealWorld` argument passed into the `putStrLn` function. Instead, the `putStrLn` function is of type `String -> IO ()`. But you can, think of `IO ()` as being equivalent to `RealWorld -> ()`, and if you substitute that in you see the `String -> RealWorld -> ()` type that is similar to how it appears you are doing it in Zig.

So, you can see that Zig's Io type is not the reader monad, but the pattern of having functions take it as an argument is.

Hopefully that helps.

---

Due to Haskell's laziness, IO isn't actually the reader monad; it's more closely related to the state monad. In a strict language that wouldn't be required.
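For reference, this is not just hand-waving: GHC's own definition of IO (in GHC.Types) really is a state-passing function. A sketch, with the newtype renamed `IO'` so it can sit alongside the real one:

```haskell
{-# LANGUAGE MagicHash, UnboxedTuples #-}

import GHC.Exts (RealWorld, State#)

-- Mirrors GHC's real definition:
--   newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))
-- The RealWorld token is zero-width at runtime; it exists only to force
-- a data dependency (and hence an ordering) between I/O actions.
newtype IO' a = IO' (State# RealWorld -> (# State# RealWorld, a #))

main :: IO ()
main = putStrLn "IO' compiles"
```

Substituting this in makes `String -> IO ()` look like `String -> State# RealWorld -> (# State# RealWorld, () #)`, which is the explicit-token style described above.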


I see I’ve been beaten to the punch, but I’ll post my try anyway.

Your comment about IO being handled by an external system, in response to a comment about the more general concept of a monad, is what they are somewhat abruptly referring to in the two comments above.

The IO monad in Haskell is somewhat ‘magical’ in that it encapsulates a particular monad instance that encodes computational actions which Haskell defers to an external system to execute. Haskell chose to encode this using a monadic structure.

To be a bit more particular:

The Reader monad is the Haskell Monad instance for what can generically be called an ‘environment’ monad. It is the pattern of using monadic structure to encapsulate the idea of a calling context and then taking functions that do not take a Context variable and using the encapsulating Monad to provide the context for usage within that function that needs it.

Based on your streams in the new system I don't see a monad, mostly because the Reader instance would basically pipe the IO parameter through functions for you, whereas Zig requires explicitly passing the IO (unless you set a global variable for IO, but that's not a monad, that's just global state) to each function that uses it.

From my perspective Zig’s IO looks to be more akin to a passed effect token outside the type system ‘proper’ that remains compile time checked by special case.


Reader monads have been used to implement dependency injection in Haskell and Scala libraries. A monad in general is the ability to compose two functions that have pure arguments and return values that encode some effect... in this case the effect is simply to pass along some read only environment.

Based on my understanding of the above, passing an environment as a parameter is not the Reader monad; in fact, passing the parameter explicitly through chains of function calls is exactly what the Reader monad intends to avoid in typed, pure functional programming.


Reader monad is a fancy way of saying ‘have the ability to read some constant value throughout the computation’. So here they mean the io value that is passed between functions.


Well I don't think that fits at all. In Zig, an Io instance is an interface, passed as a parameter. You can draw some connections between what Zig is doing and what Haskell is doing but it's not a monad. It's plain old interfaces and parameters, just like Allocator.


Passing an interface as a parameter is a monad. (Io -> _) is an instance of Monad in Haskell.

Haskell just has syntax to make using (any) monad much nicer. In this case, it lets you elide the `Io` parameter in the syntax if you are just going to be passing the same Io to a bunch of other functions. But it is still there.


And for comparison, here's Haskell's (or rather Bluefin's) equivalent of Zig's `Io` parameter:

https://hackage-content.haskell.org/package/bluefin/docs/Blu...


Couldn't have said it better myself. But IIUC Andrew stated that it's not a monad because it does not build up a computation and then run it. Rather, it's as if every function runs a `runIO#` or `runReader` every time the io parameter is used.


Is it necessary that a monad "builds up a computation and then runs"? In fact it's very hard for a monad to do that because the type of bind is

    (>>=) :: m a -> (a -> m b) -> m b
so you can really only make progress if you first build a bit (`m a`), then run it (to get `a`) then build the next bit (applying `a` to `a -> m b`), then run that. So "building" and "running" must necessarily be interleaved. It's an odd myth that "Haskell's IO purely builds an impure computation to run".
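A small sketch of that interleaving (the `IORef` plumbing here is just for illustration): in `m a >>= (a -> m b)`, the second action cannot even be constructed until the first has run, because its shape depends on a runtime value.

```haskell
import Data.IORef

main :: IO ()
main = do
  ref <- newIORef (41 :: Int)
  modifyIORef ref (+ 1)
  n <- readIORef ref   -- must *run* this action to obtain `n` ...
  if even n            -- ... before the next action can even be *built*
    then putStrLn ("even: " ++ show n)
    else putStrLn ("odd: " ++ show n)
```

There is no point at which a complete description of the whole program exists before any I/O has happened; building and running alternate.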


Are you saying "monad" is a synonym of "interface"?


Not a synonym, but `Monad` is one of the commonly used interfaces in Haskell (not the only one).


OK I think I understand now, thank you. My takeaways:

1. Yes, Zig is doing basically the same thing as Haskell

2. No, it's not a monad in Zig because it's an imperative language.


It still is a monad. It's just Zig doesn't have language support for monads, so it's less ergonomic.

Just as modular addition over ints in Zig forms a group, even if Zig has no notion of groups. It's just a property of the construct.

Laziness has nothing to do with it.

What that means practically for Zig, I'm unsure.


Monads do not need to build up a computation. The identity functor is a monad.


Let's see if I can do it without going too far off the deep end. I think your description of the _IO type_ as "a description of how to carry out I/O that is performed by a separate system" is quite fair. But that is a property of the IO type, not of monads. A monad in programming is often thought of as a type constructor M (that takes and returns a type), along with some functions that satisfy certain conditions (called the "monad laws").

The `IO` type is a type constructor of one argument (a type), and returns a type: we say that it has kind `Type -> Type`, using the word "kind" to mean something like "the 'type' of a type". (I would also think of the Zig function `std.ArrayList` as a type constructor, in case that's correct and useful to you.) `IO String` is the type of a potentially side-effecting computation that produces a `String`, which can be fed to other `IO`-using functions. `readLine` is an example of a value that has this type.

The Haskell function arrow `(->)` is also a type constructor, but of two arguments. If you provide `(->)` with two types `a` and `b`, you get the type of functions from `a` to `b`:

`(->)` has kind `Type -> Type -> Type`.

`(->) Char` has kind `Type -> Type`.

`(->) Char Bool` has kind `Type`. It is more often written `Char -> Bool`. `isUpper` is an example of a value that has this type.

The partially-applied type constructor `(->) r`, read as the "type constructor for functions that accept `r`", is of the same kind as `IO`: `Type -> Type`. It also turns out that you can implement the functions required by the monad interface for `(->) r` in a way that satisfies the necessary conditions to call it a monad, and this is often called the "reader monad". Using the monad interface with this type constructor results in code that "automatically" passes a value to the first argument of functions being used in the computation. This sometimes gets used to pass around a configuration structure between a number of functions, without having to write that plumbing by hand. Using the monad interface with the `IO` type results in the construction of larger side-effecting computations. There are many other monads, and the payoff of naming the "monad" concept in a language like Haskell is that you can write functions which work over values in _any_ monad, regardless of which specific one it is.

I tried to keep this brief-ish but I wasn't sure which parts needed explanation, and I didn't want to pull on all the threads and make a giant essay that nobody will read. I hope it's useful to you. If you want clarification, please let me know.
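To make the payoff above concrete, here is a tiny sketch using the built-in Monad instance for `(->) r` directly. The `Config`, `double`, and `addTen` names are made up for illustration:

```haskell
import Control.Monad (liftM2)

-- A stand-in environment type (hypothetical, not from the thread above).
type Config = Int

-- Two computations that each read the shared environment.
double, addTen :: Config -> Int
double = (* 2)
addTen = (+ 10)

-- The ((->) Config) monad feeds the same Config to both computations,
-- so the plumbing never has to be written by hand.
both :: Config -> (Int, Int)
both = liftM2 (,) double addTen

main :: IO ()
main = print (both 5)  -- prints (10,15)
```

`liftM2` works for any monad; here the "effect" is simply "reads a `Config`", which is why the reader monad is often described as automatic argument-passing.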


This is pretty concise, but still really technical. That aside, I think the actual bone of contention is that Zig's IO is not a Reader-esque structure. The talks and articles I've read indicate that a function needing the IO 'context' must be passed said context as an argument, excepting the use of a global variable to make it available everywhere; but as I said in a sibling comment, that's just global state, not a monad.

In a manner of speaking, Zig created the IO monad without the monad (it is basically just an effect token disconnected from the type system). Zig's new mechanism takes a large chunk of 'side effects' and encapsulates them in a distinct and unique interface. This allows for a segregation of 'pure' and 'side-effecting' computations similar to the one that logically underpins Haskell's usage of IO. Zig however lacks the language/type-system-level support for syntactically and semantically using IO as an inescapable Monad instance. So, while the side effects are segregated via the IO parameter 'token' requirement, they are still computed as with all other Zig code. Finally, because Zig's IO is not a special case of Monad, there is no restriction on taking the IO-requiring results of a function and using them as 'pure' values.


A series of functions all passing the same `io: IO` value around exhibit exactly the behavior of the reader monad.


I mean, not really? It absolutely does nothing to segregate stateful impurity into a type-theoretically stateless token.


Not a game dev. Besides profiling, I would create game scenarios that exercise certain parts of the game engine.

For example, I would create a fight scenario where the player has infinite health and the enemy just attacks super fast at some settable rate. That way you can monitor what's happening in extreme, abnormal conditions, with the hypothesis that if the game works in extreme conditions then it will work in normal ones.

Another example: if you have random encounters like in old-school JRPGs, then I would create a scenario where a fight happens on every step the player takes: the fight loads, the enemy immediately dies, rinse, wash, and repeat. That should let you assess how the game performs after hundreds of fights, quickly.

The idea here is to create tests that improve your signal to noise ratio. So you create a scenario that will create a large signal so that then you can more easily diagnose the performance issues.


I haven't dabbled in rust since 2018, but if rust has managed to be as complicated as C++ while being a fraction of the age then I would think that would be some kind of macabre achievement in its own right.


I still like Olin Shiver's take on this: https://www.ccs.neu.edu/home/shivers/papers/why-teach-pl.pdf


There is something to the existence of fads and fundamentals. When I started, it was Object-Oriented-Programming (with multiple-inheritance and operator overloading, of course), Round-Trip Engineering (RTE), XML, and UML.

IMHO, the ideas were not bad, but the execution of them was. The ideas were too difficult/unfinished/not battle-tested at the time, and there was a desire for premature optimisation without a full understanding of the problem space. The problem is that most programmers are beginners, many teachers are intermediate programmers at best, and managers don't understand what programmers actually do. Skill issues abound. "Drive a nail with a screwdriver" indeed.

Nowadays, Round-Trip Engineering might be ready for a new try.


Olin also wrote the greatest acknowledgment section ever: https://scsh.net/docu/html/man.html


I always recommend that people learn at least one of each of the following: a scripting language, a compiled language, a bytecode-compiled language (C# or Java are the industry giants), and at least one front-end web language, either TS or JS. If they're still hungry I explain Erlang / Elixir / Gleam and tell them to try those out.


> Java and its OO relatives capture a communications-oriented model of computation where the funda- mental computational elements are stateful agents that compute by sending one another messages;

I wish even only half the OOP world actually understood it as the above.


Does every usage site have to change? You would alter fibonacci to be:

  fibonacci :: (MonadLogger m, MonadState (Int, Int, Int) m) => m Int
  fibonacci ...
and now of course all callers must support MonadLogger. But instead of using MonadLogger (or any mtl constraint) directly, you should just construct an abstraction boundary with a type class synonym:

  class (MonadLogger m, MonadState s m) => MyMonads s m
  instance (MonadLogger m, MonadState s m) => MyMonads s m -- needs UndecidableInstances
and now you change fibonacci:

  fibonacci :: MyMonads (Int, Int, Int) m => m Int
  fibonacci ...
And now if you need to add a monad or add Eq or whatever, you just change your type class synonym rather than every function. It's not a problem with the language; it's just programming with modularity in mind, even in the type system.


I have seen this in the wild. The result is often that every function has a kitchen-sink MyMonads constraint of which it only uses a tiny subset. It's death by a thousand cuts. If you make such a class for every monad combination, you get an insanely large number of classes. It's simply unworkable. Which is why you get the kitchen sink monad pattern.


If you think it's fine that you can log from all functions in other languages, then what's the problem with adding that constraint to all your Haskell functions to allow this?


The problem is that you should always write your code to be idiomatic in the language. In this case I feel like the Idiomatic Haskell way has serious drawbacks.

For example, It's fine in C to manually allocate/free memory, it's the way you have to write C. It's not fine to do the same thing in Rust. Even though you of course could do that in Rust as well.


It's perfectly idiomatic Haskell to annotate all your functions with an effect you believe they should all have.


The only reason this is idiomatic is that there is no better way. That's the entire point I'm making... Haskell prides itself on generic and reusable functions. This is then thrown out of the window with the kitchen sink monad. Very understandable, because everything else sucks.

That's precisely why I think this is a great shortcoming of the language.


It's not a shortcoming of the language; it's a shortcoming of the goal! You can't have both the goal of fine-grained effect tracking and the goal of not having to make fine-grained changes when effects change. They're incompatible goals in any language.

The strength of Haskell is that it allows you to achieve the first goal if you want. Most languages don't (pretty much no other language, actually).


That's an extremely limited point of view. Just because Haskell allows precise specification does not mean it cannot also allow loose specification. In fact that's one of the strongest values of Haskell's type system. For example you can overconstrain your head function

    head :: [Int] -> Int
But you can also just leave out the type signature altogether to let the type checker figure out the most generic type. You can have your cake and eat it too.

You can even almost do what I want with partial type signatures: just sprinkle them everywhere inside your constraints and GHC will automatically pick the right ones. At the call sites where you actually care about the definition, you can simply not use a partial type signature. The great disadvantage is that you can now introduce ANY constraints into your type signature, and you lose your types as documentation.

But that doesn't have to be the case.

You could have a constraint with something like `UseMonadSubset (...)` which works almost like a partial type signature: GHC should infer 0 or more of the monads inside `UseMonadSubset` as the actual constraint.

Then you could write something like:

    fibonacci :: UseMonadSubset m => m Int
    fibonacci = -- uses only MonadState (Int, Int, Int)

    -- Type checks because the type checker can see fibonacci ONLY uses MonadState
    foo :: MonadState (Int, Int, Int) m => m Int
    foo = fibonacci

    bar :: UseMonadSubset m => m Int
    bar = fibonacci
Which allows for precise specification if you want it, and if you don't, the type checker figures it out. You may even be able to implement this as a GHC type checker plugin.


It's an interesting idea but I can't say I feel that would solve a problem I've ever had. In fact, I always completely annotate top-level definitions with their types. I never want them inferred. And I've never felt it too burdensome to fix up a call stack when adding a new effect. But if you consider that a weakness of Haskell then so be it!


And what's wrong with the kitchen sink monad pattern? I've certainly used exactly that. And I have no problems with it.


Because your code is very much overconstrained at that point, for the same reason you don't add a `Num a` constraint to the list `head` function. You have now essentially fused your function to your codebase.


That's not a problem in business-logic-heavy code. Requirements change, and you could need previously unnecessary constraints at any time.


I think that is a fair assessment of that chapter. The goal of the chapter was to take a project that has never done any kind of optimization and to show an optimization engineering pass. Basically one has to be sure the implementation doesn't have any obvious easy to fix leaks before considering a different algorithm or something like that.

So I would argue that the real message of that chapter is demonstrating, step-by-step the methods used to find the memory leaks: info-table profiling and biographical/retainer profiling and ticky-ticky profiling.


A chapter dedicated to understanding laziness is indeed doable, but my target audience is Haskellers who have already read LYAH, Real World Haskell, and perhaps UPenn's CIS 194 course, each of which covers laziness. So I want to focus on things that should be more widely used or known, such as info-table profiling, the eventlog, or the one-shot monad trick.

But that doesn't mean laziness doesn't come up! For example, it's impossible to demonstrate using (or defining) unboxed or unlifted types without discussing laziness. The same goes for using GHC.Exts and explaining the difference between Data.Map and Data.Map.Strict.
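That Data.Map vs Data.Map.Strict difference can be made visible with an `undefined` value; a sketch, assuming the containers package is available:

```haskell
import Control.Exception (SomeException, evaluate, try)
import qualified Data.Map as Lazy
import qualified Data.Map.Strict as Strict

main :: IO ()
main = do
  -- Lazy API: the value is stored as an unevaluated thunk, so inserting
  -- `undefined` succeeds and we can still inspect the keys.
  let m = Lazy.insert "k" (undefined :: Int) Lazy.empty
  print (Lazy.keys m)
  -- Strict API: the value is forced to WHNF on insert, so the same
  -- insertion blows up as soon as the map is demanded.
  r <- try (evaluate (Strict.insert "k" (undefined :: Int) Strict.empty))
         :: IO (Either SomeException (Strict.Map String Int))
  putStrLn (either (const "Strict.insert forced the value")
                   (const "no exception") r)
```

Both modules share the same `Map` type; only the strictness of the operations differs, which is exactly the kind of distinction that can't be explained without laziness.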


Those books define laziness, and provide a couple examples - but they do not teach a programmer how to use it correctly.

Correct use of laziness involves choosing sufficient space invariants, implementing them, and documenting them. This is critical for writing efficient code in Haskell, and rejecting the "everything strict" cargo cult at the same time allows you to recover compositionality.

There was a period of time around 15 years ago, back before core libraries really started to understand this concept, and they would often have updates that silently made things too strict, breaking my code that was using their previous laziness in ways they hadn't predicted.

And it bugs me when I see new resources being set up to train people to write code that prevents my creative uses of their libraries. We should teach people how to write efficient Haskell code that's still Haskell code. It's great that we have so many advanced strictness tools when they're needed. But they shouldn't be reached for before we know if they help.


Great! If you could open an issue and perhaps lay out what you would like to see I would be more than happy to add a chapter like this. This book should serve the community and I think you've described a good gap that the book has which we could close with such a chapter.


You're right. I should at least give you an outline for the topic.


Author here! I figured it was only a matter of time before this showed up on HN after the Haskell Foundation announced we had moved it to the HF org. If you have any recommendations, then by all means please open an issue, but bear in mind that the book is still very much a work in progress, and most chapters are just todos at the moment.

My goal is to have a handbook that consolidates and demystifies optimizing GHC Haskell, because I think this resource is sorely missing from the Haskell community. That includes reading and understanding Core, Stg, and Cmm, as well as understanding the tools that already exist for GHC Haskell but are under-documented, in addition to the really advanced features, like altering the RuntimeRep of your data types to control their behavior at runtime. Needless to say there is a lot to do :)

