Defunctionalisation: An underappreciated tool for writing good software (gresearch.co.uk)
339 points by Smaug123 on March 9, 2020 | 128 comments



Great article, great technique, and a lot of on-point advice for intermediate-level programmers, BUT it has the same problem many functional programming advocacy posts suffer from: the code examples are given in a language everyday programmers will probably not recognize.

Don't get me wrong, I love functional programming. I use Haskell and OCaml with joy. I also read about Idris, F#, Elixir and such quite often but I can also remember when all of this was alien to me.

Useful advice requires an accessible set of examples, and the very first example in this article (a basic calculator) already makes use of sum types, pattern matching, higher-order functions and recursion, in a programming language with a relatively low adoption rate.

I don't know a solution to this issue that doesn't place an extra burden on the author. They could use one pure functional language alongside a widely used, strongly typed language (TypeScript, C++, etc.) in their examples, but that's probably too much to ask for.

Maybe my understanding of the target audience is wrong and my whole criticism is moot. Please correct me if that is the case.

I have friends with 5+ years of industry experience in languages like C#, JavaScript, Java and PHP, and they tend to confirm my claims about the accessibility of these types of articles.

Does anyone agree/disagree?


Speaking as an author who deliberately chose to use JavaScript as the language of communication...

I both agree and disagree with you. It is nearly impossible to make one post that communicates strongly to all audiences. If you use an example problem simple enough to be understood by everyone, you almost certainly give up some of the nuances that would be surfaced by a more complex problem.

Likewise, using a language like JavaScript that is popular but doesn't embrace modern FP, you necessarily end up reinventing a lot of wheels like composition and chaining and partial application.

This can be very instructive for the newcomer, but is nothing but incidental complexity for the more experienced functional programmer.

In the end, I think the world benefits from authors not trying to be all things to all people. We all win if there are a variety of posts about a similar subject, using different languages and solving different complexities of problem.

So yes, 100% this post is not going to meet a lot of programmers' needs. But if there is some set of programmers--no matter how small--for whom it is a good fit, then let's agree that it's valuable and well-written for its target audience.


I totally agree with you. Nobody is obliged to satisfy all audiences.

Just to make it explicit: I'm not suggesting JavaScript for communicating this type of post. Without strong typing, this kind of refactoring will immediately become a spaghetti of screaming ducks.

Defunctionalisation is a not-so-rare pattern to encounter, and I humbly think choosing a non-C-like language to explain it is a wasted opportunity to introduce it to the people who can benefit from it most (e.g. network programmers, game scripting tools programmers, distributed systems programmers, etc.).


Alas, the all-seeing eye of Hacker News prevents many would-be writers from publishing their findings. Even the theoretical chance of being criticized on Hacker News is an oft-cited reason I’ve heard from many coworkers for not publishing interesting findings.


This type of attitude suggests all criticism is bad, which I believe is not true at all. No serious writer would hold science back because of a probable debate on some random internet forum. If they would, they don't understand how science works.

The discussion in this comment section (for this post) has so far been friendly. The author is kindly giving up personal time to engage with readers. There are even people who have posted real code to elaborate on the topic for those who had a hard time following. If this is not HN at its best, I don't know what is.


I (author) agree with you. The target audience of this post was originally my coworkers, if I even targeted it at all; but there is certainly room in the world for more basic explanations targeted at people who aren't used to the idioms of functional programming.

The trouble is really that defunctionalisation is much, much easier if you've got sum types, pattern-matching, and higher-order functions. (It's not clear to me that it has any use at all if you don't have higher-order functions.) Is it worth the time trying to implement this sort of pattern in C#? I don't know. Insofar as C# is nice to write, it's because the IDE writes so much of the boilerplate for you, and no IDE is set up to admit this kind of pattern.
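
For readers who haven't seen the transformation before, here is a minimal toy sketch (not the article's example) of what those features buy you: a function value is replaced by a piece of data plus a single interpreter.

    // Before: the behaviour is an opaque function value.
    let applyTwice (f : int -> int) (x : int) = f (f x)

    // After defunctionalisation: the behaviour is inspectable data...
    type Op =
        | Increment
        | Double

    // ...plus a single interpreter that gives the data its meaning.
    let interpret (op : Op) (x : int) =
        match op with
        | Increment -> x + 1
        | Double -> x * 2

    let applyTwice' (op : Op) (x : int) = interpret op (interpret op x)

    // applyTwice' Double 3 evaluates to 12

The sum type makes the set of behaviours closed, and the pattern match makes the interpreter total; both are painful to simulate without language support.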


C#'s Expression<T> is almost exactly what you use in your example, baked into the language - you specify a lambda but the compiler emits construction of an expression tree.

https://docs.microsoft.com/en-us/dotnet/api/system.linq.expr...

https://docs.microsoft.com/en-us/dotnet/csharp/programming-g...


Indeed, we have used `Expr` and the `ReflectedDefinition` attribute internally in F#. However, if the user is capable of giving you literally anything, you'll struggle to optimise it! We've found it important to artificially restrict what the user can give us, by explicitly modelling the domain.
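
To illustrate roughly what that looks like, a minimal sketch of F# quotations (a toy example, not our internal code):

    open FSharp.Quotations

    // The compiler records the structure of the lambda as data,
    // rather than only compiling it to an opaque function:
    let negate : Expr = <@@ fun (x : int) -> -x @@>

    // The resulting tree is ordinary data we can pattern-match on:
    match negate with
    | Patterns.Lambda (v, body) -> printfn "a lambda over %s with body %A" v.Name body
    | _ -> printfn "not a lambda"

The trouble is exactly as described: once the user can hand you an arbitrary quotation, the optimiser has to cope with arbitrary trees.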


If you happen to see this: during an editing pass, it would help to include information about what language you're using to present your code examples, since the article stands alone for a wide audience.

Something like:

"For this section and the next, let us imagine the guts of a very basic calculator application, expressed in the initial algebra style, using [your language and environment]."


"(It's not clear to me that it has any use at all if you don't have higher-order functions.)"

It can be done. There are plenty of C codebases that end up having something defunctionalized eventually. It's arguably one of the main pressures that causes Greenspun's Tenth Rule [1] to be so accurate; as your codebase grows, the odds that you'll be forced to defunctionalize something significant in your code go to 100%.

It isn't any fun, though.

[1]: "Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."


> It's not clear to me that it has any use at all if you don't have higher-order functions.

The very origin of defunctionalization is to emulate higher-order functions in a language without them.

https://en.wikipedia.org/wiki/Defunctionalization

> defunctionalization refers to a compile-time transformation which eliminates higher-order functions, replacing them by a single first-order apply function.
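
Concretely, Reynolds's transformation looks something like this minimal F# sketch: each lambda that would have been passed around becomes a data constructor, with its captured environment as a field, and one first-order apply function recovers the behaviour.

    // Each function formerly passed to 'filter' becomes a case;
    // a captured variable becomes a field of that case.
    type Predicate =
        | IsEven
        | GreaterThan of int

    // The single first-order 'apply' of the Wikipedia definition.
    let apply (p : Predicate) (x : int) : bool =
        match p with
        | IsEven -> x % 2 = 0
        | GreaterThan n -> x > n

    // 'filter' is now first-order: no function-typed arguments remain.
    let rec filter (p : Predicate) (xs : int list) : int list =
        match xs with
        | [] -> []
        | x :: rest ->
            if apply p x then x :: filter p rest else filter p rest

    // filter (GreaterThan 3) [1; 4; 2; 7] evaluates to [4; 7]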


The author has indicated that this blog post was an internal communique which escaped confinement, so this isn't intended as criticism:

For a general audience, linking the first instance of the string "defunctionalization" to this Wikipedia article would help people get up to speed. I'm not embarrassed to admit that I had never heard of it, and as a consequence I'm learning more from the Wiki than I learned from the Fine Article.


It didn't exactly escape confinement; rather, I originally wrote much of this content for internal purposes and then realised it would be good to publish it externally.

True, since I linked "closure", I should probably have linked the main topic of the article. Honestly, though, I'm not sure the Wikipedia article is very comprehensible; I'm hoping mine is more so.


It's not a great Wikipedia entry, agreed.

I was left wondering if the topic is inherently that difficult or if this is just another case of le Wik being kinda dense for maths and comp sci. Looking around a bit at other links in this thread, I'm gathering it's the latter.


Here's [1] the first example in C# (using language-ext [2] to provide unions)

    static void Main()
    {
        var expr = TwoArg.New(Add.New(), Constant.New(1), Constant.New(1));
        var res = ExprModule.Interpret(expr);
    }

    [Union]
    public interface OneArgFunction
    {
        OneArgFunction Negate();
        OneArgFunction Custom1(Func<int, int> x);
    }

    [Union]
    public interface TwoArgFunction
    {
        TwoArgFunction Add();
        TwoArgFunction Subtract();
        TwoArgFunction Custom2(Func<int, int, int> x);
    }

    [Union]
    public interface Expr
    {
        Expr Constant(int x);
        Expr OneArg(OneArgFunction f, Expr x);
        Expr TwoArg(TwoArgFunction f, Expr x, Expr y);
    }
        
    public static class OneArgFunctionModule
    {
        public static Func<int, int> Interpret(OneArgFunction f) => f switch
        {
            Negate _         => x => -x,
            Custom1 (var fn) => fn
        };
    }
        
    public static class TwoArgFunctionModule
    {
        public static Func<int, int, int> Interpret(TwoArgFunction f) => f switch
        {
            Add _            => (x, y) => x + y,
            Subtract _       => (x, y) => x - y,
            Custom2 (var fn) => fn
        };
    }

    public static class ExprModule
    {
        public static int Interpret(Expr e) => e switch
        {
            Constant (var x)             => x,
            OneArg (var f, var x)        => OneArgFunctionModule.Interpret(f)(Interpret(x)),
            TwoArg (var f, var x, var y) => TwoArgFunctionModule.Interpret(f)(Interpret(x), Interpret(y)),
        };
    }

[1] https://gist.github.com/louthy/54e216373d71b7fb4ceb5c619ea32...

[2] https://github.com/louthy/language-ext/


Hey, when did language-ext get roslyn-based code generation? That's awesome!

Do you know if it's widely used yet? Any insights/gotchas to share? I see it's using CodeGeneration.Roslyn.BuildTime; I've wanted to use it to make my own sum types library for C# for some time now, but never got around to it.


A more in-depth reply :)

> Do you know if it's widely used yet?

Not sure which project you mean?

* Roslyn code-gen - not many

* LanguageExt.Core - yes, lots - 1.6 million downloads on nuget and counting

* LanguageExt.CodeGen - not much from what I can tell, but that might be because there's the runtime Record system, which gives a type structural equality/ordering/hashing and serialisation just by deriving it from Record<TYPE>, i.e.

    public class Person : Record<Person>
    {
        public readonly string Name;
        public readonly string Surname;
        public Person(string name, string surname) =>
            (Name, Surname) = (name, surname);
    }
It builds the IL and compiles it on first-use, so it's code-gen without any of the hassle of setting up the tool chain.

Some of the additional benefits of using the CodeGen instead are that the With function and lenses are built automatically, but that might not be enough to make people use it. Also, things like [Union] and [Free] (monad) are relatively new, so perhaps it wasn't compelling enough until recently. There are obvious benefits to compiling the code up-front rather than generating the IL, so I'd expect it to pick up over time.

I and my team certainly use it a lot ;)

> Any insights/gotchas to share? I see it's using CodeGeneration.Roslyn.BuildTime

None really, the CodeGeneration.Roslyn.BuildTime hasn't had much love, and a new version they built (to support plugins) just fundamentally didn't work, so it clearly wasn't tested properly. But, the version I'm using is stable right now, I just can't migrate to .NET Standard 2.1 until they bring the new one out, so that is a limiting factor for me at the moment.


> LanguageExt.CodeGen - not much from what I can tell

Yes, I meant that one.

> None really, the CodeGeneration.Roslyn.BuildTime hasn't had much love, and a new version they built (to support plugins) just fundamentally didn't work, so it clearly wasn't tested properly. But, the version I'm using is stable right now, I just can't migrate to .NET Standard 2.1 until they bring the new one out, so that is a limiting factor for me at the moment.

Ah, that's too bad. Visual Studio 2019 finally has a really nice and frictionless way to do code generation, but it is pretty complicated to bootstrap all the stuff that CodeGeneration.Roslyn takes care of. I wish Microsoft sponsored or supported it in some way. There is also a fork of it at https://github.com/mwpowellhtx/Code.Generation.Roslyn, but I don't know how well it works.

Thanks for all the answers!


I guess about a year or so. I have some docs on the wiki re the code-gen features. I’m writing this on my phone atm, so I’ll do a fuller reply to your qs when I’m back in front of my PC

https://github.com/louthy/language-ext/wiki/Code-generation


The idea of producing a description of some effectful computation that is later interpreted instead of generating the effects directly is certainly applicable more widely than just functional programming. For example, the Command pattern described in the book Design Patterns is broadly analogous in object-oriented programming and provides similar advantages. (Edit: On reflection, the Interpreter pattern might be a better example, but the main point stands.)


> It's not clear to me that it has any use at all if you don't have higher-order functions

Your example in the "Testability" section doesn't involve HOFs. I thought it was interesting to view that example as an instance of defunctionalisation.


I guess that's true. It was my inner compiler-writer speaking there. (To match defunctionalisation with the initial algebra pattern does require HOFs in all but the simplest cases, I think.)


> Insofar as C# is nice to write, it's because the IDE writes so much of the boilerplate for you, and no IDE is set up to admit this kind of pattern.

No offense, but it kind of sounds like you haven't worked with C# since 2001 or so.


I certainly wouldn't describe myself as a C# expert, true. I interact with it on a weekly basis, but no more than that. Please make allowances for the fact that I am spoiled by F#!


I'm the lead developer of a quite large Java code base, and I found the article super helpful in giving a name and shape to something I've half observed but really come to appreciate.

Over the past two years we've been slowly moving our code base in this general direction, eliminating Exceptions, for example, in favor of more detailed Result<ValueT, ErrorT> return types.

With lambdas, writing this kind of code does require more boilerplate in Java, but it's getting better.

Nonetheless, ADTs and pattern matching are sorely missed. The Visitor pattern is an alternative and is well worth it for key data structures, but it is too verbose for one-off usage.
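
For comparison, the kind of thing we emulate in Java is essentially built into F# (the error cases below are invented for illustration):

    // Result<'T, 'TError> is built in; only the error cases need defining.
    type ParseError =
        | Empty
        | NotANumber of string

    let parseInt (s : string) : Result<int, ParseError> =
        if s = "" then Error Empty
        else
            match System.Int32.TryParse s with
            | true, n -> Ok n
            | false, _ -> Error (NotANumber s)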


You can agree, but it doesn't mean you should have changed anything you did. The only thing I would add is the meta-analysis you have included here. To write your post in a language without those features would have either turned it into OOsagne or tripled the lines of code.

The meta-idea is to stay at a richer abstraction level for longer, and then drop down at the last moment for execution? One doesn't always have a chance to do this; being able to detect when to apply the technique would be a good lesson.

Rather than reworking the post to meet the imperative programmers in their safe space, it could entice them into your crystal palace.


> I can also remember when all of this was alien to me.

The only way to make it the norm is to make it the norm.

Maybe someone sees this...

  type Expr =
    | Const of int
    | CustomOneArg of (int -> int) * Expr
    | CustomTwoArgs of (int -> int -> int) * Expr * Expr
...instead of this...

  interface Expr {
  }

  class Const implements Expr {
      public Integer val;
      public Const(Integer val) {
          this.val = val;
      }
  }

  class CustomOneArg implements Expr {
      public Function<Integer, Integer> fun;
      public Expr arg;
      public CustomOneArg(Function<Integer, Integer> fun, Expr arg) {
          this.fun = fun;
          this.arg = arg;
      }
  }

  class CustomTwoArgs implements Expr {
      public BiFunction<Integer, Integer, Integer> fun;
      public Expr argA;
      public Expr argB;
      public CustomTwoArgs(BiFunction<Integer, Integer, Integer> fun, Expr argA, Expr argB) {
          this.fun = fun;
          this.argA = argA;
          this.argB = argB;
      }
  }
... and thinks "I should look into this more".

That said, I don't want to put words into the author's mouth, but this particular article looks more "by FPers for FPers" than FP advocacy.


For me it's not the code/language that is unintelligible (all code requires me to stop and try to interpret it in my head), but the fact that the article, at least to me, refers to too many things at once. Talking about dependency handling, using data structures with dispatch vs direct function calls, data types for errors vs exceptions, expressions, etc. makes the article dense to read and hard to skim.

Many programmers have used these techniques even outside FP and would be able to understand them one concept at a time - they aren't advanced concepts to me. It's more that it isn't "easy reading"; but it seems it's for an internal audience with assumed background knowledge.


The author said in a comment here that it was written for an FP audience.

The example you gave is similar to a discriminated union in ML, but it is not as robust, because the language will not check that every match expression (probably an if-instanceof ladder in Java) handles every case.
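
For instance, in F# an incomplete match over a discriminated union is flagged at compile time, which an if-instanceof ladder can't offer:

    type Shape =
        | Circle of float
        | Square of float

    // The compiler warns here (FS0025, incomplete pattern matches):
    // the Square case is not handled.
    let area (s : Shape) =
        match s with
        | Circle r -> System.Math.PI * r * r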


That interface is really playing the role of an abstract base class here; in practice you might equally declare it as one and extend it.


I’m a developer with nearly 8 years of experience, mostly in the JavaScript and Java/C# space, with a couple of years of experience in Go. I’ve dabbled in functional languages like F# and Haskell. I found the article difficult to understand and the code examples off-putting, so I agree with your assessment.

That being said, I may not be the article’s target audience.


Hah, I was worried I was alone in thinking that (or at least, something very similar). When he went into concrete examples of defunctionalization, I saw that it was Haskell (or some ML-style functional language), and I rolled my eyes.

I thought, "Okay, great, you're working in a language where you already effectively have to think of it as, 'I'm not telling you how to do this, just defining the concepts.' Fine, but the language has already done most of the work that you ascribed to defunctionalzation. It's already very highly abstracted."

I would have hoped to see defunc applied to an environment that didn't already have the functional lattice to build off of.

And for my part, yes, I have worked with Haskell, enough to do some problems on Project Rosalind and build some codebreakers for classical ciphers, but I still find it hard to read and debug. It just isn't conducive to conceptually breaking down what the code is doing into bite-size pieces. That's probably because I, like most programmers, grew up with a procedural mindset and started from languages that enforce procedural thinking. But I suspect it goes beyond that, in being harder to reason about logic as "apply this function to this function to this function which operates on this".


> Okay, great, you're working in a language where you already effectively have to think of it as, 'I'm not telling you how to do this, just defining the concepts.'

I don't think this is really the same thing. You're discussing the object level; I'm discussing the meta-level.

Haskell is based around "define the concepts as you would naturally define them and I'll use that definition to create a program". However, this is entirely implicit in the language. Defunctionalisation explicitly reifies this, allowing the language to discuss how it does it.

I see your point as basically "what's the use of continuations? I already have `goto`".


I confess a lot of this is over my head. So, not a lot to contribute further here. However:

>I see your point as basically "what's the use of continuations? I already have `goto`".

If I were trying to convince skeptics of the merits of structured programming (i.e. no goto), I wouldn't start from Java and merrily introduce try/catch in that context. I'd start from a BASIC program, explain the semantics of e.g. a while loop, replace some program's gotos with while loops, and then tackle increasingly "stubborn" gotos.

And yes, I hope you would be able to explain the benefits of continuations to a goto-fanatic, regardless of how much inferential distance there is.


I think it's unreasonable to expect a basic functional programming tutorial in every reasonably advanced article.

If you want to learn basic functional programming, look for basic functional programming tutorials.


This is why academic articles (should) contain pseudocode. I prefer functional concepts illustrated using ML-family syntax over equivalent C-family code, because C-style syntax tends to be verbose, full of boilerplate, and overall not as well suited to expressing functional concepts. I personally find myself context-switching, struggling to incorporate information I learned in a functional context into an imperative context, and vice versa.

I'm a strong advocate of pseudocode. I've had a few situations where I explained something online in Python, people complained they didn't know Python and couldn't understand it, so I just sort of arbitrarily changed the syntax a bit so it was inconsistent, and people reacted much better to it. Throw in a curly bracket and a semicolon, maybe an arrow or two, write a line in English sometimes, invent syntax on the fly and then use completely different syntax on the next line. You would think that would harm comprehension, but actually I've found that it focuses people on what's important and gives them an idea of what to follow along with: the ideas, instead of the syntax, which differs from one line to the next.

That said, I can see ADTs and pattern matching needing care to make the article understandable to people on the fringes of your target audience; but if you can't handle functions and recursion, then you're not a programmer with sufficient expertise to make use of the article in the first place. The target is already functional programmers who know enough about functional programming that stating a problem in purely functional terms is a prerequisite for the refactoring technique. If you can't state things in terms of first-order functions and recursion, then you can't perform the underappreciated refactoring the article is about.

In conclusion, use pseudocode and imagine your audience is slightly sleep deprived from having a screaming baby at home.


I've used F# for real, non-toy code (admittedly years ago, so my memory is very hazy, and being a beginner my code likely relied on too many C#-isms), and I unfortunately found this article impenetrable, so I agree. Part of it is that I'm not familiar with what an initial algebra is, and writing a calculator seems very abstract compared to the actual code I'm likely to write.

Looking up the definition on wikipedia definitely didn't help: "In mathematics, an initial algebra is an initial object in the category of F-algebras for a given endofunctor F". Oof, I think I would need semesters worth of higher level university math to even begin to understand that.

I'm probably just not the target audience for this, but I frequently think people avoid functional languages because they (appear to) require an understanding of math that the vast majority of coders have never been exposed to.


I've asked the speaker of "Initial Algebras for the Uninitiated" whether we can get the video of that talk (which is superlative). I promise the idea is actually quite natural and does not require maths at all, but it does need to be taught, and it is probably quite hard to pick up from the slides alone.


> the code examples are given in a language everyday programmers will probably not recognize

The code examples are given assuming the reader already knows the language and runtime -- not once in the article does it say what this language is or explain the syntax/grammar rules. I would take it to mean this isn't meant for a broad audience, or the author has made a common curse of knowledge[0] mistake.

[0]: "The curse of knowledge is a cognitive bias that occurs when an individual, communicating with other individuals, unknowingly assumes that the others have the background to understand." https://en.wikipedia.org/wiki/Curse_of_knowledge


You're right, I had simply assumed that F# was readable. Interestingly, four proofreaders did not pick up on this either.

Parenthetical aside: I could have been rescued from this by a sufficiently advanced Markdown renderer, and in fact the HTML source is annotated with `lang-fsharp`, but that metadata did not make its way up to the reader.


Tell them it's like the command pattern in OO.


It's really more like the Interpreter pattern (another one of the original GoF patterns)[0]. Instead of doing something directly, you create a description of it, and then provide an interpreter to do the work elsewhere in the program. That leaves you free to serialize/optimize/test/etc. the description prior to or instead of actually executing the work.

[0]: https://en.wikipedia.org/wiki/Interpreter_pattern


That's an interesting comparison.

On one hand, it feels similar, especially if the Command class is "sealed".

On the other hand, it feels like completely the opposite since the command classes contain the code they execute and act like lambdas.

I guess you could apply a visitor pattern on top of the command pattern to make it truly like defunctionalization. I hope nobody actually does this.


You're just describing an AST interpreter. Not only do people do this, but it's a reasonable thing to do.


In fact, the command pattern is the opposite in a certain way. Defunctionalization corresponds to deeply embedded DSLs, whereas the command pattern corresponds to shallowly embedded DSLs.

In a proper functional language, you do not need objects for the latter, as you can directly compose functions.

That being said, the ability to add more operations than just "execute" to your commands allows you to emulate the features of deep embedding (e.g., serialization). But you have to extend every command for a single new operation (read up on the expression problem). That style is known as tagless in the FPL community, btw.


I think programmers who haven't been exposed to a language like F# or other OCaml/ML languages should take time to broaden their horizons. It'll make them a better programmer by exposing them to new concepts and new ways of thinking.

I learned Lisp and was into Clojure for a while - do I write it professionally? No, but I've adopted some of its ideas and tenets in my own code for work.


For this article, maybe the author could have used Swift. The key idea of the article is to use a recursive tree structure to describe a future task. To build that tree structure in Swift, one could use recursive enums with associated values (e.g. https://www.pointfree.co/episodes/ep26-domain-specific-langu...).


I think JavaScript should be the perfect language for explaining these concepts. Many programmers understand JavaScript, and it has higher-order functions.

And many programmers use JavaScript so they could easily start adopting this mechanism in their actual programs.


But it lacks ADTs. And defunctionalization without ADTs is way more complicated to explain due to the necessary boilerplate.


It might be easier in TypeScript, which at least has a type system that can say "or".


It would go a long way if the post used a Rust-like syntax, even if it's not a real language. Perhaps ReasonML, but I think even that does away with more parens than necessary, IIRC.


What I am wondering is how that compares to how we think, since different human languages cause us to think in structurally different ways - the composition of a sentence follows different rules, e.g. German vs English.

German verb and subject order is reversed relative to English sentence structure.


Junior programmers who don't write compilers don't need to know about defunctionalization. When they start writing compilers (or whenever the curiosity overcomes them) they can spend an hour learning basic ML syntax, which is much more readable than Algol for discussing these topics.

It's not necessary to Cobolize everything to make it superficially acceptable to people who refuse to learn basic language syntax.


Junior programmers may not be writing compilers, but there are at least two scenarios I can think of where defunctionalization can become useful to the mediocre programmer:

1. Serializable, scriptable configuration parsing (e.g. as used widely in video games such as visual novels)

2. RPC over a network where the request/response paradigm isn't applicable (e.g. a distributed state machine, a language server, etc.)

Both of these can be worked around with existing techniques, but they can also benefit from defunctionalization for much better debuggability.

In my comment, I stated that even non-junior programmers have a hard time reading these kinds of articles, and I believe they already deal with problems like 1 and 2 in their careers.


> Junior programmers who don't write compilers don't need to know about defunctionalization.

Well, it depends. My experience is that being aware of how lambda lifting/defunctionalization works actually makes first-class functions/HOFs a lot easier to understand as a language feature, even for a novice programmer. It's precisely the "missing link" between that language feature and the usual case of first-order functions/subroutines. The same goes for other "high-level" features known from FP and other programming paradigms - there are way too many of those to list, really.


As a "junior programmer", we aren't the only ones who need this. There are a lot of us in this new wave of programmers who have been learning about and working to explore the material in the hard CS(i.e FP and type system) fields.

There are a lot of "senior programmers/developers/engineers/whatever other term you want to use" who don't know about these techniques. More broadly, they don't know about a lot of the features and functionality of functional programming and, more generally, higher-order (as in function) programming paradigms.

If we don't write introductory articles in a way that those outside the subfield can grasp, we alienate large groups of new-guard and old-guard programmers who could benefit greatly from understanding the techniques that not only help them improve their code and productivity, but also allow them to use high-level features without paying for the cost of them.

Anecdote: I've worked with a senior engineer on a C++ project. We used C++11, but effectively we wrote C++98-style code, because they had largely unfounded fears not only about the performance impacts of utilising new features and FP paradigms in the codebase, but also about the debugging and readability impacts of using them. I found that many of the articles and videos I sent them to try to ease those fears fell flat because they weren't targeting them as the audience. It took me slowly writing up internal memos and presentations on these various features, and showing how they came at little to no cost while providing their benefits, for our team to actually start using them. Now they like to joke about how "in 2019 I did the impossible and dragged them kicking and screaming into 2011".

On your last comment: you claim that we shouldn't need to "Cobolize everything to make it superficially acceptable to people who refuse to learn basic language syntax", but the issue is a matter of priorities. For a new engineer, learning the language is important; for senior engineers and leadership, however, it is not the most important concern. The important things are the following, in order:

- Actually delivering a product.

- Making the product stable, easy to maintain, easy to debug/validate/test, and easy to modify/improve without introducing issues.

- Making sure that the product ages well, handles staff churn without losing design intent, and doesn't unnecessarily accrue technical debt.

- Ensuring that new staff can be easily on-boarded and become productive.

It should come as no surprise that senior staff are hesitant to spend time on what could turn out to be fads or misinformed design choices. They won't waste time learning something they don't think will be useful or pay off, and as a result, unless introductory content includes them in the intended audience, they will never get a chance to find value in these concepts.

One last note: sorry if I rambled a bit; I'm typing this up quickly while I take a break, and I've been a bit all over the place recently. If I don't seem clear about anything, ask me to clarify and I will.


I might explain this concept to OOP-minded programmers like this:

Sometimes, you can improve your code by having it return a description of what to do, rather than doing the thing directly. This is like in SQL where you might return a query plan, rather than executing a query. Once you have this description, or plan, you can analyze, transform and inspect it before passing it to some execution engine that actually does the work.
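
In the article's language, F#, that might look like the following sketch (all names invented):

    // The program produces a plan (plain data) rather than effects:
    type FileOp =
        | WriteFile of path : string * contents : string
        | DeleteFile of path : string

    // Pure logic: testable by asserting on the returned plan.
    let planCleanup (tempFiles : string list) : FileOp list =
        [ for f in tempFiles -> DeleteFile f ]

    // The "execution engine" lives at the edge of the program:
    let execute (op : FileOp) =
        match op with
        | WriteFile (p, c) -> System.IO.File.WriteAllText (p, c)
        | DeleteFile p -> System.IO.File.Delete p

    // In production: planCleanup files |> List.iter execute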


Thanks for that synopsis; when put that way, it seems similar to one of the takeaway messages from the talk "Constraints Liberate, Liberties Constrain", especially the early example in the talk of making a description language for printer commands being better than just executing the commands directly.

(https://www.youtube.com/watch?v=GqmsQeSzMdw&feature=emb_logo)


> I might explain this concept to OOP-minded programmers like this: Sometimes, you can improve your code by having it return a description of what to do, rather than doing the thing directly.

...which requires you to turn functions into data structures and then interpret those.

Meanwhile, in OOP land you have objects, which can be seen as self-interpreting data structures.


I don't claim that this technique is impossible in OOP; it's just not as natural without discriminated unions and match expressions.


You're missing my point. In OOP, every object is computation represented as data. It's not some kind of "unnatural" design pattern a programmer should concoct in a special way. Erasing the distinction between data and procedures (or getting rid of both, if you look at it another way) is one of the fundamental ideas that led to the creation of OOP in the first place. It sounds like Alan Kay was more interested in getting rid of data, but it does work both ways.

Thus, saying "let me explain this to OOP developers" is highly ironic.

If that's not clear, let's go back to your description:

>Sometimes, you can improve your code by having it return a description of what to do

Every object is (or at least should be) "a description of what to do" in the exact sense you're using here. This is crucial to understand for properly using OOP.


I think that's a vision of OO that not many OO programs I've seen actually follow. It's more common to see OO programs where object methods represent the computations, not the whole objects, while the bundled data is used to store the resulting state of the computations, with much or all of the data hidden to preserve invariants for that state. Acting directly on the data like that, without an intermediate representation of the computation itself, is just the sort of liberty that the video I mentioned in the sibling post warns can turn into unwanted constraints in some cases.


>I think that's a vision of OO that not many OO programs that I've seen actually follow.

Maybe not, but it absolutely was part of the original vision. This is why Smalltalk 80 implements if/else statements and loops as methods, rather than keywords. A boolean in smalltalk is not just a value. It's a latent algorithm for choosing between two blocks of code at some later point in time. It's also a latent algorithm for operating on other booleans via binary logic methods. Until you start seeing objects this way, you will not be able to appreciate the elegance of object-oriented programming.

Here are some examples of how this can work in non-trivial scenarios:

http://www.vpri.org/pdf/tr2007003_ometa.pdf

https://bracha.org/executableGrammars.pdf


> Every object is (or at least should be) "a description of what to do" in the exact sense you're using here. This is crucial to understand for properly using OOP.

I think you may have missed the point. In most OOP languages, an object exposes some capabilities (methods that can be called, properties that can be read, etc), but it does not describe itself very well at all. This is why we have RTTI, instanceof etc. to switch on what an object is. Alternatively, we can use the visitor pattern. As the OP states, these are not as elegant as DUs and matching.


I think what the parent means is that even if you have data in your object, the (OOP) language still doesn't have discriminated unions or match expressions.

Also, if you call that method on the (OOP) object then "it still does the thing" - rather than returning a description of the thing.

The author talks about a different (meta) way of programming where you create data types for all the actions the program should be able to perform, then have your functions modify these descriptions, and, as the very last step, interpret the description/run the program.


The fact that almost every modern use of OOP consists of objects with numerous methods all with differing behavior essentially contradicts what you've stated.

This concept is not OOP.


There are, of course, tradeoffs to using defunctionalization (I've also heard this called a "data DSL"). I have seen these tradeoffs often ignored or argued past when discussing various solutions that take advantage of it.

The cons to defunctionalization that I have experienced are things like:

You now not only need to test your application, but also the runtime that turns this data representation into actual work. Bugs can now occur in the runtime, in the reification of the app logic, or in the integration between the two.

If your application is particularly concurrent or lazy, then ensuring that your DSL works well with your main language's concurrency and laziness machinery can get pretty hairy when you start executing your side effects.

It becomes harder to leverage your language's developer tools; breakpoints and debugging often end up in your DSL runtime's code, not your application code, often requiring special-purpose tools to be built to debug your defunctionalized DSL.

Performance can also be a double-edged sword. On the one hand, you can do some very clever things: use tricks like memoization, all the way up to writing a JIT compiler for your defunctionalized DSL to improve performance. However, you're taking on that work because your main language's runtime can no longer do it for you. Often these data DSLs end up allocating a lot of data structures that are parsed and thrown away later, and those allocations add work in the parsing itself and in cleanup.

I also heavily question the efficacy of testing these data DSLs. It is objectively easier to test pure functions, but on the other hand, how do you validate that they are correct? Often we don't care about the actual data representation; we care that it does the work it describes. Properly testing them then essentially becomes a re-implementation of the DSL runtime with mocks/etc.

For a concrete example of all of these tradeoffs, take a look at React in the webdev world. React is unequivocally a good idea, but it has required a massive investment from the React team and the ecosystem to make it correct, make it fast enough, to create developer tools for it, and to figure out how to effectively test applications that use it.



There appears to be a fuller version of this talk on the author's own site: http://www.pathsensitive.com/2019/07/the-best-refactoring-yo...

Agreed it's quite good.


Yes, I found this to be the best presentation among the several articles mentioned so far in this comment section.


Author here: I'm happy to elaborate on any of this, except where I've signed NDAs about specific projects.

Also an obligatory "we are hiring" on behalf of G-Research; feel free to get in touch at patrick.stevens@gresearch.co.uk or patrick+hn@patrickstevens.co.uk if you're interested in quant finance research/development in central London.


G-Research should come with a health warning about just how litigious and paranoid around IP theft they are. My understanding is that you can't have a personal phone whilst at work, you're weighed on the way in and out, and they sent one of their former quants to jail for several years.

One might suspect that the periodic renames (De Putron, Gloucester Research, G-Research) are mostly a tactic to distance themselves from the negative image.

1. https://www.bloomberg.com/news/features/2018-11-19/the-tripl...


It's quite common in the quantitative finance world. Unscrupulous people steal a model and try to peddle it to a competitor. The thing is, most models only work if no one (or only a few others) is doing the same. Usually, when approached, most firms stay above board and report it to the person's employer.

Citadel, for instance, sued a former employee for stealing a model (he emailed the source for a model to his personal email). He was sued in federal court at 8am on a Monday and fired at noon. Criminal charges came months later. He tried to dispose of the evidence by tossing hard drives in the Chicago River; dive teams were involved to recover the drives. But Citadel already had all they needed, because they monitored all outgoing and internal communication, including MITMing SSL email services.


I'm fully aware of the sensitivities of the quant finance world, but there are plenty of high-quality workplaces out there that don't have the extremely invasive approach to security that e.g. G-Research does.


So it looks like the whole purpose of this company is to make one (very unpleasant) person richer. What a reason to get up in the morning.


This is a technique worth talking/writing about a bit more.

Some time ago, Jimmy Koppel also wrote somewhat extensively about defunctionalization and refunctionalization as "refactoring" techniques -

http://www.pathsensitive.com/2019/07/the-best-refactoring-yo...


That's a cool talk which I'd definitely recommend to people!

Is there something in particular that you'd like to see "a bit more"? Are you imagining e.g. a more extended walkthrough of how to realise the benefits of defunctionalisation on a real-life program, or some more theoretical background, or some of how it's used in compilers, or anything in particular?


More real-life cases. The classic cases like compilers are covered well enough, but I think we could use more examples from a wider range of domains. For those who use the technique, I think it feels like second nature, perhaps even like "not a big deal really", but the duality is critical for building flexible systems. It's only a feeling I have at this point, though.

I can offer a small example to start with. Steller [1] is a sound modeling library that's written primarily as higher-order functions. It does, however, expose a "specFromJSON" function [2] that offers serialization through (informal) defunctionalization.

[1] https://github.com/srikumarks/steller

[2] https://github.com/srikumarks/steller/blob/master/src/stelle...


The second definition of module Expr appears to have a typo:

    | TwoArg of OneArgFunction * Expr * Expr
Should this instead be:

    | TwoArg of TwoArgFunction * Expr * Expr


You're quite right, thanks. I'll try and get this fixed up tomorrow.


The "Testability" example also seems to have a typo in the definition of describe. Specifically the first error case, it uses the constructor Odd, when Even is the one defined (and the error case itself).


Isn't this strongly related to monads?


Somewhat. A monad allows for representing an effectful program as a data structure, but it's not very "introspectable" by itself (which is what this article is advocating). For example, you can pass a monad around, but you can't tell what it will do. All you can really do with a monad is execute it and find out (similar to how a regular function is "opaque" - you can run it with various inputs but can't know ahead of time what it plans to do with them).

However, if you were to use "free monads", you get closer to what the author is talking about. Free monads are a special implementation of monads that don't actually _run_ any effects - they just build up a benign data structure that can be interpreted later (possibly as an effect).

To do defunctionalization, you don't need monads though - you just need to define data that represents operations. Monads can just provide convenient syntax to do this while looking like a sequential program.
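
A rough sketch of that free-monad flavour in F#, with instructions as data carrying their continuations (names invented; a real free monad would add Bind and computation-expression syntax on top):

    // Each effect the program may perform is a constructor; the whole
    // program is a data structure, not a running computation.
    type Program<'a> =
        | Done of 'a
        | ReadLine of (string -> Program<'a>)
        | WriteLine of string * Program<'a>

    // One interpreter runs it against the real console...
    let rec run (p : Program<'a>) : 'a =
        match p with
        | Done x -> x
        | ReadLine k -> run (k (System.Console.ReadLine ()))
        | WriteLine (s, rest) ->
            System.Console.WriteLine s
            run rest

    // ...while a test interpreter could feed canned input instead.
    let greet =
        WriteLine ("Name?", ReadLine (fun name -> Done ("Hello, " + name)))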


You could draw an analogy that monads are to text as defunctionalization is to an AST.


Insofar as everything is related to monads, yes ;)

In fact you don't need to use monads at all here. I've used this pattern on merely applicative non-monadic initial algebras before; in fact, the article's "arithmetic expressions" example isn't what I would call "monadic", because it doesn't readily admit Bind as currently phrased.

Worse, the "writeSmallOddToFile" example isn't even generic at all, so I don't think there's a reasonable sense in which it could even be said to be functorial.


Fantastic article. I didn't know about this FP terminology but this seems to be a common, general concept.

While reading it immediately reminded me of:

(1) re-frame (ClojureScript) and Redux (JavaScript), which are both web-frontend libraries for managing state and event handling. Browser events dispatch plain, serializable data instead of invoking behaviour directly, which are in practice tagged union types, kind of similar to the provided examples? (I'm not very familiar with the syntax in the article).

(2) it seems to be a common idiom in Rust to move branching logic into pattern matching and enumerations. So defunctionalisation is applicable here and is likely very common to refine Result types and defer execution.

(3) as user edflsafoiewq mentioned this is strongly related to the OO Command Pattern


Prior Art: "The Best Refactoring You've Never Heard Of"

http://www.pathsensitive.com/2019/07/the-best-refactoring-yo...


I think this is a much better introduction than TFA, thanks.


This seems to be advocating turning everyday programming into compiler programming. It's as if it took "every sufficiently complicated program reimplements half of Lisp", decided that was an admirable goal, and set out to explain how to do it well.

I only skimmed the initial algebra talk linked to, but it seems to mirror what any given programming language does with your code. It takes your text, does some processing to turn it into an AST, then turns that into a compiled program and runs it. AFAICT the initial-algebra suggestion says to basically program directly with your AST wherever you can, and then write code to turn that into a program. Now you've written an AST and a compiler; but naively interpreting that AST will probably be slow, so maybe you work in some optimizations. And you don't always write correct code, so you have to add a debugger too.

Now you've just written a programming language and toolset. Why not just use an existing language and mature toolset? Is it because your DSL might be constrained to the original problem well enough to be simpler than anything general purpose?

Replacing e.g. `map` and a native function with `MyMap` and a few of my own functions seems to be throwing out all the good that comes with a mature language ecosystem.


Yes, I think you've basically summed it up very well!

One of the reasons we use this pattern internally is because we have quite specific performance requirements which are not well-served by your standard language compiler/runtime. If we handle some of this compilation workload, we can make sure we emit constructs in the underlying runtime which have the right performance properties. We also want to make sure our users don't really need to think about this sort of low-level mucking around with performance; so we use the initial algebra to expose natural data-driven abstractions to them, which we then carefully manipulate into the right forms for the .NET runtime to have our desired performance properties.

This is an area where F# really shines. Its "computation expressions" make it really easy to construct DSLs embedded in F#. We offer our users this DSL, and we promise that anything written using this DSL will be appropriately fast and safe; but we do sometimes give them an escape hatch. By encouraging the user to stick to this heavily restricted DSL, we make it easier for them to write code that we guarantee will perform well. If the code doesn't perform well, that's a big problem, but crucially it's a problem for the devs to solve, not for the quant researchers to waste hours digging into.

Entirely separately from the above, we can also offer certain safety guarantees by restricting to our DSLs. One project in particular has involved taking something that previously existed but required a lot of manual procedural bookkeeping on the part of the user, and extracting the "intent" of the library in a purely data-oriented DSL. As long as the user sticks to our DSL, they don't need to consider how their computations are sequenced; we'll do that for them, in the process of converting from the DSL into the underlying system.


> we have quite specific performance requirements which are not well-served by your standard language compiler/runtime. If we handle some of this compilation workload, we can make sure we emit constructs in the underlying runtime which have the right performance properties.

Is it just me or does this sound a bit like Common Lisp's compiler macros (http://www.lispworks.com/documentation/HyperSpec/Body/03_bba...)?


Are you effectively interpreting it, or are you compiling it to something else? I'm really curious how you do this in a performant way.


We have places where we ultimately emit IL (the bytecode of .NET). However, usually we just emit F#, restricted to certain constructs which are allocation-free and so forth.

We aren't producing F# in the sense that we have to invoke the F# compiler ourselves, though. In the calculator example, given the description `Add (Negate (Const 5)) (Const 3)`, we might ultimately end up having assembled an F# function in memory like `fun () -> -5 + 3`; when this function is invoked, the value `-2` will be calculated. Ultimately we usually put together the functions in the usual way you make a function: by composition of smaller functions. The very simplest expressions like `Const 5` have a simple template like `fun () -> 5` which we can just directly produce; more complex expressions have to be interpreted recursively into F# function objects.

I agree it's a bit hard to explain unless you're trying to solve a less trivial problem :(
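
In toy form (this sketch is illustrative only, not our production code), staging the calculator example might look like:

    type Expr =
        | Const of int
        | Negate of Expr
        | Add of Expr * Expr

    // Interpret the tree once, up front, into composed closures;
    // invoking the result does no further dispatch or matching.
    let rec compile (e : Expr) : unit -> int =
        match e with
        | Const n -> fun () -> n
        | Negate x ->
            let cx = compile x
            fun () -> -(cx ())
        | Add (x, y) ->
            let cx = compile x
            let cy = compile y
            fun () -> cx () + cy ()

    // compile (Add (Negate (Const 5), Const 3)) () evaluates to -2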


Lisp hacker summary.

Defunctionalization is the elimination of function objects (closures) from run-time representations.

Functions that don't carry an environment (such as global functions) can be replaced by symbols. For instance, a simple calculator's binding for the negation button can use a symbol such as Negate instead of the #<closure> for the negation function.

Functions that carry an environment can be replaced by objects which represent that information with explicit properties. The object can somehow later be used as if it were a function anyway. (Either a closure can be made which references the object in its environment, or else the object itself can be callable like a function.)

These representations are more readable than #<closure> when debugging, usefully susceptible to manipulation by code, and susceptible to validation.


In the testing example, I honestly don't see the value of the suggested approach over passing in two continuations. In production, the actual I/O methods can be passed in, and in testing we can pass in functions that validate their arguments.

This "dependency injection" type approach would be... functionalisation (?) i.e. the opposite of the suggested approach. But it also leads to greater decoupling.

I often find refactoring in the spirit of functionalisation more powerful, because then what I do becomes less about implementing a specific piece of logic, and more about creating a robust library of combinators that can be puzzled together to implement the desired business logic. In my experience, getting combinators right is easier but also more productive.

With the suggested approach, changes to the logic require mirrored sets of changes across many locations (because of the expected protocol of discriminated cases), whereas the functionalised code requires changes only in the continuations or in the combinator, but generally not in both.
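
To make the contrast concrete, the continuation-passing version might look like this (a loose sketch modelled on the article's writeSmallOddToFile; names invented):

    // Dependencies are injected as functions:
    let writeSmallOdd (writeToFile : int -> unit) (reject : string -> unit) (x : int) =
        if x % 2 = 1 && x < 100
        then writeToFile x
        else reject (sprintf "%d is not small and odd" x)

    // Production passes real I/O; tests pass lambdas that record or
    // assert on their arguments. No intermediate data type is needed,
    // at the cost of the call being opaque rather than inspectable.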


> I often find refactoring in the spirit of functionalisation more powerful, because then what I do becomes less about implementing a specific piece of logic, and more about creating a robust library of combinators that can be puzzled together to implement the desired business logic.

The problem here is that your state is almost always transient and non-inspectable. Almost all the complexity in any non-trivial program is in the state, whereas the transformations are usually trivial and things like combinators don't really matter one way or another.


What programming language is that?

I can't read it well enough to understand the article.


It's F#, a dialect of ML, related to OCaml and running on .NET. Think "the love child of Haskell and C#".


Looks like Haskell to me.


It's similar, but Haskell uses :: for types and : for building lists, while the other ML-family languages (and TypeScript) use : for types, and some use :: for lists.

Also Haskell doesn't have modules, so I thought it was ML. Though I think someone said it was F#? I don't know those well enough to tell them apart.


Haskell has modules.


Uh, parametric modules? Haskell tends to use typeclasses instead. (Yes, I know each file gets called a module, but that's not really the same as seeing a "module" keyword in the file.) Unless Backpack counts? I haven't been following it.

I know there's a thing in Coq/ML/OCaml which uses the keyword `module` and can do some of the same things typeclasses can do, but they aren't exactly the same. I don't know them well enough to explain them, but I know Haskell doesn't have them.

https://gitlab.haskell.org/ghc/ghc/wikis/backpack


ML-like languages all look the same, but compile in completely different ways.


The article makes a lot of references to an 'initial algebra', and there's a link, but it goes to a PDF of a talk. I didn't get anything useful from just the PDF itself.

Is there a video of the talk?


When Skills Matter went bust, the only public recording I know of that talk vanished from the Internet. They've found a buyer now, though, and hopefully the talk will soon return. I will suggest to the talk's author that he give it again so that he can put the recording somewhere more permanent.


The article was good enough. It lacked any mention of the many drawbacks this approach can have[1], but it was a well-written description of it.

Many of the comments, however, are... not quite as good; there was a bit of elitism and maybe even zealotry[2]. As usual, it soured an interest in FP (not that I've ever needed it) that an even slightly humbler approach might have fostered.

Now I'll dabble a bit in the self-congratulatory tone of some of the comments: I have never written a line of F#, and the last time I dabbled with a primarily functional language was a couple of toy projects in Clojure (quite different) years ago, and yet I was able to follow along with the code and the explanations.

Functional approaches aren't all that different or revelatory for someone with enough development experience[3] and it's just a different approach, who would have thought it?

[1]: Not unexpected; functional programmers who've convinced themselves that FP has no drawbacks are unfortunately common. I'm not saying the author is one of those, though; he probably thought the article was long enough as is.

[2]: Also not unexpected in any discussion about paradigms, especially functional programming.

[3]: I've seen the general idea discussed here in different more multiparadigm languages.


I don't completely understand this technique, so maybe I'm off in my interpretation of it. But it seems like something, flexibility-wise, between hard coding things and embedding a scripting language.

Back in high school I liked to work on MUDs (multiplayer text roleplaying games). It feels like this is sort of thing we resorted to a lot. Content creators that couldn't code could use a text UI to build up spells, skills, etc that the code would use to apply the specified effects. We had basic things (such as "fire damage") and more generic modifiers (such as "make the next thing apply to everyone close to you rather than just 1 person").

And every step of the way other subsystems could intervene and modify the outcome, so someone could e.g. create a monster that nullified all fire damage someone tried to deal to anyone nearby.

For people that did know how to code, we had a basic scripting language for more flexibility. But these scripts still plugged into the above system.


Yeah, it's very similar conceptually to adding a scripting language. But the point isn't to add user extensibility to the system, it's to have an introspectable description of the program.

So in the scripting analogy, what you'd do is take the program you wrote in C++ or whatever, and design a little mini-language in which you can express all of the functionality needed to write the game. For example, C++ features like template metaprogramming aren't something you need to write games, but "spawn an NPC" is a very common task. You reduce the language to just what's needed to write the game.

The difference from a scripting language is that instead of the game being written in C++ with a little interpreter for the scripting language embedded in it, the C++ is just a compiler for the scripting language. You take the script description of the game and compile it down to the final game.

That little level of indirection allows you to do things like optimization, but also lets you do things like easily swap out how the program is evaluated. For example, in a graphical game, you could skip rendering to the screen, or advance time in jumps instead of needing to wait for timers, etc.
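As a rough F# sketch of that shape (the command names and interpreters are invented for illustration):

    // the game script is data...
    type Command =
        | SpawnNpc of name : string
        | Wait of seconds : float
        | Say of speaker : string * line : string

    // ...and "running" it is just one possible interpreter over that data
    let run (commands : Command list) =
        for c in commands do
            match c with
            | SpawnNpc name -> printfn "spawning %s" name
            | Wait s -> System.Threading.Thread.Sleep (int (s * 1000.0))
            | Say (speaker, line) -> printfn "%s: %s" speaker line

    // a second interpreter advances time in jumps, e.g. for testing
    let runFast (commands : Command list) =
        for c in commands do
            match c with
            | Wait _ -> () // skip the timer entirely
            | c -> run [ c ]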


So.... this is just a fancy name for the advice "give your functions a name"?

The example he gave is just replacing an anonymous lambda for addition with a named function called Add?

Maybe I'm too critical, but this is a thing I see in many fields:

1. Take a simple, well-known idea and give it a new obscure name.

2. Write blogs, give lectures, organize summits and conventions.

3. Profit!


No, that's not the point at all. We are not replacing an anonymous lambda with a named function call; rather, we are replacing an anonymous lambda with a specific (finite, inspectable, small) piece of data. Then, later on, we decide what to do with that small piece of data: often, we replace it with a genuine function call, but sometimes (e.g. in the "testability" section) we might just decide to inspect it instead.
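In F#, the distinction looks roughly like this (a toy rendition, not the article's code):

    // functionalised: the operation is an opaque closure
    let applyF (f : int -> int -> int) a b = f a b

    // defunctionalised: the operation is data we can inspect, compare,
    // or serialise before (or instead of) executing it
    type Op =
        | Add
        | Mul

    let apply op a b =
        match op with
        | Add -> a + b
        | Mul -> a * b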


Add is a piece of data, not a function. The equivalent in C would be something like:

    enum function { add /* , sub, ... */ };

    // then inside your code, interpret the data
    switch (myfunction) {
    case add:
        return a + b;
    /* ... one case per operation ... */
    }
Defunctionalisation is about removing higher-order functions, not about naming anonymous functions.

It's not a new trend, the first citation on wikipedia is to a paper in 1972.


You're focusing on my not knowing the syntax and specifics of the languages. It's still just giving names to things that did not have a name. Whether the name is a variable name, a function name, or a type name doesn't matter.

It's still a fancy, obscure jargonish way of saying "name things".


There is naming involved, but that is not the point. The point is that you turn functions into (serializable) data. Nothing fancy happening here.
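A toy F# illustration (the string encoding is invented here): once the operation is data rather than a closure, persisting and restoring it is trivial.

    type Op =
        | Add
        | Mul

    // once the operation is data, writing it out is trivial...
    let toString op =
        match op with
        | Add -> "add"
        | Mul -> "mul"

    // ...and so is reading it back
    let ofString s =
        match s with
        | "add" -> Some Add
        | "mul" -> Some Mul
        | _ -> None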


Two problems I find:

1. You'll lose function inlining optimizations, and the branch predictor will get thrashed.

2. Local variables in that function essentially become globals.


I think it depends on the compiler?

In the article's example, the only place that uses the data structure is the match in the function that executes it. A compiler may happily inline it, since it's used nowhere else, and you lose no performance at all.

Or it may decide to generate the code as-is, in which case you certainly do have a performance problem.


> So.... this is just a fancy name for the advice "give your functions a name"?

Not quite: "defunctionalization" is a name for replacing functions with not-functions (data structures). (And IMO it's the most unfancy name one can give to that activity, as the name literally just indicates "removing functions"... I can't think of a less fancy name for it: maybe "un-function-ize"? But that's pretty much the same as "defunctionalization".)

BTW I recommend reading the 3 linked articles in this order:

1. https://blog.sigplan.org/2019/12/30/defunctionalization-ever...

2. http://www.pathsensitive.com/2019/07/the-best-refactoring-yo... (with accompanying video)

3. https://www.gresearch.co.uk/article/defunctionalisation/ (this one actually shows how you can incrementally adopt this)


A lot of this, and a few other links in this thread, ends up looking like creating an AST limited to the operations you want to expose to the user, in a serializable way. I think that's even called out explicitly in the articles. But I'm left thinking: what is the serialized structure of an AST? It's source code. So we've implemented a language that we then have to use from within our code, and it's not as ergonomic as the host language.

In a way, it's kind of like type-safe eval(), with the limited nature of the purpose-specific implementation creating a sandbox of types around the eval; i.e. the impl isn't complete enough to give us Turing completeness, file IO, or other things that would let the data turn into executable code that could escape the sandbox. So I'm curious whether some wrappers around Roslyn[0] could get us less verbosity than Roslyn itself, a more complete impl that doesn't have to be written for every application, and the sandboxing necessary to prevent it from handing the user a fully-automatic submachine gun pointed at our database.

Maybe it's something like a white-list for Roslyn structures. You could take a C# file, parse it with Roslyn, filter it through the white-list, and either error out if the user is doing something nefarious, or compile the structure and continue as normal. And then the code that we pass around as data could just be C# code, rather than having our own AST structures that we serialize ourselves.
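A very rough F# sketch of that parse-and-filter step, using the real Roslyn API (the `Microsoft.CodeAnalysis.CSharp` package); the contents of the whitelist are purely illustrative and would need far more thought in practice:

    open Microsoft.CodeAnalysis.CSharp

    // a toy whitelist of permitted syntax kinds
    let allowed =
        set [
            SyntaxKind.GlobalStatement
            SyntaxKind.ExpressionStatement
            SyntaxKind.AddExpression
            SyntaxKind.NumericLiteralExpression
        ]

    // parse the user's C#, then reject it if any node falls outside the whitelist
    let isAllowed (source : string) =
        let root = CSharpSyntaxTree.ParseText(source).GetRoot ()
        root.DescendantNodes ()
        |> Seq.forall (fun node -> allowed.Contains (node.Kind ()))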

Maybe you could even allow limited forms of looping by injecting infinite-loop-breaking code that does some simple pre-/post-condition checking. Like "while(true) loops are only allowed to execute 10K times", or "for loops need to be bounded correctly for the update expression, and the index must not be modified in the body".

[0] Side note: my office is in a neighborhood called "Rosslyn", so I'm constantly misspelling one or the other.


These are very useful patterns!

I feel that the thing I haven't found my way around with this style of programming is how to make the types scalable. Once the type `Expr` is set to include literals, addition, subtraction, and custom functions, there's no clean way to extend it in a modular fashion with more _typed_ operations (e.g. say you wanted to add multiplication).

When I say modular I mean either in a separate file, or in any other way that doesn't turn every function of `Expr`, over time, into a giant, unreadable pattern-matching statement spanning multiple pages once there are multiple dozens of operators in the type. Any suggestions for that?
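To make the friction concrete, a toy sketch: adding a new case (`Mul` here) forces an edit to every function that already matches on `Expr`.

    type Expr =
        | Lit of int
        | Add of Expr * Expr
        | Mul of Expr * Expr // new case: eval, print, optimise... must all change

    let rec eval expr =
        match expr with
        | Lit n -> n
        | Add (a, b) -> eval a + eval b
        | Mul (a, b) -> eval a * eval b // forced edit, repeated in every sibling function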


This could benefit from a discussion of when it is and when it isn't useful or practical to apply the technique described.

It helps with serialization, and optimization in cases like deep learning frameworks. In other cases I think it smacks of overengineering. If you need a representation of a function along with its arguments to pass to other systems, then do it as needed. Otherwise, YAGNI.


I have a budding interest in ML languages and saw a couple of references to category theory in this article and the linked slides. Are there any good resources for learning category theory aimed at ML programmers, or is there a list of category theory concepts that programmers will often come across?


I think this is fairly popular, though I have not read it myself: https://bartoszmilewski.com/2014/10/28/category-theory-for-p...


As an F# developer with just enough category theory to follow along, I think this is really great.

I find the workaround for existential types in .NET particularly interesting, but it seems so verbose. Is there no way to do it with plain functions in order to avoid all the explicit type signatures?
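For readers who haven't seen it, the hack has roughly this shape; this is my own minimal rendition, not the article's exact code:

    // encode the existential by double negation: a consumer must handle
    // every possible 'a, so the packed 'a can stay hidden
    type ListConsumer<'ret> =
        abstract Eval<'a> : 'a list -> 'ret

    type ListCrate =
        abstract Apply<'ret> : ListConsumer<'ret> -> 'ret

    // pack a list, hiding its element type
    let make (xs : 'a list) =
        { new ListCrate with
            member _.Apply evaluator = evaluator.Eval xs }

The caller hands over a `ListConsumer` and gets a result back without ever naming the hidden element type; the verbosity I mean is all those interface definitions and explicit signatures.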


Thanks!

I'm afraid I don't know of any way to make the existential types hack neater. If you find one, I'm all ears!

There is an issue open with the F# compiler (https://github.com/fsharp/fslang-suggestions/issues/567) to allow higher-ranked types, but there are (well-founded) objections that this could be a pretty big jump in language complexity.


The major thing that we found was that you had to look at the whole problem. - Joseph Henry Condon, Bell Labs

... via https://github.com/globalcitizen/taoup


Guessing one of the highest velocity teams on the planet


nice! I haven't seen this expressed in functional programming, but it feels more natural in ML.

Is it possible to nest these computations?


I'm not quite sure what you mean by "nest". You can certainly have a computation expressed in terms of a defunctionalised initial algebra, which contains computations expressed in terms of different defunctionalised initial algebras.
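For instance (all type and case names invented here), a term of one defunctionalised algebra can carry terms of another:

    // one algebra for conditions...
    type Condition =
        | Equals of column : string * value : int
        | And of Condition * Condition

    // ...nested inside another algebra for queries
    type Query =
        | All
        | Filter of Condition * Query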


Correct use of functional programming:

[The Pure Function Pipeline Data Flow v3.0 with Warehouse / Workshop Model](https://github.com/linpengcheng/PurefunctionPipelineDataflow)

1. Perfectly defeat other messy and complex software engineering methodologies in a simple and unified way.

2. Realize the unification of software and hardware on the logical model.

3. Achieve a leap in software production theory from the era of manual workshops to the era of standardized production in large industries.

4. The basis of, and the only way to, `Software Design Automation (SDA)`, just like `Electronic Design Automation (EDA)`, because [The Pure Function Pipeline Data Flow] systematically simulates integrated circuit systems.



