Over the past few years, OCaml has seen an abundance of new language features and standard library additions. OCaml 4.08 added let-binding operators (syntactic sugar for continuation-passing style and monadic programming), OCaml 5 brought a multicore runtime and effect handlers, OCaml 5.2 finally added dynamic arrays to the standard library, and OCaml 5.4 added labeled tuples.
With unboxed types, I believe OCaml would achieve the same granularity of control over memory allocation as C#: garbage-collected, but supporting "structs" that are allocated on the stack (or stored inline within a containing heap allocation when part of a reference type). I think there is an unexplored space for "soft" systems programming languages that keep a garbage collector by default while also letting the programmer tightly control memory allocation in performance-critical code.
If OCaml hits this sweet spot in abstraction, which domains would adopt it? Could OCaml compete with C# or Swift?
The `Either a b` type in Haskell is equivalent to the `Result<T, Error>` type in other languages. The only difference is in the naming: "Result" semantically implies error handling, while "Either" suggests more general use as the alternative between two options. Both Either and Result are the binary sum type (the disjoint union of two types).
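For reference, here is Either exactly as the Haskell Prelude defines it, alongside a hypothetical "Result" spelling of the very same sum type (the Result names below are made up for illustration):

    import Prelude hiding (Either (..))

    -- Either, as defined in the Haskell Prelude:
    data Either a b = Left a | Right b

    -- A hypothetical "Result" spelling of the same sum type,
    -- differing only in the type and constructor names:
    data Result e t = Error e | Ok t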
Contrast the definition of the Result type in Swift:
    public enum Result<Success: ~Copyable & ~Escapable, Failure: Error> {
        /// A success, storing a `Success` value.
        case success(Success)
        /// A failure, storing a `Failure` value.
        case failure(Failure)
    }
I'm not sure what the parent commenter meant when they claimed that "Result <e, t> of yours is a Result (e, t) in Haskell." In Haskell, `(e, t)` would be the pair type (the binary product type).
> The `Either a b` type in Haskell is equivalent to the `Result<T, Error>` type in other languages. The only difference is in the naming:
This is false and misleading.
In most languages (C# at the very least, and most probably C++ and many others), `Result<E, T>` must always be fully instantiated. You usually cannot construct a type-level "function" like `Result<E, _>` that needs a single further type argument to become a complete type. Partial application at the type level is absent from most languages, including Rust (the result of a little googling).
Haskell's `Either` type can be used as a two-type-argument function, partially applied to a single type argument, or fully instantiated to a concrete type like `Either String Int`.
This means that the `Result<E, T>` type effectively has a single type argument, namely a pair of types. The `Either` type has two type arguments and can be partially applied.
You are right: in Haskell, type constructors may be partially applied. In my opinion, this feature has less to do with any fundamental difference between `Either` in Haskell and `Result` in other languages, and more to do with Haskell's more powerful type system. In the same way, the pair type `(a, b)` in Haskell is also different from the pair types in other languages. The feature is called "higher-kinded types."
In particular, higher-kinded types are necessary to abstract over functors (type constructors of kind * -> *, i.e. functions from types to types, equipped with a map operation). The list type constructor is a functor, and the partially applied type constructor `Either a` is also a functor. In languages without higher-kinded types, however, type variables can only stand for "ground types" (of kind *).
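As a quick sketch of what this buys you: in Haskell you can write code that is generic over any functor and then instantiate it at both lists and the partially applied `Either a` (the function names below are made up):

    -- Generic over any functor f, i.e. any type constructor of
    -- kind * -> * with a lawful fmap.
    incrementAll :: Functor f => f Int -> f Int
    incrementAll = fmap (+ 1)

    listExample :: [Int]
    listExample = incrementAll [1, 2, 3]        -- [2, 3, 4]

    eitherExample :: Either String Int
    eitherExample = incrementAll (Right 41)     -- Right 42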
I don't agree with this statement:
> This means that the `Result<E, T>` type effectively has a single type argument, namely a pair of types. The `Either` type has two type arguments and can be partially applied.
The Result<T, E> type still takes two type arguments. The main distinction, in my view, is that Haskell allows types to be "higher-order." In fact, to be really pedantic, you could argue that the `Either` type in Haskell really takes one type argument, and then returns a function from types to types (currying).
This is the type-level analogue of how many programming languages support some notion of function or procedure (and functions may have multiple arguments), but only more modern languages support higher-order functions, or allow variables to hold functions.
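You can see this type-level currying directly in GHCi with the :kind command (newer GHC versions print Type instead of *):

    ghci> :kind Either
    Either :: * -> * -> *
    ghci> :kind Either String
    Either String :: * -> *
    ghci> :kind Either String Int
    Either String Int :: *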
One of the most thorough articles on error handling in programming language design that I've read is this one: https://joeduffyblog.com/2016/02/07/the-error-model/. It was written by Joe Duffy, who worked on the language of Microsoft's experimental Midori operating system.
Another relevant article is Robert Nystrom's "What Color is Your Function?": https://journal.stuffwithstuff.com/2015/02/01/what-color-is-... This article is about async/await, but the same principles apply to error handling. This article uses colors as an analogy, but is really about monads.
Both IO and exceptions can be denoted as a monad. What this means is that a function A -> B inside the programming language can be denoted by a mathematical function with the signature [[A]] -> M [[B]], for some monad M (where [[T]] is the mathematical object denoting the type T). For example, if we are dealing with the exception monad, M would be _ + Exception.
A language such as Java implicitly executes in the IO + exception monad. However, the monad can also be exposed to the programmer as an ordinary data type, which is what Haskell does. When people talk about the tradeoff of exceptions versus Result<T, E>, or the tradeoff between preemptive concurrency and async/await, they are really talking about the tradeoff between making the monad implicit or explicit. (A language where all functions may throw is like one where all functions implicitly return Result<T, E>. A language where all functions may be preempted is like one where all functions are implicitly async, and all function calls are implicitly await points.)
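A minimal Haskell sketch of the "explicit" side, using Either as the exception monad (the function names here are made up for illustration):

    -- Failure is explicit in the return type; Left plays the
    -- role of "throw".
    safeDiv :: Int -> Int -> Either String Int
    safeDiv _ 0 = Left "division by zero"
    safeDiv x y = Right (x `div` y)

    -- do-notation sequences the computations; the first Left
    -- short-circuits the rest, just like implicit exception
    -- propagation.
    sumOfQuotients :: Int -> Int -> Int -> Either String Int
    sumOfQuotients a b d = do
      x <- safeDiv a d
      y <- safeDiv b d
      return (x + y)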
The theoretical technique of using monads to model the implicit effects of a programming language was pioneered by Eugenio Moggi, and the idea of making them explicit to the programmer was pioneered by Philip Wadler.
Something else to think about is how monads stack. For example, how would you handle functions that are both async/await and throw exceptions? Does the answer change when the monad is implicit (e.g. throwing exceptions) or explicit (e.g. returning a result)?
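For the explicit side, Haskell's usual (if somewhat clunky) answer is monad transformers: stacking ExceptT on top of IO gives a function that is both effectful and may fail. A minimal sketch, assuming the transformers package; the function names are illustrative:

    import Control.Monad.Trans.Class (lift)
    import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)

    -- IO + exceptions as one explicit stack: ExceptT String IO a
    -- is essentially IO (Either String a).
    readPositive :: ExceptT String IO Int
    readPositive = do
      line <- lift getLine                        -- the IO layer
      let n = read line :: Int                    -- (crashes on non-numeric input; kept simple)
      if n > 0
        then return n
        else throwE "expected a positive number"  -- the exception layer

    main :: IO ()
    main = do
      result <- runExceptT readPositive
      print result                                -- Left err or Right n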
That Midori article looks great, I'll give that a closer read. I actually used to work with Bob, and am familiar with the (wonderful!) function color article.
I think my biggest question might be addressed in the Midori article: with things like bounds checks and checked casts, you already have exceptions (or panics), so should you have a way to capture them anywhere on the stack? Are they recoverable in some programs? Should you have try/catch even if you try to make most errors return values?
Another set of questions I have is around reified stacks. Once you have features like generators and async functions, and can switch stacks around, you're most of the way to resumable exceptions. I don't yet fully grok how code at the resume site is supposed to deal with a resume, but maybe resumable exceptions are a reason to keep them.
I'd never heard of "resumable exceptions" before, so I looked them up [1][2]. Is this another name for the language feature called "effect handlers" in OCaml 5?
Pretty much every language has a form of resumable exception known as a "function call". It's hard for me to understand why no one in the algebraic effects/effect handlers community has noticed this yet.
This is the difference between functions and effect handlers, to my understanding:
Functions map inputs to outputs, with a type signature that looks like A -> B. Functions may be composed: if you have f: A -> B and g: B -> C, you get gf: A -> C. Function composition corresponds to how "ordinary" programming is done by nesting expressions, like g(f(x)).
Sometimes, a function returns something like Option<B> or Future<B>. "Ordinary" composition would then require the next function's input type to be Future<B>, but frequently you need that input to have type B. Therefore, optionals and futures call for "Kleisli composition": given f: A -> Future<B> and g: B -> Future<C>, you get gf: A -> Future<C>. Kleisli composition corresponds to "monadic" programming, with "callback hell" or some syntactic sugar for it, like:
    let y = await f(x);
    g(y)
Effect handlers allow you to express the latter, "monadic" code, in the former, "direct style" of ordinary function calls.
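To make the contrast concrete, here is a small Haskell sketch of both composition styles, with Maybe standing in for Future and made-up function names; >=> is Kleisli composition from Control.Monad:

    import Control.Monad ((>=>))

    -- Ordinary composition: f : A -> B and g : B -> C give g . f : A -> C.
    f :: Int -> String
    f = show

    g :: String -> Int
    g = length

    gf :: Int -> Int
    gf = g . f

    -- Kleisli composition: f' >=> g' unwraps the Maybe B and pipes
    -- it straight into g' (note it reads left to right).
    f' :: Int -> Maybe String
    f' x = if x >= 0 then Just (show x) else Nothing

    g' :: String -> Maybe Int
    g' s = if null s then Nothing else Just (length s)

    gf' :: Int -> Maybe Int
    gf' = f' >=> g'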
Today, we tend to see The Great Gatsby as a work of historical literature, since it gives us a window into the Roaring Twenties. However, F. Scott Fitzgerald did not set out to depict the past; he was depicting his own present. Similarly, Proust's work is read as a window into the French high society of the Belle Époque, a society in which Proust himself lived.
Which works today do you think future generations will see as the classics of the 2010s and 2020s? They may not necessarily be works of literature; they could come from other storytelling mediums, such as film.
> Which works today do you think future generations will see as the classics of the 2010s and 2020s?
South Park?
Maybe, collectively, those "actually the villain is just misunderstood" movies that I hear are becoming a thing recently? They seem like a decent candidate for the "window into the culture of the time" thing.
Some of those wide-audience computer games like Candy Crush and Farmville?
Part of the problem of our time is that shared culture has significantly receded. There's little capacity to maintain "classics" as we understand them today. Take any massive artistic output (film, book, TV show): nowadays it's either not seen/read/heard by more than 20% of the population, or it's a flash-in-the-pan hit that will be forgotten in another year or so (e.g. Barbenheimer).
* Minecraft: This will probably hold the position Pac-Man holds for us. Perhaps the single biggest cultural marker.
* Taylor Swift: She's like Michael Jackson was (perhaps because of improvements in access and audience size)
I don't know if Wikipedia or the mainline social media websites would count. People remember The Myspace Era. Tumblr and Twitter have reputations for their culture but would they be classics? Hard to tell.
> Which works today do you think future generations will see as the classics of the 2010s and 2020s?
I think the film Tár will be. It captures the “fake it until you make it” spirit of the present really well along with the god complex and repressed guilt that accompany “making it”. Also the performances and direction are just excellent.
There’s a lot of discussion flying back and forth as to whether we’re in a period of cultural stagnation. Obviously you need to heavily discount the possibility of “old fogeyism” and reactionary nostalgia whenever you make such claims, and historically a lot of them have been totally false, but the one argument I have found convincing that we’re not in a good cultural place right now is the difficulty of coming up with such a work.
What work captured the zeitgeist of the 2010s and so far of the 2020s? I certainly can’t think of any novels that did it, far too much of that literary decade was about self-obsessed New Yorkers and had no relevance to anyone else. None of the reactions against it (i.e. Dimes Square) produced anything of lasting note either.
To be fair, it’s an accurate assessment. Superhero movies like that are a defining feature of the last two decades, with titles and plots worsening at an exponential rate. Not that prior decades lacked superheroes; they just used to be less superficial.
No one even talks about GOT anymore because the ending was so bad.
> No ending was going to be good
Why do you say that? Plenty of amazing shows have great endings. And GOT isn't some uniquely incredible story. Killing MCs is not new to GOT, either; you give that show too much credit. Lost did it long before GOT. So did 24. Grey's Anatomy is SUPER famous for it; they killed off like half the original cast in a single helicopter crash.
ASOIAF wasn't original because it killed MCs. It was original because it treated fantasy as political first, fantasy second.
I don't doubt that the show's ending mirrors the books' intended ending; however, a huge part of it is how you get to that ending, which the show rushed and fumbled horribly. Bran becoming King could work, but not when he basically shows up out of nowhere and nobody knows anything that happened to him or what he is capable of. He basically disappeared for years, and when he came back he said a few nonsense things but wasn't involved in much of the politicking that could have made him a viable candidate for King. Or Dany going full Mad King: they spend seasons showing her trying not to be a crazy ruler and then have her suddenly snap, instead of putting her through a series of harder and harder choices that turn out worse each time and drive her to more relatable desperation and violence.
At minimum they needed a full extra season and a full final season, if not more. But without GRRM handholding them throughout the entire plot they completely lost the path.
There is a good ending to Game of Thrones: evil wins, everyone dies. All the fools who pursued their own interests rather than face an annihilating threat get annihilated. It's right there in the show's motto. "Winter is coming."
The writers just lacked the courage to do it. They tried to tack a Disney ending onto a tragedy.
Either that or Cersei being queen would've been the correct ending.
The Lannisters would've had the only real army left without that WWE-style defeat of the Night King. Cersei's consistently outwitted everyone (except Tommen, I guess), and they knew how to buy loyalty.
Instead we ended up with the usual plot armor, and a "twist" in which the character who behaved like a tyrannical zealot for 7+ seasons was, in fact, a tyrannical zealot.
> TypeScript’s type system is purely structural and exists only at compile time. It has no way to verify that your function actually implements what its signature claims. You can declare that a function transforms a User into a SafeUser, and as long as the return object has the required fields of SafeUser, TypeScript doesn’t care what additional properties might still be lurking in there.
> This is fundamentally different from languages like Rust, where the type system can actually guarantee that if you claim to return an Option<T>, you genuinely can’t return null, the compiler enforces the contract at the language level. Rust’s type system doesn’t just trust your annotations; it verifies them.
This design where types are present at compile-time but disappear at runtime is called type erasure, and it's extremely common. For example, Java's generics are type erased. If you have some Java class Foo<T, U>, in the bytecode it will simply become Foo, and T and U will become Object. Therefore, you cannot use runtime introspection to recover their instantiations.
The remark contrasting TypeScript with Rust seems a little confused. Rust also uses type erasure: types and lifetimes are checked by the compiler, and the compiler then produces a native executable, which is just machine code and contains no type information. Option<&T> can be represented as a plain pointer T*, because the niche optimization ensures that the Option::None variant is represented as 0 (NULL). If C code interacted with Rust code via FFI, it could pass a value of 0. However, Rust doesn't have a null value as commonly understood in languages such as Java, C#, or JavaScript: a distinguished value that denotes a "sentinel" reference not referring to any object. I would say that the null reference is semantically a higher-level concept, specific to those particular programming languages.
Philosophically, the notion of type erasure goes all the way back to Curry-style (extrinsic) typing, which is contrasted with Church-style (intrinsic) typing. For example, in Curry-style typing, the program (fun x -> x) is the identity function on all types, while in Church-style typing, each type A has its own identity function, (fun (x : A) -> x) and a program is meaningless without types.
It looks like the OP is actually talking about structural vs. nominal typing, which makes more sense, because Rust is nominally typed (hence the newtype pattern, for example), whereas TypeScript is structurally typed.
And you’re right, this has nothing to do with type erasure.
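For a concrete sketch of the nominal side: Haskell, like Rust, is nominally typed, and its newtype makes the distinction visible, since two types with identical structure remain incompatible (the names below are made up for illustration):

    -- Two types with identical structure:
    newtype UserId = UserId Int
    newtype OrderId = OrderId Int

    lookupUser :: UserId -> String
    lookupUser (UserId n) = "user #" ++ show n

    ok :: String
    ok = lookupUser (UserId 7)

    -- Rejected by the compiler, despite both wrapping an Int:
    -- bad = lookupUser (OrderId 7)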
When I first tried to learn Vulkan, I felt the exact same way. As I was following the various Vulkan tutorials online, I felt that I was just copying the code, without understanding any of it and internalizing the concepts. So, I decided to learn WebGPU (via the Google Dawn implementation), which has a similar "modern" API to Vulkan, but much more simplified.
The commonalities to both are:
- Instances and devices
- Shaders and programs
- Pipelines
- Bind groups (in WebGPU) and descriptor sets (in Vulkan)
- GPU memory (textures, texture views, and buffers)
- Command buffers
Once I was comfortable with WebGPU, I eventually felt restrained by its limited feature set. The restrictions of WebGPU gave me the motivation to go back to Vulkan. Now, I'm learning Vulkan again, and this time, the high-level concepts are familiar to me from WebGPU.
Some limitations of WebGPU are its lack of push constants and the "pipeline explosion" problem (which Vulkan tries to solve with the pipeline library, dynamic state, and shader object extensions). Meanwhile, Vulkan requires you to manage synchronization explicitly with fences and semaphores, which was an additional learning curve for me, coming from WebGPU. Vulkan also does not provide a memory allocator (most people use the VMA library).
SDL_GPU is another API at a similar abstraction level to WebGPU, and could be another easier entry point than Vulkan. So if you're still interested in learning graphics programming, WebGPU or SDL_GPU could be good places to start.