That's a nice overview of Hindley-Milner in practice!
The man, the myth, the legend! Ok, so for extensible records: I don't have a good intuition for why the record objects would have to keep around the shadowed (hidden/duplicate) fields at run-time. Do you have a motivating example for that? Also entirely possible I missed it in the paper
(I'll check out the new paper later - thank you for the link)
Haha, thank you for your kind reply :) I really enjoyed the blog post as it shows nicely that implementing HM can be straightforward -- many papers on inference are quite math-heavy, which can make for a tough read.
About duplicate labels... one needs to retain the duplicate field at runtime _if_ there is a `remove_l` or `mask_l` operation that drops a field `l`. For example, `{x=2,x=True}.remove_x.x == True` (where the type of `remove_l` is `{l:a|r} -> {r}`).
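To make that concrete, here is a minimal sketch that models a record as an ordered list of label/value pairs (a toy encoding for illustration, not the actual runtime representation), where `remove` drops only the outermost occurrence of a label and thereby re-exposes the shadowed duplicate:

```haskell
-- Toy encoding: records as ordered (label, value) lists, where duplicate
-- labels are allowed and earlier entries shadow later ones.
import Data.Maybe (fromJust)

type Label = String
data Value = VInt Int | VBool Bool deriving (Show, Eq)
type Record = [(Label, Value)]

-- Selection returns the outermost (first) field with the given label.
select :: Label -> Record -> Value
select l r = fromJust (lookup l r)

-- remove_l drops only the outermost occurrence of l, so a shadowed
-- duplicate underneath becomes visible again.
remove :: Label -> Record -> Record
remove _ [] = []
remove l ((l', v) : rest)
  | l == l'   = rest
  | otherwise = (l', v) : remove l rest

-- {x=2, x=True}.remove_x.x == True: the shadowed field still has to be
-- present at runtime for this to evaluate correctly.
example :: Bool
example = select "x" (remove "x" [("x", VInt 2), ("x", VBool True)]) == VBool True
```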
This comes up with effect systems where we could have two exception handlers in scope, and the current effect would be `<exn,exn>` (corresponding to a runtime evidence vector `evn` of `[exn:h1,exn:h2]`, where `h1` and `h2` point to the runtime exception handlers). If a user raises an exception, it will select `evn.exn` and raise to `h1`. But a user can also "mask" the inner exception handler and raise directly to `h2` via `evn.mask_exn.exn`.
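As a sketch of that reading (the `h1`/`h2` names and the list encoding are illustrative only, not the actual evidence representation):

```haskell
-- The effect row <exn,exn> becomes an evidence vector with two entries for
-- the label "exn"; the first entry is the innermost handler.
import Data.Maybe (fromJust)

type Label = String
newtype Handler = Handler String deriving (Show, Eq)
type Evidence = [(Label, Handler)]

evn :: Evidence
evn = [("exn", Handler "h1"), ("exn", Handler "h2")]

-- Raising an exception selects evn.exn, i.e. the innermost handler h1.
raiseTarget :: Evidence -> Handler
raiseTarget ev = fromJust (lookup "exn" ev)

-- mask_exn drops only the innermost "exn" entry, so raising under the mask
-- reaches the outer handler: evn.mask_exn.exn == h2.
maskExn :: Evidence -> Evidence
maskExn [] = []
maskExn ((l, h) : rest)
  | l == "exn" = rest
  | otherwise  = (l, h) : maskExn rest

masked :: Bool
masked = raiseTarget (maskExn evn) == Handler "h2"
```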
One could design a system with different primitives, though, that has no remove or mask operation; in that case the duplicate fields would not have to be retained at runtime (I think).
(Anyway, feel free to contact me if you'd like to discuss this more.)
For those interested in the history of computing: the article mentions that the algorithm
"seems to have been discovered independently multiple times over the years". Interestingly, it also seems to have been discovered by Max Newman [1]
, who, at some point, was Alan Turing's supervisor. See [2, 3].
Interesting date on that Max Newman paper - 2nd September 1942. Newman started working at Bletchley Park on 31st August, where he was deeply involved in the decryption of the Lorenz cipher and the development of Colossus.
If you want to learn how HM typechecking works by studying working (Haskell) code, I would recommend the papers below. The code in these papers is closer to how you would actually implement a type checker in practice.
For those interested, I have recently been thinking of a better way to specify type inference with principal derivations that lends itself better to type system extensions:
https://www.microsoft.com/en-us/research/uploads/prod/2024/0...
Still a bit preliminary but hopefully fun to read :-)