> I think they actually make a fair point because in this case "hard to do" probably equates to "makes the compiler a lot slower".
They are wrong in this case.
When considering whether to allow `foo<T>();` in Rust, we measured the performance impact that infinite look-ahead / backtracking would have in that case. The result was negligible.
Why? Because in realistic code, the amount you have to look ahead before you can say "definitely wrong; let's turn back" is bounded, maybe 30 tokens at most for some very complex generics. However, it's much more likely that no backtracking occurs at all, because the parse will be correct.
When engineering your compiler, you can always optimize for whichever path is statistically more likely, and in a well-designed language backtracking is usually pathological. The theoretical O(..) complexity is largely irrelevant.
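For the curious, the ambiguity in question can be sketched in a few lines (the `identity` function here is just an illustration):

```rust
// Sketch of the ambiguity: without the turbofish (`::<>`),
// `identity<i32>(5)` could also parse as the chained comparisons
// `(identity < i32) > (5)`, which is why Rust requires the `::`.
fn identity<T>(x: T) -> T { x }

fn main() {
    // The turbofish disambiguates explicitly:
    let a = identity::<i32>(5);
    // In most realistic code the type is inferred anyway, so the
    // question of backtracking never even arises:
    let b = identity(5);
    assert_eq!(a, b);
}
```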
(Source: I was the main maintainer of rustc's parser and refactored large parts of it.)
Off the top of my head: poor error messages; type checking runs on the generated code, so properties are harder to enforce; and if code generation is not in-language, the base logic can diverge for different types, which is bad when you find bugs.
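A rough Rust sketch of the last two points, contrasting macro-based code generation with in-language generics (the names are illustrative, not from any real library):

```rust
// Code generation: each expansion is a separate copy of the logic.
// It is only type-checked per instantiation, and the copies could
// silently diverge from each other over time.
macro_rules! define_double {
    ($name:ident, $t:ty) => {
        fn $name(x: $t) -> $t { x + x }
    };
}
define_double!(double_i32, i32);
define_double!(double_f64, f64);

// In-language generics: one definition, type-checked once against
// its bound, shared by every instantiation.
fn double<T: std::ops::Add<Output = T> + Copy>(x: T) -> T { x + x }

fn main() {
    assert_eq!(double_i32(2), 4);
    assert_eq!(double(2.5f64), 5.0);
}
```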
An interesting example is the addition of `{-# LANGUAGE LinearTypes #-}` to GHC. The primary interest in this "experiment" came not from academia but from industry (Tweag). Another example is `QuantifiedConstraints`, which was also quickly picked up by practical applications. Haskell is not just for academia; it is very much intended for practical use, after all.
It is intended by its creators as a research language. That doesn't mean it isn't suitable for practical use; it just means you should expect it to continue to gain features and grow in complexity.
We aggressively use rollups to merge PRs into rust-lang/rust to mitigate the effect of the 4-hour build times. But it would sure be nice to bring them down. It would certainly make my life as the maintainer of the bors queue easier. ;)
The four hours is probably a testament to the scope of the build and test suite. If it were faster, would the scope tend to increase in order to hit more tiers and more tests?
I'd rather have my commit sit in a test queue for several hours than push and cross my fingers the way LLVM does it.
Thanks, I hadn't seen that RFC before. Reading it now, it seems it would be insufficient to support this use case without also supporting `let x: impl Future`, since the RFC deliberately chooses to expand to a temporary binding for `self`.
It would also appear to be deficient for the same parsing reason I mentioned, i.e. that you need some way to tell whether `2 + 2.bar!()` should expand to `2 + bar!(2)` or `bar!(2 + 2)`; the RFC appears to choose the latter, whereas a hypothetical `await!()` would want the former. This problem is called out in the RFC:
"Rather than this minimal approach, we could define a full postfix macro system that allows processing the preceding expression without evaluation. This would require specifying how much of the preceding expression to process unevaluated, including chains of such macros. Furthermore, unlike existing macros, which wrap around the expression whose evaluation they modify, if a postfix macro could arbitrarily control the evaluation of the method chain it postfixed, such a macro could change the interpretation of an arbitrarily long expression that it appears at the end of, which has the potential to create significantly more confusion when reading the code."
As far as I can see, the RFC doesn't mention precedence at all, but I think it's safe to assume it's meant to match method calls. So `2 + 2.bar!()` would expand into something like `2 + bar!(2)`.
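For comparison, ordinary method calls already behave this way; a postfix macro sharing their precedence would presumably do the same (a small sketch, not from the RFC):

```rust
fn main() {
    // Method calls bind tighter than `+`: this is `2 + (2i32.pow(3))`,
    // i.e. 2 + 8 = 10, not `(2 + 2).pow(3)` = 64.
    assert_eq!(2 + 2i32.pow(3), 10);
    // A postfix macro with the same precedence would likewise expand
    // `2 + 2.bar!()` to `2 + bar!(2)`.
}
```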
I don't think it's a good idea to talk about them as one thing, and the use cases also differ. `const fn`s are deterministic ("pure") functions that can be evaluated at compile time if all their arguments can be as well. `const A: B` generics are about compile-time value-dependent typing. The former is important for the expressiveness of the latter, but they are ultimately independent. Moreover, the implementation effort is also mostly independent (different people are doing the work). Even having them in the same WG might not be a good idea.
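A minimal sketch of the distinction, and of how the former feeds the expressiveness of the latter (the names `square` and `first` are made up for illustration):

```rust
// A `const fn`: an ordinary function that may also be evaluated
// at compile time when its arguments are compile-time known.
const fn square(n: usize) -> usize { n * n }

// A const generic: `N` is a *value* that is part of the type.
fn first<const N: usize>(arr: [u32; N]) -> u32 { arr[0] }

fn main() {
    // The `const fn` evaluated in a const context...
    const AREA: usize = square(3);
    // ...produces a value usable where a const generic is expected:
    let arr = [7u32; AREA]; // type is [u32; 9]
    assert_eq!(first(arr), 7);
}
```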
I don't think the Go 2 generics proposal amounts to type classes.
From what I infer from the proposal, contracts are structurally rather than nominally typed, such that there is no location at which you explicitly declare that you are implementing something. Rather, the fact that a type happens to have the right methods becomes the implementation / type class instance.
Also, I didn't check this in detail, but do contracts as proposed have the coherence property, i.e. global uniqueness of instances? I would consider that a requirement for calling a scheme for ad-hoc polymorphism on types "type classes". In other words, Idris does not have type classes, but Haskell and Rust do.
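For contrast, here is a sketch of the nominal approach in Rust, where implementing a trait is an explicit act and coherence forbids a second competing `impl` for the same (trait, type) pair (the `Area` trait is made up):

```rust
// Rust traits are nominal: a type implements a trait only via an
// explicit `impl` block, never just by having matching methods.
trait Area {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }

// The explicit opt-in. Coherence guarantees this is the *only*
// `impl Area for Circle` anywhere in the program; a structural
// system would instead accept any type with a matching `area`.
impl Area for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}

fn main() {
    let c = Circle { r: 1.0 };
    assert!((c.area() - std::f64::consts::PI).abs() < 1e-9);
}
```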