Apologies, I forgot to follow up on the promised inference discussion in the original version of the post. I just added a paragraph at the end on my thoughts.
> Another big question in gradual programming is inference vs. annotation. As our compilers get smarter, it becomes easier for information like types, lifetimes, and so on to be inferred by the compiler, even if not explicitly annotated by the programmer. However, inference engines are rarely perfect, and when they don't work, every inference-based language feature (to my knowledge) will require an explicit annotation from the user, as opposed to simply inserting the appropriate dynamic checks. In the limit, I envision gradual systems having three modes of operation: for any particular kind of program information, e.g. a type, it is either explicitly annotated, inferred, or deferred to runtime. This is in itself an interesting HCI question--how can people most effectively program in a system where an omitted annotation may or may not be inferred? How does that impact usability, performance, and correctness? This will likely be another important avenue of research for gradual programming.
This is to say, I don't think inference and gradual systems are necessarily at odds, but could potentially work together, although that doesn't happen today.
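To make the three modes concrete, here is a small sketch using Python's gradual type system; the function and variable names are illustrative, not from the post:

```python
# Three modes for the same piece of program information (here, a type):
# explicitly annotated, inferred by a checker, or deferred to runtime.
from typing import Any

def annotated(x: int) -> int:   # mode 1: explicitly annotated, checked statically
    return x + 1

def inferred() -> int:
    y = annotated(41)           # mode 2: a checker infers y's type; no annotation written
    return y

def deferred(x: Any) -> int:    # mode 3: deferred to runtime
    if not isinstance(x, int):  # a dynamic check stands in for the missing static info
        raise TypeError("expected int")
    return x + 1
```

The HCI question above is visible even in this toy: reading `inferred`, the programmer must know whether the checker filled in the omitted type or silently deferred it.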
Additionally, I didn't want to dive into dynamic vs. static typing at the risk of inflaming the Great War, but the point I was trying to make was more of an empirical one. A huge number of programmers have used dynamic languages to build large systems over the last two decades, which suggests that there is _some_ benefit, although further research is needed to quantify that benefit precisely.
> This is to say, I don't think inference and gradual systems are necessarily at odds, but could potentially work together, although that doesn't happen today.
Type inference demands some rigidity in the type structure of a language. You must have the right amount of polymorphism: the STLC has too little, System F has too much. If you want both type operators and type synonyms, then operators can't be applied to partially applied synonyms [0], because higher-order unification is undecidable. If you want subtyping, you also have to do it in a certain way [1]. More generally, the type language must have a nice enough algebraic structure to make it feasible to solve systems of type (in)equations, because that's how inference works.
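The "solving systems of type (in)equations" point can be made concrete with a toy first-order unifier over a tiny type language. The representation and names below are my own, and the sketch omits the occurs check and most real-world details:

```python
def resolve(t, subst):
    """Follow variable bindings to a representative type."""
    while isinstance(t, str) and t.islower() and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    """Extend subst so that t1 and t2 become equal, or raise TypeError."""
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str) and t1.islower():        # lowercase str = type variable
        return {**subst, t1: t2}
    if isinstance(t2, str) and t2.islower():
        return {**subst, t2: t1}
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        for a, b in zip(t1[1:], t2[1:]):            # same constructor: unify arguments
            subst = unify(a, b, subst)
        return subst
    raise TypeError(f"cannot unify {t1} with {t2}")

# Inferring the type of `id 3` generates the equation  a -> a  ==  Int -> b,
# which unification solves to a = Int, b = Int.
s = unify(("fun", "a", "a"), ("fun", "Int", "b"), {})
```

Every feature listed above (higher-order synonyms, unrestricted subtyping) breaks precisely this step: the equations stop having a decidable or unique solution.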
Your post contains a link to a paper [2] that contains the following passage:
> In this paper, we present a way to achieve efficient sound gradual typing. The core component of this approach is a nominal type system with run-time type information, which lets us verify assumptions about many large data structures with a single, quick check.
I have no way to confirm this claim at the moment, but it seems entirely reasonable. Checking structural types requires doing more work. But it is this very structure that makes type inference possible and useful.
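A small sketch of why the nominal check in the quoted passage can be constant-time while a structural one is not; the class and function names are mine, not the paper's:

```python
class IntList:
    """A nominally typed wrapper: its identity is the class itself."""
    def __init__(self, items):
        self.items = list(items)

def check_nominal(x) -> bool:
    # One tag comparison, regardless of how large x.items is: O(1).
    return isinstance(x, IntList)

def check_structural(x) -> bool:
    # A structural check must inspect the value's shape element by
    # element, so its cost grows with the size of the data: O(n).
    return isinstance(x, list) and all(isinstance(i, int) for i in x)
```

The trade-off is exactly as stated: the nominal tag is cheap to check at runtime, but it is the structural description that an inference engine can compute with.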
So, yes, it seems that inference and gradual typing will be difficult to reconcile.