
You have a different definition of "works well" and also a different definition of "type inference" here. Rust doesn't have global type inference. C++'s `auto` doesn't count. Go's `:=` syntax doesn't count.

Why not? Because you are adding typing retroactively to a dynamically typed language like Python. If a language was built with typing from the start, you can do this, because your initial choice of a type system already constrains the set of valid programs. But a Python-like language whose users are already used to having no restrictions on runtime types? Absolutely not. Especially not in a language that encourages duck typing.

Let's go back to the omega combinator example. The Stack Overflow link is correct: Haskell cannot assign a type to it. That means the type system is deliberately kept weak enough that the omega combinator cannot be written in the first place. No valid Haskell program contains it, so the type system can simply disregard it. Python is different: you can already write the omega combinator in it, and I wrote it three times in this thread. So a bolt-on type system needs to assign a type to it. And you can't.
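For concreteness, here is the usual formulation in Python (the exact snippets written upthread aren't quoted here, so this is a sketch of the standard shape, not necessarily a copy of them):

    # Self-application: `f` has to accept an argument of its own type,
    # i.e. some T with T = Callable[[T], R]. Plain Hindley-Milner style
    # inference rejects that (the occurs check fails), which is the
    # Haskell result mentioned above; a bolt-on checker for Python still
    # has to say *something* about it.
    def self_apply(f):
        return f(f)

    # omega = self_apply(self_apply)  # never terminates; in CPython it
    #                                 # blows the recursion limit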

Python does not have a type system that "works well" and will never have one.




I think the disagreement is about what "works well" means. You are approaching it from a kind of theoretical purity, whereas most people approach it as "works for every program I am likely to encounter". Omega combinators aren't something typically written in Python.

I think you're making a similar move in treating global type inference as the only "true" kind of type inference, when more limited type inference already adds a significant amount of utility, and not much is lost by having to annotate functions. It's a bit inconvenient in places, but the signature acts as the explicit contract between the method body and the rest of the code, which makes errors faster to spot and the language faster to compile.
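As a rough sketch of that trade-off (assuming a checker along the lines of mypy; the function and names are made up for illustration):

    # The signature is the explicit contract; everything inside the body
    # is inferred locally by the checker with no further annotations.
    def total_price(prices: list[float], tax_rate: float) -> float:
        subtotal = sum(prices)              # inferred as float
        return subtotal * (1 + tax_rate)    # checked against the declared float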

You are correct that the holy grail you’re trying to reach is impossible, but relaxing some constraints still yields a heck of a lot of practical benefit.


The problem with "works for every program I am likely to encounter" is that you don't know what kind of programs your users are writing.

Well, now that I think about it more, except when you are Google or Meta and you have a giant monorepo full of code that users of the language have actually written. So if Starlark were to gain a type system, the type system designer would probably just run it over all the Starlark code at Google.

Speaking of Meta, this reminds me of Flow (https://flow.org/), Meta's effort to add a bolt-on type system to JavaScript. Naturally it doesn't aim to "work well" 100% of the time, but maybe working 80% of the time is already valuable enough for them to use it and announce it.


> is that you don't know what kind of programs your users are writing

Except we have lots of easily accessible OSS code available via GitHub, GitLab, etc., in addition to self-reporting from closed repos. Can you find any instance of an omega combinator in actual use?

I'm not even sure the omega combinator is relevant to Starlark, given that Starlark is a more limited language derived from Python. For example, it typically doesn't allow recursion, which I believe would be required to express an omega combinator in it. Yes, I know the definition of the combinator says it's not recursive, but that's in a pure CS sense. In a practical sense, there's not much difference between a function being passed itself as an argument and then invoking it versus invoking itself directly, particularly from an enforcement perspective. Indeed, you can verify that it fails the recursion-prevention code in Starlark [1].

[1] https://github.com/bazelbuild/bazel/blob/e1a73e6e89a082e3d80...
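To make the self-application point concrete, in Python syntax (which is also valid Starlark for this snippet); how the check fires is my reading of the linked code, so treat it as a sketch:

    def self_apply(f):
        # Called as self_apply(self_apply), this re-enters self_apply while
        # it is already on the call stack, which is exactly the situation
        # Starlark's recursion check exists to reject.
        return f(f)

    # self_apply(self_apply)  # rejected by Starlark's recursion detection;
    #                         # in plain Python it just recurses to the limit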


Global inference is a nightmare; you simply don't want it. It means that the minimum understandable unit is the entire system: if you don't understand the entire system, you don't understand any of it, and that can't scale.
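One way the complaint plays out, sketched in Python syntax even though Python itself doesn't do whole-program inference (the modules and functions are hypothetical):

    # With whole-program inference there is no written signature; the type
    # of `scale` is whatever all of its call sites across the codebase imply.
    def scale(x, factor):
        return x * factor

    # module_a:  scale(3, 2)     -> consistent with int
    # module_b:  scale("ab", 2)  -> now it must also cover str
    #
    # A new call in a distant module can change what gets inferred for
    # `scale` and surface errors in callers you never touched, so reading
    # one function in isolation tells you little about its actual type.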

When weird type-theory people insist you want global inference and that you should sacrifice something you actually value to get it, treat them the same as if Hannibal Lecter were explaining why you should let him kill and eat your best friend. Maybe Hannibal sincerely believes this is a good idea; that doesn't make it a good idea, it just further underscores that Hannibal is crazy.


As a user of languages that do global inference, I have yet to see much killing and eating in the programs I work on. They mostly make sense.



