Not really, in my opinion. Type inference especially has come a long way. If I offer you a choice between language A and language B and claim that their syntax is nearly identical and you'll use essentially the same number of keystrokes to accomplish the same task in both, but language B can inform you about mistakes you've made at compile-time instead of waiting until run-time... which one are you going to choose?
While dynamic languages afford some degree of freedom, that freedom comes at a cost. The tradeoff used to be that safety required lots of extra writing (e.g., Java). Now that this isn't the case... the freedom of dynamic typing can be more harmful than helpful.
(I still use Python all the time, so don't mistake me for a static-only zealot of some sort. But I think the landscape of type systems has changed a lot since Kay made these claims.)
Types can be thought of as a way to defensively program - encode what you can to avoid runtime errors. At minimum this means you have to think about your constraints - we may both agree that that's a fine tradeoff, but it's a tradeoff nonetheless.
Erlang, an actor-based system, does not do this. Instead, it assumes that even if you added types you'd still run into failures, and that any reliable system should spend its effort not on trying to avoid failure but on dealing with it.
Erlang allows you to instead encode failure modes in an extremely resilient way (supervisory trees) and to trivially move code across networks to avoid hardware failures.
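To make the supervision idea concrete, here's a toy sketch of the pattern (Python for brevity, multiprocessing standing in for BEAM processes; the worker and the restart policy are made up, not Erlang's actual API). The supervisor's job is not to prevent the crash but to recover from it:

```python
import multiprocessing
import time

def worker():
    """Hypothetical worker; an uncaught exception here just kills this process."""
    ...

def supervise(target, max_restarts=5):
    """A crude one-for-one supervisor: don't prevent the crash, recover from it."""
    for attempt in range(1, max_restarts + 1):
        child = multiprocessing.Process(target=target)
        child.start()
        child.join()                      # returns when the child exits or dies
        if child.exitcode == 0:
            return                        # clean exit, nothing to recover
        print(f"worker died (exit {child.exitcode}); restart #{attempt}")
        time.sleep(attempt)               # naive backoff
    raise RuntimeError("restart limit hit; escalate to the parent supervisor")

if __name__ == "__main__":
    supervise(worker)
```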
Languages like Python, in my opinion, are the worst of both worlds. Python encourages lots of shared mutable state but, until recently, offered very little static analysis. Time is instead spent testing code - something I do not believe Python does any better than Erlang or a statically typed language.
To me, the answer is sort of... why not both? We can use actors and supervisory trees and static types. As an example, I write Rust services that execute on AWS Lambda in response to queue events. I get state isolation and all of the good bits of the actor model, and static types.
Pony is a more fine-grained solution, offering static types and in-process actors.
Types aren't just for catching type errors, they're also a way of defining and enforcing at least parts of the programming contract. Most type systems are poor at this, but it still goes beyond just catching runtime type errors.
The time you save from writing a dynamic program is time not spent on defining that contract. You will eventually be paying for that when the first major refactor comes - if it comes, that is. You will pay for it with extra test coverage, if that's in the budget. Or you will pay for it by re-writing everything.
Those may all be worthwhile tradeoffs, but in the long run, I think static types win out.
> Types aren't just for catching type errors, they're also a way of defining and enforcing at least parts of the programming contract.
I don't disagree. To be clear, I'm a type safety zealot.
> You will eventually be paying for that when the first major refactor comes - if it comes, that is.
This is fine but not relevant. Erlang, for example, just assumes you'll fuck up the refactor. Actors are isolated interfaces and you cannot share state - so a failure in one actor cannot directly impact other actors.
It's fine to say that static types are better but if you read about Erlang you may find the approach very compelling - Erlang's managed to provide the basis for extremely reliable systems, without types.
And as I said, it is not either/or. You can build powerful supervisory structures and statically type your code if you like, but no languages really do both, so you have to reach outside of the language (like using a distributed queue/microservice approach).
> It's fine to say that static types are better but if you read about Erlang you may find the approach very compelling - Erlang's managed to provide the basis for extremely reliable systems, without types.
I have yet to see some convincing proof of that, besides that Ericsson router from 20 years ago that ended up being rewritten in C++.
Also, even if it is true, as you say, that
> Erlang's managed to provide the basis for extremely reliable systems, without types.
This still doesn't prove that there are no languages that can do a better job at it than Erlang.
99.99% of extremely reliable software today runs on non-Erlang: C, C++, Java, you name it.
Finally, in my experience, writing supervisors in Erlang is just as painful as, if not harder than, writing resilient code based on exceptions in Java or C++.
This paper is fundamental to reliability; it describes two primitives for building reliable systems - transactions, and the "persistent process" (spoilers: it's an actor).
And here's Joe Armstrong's thesis. The first three chapters are quite relevant and will point you to further research.
> besides that Ericsson router from 20 years ago that ended up being rewritten in C++.
This is missing some important information. Erlang is still used at the control plane/orchestration layer, for exactly the reasons I've described.
> This still doesn't prove that there are no languages that can do a better job at it than Erlang.
Didn't say otherwise.
> 99.99% of extremely reliable software today runs on non-Erlang: C, C++, Java, you name it.
Sure, but who cares about Erlang? The real money's in isolated persistent processes, aka actors, and I bet most reliable software is built on those, whether language-provided or not. See AWS's cell-based architecture, which is just the actor model with discipline attached. Or all of microservice architecture.
> Finally, in my experience, writing supervisors in Erlang is just as painful as, if not harder than, writing resilient code based on exceptions in Java or C++.
"I have yet to see some convincing proof of that, besides that Ericsson router from 20 years ago that ended up being rewritten in C++."
Control plane for 80% of mobile in the world is Erlang. Core critical infra @ Goldman is Erlang. Control plane for 90% of internet routers is Erlang.
> they're also a way of defining and enforcing at least parts of the programming contract.
Violations of those parts of the contract are type errors. (And all proper type errors, i.e. those that aren't artifacts of the type system, its incorrect use, or an impedance mismatch with the project, are violations of the programming contract.)
> The time you save from writing a dynamic program is time not spent on defining that contract.
No, you can still define the contract when using a dynamic language, and still often save time compared to a real static language.
In the ideal case, sure, a static language would add no additional overhead to this, but that's an unattainable ideal.
I think you're maybe overly focused on the "compile time type checker" bit of static typing.
The contract definition bit is secondary, but useful. It's also at least somewhat separable from the type checking.
The best example I can think of is to compare the duck typing that is common in many dynamic languages with the formal interfaces that are more popular in static languages. With duck typing, you basically look for a method with a particular name in an object, and then, having found it, simply assume that that method is going to implement the semantics you expect. That works surprisingly often, but it is a bit rough-and-ready for many people's tastes.
With formal interfaces, you have a clearer indication from the type's author that they're consciously planning on implementing a specific set of semantics. They could still be doing it wrong, of course, but it's at least trying to be more meticulous about things.
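In Python terms the contrast might look roughly like this, with typing.Protocol standing in for the formal interface (a structural one, so it's really a statically checked duck test; all names here are made up):

```python
from typing import Protocol

def duck_style(thing):
    # Duck typing: look up .quack() at run time and hope the semantics match.
    thing.quack()

class Quacker(Protocol):
    """A declared contract that a checker like mypy can verify statically."""
    def quack(self) -> None: ...

def interface_style(thing: Quacker) -> None:
    thing.quack()

class Duck:
    def quack(self) -> None:
        print("quack")

interface_style(Duck())   # conformance is checked before the program runs
# duck_style(object())    # only fails at run time, with an AttributeError
```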
I also think it's worth pointing out that static typing can be useful as a performance thing. Pushing all those type checks into run-time does have costs. There are the extra branch instructions involved in doing all those type checks at run time, and there's the extra memory consumption involved in carrying around that type information at run time.
(It's also true that this aspect is very much a continuum, especially with regards to the performance considerations: Many dynamic languages have JIT compilers that can statically analyze the code and treat it as if it were statically typed at run time, and any ostensibly static language that allows downcasting or reflection supplies those features by carrying type information and allowing for type checks at run time.)
> So the only message passing between your rust actors is via SQS or such?
It's S3 -> SNS -> SQS -> Lambda
This gives me:
* Persistent events on s3, for replayability
* Multiple consumers via SNS
* Dead letters, retries, etc via SQS
Maybe from a latency perspective this is slow, but my system can tolerate latency at the level of minutes, so I'm really doubtful that my messaging system will matter.
Most time is spent downloading payloads from S3, as far as I know. I batch up and compress proto events to optimize this.
I haven't tried scaling out my system but I'm confident that message passing is the least of my concerns.
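In Python pseudo-handler form it looks something like the sketch below (my actual services are Rust; the unwrapping matches the standard S3 -> SNS -> SQS envelope, but `process` and the gzip batching are stand-ins, not my real code):

```python
import gzip
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Unwrap SQS records carrying SNS-wrapped S3 event notifications."""
    for record in event["Records"]:
        sns_envelope = json.loads(record["body"])
        s3_event = json.loads(sns_envelope["Message"])
        for s3_record in s3_event["Records"]:
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            # Pull the persisted batch down from S3; this dominates run time.
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            events = gzip.decompress(body)   # assuming gzip-compressed batches
            process(events)                  # stand-in for the real domain logic

def process(events: bytes) -> None:
    ...
```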
No, this completely misses the point! The point is the development model, not the language syntax.
Static types assume a "static" phase: an edit-compile-run cycle, with bright lines between each stage.
But as a Smalltalk programmer, you inhabit the system and build it inside-out, while it is running. For example, during development it is routine to query live objects. The lines between the edit-compile-run phases disappear.
Static types don't make sense in the Smalltalk model because there's no static phase.
I agree with that. The interesting aspect of dynamic languages is that the program is a living thing: you can interact with it, it can interact with itself, and you could even effectively turn it into an entirely new program without ever shutting it down, like the Ship of Theseus. Smalltalk, Lisp Machines and the Erlang Virtual Machine were pushing this idea.
Which is why I feel that a dynamic language that doesn't focus on interactivity (a great REPL-based development cycle, hot-swapping capabilities, strong code-as-data utilities) kinda misses the point in the trade-off between static and dynamic, sacrificing safety for too little. Jupyter notebooks seem to have recently brought a little of the first to the mainstream, at least.
Static types assume no such cycle! Milner's MLs (which introduced the hugely influential simple type theory to functional programming in the late 70s, and from which OCaml and Haskell are derived) supported global type inference and were very "lispy". They were first used for interactive theorem proving, where you spent all your time spinning around ML's REPL, updating a global environment in which your global bindings have nice static types, and when you had finished for the day, you saved your image (you save-lisp-and-die'd).
This is still a way to use OCaml today, and Poly/ML, the Standard ML implementation used for the Isabelle theorem prover, makes heavy use of an image-based model.
> If I offer you a choice between language A and language B...
Realistically you don't have the choice between hypothetical buffet languages A and B, you have a choice between Java or C# and Python or Javascript and maybe a handful of others. Those are the type systems you will have to deal with, most likely.
I'm sure the type inference in Haskell (or similar) is fantastic but virtually nobody uses those languages so those benefits are purely theoretical for most programmers.
By the way, static type inference in Javascript also has gotten pretty good and in many cases it can catch all of the bugs that an ordinary type checker would catch.
> Realistically you don't have the choice between hypothetical buffet languages A and B, you have a choice between Java or C# and Python or Javascript and maybe a handful of others. Those are the type systems you will have to deal with, most likely.
> I'm sure the type inference in Haskell (or similar) is fantastic but virtually nobody uses those languages so those benefits are purely theoretical for most programmers.
I think this argument would hold more water if the dynamic language under discussion wasn't Smalltalk...
> Not really, in my opinion. Type inference especially has come a long way.
Does dynamic typing also make writing tests easier?
For example, when writing tests in Python it's relatively trivial to mock out specific methods or functions. However, when I was writing tests in Go, I realized that to test things which required mocks I would have to change all the type signatures to use interfaces instead of structs, and that required writing a new interface as well.
I admit I am not entirely sure whether this difficulty stems from static vs dynamic typing or something else.
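Concretely, the Python side of that comparison is roughly this: no new interface, no signature changes, just patch the attribute in place (the `UserClient` example is made up):

```python
from unittest import mock

class UserClient:
    """Hypothetical client that would hit the network in production."""
    def fetch_user(self, user_id):
        raise RuntimeError("real network call")

def greeting(client, user_id):
    return f"Hello, {client.fetch_user(user_id)['name']}"

def test_greeting():
    client = UserClient()
    # No new interface, no signature changes: patch the method in place.
    with mock.patch.object(client, "fetch_user", return_value={"name": "test"}):
        assert greeting(client, 1) == "Hello, test"

test_greeting()
```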
Honestly this is why I think dynamic languages with optional type systems tacked on are the best.
You get a huge amount of power in your type system (Python is actively working on dependent types and supports variance first-class), but for code where that's a burden (test cases), you can drop it.
Maybe powerful macros that let you copy an interface (Python's mock.patch with autospec=True, but better) would solve the same problems, but you don't generally have that option.
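To spell that out: autospec builds a mock that enforces the real callable's signature, which is most of what that hypothetical "copy an interface" macro would buy you. A small sketch with a made-up function:

```python
from unittest import mock

def send_email(to, subject, body):
    """Hypothetical function whose signature the mock should enforce."""

fake = mock.create_autospec(send_email)
fake("a@example.com", "hi", "hello")   # fine: matches the real signature
# fake("a@example.com")                # TypeError: missing required arguments
```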
Go is not very impressive when it comes to type system power though. My bet is that if you wrote your statically typed code in Crystal instead, then you wouldn't have any problems creating your mocks. There is nothing in the concept of static typing that forbids duck typing - it is just that most type systems are weak enough that it doesn't work.
Tests tend to test values. Types generally don't catch wrong values, so if you implement add() as subtract(), the signatures are the same, but you still get a wrong result.
However, tests of values incidentally also test the types, because values have types and for the values to match, the types must also match.
So if you write the tests that you need to write anyhow, the ones for the values, you have also tested the types, without extra effort.
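In code, the claim is something like this (a deliberately buggy `add`):

```python
def add(a, b):
    return str(a) + str(b)   # a bug: add(2, 3) returns the string "23"

# Running this raises AssertionError - the plain value test doing its job.
# It catches the wrong value and, incidentally, the wrong type, with no
# separate type-level test needed.
assert add(2, 3) == 5, f"expected 5, got {add(2, 3)!r}"
```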
Tests check the values, but that doesn't mean it's the only thing they test. When you say 'incidentally' you are conflating the goal with the means.
Anyway, my point is that dynamic typing creates a whole category of errors, makes it much easier to make several other kinds of mistakes and, what is worse, delays the moment at which they become obvious.
> but that doesn't mean it's the only thing they test.
That was exactly my point. In checking values, they also check types, without any extra effort.
> creates a whole category of errors
It doesn't create them. It just doesn't prevent them (type errors) at the language/compiler level. But since you have to test the values anyway, that doesn't really matter.
> delays the moment
Not really. Dynamic languages make possible environments that compile+run before the static compiler is done compiling.
That line of reasoning is the same one used by certain persons who "sell protection": they want you to pay to solve a problem that was much smaller before they appeared. Of course they try to convince you that the problem was already there.
> Not really. Dynamic languages make possible environments that compile+run before the static compiler is done compiling.
You're confusing dynamic types with dynamic compilers. Or maybe you mean speed? There are very fast compilers; I assume your reference is C++, which is a dog. Anyway, that's a red herring: the delay I was talking about is logical, not a matter of implementation.
Hmm...static type checking is something you add to a programming language, so you seem to have your causalities mixed up a bit.
> confusing dynamic types with dynamic compilers
Nope. Most of the compilers for languages with rich static type systems are slow, and getting slower. Yes, C++, but also Haskell, Scala and Swift. Some have Turing-complete type systems, so you actually have no guarantee that compilation will even terminate, never mind the time it will take.
An arguably bigger point is incrementality. A language/system like Smalltalk lets you compile one method at a time without stopping the running program, and you can take an exploratory step that is allowed to be inconsistent without having to fix all related code first.
The development experience is something that has to be seen and lived to be believed.
> Most of the compilers for languages with rich static type systems are slow, and getting slower
It's not the types that make Haskell slow to compile, as you can verify by running a type-check-only pass using -fno-code. You will find it runs an order of magnitude quicker than a full compile.
Maybe you know this, but type inference goes back to the 1970s. It's taken this long to start seeing it in mainstream languages.
Similarly, pure functional strongly statically typed languages with type inference, pattern matching, algebraic data types and non-null types have been around for literally decades yet we're still stuck debating about the so-called benefits of dynamically typed languages.
It's risk and reward. You're giving away seconds on every line for something that might take only a little debugging if it's ever encountered.
I'd rather get the code written and the problem solved than make converting a string into a date the problem to be solved. There are plenty of problems in figuring out whether something is a date or not, but it's worth it.
Depends on what exactly you mean by “type inference”, “fast”, and “easy as Python”.
Smalltalk has been pretty fast for ages (without inference, and may or may not be as easy as Python to you).
JavaScript JITs can use a form of type inference, so you get the benefits of speed with no change to syntax. Again, it’s pretty fast, and “easy” is in the eye of the beholder.
This is the part I always fail to grok in discussions about type systems. For me it is just the opposite: I typically don't find dynamic typing to be worth the trouble, so I fail to understand this sentiment when going the other direction.
Maybe it's because my degree is in mathematics (lots of proofs written out long-hand), and my first language was Java? I have since all but abandoned many of the ideas that Java so rigidly enforces, OOP among them, but static typing just has so many advantages for me that I have no desire to get rid of it.
One thing I've noticed in some recent work is that the static language I'm using (Go) allows me to both be lazier and get more done than comparable work being done in dynamic languages. I'm not reading or writing tons of documentation (I'm reading other people's code directly, or just looking at the types involved and moving on). I'm not constantly handling byzantine runtime errors or having to memorize the arbitrary intricacies of overbearing frameworks in order to be productive.
I don't need to be on guard quite so much; I lean on static analysis. In general I just don't need to hold so much extraneous crap in my head - it's all spelled out right there in the code, and I'm free to think about larger concerns, like the best way to solve the problem at hand. And when I circle back around to some 10-20k line library I wrote 3-6 months ago, I can read, understand and refactor parts of it quickly.
People often talk about developer comfort and speed being improved by dynamic languages, and maybe that's true if Rails solves all your business needs, but for many of the tasks I've worked on, that just hasn't been borne out over the past 10 years. I have no doubt that dynamic languages seemed a breath of fresh air after fighting with C/C++/Java, but to my eyes Ruby/Python/Javascript etc. were always a bridge too far for a lot of tasks.
It's not worth the trouble with "write once" programs that you never refactor.
I think the problem is that mainstream statically typed programming languages come with terrible or non-existent metaprogramming facilities to make up for their type constraints.
Rust supposedly has good metaprogramming but then it also has a type system that is so obnoxious it's not "worth the trouble in many situations".
There has also been improvement in dynamic typing. For instance, ECMAScript 6 default arguments mean that the programmer can effectively indicate the types of arguments a function or method is to be called with.
From the programmer's point of view, default arguments are a cheap way to achieve some benefits of "type declarations". And they are especially cheap when you consider that in return you also get - default arguments. They are a win-win.
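The point above is about ES6, but the same trick reads naturally in Python (a made-up function):

```python
# The defaults signal the intended argument types (a str and an int)
# to readers and to tooling that inspects them, and you get actual
# default behaviour into the bargain.
def repeat_greeting(name="world", times=1):
    return ", ".join(f"hello {name}" for _ in range(times))

print(repeat_greeting())           # hello world
print(repeat_greeting("HN", 2))    # hello HN, hello HN
```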
Consider also the discussion about Smalltalk preserving system integrity by allowing only methods to manipulate internal state.
This means that anything that gets modified inside an object is modified by methods of that object. That makes it easy for you to add dynamic type checks whenever you write to an instance variable, which goes a long way towards alleviating type problems.
Whereas in a language where objects' properties can be modified from the outside, there is no way to intercept such modifications and type-check them - except that in recent developments such as ES6 you can define setters, which get called automatically when you (or anybody else) try to set the value of a property. That means you can put the type checks there. This is just one example of the improvements in dynamic type-checking.
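Python's property decorator gives the same interception point, for what it's worth; a minimal sketch with a made-up class:

```python
class Account:
    def __init__(self, balance=0):
        self.balance = balance   # routed through the setter below

    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, value):
        # Every write is intercepted here, so the type check lives in one place.
        if not isinstance(value, int):
            raise TypeError(f"balance must be int, got {type(value).__name__}")
        self._balance = value

acct = Account()
acct.balance = 100       # fine
# acct.balance = "100"   # TypeError at the moment of the bad write
```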
Now, ES6 is great, but it does not force you to create such setters, whereas Smalltalk does force you to create methods if you want to modify property values.
This great feature is somewhat diminished in value by the fact that (in Smalltalk) many methods can write the same instance variable, so it sometimes becomes tedious to figure out which method (including inherited ones) wrote a delinquent value to the instance variable. But then Smalltalk IDEs let you browse all methods which read or write a given instance variable, which helps in such a situation. Then you can think about refactoring so that there really is only one setter for a given instance variable.
My feeling is that it is not about improvement but democratisation: OCaml has been around since 1996 and its type inference is stellar. I was shocked when I first touched C and Java after two years of OCaml.
That's probably more controversial here than his views on OO.