Java uses table dispatch by default, but you can opt into direct dispatch by using the final keyword. C++ uses direct dispatch by default, but you can opt into table dispatch by adding the virtual keyword. Objective-C always uses message dispatch, but allows developers to fall back to C in order to get the performance gains of direct dispatch. Swift has taken on the noble goal of supporting all three types of dispatch.
I kinda already knew this but it's nice and succinct.
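For anyone curious, here's a rough sketch of how you might opt into each of the three in Swift on an Apple platform (the type and method names are made up for illustration, and the optimizer may still change what actually happens at runtime):

```swift
import Foundation

// NSObject inheritance is needed so `@objc dynamic` members can be exposed
// to the Objective-C runtime (Apple platforms only).
class Animal: NSObject {
    // Table (vtable) dispatch: an ordinary, overridable class method.
    func speak() -> String { return "..." }

    // Direct dispatch: `final` rules out overrides, so the compiler can
    // call (or inline) this exact implementation.
    final func legCount() -> Int { return 4 }

    // Message dispatch: `@objc dynamic` forces the call through the
    // Objective-C runtime (objc_msgSend), which is why it can be swizzled
    // or observed with KVO.
    @objc dynamic func describeSelf() -> String { return "an animal" }
}

class Dog: Animal {
    override func speak() -> String { return "Woof" }
}

let pet: Animal = Dog()
print(pet.speak())        // looked up in the class's vtable
print(pet.legCount())     // resolved at compile time
print(pet.describeSelf()) // resolved by the runtime at every call
```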
> Most languages support one or two of these. Java uses table dispatch by default, but you can opt into direct dispatch by using the final keyword.
This is not accurate. Java does not use table dispatch by default: the HotSpot JIT compiler devirtualizes most calls to direct dispatch (and often inlines them), falling back to table dispatch only where it cannot prove which implementation will be called; and final does not particularly enable direct dispatch.
> Like many myths about Java performance, the erroneous belief that declaring classes or methods as final results in better performance is widely held but rarely examined. The argument goes that declaring a method or class as final means that the compiler can inline method calls more aggressively, because it knows that at run time this is definitely the version of the method that's going to be called. But this is simply not true. Just because class X is compiled against final class Y doesn't mean that the same version of class Y will be loaded at run time. So the compiler cannot inline such cross-class method calls safely, final or not. Only if a method is private can the compiler inline it freely, and in that case, the final keyword would be redundant.
> On the other hand, the run-time environment and JIT compiler have more information about what classes are actually loaded, and can make much better optimization decisions than the compiler can. If the run-time environment knows that no classes are loaded that extend Y, then it can safely inline calls to methods of Y, regardless of whether Y is final (as long as it can invalidate such JIT-compiled code if a subclass of Y is later loaded). So the reality is that while final might be a useful hint to a dumb run-time optimizer that doesn't perform any global dependency analysis, its use doesn't actually enable very many compile-time optimizations, and is not needed by a smart JIT to perform run-time optimizations.
> Like many myths about Java performance, the erroneous belief that declaring classes or methods as final results in better performance is widely held but rarely examined.
Yes, I was about to point this out; it's one of the more resilient misconceptions about Java. I've lost count of how many times I've had to explain this and remove a boatload of unnecessary "final".
> but you can opt into direct dispatch [in Java] by using the final keyword
Most Java JIT compilers will also opt you into direct dispatch automatically where possible, for example a class with no subclasses or an interface with only one implementation.
They'll also opt you into direct dispatch per call site if a given call site has only ever seen objects of a single class.
Yes, these optimisations have to be done at runtime. In both cases the optimisation could become invalid if more classes are loaded later, so it also needs to be reversible at runtime.
This is an example of how it is possible for a JIT to outperform a static compiler.
Interestingly, while the C# language uses static dispatch by default, it is at least the case that Microsoft's compiler always emits the `callvirt` instruction[1]. This is because the language spec requires that a method call to a null instance of a reference type must throw a NullReferenceException, and callvirt is apparently the cheapest way to do that on the CLR.
They did a great job of papering over almost all the typical Objective-C and Cocoa things in Swift, but you can't escape some of Objective-C's weird underlying mechanisms creeping in without sacrificing compatibility.
Great read, good to know this stuff. I encountered a few of these examples while poking around with more complex features in the language.
I'm not sure what they're sacrificing in terms of compatibility. Do you have any examples? There is a degree of complexity, but providing interoperability with any language more complex than C will introduce some.
Makes you wonder how long before Apple stops making these compromises for compatibility. In the near term, sure, but 3-5 years down the road they may favor Swift.
I must say, while Swift can be very readable, just reading through this article put me off of the language. If you don't work with Objective-C or UIKit/Cocoa, is there a reason to use this over another LLVM-based language with cleaner, simpler semantics? The two bugs outlined are terrifying.
Which other LLVM-based language do you recommend? Swift is, to me, elegant and intuitive to use, quick to write, and supports a lot of nice features that you don't find in, say, C++, as well as things that Rust lacks support for.
`Optional`s baked into the language, `Result` types, `guard`, `if let`, pattern matching, great support for Rx, and lots of other fun and cool stuff; easy use of high-level features with the possibility to dive deeper, while still being able to use all the features of Cocoa, Cocoa Touch, and the other frameworks of macOS, iOS, watchOS, and tvOS as a native citizen, and to write code that runs better every day on Linux. It's a really great package. Swift Package Manager is also great and gets better every day.
Swift is a good combination: a great language that supports a lot of cool platforms and has a lot of tooling available.
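Here's a tiny sketch of a few of those features working together (the types and data are made up purely for illustration):

```swift
enum FetchError: Error {
    case notFound
}

// A Result type: either a value or a typed error.
func findAge(for name: String) -> Result<Int, FetchError> {
    let ages = ["Ada": 36, "Grace": 45]
    guard let age = ages[name] else { return .failure(.notFound) }
    return .success(age)
}

// Optionals, `guard`, and pattern matching working together.
func describeAge(of name: String?) -> String {
    // `guard` unwraps the optional or exits early.
    guard let name = name else { return "no name given" }

    // Pattern matching over the Result.
    switch findAge(for: name) {
    case .success(let age) where age >= 18:
        return "\(name) is an adult (\(age))"
    case .success(let age):
        return "\(name) is \(age)"
    case .failure(.notFound):
        return "\(name) is not in the list"
    }
}

// `if let` at the call site.
let maybeName: String? = "Ada"
if let name = maybeName {
    print(describeAge(of: name))   // "Ada is an adult (36)"
}
```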
Funny, the description of message-passing dispatch is the same as Perl's and Python's method resolution, including the caching of recently resolved methods for fast direct dispatch…
The only analogy I can think of is to a "witness" to a logical proposition in mathematics. In this case the witnesses "prove" that the given concrete type implements the given abstract type.
If that's the case it's kind of a weird way to put it, though it sort of squares with the witnesses in proof-relevant formulations of logic, where you care about the form of a proof and not just what it proves.
Good question, I'll update the post if I can figure it out. It turns out that Swift uses both witness tables and virtual tables, something I'm realizing I didn't make very clear in the post. https://github.com/apple/swift/blob/master/docs/SIL.rst#vtab... has some more interesting information.
"a witness is a specific value t to be substituted for variable x of an existential statement of the form ∃x φ(x) such that φ(t) is true"
Less abstractly: a witness is someone who, because he has looked at the data, can tell you something about that data.
A witness table contains the statements of multiple witnesses. In this case, the statements are of the form "A is a value for x that makes the statement 'you can find the code implementing method M for class C at address x' true".
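In Swift terms, the witnesses are the concrete implementations that satisfy a protocol's requirements; roughly, each conformance carries a table mapping requirements to those implementations, and calls made through the protocol go through that table. A hedged sketch (the protocol and types are made up for illustration):

```swift
protocol Drawable {
    func draw() -> String
}

struct Circle: Drawable {
    // This implementation is the "witness" for the `draw()` requirement
    // in the conformance `Circle: Drawable`.
    func draw() -> String { return "circle" }
}

struct Square: Drawable {
    func draw() -> String { return "square" }
}

// The compiler can't know the concrete type here, so the call is
// dispatched through the witness table carried with each value.
func render(_ shapes: [Drawable]) {
    for shape in shapes {
        print(shape.draw())
    }
}

render([Circle(), Square()])  // prints "circle" then "square"
```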
As the article states, "Most languages refer to this as a 'virtual table'." I find the use of "witness table" strange here. I know the term from string-matching algorithms, where the witness statements take forms such as "if you start comparing here, the fourth character is the first that doesn't match".
The problem with Arch is that Swift relies quite heavily on Glibc, so the work to port it to an alternate libc is very substantial.
The other problem is that the mess basically starts there: e.g. Swift programs in practice also depend on Glibc, so after you kill the first turtle it's still turtles all the way down.
You could build an Arch with Glibc, but why you would want to is not immediately clear.