It's fairly interesting that this article doesn't mention actors, futures/promises, and async/await-style coroutines, which are all widely available in the major languages today and broadly used (with the possible exception of golang).
Frankly, I think the concurrency story is one of the weaknesses of golang. Contrary to what this article says, you cannot stop thinking about concurrency in your golang code like you can in some of the more avant-garde concurrency models (STM), and it doesn't provide nearly the sophistication in type or post-build checking of other languages.
All of those interfaces are trivial to implement in Go precisely because Go implements a much stronger abstraction. By contrast, if you only have those other abstractions you're quite limited in how you can structure your code. (You might not realize how limited you are, however, if those are your only options.)
As the article says, most languages settle for those interfaces because they can mostly be implemented as libraries with minimal refactoring of the existing implementations and their execution model.
They have their uses but they're not universal enough. If you think actors, futures, and async/await are so great, imagine having to write every single function invocation in that manner. It would be unimaginable outside of some special-purpose declarative DSL. By contrast, the concept of a thread--linear flow of logical execution of statements and expressions--is fundamental to programming, even in functional languages with lazy evaluation or automagic parallelism. It's a basic building block similar to functions. And much like functions, it's incredibly useful to be able to easily compose those blocks, which is where channels and message passing come into the equation. One way to compose them is with actor and future interfaces, but that's not the only way and not always the most appropriate way.
Threads ended up with a bad name because of (1) performance and (2) races, but that conflates the abstract concept of a thread--a construct that encapsulates the state of recursive function calls--with particular implementations. Goroutines are threads, pure and simple, but [mostly] without all the baggage of traditional implementations. (That they can be scheduled to execute in parallel complicates the construct but also makes them useful for the same reasons traditional OS threading constructs were predominantly used, except with much less of the baggage.)
The biggest problem with Go is that it's not easy to implement those as libraries. This is a combination of the golang story around generics and its opinionated strategy on concurrency.
Conversely, most other modern, mainstream languages can mimic golang concurrency as a library.
Or it's possible I don't agree with its premise that Go solves the problem better than other languages because it hides async behaviors from the type system.
In practice it doesn't. Asynchronous behavior leaks into golang implementations in worse ways: everything from the near-universal use of channels as poorly implemented promises to the horrendous Context being passed to everything for cancellation. The golang concurrency story is weak compared to any language that has a story at all.
> If you think actors, futures, and async/await are so great, imagine having to write every single function invocation in that manner.
I don't see how:
val f = Future { foo() }
// do work...
f.onComplete {
  case Success(res) => useRes(res)
  case Failure(error) => handleError(error)
}
Is any more difficult than:
done := make(chan bool)
var res int
go foo(done, &res)
<-done
useRes(res)
As a matter of fact, the first one is much easier: I don't have to create and pass a channel, or manually handle returning a success or failure.
I read that article before. Here's the thing: the only thing the "go" keyword makes easier is the ability to run any arbitrary function in a green thread without having to modify its signature. However, the moment you want to actually do something useful with it (e.g. communicate with it, cancel it, or read its returned value), you're going to have to pass a channel (or more) or a waitgroup to it anyway, modifying its signature and changing its "color". The issue remains pretty much the same.
I think it's fair to say that JavaScript, Perl, and Python don't have it, notwithstanding Web Workers or rough POSIX threading support; but Lua and Scheme do, even though their threading construct is not based on the system threading model (though strictly speaking that's likewise available to those languages).
What I really meant to get at was that Go provides a flavor of threading that is lightweight, simple, and ergonomic. The threading construct is a first-class citizen and fundamental to both the language design and its implementation tradeoffs. Threading, lexical closures, and GC were designed and implemented holistically. Go takes a performance hit for its seamless support of closures (lots of incidental heap allocation), for example, but they did it because notwithstanding the Go authors' utilitarian goal it was important that the core abstractions remained relatively unadulterated and natural. If this wasn't the case (if it was just about avoiding manual memory management), Go could have required explicit capturing of lexicals, like C++ and Python do. People complain that Go doesn't have a lot of sophisticated features, but that's because everybody is focused on typing and generics. But modeling execution flow is at least as interesting academically and important commercially. While Go seems simple, supporting these constructs the way Go does is actually really difficult. Which is why it's so rare to find.
I intentionally didn't mention channels because, while syntactically nice, the real internal complexity (and payoff) comes from the closures. Channels are something of an implementation detail which you can easily hide inside closures in a functional style of programming; threading + closures allow you to implement coroutine semantics very cleanly (i.e. no async/await necessary) and, if you so desire, transparently. (And it just occurred to me that async/await is such a misnomer. In a coroutine, caller and callee are logically synchronous. The fact that languages like Python, C#, and C++ use async/await terminology shows how they put the cart before the horse--these constructs were designed and implemented for the very specific use case of writing highly concurrent async-I/O services, and they stopped at making that use case moderately convenient. They're a very leaky abstraction. See the function color problem mentioned in the article.)
> I think it's fair to say that neither JavaScript
Ah yes, how could I forget Javascript :)
> The fact that languages like Python, C#, and C++ use async/await terminology shows how they put the cart before the horse
Of these, I'm very familiar with C# (15+ years' experience). C# had great threading constructs long before it added async/await, and still does. It actually provides a great variety of threading constructs - you can go low level with mutexes, wait handles, and threads; then there's the Task.Run abstraction, Parallel.ForEach, the TPL...
When async/await in C# was first announced, it was hailed as making concurrency much simpler for devs, but I've always found threads much simpler to reason about and debug, while async/await gives you a variety of footguns that can be difficult to debug.
FWIW, I've been using async/await in C# for years now, but coming from much more of a threading background, I confess it's only now beginning to feel intuitive. I dunno, maybe if new devs come to concurrency from the async perspective first, it's easier to grok.
Maybe. But I've been developing golang full time in high-scale concurrency environments for 4 years, working with a team of similar people. It's an opinion that is near universally shared on that team.
At high concurrency levels almost everything abandons standard golang concurrency patterns and tools.
Consumer-facing systems. System-wide throughput between 6-12 million QPS (daily low/high), with an average query body size of 1.5KB. Each server tops out at ~130K QPS. On a system with two 1Gb NICs we pop the NIC. On a 10Gb NIC we pop the CPU.
Current bottleneck is the golang http/net libs. Would likely need to rewrite it from the NIC up to do better.
That's an issue with the http/net libraries, not the concurrency model.
At really high throughput you can run into issues with the kernel's networking and driver stack. I've encountered situations with my own homegrown event libraries (mostly C or Lua+C; I've never used Go) that were bottlenecked in the kernel. I've also seen issues that were fundamentally related to poor buffering and processing-pipeline strategies that resulted in horrible performance. For example, I can get an order of magnitude greater streaming throughput using my own protocol and framing implementations than when using ffmpeg's, though I use ffmpeg's codecs and a non-blocking I/O model in both cases, all in C. And that's because of how I structured the flow of data through my processing pipeline.
There is no general model of concurrency that can solve that, and I've never seen any model that was easier in the abstract to tweak than the others. Those are implementation issues.
I don't know if it would have been holistically better; golang has lots of advantages.
But the concurrency would have been more straightforward on the JVM, because the language allows for more choices and there are lots of options that get you there.
He does. He referenced it when referring to the color of functions: "I've enjoyed Bob Nystrom's What Color is Your Function[1] for explaining how annoying the model of "non-blocking only here, please" is."
The linked article [1] explains his take on why futures, etc. are not good enough.
I don't know who owns the term "futures", but the limitations of javascript promises are just choices that javascript made.
In Java, you can write:
List<Future<String>> futures = new ArrayList<>();
// ...loop that populates futures...
List<String> strings = futures.stream().map(f -> {
    try { return f.get(); }                 // get() declares checked exceptions
    catch (Exception e) { throw new RuntimeException(e); }
}).collect(Collectors.toList());
The loop will run a series of futures (potentially asynchronously...the use of a Future is decoupled from the actual choice of thread pool, etc). The map will collect the results in a blocking fashion. And Future.get() will rethrow any errors that were uncaught in the execution.
For most use cases of async programming you never need to touch channels and goroutines. If you build a webserver in Go it will be highly concurrent out of the box. As a developer you just write synchronous code. Here's a nice tutorial: https://getstream.io/blog/go-1-11-rocket-tutorial/
Just my own anecdote, but I write Go and TypeScript daily and I'd take goroutines and Go's other concurrency primitives over async/await any day of the week.
> actors, futures/promises and async/await style co-routines which are all extremely available in all of the major languages
Language features are indeed available. Runtime features backing them are only available in Go, Erlang, and .NET.
If you only need concurrency for CPU-bound calculations, even C++ has decent options, e.g. OpenMP works great for my tasks. However, OpenMP offers nothing for IO. The point of Go's coroutines or .NET's async/await is that they allow you to run a mix of CPU-bound and IO-bound code, and do so in parallel, utilizing all available hardware threads.
Scheduling is not enough. For efficient IO you need a scheduler that’s tightly coupled with OS-specific kernel mode APIs for async IO.
Java has support for that in java.nio.channels but that’s limited and is not integrated with the rest of their standard library.
C++ can do that, too, but it's quite hard in practice.
Runtimes like Go, .NET, and Erlang already have that stuff included. There are some limitations (e.g. Erlang doesn't support that on Windows; BTW they call the feature "kernel poll"), but still, it works great and is very easy to use.
There are indeed multiple third-party libraries for that in many languages, e.g. for C++ we have libuv, boost.asio, etc. However, built-in solutions have upsides.
They work out of the box, i.e. deployment is simpler.
They're integrated into the language, e.g. most network IO in golang is asynchronous under the hood; the compiler and runtime do that automatically.
They're integrated into standard libraries, e.g. in .NET all streams, be it files, sockets, or transforms like compressors and encryptors, support asynchronous operations.