Go hits the concurrency nail on the head (thegreenplace.net)
284 points by nikbackm on Oct 4, 2018 | 304 comments



Go concurrency is just threads. It's a particularly idiosyncratic userland implementation of them.

There are two claims here I'd like to unpack further:

1. "I've measured goroutine switching time to be ~170 ns on my machine, 10x faster than thread switching time." This is because of the lack of switchto support in the Linux kernel, not because of any fundamental difference between threads and goroutines. A Google engineer had a patch [1] that unfortunately never landed to add this support in 2013. Windows already has this functionality, via UMS. I would like to see Linux push further on this, because kernel support seems like the right way to improve context switching performance.

2. "Goroutines also have small stacks that can grow at run-time (something thread stacks cannot do)." This is a frequent myth. Thread stacks can do this too, with appropriate runtime support: after all, if they couldn't, then Go couldn't implement stack growth, since Go's runtime is built in userland on top of kernel threads. Stack growth is a feature of the garbage collection infrastructure, not of the concurrency support. You could have stack growth in a 1:1 thread system as well, as long as that system kept the information needed to relocate pointers into the stack.

Goroutines are threads. So the idea that "Go has eliminated the distinction between synchronous and asynchronous code" is only vacuously true, because in Go, everything is synchronous.

Finally, Go doesn't do anything to prevent data races, which are the biggest problem facing concurrent code. It actually makes data races easier than in languages like C++, because it has no concept of const, even as a lint. Race detectors have long existed in C++ as well, at least as far back as Helgrind.

[1]: https://blog.linuxplumbersconf.org/2013/ocw/system/presentat...


That Go concurrency is threads is the point of the article. It says this explicitly: "You can think of goroutines as threads, it's a fairly good mental model. They are truly cheap threads". If OS threads become as cheap as userland threads then the implementation of userland threads becomes unnecessary, but the conceptual model stays the same.

In other languages, asynchronous APIs return futures or have explicit callbacks, whereas synchronous APIs return the type directly, thus creating a type level distinction between asynchronous and synchronous code. Go's model eliminates that distinction even though it calls asynchronous OS APIs under the hood.
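To make that concrete, here's a minimal sketch (fetch is a hypothetical stand-in for a blocking network call): the same function is used synchronously or concurrently, and no Future/Promise type appears in either signature.

    package main

    import (
        "fmt"
        "time"
    )

    // fetch is a hypothetical blocking call; the runtime parks the
    // goroutine and multiplexes it onto async OS APIs under the hood.
    func fetch(url string) string {
        time.Sleep(100 * time.Millisecond) // stand-in for network I/O
        return "response from " + url
    }

    func main() {
        // Synchronous use: just call it. No Future<T> in sight.
        fmt.Println(fetch("https://example.com/a"))

        // Concurrent use: the exact same function, unchanged.
        results := make(chan string, 2)
        for _, u := range []string{"https://example.com/b", "https://example.com/c"} {
            go func(u string) { results <- fetch(u) }(u)
        }
        fmt.Println(<-results, <-results)
    }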

Go isn't the king of cheap threads, though. Some Cilk-style task-parallelism implementations have figured out clever tricks to make threads and synchronisation even cheaper. They're so cheap, in fact, that a recursive Fibonacci function that spawns threads for its recursive calls is almost as fast as one that doesn't. This is achieved by spawning those threads lazily: if a core is idle, it looks at the call stacks of other cores and retroactively spawns threads for some of the remaining work on those stacks. It steals that work by mutating that call stack in such a way that if the other core returns to the stack frame it will not perform that work itself, but will use the result computed by the core that stole the work. This not only avoids thread-spawning cost unless the parallelism is actually used, but also avoids synchronisation cost, because synchronisation is only necessary if work was actually stolen. Cilk-style task parallelism traditionally focuses on parallelism rather than concurrency, but there's no reason why the same implementation strategies couldn't work for concurrency. OS threads have no hope of beating this.
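For contrast, here is roughly what "spawn a thread per recursive call" looks like as a naive Go sketch. The eager goroutine spawn and channel synchronisation are paid on every call, which makes this far slower than a serial version; that per-call overhead is exactly what the lazy, steal-driven spawning described above avoids:

    package main

    import "fmt"

    // fib eagerly spawns a goroutine for one recursive call.
    // A Cilk-style runtime would instead spawn lazily, only when
    // an idle core actually steals the pending half of the work.
    func fib(n int) int {
        if n < 2 {
            return n
        }
        c := make(chan int)
        go func() { c <- fib(n - 1) }() // eager spawn, paid on every call
        b := fib(n - 2)                 // computed inline on this core
        return <-c + b
    }

    func main() { fmt.Println(fib(25)) }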


Note that there is a widely-used implementation of Cilk for Rust, known as Rayon, used in Firefox Quantum among other projects.


After a web search I read that Rayon is a word play on Cilk, but neither is a dictionary word to me. Would someone explain the word play in the names for us non-native speakers?


Rayon[1] is artificial silk.

[1]: https://en.wikipedia.org/wiki/Rayon


I've never heard of cilk before, but it seems fascinating. Is there any material I could read on how this is implemented?


This is the canonical Cilk paper: http://supertech.csail.mit.edu/papers/cilk5.pdf


Here is the paper describing the lazy thread spawning implementation: https://ece.umd.edu/~barua/ppopp164.pdf


>Goroutines are threads

This is something I have to continually remind newer engineers using Go about. Goroutines have exactly the same memory observability / race conditions / need for locking/etc. as any threaded language. Every race bug that exists in e.g. Java also exists in Go. The only real difference for most code is that it pushes you very hard to use channels directly, as they're the only type-safe option.
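A minimal sketch of the classic example; this is the same bug you'd write in Java, and `go run -race` will flag it:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        counter := 0 // shared, unsynchronized
        var wg sync.WaitGroup
        for i := 0; i < 1000; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                counter++ // data race: unsynchronized read-modify-write
            }()
        }
        wg.Wait()
        fmt.Println(counter) // often < 1000; `go run -race` reports the race
    }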

There are some neat tricks lower down, e.g. where you don't have to explicitly reserve a thread for most syscalls (but there are caveats there too, which is why runtime.LockOSThread() exists), which are legitimately nice conveniences. But not a whole lot else.


> The only real difference for most code is that it pushes you very hard to use channels directly, as they're the only type-safe option.

It is, but it is also a super important difference: Go provides built-in libraries that make inter-thread synchronization more bearable for the average user, e.g. channels and waitgroups. These are a lot harder to misuse than bare mutexes and condition variables. Since those are not built into most other languages, and are too complex for most users to implement on their own (especially when select{} is required), the solutions there often end up more error-prone.
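For instance, a minimal sketch of result handoff through a channel: there is no shared counter to lock, and the receive itself is the synchronization:

    package main

    import "fmt"

    func main() {
        results := make(chan int)
        for i := 0; i < 4; i++ {
            go func(id int) {
                results <- id * id // hand the result off; no shared state
            }(i)
        }
        sum := 0
        for i := 0; i < 4; i++ {
            sum += <-results // the receive is also the synchronization
        }
        fmt.Println(sum)
    }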


Practically every language includes stuff higher level than bare mutexes and atomics. That people use them doesn't mean other options aren't available. And yeah, totally 100% agreed, most people should never implement them themselves.

But "futures", "blocking queues", "synchronized maps", and "locked objects" (e.g. a synchronized wrapper around a java object) are extremely common and often higher level than channels and selects and waitgroups[1].

[1]: Waitgroups in particular are covered by ordinary counting semaphores, marking them as the low-level constructs they are, and are also available nearly everywhere.


True, there are plenty of primitives available. However, I found many of them are mainly for synchronizing low-level access to shared data (e.g. concurrent data structures, synchronized objects, mutexes).

Primitives for synchronizing concurrent control flow (like Go's select{}) seem less common. E.g. in Java I would have some executor services, and could post all tasks to the same single-threaded executor to avoid concurrency issues. But it's clearly a different way of thinking than with channels/select.
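A small sketch of what that control-flow style looks like (the channel names are made up): waiting on whichever of several events happens first, which has no direct single-primitive analogue among the usual executor/queue tools:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        work := make(chan string)
        done := make(chan struct{})

        go func() { work <- "result" }()
        go func() {
            time.Sleep(50 * time.Millisecond)
            close(done) // signal cancellation/completion
        }()

        for {
            select { // wait on whichever event arrives first
            case r := <-work:
                fmt.Println("got:", r)
            case <-done:
                fmt.Println("done")
                return
            case <-time.After(time.Second):
                fmt.Println("timed out")
                return
            }
        }
    }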

You are also fully right about waitgroups. They are really nothing special. But they might see more use in Go since some structuring patterns are more often used there: E.g. starting multiple parallel workflows with "go", and then waiting for them to complete with a waitgroup before going on.


Select is a bit less common (at least in use) from what I've seen, yea... until you start looking at Promises and Rx. Then it's absolutely everywhere, e.g. `Promise.race(futures...)` (which trivially handles a flexible number of futures, unlike select) or any merging operator in Rx. More often though I see code waiting on all rather than any, and Go doesn't seem to change that.

Channels though are everywhere, and sample code tends to look almost exactly like introductory Go stuff[1] - Java has multiple BlockingQueues which serve the same purpose as a buffered channel, and SynchronousQueue is a non-buffered one. Though they don't generally include the concept of "close"...

But streams do have the "close" concept in nearly all cases, and streams are all over the place in many many forms, and have been for a long time. They generally replace both select and channels, but are usually much easier to use IMO (e.g. `stream.map(i -> i.string())` vs two channels + for/range/don't forget to safely close it or you leak a goroutine). Some of that is due to generics though.
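For reference, a sketch of roughly what that `stream.map(i -> i.string())` one-liner costs in channel form (toStrings is a made-up name), which is the verbosity being described:

    package main

    import (
        "fmt"
        "strconv"
    )

    // toStrings: the channel equivalent of stream.map(i -> i.string()).
    func toStrings(in <-chan int) <-chan string {
        out := make(chan string)
        go func() {
            defer close(out) // forget this and the consumer blocks forever
            for i := range in {
                out <- strconv.Itoa(i)
            }
        }()
        return out
    }

    func main() {
        in := make(chan int)
        go func() {
            defer close(in)
            for i := 1; i <= 3; i++ {
                in <- i
            }
        }()
        for s := range toStrings(in) {
            fmt.Println(s)
        }
    }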

[1]: https://docs.oracle.com/javase/7/docs/api/java/util/concurre...

---

some alternate stream concepts are super interesting too, e.g. .NET's new pipelines: https://blog.marcgravell.com/2018/07/pipe-dreams-part-1.html


I think more languages should have typed channels, but untyped channels are readily available via OS primitives. Unix has had pipe() since 1973.


True! Haven't thought about it, but it's actually an OS-backed untyped channel, which even supports select().

Guess the differences are: It operates on bytes and not on objects, so a custom protocol is needed on top of it. And it can't provide the same guarantees as an unbuffered channel, where sending an item guarantees it actually reached the other side, which makes guaranteed resource handoff through channels a bit easier.
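A minimal sketch of that idea in Go, using os.Pipe (which wraps pipe(2)) as an OS-backed byte channel, with newline-delimited messages as the improvised protocol:

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        r, w, err := os.Pipe() // wraps pipe(2): an OS-backed byte channel
        if err != nil {
            panic(err)
        }
        go func() {
            defer w.Close()
            // the "custom protocol": newline-delimited messages
            fmt.Fprintln(w, "hello from the writer")
        }()
        scanner := bufio.NewScanner(r)
        for scanner.Scan() { // ends when the writer closes its end
            fmt.Println("got:", scanner.Text())
        }
    }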


> It operates on bytes and not on objects, so a custom protocol is needed on top of it.

That's why typed channels are useful, at least at the copying level (when it's more than just an array of lockable memory addresses underneath and data has to actually get transferred): the typing is the message delimiter, which itself is the protocol.


Nothing you're saying is wrong, but your perspective is totally off.

For someone experienced with C++, or say, Rust (ahem), obviously Go is a bit backwards when it comes to concurrency and race conditions. Obviously you can mimic go's goroutine stacks, obviously you can obtain fast thread switching.

But Go isn't targeting C++ or Rust, and it's not targeting the domains those languages are best at (although admittedly there are some overlaps between Go and Rust). Go is trying to replace Ruby, Python, and JS. For programmers who only know those languages, or haven't had the opportunity to work in more "heavy" languages, Go makes it dead simple and intuitive to do things that previously would have been completely out of reach. All you're arguing is that if you go farther "down" the stack so to speak you can accomplish everything Go does, which of course is true, since it's turtles all the way down.

The fact of the matter is, if someone boots up a brand new Linux laptop, goroutine switching _will_ be faster than thread switching. That doesn't mean Go is super crazy performant, or better than C++ or Rust; it means someone whose only programming experience is Rails apps can now write performant multithreaded code with orders of magnitude less domain experience. Same with race detectors. Multithreaded Python is a complete minefield for race conditions. Sure, you might not segfault, but you have to think about whether to use fork() or spawn() depending on the OS, you have to install libraries to detect races, you have to write special tests. With Go all of that comes out of the box and it makes it _easy_.

Go does nothing new, and a lot of languages do things a lot better. Erlang is mentioned in other comments; Erlang is a fantastic language for concurrency and a fantastic language in general. It's also incredibly hard to find programmers who code in it, or are willing to learn it. It's also incredibly hard to sell to the business people higher up. It's also very hard to find ops people who can competently support an Erlang stack. C++ gives you the power to build formally correct real-time systems, but it also gives you the power to blow your whole leg off if you don't know _exactly_ what you're doing. I can go even further down the stack with C and assembly but I think you get the point: it's all about tradeoffs. Go allows programmers to reason and think about concurrency without having to worry about Linux kernel PRs, without having to worry about how to share memory, without having to worry about stack performance.

*edited for clarity


"But Go isn't targeting C++ or Rust.. Go is trying to replace Ruby"...

If you listen to Ken Thompson and Rob Pike talk about the first days of Go, it was directly targeting C++. They mention the absurdly slow compilation times of C++ code at Google, and the high complexity of code that folks were writing. I believe Steve Francia's "Standing on the shoulder" talk goes into this, but I don't have time right now to re-watch and make sure I'm citing the right talk.


This is correct, but they quickly pivoted to targeting the Python codebases. Rob Pike has a talk where he talks about how Google used C++ and C to rewrite hot Python paths and how they wanted Go to be able to completely replace that whole pattern. Russ also has a blog post (maybe? it also could have been a comment in a github issue, tbh I can't remember) where he mentions converting python programmers was orders of magnitude easier within Google, as the C++ programmers often had rose colored glasses about their own abilities and the tradeoffs of C++.

It's interesting, as I don't think Go would have been successful without the pivot, but I also don't think it would have been as successful if they had started off trying to replace Python.


Well, why didn't they just use C#? It already had most, maybe even all, of Go's current features.

Hell, the main thing of Go, the go op, is basically await.


C# would be a good option today, but at the time it was an expensive proprietary closed-source blob maintained by one of their primary competitors, and async/await was still years away. I don't like Go very much, but it was a reasonable choice for Google.


In 2008, Mono was already in quite good shape.


Why exactly do you say Go is backwards to C++ with regards to concurrency and race conditions? Really curious.

I worked on a sizeable C++ codebase, and it had a home-grown, buggy thread-pooling & task-cancellation engine (similar to Go's context.Context). Go's builtin goroutines were a breeze afterwards. Also, I debugged a race condition in this codebase. Once, and it took me a few weeks of full-steam digging, thinking and mental construction. I didn't even know I had one at the beginning, just had this nagging feeling. (Valgrind would slow the app to a crawl, and it was speed-critical.) In Go, I can just run tests with the -race flag, and it finds me a truckload of races in a blink, pointing to the exact place where they happened, with a stack trace sprinkled on top. I really find it hard to understand how you find the Go experience backwards here. But I'm also open and listening, and very curious whether I could maybe learn something enlightening!


Your experience mimics my experience almost exactly (although it sounds like you've worked on much larger and more complex C++ projects than I have), however I've been lucky enough to work with some extremely talented C++ programmers that have been able to accomplish astounding things with good, clean, C++ code. The problem, of course, is that 99.9% of us are not extremely talented C++ programmers.


Ok; so if you confirm my experience, I don't understand how you can at the same time claim Go is "backwards to C++" w.r.t. concurrency & race conditions. Did you mistype "superior" as "backwards"? Btw, we also had extremely talented C++ programmers. Some of them sent ISO C++ proposals from time to time, and I think some were even accepted.


It seems weird to switch from Python to Go. Python is Lispy in that it maximises flexibility and developer power at the cost of speed and inbuilt correctness checks. Go is Java-ish in that it heavily limits what the developer can do (crippled type system) in exchange for speed and correctness.

I checked my assumptions; you're kind of right about Python developers switching, but it's hard to know the real base distribution of people who know those languages (there might just be more Python/JS developers, not more switching proportionally). Not many Ruby folks though, and plenty of Java/C/C++.

https://blog.golang.org/survey2017-results


> but your perspective is totally off.

Well, his perspective is well known in all Go threads on HN, and this is not only my opinion (look here [1]). He is repeating the same things[2] again[3] and again[4] and again. About M:N in Go, about why Go is worse because it doesn't have a generational GC, but when asked if he reached out to the Go team about that, there is no response. If you look at his comments from the last month you will see how much downplaying of Go there is, plus of other languages. It seems that he only praises one language (ahem) in his comments.

1. https://news.ycombinator.com/item?id=17886153

2. https://news.ycombinator.com/item?id=18101986

3. https://news.ycombinator.com/item?id=17886144

4. https://news.ycombinator.com/item?id=17886122


> It seems that he only praises one language (ahem) in his comments.

So, I've seen this pop up a few times, but I'd also like to make this really clear: Patrick formally stepped down from working on Rust a year and a half ago, and was inactive for a while before then. At this point, he's the same as any other user.

That is to say nothing about my opinions about his opinions, but let's be clear, rather than insinuating things: Patrick speaks for himself, not for the Rust team.


Thanks for the clarification, I didn't know about that. It doesn't change anything I wrote, but it's good to know this was not coming from a current member of the Rust team.


His perspective, while familiar to you, is new to me. While he is adding value, you are simply a distracting commenter on a goose chase.


> Finally, Go doesn't do anything to prevent data races, which are the biggest problem facing concurrent code. It actually makes data races easier than in languages like C++, because it has no concept of const, even as a lint. Race detectors have long existed in C++ as well, at least as far back as Helgrind.

While it can't guarantee correctness outside of runtime (and even then, obviously only if whatever you are running actually triggers the race), a race detector has been part of the Go core since 2012[1].

[1]: https://blog.golang.org/race-detector


Also there's a "best effort" concurrent map access detector that runs even when you don't compile with race detector support enabled.
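For example, a sketch like this will usually die with "fatal error: concurrent map writes" even in a normal, non-race build (it's best effort, so not guaranteed):

    package main

    func main() {
        m := map[int]int{}
        for i := 0; i < 4; i++ {
            go func() {
                for j := 0; ; j++ {
                    m[j] = j // unsynchronized concurrent map writes
                }
            }()
        }
        select {} // block; the runtime check usually aborts first
    }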


So what's preventing Linux from adopting switchto support? I'd love to see the discussion around it.


Beats me! The author no longer works at Google from what I can tell—emails to him bounced.

I'd love to see 1:1 threading become competitive with M:N for the heaviest workloads. It just plays so much nicer with the outside world than M:N does.


> I'd love to see 1:1 threading become competitive with M:N for the heaviest workloads. It just plays so much nicer with the outside world than M:N does.

After working for many years with all of the available async paradigms (event loops, promises, async-await, observables, etc.) I would tend to agree. Even though many of those things (including Rust's futures implementation) are very well-engineered, the integration problems (as outlined e.g. in the "What Color Is Your Function?" article) are very real. And the extra amount of understanding that is required to use and implement those technologies might often not justify the gains. One basically needs to understand normal threaded synchronization as well as async synchronization (e.g. as with .NET Tasks or Rust's futures) to build things on top of it. Same goes for the extra level of type indirection (Task<T> vs T), or the distinction between "hot" and "cold" tasks/promises.

If it were possible to get 1:1 threading into the same performance region for most normal applications (e.g. everything but the 1-million-connection servers), that seems very favorable.


Doesn't catch everything, but it catches a lot: https://golang.org/doc/articles/race_detector.html


It's great when it works though; I thought I was running into a race condition once but wasn't sure (new to the language), and having something verify it was indeed a race condition was really awesome.


> Goroutines are threads. So the idea the "Go has eliminated the distinction between synchronous and asynchronous code" is only vacuously true, because in Go, everything is synchronous.

Abstractly, I agree, but practically the distinction is important. In Go you don't have to use a frustrating async interface to get decent performance. You don't have to manage a threadpool or other tricks. You pretty much get the threadlike interface you want to use without the difficulty.


Let's be clear: the one-OS-thread-per-connection model does yield decent performance. We used to call async I/O "solving the C10K problem"—i.e. serving 10,000 clients simultaneously.

I'm speaking from experience here, having tried to implement M:N and abandoning it in favor of 1:1, which yielded better performance. Can M:N yield better performance than 1:1? Sure, in some circumstances. But I think that, ideally, we should be striving for 1:1 everywhere.


All true. But I have been waiting nearly 30 years for a 1:1 implementation that scaled to the #threads I want to have. Still waiting...

The point of M:N schemes isn't to achieve better performance but rather to achieve higher scaling.


How many threads do you want?


I'm not the person you're asking, but I want one thread per connection, and I want one million connections per host.


But then with a million threads, either M:N or 1:1, you have other problems: the whole shared-memory multithreading model breaks down, and you can forget about all the locks/channels if you want to actually do something useful with them.


I believe there are boxes in production with 1E6 live connections.


If they exist, they are running software written in either C or Rust, which can avoid memory-efficiency traps and can schedule their workers in a more optimal way than a general-purpose language.

That said, I'm not sure such a thing exists. Just the memory overhead some common libraries impose on connections is enough to fill some 32GB (at 1E6 connections, even 32KB of per-connection buffers already adds up to 32GB).


Erlang works just fine for that many connections (depending on how much CPU you spend doing real work per connection). Socket buffers are tunable. 32GB isn't that much RAM for a server; Android phones are shipping with 6GB, and you can get 6TB into a server without getting too exotic.


>That said, I'm not sure such thing exists.

Just to clarify : my knowledge on this is pretty deep so the only reason I didn't say "they exist" is that I don't personally have one in my DC (I've only managed up to 200k live connections). I know people very well who do. But yes of course the application would be written in some efficient language fit for the purpose like C, Rust, Go, Erlang. Probably not Java.

Also 32GB is not much memory these days.


3 million connections: https://medium.freecodecamp.org/million-websockets-and-go-cc...

Heck, you can probably handle a million connections using PHP (with Swoole).


In the article they got rid of goroutines to get to 3 million connections.


Can you elaborate on what you mean by "the whole shared memory multithreading model breaking"? I would love to hear ;)


> Finally, Go doesn't do anything to prevent data races, which are the biggest problem facing concurrent code

I've been working on a multiprocessing library. I built a wrapper function that makes any function an atomic operation on the state.

https://zproc.readthedocs.io/en/next/user/atomicity.html

(Since it's protected by the actor model, not locks, it's an enforcing mechanism.)

Do you think this is a step in the right direction?


I'm trying to find out what you mean by "switchto support". Do you have a link?


See the bottom of the post you’re replying to.


Just focus on C++


> I'm happy to go on record claiming that Go is the mainstream language that gets this really right. And it does so by relying on two key principles in its core design...

The unmentioned third principle that it relies on is: "Curly braces, so it looks almost like C if you squint". That's what makes a language "mainstream" these days.

It looks like Go is very good at concurrency, but from everything I've read, I don't see how it's any better than Erlang or Clojure. The only controversial part of Eli's claim is the implication that other languages that get concurrency right aren't "mainstream". That's not a well-defined term and so naturally this is going to irk many people.

Perhaps the title would have been more accurate as "Go hits the concurrency nail on the head, using the hammer of K&R style". :-)


I don't think Go has better concurrency than Erlang, but Go does have generally better raw performance. Ultimately I think both are good languages; they just make different trade-offs.

If I was designing a command line app I'd probably choose Go, if I was designing a web service I'd probably choose Elixir/Erlang. Of course those can easily flip; there are classes of command line apps where I might choose Elixir/Erlang and there are classes of web services where I might choose Go.

I can't speak to Clojure though.


Go doesn’t need a VM. That’s key. You get good concurrency with an easy to deploy binary that can target the major chips.


That's a good point. Modern programming languages are all pretty big and complex and whenever someone tries to nail down "this is why it's good/popular" there always seem to be other significant factors that got missed. After all, if it were just one factor that leads to programming language popularity, we could design the Next Big Language by just following that recipe!

In the case of Go, I can imagine many of its attributes are significant:

- Good concurrency model

- Familiar style for imperative C/Algol-family programmers

- Requires no VM

- Backed by major corporation

- Runs on all major OSs

- etc

Actually, I think I'm changing my mind. I'd put "corporate backing" higher on the list. Some languages have gotten a huge boost by being backed by a major corporation (classic example: Objective-C), and I'm having trouble thinking of a general-purpose programming language backed by a major company that did not become popular (Dart would be my best guess but even that seems to be doing alright).


I like Go because everything is super simple. From dependency management to unit testing to performance profiling to its build/deploy story, everything just works. I don't need to learn a new configuration language and project configuration format and complex dependency management system just to build my project. I don't need to pick a unit test framework and test runner. I don't need to figure out how to wire said framework / test runner into my build tooling. I don't need to figure out how to ship my app along with its dependencies or make sure that my deployment target has the right version of a VM installed. I don't even need to worry about learning a new IDE. I could keep going, but it's things like this that matter to me even more than the language itself.
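As a small illustration of the testing part (Add and the package name are made up): a plain _test.go file is all `go test` needs, with no framework or runner configuration:

    // adder_test.go: no framework, no runner, no build configuration.
    // `go test` discovers any TestXxx function in a _test.go file.
    package adder

    import "testing"

    func Add(a, b int) int { return a + b } // would normally live in adder.go

    func TestAdd(t *testing.T) {
        if got := Add(2, 3); got != 5 {
            t.Errorf("Add(2, 3) = %d, want 5", got)
        }
    }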


Go is also statically typed while Erlang/Clojure are dynamically typed. That seems like another pretty significant difference aside from the syntax.


Modula-2 used to fit all those points, except backed by major corporation, unless we consider GM a major corporation.


I consider Go a Modula-2 successor. It shares most of the traits I liked a lot in Modula-2, though I prefer the C-style syntax to the more long-winded one. The familiarity is no surprise, considering that Robert Griesemer is a student of Wirth.


Oberon (which is a successor of Modula-2)


While I used Modula-2 a lot in a long distant past, I unfortunately never got into Oberon, so I didn't consider it in my comment. Though if Go is a successor to Oberon and Oberon is a successor to Modula-2, Go is also a successor to Modula-2 :).


Go's method definitions are copied from Oberon-2 syntax.


> Backed by major corporation

I don't think Go's success has anything to do with Google. Google engineers probably favor Java/Python.

I think the key things that made Go get adopted were the Docker project and HashiCorp.

The second most important thing is that it is super easy to compile a Go program and run it, which drove adoption by DevOps teams.


Eh? I can't speak for all of Google but I like me some Go. It has a culture of minimalism both in language and code, which makes it easy to pick up and read others' work. I wouldn't recommend it for all problems, but it's at least nice for backend services.

Java is a victim of its own success: it has lived through style evolutions which have led to fragmentation in the ecosystem. Some code uses mutable data structures with for loops, others immutable data structures with the stream API. Some code uses dependency injection, while some doesn't. As a result it takes more time to mentally calibrate to the style when pulling up a .java file. Another imperfection: GC pauses can cause request timeouts and are annoying to debug.

Python's not perfect either: aside from the obvious performance issues, use of the awkward retrofitted type system makes me wonder why a statically typed language isn't being used to begin with. The lack of good static analysis is especially annoying at Google where the tooling is quite good.

Just my 2c


> it has lived through style evolutions

The same can be said for C++, C#, etc. that have been around for a long enough time. Golang is relatively speaking still new. Look back at this again if golang adds generics and other features that will change how code in it is written.

> Another imperfection: GC pauses can cause request timeouts and are annoying to debug.

That's not an issue with Java, but with how the GC is tuned. Golang provides only one GC, which is tuned for latency. And even that is not hard real-time. The JVM has several GCs you can choose from, including latency-optimized ones.


>Look back at this again if golang adds generics and other features that will change how code in it is written.

Absolutely. I was surprised by the number of proposed changes in Go 2, especially regarding error handling. Curious whether old code will be migrated or left behind.

>That's not an issue with Java, but how the GC is tuned

Good point, I haven't done much GC tweaking personally, but that sounds worth learning more about. However, I think the language might have something to do with it: in Java almost everything is a referenceable object, while in Go one can put values directly inside structs. So Go has less work to do. Without that advantage I doubt it would seem competitive with Java GCs.


Java will be getting value types in an upcoming version. C# already has value types (called structs). The JVM also does escape analysis to avoid heap allocation when it can.


You're welcome to your opinion.

Anecdotally, I've heard from many people that Go only took off in China because Chinese programmers like Google and it was sold as a language by Google.

> Google engineers probably favor Java/Python

That's irrelevant. The fact of the matter is that Go has a company backing it and providing a constant stream of well-paid developers and infrastructure. Most small languages can barely afford to have two poorly-paid fulltime developers.

I doubt go would exist if there weren't several people getting paid very large quantities of money working on it.


Let's look at it from a different angle. When first released, AWS Lambda and even Google Cloud Functions didn't support Go. NodeJS always gets there; Python is always there.

So the point about a big company backing it doesn't add much value. Even Google didn't support Go in their own Google Cloud Functions.


Major backing can get you publicity, but I don't think that's the deciding factor. What really matters is that major backing gets you the resources to produce a really solid, complete standard library. That is a critical component of language success.


Failed programming languages are not popular, which is why it is hard to remember them. For example, Ceylon (backed by Redhat) never got any traction, and most people forgot about it or never heard about it.


Modula-2 did it first in 1978.

With co-routines and static compilation.


Go made tradeoffs that maybe didn't make it "better" at concurrency, but they really did make it easier than the alternatives.


You can do this with many VM languages too - you package the VM with the code you want to ship. The VM-wrapper still needs to be built for the target arch (like Go), but the code it runs does not (like Go).

I will absolutely agree though that Go makes this easy, which is quite a large benefit. But it's not in any way the opposite of a VM.


Yeah, irritatingly. Erlang had this nailed years before. It's a bit too weird syntactically, and doesn't have the full force of Google pushing it, so it gets ignored :(


True, but Go and Erlang don't occupy the same niche.

Go is imperative, while Erlang is mostly functional.

Go is AOT (ahead-of-time) compiled, while Erlang runs on a VM.

Go is statically typed, while Erlang is dynamically typed (I know about Dialyzer).

Go concurrency primitives are designed to coordinate goroutines living in the same process, while Erlang concurrency primitives are designed to coordinate Erlang "processes" living in the Erlang VM or even in multiple Erlang VMs forming a cluster.


Erlang is AOT-compiled; the VM reads compiled and optimized bytecode.


You're right. I oversimplified a bit too much.

I should have written: Go produces native binaries, while Erlang runs on a VM.


The point is that Erlang (like Java etc) needs a runtime, Go doesn't.


Every programming language has a runtime, even Assembly if the CPU is micro-coded.

How do you think the GC, goroutine scheduler, and cgo marshaling get managed?


I assume what people want to say is that Go requires nothing which other languages install via the system package manager. While "runtime" is not the correct term, the desire is real.

With Java you first install a JVM. With Python you install the interpreter and batteries. This always leads to version conflicts at some point.

In contrast, with Go your CI builds an executable, you transfer that to your server and it runs. You don't have to care about the version of some "runtime" already on your server.


Using the wrong terms leads to urban myths and misunderstandings.

You don't need to install Java at all, because the JVM can be bundled with the application, and for anyone that actually cares to pay for them, the large majority of commercial Java third party vendors have AOT compilers on their JDKs.

Likewise with Python, there are several solutions for bundling a set of scripts with an executable.


And if you deploy with docker then your build server slaps your java Jar and JRE in a docker image and you deploy one thing. No functional difference.


well to be fair you can do the same in java, python, whatever. it is just not as easy as go install/go build. heck in c# you actually only need to have a dotnet runtime >= your highest installed version. basically I think .net core 2.1+ does it better than any other tool; it can be published via `--self-contained --runtime linux-x64` or without. (the runtime is needed since you create a launcher executable)

it's just dotnet publish with the switch or without and you get a folder that you can copy where you need the runtime or where the runtime + start script is inside the folder.

(Java 9+ has something similar but it's way more complicated than just running java publish, etc..)


I thought it was pretty obvious that the OP was using "runtime" to refer to an external interpreter program. The significance being the simplicity of deployment.


I didn't think it was all that obvious, and after I read the OP's answers I thought he was complaining about distributable size, instead of deployment procedures.


That is an interpreter as you very well mention.


Sometimes things have multiple names :shrug: I really like your contributions; this one seemed unnecessarily pedantic.


Which any CS book clearly clarifies.


> full force of Google pushing it

This trope is getting old. Variations include "Go is only popular because Google spends millions marketing it!". There is a small team at Google that works on the language along with the open source community. I'm pretty sure none of Google's marketing team works to promote the language, and I'm sure that its success is much less important to Google than Erlang's success was to Ericsson. I don't know very much about Ericsson, but it wouldn't surprise me to find out they spent many times the amount on Erlang that Google spends on Go.

There are probably lots of reasons why Go is successful, but I'm positive that it has more to do with simplicity, learning curve, tooling, ecosystem, etc than it does with corporate sponsorship or marketing.


Ericsson abandoned Erlang, or Erlang abandoned Ericsson, depending on how you look: Ericsson banned it internally, and shortly afterwards when the Erlang team managed to get it open sourced, they resigned from Ericsson and founded their own Erlang company.

(source: http://webcem01.cem.itesm.mx:8005/erlang/cd/downloads/hopl_e...)


Armstrong was rehired by Ericsson in 2004 [1], and a team at Ericsson maintains the language today (see e.g. [2]).

1: http://erlang.org/pipermail/erlang-questions/2006-July/02136... 2: http://blog.erlang.org/


So does Ericsson still use Erlang in their recent network hardware etc?


I recall hearing no, somewhere. I am under the impression (don't know where from) that Cisco DOES use Erlang in their stack, and there is a group of people at VMware that does.


Thanks. This is something where a simple internet search yields nothing much. Possibly because these network switches are proprietary platforms and the vendors do not publish how the internals are implemented.



Yes. After a few years of the ban, Ericsson changed gears. The recent head of tech at Ericsson looks at that past as a real miss. All Ericsson 3G switches/routers are in Erlang.

Ericsson has a small team maintaining Erlang, and they are the stewards of the language.


> This trope is getting old. Variations include "Go is only popular because Google spends millions marketing it!".

I chose Angular.js because of google actually. And then Google did me dirty with Angular 2.

Corporate sponsorship is actually a huge driving factor for my decisions, and for many companies too.

Having a big company behind it is a good indicator of success, and it says that the programming language will have stable contributors. Google is also using Go, so they also have a stable pool of programmers within the ecosystem.

Getting developers to adopt a tech and stay in it is hard. Google has a bunch of their devs in it, which increases the chances that those devs will present and talk at meetups to evangelize others.


Very true. Go has proven a big success outside Google.

So many people here argue that Go is not good enough, or else why would Google keep using other languages even now. But the same people argue in the Apple/Swift case that "of course Apple need not rewrite all perfectly working applications in Swift". Yet somehow Google has to do that to prove the language is fine.


> There is a small team at Google that work on the language

How many languages are lucky enough to have paid developers working on them?

Many languages have been labours of love with 1 developer getting paid, at best, pennies or doing it on the side.

I think there's a reasonable middle ground where you can claim "Go made good choices, but it also wouldn't exist as it does nor be as popular without google throwing money at its developers and without the brand association"

For example, the D programming language was wonderfully made and had many great features, but it never got all that far, in part because "Built by Digital Mars" isn't as good as "Built by Google".

Sure, Go didn't just win by default because google was there (Dart is a good proof that Google doesn't instantly make languages succeed), but I'm certain having their backing and name association sure didn't hurt.


If Pike, Thompson, Cox et al. had stayed at Bell Labs, they would have created Go there as salaried researchers? Or because no one would have forced them to do C++, they maybe would not have bothered... ;-)


Who do you think pays for almost all of Go's development?


I don't think most of this is true... the Go core team is at least 20ish people, and that's just engineers, not PMs, managers, or other support staff. Plus the tools and kubernetes/gcloud teams. It's also kind of disingenuous to call any team led by Rob Pike and Ken Thompson "small"... yeah, they might not have hundreds of people, but they can certainly wield a hell of a lot of influence.

Google also spends quite a decent amount promoting Go and donating to OSS that promotes Go.

You're not wrong about the comparison to Ericsson/Erlang though, or most of the reasons for Go's success, but you can't discount the Google factor.


Anecdotally, there are at least three people at work who say that "go is used at google so it will be around forever and I can be sure there will be support for it".


I'm sure somebody said the same thing about GWT too.


I actually really loved GWT and it was a shame it withered and died.


Tooling and ecosystem: Yes, but how much did Google spend on developing the tooling and ecosystem? Forget the marketing - libraries, tooling, and ecosystem are where corporate support pays off for a language.


I think syntax actually matters a lot more than people tend to give credit for in a language's success. I suspect this is why functional languages have struggled to become very popular while languages with C-like syntax have added some functional features instead.

The learning curve for imperative programming structure just seems easier for people to understand and work with.


Syntax is a crucial component of any programming language.

I would disagree, though, that there's anything inherently easier about imperative languages. Could it not be that it's simply more familiar to people today? People don't (IME) learn to program in school. They learn by futzing around with whatever free thing is on their computer. In the 1980's we had BASIC, and today we have JavaScript, and Python/Ruby/Java/C# also seem to be popular. At this point, imperative programming is winning because of path dependence. Nobody ends up using Erlang by accident.

Once you get to concurrency, functional languages are clearly punching above their weight. I find it impressive that Go makes imperative programming work well here, but it still looks a bit old-fashioned to me.

Is Go leading the pack by having a great concurrency model, yet with an imperative language that runs on bare metal? Or is it trying to hold back the tide, when almost the entire rest of the industry is solving concurrency by moving to functional languages running on a portable VM? I don't know.


It's hard to disambiguate familiarity, but I'd suggest that imperative languages are more similar to spoken/written human language, particularly English.

I'd suspect if you gave someone who didn't know Lisp or Java a simple program and had them try to explain what it was doing, they'd have more luck understanding the Java syntax.


It seems like you’re saying the primitives don’t matter at all?

I see three major programming styles: imperative, declarative, and functional.

Declarative seems extremely popular these days (see Rails, CSS, Webpack, etc). My guess is it’s because it’s fairly obvious how to design a declarative API. You just think about what you’d want as an application programmer, write that down in English, and then target that with your implementation.

These interfaces are totally unstable, and so they just degrade with time, but they're so easy to stand up that they dominate the field.

Imperative interfaces are a bit harder because you need to explicitly pass all of your data through every call. This is laborious at first and you have to do lots of refactoring of the interface as you implement it. API developers generally don’t like this feeling, and would rather have a stable interface to target and only futz with the internals. And it takes longer and requires more pondering when you can’t just reach directly into arbitrary parts of your code base and do whatever TF you want. And since APIs are usually released before they stabilize, application developers also don’t like seeing their interfaces move.

Functional APIs are like the imperative ones but even moreso. Not only do you have to pass around data explicitly all the time, you have to model every intermediate state in your data explicitly too.

This is even more constrictive, which just makes all of the above even worse.

Personally, I think this pain pays off in the end, and I code in a purely imperative style deep in a haunted wood. The suffering over moving interfaces eventually leads to a stable interface that’s actually well thought out and composeable. Code can become “finished” whereas declarative code almost always just rots til it’s replaced.

But in terms of "Don't make me think", which is most pro developers' dominant mode of working, there is a clear declarative > imperative > functional hierarchy.

In theory a purely imperative or purely functional ecosystem could gain a kind of network effect of good code that would eventually outweigh the work slowdown.... your application code would be harder to write, but you’d be writing on top of a richer, more composeable library base.

However we don’t seem to observe this in practice.

My theory is that imperative and especially functional languages tend to attract masochists, who get yakshorn into radical experiments into purism, trying to bend every aspect of an ecosystem into a perfect Q-dimensional prism. This leads to just less effort in churning out pragmatic tools. And it also leads to a kind of dazzling conversation around the languages that turns away people who are just trying to get something done.

With sustained effort these effects could be overcome. Over time I am building a library of pure imperative JavaScript modules. They seem to be reaching “finished” one by one. We’ll see.


The impact of syntax is really undervalued. Syntax matters because many people are exposed to new languages through familiarity with other languages, as opposed to cramming a guide in isolation. And since programming languages are for humans, the cognitive load of understanding through reading and expressing through writing depends on one's ability to map the concepts to the code and vice versa.

This is easier when similar languages look and, through the power of abstraction, appear to work in similar ways. Go benefits from looking similar to the wider C family, while Erlang suffers from a syntactic heritage that never achieved as much prominence. The prevalence of languages in the syntactic style of C allows people unfamiliar with Go to be able to glean a lot of what's going on, and develop their understanding of Go-specific features gradually.


My simple smell test for language syntax readability is whether you can write a conditional statement with a multiline condition in it, without it looking ugly, and with the condition being clearly separate from the body. For example, in Lua or Ruby:

   if
      something()
      or other()
   then
      do_whatever()
   end
It's very easy to read; your eye doesn't "stumble" anywhere it doesn't have a reason to. In C (and Java, C# etc), on the other hand:

   if (something() 
         || other()) {
      do_whatever()
   }
This makes for some confusing indentation, and now it's hard to distinguish what's condition and what's body! Sometimes people indent the entire condition to make it clearer:

   if (  something() 
         || other()) {
      do_whatever()
   }
but now you have holes in the middle which draw attention to that spot for no good reason. Or you can put ){ on a separate line:

   if (
      something() 
      || other()
   ) {
      do_whatever()
   }
but so much punctuation hanging by itself is still an eyesore. And Python is hardly better:

   if (
      something()
      or other()
   ):
      do_whatever()
Go doesn't need the () in the condition, so it's slightly more tolerable if you do that (though gofmt will object, and automatic semicolon insertion actually makes this a syntax error):

   if
      something() 
      || other()
   {
      do_whatever()
   }
But I'll take the Lua/Ruby syntax any day of the week.

That's just one example. In retrospect, I think that C-style syntax was a bad idea in general, and adopting it as the "default syntax" across a large part of the industry was a monumental mistake. It favored compactness and speed of writing over readability and clarity. I really wish something like Modula or Ada would become the syntactic basis of modern languages today. I'd rather spend a few more keystrokes typing out things like "var" and "end", but end up with code that reads smoothly in a code review, or when debugging some ancient codebase.


Anecdotally, it was really difficult for me to convince myself to learn SQL, specifically due to the syntax and the fact that, unlike the imperative languages I normally use, it was declarative.


I don't agree. It's just that the familiar is easier. If you first teach someone programming with haskell, they'll have the same hard time switching to C.


Every time I see people saying stuff like that, I'm reminded of this:

http://james-iry.blogspot.com/2009/05/brief-incomplete-and-m...

> His lambda calculus is ignored because it is insufficiently C-like. This criticism occurs in spite of the fact that C has not yet been invented.

Anyway, it's not because C-like languages are any easier. It's just that nearly all programmers learn to code in a C-like language. Most of them won't ever learn a second language, and the number that will do the work of learning something actually different is minuscule.


Yeah, but is it because people actually like the syntax? Some languages seem to be attractive to people right from the start, like Python. Some take a while to grow on you but then you realise their brilliance and don't want anything else, like Lisp. C isn't either of those.


Really? Python's syntax looks clean at first, OK, but later I realized that it has no variable declarations (and old memories of time lost in BASIC due to spelling errors started coming back), no static typing, is slow and isn't particularly multicore friendly... uhm, how about using Go instead? (No, I don't know Go, but I'm quite sure I can learn it without too much difficulty.)


I don't know about that. I grokked C's syntax immediately, just from reading K&R, without actually having a compiler to try anything on. It just made sense to me.

I suspect (but cannot prove) that certain languages and/or programming styles make more sense for certain people, and others do so for other people. (There also is almost certainly some bias toward familiarity.)


FWIW, I've disliked the look of Python from the first time I saw it and loved Lisp-like syntax immediately. Can agree on C though.


It's also not simple... at all.


You’re mentioning functional languages. That’s not for everyone or every usecase. (I love Erlang.)


So, in the world of concurrency, are imperative languages good for every person and every use case? :-)

My intention was not to single out functional languages. Those were just the first languages I thought of with the best concurrency support.

(I don't think it's mere coincidence, though, that so many concurrency-aware languages are functional. If concurrency is a major concern, perhaps one should consider that "mainstream" or even "imperative" might not necessarily be a hard requirement.)

How about Occam or Crystal, then, which use CSP and are imperative yet don't have curly braces?


> So, in the world of concurrency, are imperative languages good for every person and every use case? :-)

Clearly, no.

I think it comes down to the question of shared mutable state. Functional languages say "don't do that, it's evil" - with some justification. If the problem doesn't push you toward shared mutable state, then consider functional languages.

When could the problem push you toward shared mutable state? I worked on a video router. You had maybe 100 video sources, maybe 80 video destinations, and six different sources of control. All six sources of control needed to see the same state of what inputs were connected to what outputs. So the fundamental nature of the problem was one giant shared mutable state.


Where this shows up a lot is the types of business problems which functional programming finds itself in. The least surprising thing in the world is that Haskell and functional Scala work pretty okay for Hadoop ETL work, for example.


Neither is concurrency. But if you need concurrency, functional is very visibly the way to go.


Definitely agree. For work I built a highly concurrent job scheduler in 3 months from zero in Elixir, and being confident about the internals due to its functional nature let me focus on smoothing the interface edges.


I think the author might be overstating how unique Go’s position is in terms of making concurrency easier. Haskell has the best concurrency story of any language I’ve used. Super lightweight green threads, simple concurrency primitives like MVar and STM, good performance (possibly requiring tweaks, but not bad out of the box). Referential transparency (by which I basically mean immutable data) makes sharing data across threads much easier and in some cases more performant by allowing you to avoid copying. Plus, you have a type system which makes it much easier to reason about your code.

All that being said, I haven't written Go and can't compare the two. Also, Haskell doesn't support the actor/message passing model out of the box (although libraries exist for it) or prevent deadlocks (although the immutability helps a great deal here). BEAM languages, Clojure, Rust, Pony and others all have their strengths here — again, this doesn't discredit the article at all, but the idea that Go is the clear winner is debatable.


In terms of raw capability, Haskell is the concurrency winner. Not only does it have basically every paradigm, they even all work together reasonably well. (Not perfectly, but reasonably well.) But I don't think you can argue Haskell is mainstream. It continues to be on a fairly slow growth trajectory from what I see, but it's not in any danger of cracking into the top tier language set anytime soon. Go just might in another 5 years.


True, but I think that the "mainstream-ness" of Go (debatable as it is) was of secondary importance to the article, which didn't even mention that there are other languages out there which solve the concurrency problem in other/better ways. Whether or not a language is mainstream is more or less a matter of opinion, in any case. Certainly Haskell, Rust or Erlang (to name a few) are probably less widely used than Go, but none of them are obscure, so to not mention them at all suggests that the author is either being disingenuous, is not aware of those languages' capabilities, or simply forgot.


I think the BEAM languages' approach of "pure functions... well, except for message passing" is perfect for concurrency. With Haskell you have to use monads, which are less convenient. And idiomatic BEAM defaults to the actor model, if you consider OTP to be "out of the box" (you should). Finally, pattern-matching function guards are incredibly useful for parsing incoming messages that might be polymorphic.

Haskell is kind of designed to make programming as pure as possible and the BEAM was designed with concurrency as the first priority; it's "pure functional" to the point where its advantages (referential transparency) help concurrency and the impurities are the precise set of compromises you need to make concurrency easy.


Well, there are trade offs in both directions. Haskell has a steep learning curve and no clear one choice for concurrency, while Erlang is more approachable, has a built-in answer for concurrency, and well-established patterns for building fault tolerant applications at scale.

On the other hand, Haskell’s system has more flexibility, offering a few powerful primitives which can be used as building blocks for higher level abstractions. On top of that it is a general purpose language capable of implementing traditional imperative patterns, etc, so you only need to use the concurrency when it makes sense, rather than using it for all stateful and IO operations as in Erlang/Elixir. Its performance ceiling is certainly higher. I don’t think monads in Haskell are a problem except in terms of the learning curve.

All in all, which one to use, if either, will depend on the circumstances. But both of them should be a part of any thorough discussion of languages which are “good at concurrency”. ;)


According to the author, Go makes concurrent programming "the best experience, by far, compared to other popular programming languages today."

I beg to differ. I fail to see why I should choose Go over Elixir/Erlang for concurrency. Elixir's concurrency mechanisms are at least as good as Go's (I would argue better), and Elixir as a language has an expressiveness that Go lacks.


I think the main argument is that Elixir isn't as mainstream which is a fair point.

That said, I agree with everything. Erlang pioneered this space and has been shown to scale very well[1] in a proven way over the last few decades.

[1] https://phoenixframework.org/blog/the-road-to-2-million-webs...


Yes. Because mainstream is what you should select for when choosing your tools.

On a more serious note: I’m very wary of people that have discovered the one true language, framework, etc.

That’s how you end up with 100 lines of Go instead of a one-line bash script. That’s how you end up with “Java developers” who are more concerned with design patterns than with actual working code. I could go on.

Learn about as many things as possible, form your own opinions, choose the right tool for the job.


I've found that being mainstream is the most important factor in choosing tooling. Being mainstream has incredible advantages. It means you're going to find lots of help, resources, answers to questions, sample code, and high quality and well maintained libraries.


It's the only thing really holding me back from going more into Rust.

At the same time, there is a sweet spot in popularity where the library ecosystem has a very good quality/noise ratio. I'm not sure, though, whether Rust is there at the moment.


Losing static compilation, Go-style, holds me back from Rust. The sheer number of platforms I can target with Go without doing anything special is amazing, whereas Rust forces me down the libc merry-go-round again.

A reliable recipe for zero-dependency Rust binaries, so long as I stayed in the Rust ecosystem, would be a good motivator to use it.


As long as you’re on Linux, it’s quite easy to use MUSL. Other platforms don’t offer something comparable, so we’re kinda stuck.


I'm sorry, but that argument does not hold much water. Any developer, with [maybe] the exception of people fresh out of school, can and should be able to learn new things. Fast. On the job. If you really believe that you can just use X because it's mainstream and really popular and be okay, you're kidding yourself. Also, as you grow older, you learn to see the matrix: you appreciate new things that come along when/after you use them, and you're able to see how they are better or worse iterations on ideas that were already out there.

Also, to answer your comment directly: being mainstream != lots of help, resources, answers, samples, and high-quality, well-maintained libraries. Being mainstream means that a lot of people have heard about you and a lot of people try using you / pick you up. Sometimes mainstream things do get to that place; sometimes you find yourself in an immense "the emperor has no clothes" ecosystem where everyone wants to use X because it's the cool/hot new thing. If you don't understand what the tool you want to use is good for and you forge ahead, most of the time you will have a bad time.


> If you really believe that you're going to use X because it's mainstream and really popular and be okay you're kidding yourself.

Actually, if there's anything that someone fresh out of school learns in the industry, it's that you do use things that are mainstream and really popular (in the particular niche you're targeting), because that's what your colleagues and management expect.

OTOH, when people come and say, "we'll rewrite this in X - it's the hot new thing, and it can do it all so much better, so don't worry about IDE support etc!", and push it through, the usual consequence 3-5 years later is a bit-rotting codebase that is hard to work on and maintain, because the people who pitched it have moved on, the tooling was never great and now doesn't even see bug fixes, and new developers on the team have to undergo a long initial ramp-up process to be able to do anything.

Sometimes it works out, sure. Usually when the hot new thing becomes mainstream eventually. But most of them don't, so unless you like to gamble, the safest bet is to wait and see and then adopt it. Let someone else be the guinea pig. The more immediate productivity gain is very, very rarely worth the pain.


... and of course, the old standard: if it's mainstream, you are replaceable.


Totally agree, with the added observation that as a software engineer, I want to be replaceable. If my software can't be divorced from its creator, it can't outlive me. I don't want to be glued to my work inextricably; if you can't be replaced, you can't be promoted.


Being replaceable is a noble goal. The problem is that some people don't want to be replaceable: they seek job security through the tooling they use, their idiosyncratic processes, or simple rejection of new technology. Those people are IMHO doomed to obsolescence.


Anecdote time: making yourself too specialized makes you easily replaceable too.

At a previous gig, a problem dev, who insisted on only using Erlang, was painted into a corner by eng at large who did not want to learn/support another language (Ruby, Python, JS, Java, and R were all over).

This dev eventually got fired, as they didn't really contribute much to "the big picture". Since their efforts were so limited in scope, their Erlang work was quickly replaced using the other languages, and consumers of the results never noticed.

YMMV. But that's an old standard that should probably die.


If everyone chose mainstream languages we would never invent and adopt any other language ever.


Luckily we don't live in that world and never will.


We had a developer interview here who was an Elixir zealot - and I use that word entirely purposefully. He was very adamant that we needed to rewrite our entire platform in Elixir because it was obviously better.

We ended up not hiring; he refused to touch certain technologies that make up a core part of our stack (Node being the biggest issue), and he was honestly something of a massive tool -- on our take-home exercise (which takes most developers maybe a few hours) he limited himself to one hour, didn't get it done, wasn't even solving the right problem, and, when he sent in his solution, said: "It's ok, I know you won't get it, but that's OK."


Being selective about the language you work with is entirely valid. But why would you go to an interview where they're not looking for your language and try to convince them otherwise? It seems futile.


I had a weird experience where I was offered an interview at a company I respected, but which used Ruby a lot. I explained to everyone in the chain that I'd want to work to change that, and I eventually got rejected because I didn't like Ruby.

It was a weird experience. But I got a nice lunch out of it. I'm still not sure what they got out of it.


A zealot is a zealot is a zealot. You don't get much info from the strong preferences of a zealot; you just quickly learn that you need to avoid them. Refusing to touch the tech stack of the person that pays your bills is flat-out stupid: if you don't touch it, why am I hiring you? Depending on how much experience this person had, this behavior may be correctable, if he/she is lucky enough to find a more senior developer who can show them how their [stupid] choices impact the project and how they can eliminate whole classes of problems by choosing the right tool.


Run, run far away... (I realize you did, but man...)


>Yes. Because mainstream is what you should select for when choosing your tools

It's a hugely important consideration. More devs, larger community, better support, more/better tooling, more libs, etc. If you don't think those matter then you're only concerned with pet/toy projects.


There's also zero chance of working with anything that's better than mainstream.


Ummm... ok? I don't know about you, but I like building things that solve problems. Tools are important, but they're not a goal unto themselves. If your 'better' language is beautiful, but lacks the ecosystem people solving real problems actually need to get work done, it's not actually better.


Language is too restrictive to consider in isolation if we're talking about "the tools". And I'm personally more interested in effectiveness than beauty, although I'll grant that beauty could have some influence on effectiveness perhaps.

Ecosystem is not a binary proposition: either there, or not there. At some point the ecosystem becomes solid enough for some purposes.

Finally, if no outliers were ever a better choice than the status quo, the status quo would never change. Therefore, there are always some outliers that can be chosen for greater effect at the cost of accepting some perceived risk. Using tooling smack in the middle of the average zeroes the potential increase in effectiveness as well as the perceived risk.


>Yes. Because mainstream is what you should select for when choosing your tools.

Ease of hiring experienced developers should absolutely be a part of selection criteria, but of course it should not be the only one. What good is it going to do you when you picked Elixir/Scala/Rust over Python/Go/Ruby, you need to hire senior engineers who can hit the ground running ASAP, and you have limited resources/budget?

It's going to be harder to find them (especially if you're not in SF), it's a harder/longer initial learning curve if you hire senior engineers without prior experience, it's going to be harder to find non-seniors, you're going to have to pay more to get what you want...the list goes on.


A senior developer, by definition, will hit the ground running with almost anything you use; that's the "senior" part of "senior developer". Also, if you believe that people need years of use to be good with any language/tool/framework, you need to figure out how to attract better people.

Also, the question you need to ask yourself is: do you want to build something with a technology you've selected and think is the best, or do you want someone who can pick the right tool for the job to pick the tech? Sometimes, not building something (or various parts of something) is more valuable than quickly building something that you don't need.


>a senior developer, by definition, will hit the ground running with mostly anything you use. that's the senior part in senior developer.

A senior developer also gets to be picky about which stacks they want to work with. Usually it's what they are familiar and comfortable with, or something similar to it.

>also if you believe that people need years of use to be good in any language/tools/framework you need to figure out how to attract better people.

Even the best engineers have ramp-up time when starting a new job that involves a new code base. That ramp-up time increases significantly when it's a language they aren't familiar with. Feel free to convince me otherwise; I'm all ears.

>sometimes, not building something or various parts of something is more valuable that building something that you don't need fast.

what about when it's not?


I don't think it makes sense to group Scala with Elixir and Rust, because it interfaces so easily with Java and is used a lot for Spark applications.


How are the deployment, library ecosystem, build, and tooling stories for Elixir?

Note I don't really care about the answer to the above; I just wish as a profession we could get past the tribalism and boosterism and have rational technical discussions about things that matter, as opposed to banal declarations about 'expressiveness' etc.


The article concludes: "If concurrency is central to your application, Go is the language to use."

If we as a profession are to "get past the tribalism and boosterism and have rational technical discussions", surely other languages/techniques like Pony, Haskell, Clojure, Kotlin, Scala/Java combined with Akka, etc. should be considered and discussed? The article reaches its conclusion much too easily.

With regards to expressiveness: Go is a lower level language and has fewer language constructs and abstractions than most other modern languages. This is by design, and many seem to appreciate that simplicity. I was simply stating a fact when I said Elixir was more expressive and did not intend to offend anyone. I seem to have done so regardless.


Implementing a 3D vector or quaternion library in Go feels like a really bad fit, mainly due to the lack of operator overloading (a feature that can easily be abused; see >> in C++). But it is an example of a field where Go only lets programmers express their ideas in a less direct way.
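
As a rough sketch (my own, with a made-up Vec3 type), compare the method-call style to what operators would give you:

    type Vec3 struct{ X, Y, Z float64 }

    func (a Vec3) Add(b Vec3) Vec3 {
        return Vec3{a.X + b.X, a.Y + b.Y, a.Z + b.Z}
    }

    func (a Vec3) Scale(s float64) Vec3 {
        return Vec3{a.X * s, a.Y * s, a.Z * s}
    }

    // With operator overloading: d = (a + b) * 2
    // In Go:                     d := a.Add(b).Scale(2)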

I think Go is great, but I don't think describing parts of it as less expressive than some other languages is banal.

Expressiveness has a sweet spot, though. Too many keywords and constructs make a language harder to learn, and make it too easy for developers to build their own weird little worlds, which discourages teamwork and lowers readability.


That exists, so we can look at at least one concrete attempt to evaluate the question. [https://github.com/go-gl/mathgl].

And right off the bat, the first thing a person notices is that there are two identical sub-libraries, mgl32 and mgl64, to work around the language's constraint that the core numeric type in the library cannot be parameterized at the language level without performance-losing reflection.


mgl64 is generated from mgl32 though [0], so no big deal, no? Who's hurt by this?

[0] https://github.com/go-gl/mathgl#contributing


Well, let me just do a quick text-based search for the Vec3 implementation and...

... oh. There are three: the one in mgl64, the one in mgl32, and the canonical one that a developer should edit to make changes. And they only stay synchronized if the developer remembers to follow the contributor practice and run that gen script, which is not enforced by anything.

On a small project like this, not a big deal. But it's indicative of the overarching problem with the approach Go necessitates here. There's noise here that a developer has to think around, and as a project scales up, that noise is going to become louder and trend towards intractable complexity.


The libraries are considerably better than Golang's and more all-encompassing, because you have 20 years of Erlang libraries that you can just use.

Deployment is getting better: you can just create a release in a Docker image and deploy that like you would anything else. Obviously it's nowhere near as small (or simple) as FROM scratch is with Golang binaries.

Tooling is fantastic in some ways and poor in others. For example, you can get an interactive IEx terminal running against your Elixir cluster to debug issues, although I'm not sure about the other parts. Dependency management is fantastic and something Golang struggles with. Not sure about other tools for Golang though!

The final thing I'd say is once you have a runtime that has Supervision and restarting of processes you never want to go back to worrying about what to do in the cases you haven't considered.


"Libraries are considerably better than Golang and more all encompassing because you have 20 years of Erlang libraries that you can just use."

I wouldn't be sure of that anymore. For instance, Go has an official AWS SDK but Erlang does not. Go's been around for 8 years now, and it is almost certainly significantly more popular than Erlang. It's been a while since I reached for a Go library and couldn't find anything at all. (Though I have recently been in the "three libraries that all seem like 80% of the job is done, with varying degrees of quality" situation. But you still get that in Erlang in similar places too, as far as I know.)


It's the fact that importing packages is just a checkout from master that makes me concerned about package stability. Semver exists for a reason. I realise this is nearly fixed, but it's been very slow in coming.

To be honest, I feel you will really struggle to make software in Golang that is as reliable as Erlang/Elixir, simply because the latter has immutable data, allows for concurrency through many serial processes (the actor model), and has supervision of processes ("let it crash"). It's also worth noting that once you have used compile-time macros the way Elixir does, you never want to go back to repeating yourself the way Golang forces you to. In fact, I should write up my proposal for macros being added to Golang as an alternative to generics...

If you are building something like Docker I'd argue Golang is maybe a more suitable choice but for Web software and stable concurrency Elixir is miles ahead.


Deployment: super easy. A highly maintained, ready-to-use tool outputs a fully instrumented and tooled tarball with start, stop, and remote-console scripts. It even comes with configuration solved. It needs no dependencies outside of possibly dynamically linked C ones like OpenSSL, if you use it.

Build: same, comes fully ready out of the box.

Libraries: there is everything, and you can use any Erlang library for free; 35 years of production experience.

Tooling: see above. Everything you need to debug and instrument on the fly. Dynamic tracing for everything, for free. A logger. Etc., etc. A fully, dynamically interoperable runtime, for free. The advantage of using a runtime that has spent 35 years optimising for reliability in production.

So yeah, we could, but then people would not understand. Erlang/Elixir have a decade's head start over Go for that kind of tooling and production-readiness.


I really wish someone like Mike Pall (of LuaJIT fame) would work on BEAM to make it computationally as fast as Go on raw performance.


The recent Erlang release (21) changed a lot of the internal interpreter structure, with a nod in Mike Pall's direction. Just by shortening the typical opcode size, they gained a 20% speed boost. And the BEAM is a pretty good interpreter anyway.

JITing the beast has been a want for a while, but there are also other paths being taken right now which can provide speed boosts, among others a translation of the compiler to use SSA representations.


Erlang, I think, is crippled out of the box wrt performance by its emphasis on immutable data, live troubleshooting, small processes with lots of messaging, etc.

These are the things that make it a brilliant language for use cases where it's well-suited. Not every language has to be a kitchen sink language like Java, thankfully. I like opinionated languages with well-defined strengths and weaknesses.


The architecture of the BEAM is quite complex.

I don't believe you could reach the same raw performance that you have with Go.

However, it is an unfair comparison: BEAM processes do much more than coroutines.


The BEAM is already much more performant than, for example, the Ruby reference implementation. And look at what people have managed to build with Ruby!


Erlang 19 is only 10% faster than CRuby 2.3 on a simple factorial bench. Web throughput is even closer as lots of it is C extensions.

I haven't made the comparison myself for a while but I suspect it's still very close.


I don't think it's going to happen, due to the stack+heap per process and lack of shared memory (ok yea ETS sure) which you need for fault tolerance, necessarily requiring more copying than conventional languages. I think there are different tradeoffs that make good CPU bound performance (like Java or Go) on BEAM pretty much impossible.


I think most reasonable people wouldn't have trouble defending a position that Elixir isn't a particularly "popular" language - at least compared to Go.


I agree Go is more popular than Elixir — but is it actually a popular language? At least I wouldn't consider it mainstream in the sense Java, C# and Python are.


It doesn’t really matter where you draw “the” line for mainstream languages, so much as that there is some reasonably drawn line between Go and Erlang/Elixir.


Well, just ask the TIOBE index:

https://www.tiobe.com/tiobe-index/

It looks like Elixir is by far less popular than the other discussed languages.


I personally prefer the RedMonk rankings [1] over TIOBE, since TIOBE's entirely about Google hits, which lags as an activity indicator by a few years, because sites for languages that have stopped being used stay around for years. Also, Google hits counts are really noisy, especially for languages that have names that are also words, like Go and Elixir. RedMonk uses recent GitHub and Stack Overflow activity as a proxy for popularity.

In this case though, Elixir is still way lower, at somewhere around 30th compared to Go's 14th.

[1] https://redmonk.com/sogrady/2018/08/10/language-rankings-6-1...


Huh, I'm super surprised that VB .NET is growing. I used it at my last job briefly for unfortunate historical reasons and we were trying to run screaming away from it as fast as possible to C#. It felt like the runt of the litter of the .NET ecosystem; so many things worked better in C#.


You’re not wrong. No matter the available constructs, there are concurrency guarantees that you can’t make in any language with a shared-memory model.

Everything has tradeoffs.


> Programming with threads is hard - it's hard to synchronize access to data structures without causing deadlocks; it's hard to reason about multiple threads accessing the same data, it's hard to choose the right locking granularity, etc.

That's a list of problems that are specific to mutable state that is shared among threads.

As the old saying goes, "If it hurts, don't do it."

We've had ways of doing multithreaded code that are easier to reason about for decades. They really do work quite well. Why people doggedly insist on pretending they don't exist is a perennial mystery to me. Even if your programming language wasn't kind enough to include a better concurrency model in its standard library, there are always third-party libraries.

I realize my experience isn't universal, but, personally, I've discovered that there's precisely one scenario where I ever need to resort to code that involves mutexes: When the business requirements and the performance profiler have conspired to kidnap my children and hold them for ransom.


"We've had ways of doing multithreaded code that are easier to reason about for decades. They really do work quite well. Why people doggedly insist on pretending they don't exist is a perennial mystery to me."

The real advantage to Go in a lot of ways was just starting over again with a couple decades more experience with multithreaded coding, and making the community default to a set of those more sane concurrency primitives. Nothing nominally stops you from doing the same thing in a number of other older threaded languages, but you're trying to bootstrap a new set of libraries from scratch, and that's not just a neutral operation, you are actively fought by the existing bulk of libraries for your existing language.

It's sort of weird that sometimes it's literally easier to start an entirely new language than fix an existing ecosystem and I can't say I've necessarily gotten my head wrapped around it, but observationally the evidence seems quite strong.


> We've had ways of doing multithreaded code that are easier to reason about for decades. They really do work quite well. Why people doggedly insist on pretending they don't exist is a perennial mystery to me.

Not a single one was mentioned during my CS undergrad; shared-memory threads were, many times. I think simple ignorance is the answer.


I think a simple love of complexity is also part of it. It's fun to do things that make you feel clever.

One experience I've had more often than I'd care to admit is diving into a multithreaded module with the intent of fixing a race condition bug, and finding that I could simultaneously remove the bug and realize a healthy performance improvement by making it single-threaded.

Which, for that matter, is another reason to be wary of mutexes and shared mutable data: Memory barriers do really impolite things to pipelines and caches in a modern CPU.


I don't believe this holds.

We are supposed to be engineers; we need to do our research on design and methodologies before we code.

And even if we really don't find simpler solutions, there is supposed to be a more senior engineer checking our code and providing feedback.

Ignorance should not be an excuse...


It's fairly interesting that this article doesn't mention actors, futures/promises, and async/await-style coroutines, which are all readily available in all of the major languages today and broadly used (with the possible exception of golang).

Frankly, I think the concurrency story is one of the weaknesses of golang. Contrary to what this article says, you cannot stop thinking about concurrency in your golang code the way you can with some of the more avant-garde concurrency models (STM), and golang doesn't provide nearly the sophistication in type checking or post-build checking of other languages.


All of those interfaces are trivial to implement in Go precisely because Go implements a much stronger abstraction. By contrast, if you only have those other abstractions you're quite limited in how you can structure your code. (You might not realize how limited you are, however, if those are your only options.)

As the article says, most languages settle for those interfaces because they can mostly be implemented as libraries with minimal refactoring of the existing implementations and their execution model.

They have their uses but they're not universal enough. If you think actors, futures, and async/await are so great, imagine having to write every single function invocation in that manner. It would be unimaginable outside of some special-purpose declarative DSL. By contrast, the concept of a thread--linear flow of logical execution of statements and expressions--is fundamental to programming. Even in functional languages with lazy evaluation or automagic parallelism. It's a basic building block similar to functions. And much like functions, it's incredibly useful to be able to easily compose those blocks, which is where channels and message passing come into the equation. One way to compose them is with actor and future interfaces, but that's not the only way and not always the most appropriate way.

Threads ended up with a bad name because of (1) performance and (2) races, but that conflates the abstract concept of a thread--a construct that encapsulates the state of recursive function calls--with particular implementations. Goroutines are threads, pure and simple, but [mostly] without all the baggage of traditional implementations. (That they can be scheduled to execute in parallel complicates the construct but also makes them useful for the same reasons traditional OS threading constructs were predominately used, except with much less of the baggage.)


The biggest problem with Go is that it’s not easy to implement those as libraries. This is a combination of the golang story around generics and its opinionated strategy on concurrency.

Conversely most other modern, mainstream languages can mimic golang concurrency as a library.


You absolutely cannot mimic Go concurrency. You're not understanding (or not appreciating) the function color problem mentioned in the article. See, e.g., http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...


Or it’s possible I don’t agree with its premise that Go solves the problem better than other languages because it hides async behaviors from the type system.

In practice it doesn’t. Asynchronous behavior leaks into golang implementations in worse ways: everything from the near-universal use of channels as poorly implemented promises to the horrendous Context being passed to everything for cancellation. The golang concurrency story is weak compared to any language that has a story at all.


> If you think actors, futures, and async/await are so great, imagine having to write every single function invocation in that manner.

I don't see how:

    val f = Future { foo() }
    // do work...
    f.onComplete { // a Future can't be pattern-matched directly; onComplete receives a Try
      case Success(res) => useRes(res)
      case Failure(error) => handleError(error)
    }
Is any more difficult than:

    done := make(chan bool)
    var res int
    go foo(done, &res)
    <-done
    useRes(res)
As a matter of fact, the first one is much easier: I don't have to create and pass a channel, or manually handle returning a success or failure. The first approach is much better.


You're not understanding (or at least not appreciating) the function color problem mentioned in the article. See, e.g., http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...


I read that article before. Here's the thing: the only thing the "go" keyword makes easier is running an arbitrary function in a green thread without having to modify its signature. However, the moment you want to actually do something useful with it (e.g. communicate with it, cancel it, or read its return value), you're going to have to pass it a channel (or more) or a WaitGroup anyway, modifying its signature and changing its "color". The issue remains pretty much the same.
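
A tiny sketch of what I mean (foo and its result type are made up):

    package main

    import "fmt"

    // To get anything out of the goroutine, the channel ends up in
    // the signature -- the "color" reappears as a parameter.
    func foo(results chan<- int) {
        results <- 42
    }

    func main() {
        results := make(chan int, 1)
        go foo(results)
        fmt.Println(<-results)
    }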


> By contrast, if you only have those other abstractions you're quite limited in how you can structure your code.

OK... but what languages only have those abstractions?


I think it's fair to say that JavaScript, Perl, and Python have only those abstractions, notwithstanding Web Workers or rough POSIX threading support; Lua and Scheme, though, do have a real threading construct, even if it's not based on the system threading model (though strictly speaking the system model is likewise available to those languages).

What I really meant to get at was that Go provides a flavor of threading that is lightweight, simple, and ergonomic. The threading construct is a first-class citizen and fundamental to both the language design and its implementation tradeoffs. Threading, lexical closures, and GC were designed and implemented holistically. Go takes a performance hit for its seamless support of closures (lots of incidental heap allocation), for example, but they did it because notwithstanding the Go authors' utilitarian goal it was important that the core abstractions remained relatively unadulterated and natural. If this wasn't the case (if it was just about avoiding manual memory management), Go could have required explicit capturing of lexicals, like C++ and Python do. People complain that Go doesn't have a lot of sophisticated features, but that's because everybody is focused on typing and generics. But modeling execution flow is at least as interesting academically and important commercially. While Go seems simple, supporting these constructs the way Go does is actually really difficult. Which is why it's so rare to find.

I intentionally didn't mention channels because while syntactically nice the real internal complexity (and payoff) comes from the closures. Channels are something of an implementation detail which you can easily hide inside closures in a functional-style of programming; threading + closures allow you to implement coroutine semantics very cleanly (i.e. no async/await necessary) and, if you so desire, transparently. (And it just occurred to me that async/await is such a misnomer. In a coroutine caller and callee are logically synchronous. The fact that languages like Python, C#, and C++ use async/await terminology shows how they put the cart before the horse--these constructs were designed and implemented for the very specific use case of writing highly concurrent async-I/O services, and they stopped at making that use case moderately convenient. They're a very leaky abstraction. See function color problem mentioned in the article.)
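
As a rough illustration of the threads-plus-closures point (my own sketch, not from the article): a coroutine-style generator falls out of a goroutine and a channel hidden behind a closure, and the caller just sees an ordinary synchronous function:

    // counter returns a next() function; the channel and goroutine
    // behind it are invisible to the caller.
    func counter(limit int) func() (int, bool) {
        ch := make(chan int)
        go func() {
            for i := 0; i < limit; i++ {
                ch <- i
            }
            close(ch)
        }()
        return func() (int, bool) {
            v, ok := <-ch
            return v, ok
        }
    }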


> I think it's fair to say that neither JavaScript

Ah yes, how could I forget Javascript :)

> The fact that languages like Python, C#, and C++ use async/await terminology shows how they put the cart before the horse

Of these, I'm very familiar with C# (15+ years' experience). C# had great threading constructs long before it added async/await, and still does. It actually provides a great variety of threading constructs - you can go low level with mutexes, wait handles, and threads; then there's the Task.Run abstraction, Parallel.ForEach, the TPL...

When async/await in C# was first announced, it was hailed as making concurrency much simpler for devs, but I've always found threads much simpler to reason about and debug, while async/await gives you a variety of footguns that can be difficult to debug.

FWIW, I've been using async/await in C# for years now, but coming from much more of a threading background, I confess it's only now beginning to feel intuitive. I dunno, maybe if new devs come to concurrency from the async perspective first, it's easier to grok.


Describing its concurrency model as one of its weaknesses is a bold statement, because it's probably one of the biggest reasons it became widely adopted.

Everyone is different, but Go's concurrency model has been the easiest for me to reason about. And I've dealt with my fair share of "async/await".


Maybe. But I’ve been developing golang full time in high-scale concurrency environments for 4 years, working with a team of similar people. It’s an opinion that is near-universally shared on that team.

At high concurrency levels almost everything abandons standard golang concurrency patterns and tools.


Would you be so kind as to explain what the term "high concurrency levels" implies?

Thank you in advance.


Consumer-facing systems. System-wide throughput between 6-12 million QPS (daily low/high), with an average query body size of 1.5KB. Each server tops out at ~130K QPS. On a system with two 1Gb NICs we pop the NIC. On a 10Gb one we pop the CPU.

The current bottleneck is the golang http/net libs. We would likely need to rewrite them from the NIC up to do better.


That's an issue with the http/net libraries, not the concurrency model.

At really high throughput you can run into issues with the kernel's networking and driver stack. I've encountered situations with my own homegrown event libraries (mostly C or Lua+C; I've never used Go) that were bottlenecked in the kernel. I've also seen issues that were fundamentally related to the use of poor buffering and processing-pipeline strategies, which resulted in horrible performance. For example, I can get an order of magnitude greater streaming throughput using my own protocol and framing implementations than when using ffmpeg's, though I use ffmpeg's codecs and a non-blocking I/O model in both cases, all in C. And that's because of how I structured the flow of data through my processing pipeline.

There is no general model of concurrency that can solve that, and I've never seen any model that was easier in the abstract to tune than the others. Those are implementation issues.


Do you think the JVM would have fared better at this? Otherwise, perhaps something like Rust would be a better fit.


I don’t know if it would have been holistically better; golang has lots of advantages.

But the concurrency would have been more straightforward on the JVM, because the language allows for more choices and there are lots of options that get you there.


He does. He referenced it when referring to the color of functions: "I've enjoyed Bob Nystrom's What Color is Your Function[1] for explaining how annoying the model of "non-blocking only here, please" is."

The linked article [1] explains his take on why those futures, etc. are not good enough.

[1]http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...


I don't know who owns the term "futures", but the limitations of javascript promises are just choices that javascript made.

In Java, you can write:

    List<CompletableFuture<String>> futures = new ArrayList<>();
    // ... loop that populates futures ...
    List<String> strings = futures.stream()
        .map(CompletableFuture::join) // blocks; rethrows uncaught failures
        .collect(Collectors.toList());
The loop will run a series of futures (potentially asynchronously; the use of a future is decoupled from the actual choice of thread pool, etc.). The map collects the results in a blocking fashion, and join() will rethrow any errors that were uncaught in the execution.


For most use cases of async programming you never need to touch channels and goroutines. If you build a web server in Go it will be highly concurrent out of the box; as a developer you just write synchronous code. Here's a nice tutorial: https://getstream.io/blog/go-1-11-rocket-tutorial/
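
A minimal sketch of that out-of-the-box concurrency (standard library only): net/http runs each handler in its own goroutine, so the handler body is plain synchronous code, yet requests are served concurrently:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // Blocking here only blocks this request's goroutine.
            fmt.Fprintln(w, "hello")
        })
        http.ListenAndServe(":8080", nil)
    }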


Just my own anecdote, but I write Go and TypeScript daily and I’d take goroutines and Go’s other concurrency primitives over async/await any day of the week.


What is the difference?


> actors, futures/promises and async/await style co-routines which are all extremely available in all of the major languages

Language features are indeed available. Runtime features backing them are only available in Go, Erlang, and .NET.

If you only need concurrency for CPU-bound calculations, even C++ has decent options; e.g., OpenMP works great for my tasks. However, OpenMP offers nothing for IO. The point of Go’s coroutines or .NET’s async/await is that they allow you to run a mix of CPU-bound and IO-bound code, and to do so in parallel, utilizing all available hardware threads.


Green thread schedulers are available (and I’ve had direct experience with them) in the JVM & C++ as well.


Scheduling is not enough. For efficient IO you need a scheduler that’s tightly coupled with OS-specific kernel mode APIs for async IO.

Java has support for that in java.nio.channels but that’s limited and is not integrated with the rest of their standard library.

C++ can do that, too, but it's quite hard in practice.

Runtimes like Go, .NET, and Erlang already have that stuff included. There are some limitations (e.g. Erlang doesn't support it on Windows; BTW, they call the feature "kernel poll"), but it still works great and is very easy to use.


Take a look at Quasar for the JVM. In practice I’ve found it outperforms* golang in my workloads.

* there is a small fixed overhead increase in memory.


There are indeed multiple third-party libraries for that in many languages; e.g., for C++ we have libuv, Boost.Asio, etc. However, built-in solutions have upsides.

They work out of the box i.e. deployment is simpler.

They’re integrated into the language, e.g. most network IO in golang is asynchronous under the hood, compiler and runtime do that automatically.

They’re integrated into standard libraries, e.g. in .NET all streams be it files, sockets, or transforms like compressors and encryptors support asynchronous operations.

The support is usually better, too.


The JVM is getting coroutines with Project Loom. The same Quasar devs are integrating it into the JVM.


Quasar is going to be integrated into the JVM by means of project Loom.


Don't disagree with the challenges about other languages being equally applicable. My first thought was also "Erlang does it at least as well as Go". Reasonable challenges on whether Erlang (or Elixir/Pony/...) are mainstream though.

For me the more valuable point is not that Go specifically gets it right: it's that async - as implemented in javascript/python/java/c# and so on - is fundamentally wrong. These two quotes get to the heart of it:

>The core idea is that in the asynchronous model we have to mentally note the blocking nature of every function, and this affects where we can call it from.

>The fundamental issue here is that both Python and C++ try to solve this problem on a library level, when it really needs a language runtime solution.

I've said for a while that async as implemented in javascript et al is the "GOTO" of concurrency - and should be considered equally as harmful as Dijkstra's observation on GOTO, for many of the same reasons [0].

[0] https://en.wikipedia.org/wiki/Considered_harmful


Shared-memory threads are worse than async/await. I'd rather have GOTO than data races, since, while it might be hard to model in your head, at least it's deterministic and using it wrong doesn't cause UB.

p.s. https://vorpus.org/blog/notes-on-structured-concurrency-or-g...


Erlang doesn't do shared memory. It's a message-passing, shared-nothing model. I believe Go is the same.


Go is not shared-nothing: if you send a slice over a channel between two goroutines, for example, the memory the slice references is now shared between the goroutines.
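
A small sketch of that aliasing (my own example):

    package main

    import "fmt"

    func main() {
        ch := make(chan []int)
        s := []int{1, 2, 3}

        go func() {
            t := <-ch // t aliases s's backing array
            t[0] = 99 // the sender can observe this write: shared memory
        }()

        ch <- s
        fmt.Println(s) // races with the write above absent further synchronization
    }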


thanks for the correction.


Do people really find CSP a good approach to concurrency? I've always thought that channels were a relatively poor choice of fundamental primitive -- channel-based concurrency is tricky to get right and not very composable.

Most viable concurrency primitives are in some sense equivalent (you can build condition variables out of channels and vice-versa, say) but that doesn't mean they're equally good.

Java has per-object "synchronized" and "notify" as its core constructs, but it's a bad idea to use those directly for day-to-day concurrency tasks. Much better to use library classes like ThreadPoolExecutor and Future.

In Go, do you tend to use channels directly, or do you use higher-level wrappers?


Exactly. I get the feeling that people who claim golang gets concurrency right have not used the much more powerful constructs in the Java standard library. They keep getting better with things like CompletableFuture which was added in Java 8, and coroutines which are going to be added to the JVM sometime down the line by means of project Loom.

Composability in golang concurrency constructs is non-existent. Having to manually manage channels to indicate errors, return values, and completion is error prone and does not compose, and is subject to race conditions. I don't see anything that I can do in golang that I wouldn't be able to do using Java's futures, while still being more composable and easier to reason about in Java.


I think composability especially is the key point. With futures (and async/await sugar on top), everything is naturally composable. It's much trickier with channels.


Regarding one of the last comments: I'd say there are two big reasons for using Node over Go, at least initially: prototyping speed, and a stronger connection to a web front-end. I've really not seen any other language/platform work faster for developing a huge variety of implementation details than JS/Node. It's a really good balance of performance, flexibility, and ease of development.

Is it a panacea? Of course not. That said, I think that starting more monolithic and breaking pieces off as needed is a strong approach, best started with Node. From there, if you want different pipelines/processes/queues/workers on other platforms, great: write that piece in Go+gRPC and call it from your API endpoint server.

So many times I see devs want to go for the optimal X, without even considering whether "optimal" is needed, and whether it's prudent to start with.


And the main reason to not use Node: JavaScript.


What a lovely fusion of the blub paradox and a middlebrow dismissal.


Kind of, though your tone seems a bit dismissive. In the end, the Node/JS community, mostly because of npm, has a lot to offer and that is the ability to create a working product more quickly than most other options in most conditions.


Pardon my snark! In all seriousness, isn't it possible that you're more productive with Node because you're more familiar with it and its ecosystem?


Not really; I've also been active in C#, at one point more Java, and a few other communities (Ruby on Rails) along the way. I'm not really tied to it so much as I choose it.

I've never had the vitriol towards JS that other devs have had in the past; I've always recognized DOM issues vs. language issues, and many of the "good parts", long before the book.

In the end it comes down to the massive ecosystem, which has some drawbacks but allows for unparalleled productivity gains.


Ruby/Rails users will say the same.


Sure, if you ignore everything else in tracker1's original comment, which can't be said for Ruby/Rails.


Also, you can always front with gopher (written in Go) and defer to other systems via reverse proxy.


For me, "great tools like the race detector" sums up the article - the detector is fallible, and although every point in the article has some validity, they also could all be argued against, and the result is a bit of a house of cards. So for me, Go at work by order, Erlang at home by choice.


Mixing threads with event loops is possible, but so complicated that few programmers can afford the mental burden for their applications.

This is just Apple's Grand Central Dispatch model, or the event loops used internally in Chromium. It's not complicated at all, it's a very practical and productive approach.


Or you could just use the Actor model, which I like much better than CSP. (Or at least I think I'll like it better when I finally understand it.)


The Actor model is trivially implementable with CSP.


CSP can be implemented with the Actor model, and rather trivially: you can think of an actor as a programmable channel. But not the other way around, because you can't implement unbounded nondeterminism with bounded nondeterminism.


And CSP is trivially implementable using the Actor model. What's your point?


Depends. I would argue that e.g. Go's CSP approach is not implementable on top of Akka, since actors there are not allowed to block on reception of a single message. They always get messages pushed to them, so the blocking Go concurrency constructs cannot be built on top of that. It's obviously slightly different with Erlang, where actors can block.

I think the basic Actor definition only says that actors need to send messages to others, but not whether they can block on reception of selected responses. So it might be a bit undefined.


Lack of blocking support in the runtime can't prevent you from implementing waiting, either through higher-order event-driven code or plain busy-waiting-style message exchanges until you receive your message.


Busy waiting won't work, since the thread that changes the thing you are waiting for may never get to run (that's why e.g. in JS everything needs to be async). There might be ways to model things a bit differently (like using Promises and other monadic abstractions), but things will never look the same as in a blocking environment.


It's a model, it doesn't matter how it looks.


My point is that you can program with the actor model in Go, if you want to.


You can program with it in C, if you want to.

But Go is clearly not designed with that in mind. It's a language that is very opinionated about how concurrency should work. And so whether it's a good opinion or a bad opinion becomes very important.


The actor model in Go looks like this:

    type actor struct {
        c chan message
    }
    
    func (a *actor) foo(...) {
        a.c <- message{...}
    }
    
    func (a *actor) run(...) {
        for {
            select {
            case msg := <-a.c:
                a.process(msg)
            // ...
            }
        }
    }    
This is perfectly idiomatic Go for a large class of problems. Therefore it's not correct to say that "Go is clearly not designed with [actors] in mind."

To be fair, it is probably correct to say that Go provides CSP natively, and leaves actors to be implemented by the programmer, and that this "hierarchy", that CSP makes a better foundation for actors than vice-versa, is an opinion codified by the language.


But not in golang (see generics, or lack thereof).


They are computationally equivalent. Except that you can't typecheck actor calculus (since you can send anything to an actor) but you CAN typecheck pi calculus (channels).


> Mixing threads with event loops is possible, but so complicated that few programmers can afford the mental burden for their applications.

Correct me if I'm wrong, but doesn't basically every UI framework from the last 20 years do exactly this?


They do, but I also think that possibly every user of such a framework has created a concurrency issue at least once in their life. All of the issues below are things I have seen (and created) over and over in these environments:

- Trying to access UI framework elements from the wrong thread

- not understanding that a callback inside one of the included libraries is not running on the UI thread but from somewhere else

- callbacks capturing references to objects which already had been destroyed (but the callback had been queued and can't be cancelled), and accessing those objects later on

- deadlocks, due to mutex locking in both callbacks as well as the "normal" code.

There are ways to minimize the number of issues, e.g. always deferring asynchronous callbacks to the next event-loop iteration, making sure that callbacks are queued on the main thread instead of an arbitrary one, deferring object deletion behind all other callbacks, etc. But these are on the harder side to learn, teach, and enforce.

Therefore I would agree with the author of that article that the mixture of eventloop-based programming and multiple threads is the hardest combination.


> Correct me if I'm wrong, but doesn't basically every UI framework from the last 20 years do exactly this?

Yeah, multithreading with event loops was never a big deal. A simple API to run something in a thread pool out of an event loop is definitely much easier than using threads directly. Apple did Grand Central Dispatch, and I believe it's universally recognized to be easier than using threads.


C# does this server-side too; async isn’t particularly fun, but threading doesn’t make it any worse when the runtime handles it, IMO.


> Proper use of channels removes the need for more explicit locking

If you're lucky. Sharing mutable state is unsafe by default (map writes can crash!) yet very common, and the language doesn't help you avoid it. A good language for concurrency would also make it easy to switch between sync and async method calls; the trouble with channels is that they don't support passing errors and panics without rewriting everything to wrap them in a pseudo-generic struct.
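
The usual workaround looks something like this sketch (fetch stands in for any fallible blocking call):

    type result struct {
        val int
        err error
    }

    // fetch is a stand-in for any blocking call that can fail.
    func fetch() (int, error) { return 42, nil }

    func fetchAsync() <-chan result {
        ch := make(chan result, 1) // buffered so the goroutine never leaks
        go func() {
            v, err := fetch()
            ch <- result{v, err}
        }()
        return ch
    }
And since there are no generics, every distinct result type needs its own wrapper struct like this.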


If you're using channels "properly", then you only have one thread writing to the map at a time. In this case, "proper" use of channels is easy enough, but there are plenty of cases where "proper" use of channels is (in my opinion) a fair bit harder than proper use of mutexes, which is one of the problems Go's concurrency scheme was meant to solve.

I will say that even in spite of the above criticism, I've found that writing safe, concurrent Go is quite a lot easier (channels are still useful if not the panacea they were made out to be), but it still takes a bit of planning to minimize the risk of creating race conditions (minimizing the interaction points between goroutines, not throwing goroutines at every problem, etc). This takes experience which requires social controls instead of technical controls, which is a bummer, but on balance still better (IMHO) than the problems brought by other languages with technical controls for this particular problem.
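
Concretely, the single-writer discipline usually looks something like this sketch (the types are made up):

    type kv struct{ key, val string }

    // mapOwner is the only goroutine that ever touches m, so no
    // mutex is needed; writers just send on the channel.
    func mapOwner(updates <-chan kv) {
        m := make(map[string]string)
        for u := range updates {
            m[u.key] = u.val
        }
    }

    // updates := make(chan kv)
    // go mapOwner(updates)
    // updates <- kv{"k", "v"} // safe from any goroutine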


Go's answer to this is tooling, -race in this case. I think Go's general strategy of moving complexity/functionality to external tools is a little underexamined -- it's definitely interesting, especially as it's kind of a middle path between "language with lots of helper stuff" and "IDE with lots of helper stuff".
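
For reference, the race detector is just a flag on the standard tools:

    go test -race ./...
    go build -race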


It's better than nothing, but it's so expensive they don't really expect anyone to use it at prod scale. That's a recipe for letting catastrophic bugs do damage for hours or days.


Leaving aside the extremely interesting engineering behind the Go scheduler and goroutines:

The abstraction that a goroutine provides is a simple, independent, isolated unit of execution.

You start it, and that is all you can do with it.

There is no way to set priorities, decide when to stop it, or inspect it.

After the goroutine starts, the only interface you get is a channel to push and pop stuff from. Which is just too limited, considering the complete lack of generics.

Is it really the best we can come up with?


I'm not sure how people can't see that futures (as in Java's CompletableFuture) are much superior to golang's approach, exactly for the reasons you mention.

You can also take the actor approach (e.g. Akka) to set priorities, build monitoring hierarchies, and get introspection.


Apparently there are C libraries for green (Go-like) threads, called "libdill" and "libmill".


This picture is a bit rosy, no? Literally the only deadlock I've come across in years has been in Go.


What fraction of the concurrent programming you've done over those years has been in Go?


A language claiming to be designed to solve the concurrency issue needs to have a better story than what golang offers.


Same question. I've come across deadlocks in popular databases for heck sake haha.


Halfish? The other half being Akka.


> Programming with threads is hard - it's hard to synchronize access to data structures without causing deadlocks; it's hard to reason about multiple threads accessing the same data, it's hard to choose the right locking granularity, etc.

It's almost as if we need a language that has a focus on data races, concurrency and fearless threading.


Hah! Good one. To anyone else not in on the joke, parent is referring to the language that is often discussed on HN and known for having a sometimes overly enthusiastic community, Pony: https://www.ponylang.io/


Hah! Good one. Parent is clearly referring to Ada and the oft-quoted chapter 9 of NASA's Ada Style Guide (c. 1987).

https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/198700...

The discussion of pitfalls surrounding Ada's task facility is eerily similar to what you're reading in this thread 31 years later.


Hah! You must be in on another joke, because OP was referring to Rust ;)

> Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.

www.rust-lang.org

edit: /s


Wooosh

Edit: I made this post, but I thought afterwards: what if you were being dense on purpose, and I didn't get the joke?


I think this thread is a nice reminder that it's really helpful to use /s when you're using sarcasm in written form.


Isn't Pony the language where x / 0 == 0?


Yes but you can also have it be an error by doing x/?0 using the latest code on master.


Actually it's very easy to avoid data-structure related deadlocks:

- avoid holding two locks simultaneously: that guarantees no deadlocks.

- if you have to hold several locks, always acquire them in the same order in all scenarios.

The "avoid holding multiple locks" rule also harmonizes very well with "minimize the durations/sizes of critical regions". That is, hold a lock over the minimum number of machine instructions necessary. That reduces lock contention, promoting better concurrency.

Of course, we don't get anything free and easy. Not holding multiple locks means that you can't lock in some consistent condition across multiple structures to do an atomic update. Any time you let go of a lock to go do something elsewhere and re-acquire the lock, the "world has changed". If the code hangs on to any previous assumptions about the state (e.g. cached info in local variables), there will be a bug.
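
A sketch of the consistent-ordering rule in Go (the account type and the ID-based order are my own choices):

    import "sync"

    type account struct {
        id      int
        mu      sync.Mutex
        balance int
    }

    // transfer always locks the lower-ID account first, so two
    // concurrent transfers between the same accounts cannot wait
    // on each other in a cycle.
    func transfer(from, to *account, amount int) {
        first, second := from, to
        if second.id < first.id {
            first, second = second, first
        }
        first.mu.Lock()
        defer first.mu.Unlock()
        second.mu.Lock()
        defer second.mu.Unlock()

        from.balance -= amount
        to.balance += amount
    }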


"avoid holding two locks simultaneously" only guarantees no deadlocks if locks are your only form of inter-thread-blocking. If you have synchronization points or channels/pipes or anything else, that needs to be expanded to include "avoid using a channel while you hold a lock" and similar for every blocking primitive.


That's right; avoid doing anything that could block while holding a lock: acquiring another lock, waiting on a semaphore, and so on.

Note how condition variables have an aspect of this built-in: the wait operation gives up the mutex.


- Or if you have to hold several locks, grab them all atomically and if even one grab fails, release them all and try again later.

- If you really need all the locks right now, steal the ones you don't own from another thread, and make sure all threads can deal with lock theft.

It's not always easy to sort out the best way to deal with deadlocks. Each approach has significant tradeoffs.


Multiple locks are not always in the same scope. Method Foo::Update acquires a lock, then without releasing the lock calls into another class/object where Bar::Commit also acquires its own lock.


Yep. That's certainly one of the tradeoffs.


How would you do this in Go? If not Go, what language has the capability?


"...shared values are passed around on channels and, in fact, never actively shared by separate threads of execution. Only one goroutine has access to the value at any given time. Data races cannot occur, by design." [...if you stick to this style of programming...]


Rust evangelism strike force strikes again :)

I thought of Erlang initially, and so did others in the thread.


While I really do like Erlang, I think everyone should take a look at what Elixir is doing on top of the Erlang VM. https://elixir-lang.org/


Is this a Rust reference?


I thought this was one of the "not strong" points of Rust? It had Tokio, futures, async/await, but I vaguely remember many libraries waiting for the async story to settle down. Is there one standard way to do async now in stable Rust?


You can use tokio with stable Rust. Async/await is in nightly, but won’t be stable until early next year. As part of that work, some of the APIs are changing, but there is/will be a compatibility layer to make upgrading from the stuff today to tomorrow’s stuff easier.

That said, to address the question directly, Rust-the-language provides a stronger guarantee than Go: Rust code is guaranteed free of data races at compile time. However, the APIs are much harder. Async/await will make them much easier, but still not as easy as “everything is always async.” The tradeoff is speed and safety; Rust will be faster and have more guarantees, but be slightly harder to use (though some people do think the explicitness of async/await is easier, but that’s personal opinion).


Thanks for the patient and detailed response, Steve. I don't have a particular need for async at the moment and commented impulsively, but I ended up learning more about the current state of affairs from your response :-)


You’re welcome! It’s been... confusing. Such is life on the edge!


Yes


Jefferson’s Time Warp did that with “virtual time”, using optimistic concurrency rather than the pessimistic constructs so popular today outside of transactions.


Have they moved on from Starship?


No connection with the band.


I disagree. You really need "shared nothing" and messages. The only language that really hits the concurrency nail on the head is Erlang, which we use extensively in our business.


Erlang is latency-sensitive and Go is throughput-sensitive.


[flagged]


I think you are being downvoted for making a generalized claim with no explanation of why. Care to explain? In what areas does PHP outperform Go?


PHP's concurrency model is really nice: spawn a process per request, and otherwise don't do concurrency, because that's a mistake.

Yes, you can use CGI to get the same model in practically any language, but we see very few languages these days making this tradeoff.

For some reason, people decided they wanted the performance gain of spawning threads per request instead of processes per request. Honestly, it's ridiculous. Reasoning about isolated processes per request is easy compared to many threads. Sure, in PHP you can't share a pool of database connections between requests; if you need to lock a filesystem resource you're stuck, because you can't share a mutex between requests, so you have to improvise an ad-hoc one some other way; and the overhead is something like 50MB per request, which is the difference between a server handling 1000 requests with PHP and 60000 requests with Go...

But being able to just have processes which were all isolated and request-scoped was nice. The OS was your GC, so you didn't have to worry about freeing memory, and the OS's scheduler did a damn good job of context-switching between PHP processes blocked on IO.


PHP no longer works like this in practice. FastCGI etc. uses persistent processes with a thread per request, and thread-per-request is pretty standard for most platforms that don't have good async support at this point.


Could you please expand on how it's better, in your opinion?


low-quality bait



