Channels in Go turned out to be less useful than expected. The original pitch was "share memory by communicating, don't communicate by sharing memory". But in practice, large amounts of data tend not to be sent over channels. It's common to send references over channels instead, which implies sharing across goroutine boundaries. Go doesn't have strict single ownership, like Rust, so you can send something to another goroutine without the sender losing access. This creates a potential race condition. (There are checkers for detecting this at runtime, but it's not a compile-time error, as it is in Rust.) Channels are thus equivalent to a queue module coupled to a lightweight thread system.
Go is garbage-collected, with a good concurrent garbage collector, so all this is memory safe. Mostly. Maps (Go's dictionary type) aren't concurrency-safe for performance reasons, and there's a known exploit involving slice descriptors.
Channels have some other annoyances that make them less useful than they seem at first glance.
One is the ownership model. You can close a channel, and sends will then panic (there's no way to check if a channel is open), so ownership is always in the hands of the code that writes to the channel. You can -- and must, if you ever want multiple producers with a single channel -- abstract ownership into a wrapper that keeps the channel and tracks whether it has been closed, but it's tedious to implement over and over (it needs to be thread-safe, too).
It's tempting to use channels as part of a public interface (between two cleanly separated packages), but this is nearly always a bad idea. Channels just don't compose well as an iteration construct, and you run into ugly surprises, such as discovering that the caller passed an unbuffered channel and everything blocks. It's almost always better to offer a callback interface (push) or an iterator interface (pull), and then either side can use channels internally, or not at all, as needed.
In practice, it's sometimes surprisingly hard to avoid goroutine leakage with channels. It's a simple mechanism that gets complicated fast, especially as you combine them with mutexes and error handling (though errgroup is a timesaver that ought to be part of the standard library).
And finally, the behaviour of nil channels (they block forever!) is unfortunate. It is touted as a feature, but for me it has been exclusively the source of surprising bugs.
To be exact: you cannot "non-destructively" check if a channel is closed. That is, the way to check is by using a select statement to do a "soft write": try to send a value, but don't panic if it fails. Same for reading. This means you can't check the state of a channel without modifying it if it isn't closed.
I know nothing about Go, but assuming the reading end of the channel is running concurrently, the only way to meaningfully check for closedness is actually trying; any other check would be meaningless, as an open channel could have been closed immediately after the check returned 'open'.
Sending to a closed channel is fatal, as is closing it. There's no way for multiple senders sending to a channel to tell if somebody closed it without risk of crashing.
Graceful shutdown of systems with lots of asynchronous communication is hard, but this makes it harder than it has to be.
I was wrong; this is true. The soft-read trick only works for reading. You can't soft-write with select to a closed channel; it will crash even with a default clause.
*sigh, after all that time I still make this mistake. Goes to show how often I actually use channels. :/
That reminds me of Unix FDs and the distinction between whether your user has permission to write a file, and whether an open file is actually open-for-writing. I was thankful to find out about fcntl(2) + F_GETFL, and get out of a 'dummy write' train of thought.
Fun-fact: In the POSIX model you cannot write into a private mapping created through a FD with the writable bit not set. (Or, more precisely, Linux won't let you set the write bit of the mapping). Even though that would be a rather useful thing to have, and wouldn't be hindered by hardware or kernel (which already does CoW of pages).
Furthermore, channels are slow. I only use them when performance doesn't matter, and even then I end up rewriting them into a mutex half of the time and simplifying the code in the process. It was a nice idea in theory, but the implementation is full of usage pitfalls and performance issues.
I use queues of my own implementation when I really need performance. But then I have lots of experience with that, and I have the implementation of maybe the world's highest throughput SPSC queue, outperforming the Lynx queue that was featured here on HN earlier in the year. I'm still trying to make time to write that up properly as a blog post (I hate writing blog posts.)
I looked at the "select" code in the early days. A "select" with one case is compiled inline and is fast. A select with more than one case calls a very complex library function for N-way selects. In practice, most selects are N=1 or N=2; N>2 is rare. N=2 needs to be handled as a special case, if it hasn't been yet.
They have Dmitry Vyukov, who's better at that kind of thing than me. If they want to fix the problem, they don't need my help. I would rather publish my work, and if they want to adapt it for Go's channels, they can do that with my blessing.
To me the tricky part seems to be how they need an implementation good for all cases. Performance engineering is all about tradeoffs. Memory usage, throughput, latency are often competing requirements and you have to choose one. Furthermore, unless they can somehow statically determine how many threads access a channel, they have to use a MPMC queue, which is a performance anti-pattern.
I would modify the language (maybe just make() builtin) to allow specifying what type of Channel is needed where, SPSC, MPSC, SPMC, etc. But we all know how that would be received.
If you control both the compiler and the scheduler these optimizations become somewhat easier and without any syntax changes, no? Like removing naive synchronization on channel operations and having scheduler to deal with them, tracking states, amortizing synchronization, etc.
Some become easier, yes. But I don't think it's that easy to tell how many threads access a channel. We have runtime race-checkers precisely because we don't know how to check that at compile time. Whether the specific case of channels in Go is easier, I don't know. You can in Rust because the language makes you specify which thread owns what, and two threads can't own the same thing at the same time (as I understand it, anyway).
This is how the language works. Go doesn't have any sort of deep copying mechanism for channels to use, so if you want to send a buffer on the channel, it's on you to not reuse its slice.
I agree that the way Rust does type checking for ownership is pretty cool, but transferring ownership without having static analysis verify it seems pretty normal. I don't think it's a reason to avoid channels.
> Go doesn't have strict single ownership, like Rust, so you can send something to another goroutine without the sender losing access. This creates a potential race condition.
Rust has very thorough ownership semantics baked into the language to avoid such race conditions. But anyway, the idea of goroutines is that they should primarily be used to transfer values, which results in both goroutines having their own separate copy of the data, and thus no races.
My "aha" moment when trying to understand channels in Go was when I realized that everything about how this feature is designed comes from their answer to the question "how could we 'fix' select() in C?". In fact, pretty much every feature in Go is designed to fix some perceived flaw in C, and that's the entire philosophy behind Go. They aren't interested in language theory, or in innovating or solving problems outside of C. They were just trying to write C 2.0. Not that that's a problem, it's just, it threw me off. When it came out, I fully expected Go to be a new competitor to Java or even Python, not to C.
Go's authors didn't really set out to fix C, they wanted to fix C++ [1].
To say that channels are the answer to fixing select() ignores the fact that Go doesn't support non-blocking I/O at all. select{} only works on channels, and I/O operations are always blocking.
It seems like a bit of a missed opportunity to not let file descriptors and other data structures support select{}, much like they decided not to let "range" work for anything except built-ins. To wait on multiple sockets, you have to start a goroutine per socket and communicate via channels, even if all you want to do is service one event at a time. That's fine, it's Go's way. What I'm less happy about is that because of this design, you can't ever interrupt a blocking operation — reads in particular — other than by closing the file descriptor. That's why SetReadDeadline has to exist, to at least let you set a timeout.
Fix for some value of fix. Go's lack of generics makes the data structure offerings pitiful. Just look at heap: it's worse than C.
There is a role for a language like that, but calling it fixing C++ ignores 90% of the reasons people use C++ in the first place. In fact, I'd say that Go reminds me most of the version of C++ that operated as a preprocessor. It's almost exactly the same template implementation!
"Fix" in quotes, really -- I don't think the Go team ever pretended they were building a replacement, although they are certainly replacing C++ for their own work. That said, if you read Rob Pike's history of Go (previous link), they were mighty surprised to discover that Go didn't really entice C++ programmers at all; the crowd that Go appealed to were Ruby and Python developers who wanted a faster language that was still expressive and fast to compile.
Yes, Golang really can be viewed as "C plus".
But it also absorbs many features from other languages.
Personally, I think Golang is a Java killer. I haven't written a line of Java code since I became familiar with Go. The main reason is that it is painful to maintain a Java web project: slow compiling, slow startup, high memory consumption, so many concepts (of all sorts of frameworks) to learn, etc.
While I'm sure many people will agree with you, I (respectfully) don't, for a couple of reasons.
Golang doesn't have the library or tooling maturity or breadth to make it a real competitor to Java in the enterprise space IMHO.
Golang lacks a package manager (and "go get" is not a real substitute when it completely shirks semantic versioning). There's no solid IDE with Golang support. Most of the web server and database tools are very low level, and while they provide the necessary features for a smaller project or if you're writing exclusively microservices, they leave a lot to be desired if you're writing a run-of-the-mill web app, or an enterprise system for batch processing (orders, transactions, email, etc).
For smaller projects, most languages are better than Java for the reasons you mention (including dynamic languages like Python, for example). Golang is a novel language and it definitely has a niche, but I find it sits right in between a language like Python and a "heavy" language like C#/Java.
Honestly, there wasn't a solid IDE for Java for a while, and the makers of IntelliJ are working on Gogland.
However, IMHO, the only thing I miss in an IDE for Go is inline debugging (but in truth, I never used IntelliJ's refactoring tools, so I could have missed out on all the benefits of a powerful IDE).
Except for that, VSCode and nvi are all I (personally) need.
Obviously Go isn't going to replace every existing piece of Java software. That's a very strange takeaway from the parent comment.
The point is rather that Go occupies the same niche as Java, and does so arguably better. As such, "Java killer" means new software projects are going to increasingly favor Go over Java.
I'm not really a go person. I might go as far as saying I'm a go hater. But I feel compelled to defend it in this case. Golang is a deliberately simple language that lends itself well to inspection & tooling. For example, having special syntax for returning errors vs a more generic solution like multiple return values or returning a tuple. It feels awkward but makes detecting unhandled errors very simple.
Seems like there's decent enough tools + emacs/vim wrappers to me. In my mind this is one of the primary merits of go.
> For example, having special syntax for returning errors vs a more generic solution like multiple return values or returning a tuple.
Even if this were true (which it isn't—Go uses multiple return values for returning errors), returning a tuple would in no way make analysis harder. Detecting whether a function returns (T, err) for some T is utterly trivial.
>Golang doesn't have the library or tooling maturity or breadth to make it a real competitor to Java in the enterprise space IMHO.
This is why my company is being forced to scrap Go. It was good since it was easy to learn by devs, type checked, and fast. But the lack of libraries vs Java means it won't work for a lot of projects. One big issue is the lack of good database drivers.
I can't recall anywhere in the literature that refers to being inspired by UNIX's select(). Golang's select comes from the Bell family of languages such as select in Newsqueak or alt in Limbo. In turn, channels in these languages were inspired by CSP. Here is a link to Russ Cox's "Bell Labs and CSP Threads" (https://swtch.com/~rsc/thread/)
An alternative design that does not require generics is to make channel types implement a Channel interface, which has methods close(), len(), etc. This seems less magical, and would enable functions that act on channels as existentials.
Why generic builtins instead of a Channel interface?
Why is len a function and not a method?
We debated this issue but decided implementing len and friends as functions was fine in practice and didn't complicate questions about the interface (in the Go type sense) of basic types.
Also worth noting is that that debate happened in 2009 or earlier, 7 years ago. Since then, Go 1 has been released with its promise/guarantee of compatibility [1], so changing the language is not possible. It is what it is, and significant language changes can only happen in Go 2. Until then, many, many people and companies are using Go 1 with huge success and happiness and have built great things with it. In practice, I find it works really well.
> Inside appendf we might have to do something gross, like duplicate the entire object, and append to the copy.
That's why you have to care about the return value in normal Go. The slice x opaquely references a backing array with a certain max capacity. If you append one too many times then it's going to return a completely new value that references a new backing array that has more legroom ... with all your existing elements copied into it.
(p.s. I just noticed that Go actually considers that "Probably Not" case a compile-time error)
I know it's a controversial point among users, but I think that if Generics were such a show stopper, Google would have implemented them.
Go was invented for internal Google use, and then shared with the world, and Google is an engineering company. If engineers had rebelled, higher management probably would have done something about it.
-----
Although I don't understand why they keep fighting it. Simple Java-style generics shouldn't make the code that ugly.
> Go was invented for internal Google use, and then shared with the world, and Google is an engineering company. If engineers had rebelled, higher management probably would have done something about it.
Not necessarily. Go was invented for internal Google use, but most of Google's code is still in C++ and Java. New projects are often started in C++ and Java, even, as they depend on libraries written in those languages. I think this would be true even if Go were perfect; switching languages is hard. But it means you shouldn't assume Google has solved whatever problems one would encounter while implementing Google product X in Go. Or that all Google engineers believe Go is a good language; as with any large group of programmers, there are people with opposing viewpoints about the virtues of various programming languages.
Google management doesn't tend to impose top-down technical decisions such as "product team X must use Go" [1] or "the Go team must add language feature X". They generally give broad direction and then trust the engineers doing the work to find the best technical approach to accomplish the task.
[1] Although you generally must stay within the handful of approved languages: C++, Java, Go, JavaScript (for frontend stuff), Python (mostly for internal stuff, and getting less socially acceptable over time). One reason for this list is that when you write a Google system in a new language, you need a bunch of support from the rest of the company to do it: build infrastructure, libraries, etc.
Go was invented by a small group at Google with the aim of solving some of Google's problems as they saw them. It's not as if the whole of Google came together and collectively designed the language they needed. I wouldn't read too much into the fact that it happened to be invented at Google, rather than wherever else Rob Pike happened to be working at the time.
Parametric polymorphism alone is a simple yet very powerful feature, but this assessment changes drastically as soon as subtyping (which both Go and Java have) enters the picture. The combination of parametric polymorphism and subtyping creates the pesky issue of determining whether Foo<T> should be a subtype of Bar<U>.
Java-style generics are a lot of things (notably “bolted on”), but “simple” certainly isn't one of them. Wildcards are complex. F-bounded polymorphism is complex. Non-denotable types are complex.
Go doesn't have subtyping. It has coercions, which are not the same thing, and they do not lead to issues around variance.
Variance is straightforward anyway. A lot of things in Go, like the semantics of "nil" and zero values, are more complex than variance. And, as a designer, if you're worried about the complexity, just make all type parameters invariant, like C++ does. It's a bad excuse to not have generics at all.
Rust works exactly the same way as Go (coercions, no subtyping), except that there is subtyping in Rust via the region system. But Go doesn't have that. No region system, no subtyping.
I googled for a bit and didn't find anything current about Rust's region system from an implementation perspective. Is there a good place to read more in depth about design decisions, but containing less process than the RFCs and PRs?
It would be the other way around. It is intuitive to say that interface{} is a subtype of all other types, but it is easy to show a contradiction. Assume that interface{} is a subtype of int, which means interface{} can be used in place of int in any type. So define a function with type func(xs []int). If you pass a []interface{} to this function you will get a type error. More concretely, []int and []interface{} have different in-memory representations.
Can you explain why the empty interface would be a subtype of []int and not the other way around? Intuitively, I think it's the other way around, as every type implements interface{}.
I guess neither direction is especially intuitive because structural typing makes the relationship a bit strange. In both cases, it's easy to show a contradiction. `interface{}` can't substitute `int` in `[]int` and `int` can't substitute `interface{}` in `[]interface{}`.
I'm not sure how you concluded that I said interface{} is a subtype of all other types. Perhaps you meant supertype?
Also, from “Foo <: Bar”, you can't automatically conclude that “[]Foo <: []Bar”. That requires the additional assumption that [] is a covariant type constructor, which in Go it isn't.
I didn't conclude that at all. It is trivial to produce a contradiction if you assume A <: B because Go will not let you substitute A in place of B for every type in which B occurs. Therefore A is not a subtype of B. This has nothing to do with []A <: []B.
But that's not at all what subtyping means! It doesn't mean that you can substitute every occurence of “A” with “B” in the program text. It means that every term of type “A” also has type “B”. (Source: I have TaPL p. 182 right in front of me.)
OK, I see where I was wrong now. You are right to emphasize the word "term" because that's precisely the detail I was missing. I can no longer come up with a contradiction.
The next obvious step is getting an answer to "what is the meaning of saying Go doesn't have subtyping?", which is essentially what you asked above:
> What distinction between coercion and subtyping are you making?
But this is a question for pcwalton, not you. :-) However, I have some thoughts on the matter.
While exploring the extent of my wrongness, I found it very interesting that the Go language spec doesn't mention the words "subtype" or "polymorphism" or "structural" at all. The FAQ mentions "subtype" and "covariance," but doesn't lead to anything conclusive. What I think this means is that the language designers of Go probably don't think of interfaces as being tied to the formal notion of subtype, but rather that of assignability or coercions. This just takes us in a circle though, which leads me to guess that this is a simple matter of two groups of people having different names for the same thing.
The litmus test for “typeness” is to ask whether it makes sense to ask whether a term has the type in question. For example, it makes sense to ask whether 2 has type string, and the answer is “no”. But it doesn't make sense to ask whether 2 has type 3 - the closest thing to an answer is “what the hell I don't even?” Then:
(0) Structs and interfaces are unarguably types.
(1) If a struct S has all the methods required by an interface I, then every term of type S also has type I.
As for coercions, they are one possible implementation of subtyping. Another possible implementation is guaranteeing that, whenever “Foo <: Bar”, then “Foo” and “Bar” have a common in-memory representation.
But languages shall never be conflated with their implementation details!
Yes, I think pcwalton saying that Go doesn't have subtyping is very confusing. What I think he was trying to say was that Go's subtyping is strange because everything is invariant, and that the use of coercions makes the questions you initially raised perhaps not so interesting.
For example, if you start with some base assumptions about which type constructors in Go are invariant, e.g., pointers, slices, arrays, etc., then you could feasibly come up with a rule like this: If T <: U, then Foo<T> <: Foo<U> if Foo's type parameter doesn't appear within an invariant type constructor in the definition of Foo. If a rule like that is all you need (and those types of rules have precedent in Go, like whether a type permits equality comparisons or not), then I think that more or less backs up pcwalton's point that subtyping isn't necessarily the thing you have to worry about when adding parametric polymorphism to Go. It's the other stuff, like the mentioned zero values and presence of `nil`.
The concern I initially raised is just that retrofitting parametric polymorphism into a statically typed language with subtype polymorphism and a sizeable codebase worth not breaking... has a history of not going so well.
Although zero values are inelegant, I don't see why they would interact so badly with parametric polymorphism. In Rust terms, it's as if every type implicitly implemented the Default trait. (EDIT: You're right, steveklabnik. It's closer to Default than to Zero. Thanks!)
As for nil, I'd rather not comment, since I'm not familiar with the specifics of how it works in Go. (But a previous conversation with pcwalton made it perfectly clear that it doesn't work the same as in Java or .NET.)
> Aren't Go structs subtypes of every interface they conform to?
No. There's a defined coercion between values and interface types, which is not the same thing as subtyping. This affects the semantics: for example, you can't pass a value of type []MyStruct to a function taking []interface{}.
Generics are not a show stopper, since they already exist (built-in support) for everything the language creators care about.
Whether or not they are a show stopper for the users depends on your use case and preferences, since show stoppers are subjective. Objectively you could implement everything in Assembly or even machine code, and people used to write rather complex programs that way.
I personally know there are a lot of types of systems I would never implement in Go if given the choice, because it would be too bothersome, and I don't like RSI. YMMV.
Do we know that Google actually uses Go extensively? AFAIK the Downloads web server is the only piece of Google publicly disclosed to be written in Go.
IIRC most of Google is C++ and Java, which both have generics.
Not widely, as far as I know. Google has two existing systems, Borg and Omega, whose ideas were distilled into Kubernetes. According to an answer on Quora in May [1], Kubernetes is only used for some things that run on the Google Cloud Platform. Some more background here [2].
No, in general. Is there a solution that is actually well tested in production? I can go through the marketing spiel claiming that the solution in question is "scalable", but I'd rather see that in practice.
Kubernetes is definitely being used at scale in production.
Spotify uses Helios [1] to run their stuff. We evaluated it briefly and decided it was too limited and immature compared to Kubernetes.
Mesos + a framework such as Marathon or Aurora was the most needy choice before Kubernetes came on the scene. Mesos probably scales farther than Kubernetes in terms of pure cluster sizes, but it also depends on what framework you use on top (Mesos itself is just a scheduler). I don't know if any of them are as flexible as Kubernetes in terms of things like volume management, config/secret management and security. It's also worth pointing out that Kubernetes can run on Mesos.
>"Mesos + a framework such as Marathon or Aurora was the most needy choice before Kubernetes came on the scene."
How is Mesos "needy"? Can you elaborate?
Needy has a negative connotation and it's not a word I would necessarily associate with the Apache Mesos project. I've run a number of clusters in production now for just over a year with Marathon and it pretty much "just works." I have done 5 or 6 rolling upgrades now without issues. I haven't found it to be needy at all; quite the opposite, it's been rock solid and the management overhead has been nominal.
>" I don't know if any of them are as flexible as Kubernetes in terms of things like volume management, config/secret management and security."
I think that Mesos is actually more flexible as it allows you to cherry pick the non-scheduler specific components to fit your use case. As an example for secrets management you can use something like Consul Vault or integrate Keywhiz or completely roll your own.
I feel like with Kubernetes you buy into the "whole thing". Using the example of secret management: you have one choice for secret management, and the last time I checked, secrets were stored in etcd in clear text. So if that doesn't fit your security requirements, it seemed like you were out of luck.
The container scheduling bit of Cloud Foundry (Diego aka Elastic Runtime) is used to run Pivotal Web Services, which is a public cloud thing that is reasonably big, although I don't know the actual numbers. Baidu run some big instances in-house, but again, I don't know how big.
Perhaps the downvotes are because the comment is pretty empty. You could say that about effectively anything. It doesn't add to the discussion. If, for example, you have information indicating that generics are on the horizon or that Pike et al. have discussed the possibility of adding generics, that would be a lot more interesting and substantive.
I downvoted it because it added nothing to the conversation. The word "yet" by itself does not add anything meaningful, isn't funny, has no depth, does not provide any new information, is not a good conversation lead.... yeah, so that's why I downvoted it.
The indications from the people who've replied to you are that they downvoted you for the same reason you're now upset: they didn't think you were making an effort to converse.