One thing I've come to appreciate, now that I've spent a decent amount of time writing Go as an individual developer, is that it's much easier to jump into open source code and make small improvements and fixes, because the language is very "context-free." When you're reading code, the control flow is always spelled out, property accesses never magically invoke getters, and it's generally hard to make things too complicated. There are things that annoy me, but the downside is almost always bounded.
Code readability is the original intent behind striving for simplicity.
Though I find some of the design choices a bit frustrating (e.g., the lack of generics, no built-in min/max), for each design choice I disagree with there are dozens of other choices I do agree with: structural typing, built-in concurrency, a rich standard library, the syntax; the list goes on.
There is a certain brutal elegance to the way the Go standard library is designed: it is really easy to read and comprehend. It took me less than half an hour to make sense of how the net/http server was implemented.
This doesn't only apply to the standard library: because the language's style is standardized (there are no coding styles, there is just a coding style), other libraries and programs are also very easy to understand. And if a program's architecture is easy to understand, following the code doesn't require any significant domain expertise.
Languages aren't supposed to be cargo cults. Some culture can be nice, but let's not kid ourselves: programming languages are means to an end. Go is pragmatic, it gets things done, it is a tool, and most importantly, it doesn't get in the way of design or architecture. You are free to build programs in any style or way you want.
Ultimately, languages are just expressions of grander designs that are far more important than arguments for and against a particular syntax or language feature.
This oversells golang's simplicity I think. Not a lot, but enough to rub me the wrong way a tiny bit.
---
> Go programs are built from just their source, which includes all the information needed to fully build the program.
You still have to deal with GOPATH, vendor your dependencies, and have available everything a `go generate` comment wants to invoke. It's certainly better than makefiles, but it's hardly just the source.
> C# is joined at the hip with Windows. Objective-C and Swift are for Apple. Java and Scala and Groovy might benefit from JVM bytecode and its independence… until you realize that Oracle isn’t interested in supporting Java on anything other than Intel hardware.
C# has Mono, you can use Objective-C with gcc, the JVM has a bajillion implementations. I suppose Swift is more or less unportable at the moment. 1 out of 4 ain't great.
> Go is helping pioneer a command-line renaissance that reintroduces a generation of programmers to the idea of writing tools that fit together like segments in a pipe (the original Unix philosophy).
This never went away. Heck, we were going over this in college, which for me was in Scheme, Java, C, and C#.
> This oversells golang's simplicity I think. Not a lot, but enough to rub me the wrong way a tiny bit.
True. Go is a good language for server-side web stuff. That alone is enough to make it very useful. It is a good language for when you want to get server stuff done and it has to go fast. That's enough. It doesn't need to be overhyped.
> Go is helping pioneer a command-line renaissance that reintroduces a generation of programmers to the idea of writing tools that fit together like segments in a pipe (the original Unix philosophy).
That's inherently a batch processing approach. The "strings through a one-way pipe" approach was never that great. It's brittle. If anything goes wrong, the options are to abort the whole thing, or print a message to stderr which will probably be ignored.
Better ways to plug programs together are needed, but Go doesn't do much new in that direction. Today, plugging programs together probably involves a number of programs which use protocol buffers to communicate, running on, say, Amazon AWS. There's much work going on in tools for doing that. They aren't typically pipe-oriented.
The core goals behind Rust and Clojure are very different from those behind Go. This article/presentation would not be appropriate for those languages. I don't think anyone would say that Rust or Clojure are simple languages, or that simplicity is a core goal for them. Rust's core goals are performance, memory safety, and lack of race conditions (AFAICT). And Clojure is a Lisp, which puts it in its own category, really.
It's like saying a Jeep can't compare to a Ferrari or a minivan: other than having four wheels, they're really not designed to do the same kinds of things. Sure, maybe driving to the corner store they're all pretty much the same, but which do you want in 8" of mud? Which do you want in a car chase on the highway? Which do you want to bring your four kids to soccer practice?
> I don't think anyone would say that ... Clojure is a simple language, or that simplicity is a core goal for it.
Good god you are so wrong.
Watch yourself some of Rich Hickey's trove of excellent presentations, including the one where he breaks down the detailed etymology of the word "simple" and how much he strives for that.
> Rust's core goals are performance, memory safety, and lack of race conditions (AFAICT)
We usually formulate this as "memory safety without garbage collection," which has secondary implications on speed and concurrency, but yes. (Also, 'data races' rather than 'race conditions,' technically).
The reasons we aren't seeing more Lisp/Clojure are not technical or objective ones, but attitude, social, and network factors, which are just as relevant though.
You also install runtimes once for other languages.
GOPATH isn't awful, but it's something more than "just source" as claimed in the article, which is my point. Go is simple, but not as simple as claimed.
Note that symlinking is not actually the recommended flow ;) Just put everything in GOPATH. I even put non-Go code in my GOPATH; it's just a nice way to organize, by the VCS URL.
And? Presumably some of these packages get updated, handed off, or are collaborative in the first place. Re-generating is part of continuing development.
With other languages, you might do code generation through powerful macros (e.g. as rust has) or some other tooling which is not literally just "run a program in the user's path".
It is actually more difficult to code in a language that's simple.
Why? Because a more feature-complete language allows you to ELIMINATE the concept of `nil` through an `Option` type.
An `Option` type is an `enum` that is either `Some(x)` or `None`. That means it is always checked: you can never accidentally use a value that is `None`, because the type checker will not let you use an `Option<T>` where a `T` is expected.
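The idea can be sketched even in Go itself, using generics (available since Go 1.18); the `Option` type here is purely illustrative, not anything from the standard library:

```go
package main

import "fmt"

// Option is a hypothetical sum-type-like wrapper: a value is either
// present (ok == true) or absent. The zero value represents None.
type Option[T any] struct {
	value T
	ok    bool
}

func Some[T any](v T) Option[T] { return Option[T]{value: v, ok: true} }
func None[T any]() Option[T]    { return Option[T]{} }

// Get makes the caller acknowledge possible absence before using the value.
func (o Option[T]) Get() (T, bool) { return o.value, o.ok }

func main() {
	o := Some(42)
	if v, ok := o.Get(); ok {
		fmt.Println(v) // 42
	}
	if _, ok := None[int]().Get(); !ok {
		fmt.Println("no value")
	}
}
```

Even so, nothing in Go forces the `ok` check at compile time the way Rust's pattern matching does, which is exactly the gap the comment is pointing at.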
The code snippets on the website are far from simple. You HAVE TO remember to write `if err != nil` in every single function. A more advanced type system would actually enforce that check.
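This is the pattern in question, repeated after nearly every fallible call; the function names below are made up for illustration. Note that the compiler accepts the code just as happily if either check is deleted.

```go
package main

import (
	"errors"
	"fmt"
)

// loadConfig and parseConfig are stand-ins for any two fallible steps.
func loadConfig() (string, error) { return "port=8080", nil }

func parseConfig(s string) (int, error) {
	if s == "" {
		return 0, errors.New("empty config")
	}
	return 8080, nil
}

func setup() (int, error) {
	raw, err := loadConfig()
	if err != nil { // the compiler does not require this check...
		return 0, err
	}
	port, err := parseConfig(raw)
	if err != nil { // ...nor this one; forgetting either compiles fine
		return 0, err
	}
	return port, nil
}

func main() {
	port, err := setup()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(port) // 8080
}
```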
So what is more important, the simplicity of the language or lack of bugs?
Isn't this just layering another, non-standard type system on top of the language itself? After all, a type system is no more than a tool to check the correctness of programs. (And often, document the assumptions that the programmer made.)
I don't think this is a bad thing in and of itself - I like how languages like Python, Lisp, Javascript, and Erlang have been able to layer typesystems on top without building them into the language itself. But I wouldn't exactly hold it up as an example of simplicity, particularly since in those languages the community hasn't agreed on any one type system.
That's the answer for everything in the Go community. More ad-hoc tools to replace the type system. Have concurrency bugs? Use a tool that detects some types of data races. Have problems with errors? Have a tool that detects not checking for errors.
How is approaching every single problem with a different tool more simple than using the type system as the one tool for static checking? The Go ecosystem is creating an ad-hoc, informally-specified, bug-ridden, slow implementation of half of the type checker of Haskell.
The jury is out on whether, for large time frames and for large communities, having a centralized type system is better than having a decentralized set of independent tools.
I'm not very excited by GHC's model of language extensions, https://downloads.haskell.org/~ghc/6.12.2/docs/html/users_gu.... And in spite of the large number of Haskell extensions, there are still pragmatic niches that aren't covered, see for example Rust's encoding of memory management in the type system.
In a sense, GHC contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of the type checker of Coq ;) Which is to say that there are many flavors of sophisticated type systems, and it's not clear which flavor is most conducive to writing good software on a tight time budget.
> Q: Lack of generic collection classes like in Java’s Guava library?
> A: For 90% of cases, slices and maps do what you need. For the other 10%, you might consider whether your package should own the logic of those special containers, instead of using an external package.
I read this as:
"We're not willing to put in the hard work on thinking of a decent generics implementation, despite decades of working solutions with a myriad of choices in tradeoffs, so you'll have to do the hard work of integrating a dozen different collections libraries and who knows how many different, after-the-fact, mediocre strategies to the generics issue, each slightly off, all of them conceptually incomplete somehow, and with the slight bugs that come from not having a well-trodden path for an essential component of most programming languages."
This is my own personal opinion on the core devs' position and no one else's, but to me generics and dependency management have always been the elephant in the room.
I can wait for features; I'm patient. But the downright refusal of Go's core team to even begin to address this issue is not just a technical problem, but a communications one. This is clearly big for a lot of people, and I have yet to see any serious responses aside from either "your domain model's wrong if you need generics" or "deal with it".
In contrast (just picking this language for its community outreach, not because of any perceived technical competition), Rust's core devs have been forthcoming about practically every objection they've gotten. Their answers are clear, detailed, not demeaning, and constructive. When they don't know, they're honest about it, and when answers are hard, they take the time to explain. I have never heard of anyone being belittled for not understanding lifetimes or the borrow checker.
Yet it seems that somehow if I have a bone to pick with Go not being able to dispatch functions by argument or arity, or with how its simplicity ends up with codebases a lot of people would consider much more verbose than necessary, it's just that I don't "get it". The problem isn't even in the accusation; I just don't even receive examples or reasonable explanations, something I'm used to in related language discussions.
I was willing to give the language a pass on these things when it was just getting started and a lot of the classical early-product criticisms still abounded. But it's gotten tiring; the number of unanswered questions is remarkable by now.
I'll be completely honest: it's one of the most BS reasons I've ever read in a technical discussion.
How can we talk about high performance when:
* The current "generics" mechanism (`interface{}` et al.) does runtime introspection, which is just about as slow and unwieldy as it gets
* There is no option for pervasive, truly high-performance data structures since everything is a map (which comes with its own type parametrization as an exception to everything else), and if you don't like the hashing algorithm, tough luck.
* You have a garbage collector running in the background, which is barely tunable compared to the options other runtimes have
Talking about performance when it's convenient as an argument against generics but disregarding the other holes in the language is not reasonable, because I would say that the choice of implementation for native maps is probably far more important for high performance. Yet here we are, no one complaining.
So we can really discard the performance argument, thus there is now an ample, valid set of choices for generic programming, several of which don't go against the goals of fast compilation.
Speaking of fast compilation; pretty much everything is going to be faster than C++ templates, since the entire compilation chain in C++ is slow.
It would be unreasonable to have designed a generics implementation into Go 1 that did not cover the builtin polymorphic map, slice, and append. A simple set of orthogonal features is an important principle in Go.
For these, performance is most certainly critical.
He is not putting words in your mouth, he's stating his own interpretation.
No sane person would read "I read this as" and assume that this is in fact exactly what the speaker said, nor should any speaker assume "I read this as" is meant as a defamation or misquote.
In addition, you being a contributor not a core team member does not change what you said in the least. A language's design, even if it seems like there's a tight core, is ultimately decided in part by all contributors and the community around it as well.
The use of the opening phrase "We're not willing ... " implies that the OP interpreted my statements as a policy of the wider Go team.
I have no knowledge of any such policy and represent only myself on stage. You can think what you like about my statements, just don't generalise them to anyone else.
But isn't that effectively what the core team did indeed say? They just dismissed everyone's implementations of generics as having a downside, couldn't come up with anything better, then just left it at that. (Edit: This is just my knowledge from a while ago. There's some mailing list thread where every existing way to do generics is dismissed for one reason or another, and well, Go still doesn't have that one feature.)
Also, "Sans runtime"? When did that happen? Last I heard, Go had a fairly substantial runtime to it, making it unsuitable for many places, and not trivially possible to just link right into any old program. And without a runtime, it'd be hard to have a GC, eh?
I think you meant "Statically linked", and nothing at all about runtimes. Rather large difference. FWIW, you can statically link many things, including C#. Which is how C# is deployed e.g. in bestselling iOS apps (and running on iOS has gotta be far from being "tied to the hip" of Windows).
(As a comparison, Rust is actually what is generally meant by "sans runtime": you can just call right into a Rust function without setting up anything else. Just don't, like, call panic or something.)
Which "working" solutions? Has anyone solved creating a type system that offers OO/inheritance, generics, mutability and isn't mind-bogglingly complex?
What is your definition for a "working" solution? I don't think any of the current languages we have fit this description, since all of them are capable of producing type errors that are way too complex to comprehend.
> with a myriad of choices in tradeoffs
Yeah, but how do you decide on which tradeoff to settle for? And why is the tradeoff to not engage in this mess not a just as valid one?
> so you'll have to do the hard work of integrating a dozen different collections libraries and who knows how many different, after-the-fact, mediocre strategies to the generics issue, each slightly off, all of them conceptually incomplete somehow
You're missing the point. Solving the generics issue for a specialized case, even if just on a library-level, is much easier and has much less impact on applications than solving the general case and forcing that solution onto every bit of code in that language.
I don't think the author is advocating using a general purpose 3rd-party library for containers, but rather writing your own special-purpose ones for the few cases where the on-board tools aren't enough.
> with the slight bugs that come from not having a well-trodden path
Localized bugs are easier to solve than the type errors common to that "well-trodden path" you talk about, which may span an entire application.
...not that there is any consensus on what exactly that "well-trodden path" is, since, as you mentioned, there's a myriad of tradeoffs.
> Which "working" solutions? Has anyone solved creating a type system that offers OO/inheritance, generics, mutability and isn't mind-bogglingly complex?
Pretty sure OCaml would match your definition of "not mind-bogglingly complex" as well as "type system that offers OO/inheritance, generics, mutability".
It is interesting that the OP thinks that static binaries are related to being born at Google. The thing is that Plan 9 doesn't have shared libraries; all binaries are statically linked. And the reason for this is that Plan 9 is a networked operating system: needing to load multiple files at runtime would severely harm startup time for a binary.
Run strace on a Linux binary and see the slew of "file not found" errors from syscalls looking for shared libs at startup, then imagine each one of those round trips taking place over a 9600 baud connection.
Good design realises benefits that authors never needed to consider.
Static binaries used to be common, the normal way of doing stuff. People tend to think that package management killed them, but it was actually glibc which cannot make proper static binaries as it insists on dynamic functionality for some functions, such as name resolution. Now we have Musl libc there may well be a revival in static binaries from C applications, and the C-derived ecosystem.
As a demonstration I traced date(1) on CentOS. Here are the file accesses, if you are running diskless, each of these needs a round trip to the file server. (except the final 3, of course)
```
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
```
Static binaries are bad design as soon as a library has a security flaw. Remember when there was a double free in zlib and Apple had to release a 1.3GB patch to update everything that linked it in statically - and even that only fixed the problem for Apple-official programs, not for anything the user had installed?
So how would one implement Erlang-like functionality in Plan9, where the code of a server can be hotswapped while still keeping all existing sockets to the clients open?
I'd argue that in most cases "easy > simple" with the definitions of Rich Hickey. Easy means you understand it quickly. Simple means it uses few concepts.
Go doesn't want to use the concept of generics. However, if your code uses "List<IP>" instead of "List", it is easier to understand, because it additionally tells you it is about IPs. Python is a language which tries to be easy by resembling pseudo code.
If you really want simple, you could as well use SML, TCL, or Lua.
Indeed, Lua came to mind with his quote: "Dave can’t think of any language in his lifetime that didn’t start out with simplicity as a core goal. Yet he can’t think of any language in his lifetime that didn’t eventually become more complex and “powerful”." Lua has stayed simple and has even removed features.
I dunno about smalltalk, but Scheme and Forth start off simple until a programmer writes tens of thousands of lines of code, at which point it gets harder to read and follow the code exactly.
That tends to be the flip side of the "simple language".
And at least the 3 languages quoted are so simple that they must provide the tools for building new abstractions (which are the tools for building the language in the first place), so you can cut down on code by building a reusable toolbox of abstractions.
Go is complex enough that they can get away without that, and even get praised for pushing the complexity to userland code and providing no way for users to manage that complexity.
How many times do we have to read this... It's not that Go doesn't want to use generics, it's that the use of generics doesn't come for free.
For generics to be introduced, the gains have to outweigh the costs. It seems we're not there yet (I'll be honest: I didn't follow all the arguments closely).
IIRC, Canonical joined the Go community around 2010/2011, before Docker had been created. They are actually one of the early adopters of Go. Some major projects from Canonical using Go are juju[0], mgo[1], etc.
I was using Go recently and I ran into some simplicity issues. It is not straightforward to create a map keyed by net.IP, or to manipulate netmasks. You have to copy to and from a separate array/integer.
"Something that is simple may take longer to write and might be more verbose" -- from the article.
In a simple language, there should not be functions that do exactly what you want to do; that would be a sign of the language being complex and featureful, which are enemies of simple.
OK so I've read it. Slices sound exactly like one would guess, using the word from other languages. A view into an array.
So exactly why are they not suitable for equality? Even that article starts with "Slices are analogous to arrays in other languages". Please elaborate on this basic thing.
For an array (which are value types in go) you have the obvious element-wise equality.
For slices? Also element-wise? Even with different capacities? Or only if they refer to the same window in the underlying array? Is there a need to copy the slice when inserting it into a map? Probably yes, because otherwise you could mutate the key from outside. But then it would be inconsistent with assignment (slices are reference types).
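The distinction the thread is circling around, sketched concretely: arrays are value types with element-wise `==` defined, while slices only compare to `nil`, which is why only arrays can serve as map keys.

```go
package main

import "fmt"

func main() {
	// Arrays: value types, element-wise == is defined.
	a := [4]byte{10, 0, 0, 1}
	b := [4]byte{10, 0, 0, 1}
	fmt.Println(a == b) // true

	// Slices: views into an underlying array; == is a compile error:
	//   s, t := []byte{1}, []byte{1}
	//   _ = s == t // invalid operation: slice can only be compared to nil

	// Hence arrays work as map keys while slices do not.
	seen := map[[4]byte]bool{}
	seen[a] = true
	fmt.Println(seen[b]) // true: lookup is by value, not identity
}
```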
Lack of generic access to data structures is one of their bigger fails.
However, they don't see it that way. One point of Go was to prevent needing to describe things before being able to compile it. Most things that people regard as "failures" in Go were deliberate choices to enable large codebases.
No, that's not why you don't use go. What the parent comment is dealing with is the fact that you can't have maps keyed by a net.IP, which is implemented as a []byte. Byte slices are not valid keys. This isn't really about generics.