I hesitate to say I thought this was a good article because it was highly negative and I don't think it focused enough on the positives (of which I think there are some - for example, it has very nice and highly performant lightweight threads with full blocking semantics via goroutines). That said, I'm not really a fan of Go even though I wrote it for a few years and I think the article covered pretty well why I don't generally care to write it (even though I don't agree with everything they said).
I think I would say overall that I like the idea of Go better than I like Go itself. I like the idea of a simple language, but not the choices they made. I like the idea of simple tooling, but don't find their tooling simple to use (ironically, Rust's tooling is much simpler to use IMO). By being less expressive, it actually makes it more confusing to use IMO (in the same way I find dynamic languages more confusing because without static types, I'm less sure about what args a function takes). By keeping null, not having sum types, etc., I don't trust the code I write. In the same way I don't trust any code I write in Python unless covered by a test, I don't get much confidence in Go's static typing due to their implementation choices.
In summary, I don't think 'simple' is the right metric because it doesn't necessarily make writing code 'easy' or safe. For me, languages that are highly 'logical' and 'compose well' are much easier to use in practice. A language should try and find the simplest way to express X, not remove X as a concept IMO, or else code simply will not scale or compose once past a few thousand lines.
> highly performant lightweight threads with full blocking semantics via goroutines
If you've ever tried doing any actual high-performance work using the Go mantra "share by communicating" with channels, you'll quickly find that it isn't high performance. Any Go program that actually needs high performance will break all the rules for thee and use undocumented things like sync/atomic.
I have not once had to use sync/atomic and I wrote and maintain a massive data aggregation and quantitative algorithm platform 100% in Go at work.
Channels in Go are not a function, they are a primitive type.
In contrast to semaphores aka. mutexes, channels are highly recommended since, when used correctly, they can serialize concurrent access very efficiently. Take care in how you use them - do not pass tiny amounts of work over channels, pass around a chunk of work and work in batches.
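To illustrate the batching advice, a rough sketch (the sizes and names are arbitrary, not from any particular codebase):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        const batchSize = 1024
        batches := make(chan []int, 4)

        var wg sync.WaitGroup
        wg.Add(1)
        go func() {
            defer wg.Done()
            sum := 0
            for batch := range batches {
                for _, n := range batch { // the real work happens per item,
                    sum += n // but there is only one channel op per batch
                }
            }
            fmt.Println("sum:", sum)
        }()

        batch := make([]int, 0, batchSize)
        for i := 0; i < 10_000; i++ {
            batch = append(batch, i)
            if len(batch) == batchSize {
                batches <- batch
                batch = make([]int, 0, batchSize)
            }
        }
        if len(batch) > 0 {
            batches <- batch
        }
        close(batches)
        wg.Wait()
    }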
Normally the work you will do per item will greatly exceed the 90–250 ns it takes to move the item through the channel, so it’s just not worth worrying about.
Channels are slower than copy() for just shifting bytes, but in a simple scenario are about as fast as a naive self-made channel implementation using the sync package.
The choice of channels vs. mutexes is one of design, not implementation. BOTH use LOCK XCHG and pay the price.
Also, channels are blocking data structures. APIs should be designed synchronously, and the callers should orchestrate concurrency if they choose. If you just want to synchronize access to shared memory then use a mutex. These are not good use-cases for channels to begin with.
As they say: Share memory by communicating, rather than communicating by sharing memory.
An easy example of this: Use a channel with a buffer of 1 to store the current number, fetch it from the channel when you need it, change it at will, then put it back for others to use.
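A rough illustration of that "buffer of 1" idea (names invented for the example): whoever has received the value owns it until they put it back.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        // The channel's single buffer slot holds the shared number.
        counter := make(chan int, 1)
        counter <- 0 // seed the shared value

        var wg sync.WaitGroup
        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                n := <-counter // take the value (blocks while someone else holds it)
                n++            // change it at will
                counter <- n   // put it back for others to use
            }()
        }
        wg.Wait()
        fmt.Println(<-counter) // 10
    }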
That last example, where you store a single number in a channel, seems to contradict your recommendation to "not pass tiny amounts of work over channels?"
It's supposed to be an easy example of the concept of sharing memory by communicating and has nothing to do with the recommendation to not pass tiny amounts of tasks.
Interestingly, even though Go is most popular for writing web services and promotes channels, we almost never use channels in our microservices written in Go because they are not persistable. If you care about not losing your data on power loss or a panic, your work queues have to be persistable and retriable. Channels IIRC are just in-memory queues with mutexes under the hood and provide nothing of that. We only use them for a few tricks/hacks like listening to signals to safely stop goroutines.
That makes sense: channels are a way of organizing concurrent programs, not a way of organizing distributed systems. They're a tool you might use to build a persistent work queue (or, like many programmers, including people working on the Go stdlib, you might use other synchronization constructions), not a work queue themselves.
When I first learned Go, my impression (and also tutorials usually imply that) was that they are a nice elegant way to build concurrent pipelines where goroutine A sends work items to goroutine B which can, in its turn, send some work items to goroutine C etc. But in practice for robustness we prefer persistent queues for such pipelines because on power loss, a panic, or if your application is simply being killed on redeploy, your program can lose data or end up in an inconsistent state, because whatever was in the channels is completely lost (and the work is already half done). So it leaves us with only a few use cases where they're really useful such as basic goroutine coordination. I think what you are saying is that they're a synchronization primitive akin to mutexes, atomics etc., no more no less, and I'm fine with that, but then it's not clear why channels have a special syntax and why they are sold as one of Go's strong points, if it's just a niche synchronization primitive. The only useful case I found for them was to send an empty struct to a goroutine to signal that there's a new work item in the persistent queue, to avoid having to poll the external queue too often. I wonder if other web devs have experience similar to ours, or maybe there are other use cases for channels we are not aware of.
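That wake-up trick looks roughly like this (a sketch; processQueue is a placeholder for draining whatever persistent queue you actually use):

    package main

    import (
        "fmt"
        "time"
    )

    // processQueue stands in for draining the external persistent queue.
    func processQueue() { fmt.Println("draining persistent queue") }

    func main() {
        wake := make(chan struct{}, 1)

        // Worker: sleeps until nudged, then drains the real queue.
        go func() {
            for range wake {
                processQueue()
            }
        }()

        // Producer: after persisting a new item, nudge the worker.
        // The non-blocking send means repeated nudges collapse into one.
        for i := 0; i < 3; i++ {
            select {
            case wake <- struct{}{}:
            default:
            }
        }
        time.Sleep(100 * time.Millisecond) // give the worker time to run (demo only)
    }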
The point of channels is to have communication between threads that's easier to reason about than explicit locks. Rather than, for instance, locking a map to update it, you treat a single thread as a "server" for that map. If you don't want to structure your programs that way, you'd just use mutexes.
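A minimal sketch of that "server goroutine" shape (all names invented for the example):

    package main

    import "fmt"

    type setReq struct {
        key string
        val int
    }

    type getReq struct {
        key   string
        reply chan int
    }

    func main() {
        sets := make(chan setReq)
        gets := make(chan getReq)

        // This goroutine "owns" the map; all access flows through its
        // channels, so no explicit lock is ever taken.
        go func() {
            m := make(map[string]int)
            for {
                select {
                case s := <-sets:
                    m[s.key] = s.val
                case g := <-gets:
                    g.reply <- m[g.key]
                }
            }
        }()

        sets <- setReq{key: "hits", val: 42}

        reply := make(chan int)
        gets <- getReq{key: "hits", reply: reply}
        fmt.Println(<-reply) // 42
    }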
In older web languages, like PHP, I would reckon it’s like making a self-http call where you don’t care about the result and just want to trigger some work elsewhere in the application async. I guess with Go channels, you’d lose out on concurrency, automatic retries (depending on infra), and debuggability — you could do the same with Go and http calls though and just drop the channels completely.
But yeah, sometimes things break. We rely on things like nginx options to retry GET requests and idempotency in the design; failing gracefully (via a shutdown callback to always return something to a caller); ensuring work is completed before writing a successful response anywhere, along with being idempotent; and, ensuring there’s observability in every long-running task.
> These functions require great care to be used correctly. Except for special, low-level applications, synchronization is better done with channels or the facilities of the sync package. Share memory by communicating; don't communicate by sharing memory.
There was a long mailing list discussion discussing use cases and certain desired behaviours that the people on the list agreed upon. In the end they chose not to make those behaviours explicit and leave it somewhat open-ended and instead put in the phrase "great care" to cover it off, rather than define it.
I can't find the original thread that had a specific example of a sequence of instructions where the desired/actual result wouldn't be guaranteed by the documented behaviours, but was decided to leave it as-is. The closest I can find[0] is lacking specifics, though does say somewhere that better docs would be nice. And the 'great care' warning/disclaimer is still in the docs.
I found it while using sync/atomic to make some implementation faster, then wanted to know exactly what the language/runtime behaviour/guarantees were, and was surprised but not shocked to find the 'for the Go authors but not thee' attitude of the conclusion. It did have some reasonable founding where they mentioned that by formally defining it, it could be over-specified, precluding future optimizations on even newer processors/architectures.
I would have preferred if the documentation explicitly listed the parts that are not fully defined with reasons for doing so given. That's better than a generic danger signpost. The irony here is that Go was supposed to be not-Java but has the same 'for mainstream' mindset rather than being the sharp tools for the sharp developers.
"Channel ops are wait free if the buffer has extra space though, so that should be fast"
But, I looked it up and channels indeed use locks even in case of buffered ops. Sigh. I guess MPMC with scheduler yield isn't the easiest thing to write and maintain.
Unless you're using something like an RCU queue, you're still going to have to lock operations around moving the queue pointers, or you could have producers writing over each other or writing into a slot a consumer has already read from.
I don’t think channels were ever billed as peak performance abstractions. I certainly don’t use them as often as I use mutexes or atomics for parallel code, but frankly I rarely write parallel code in the first place because it’s harder to maintain and I rarely need the performance (single threaded Go is really performant already compared to other languages in its class).
Heh. For the past 3-ish years I've been operating a flock of microservices in Go. I don't really write the code, but do quite a bit of reading to debug and fix stuff. This month I've seen a channel in production code. I did a double take and took my time to reflect on the language.
> I don't think it focused enough on the positives (of which I think there is some - for example, it has very nice and highly performant lightweight threads with full blocking semantics via goroutines).
I've tried to focus on things that I ran into. I haven't had the chance to write a lot of concurrent code in Go, that's why I didn't comment on this aspect. Most of the code I've been working on has been serial, with concurrency and parallelism managed at a different layer than the one I was working on.
That said, the same point applies for a bunch of negative points that you may have seen in other articles criticizing Go. I didn't mention them because I didn't run into them in practice.
> I think I would say overall that I like the idea of Go better than I like Go itself. I like the idea of a simple language, but not the choices they made. I like the idea of simple tooling, but don't find their tooling simple to use
I think I just had a revelation moment. This is exactly how I feel about Go. I keep trying to use it for side projects and I think I'm lying to myself... It's not Go that I like, but the idea of it.
so.. Haskell and Scala? Those would be the top two that come to mind for me when someone says "highly 'logical' and 'compose well'", "having sum types", 'more expressive', etc.
I've heard they're great languages (haven't used them much myself), but I probably wouldn't choose them for say an early-stage startup. Go's value is in being a mundane, repetitive, boring language. It makes it easy for any random kid to dive into an existing code base and become productive, that can be super valuable.
There's a theoretical gradient of programming languages where one side is "has a single expression for exactly your problem, a single unique expression you've never heard of exists to solve every possible problem" and the other is "requires thousands of expressions to express your problem, but only a thousand are available" - or something like that.
My point? There's no perfect language, yet people are always trying to find one. It's OK if you don't like Go and prefer another language, the real devil / tradeoff is in the fact that conformance to a single language (or set of languages) is such a strong social phenomenon. I think that's why people end up so angry with viewed-as-subpar languages like Go gaining so much traction: it limits personal choice of which language to work in.
I'm positive linguists aren't happy with English becoming the global language either, but you've gotta admit - having a global language is valuable.
> I'm positive linguists aren't happy with English becoming the global language either
Nit: Linguists do not hold prescriptive opinions about the "quality" of languages. They analyze languages to form descriptive theories about the structures and features of certain languages. I only nitpick this because, while having a lingua franca is certainly valuable and there is nothing "bad" about English any more than literally any other language, the closest thing to this in programming "languages"/notations is actually C, not Go. It's hard to overstate the invisible influence of C on basically every mainstream language and how we think of programming and computers in general.
An interesting analogue to your example of English is the varying efforts to transliterate most languages into using the Latin alphabet, much like how many programming languages today need/greatly benefit from a compatibility layer with a C compiler/the C standard library.
> There's no perfect language, yet people are always trying to find one. It's OK if you don't like Go and prefer another language, the real devil / tradeoff is in the fact that conformance to a single language (or set of languages) is such a strong social phenomena. I think that's why people end up so angry with viewed-as-subpar languages like Go gaining so much traction
I hope my post didn't come across this way. To be clear, I think Rust and Swift (as two examples that I mention multiple times) have a lot of problems (slow compilation being a very big one), and are by no means perfect. I'm not angry at Go's popularity as much as wanting improvements to the developer experience.
This is like the C++ argument that you should just use C++ because it has almost every feature so you can just use the ones you want and ignore the others. Of course, in practice you still have to deal with colleagues and upstream packages that use features you don’t like. Same deal with F#—you could write F# in a Go-like style, but you’ll be swimming against the current. And the only advantage is a little less verbosity, which really isn’t anyone’s bottleneck—people over-index on minimizing localized boilerplate and ignore the costs of gratuitous abstraction.
Type inference is a zero-cost 'abstraction', and anywhere you choose you can narrow types with explicit annotations. The libraries I needed to make a web application didn't have any mind-bending patterns, but rather did the most obvious things with the basic language features.
Scala and Haskell are pretty much "dead" languages. I worked with Scala in the past; it was impossible to find people willing to work with it, so everything was re-written in Java.
I'm not sure it's fair to call Scala dead. It's still pretty widely used, at least in companies in London. Hiring is a bit more competitive but we still get some great Scala devs interviewing for the more senior roles.
Disclaimer - I work with Scala every day, so am definitely highly biased.
I worked at a Scala shop about 10 years ago. Everyone had their own preferred "dialect", kind of like C++, resulting in too much whining and complaining during code reviews. IMHO, the language is too complex. Also, the compiler was slow, and the IDE support was plain awful (Eclipse was especially bad, IntelliJ was better.) Keep in mind this was over a decade ago now, so I'm sure things have improved.
This was my experience as well. I worked at a place that used Scala primarily in the "better Java" style and enjoyed the language a lot. I moved to a different company and the lead programmer there was a functional purist who insisted on putting scalaz/cats into everything and using Scala as Haskell lite, despite it really not being appropriate for the use case. It really soured me on the language.
I work in a Scala shop with 500k lines of Scala. We're moving to a Bazel-based development workflow because SBT is a slog and doesn't handle incremental compilation well at all.
Lol. Why would people be unwilling to work in Scala? I moved from Java to Scala and it was a breath of fresh air. We’ve had no trouble hiring people who didn’t know Scala but learned it on the job. It’s not rocket science.
Scala just got a new version, which fixes plenty of shortcomings. It also has an option to exclude the null value from the language, so I wouldn’t call it dead by any meaning of the word.
It is also running splendidly at plenty of companies.
Good, it is still the official language in Vatican documents, has an updated dictionary, and some European countries see it as a CV requirement by most HR departments when hiring for top management positions.
I can't confirm the specific example, but this sounds like a smoke cover for nepotism and/or classism. If you're not allowed to recruit solely from your personal friend group, requiring that applicants be able to speak a dead language lets you select that same group of people while giving a flimsy justification.
In France, for example, it used to be that kids from good families all studied Latin; it fed into the top universities and was seen as a plus on the CV, at least 20 years ago when I used to live there.
France would be a typical example, except that this is an urban legend.
Context: I graduated from the 3rd best high school in France, then from one of the Grandes Ecoles (~ Ivy League). I was born in the early 70's in Versailles.
German and Latin were seen as languages where the best students meet. This was a vision held by the parents, without much grounding in reality. The best high-school students were everywhere.
I had Latin and Greek in middle school and high school like anybody else. A complete waste of time when you are not interested.
At university it did not matter at all.
Today, when recruiting even for the "really French" companies nobody would think about Latin as a discriminant. It literally wouldn't cross anyone's mind.
There are urban legends everywhere, and this is one about French education (which is sometimes great, and sometimes completely backwards, to the point where I doubt the decision makers have ever seen kids in real life).
As for today (and not 40 years ago), in the class of my son in that same elitist high school, one student does Latin and Greek. Because he is interested in the languages.
> so.. Haskell and Scala? Those would be the top two that come to mind for me when someone says "highly 'logical' and 'compose well'", "having sum types", 'more expressive', etc.
I don't think Haskell composes that well; monads don't combine well (not that I could do better) since you can only stick them together in an ordered way, the lazy evaluation makes it complicated to see how your program will run by reading the code, and so on.
Btw, "sum types" means enums; it's not actually a super-complicated Haskell-only feature.
Well, being pedantic, sure almost every modern CPU uses OOE (out-of-order execution). But they must do so with the huge caveat that from the outside it should be equivalent to executing in serial (notwithstanding CPU vulnerabilities). But this is an implementation detail only, we could just as well use an older CPU design.
Having written a lot of Scala, I generally agree until you can form a team that knows what they're doing or has a background in Scala. After that, Scala shines as the codebase(s) scale.
The point is the barrier to entry for Scala is higher but once you have a skilled team working in Scala, you probably get more velocity than others due to the language ergonomics.
> so.. Haskell and Scala? Those would be the top two that come to mind for me when someone says "highly 'logical' and 'compose well'", "having sum types", 'more expressive', etc.
I wrote Scala for years - it might be a bit too complicated but overall pretty decent, but I hear good things about Scala 3. I think the MLs, from which it takes inspiration, are probably a better match. OCaml (EDIT: or F# as another comment mentioned) for instance is a pretty nice balance.
Haskell is very nice and I think qualifies except that it is pretty hard core purely functional with no punches pulled (unlike ML), so is foreign enough for most people not to be deemed a candidate.
I like Rust probably best atm as it is "imperative but with a functional flair"; it doesn't qualify as easy, I don't think, but it is definitely highly logical and composes very well (even if obviously not finished yet).
> I've heard they're great languages (haven't used them much myself), but I probably wouldn't choose them for say an early-stage startup. Go's value is in being a mundane, repetitive, boring language. It makes it easy for any random kid to dive into an existing code base and become productive, that can be super valuable.
I would probably agree on the novice programmer and picking up Go quickly which is arguably the prime feature of Go. I just question the quality of that code, and honestly, don't feel Go is nearly as easy as touted. It is simple for sure, but not always easy. I always had to look up simple things like those magic comment compiler directives and is that interface{} param a pointer or a pointer to a pointer? I just remember the lack of expressiveness actually causing real world confusion (for me at least).
> There's a gradient of programming languages where one side is "has a single expression for exactly your problem, a single unique expression you've never heard of exists to solve every possible problem" and the other is "requires thousands of expressions to express your problem, but only a thousand are available" - or something like that.
While true, I would argue we have found a small set of constructs that fit well 80% of the time, and that is demonstrably better than half the constructs that fit well 40% of the time. Trying to solve every problem with a new construct is not worth it, but nor is the opposite extreme IMO.
> My point? There's no perfect language, yet people are always trying to find one. It's OK if you don't like Go and prefer another language. A real devil / tradeoff is the prohibition of using languages that do not conform with past (company) choices.
Honestly not sure we've found the ideal language yet, and agree there is subjectivity and trade offs at about every turn. In fact, the only thing I'm certain of is that Go missed what I would look for in just about every category except a few (but I agree it is easy to learn, but does it matter if you can't write good code with it?). That said, very intelligent and respected people disagree with me, so to each their own.
My disappointment with Rust, that in no way diminishes how much I enjoy using it in my own time, is that I have a hard time recommending it for microservices, which are arguably an average project at an average company. The ecosystem just doesn't feel as fleshed out or complete as in Go. It's a shame because there's libraries in Rust that I adore like clap, serde, and diesel but when I last wanted to write something that integrates with AWS, I found a deprecated unofficial crate and a non-production-ready official crate from AWS.
I don't know whether to attribute this disappointment to the breadth of what Rust can be used for and the difficulty in doing all of them well, or a lack of wider/corporate buy-in for these use cases. It's a pity, because after getting past the initial learning curve I struggle to find anything wrong with the language itself.
They already mentioned AWS support was poor, which I can't confirm. I can say that GCP support is subpar. Comparing Python's GCP library with Rust's is a world of difference.
Yeah, I think your view here is pretty close to the way I feel.
It's unfortunate, because I think Go generally gets a lot of stuff really right. But it too often feels that it prizes simplicity in implementation or specification over simplicity of how code is read and written.
Writing Go feels like having a little rock in my shoe. I think it comes from that situation where you know some property of the code you are writing (e.g. "this thing cannot be null"), but there is no method to assert this to the computer. And so instead of my very fast and logical computer with lots of memory enforcing this for me, I have to carry that knowledge around in my own lossy memory as a little bit of baggage and hope I never put it down and forget about it. And I find that this comes up in Go all the time.
That sucks because Go is still—despite all this—probably the best overall solution for the kind of apps I end up writing a lot. I really do think that there's still a space in there for a language sitting on the complexity scale between Go and others around the level of Java/Swift/Typescript, which can benefit from both some excellent design decisions of the former, and some of the expressiveness- and correctness-enhancing features of the latter.
> I like the idea of a simple language, but not the choices they made.
That’s the problem with simplicity, and it’s why kitchen-sink languages like Java will always be popular.
People complain about the complexity of Microsoft Office too — they only use a subset of the features. Why does it need to be so complicated? Because every customer uses a different subset of the features, and what you end up with is the combined superset of what everyone wants.
It’s a funny thing about simplicity. It’s easy to make something simple by foisting the complexity on the consumer. A lot of Go’s features are simple in that they take few words to explain, but using them uncovers so many edge cases that they wind up being quite complex.
The sad part of the lack of sum types in Go is that the select operator is kind of a typical sum-type operation: picking one of the sum type's branches. So the language has this notion, but it is extremely limited.
Yep, and multiple return types are like tuples, minus the ability to compose them and actually use them as a single type. Map and lists were generic, but a special kind the user couldn't make (fixed now I think w/ generics in 1.18). 'range' worked over a magic iterator, but not one you could ever make. Their "enums" are variable bindings minus the ability to be sum types or any other decent property of an enum. Nil is nothing more than "None" or "Empty" in an Option type, but since they didn't use the type system you can get a classic null pointer exception. Their "attributes" are just comments, but now you need to remember their special formatting since each one is effectively arbitrary text.
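To make the "enums" complaint concrete, here's a hedged sketch of the usual iota pattern; nothing requires the switch to be exhaustive, and out-of-range values are perfectly legal:

    package main

    import "fmt"

    // The conventional Go "enum": named constants over an integer type.
    type Suit int

    const (
        Diamond Suit = iota
        Spade
        Heart
        Club
    )

    func describe(s Suit) string {
        switch s {
        case Diamond, Heart:
            return "red"
        case Spade, Club:
            return "black"
        }
        // The compiler doesn't force this fallback to exist, even though
        // Suit(42) is a perfectly legal value of the type.
        return "not a suit?"
    }

    func main() {
        fmt.Println(describe(Heart))
        fmt.Println(describe(Suit(42))) // compiles and runs fine
    }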
In so many instances, they made things "special" to avoid bringing in a concept, but you have to learn that concept anyway, but now as a special case. It is almost as if they said "we can have 5 features and I don't care what #6 does..it is out...we must make a language with 5 features!". Instead of saying: "What is a reasonable set of features that compose well, are logical, expressive, and relatively simple such that people can both easily learn and scale their code bases".
A great example is Brainfuck. Everyone would agree it is 'simple' and it only has 8 constructs, but it is not 'easy' to write programs in. 'simple' is not the right metric for a language.
This is one of the warts I wish Go would fix (implementing Rust-like enums, and ideally getting rid of zero values and nils, but these things won’t happen). Even still, Go is the most productive language I’ve used because it turns out type-systems (cool though they are) are overrated—you only need enough of a type system to keep things documented for humans and tools. 95% type safety seems to be the sweet spot (peak productivity) after which productivity begins to rapidly diminish. It’s more important to have decent performance, good tooling (simple with sane defaults), small learning curve, great deployment story, etc.
You can have robust and ergonomic code around nils if they are part of your type signatures and/or get some good language and standard library support:
In Clojure (Lisp) a nil is treated as a value which flows nicely through code, there are many idioms and utilities that are built for this and compose well. In Kotlin, Typescript, PHP there are unions which I find much more ergonomic than sum types for this use-case.
> You can have robust and ergonomic code around nils if they are part of your type signatures
Go has `nil` built into the type signature, but the problem is that all reference types are inherently nil-able, and value types can't be nil (although they can have their own in-band zero values). This means you either pass around values (with the entailed overhead and copy semantics) or you pass around nil-able references.
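A tiny illustration of that split (the struct type is invented for the example):

    package main

    import "fmt"

    type Config struct{ Retries int }

    func main() {
        var byValue Config // a value type: can never be nil, zero value is {0}
        var byRef *Config  // a reference type: nil until assigned

        fmt.Println(byValue.Retries) // 0, the in-band zero value
        if byRef == nil {
            fmt.Println("nil pointer; dereferencing it would panic")
        }
    }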
Why would unions be more ergonomic than sum types? Presumably they would have the same ergonomics?
> Why would unions be more ergonomic than sum types? Presumably they would have the same ergonomics?
For this use case (can be some type OR nil) they are more ergonomic because it is an actual OR, not a separate container. You want the thing as-is and handle the case where it isn't there, not go through an intermediary type.
Yeah, I have no illusions that Go will walk that back, especially considering its compatibility promise. It's a thorn in the side, but people also exaggerate it dramatically while overlooking major issues in other languages.
> I think I would say overall that I like the idea of Go better than I like Go itself. I like the idea of a simple language
Agree.
I'm new to Go, and I keep bumping into language issues that upon further inspection were unilateral "opinionated choices" made by some secretive cabal. Don't get me wrong, I see plenty of public discussion, but ultimately some core Go team person abruptly closes the issues. Perhaps I'm exaggerating a bit, but there have been several times I've tracked an issue report to find the issue closed without sufficient discussion.
For example, the whole thing with exported vs unexported things using the uppercase first character. This is both brilliant and simultaneously prejudicial. The brilliant part is one need only look at a variable's name to understand it's exported, rather than exhaustively searching for the declaration where something like an "exported" reserved word was used... Instead the first character carries that information. But... many languages do not have the concept of upper/lower case...
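For readers who haven't seen the rule, a tiny illustration (identifiers invented):

    package geometry

    // Area starts with an uppercase letter, so it is exported and visible to
    // any package that imports geometry.
    func Area(w, h float64) float64 { return w * h }

    // scale starts with a lowercase letter, so it is unexported and only
    // callable from inside package geometry.
    func scale(x, factor float64) float64 { return x * factor }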
Or the related issue of variable declarations or aliases having to be Unicode "letters", with the designation of Unicode letters being narrowly defined in ways that prevent folks from using their native language. To elaborate, that means there is no possibility to write Go in a world language like Chinese, Japanese, Arabic, Hindi, Russian, etc. The writing system must have the properties of 1) consisting of letters, 2) not needing any non-letter modifiers, and 3) having upper-case letters.
The problem is many languages have funky characters that are technically symbols (not letters) in Unicode, but are used in combination with Unicode letters to modify the meaning of a word. Similar to accents, or whatever hats go above some Latin characters, other languages have a whole other symbol left or right of the character to do similar things. But variables may only contain "letters".
So to rephrase this issue, Go is an English programming language, which is ironic because one of its creators was purportedly one of the people involved with creating UTF-8 as a replacement for ASCII, and yet Go is saddled with issues similar to how ASCII limited C way back in the day. So it's like these neck-beards propagated their biases and prejudices into the Go language.
Another criticism I've got pertains to Go's understanding of numbers. I'm so tired of numbers being an emergent property of the underlying hardware architecture. You know, when we went from 16-bit to 32-bit, and again from 32-bit to 64-bit... Go has perpetuated the silly idea of tying number types to the underlying hardware architecture, and that has silly effects on writing portable software. In the year 2008 (Go's approximate inception) or the year 2022 it's entirely feasible to have numbers mean one thing always, with the runtime doing the needful to ensure the numbers just work with respect to the underlying hardware. In your lifetime, there is a very real chance we may see the emergence of 128-bit architectures, if anything for the expanded arithmetic or expanded register size, less so for the absurdly huge memory address space. But now we are locked in to how Go was conceived at its inception....
I would really like to use a language like the idea of Go, but not Go itself. It seems evident Go will not break the compatibility promise/contract to improve the lang or reverse bad choices, and the utterly brilliant people designing & implementing Go apparently hold bad opinions. I've seen this before in so-called meritocracies; you tend to see brilliant ass-holes fizzle to the top. Don't get me wrong, nobody on the Go core team is an ass-hole, I'm obviously exaggerating to convey my point... it's a variation of the reductio-ad-absurdum fallacy.
As a person who likes Go a lot, and who finds Go bashing -- and really, bashing in general -- pretty tiresome to read, I thought this was a pretty decent article and the sort of thing we ought to encourage on HN. At least in contrast to the usual rants.
In particular I appreciated the author's interest in exploring the reasoning behind the design decisions they disagreed with, rather than stopping at "I don't like X".
I share at least some of the author's criticisms of Go. But it neglects to mention a few of my favorite things - the massive and mostly capable stdlib and the easy cross-compiles. How many compiled languages let you be on any of Mac, Windows, or Linux, and compile a binary for all 3 instantly with just an env var that always works no matter how complex your code or how many libraries you're pulling in? I can't think of any offhand.
I like Rust better as a language for a number of reasons, but it unfortunately falls down on both of those. There's also a noticeable difference in how they're used based on the number and maturity of packages available for various things. Most of the Rust world seems to be more focused on low-level stuff. You can do stuff like web services and query web APIs and talk to databases, but there's likely to be only 1 or 2 crates for it, and they're probably maintained by a single person who (like most normal people) sometimes just stops doing anything for months or years. Golang is really made for the web services world, and it shows in the packages available - it seems a lot more likely there will be many top-quality and well-maintained packages for doing any web service like thing you might want to do.
Go is my default language for any non-client-side project. That said, I agree with all the criticisms, except for the one about unused variables/imports and the lack of warnings in the compiler. I'd also add one of my own - I'm an expert go programmer, but I still panic sometimes when dealing with error handling. It is still not as easy as it could be to produce useful error return values.
Indeed. panic is not overly recommended but there are a few places you can nudge it in that just make sense, rather than ensuring the error will successfully propagate up the chain.
> `filepath.Clean()` is not called `filepath.Canonicalize()`
Canonicalization of a path is a different operation; as the name suggests, it returns a canonical path to a resource. The idea being that if you happen to have two paths that refer to the same file (say, `/bin/sh` and `/usr/bin/sh`) then you should be able to pass both to a path canonicalization function and get the same string back. This is not what `filepath.Clean` does, so calling it `Canonicalize` would be confusing.
Meanwhile, Go's `Clean` function is only concerned with lexical processing and ignores the file system entirely; it would return `/bin/sh` here unchanged, as none of the rules `Clean` uses apply.
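For reference, a quick sketch of Clean's purely lexical behaviour:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        // Clean rewrites the path text only; it never touches the file system.
        // (Output shown for a Unix-like OS; filepath is OS-aware.)
        fmt.Println(filepath.Clean("/usr//local/../bin/sh")) // /usr/bin/sh
        fmt.Println(filepath.Clean("./a/b/./c/"))            // a/b/c
        fmt.Println(filepath.Clean("/bin/sh"))               // /bin/sh, unchanged
    }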
This is a fair point. To be clear, I wasn't suggesting Canonicalize as _the_ name, but as one potential candidate. As you've shown, it has some shortcomings. Perhaps Normalize is a better alternative (and also another candidate that I suggest), since "Normalization" is a commonly used term for converting to a standard/normal form.
> Cannot make a type in a foreign package implement an interface
The idiomatic way of adding an interface to a foreign type is to declare your own local type based on it and use that to implement the interface. At least then the added functionality is confined to a known package and won't lead to surprising behaviour elsewhere.
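A sketch of that pattern, using url.URL and fmt.Stringer as stand-ins:

    package main

    import (
        "fmt"
        "net/url"
    )

    // prettyURL is a locally defined type over the foreign url.URL. Go won't
    // let us attach methods to url.URL itself, but it will to our own type,
    // and we can convert between the two freely. (A true alias, `type X = Y`,
    // would not accept new methods; it has to be a defined type.)
    type prettyURL url.URL

    func (p prettyURL) String() string {
        u := url.URL(p)
        return u.Host + u.Path
    }

    func main() {
        u, err := url.Parse("https://example.com/docs")
        if err != nil {
            panic(err)
        }
        var s fmt.Stringer = prettyURL(*u)
        fmt.Println(s) // example.com/docs
    }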
In the recent Go critiques posted in HN I got really annoyed because the articles came across as snide and such-and-such miscellaneous things (that I expressed in a somewhat poor, reactionary way last time)
In my humble opinion this article is very well written. While it leans more negatively and is a critique, you're being very professional and respectful. I appreciate your write-up and perspective. Some of these issues flow into problems I've also had with Go, particularly in trying to use it for a GraphQL API (which was a horrible idea!). Feeling cautiously optimistic about generics :)
I'm going to mention my own tiny experience report as a frequent SQL user, of Go's simplicity pushing a lot of burden onto the user in the relatively simple task of querying a list of things from a SQL table:
There's a specific order of operations to be done here, and even a slight deviance could be a significant bug.
To start off with we:
1. Do query
2. Check err
3. Defer close
1 is straightforward.
2 you can forget, then something bad may happen.
3 you can forget, which will cause a resource leak.
If you switch the ordering of 2 and 3 something bad may happen.
Instead of defer close(), a user might opt to just close() at the end of the function, but there's a reasonable chance they'll overlook the unhappy paths.
Next we have iterating over the results:
1. For rows.Next()
2. Scan into a &var
3. Check for scanning errors
4. Exit for loop, check rows.Err
1 is hard to mess up.
2 has some potential to mess up if you're not clear on which bit of memory you're storing the result in.
3 you can forget, which could cause you to get the previously scanned var show up multiple times, or zero values.
4. you can forget, which will cause your results set to (I guess?) be incomplete.
There's a lot of different permutations in this code that will compile, and I think has a good chance of passing the average paid dev's test suite (it's a bit of work to exhaustively test the unhappy paths).
In my experience people are especially prone to messing up closing rows. I don't blame em, we know from decades of C that manually freeing resources, especially in the case of errors, is tricky.
If I've made any mistakes, please point them out, as it supports my stance ;)
(There's more fun to be had in SQL with Go, like that SQL has optional types, but Go doesn't, which leads us to sql.NullString et al).
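Putting the whole sequence together, roughly (a sketch assuming an open *sql.DB and a users table; the names are invented):

    package store

    import (
        "context"
        "database/sql"
    )

    // listUsers walks through every step listed above; forget any one of
    // them and the code still compiles.
    func listUsers(ctx context.Context, db *sql.DB) ([]string, error) {
        rows, err := db.QueryContext(ctx, `SELECT name FROM users`)
        if err != nil { // check the query error first...
            return nil, err
        }
        defer rows.Close() // ...then defer the close

        var names []string
        for rows.Next() {
            var name string
            if err := rows.Scan(&name); err != nil { // per-row scan error
                return nil, err
            }
            names = append(names, name)
        }
        if err := rows.Err(); err != nil { // errors that ended the loop early
            return nil, err
        }
        return names, nil
    }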
Bit negative. I agree on some points (e.g. error checks on enums); other points, you simply have to learn to adapt to (initialization). The rest are (very) cosmetic preferences (overloading brackets; the compiler doesn't inform about typos).
The biggest negative for me the author overlooked is the lack of non-nullable types. The positives the author doesn't mention are that it's easy to write in and that threading is really easy.
> The biggest negative for me the author overlooked is the lack of non-nullable types.
I did mention the pervasiveness of nil and the lack of sum types as negatives.
> The positive thing the author doesn't mention is that it's easy to write in
I didn't mention this because I think the "ease of writing" is superficial; thinking through edge cases is something that frequently consumes a lot more time than literally typing the code out.
Also, given that the compiler doesn't really give any useful suggestions when your code is wrong (something that happens more often when you're less experienced in a language, like I was here), ease of writing takes a significant hit.
> nor that threading is really easy.
I didn't cover this because I haven't had the chance to work on much code using goroutines, and I've tried to ground the post in what I have actual experience with.
In Rust the edge cases are often more apparent because you're often forced to at least acknowledge them. So while you do have to put thought into handling them, you don't have to spend much thought finding or worrying about missing them.
Compared to Go, Rust makes many kinds of edge cases mechanically discoverable. You cannot forget them.
As in Go, you can of course handle them poorly, and sometimes this is an ergonomic win. E.g. it's basically always trivial to panic (same as missing some critical cases in Go) or return the wrong value (literally all other cases in Go).
The primary difference isn't how you handle them, it's if you are aware that there are edge cases. Go is extremely lax here.
What are you trying to communicate here? That releasing one of the most popular languages in the industry without sum types is a form of original sin, and if they ever added enums and a match statement, you'd be more right to hate the language? Some of the logic in these threads is just baffling to me.
Maybe? Generics were the most egregious absence, since it's challenging to express some constructs in a static language without them. Nullable types are another issue that other modern languages (e.g., C# and TypeScript) are trying to solve, but it's a difficult issue to tackle when nil is already pervasive in the language.
I think generics are a great addition to the language, and I'd love to see sum types (which would allow Option to circumvent nullable types), but I think that's much less likely.
This is a good article. I am curious how the author compiled it. Every time they came across a quirk they'd log it in a journal? Fun :)
Figured I'd comment on at least one item which I ran into recently.
> Sends and receives to a nil channel block forever.
This did feel bizarre to me too, until I realized how well this works with select blocks. That IMO, is the primary justification for that design choice.
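The classic illustration is disabling a case in a select loop by setting its channel to nil (a sketch, not from the article):

    package main

    import "fmt"

    // merge drains both channels; when one is closed, setting it to nil
    // disables its case (a receive from nil blocks forever, so select just
    // stops picking it) instead of spinning on closed-channel receives.
    func merge(a, b <-chan int, out chan<- int) {
        for a != nil || b != nil {
            select {
            case v, ok := <-a:
                if !ok {
                    a = nil // switch this case off
                    continue
                }
                out <- v
            case v, ok := <-b:
                if !ok {
                    b = nil
                    continue
                }
                out <- v
            }
        }
        close(out)
    }

    func main() {
        a, b, out := make(chan int), make(chan int), make(chan int)
        go func() { a <- 1; close(a) }()
        go func() { b <- 2; close(b) }()
        go merge(a, b, out)
        for v := range out {
            fmt.Println(v)
        }
    }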
...as long as you make an honest effort to understand why things you don't like are the way they are and don't just think "hey, cool, I found another thing to add to my list of things I don't like about Go!". Go is very opinionated and does some things differently from the mainstream. So you'll have to adapt your programming style in order to use Go successfully. Developers who aren't flexible enough will most likely hate it.
Go is my go-to language for cross-platform network applications; the built-in HTTPS support alone is enough, plus I can ship a true single binary in the field - it cannot be easier to upgrade. For other use cases, I use c/c++/python/typescript.
That was a great article and put into words a lot of the feelings I have had about my experiences with Go better than I could have. I also learnt a few new ways to shoot myself in the foot :)
The MIT vs New Jersey style of thinking was eye opening. And I also feel like my "values" lie closer to a language that prioritizes correctness over simplicity. I understand that means more work for the maintainers of the language though.
I wish I had that framework of thinking a few years ago. Rust's emphasis on correctness and requirement that the user deal with correctness too (with sum types + exhaustive matches etc) is I think the main reason I enjoy the language so much.
Sum types are my number one. The frustrating part is that it almost has them with type sets in interfaces, which you can use in function signatures, but you can't use them in definitions of other types.
For example, given an interface that is scoped to 4 types:
    type PlayingCardSuit interface {
        Diamond | Spade | Heart | Club
    }
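To make the limitation concrete, a small self-contained sketch (the suit types are stand-ins):

    package main

    import "fmt"

    type Diamond struct{}
    type Spade struct{}
    type Heart struct{}
    type Club struct{}

    // The type set from the example above.
    type PlayingCardSuit interface {
        Diamond | Spade | Heart | Club
    }

    // Allowed: the union can constrain a type parameter...
    func Describe[S PlayingCardSuit](s S) string {
        return fmt.Sprintf("%T", s)
    }

    // ...but it can't appear as an ordinary type, so this is a compile error:
    // type Card struct {
    //     Suit PlayingCardSuit // cannot use a constraint interface as a field type
    // }

    func main() {
        fmt.Println(Describe(Heart{})) // main.Heart
    }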
No mention of CSP makes me think the author is still writing Go programs in a non-concurrent, imperative style. Used in this fashion you only get limited gains from Go's simple syntax and nice tooling. But writing using CSP patterns gets you a different experience and feel and is where Go really shines.
Go's concurrency story isn't great compared to other languages tbh. Combining channels, wait groups and mutexes (because sometimes you do need to use all three or mix/match) leads to confusion on when exactly to use what, when.
Yes. And if the metric is "confusion when to use what" then Rust (and especially async Rust) might fare even worse: now there are multiple types of Mutexes, Semaphores, Channels: awaitable ones and non-awaitable ones. And one needs to understand how they fit into programs, and interact with all the other stuff. Most people who just pick an async Rust runtime because they read it's a great thing are having a rather hard time with this.
I just so don’t get the comparison of a high level language like Go to a goddamn system programming language like Rust. They are nowhere in the same league.
Rust’s solution has to account for the true variability and complexity of the underlying platform, as otherwise it would be a bad low level language, while go can/could easily hide it behind a supposedly clever abstraction.
I don't understand this weird, embarrassing subset of Rust programmers who talk as if they're working for the first time with manual memory allocation and it's a revelation that somehow sets the language apart from all others. In fact, the overlap between problem domains for Go and Rust is immense. There is no language that has done more to halt and roll back the progression of memory-unsafe C/C++ than Go, which is, in fact, a systems programming language --- a term that does not in fact mean "usable in the Linux kernel" (a definition that would also have excluded C++ at times).
They are eminently comparable languages. Go gets some stuff very right, and so does Rust.
Then you can just as well expand the definition of systems programming languages to JS, Java and Python... sure, don’t get me wrong, there are very very few niches where a managed language can’t be applied (a whole OS can be written in GCd languages) but there are a few, and in these cases Go is simply a no-go (sorry for the pun).
Java, yes: I have worked on systems, as have others on HN I'm sure, where Java is the bottom level language: there's a JVM, and below that there's hardware, and that's the whole story. JS and Python, not so much. This isn't complicated: there is a general agreement about what "systems programming" is, and the only people who make a huge deal out of the divide between managed languages and unmanaged languages are, again, this weird embarrassing (tiny!) subset of Rust people. I think the Rust core team people must wince about these arguments.
It's not as if Java and Go programmers don't understand the value of "zero-cost abstractions" (a term from the same 1990s vintage as the first version of Java) and manual memory management. I don't see a lot of serious energy being applied to getting Go running in the Linux kernel. To my eyes, the only people who really seem uncertain about Rust's value are the "those aren't systems programming languages!" people. Rust is not good because we finally have a language to do systems programming in; it's good because it's a good, interesting language on its own merits.
Just by way of example: pay attention to how Niko Matsakis answers the first question ("what is systems programming") on this panel with Bjarne and Alexandrescu and Rob Pike:
(It's hard for me to disagree with any of them, except maybe that I don't at all agree with Alexandrescu's argument that systems programming is defined in part by being able to forge pointers from integers --- something, of course, that you can do in both Go and Rust [unfortunately]).
I always find myself reaching for the same constructs in other languages. In Rust I go with crossbeam channels and use mutexes or RwLocks when I don't need the channel synchronization overhead.
Ada had only CSP at the start of the eighties and then gained mutexes to allow for more expressive and efficient patterns.
It is very puzzling to me why Go went with CSP and added a lot of language support for it like the select operator. A data type like a priority queue to post messages to a thread would serve similar purposes as CSP while allowing for more patterns and simpler reasoning.
CSP is a low level concurrency primitive that lets you implement most other systems in it if you want. Implementing a priority queue w/ message passing would be simple.
I really wish nil didn't exist in Go. It causes those familiar runtime panics like "cannot __ on nil" we love from those loose scripting languages like node.
Nil values could be kind of tolerable if the language was always explicit about what is nillable and what is not. If you see something like "MyStruct" you can never be sure if it's an interface (can be nil) or a value (cannot be nil).
I understand the ambiguity you're talking about here, but how often does this actually bite you in practice? There are other context cues that you're dealing with a struct and not an interface, and the nil-ness of a return value isn't how you idiomatically communicate success or failure, even for things like map lookups where it would be somewhat natural in a C API to do that.
Zero values are one of the tenets of Go. It's like saying I wish Go had exceptions, which is to say you'd prefer not-Go. Ironically Go was promoted as the anti-Java, which we know is littered with NPEs.
I've recently written a "system tool" in Rust, and, while it wasn't as bad, in terms of productivity, as I had imagined at first, there are certain domains where Rust is really a grind.
Working with filenames/paths is one of those - if one works with (transforms) filenames/paths, the code will be polluted with all the conversions between PathBuf/Path/OsString/OsStr (and the canonical String/&str); this makes it hard to reason about the abstract logic.
It absolutely makes sense that Rust forces one to consider the robustness of the code, but in some cases one just doesn't want such robustness.
Cyclic graphs are another very ugly thing to work with in Rust (without supporting libraries).
The distinction at the end with the "New Jersey" and "MIT" approach was enlightening, as well as the empathetic remark on Swift's approach to keeping complexity within the language implementation vs foisting it on users.
>I find that people with much more C/C++ experience (>5 years) tend to appreciate Go the most. Present company included.
In our shop there's plenty of former C++ devs who are now writing in Go and enjoying it, myself included. Initially everyone was skeptical and dismissive of the language to say the least ("no generics? ha!") but as soon as we started developing and releasing our first microservices people began to appreciate how smooth and painless the whole development process had become. I remember with horror our C++ projects' 2 hour long rebuilds, 1 GB debug binaries, days of chasing memory corruption heisenbugs... It's symbolic that the last C++ talk I attended before fully transitioning to Go was by a guy from Intel who spent 2 hours talking about how they abused templates to write allocators which allocate other allocators at compile time... Fortunately Go devs don't have this cult of overengineering that's plaguing C++, we're getting things done and release cycles are now much shorter and with less stress.
I like to describe Go as "what you wish C were while you're writing C". Not "what C should be", that's probably Rust or something. Go is C, with the pain points filed off.
The C++ coders I know (at least the true C++ coders, e.g. ones who have abandoned the 'Simplicity of C' mantra) find nothing of use in Go besides a draconian single-truth model of 'the best way to do things', similar to Python.
That is really the opposite of the C++ experience, where you have millions of different patterns, techniques, and pitfalls at every corner. Those who enjoy this are definitely not Go enthusiasts.
Now I'm not saying one's better than the other, but I really doubt C++ coders find Go attractive. Go was invented on the premise that 'We hate C++' or something to that effect.
I've written production C++ code for > 10 years, and I would definitely prefer to use Go for a lot of programs. Yes, it might be slower for some use-cases, it has some ugly parts. But it's also orders of magnitude easier to understand and the ability to just focus on the actual problem statement and build a solution instead of nerding out over language features is nice. I'm definitely not enjoying reverse engineering some smart SFINAE code, and writing any networking code with asio is also no longer very high on my "love to do things" list. I would rather deal with a few "if (err != nil) { return nil, err; }" statements.
But maybe it just depends on what "true C++ coders" means. If it's C++ enthusiasts for the sake of loving the language, I'm sure you are right. If it's "C++ users", because 10 years ago that was the best tool to solve a certain set of problems and the jobs were mostly C++, there are likely a few who are open to other tools to solve their problems.
> That is really opposite of the C++ experience, where you have millions of different patterns, techniques, and pitfalls at every corner.
Exactly right. And everyone who is serious about the mission and not “enjoying the process” is so so tired of c++ with its “millions of different patterns, techniques, and pitfalls at every corner” and c++ devs that need to be convinced not to do crazy shit just bc they can
When I finished coding C++ professionally the general sentiment of the team was to use less C++ than more. When possible, it was advised to treat it more like C but with classes. The goal was stability above all else and easiness to reason about (for everyone.) That's why when Go was announced its design direction made perfect intuitive sense to me and anyone that spent years writing mission critical C++ on large teams.
Yes but many organizations make agreeing on “using less c++” very hard these days. Google kind of managed with their strict review/readability process and pretty much mandatory tooling but it’s damn near impossible in many startups to do the same without a huge political capital to spend. Go takes that challenge from organization level and largely pushes it down to language level.
The loudest voices usually win. I've since exited the tech world but back then the people I surrounded myself with were committed to product and solving business issues in the cleanest way possible and not trying to prove to one another which one of us was smarter.
I've noticed that people coming from Python tend to appreciate it as well. The fast compile-times make the language _feel_ dynamic, the concurrency story is much nicer than in Python, the type system is helpful without being pedantic, and interfaces are safer duck-types.
> Cannot make a type in a foreign package implement an interface
I would only push back on this point, and this point alone. Sure, it would be very convenient for me, an author, to be able to just extend the implementation of anything that I import. However, when I am not an author and am instead a reader of code, doing this is the number-one way to casually make code totally unreadable. By allowing anyone to extend a type anywhere, it makes it impossible to just read the code. If the package that declares a type is the sole controller of that type, then reading code is easier.
"Where is the definition of this method?" Answer: It's next to everything else for that type, in the package where that type is defined. However, if anyone can extend any other type, figuring out what methods even exist for a type becomes so difficult that you are forced to use a language-server (with all related dependencies installed) in order to use "go-to-definition". If someone wants to understand the code, requiring them to have a full-fledged IDE and/or development environment is pretty awful and quite onerous, and makes codebases tough to get into.
For example, take this line[0] from a tutorial[1] on creating a simple snake game using the Bevvy engine in Rust. The main() function wires up loads of entities, including a `snake_eating()` function. But it also calls a mysterious `.after()` method of the snake_eating() function. If you do a ctrl-f on the file, you won't find an .after() method defined for the function snake_eating(). Just reading the code in that file, the only file in the program, you'll never find what the `.after()` method does or where it comes from. Only by opening that code in an IDE with go-to-definition will you learn that it's set via an automatic macro in the bevvy library[2] which causes all functions of certain signatures to implement the System trait. Which... is pretty tough to get into, and means just reading code without computer aid is effectively impossible.
In your languages, please don't allow packages to extend the interfaces/traits/types of other packages.
I sympathize with your point that being able to use simpler tools (such as grep) is often a good thing instead of having to rely on heavy-duty functionality (such as a full IDE/language server). That said:
- Ctrl+F in a file specifically isn't guaranteed to always return a hit with Go since a package can span multiple files, and you can have methods in other files.
- Increasingly many tools (like Sourcegraph and GitHub) make rich code navigation available on the web, without having to use an IDE or check out the code locally. Yes, it's not perfect for all languages, but it's improving every day. Similarly for documentation tools in different ecosystems -- I think both Haddock (Haskell) and rustdoc support cross-linking references to definitions.
- If you have textual code search for dependencies, you can still grep through the code with a regex like `func \(.* *?MyType\) myMethod`, and it will give you hits in other packages too.
I also agree that if I had full IDE-like power everywhere, that'd be great. And having those tools in more places is also great.
Ctrl-f being limited to a single file is a downside, but it degrades gracefully: you may have to ctrl-f across 5 other files in the folder, or up to maybe 50 files for a huge package. And from there, it's narrowable based on things like file names.
However, allowing any package to modify any other package causes that number to explode well past what is ever humanly tractable. It goes from "inconvenient" to "impossible".
Language designers, please design your language so we can read it with our mere human eyes.
> doing this is the number-one way to casually make code totally unreadable.
Absolutely, my immediate thought as well. I believe my experience reading and writing Go has been more pleasant than in other languages because people writing Go seem to shy away from hidden behavior, and I think rigidity like this helps.
I "love" computer language criticisms, even though I don't believe neither that the perfect language can exist nor that a language can fit all the needs.
Anyhow, here I didn't get what's wrong with some of the "issues". For example: you can't take the address of a literal, but you can take the address of a variable. This isn't surprising. And about arrays/slices/maps, append and make: it seems fairly logical to me if you don't disregard what they are. (Assuming I've understood the issue correctly; I'm not sure about the exact point.)
I didn't mention it because I didn't run into date handling code myself over the past 6 months. I've focused on the positives and negatives that I've run into in practice.
Yeah, it took me so long the first time I came across it, I kept thinking I was missing something as it made no sense. But now that I understand it, it's fine.
> Go has a convention that doc comments must begin with the name of the entity they describe.
Does anyone know the rationale for this one? My only guess is that the other common style, where the doc comment starts with a verb, allows for variation in how that opening verb is conjugated, e.g. "Decompress the tarball" versus "Decompresses the tarball". Maybe the Go team figured they'd eliminate that ambiguity by establishing a convention that the doc comment always starts with a complete sentence with an explicit subject.
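For anyone who hasn't seen it, the convention looks roughly like this (a sketch; the Decompress function is hypothetical, borrowed from the conjugation example above):

package tarball

import "io"

// Decompress reads a gzip-compressed tarball from r and unpacks it into dir.
// (Go style: the doc comment starts with the name being declared, which keeps
// `go doc` output and godoc summaries grep-friendly. The verb-first alternative
// would instead begin "Decompresses the tarball ...".)
func Decompress(r io.Reader, dir string) error {
	// hypothetical illustration only; no real implementation here
	return nil
}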
I think this very specific detail is what summarizes, for me, the pain of using Go.
Everything is a "convention" but nothing is enforced in a way that would make it easy for developers. There is actually nothing preventing you from writing doc comments some other way, nothing to enforce err checks, nothing to stop you from putting interface{} everywhere, from assigning nil to a pointer, etc...
By design the language is actually super loose, and that heavily contradicts the stated goal of the language itself. When they say it was designed for juniors/quick onboarding I have nervous laughs; I've never seen so many coding mistakes from senior devs as in a Go codebase.
You only follow standards if you use golangci-lint, which is not even an official tool.
If you think Go is bad, I hope you never have to work on a project with eslint - its default configuration is pure OCD (complains about max line lengths, spacing before/after operators, using "++" etc. etc.), and 90% of the things I find most annoying could be fixed by using gofmt in Go - except of course there is no built-in formatting tool for the JS ecosystem...
> Doc comments work best as complete sentences, which allow a wide variety of automated presentations. The first sentence should be a one-sentence summary that starts with the name being declared. [...] If every doc comment begins with the name of the item it describes, you can use the doc subcommand of the go tool and run the output through grep. Imagine you couldn't remember the name "Compile" but were looking for the parsing function for regular expressions, so you ran the command [...]
My guess is that it dates back to the beginning when there weren't good documentation search tools? Very simple search tools like "grep" will return useful results if the function name appears in the doc string.
For the substantive critique: it's also a bit weird because it's pretty clear what the reason for it is: it's a simplifying convention, like the capital-letters-to-export thing (which I'm partial to, since it's a convention I use in my C code). It seems reasonable to have preferences in the other direction, but (and I'm not trying to argue that you wrote it this way) not to suggest that it's somehow a flaw in the language.
Like every language, there are plenty of 'problems' with Go. But many of these seem like off-the-cuff complaints instead of thought-out criticisms. To choose an arbitrary example:
nil is sometimes equivalent to an empty collection but sometimes causes a runtime crash.
var a []int // initialized to nil
_ = append(a, 1) // OK
var m map[int]int // initialized to nil
m[0] = 0 // panic: assignment to entry in nil map
It’s not super clear to me whether there is some systematic rule governing when nil causes a crash when using a built-in operation. Are crashes specific to write contexts and non-crashes specific to read-only contexts? I don’t know.
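(As an aside on that question: for the built-in map operations, the split does seem to fall along read/write lines; a quick sketch, easy to verify in the playground:)

package main

import "fmt"

func main() {
	var m map[int]int // nil map

	fmt.Println(m[0])   // reading a nil map yields the zero value: 0
	_, ok := m[0]       // the comma-ok form works too
	fmt.Println(ok)     // false
	delete(m, 0)        // delete on a nil map is a no-op
	fmt.Println(len(m)) // 0

	m[0] = 1 // only the write panics: "assignment to entry in nil map"
}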
The author is comparing two different operations. The fact that they're operating on nil variables is irrelevant.
The equivalent slice operation is not the builtin function "append", but rather assignment: a[0] = 1. This will panic in the same manner as m[0] = 0 (albeit with a different panic message)
A map-based operation that's equivalent to "append" would be a (hypothetical) builtin function like "func assign(m map[K]V, key K, value V) map[K]V".
The language could define that map assignments auto-vivify the map when necessary. But that would cause each map to incur an additional allocation, even if it's never assigned to. And then for consistency you'd need to apply the same thing to slices.
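With Go 1.18 generics, that hypothetical builtin can at least be sketched in ordinary code (this is not a real stdlib function, just an illustration of the shape it would have):

package main

import "fmt"

// assign is a hypothetical map analogue of the append builtin: it returns the
// (possibly newly allocated) map so the caller can rebind it, just as append
// returns the (possibly reallocated) slice.
func assign[K comparable, V any](m map[K]V, key K, value V) map[K]V {
	if m == nil {
		m = make(map[K]V, 1)
	}
	m[key] = value
	return m
}

func main() {
	var m map[int]int    // nil, like "var a []int"
	m = assign(m, 0, 42) // works on a nil map, like append on a nil slice
	fmt.Println(m)       // map[0:42]
}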
>A map-based operation that's equivalent to "append" would be a (hypothetical) builtin function like "func assign(m map[K]V, key K, value V) map[K]V".
But why isn't there an `assign` function? I've been writing Go for a long time, and from a design point of view, I really don't like append. It's magic, and as the OP has shown, it trips up new programmers. If Go were consistent, there shouldn't be an append; but instead we got "generics for this one function". The problem isn't really `nil` acting confusingly (in this case), it's `append` acting like a function when it's really a magic compiler directive.
I personally think this criticism is a problem; zero-values, especially nil ones, can be a major footgun. I've had a couple experiences where a production service crashed because someone accidentally tried to use a nil interface even though they had already nil-checked it.
Append is the opposite of magic: it's exposing the details of a realloc, when 99% of the time you want to keep the same binding. That's probably also why map doesn't have an equivalent; in the case of hash tables it's more like 99.999%.
You cannot write the append function in pre-Go 1.18. That is what I mean by magic. The reason it's given to you is that it would be painful to build a function like append for every slice type. That's also why it gets to have blessed nil properties.
Well, you could, using unsafe. But you also couldn't write the index operator (without unsafe), and I don't think anyone would call that "magic". And in either case that has nothing to do with its treatment of nil.
It would be more magic if it somehow updated a hidden pointer, like maps do. (And this has also been my experience teaching Go, people grasp quickly that slice length/cap is immutable but often forget that non-nil maps are mutable and always "by reference".)
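A small sketch of that teaching point (the function names are made up):

package main

import "fmt"

func addKey(m map[string]int) {
	m["x"] = 1 // writes through a map are visible to the caller
}

func appendVal(s []int) {
	s = append(s, 1) // rebinds only this local copy of the slice header
}

func main() {
	m := map[string]int{}
	addKey(m)
	fmt.Println(m) // map[x:1]

	s := make([]int, 0, 4)
	appendVal(s)
	fmt.Println(len(s)) // 0: the caller's length never changed
}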
Some complaints also seem like a lack of experience using the language. For example
Initialization with make behaves differently for maps and slices:
m := make(map[int]int, 10) // capacity = 10, length = 0
a := make([]int, 10) // capacity = 10, length = 10 (zero initialization)
b := make([]int, 0, 10) // capacity = 10, length = 0
So not only is there an inconsistency, the more common operation has a longer spelling.
In my experience, the two-value (explicit capacity) form of "make" is significantly _less_ common than the single-value form. Indeed, grepping through the stdlib shows "make([]T, n)" is much more common than "make([]T, n, m)".
I agree that appending to "make([]T, n)" is not an uncommon mistake. But, in general, you can avoid that problem by assigning to specific indices instead of using append.
I think the place I regularly use "make([]T, 0, n)" is when I'm collecting items from a map:
s := make([]K, 0, len(m))
for k := range m {
    s = append(s, k)
}
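And the append-to-`make([]T, n)` mistake mentioned above, with the index-assignment way around it, looks roughly like this:

package main

import "fmt"

func main() {
	// The mistake: the slice already has length 3, so the appended
	// values land after three zero values.
	a := make([]int, 3)
	a = append(a, 1, 2, 3)
	fmt.Println(a) // [0 0 0 1 2 3]

	// Assigning to specific indices avoids it.
	b := make([]int, 3)
	for i := range b {
		b[i] = i + 1
	}
	fmt.Println(b) // [1 2 3]
}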
> Some complaints also seem like a lack of experience using the language.
> In my experience, the two-value (explicit capacity) form of "make" is significantly _less_ common than the single-value form. Indeed, grepping through the stdlib shows "make([]T, n)" is much more common than "make([]T, n, m)".
I've written a fair bit of C++, where this pattern is very common. IME in 95%+ of the cases, what one wants is a vector with a capacity without initializing it, because it will be filled up right away.
I'd argue that make([]T, n) is more common in actual Go code precisely because it has the shorter spelling, not because it has the exact desired semantics.
I was curious what it looked like where I work, because I mostly encounter `make([]T, 0, n)` in codebases I've touched, so I did a quick grep...
And the results were roughly 6,000 cases of `make([]T, n)` vs 5,000 cases of `make([]T, 0, n)`, ignoring most generated files (afaict), allowing basically anything but `,` for `n`, and requiring `...)$` for regex simplicity. I didn't read all the results, but the couple hundred I did check in both looked reasonable, so it's probably not too inaccurate.
I'm not sure how representative that is of go code in general, but I think I can be reasonably confident in claiming that neither is a consistent preference.
So, my understanding is that: 1. Unless otherwise initialized, types have a zero value in Go. For pointer types that zero value is nil. 2. It's encouraged for this to be meaningful. That's opposed to, say, Java, Python, or C++, where invoking a method on a null object is a runtime error (or undefined behavior). So, for example, appending to a 'nil' slice is the rough equivalent of trying to add to a null List in Java.
I don't see how the go approach makes anything simpler.
Value semantics are simply that the value of an object is all that matters. For example, two int objects '5' are the same from a value-semantic perspective. I guess this implies all objects have some value. This requirement can be satisfied by requiring they be explicitly initialized.
Equatability is a requirement of objects in java. Not all types of objects are equatable beyond their identity (some unique identifier) though. For example, how do you equate two functions? To me this parallels the decision in go to make zero values a thing beyond a runtime error.
> I don't see how the go approach makes anything simpler.
The "append" builtin doesn't "invoke" anything on the nil pointer. It's more or less (in Java-ish pseudocode):
static Slice<T> append(Slice<T> s, T... items) {
    if (s == null) {
        Slice<T> newSlice = new Slice<T>(items.size())
        newSlice.pushBack(items)
        return newSlice
    }
    if (s.hasEnoughCapacityFor(items.size())) {
        s.pushBack(items)
        return s
    }
    Slice<T> newSlice = new Slice<T>(s.size() + items.size())
    newSlice.pushBack(s)
    newSlice.pushBack(items)
    return newSlice
}
which is a perfectly reasonable utility function in pretty much any language. See, e.g., realloc(3).
> Value semantics are simply that the value of an object is all that matters. For example, two int objects '5' are the same from a value-semantic perspective. I guess this implies all objects have some value.
>
> Equatability is a requirement of objects in java. Not all types of objects are equatable beyond their identity (some unique identifier) though. For example, how do you equate two functions? To me this parallels the decision in go to make zero values a thing beyond a runtime error.
In Go, having a "meaningful" zero value for a type means that you don't need to initialize the type before using it. For example:
var b bytes.Buffer
b.WriteString("hello, world")
instead of
b := bytes.NewBuffer(nil)
b.WriteString("hello, world")
Or, as another example:
type BinarySearchTree struct { ... }
func (b *BinarySearchTree) Contains(key string) bool {
    if b == nil {
        return false
    }
    [...]
}
Go makes this possible by not requiring constructors like other languages do (e.g., Java).
You can invoke a method on a nil object in Go because it handles methods as functions that take an extra first parameter (the receiver). Take a look here: https://go.dev/play/p/2SRU26mvfnL
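Roughly what that playground link is getting at (a sketch; the type here is made up):

package main

import "fmt"

type tree struct{ left, right *tree }

// size is safe to call on a nil receiver, because a method is essentially a
// function whose first parameter is the receiver.
func (t *tree) size() int {
	if t == nil {
		return 0
	}
	return 1 + t.left.size() + t.right.size()
}

func main() {
	var t *tree                  // nil
	fmt.Println(t.size())        // 0: method call on a nil receiver is fine
	fmt.Println((*tree).size(t)) // 0: the same call written as a plain function
}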
> This seems even more strange; instead of giving a “missing type in composite literal” error, it gives a syntax error.
That's a syntactic limitation (wilful I assume): `{1, 2}` is not a valid expression in general, however it is specifically allowed as an item within an existing composite literal:
The "LiteralValue" item occurs only as a sub-item of the CompositeLit rule, which means it's not valid as a function parameter (or as a value to set on a variable, or as a return value).
If you look at the GopherCon talk I gave (linked at the beginning of the post), it is about reading the spec. So yes, I did read the spec, and I realize that this is not syntactically valid (it would be a very basic compiler bug if this were valid syntax and it was diagnosed as a syntax error).
However, the spec only states what _is_ but not _why_ it is that way. Sure, I could look at the git blame of the spec for every odd thing I run into, but there is only so much time in the day...
> However, the spec only states what _is_ but not _why_ it is that way. Sure, I could look at the git blame of the spec for every odd thing I run into, but there is only so much time in the day...
Parsing simplicity and lack of ambiguity (and thus speed) is a pretty obvious reason: if you allow `{}` to be an expression, then you have to look ahead any time you encounter an unprefixed `{` to try and figure out whether it's an expression or a block-opening brace. Or you have to make your grammar into an unholy mess such that {} is an Expression but not an ExpressionStmt.
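The same ambiguity shows up where Go does allow a composite literal near an unprefixed brace: in an `if` (or `for`/`switch`) header, the spec requires parentheses around the literal so its `{` isn't read as the statement's block:

package main

import "fmt"

type point struct{ x, y int }

func main() {
	p := point{1, 2}

	// Without the parentheses, the literal's "{" would be parsed as the
	// opening brace of the if-block.
	if p == (point{1, 2}) {
		fmt.Println("equal")
	}
}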
It's in the May 2022 edition of ACM, so it is at least very, very new. New enough that the number of Go users and criticizers who have read it are currently a rounding error away from zero.
Man 10 seconds for a build and it’s already running? It’s like a 10-14 minute build on a high end i9 for me and then 2-4 minutes to flash the new binary on the hardware.
It is hard for me to believe that a 30 second to 10 second build time speed up would be noticeable in a day to day workflow. But maybe I’m just envious.
>The loop iteration variable is reused across iterations, so capturing it by reference (the default for closures) is likely to lead to bugs.
>defer inside a block executes not at the end of the block, but at the end of the enclosing function.
>defer evaluates sub-expressions eagerly.
Whenever I see this kind of complaint, I can't shake the feeling that the author struggles to distinguish between "bad design" and "learning how things work". There are many fair criticisms to levy against Go, but "it didn't behave as I first expected" is hardly one of them. If it were, there would be lots more to complain about in the languages that are routinely touted as superior to Go, namely: Rust and Haskell.
I disagree. Counterintuitive design is bad design, and leads to subtle bugs getting introduced.
This is especially damning in a language where so many decisions are justified by a desire for simplicity and ease of understanding. You throw that all out the window when you make design decisions that are opposite to most people's expectation.
I get what you're saying, but taken to an extreme, every language would be the same (or, we'd only have one language). After you get used to a subjective alternative it becomes the new normal.
Go keeps its core feature set very minimal by design. The end result is a language that is almost as fast as C++ and almost as productive to write as Python. If you care about performance but want to work with a small engineering team, it doesn't get better.
Sure you could use Rust to be a tiny bit faster in some use cases. But it's also less productive to work with for your team. You could use Python, it's a bit more productive but performance is 40x slower.
The value of Go is the balance it strikes.
That being said, yes, a good enum type in the language, ideally along the lines of Kotlin's approach, would be welcome.
"Almost as fast as C++" claims about go tend to come from people who don't write a lot of C++. The perf gap is big, even though it's not as big as the perf gap with Javascript. That said, I don't think anything but Rust does any better than Go on getting the balance right.
Yes, very negative. I can't stand how miserable it can be to fork a library and in general to ensure I'm importing the correct version of the thing I need to import when it's not in github.
There are other compensations - faster than python but still has decent regexps. Good crypto libraries. Channels (make life easy) and massively scalable goroutines. ...trivial cross compilation ... and many other things.
I still find python much easier but, for example, I have very little reason to bother with C++ or Java now.
People address some of these issues piecemeal with linters and code generators, but maybe there's an opportunity for someone to create a Go++ language that accepts Go syntax but also extends it with sum types, nullability annotations, no default values, checked error handling, operator overloading for collections, etc. I remember people were excited (and dubious) about the Go-like language V.
My experience with Go is that it is pretty low level compared to languages like Ruby, but higher level than something like C++.
I'd be curious to hear what people build with it at work, is it mostly microservices or are there also monoliths around? Is it more APIs or background workers?
What do you think would be an interesting project I could build in my own time that would show me where Go shines?
My experience with cgo has been less than stellar. I built a simple Fiber API with 2 Golang libs (gosweph and swephgo) which interface with the Swiss Ephemeris C library. Both libraries segfaulted when the API was loaded with more than a couple of concurrent connections via wrk or k6. I'm no C expert but the errors pointed to cgo as the culprit.
Take note everyone, this is how you do well-argued criticism of a programming language or a technology more generally. Much more polite and better written than what we recently saw here at HN re: the same topic.
I would be curious to know what he thought of Swift (having worked on the compiler). The language seems to be the opposite of Go in including every language feature under the sun.
> I would be curious to know what he thought of Swift (having worked on the compiler). The language seems to be the opposite of Go in including every language feature under the sun.
Are you asking from a language design perspective? Or from an implementation perspective?
From a language design perspective, yes, Swift has a lot of features. Most of these features exist for good reasons.
- First-class Objective-C interop: Needed for initial adoption and migration, since Apple's SDKs were all Objective-C. (See also: Kotlin and Java etc.)
- Protocols with associated types: Writing generic code with constraints makes surfacing type errors easier compared to templates. Associated types enable many natural patterns of programming.
- Library evolution: Being able to evolve APIs without breaking ABI is super important for a platform.
- Support for DSLs: Swift is used heavily for UI programming, and there is a convergence across languages in terms of having DSLs for making building UIs easier.
- Use of weak pointers (vs having a tracing GC for cycles): Better for Objective-C compatibility.
- Async/await + actors: Trying to balance usage of Dispatch (which is the platform API) with newer programming patterns, while still being able to compile in a way with low resource usage.
- Upcoming C++ interop: Many big iOS applications use large amounts of C++, so better interop would make Swift usage easier for them.
Does that mean I think every feature of Swift is perfect? No. For one thing, I think method overloading is way too flexible, which is what causes exponential time for type inference in a bunch of cases.
Java[0] and C#'s foreaches, Rust I think[1], Javascript's `for...of` when you use `let` or `const`. Probably a bunch of others, this is just off the top of my head (edit: just checked, Swift as well)
And obviously languages which do away with "imperative" iteration entirely e.g. erlang, haskell, clojure, ...
And it should be noted that this is more problematic in Go than in most, because of goroutines (if you create goroutines using closures in a loop you're hitting this issue; a sketch of the Go version follows the footnotes below). Javascript was extremely hard hit by this (because of closure-based async stuff, and also because `var` is even worse) for similar reasons, which is what led to `let` and `const` having so much better scoping.
Incidentally, I assume that's at least one of the reasons why the order of evaluation of the `go` statement is so weird. And why you probably should not use anonymous functions to create goroutines.
[0] also Java doesn't allow closing over variables which are not effectively final, so the issue couldn't happen, if foreach variables were not effectively final the compiler would reject the closure
[1] though the borrow checker will usually tell you to get bent before you can even run the code
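The Go-side version of the pitfall and the usual workaround, under the loop-variable semantics current when this thread was written (later Go releases changed the scoping):

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup

	// Pitfall (old semantics): every closure captures the same loop
	// variable i, so most or all goroutines see its final value.
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println("shared:", i)
		}()
	}
	wg.Wait()

	// Workaround: pass the value in (or shadow it with "i := i").
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			fmt.Println("copied:", i)
		}(i)
	}
	wg.Wait()
}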
> It’s not clear if a value that is passed via pointer is intended to be mutated or not. (const-ness / mutability) (maybe it’s passed via pointer just because the size of the struct is large?)
The dilemma is even richer than that -- what if the function call is inlined, with the struct known to be instantiated in the caller? If so, size of the struct doesn't matter. In C, inlining can be forced, so passing a struct by value can be assured to be economical. The programmer has no such option in Go.
Ha. This is quite true. I wish there were an (obvious) way to build languages on top of Go's runtime, in much the same way as Clojure/Scala/etc run on the JVM.
> Errors on unused variables are annoying as well.
> Not everything that should be done needs to be done right here right now.
Hence the ability to suppress it with `_ = unused_var`! The point is that it's explicit. You have an out; it's not the pretty out you'd like, but that's the whole point. It's unpretty and annoying in order to force you to think twice. Every design decision in Go has been the result of consensus among multiple people who've been bitten many times by the issue the design is addressing (in case that wasn't obvious). Have a read of the issues, drafts, and proposals on the Go issue tracker to get a feel for what it takes to get something in.
> Imagine trying to learn a musical instrument and being berated at every time you play the wrong note. That’s not a way to teach; it’s a way of asserting dominance.
WTF!
> No sum types with exhaustive pattern matching
> I tried a search for a code pattern often seen due to the lack of exhaustive pattern-matching in Go
> That’s 38.7k hits in the source code across GitHub etc. as of Apr 29 2022.
Great. Write it up in an issue and with such an overwhelming evidence, it's a good candidate to make it to the language in the future. Why isn't it there to begin with? See above.
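I don't know exactly which pattern the OP grepped for, but a common stand-in for exhaustiveness checking looks something like this (a hedged guess):

package main

import "fmt"

type state int

const (
	idle state = iota
	running
	done
)

func describe(s state) string {
	switch s {
	case idle:
		return "idle"
	case running:
		return "running"
	case done:
		return "done"
	default:
		// No compiler help if a new state is added later; this defensive
		// branch is the usual workaround for the missing exhaustiveness check.
		panic(fmt.Sprintf("unhandled state: %d", s))
	}
}

func main() {
	fmt.Println(describe(running))
}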
> No overloading for common operations
For many, that's a huge positive. When reading the code, it's much easier to reason about the extent of side effects for-loops have.
> Yes, one can use map[T]struct{}, but that feels needlessly cumbersome.
Now you're just trying too hard.
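For readers who haven't seen it, the idiom being discussed (cumbersome or not) is just:

package main

import "fmt"

func main() {
	// A set of strings: struct{} values occupy no space beyond the keys.
	seen := make(map[string]struct{})

	seen["a"] = struct{}{} // insert
	_, ok := seen["a"]     // membership test
	delete(seen, "a")      // remove

	fmt.Println(ok, len(seen)) // true 0
}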
> Go has a convention that doc comments must begin with the name of the entity they describe.
Rob Pike, and docs, and blog posts talk about why this is the convention. Read up.
> Limited markup support in godoc
Thank goodness.
You have a Rust/Swift programmer (judging from the dozens of mentions of each language) trying to beat Go into a shape they're familiar with and not enjoying it. That's not the right way to learn and use a language, just as you don't go learning Japanese expecting it to follow the grammar rules of English. And especially in the case of Go, bits are added pragmatically and after long and careful deliberation, not as a race to have as many features, bells, and whistles as every other hot-lang out there.
An experience report that keeps using languages X and Y as a benchmark for what language Z should be like only tells us how language Z differs, not whether it's better or worse. So labeling the differences positive and negative is of little value in this case.
Look, it sucks when programming languages get condescendingly flamed, but the OP did nothing of the sort here. I, personally, don't find things like the map-as-set syntax cumbersome, but I can see the argument from the OP. And the OP also freely admits they have 6 months of experience with the language, enough to have the weird bits stick out, but not long enough to have internalized them. I don't think all of these are "trying to beat Go into a shape they're familiar with"; I think they're legitimate complaints about the language. I don't find them problems myself, but I think it's good to document issues and pain points so that future language developments/languages can incorporate this feedback.
Had the same thought, reading the strings package. Why’d they mention letters? Were Ken Thompson and Rob Pike even aware of the concept of Unicode code points?
Oh my God, as an average moron who decided to pin his professional growth in my current company on learning Go and Rust, these threads are paying dividends that I could not have imagined in my wildest dreams!
At this level I was more annoyed about what goes into the standard library and what doesn't. Very flawed. Scoping was mentioned with defer, but it can be confusing in other cases too.
I think if Go's progression had been the same as we have seen with JavaScript, C#, or even Java, then Go would have been a relevant general-purpose language today instead of a safe C alternative in niche areas.
Go is a tool. I haven't seen a language get as much directed criticism on HN in a long time. I appreciate genuine feedback, but only when it's useful and direct. I can't remember a time when Java got ripped on HN this hard. Distancing oneself from the religion of languages is the best thing I've done thus far. I "hope" the merits of a language justify the direction of software, rather than the hype.
Java is like 15 years older than Go. Go had the opportunity to learn from Java. And in some ways it’s better, fast compile times, simple to deploy binaries, etc.
But the language itself was a step backward from Java in some ways, and only recently has gained some of those features.
I don’t think it’s wrong to say that Go in its desire to be simple left a lot to be desired for many people.
There is a better Java: it's Kotlin, and its embrace by the community has been lukewarm. I think golang serves an interesting purpose, and it was always meant to be a better C, not a better Java.
I’ve always seen Go as far more like Java than C. They serve similar use cases. Go isn’t a good C replacement for mostly the same reasons Java is not.
People are adopting Kotlin, I’ve seen many coworkers using it for new projects. It becomes a bit more complex if you’re trying to integrate it into an existing large code base, of which there are many in Java.
Actually it has been heavily pushed by the Android team, which knowingly stagnated Android Java, with the agenda of replacing its use outside the Android system libraries.
Even Android 13's update to a newer Java 11 LTS subset seems to be driven more by not losing the ability to use specific Java libraries than by anything else.
A weird special-pleading argument, as Java is intensively used in our industry, including for new projects, and language features are engineering decisions, not moral lessons.
Of course, people have to come up with special pleading arguments about Go, because most of the complaints (and all the most pointed complaints) are shared by the other most popular languages in industry. You can't get any traction with a takedown of Python, only with Go or Rust. So these kinds of complaints are invariably pretzled up into some form of "Go is a terrible language for the good language it is compared to most other languages; it may be better in many ways than its predecessors, but it had no business not being even more better". Well, peachy.
Don't have a programming language as part of your identity. It takes you weird places.
I’m not sure what’s more predictable: the go-hater-is-my-identity posters or the defensive go-coder-is-my-identity posters responding to the OP line by line. (Or perhaps the smug above-it-all posters such as myself).
As a likely certifiable go fanboy, I like the article and see a lot of stuff I agree with. Still enjoy programming in Go. Will error handling ever be less verbose and repetitive? Not sure. If not, it’s not a dealbreaker for me. And I do acknowledge that the fact that all the error handling is right there in my face makes it easy to reason about. Pros and cons, folks.
I thought this article was a cut above a lot of the Go criticism I've read. Language critiques are good! Where we tend to get in trouble is language ordering.
Hey, can you please rein back this sort of post on HN? I totally get, and appreciate, your passion (and deep knowledge) of programming language spaces–I've observed it for years and have learned quite a bit from your posts. But it's really important to restrain the passion from crossing into destructive places like this. Otherwise we just end up with angry flamewars where no one is learning anything from anybody. The whole idea on HN is to try to avoid those end-states and stay in more interesting places.
I am not going to convince anyone; I am not a missionary doing conversions, especially among a group that worships statements like:
"The key point here is that our programmers are Googlers, they're not researchers. They're typically fairly young, fresh out of school. Probably learned Java, maybe learned C or C++, probably learned Python. They're not capable of understanding a brilliant language. But we want to be able to use them to build good software. And so the language we give them needs to be easy for them to understand and easy to adopt."
You're definitely not going to convince anyone if the only thing you can inject into a discussion is contempt. The "Smug Lisp Weenies" tried that course; we're not all using Lisp. In fact: their heyday was the great flourishing of the least-Lisp-like languages.
Did you even read the article? Also, it was written by an individual about their experience with the language. The article merits a read, or at least a skim.
I cannot speak for others, but I do not see a cabal of people conspiring to bring down Golang on HN. Yes, esoteric languages get love and practical languages like C++/Go/Java et al. get hate because people are more familiar with them and see their warts.
The more a language is criticized the more often it is used.
It appears that many of the issues I see in articles of this type are often RTFM-level issues, especially when calling out linter exceptions (which a quick scan of the source code of an individual linter would help to explain).
Some people still call it a toy language because they are mentally like 15 years behind and think it's unreadable nested jQuery event-emitter callbacks and Object.prototype overloading. The rest of us pushed for / picked up on the rapid improvements in tooling, spec, and best practices, causing "JavaScript fatigue", though that has slowed.
This is meta, but in most fields 6 months of experience is not considered significant.
"Things move so fast"? Not really. We are doing mostly the same as always on the front end: put dots on the screen at the right time, right shape, colors, etc. With over 50 years going at it, you would think this was a trivial effort that was well understood and solved decades ago, especially after hardware converged to the rather uninspiring selection left.
(This might actually get better now, with many companies getting in on designing CPUs, GPUs, and so on; I have some hope for a return to competition on those fronts.)
"Changing pervasively used structures in C++ or Rust can cause recompilation that takes over a minute. That is enough time to get distracted by something else, like Slack."
That is...absurd, to be blunt, unless one has a medical condition (e.g. ADHD). It takes a neurotypical person almost no effort to remain attentive to the task started during a pause in activity of a few minutes. If a person without such a condition is getting distracted in a few minutes' compile cycle, it's a flaw of theirs that they need to work on.
Of all the things to point out about other languages, "over a minute" compile times is not a good one. Now, tens of minutes to upwards of an hour (some C++ code bases I've had the pleasure of working in) or more? That's an issue.