Backward Compatibility, Go 1.21, and Go 2 (go.dev)
386 points by philosopher1234 on Aug 14, 2023 | 300 comments



The question for compatibility is not whether but how. And it's not really backwards: you want your code to just work, from now on.

Go 1.21 offers two essential features not matched by any other language ecosystem:

1. A GODEBUG setting for each change, together with per-change opt-out and per-change metric for detecting use of the prior implementation.

2. A per-module toolchain version, together with automatic fetching of both older and newer Go toolchains (deployed safely as modules).

As an amazing bonus, if you specify a given version of Go (e.g., 1.21.2), then when running under a newer toolchain, Go will automatically apply the relevant opt-out configuration so you won't get the new behavior until you ask for it (see the sketch below).

Finally, as ever, you can declare things in code, in go.mod, and in the environment.

That basically covers all the use-cases for compatibility, from developers through deployers.
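To make the GODEBUG piece concrete, a small sketch (panicnil is a real Go 1.21 setting; Go 1.21 changed panic(nil) to panic with a runtime.PanicNilError, and the old behavior stays reachable):

    // At the top of a main-package source file, before the package clause:
    // opt back into the pre-1.21 panic(nil) behavior.
    //go:debug panicnil=1
    package main

The same opt-out works at run time with GODEBUG=panicnil=1 in the environment, and each setting gets a runtime/metrics counter so you can detect whether the old behavior is still being exercised.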

Simple, and beautiful.


Perl does something similar:

    use v5.24

at the start of a file makes it behave as if the Perl version were 5.24, and it works on a per-file basis, not just per module.


That's exactly what I thought of when reading the description.


Even more fine-grained, https://perldoc.perl.org/functions/use#use-VERSION

> ... Later use of use VERSION will override all behavior of a previous use VERSION,

> possibly removing the strict, warnings, and feature added by it ...


Apologies, but this actually sounds like a nightmare to me if you have 1-10m+ LoC spread over enough teams, "enough" probably being 5+, depending on org boundaries.

"Either you support the new version or you don't" is already more than enough complexity to cause arbitrarily ridiculous problems, because there is a deep valley where you /think/ you support the new version and the new version thinks that it supports you, but you both miss.

(who wants the t-shirt saying that they have been responsible for a 1m+ code base on top of 10m+ for at least 20 years?)


If Team A requires compatibility with go 1.21, their go.mod will start with "go 1.21". Even if the code is compiled with go 1.22, their code will run unchanged, as the toolchain now treats the go version line as the toolchain to be compatible with. Similarly, if Team B requires compatibility with go 1.22 but their code is being compiled with go 1.21, go will download the 1.22 toolchain and run with that instead. It's sneaky and crazy, but so crazy it just might work.

(As a user of CircleCI convenience images for a few tests suites, I appreciate this feature. When there is some security vulnerability that requires updating to go 1.21.1, I don't have to wait for Circle to build a new convenience image. I can just change go.mod and start using 1.21.1 immediately. This saves a day of telling people to ignore govulncheck.)

The TL;DR is that the compiler version is now something you can declare in your go.mod file like any other dependency.
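A hypothetical go.mod spelling out both declarations (module path made up):

    module example.com/teama/service // hypothetical module path

    go 1.21            // language/library version to stay compatible with
    toolchain go1.21.1 // run at least this toolchain, fetching it if needed

From Go 1.21 on, a go command older than the requested toolchain downloads and re-execs it; a newer one keeps the module's declared compatibility behaviors.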

If you share one go.mod file across all teams or have a One Version Policy, then there will always be work to do. No doubt there are several employees dedicated (in practice) to managing "there is a critical security release for github.com/whatever/frob@1.2.3 but Team A's tests fail when updating to github.com/whatever/frob@1.2.4", which is inevitable at this scale.


It's not downgrading (unless that changed recently), it's just new emulating old. And only partially at that.

E.g. `go fmt` with a new Go will use new-Go's formatting, not the module version's formatting (comment format thrashing is fun!). And then they special-case backwards compatibility stuff like the `//go:build` syntax change, and that behavior pays attention to the module version. API accessibility and module file formatting follow the module version; I don't believe `go vet` does (in general, nor do I think it necessarily should), the compiled implementation of the stdlib absolutely does not, etc.
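For reference, the special-cased constraint syntax: these two lines express the same build constraint, and during the transition gofmt kept them in sync, with the new form introduced in Go 1.17:

    //go:build linux && amd64
    // +build linux,amd64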

Rust (cargo) by contrast actually does version the tools, and automatically pulls the stated rustc, stdlib, docs, everything (set in rust-toolchain.toml, among others: https://rust-lang.github.io/rustup/overrides.html). I'm not sure if cargo versions itself or not.


I believe it's emulated.

There's a tradeoff: the old toolchain may actually build code that is vulnerable to some security vulnerability, while the new toolchain will (in theory) produce code with the same input/output behavior but without the vulnerability. So it's not clear that either direction is a clear win; old is "known", but old can be dangerous. And you can only write so many tests to ensure that the emulation is as good as the original. Which direction you are less paranoid about dictates the direction you'll go.

Having used Go since ~1.3 and having been responsible for the version in use for my project since about ~1.9, I'd say that on average I upgrade on release day and it has never caused a regression. But, the article mentions a handful of bugs that have occurred due to fixing library bugs, so they aren't nonexistent. How much risk you want to take here is up to you.


Yeah, mostly I think emulating is the best approach, and Go seems to strike a pretty good maintenance-load / compatibility tradeoff. Upgrading lints / optimizations / bugfixes is generally preferred. Rust's "true versioning" approach makes more sense for a language without a stable ABI and more backwards-incompatible changes in recent times.

Besides, if you want true versioning, there's always gimme. Gimme's easy, and easy to use with folder-env managers (like direnv).


That's how Rust editions work too. You can have a Rust 2021 crate depend on a Rust 2015 crate which depends on a Rust 2018 crate, it will all be compiled by the latest compiler even though they are written in slightly different languages (different syntax and in some cases different desugaring)

So that's how Rust can make language changes without splitting the ecosystem and without requiring everyone to migrate all at once

To think about it, Java is like this too


This is definitely present in other languages. Mainstream ones. Haskell takes it to the next level where individual files can turn on and off language features.


> essential features not matched by any other language ecosystem

> proceeds to list features abundantly common in other languages

Least enthusiastic Go user.


Love this. Nothing better than coming to a go codebase and bumping the go version knowing everything will work fine.

One thing I worry about is that the type system can't be improved significantly without breaking changes in the "this was wrong and won't compile now" sense. Although I'm not sure there's interest in this stuff on the golang team at all. But there are many low-hanging type system improvements in Go that would be major compile-time robustness wins for folks, with no language additions.

1) reporting unchecked nils; might require formalizing [T, nil] | [nil, err] so err != nil guarantees T (obviously concurrent pointer access makes it tricky, but that's special code with special considerations, not the 99% case)

2) unchecked array access

3) inferring types of nested struct literals. Writing nested gRPC calls is such a damn pain in Go, and this is literally only one layer of type inference... it's in the damn function signature I'm calling! (See the sketch below.)

4) Exhaustive enum matches

But now I'm curious: is there any interest in these changes in the golang community? None of them add "features" (like, say, generics), but they would be huge robustness wins and seem pretty easy.
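To make (3) concrete, a sketch with made-up stand-ins for generated gRPC types:

    // Hypothetical types standing in for protoc-generated code.
    type WidgetSpec struct {
        Name string
        Size int
    }
    type CreateWidgetRequest struct {
        Spec *WidgetSpec
    }

    func CreateWidget(req *CreateWidgetRequest) { /* ... */ }

    func demo() {
        // Today: the inner type must be spelled out even though the
        // signature fully determines it.
        CreateWidget(&CreateWidgetRequest{Spec: &WidgetSpec{Name: "w1", Size: 3}})

        // The wish (hypothetical syntax, not valid Go):
        // CreateWidget(&{Spec: &{Name: "w1", Size: 3}})
    }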


Just to beat this drum from outside the Go community: Rust / Swift style enums with parameters are manna from heaven. So many programs get easier to write using them. If there's one way I'd love the Go type system to be improved, it would be by adding these simple, lovely algebraic data types.

Trust me, they would look great in Go. Treat yourself.


The irony is that Go actually does have sum types, but only for generics!

type A interface { Foo | Bar | Baz }

Can't use this anywhere but in generic type signatures though :(
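A runnable sketch of exactly that limitation (type names made up):

    package main

    import "fmt"

    type Circle struct{ R float64 }
    type Square struct{ Side float64 }

    // A union interface: legal, but only usable as a generic constraint.
    type Shape interface{ Circle | Square }

    func Area[S Shape](s S) float64 {
        switch v := any(s).(type) {
        case Circle:
            return 3.14159 * v.R * v.R
        case Square:
            return v.Side * v.Side
        }
        return 0 // unreachable: S is constrained to Circle or Square
    }

    func main() {
        fmt.Println(Area(Circle{R: 1}), Area(Square{Side: 2}))
        // var s Shape = Circle{R: 1} // compile error: interface contains type constraints
    }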


There's a proposal [0] by Ian Lance Taylor to extend the union-of-types construct from type parameter constraints to non-generic interfaces too. Indeed, there are some issues with the implementation details, but I'm hoping they get ironed out.

[0] https://github.com/golang/go/issues/57644


It's ridiculous to me that the most popular languages of the last 20+ years are the ones that are handicapped like this. They have `and` types (structs) but not `or` types (sums).

It sounds like a very basic logic tool that's needed for types.


I agree. If go had sum types I’d use it in a heartbeat!


Grumbles in F#…


Amen to that. I get super excited about Go releases because they often add useful stuff with zero breakage of anything (just free wins all around). It's not clear that any part of the language is so broken that backward compatible changes can't fix it. Even the few footguns, like loop variable capture, have proposals that mostly preserve compatibility.


Interesting, because I observe the opposite sentiment among .NET developers for features added in C#, even though its backwards compatibility is also fully preserved. They say "the language is getting bloated", "it becomes harder to learn". I totally share your perspective on this, but the difference in attitudes is noteworthy.


In the past ten years, here's a list of significant feature additions to the Go language itself:

     * Generics
And even that has not been the shocking revolution some people expected; it turns out to overall be a nice little addition rather than something that rewrote what good Go was. Nothing like the Python transition to "new style" classes, or the Python 2 -> 3 transition, or the Python async ecosystem transition, and that's just one language. For the most part a Go 1.4 programmer suddenly transported to today would not find it a terribly difficult task to read code using the generics.

And broadly speaking, the language didn't start with a lot of features either.

Go has had a lot of library improvements, it's had a lot of tooling improvements, but the language itself is extremely stable. I've seen a few people actually complain about Go moving too fast, which makes me wonder what exactly would make them happy if even Go is zooming along too quickly for them, because I honestly don't know of a language moving more slowly than Go, for all the advantages and all the disadvantages that incurs. I've honestly sort of wished I could sit down with them for 15 minutes and see if I could figure out what their real problem is, or if they really are upset that there was ever a major change.


The biggest problem has been the GOPATH → modules conversion, which broke lots of (or all?) tooling. For example, I had written a fairly nice program to analyse Go code and generate an OpenAPI document, but when modules came around quite a bit of that broke, and it's non-trivial to fix. It still works in "GOPATH mode", but few people use that these days.


I'm curious to learn what sort of issues you've seen when trying to use modules instead of GOPATH? Most of the migrations I've done have been pretty seamless, and the few that had issues were all easy to resolve with a few replace directives or similar.


Anyone who has written any tooling that works with Go code saw issues because almost everything changed in this regard. I was maintaining vim-go at the time, and quite a few changes were needed there too. I don't have a list of issues at hand because it's been a while, but generally speaking the only reason things went "pretty seamless" is because people spent time updating the tooling.


C# mostly suffers from having way too many options to do something. The quality of the average codebase written in 10 layer-abstraction-heavy OOP style that does not leverage the language features for writing concise and understandable code does not help either.

This is one of the reasons people rave about F# - it's less about the language and more about not suffering from really bad tradition that has settled over the years.

Don't get me wrong, the situation improves significantly each year, but the developers who require extensive explanatory work to get any semblance of buy-in to stop doing things the painful and cargo-cult-y way are still in the majority.

Otherwise, I think C# is very approachable thanks to it being an extremely forgiving language* in general and a standard library offering easy, pretty-good-defaults shortcuts for the basics like networking, file I/O, hashing, creating simple web servers and UI applications, etc. Both its CLI and IDE tooling are also top notch today.

* Forgiving as in, mistakes usually tend to decrease otherwise excellent (better than Go) performance rather than cause critical failures.


>The quality of the average codebase written in 10 layer-abstraction-heavy

This meme needs to die. I've seen a "10 layer abstraction heavy" code base only once.

The code was simulating hardware/firmware behaviour when the real implementation was not available, and when the real implementation was delivered, it was called instead. So it was quite reasonable that they went with it that way.

Normal apps are mostly MVC-like: a controller receives the HTTP request (like 5 LoC?), passes it to some handler which performs the business logic / calls the db, and the returned stuff is either HTML or JSON.

That's your average web app.


Well I guess you've just been very lucky.

I've seen a codebase where half the features were implemented by inheriting the controller class and adding some behaviour, and then that was wrapped in another layer adding more behaviour. It had about 5-6 layers of that. I guess it's not 10 layers, but it was still extremely shitty code that was very not fun to deal with.


> This meme needs to die. I've seen "10 layer abstraction heavy" code base only once

I don't think so. I like Go code because it's usually pretty "flat", unless you find someone from a Java or C# background. C# code is similar to Java in that nearly everyone writes it with an IDE, so you end up with nested folders like 8 levels deep. It makes reading the code stupidly hard unless you download it and load it into an IDE, or something like GitHub with essentially a web IDE; anyone used to just a normal editor like Vim or similar is basically out of luck.


While I agree that almost everyone uses IDEs when doing C#, I'm not sure about this folder thing.

I've been shocked many times when seeing Java repos that have like 5 nested, otherwise-empty folders just to hold 3 java files. I don't see that in the C# world.

>it makes reading the code stupidly hard, unless you download and load into an IDE or something like GitHub with essentially a web IDE. so anyone used to just a normal editor like Vim or similar is basically out of luck.

What does "normal editor" even mean?

Shouldn't "normal" be dictated by market share? so VS Code, Notepad++ according to SO Survey 22


> that they have like 5 empty folders nested just to have 3 java files. I don't see that in C# world.

This is because Java ties the package path to the filesystem, and C# does not tie namespaces to file paths. Typically in C# codebases you'll see the layers implemented as separate DLL projects.


That's not my experience, and I've been writing C# since .NET 1.1. Obviously though it depends on the scale of the app. If you have 1m+ LoC, you really need to have some sort of structure, regardless of the language.


The ten layer abstraction continues to exist in Java land. I see it every day at work.


You get a few layers for free by using Spring alone


> The quality of the average codebase written in 10 layer-abstraction-heavy OOP style

They did not learn from Perl, it seems.


Why do you think Perl is abstraction-heavy?


Perl itself isn't, but if you let lunatics loose on it for 10 years without adult supervision, you'll end up with people writing their own ORMs, their own object models, etc., and you have a fragile tower of abstractions that you can't even look at for fear of it imploding.


As opposed to Java, C++, ..?


Which are valid fears -- C# has become a language with an insanely big surface area, almost comparable to C++'s. This surface area will get rough, unforeseen edges when multiple different features are used together (no matter how generally good C#'s design is), so for the end-user dev the weight of learning it grows more than linearly.

Java is a common butt of jokes among C# devs, but in my opinion its addition of features to the language while committing to backwards compatibility is simply the best in the industry and should be copied by every language aspiring to a similar status (not research languages, obviously; they should be the ones experimenting) -- they seldom add new features, only those that have been proven by others, trying to kill multiple birds with a single stone in each case.

Also, .NET does have a dubious past regarding backwards compatibility, from what I've gathered; "its" frontend churn alone is remarkable, and the only thing more spectacular is their renamings.


> Java is a common butt of jokes among C# devs....

And those of us who are polyglot devs have positive and negative arguments that go both ways.


Indeed, both languages have plenty of positives and quite a few warts as well. While I do prefer Java personally, I consider C# an absolutely fine choice for almost every use case.

Go on the other hand, I'm much less forgiving about.


I don't see any language advantages of Go if you know C# or Java already.

On the other side, we use it for ops because a new hire, regardless of what language they know, can learn it in a week or two and produce not-shit code, without suffering any of the Python or JS problems.


C# has been getting some strange design decisions lately; see inline arrays and interceptors.

Yeah, they had plenty of hindsight, and still....


Why are they strange? Interceptors seem to enable migration away from reflection, for better support of AOT compilation.

For example: https://github.com/DapperLib/Dapper/issues/1909


Interceptors are a hack instead of a properly designed AOP framework, which Microsoft already has in Microsoft Fakes, although that comes with a Visual Studio Enterprise price tag.

Inline arrays are another hack, instead of a properly designed language grammar, as in languages like D or Swift.


C# interceptors are a metaprogramming feature. AOP is a higher-level concept. An implementation of AOP can use these metaprogramming features to achieve aspect weaving.


They are a badly implemented and clunky metaprogramming feature.

A sound AOP framework or macro system would be much saner.

Belongs to the same trash bucket as the C# 11 bang-bang operator, and I will vote for the same outcome, in all places asking for community feedback.


I lack experience to assess quality of C# interceptors, so that might be true. I'm just happy that C# ecosystem is slowly becoming less dependent on runtime reflection.


Which are both fine?


Which both suck.


>Java is a common butt of jokes among C# devs

Rarely are "jokes among X devs" for another language not a sign of ignorance and fanboyism.


I've seen this repeated a few times in online forums, but that's not my experience. The language itself has gotten bigger, but code has become much less bloated and more elegant. To me, it's easier to read new C# code than old.

I'm probably biased as I use C# a lot. But I think that with the advent of GPT tools, larger languages have become much less of an issue than they were in the past. They make it really easy to get an explanation of code or a feature you don't understand right there on the spot.


I haven't used C# in over a decade, but I was already feeling that way - there's just a lot in there. Go seems to start with a default no to things, so that might be why it has a relatively small surface area.


>I observe the opposite sentiment among .NET developers for features added in C# despite that its backwards compatibility is also fully preserved. They say “the language is getting bloated”,

I don't recognize this sentiment at all and I think it's almost non-existent among developers who use ReSharper or Rider that will both introduce useful new language features as a refactoring suggestion that is one shortcut away.

.Net developers can easily get away with not keeping up with new C# language features. A .Net developer who went into a coma in 2012 after having learned async/await and ASP.NET MVC can return to a hypermodern ASP.NET Core 7 codebase and be almost instantly productive.


Just to throw in my 2c, I've had significant breakages with most Go upgrades. I've had far fewer issues with rust upgrades, and gcc upgrades. C and Rust compiler updates feel actually backwards compatible to me.

I posted a list of a few breakages I've run into before, though of course I've run into more since, and far more I didn't mention there: https://news.ycombinator.com/item?id=29763324

I think the main reason I've had more issues with Go upgrades than with Rust upgrades or C upgrades is because for rust/gcc, I'm upgrading the compiler and getting language features, but most of the complex libraries don't change, and I can upgrade them separately from the compiler (i.e. hyper for http in rust, or libcurl in C).

For go, whenever I upgrade the compiler, I get a mix of language features (generics, embed, other toolchain features), but also a set of library updates (tls and security libraries yet again dropping support for something I use, http client or server changes, etc).

I think go would have been much better served by making 'http', 'tls', 'crypto', and other large chunks of the standard library their own separate libraries. Even if you kept the exact same go1 promise for those (which is to say "we sometimes break http and tls, but we tell you it's backwards compatible"), it would be far better because I could then actually fearlessly upgrade the go compiler version, and then later upgrade those libraries at my own pace. Or, when the tls upgrade breaks my code for the nth time, I could roll it back without also having to roll back the entire compiler version and revert any code changes which used new compiler features.

It also really makes no sense for libraries like 'http' and 'archive/*' to be part of the standard library. Presumably they're just there because the Go team wanted to release 1.0 before figuring out a good dependency system (though golang.org/x/ was a really quick follow-on, and that would have been enough for these), but that has led to the "go 1 compatibility promise" really meaning "enough go upgrades break my code that I'm scared to update, but not because the compiler has a breaking change, but because some part of the stdlib that doesn't even need to be bundled with the compiler breaks me".
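One partial mitigation that already exists: the golang.org/x modules version independently of the toolchain, so where an x package can stand in for a stdlib one (x/crypto, x/net), you can pin and upgrade it separately in go.mod (versions here are illustrative):

    require (
        golang.org/x/crypto v0.12.0
        golang.org/x/net v0.14.0
    )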


> It's not clear that any part of the language is so broken that backward compatible changes can't fix it

Broadly I agree but the future is kind of unknowable. For example Rust’s 2018 edition introduced the async keyword, a breaking change because you could have made a variable or function named “async” in the 2015 edition.

Async functions weren't something the first version of Rust had, and they were introduced with a breaking change. I like backwards compatibility but am unsure about a future where Go can't do innovation X because it would break compatibility.


FWIW async was only sort of a breaking change. The edition system meant that no crates would stop compiling, even with newer compilers, and you can mix crates of different editions in the same build product.


I'd like to see sum types and some better way to deal with error verbosity.


I'm a big fan of the assertion that a future Go 2 will never break Go 1 compatibility. I think if you need to make changes so significant to a language, you may as well just fork and rename the language (an opinion I can see many holes in).

I wonder: why not go further and say "there will never be a Go 2" in order to eliminate ambiguity about this? If a theoretical Go 2 will run all Go 1 programs, what would make it different from some Go 1.xx release? Some might interpret this post as saying that, but I don't think it quite does. It says "There will not be a Go 2 that breaks Go 1 programs."


> I wonder: why not go further and say "there will never be a Go 2" in order to eliminate ambiguity about this?

They did, five years ago. Albeit with an “if”.

https://github.com/golang/proposal/blob/d661ed19a203000b7c54...

> If the above process works as planned, then in an important sense there never will be a Go 2. Or, to put it a different way, we will slowly transition to new language and library features. We could at any point during the transition decide that now we are Go 2, which might be good marketing. Or we could just skip it (there has never been a C 2.0, why have a Go 2.0?).

> Popular languages like C, C++, and Java never have a version 2. In effect, they are always at version 1.N, although they use different names for that state. I believe that we should emulate them. In truth, a Go 2 in the full sense of the word, in the sense of an incompatible new version of the language or core libraries, would not be a good option for our users. A real Go 2 would, perhaps unsurprisingly, be harmful.


> Popular languages like C, C++, and Java never have a version 2.

Only someone who never used those languages would state that; all of them have had breaking changes.


> We could at any point during the transition decide that now we are Go 2, which might be good marketing.

Among the (entirely?) dev-oriented consumers of Golang would the shininess of "2.0" really outweigh the "ugh documentation is going to get harder to find" and "ugh I now need to increase my auditing of dependencies" and other similar fatigue?

Is Google universally good at marketing?


C had a breaking change this year, pre-ANSI C programs need to have all of their function definitions changed for them to be compatible with C23.


But Java absolutely had versions that broke old code


Java went through this. There was a Java 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, and 1.7. Then, Java decided "You know what, we aren't breaking backwards compatibility so instead of naming things 1.x, let's just say Java 8, 9, 10...21"

I think that ultimately makes sense.


Ironically they then went ahead and made a massive change in Java 9 that, for the first time in Java's history, broke pretty much everything. Still angry about that...


Well, they removed code that users were always discouraged from using, and it was always explicitly stated that there are no guarantees of any kind when using it. It was not a coincidence that the code lived in a package called "unsafe" and was (to my knowledge) undocumented.

So while Java broke some big libraries/frameworks (not "pretty much everything though"), it can't really be blamed on them.

In fact, look what Go has: https://pkg.go.dev/unsafe

> Package unsafe contains operations that step around the type safety of Go programs.

> Packages that import unsafe may be non-portable and are not protected by the Go 1 compatibility guidelines.

Let's wait until Go has reached Java's maturity and see what happens when they change this package ;)


> So while Java broke some big libraries/frameworks (not "pretty much everything though"), it can't really be blamed on them.

I think one of the reasons people use Java is to get access to those big libraries/frameworks.

I've worked at a few companies that used Java during the transition, so maybe I had access to about 10 Git repos that underwent this transition.

I think pretty much all of them required some tweaking e.g. adding extra dependencies in Maven when moving from Java 8 to Java 11. I actually became the "go to" person to do these transitions, having worked out what incantations were needed.

All of those repos, to this day, despite the effort that was put into them during the transition, now print out warnings about things being unsafe. The companies just ignore those warnings. I have 15 years of Java experience and I don't know what to do about them. My understanding is this is normal in the Java world now.

They are just normal web applications or REST services using databases like PostgreSQL, using e.g. Spring Boot, Tomcat, etc. Maybe those libraries do things they're not supposed to, I don't know. I have never used sun.misc.Unsafe in my code or anything like that.

Perhaps if I spent days studying the problem I could understand what was going on and what to do about it (although probably not, as the problems might have been in third-party dependencies.) But this wasn't money the companies I worked for wanted to spend. But anyway, my point is that spending days fixing stuff after an upgrade != backwards compatible.


Heh, once there is Spring Boot as a dependency, it has like 100 library jars just to show hello world on a web page. So I'm pretty sure everything would be broken.


You likely have a dependency of a dependency of a dependency that uses it, and thus get the warning.


"Let's wait until Go has reached Java's maturity and see what happens when they change this package ;)"

They did a while back, actually. Compare https://pkg.go.dev/unsafe@go1.0.1#Pointer with https://pkg.go.dev/unsafe#Pointer , in particular the modern very precise description of exactly what you can do with an *unsafe.Pointer. I'm not sure what the cutoff for that was but it was a while ago, yes. Still, it didn't do much.
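For instance, the first of the documented-valid patterns is the *T1 to *T2 conversion, which is essentially how math.Float64bits works:

    import "unsafe"

    // Valid per pattern (1) of the unsafe.Pointer docs: convert *float64
    // to *uint64, since the two types have compatible memory layouts.
    func Float64bits(f float64) uint64 {
        return *(*uint64)(unsafe.Pointer(&f))
    }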


Interesting!

Personally I'm not a big fan of either Java or excessive backwards compatibility. But I can't avoid noticing that a lot of people praise Go for things like backwards compatibility while despising Java at the same time, even though both languages are extremely similar in lots of regards.


It is not clear whether people despise Java itself or Java's backward compatibility, because I haven't seen anyone despising Java's backward compatibility.


> Because I haven't see anyone despising Java's backward compatibility.

I'm happy to have beers, just so that you get the chance to see someone like that. :-)


Hopefully there should never be a repeat of that now that they have strongly encapsulated JDK internals. My understanding is that (nearly) all of the migration headaches from 8 to 9 were caused by libraries that were improperly using JDK internals.


Devil's advocate: anything that's possible for a downstream user to access is fair game for them to use. You can certainly mark it as internal and be explicit that you reserve the right to break it later, but if it's actually possible for users to do, it's not "improper", even if it gets broken later.


That's why they sealed those holes shut, and only allow some of them with deliberate end-user command-line flags, so that anyone wanting to go that way only has themselves to blame.


No there were lots of just outright breaks.

JavaEE being removed, along with a package frequently used for Base64 encoding (with no replacement until several later versions). JavaFX being separated out so it's not bundled anymore, along with removing javafxpackager (since returned as jpackage). Java Web Start being removed.

Then there were all the borderline stuff. The locations of files inside the JDK all changing, like "rt.jar" went away and a lot of tools depended on that. The concept of an installable JRE was removed entirely and along with it the whole way people were used to distributing Java apps was deprecated with no replacement until much later (and the replacement was much worse in some ways). Suddenly spamming warnings to the console if you use widely used packages (which breaks anything parsing the output).

Even just changing the version number broke a lot of stuff because code had been written to assume the convention that Java version numbers started with "1."

Then when they went to 6 month releases soon after, that broke a lot of stuff because the whole ecosystem made the design assumption that Java releases were rare (stupid stuff like using enums to represent versions, the Java guys bump the version number in .class files on every release even if nothing changes).

Then people tried to use the new module system, but that broke the world too and for little/no ROI, so eventually everyone gave up. Now the ecosystem is full of broken module metadata that's there but doesn't work, and if you try to use it and report bugs they get closed with status: "don't care".

Frankly a lot of the dust has still never settled, it was a very damaging time for the Java community. Backwards compatibility über alles bitte, and that means NOT removing widely used features that were heavily developed and advertised as the right way to do things for decades.


I only see eternal stagnation as the alternative, and surely no one wants that.

Java does have very good backwards compatibility and they make every change with that in mind, but if you are big enough, no matter what you do, someone will surely depend on some stupid thing they should never have done in the first place.


I was pleasantly surprised about 6 months ago when I went to run a game I wrote in Java when I was at university. Nothing huge, but still about 10k lines of code. I originally wrote it in Java 6, and it compiled and ran with no issues on Java 20.

I only used one 3p lib, otherwise just the standard library, which helped, but I was expecting something to be broken given it was over 10 years later.


Was it more than references to internal libs that one was not supposed to use anyway?


Pretty much. Most of the breaks came from touching the likes of "sun.misc.Unsafe". Java versions 9->~17 added new jdk features (such as VarHandles) to allow for the safe interactions that sun.misc.Unsafe exposed. Libs had to update to use these new patterns with 9 being the worst hurdle.

There was also a change to how packages could be named that messed with stuff. Two jars putting stuff into the same package, like `javax.annotations`, was a big no-no that broke with 9.


If that's pretty much everything, then nothing is backwards compatible unless they have the same hash..


One cannot make an omelette without breaking a couple of eggs.


Sun did the same for Solaris, jumping from version 2.6 to 7:

https://en.wikipedia.org/wiki/Oracle_Solaris#Version_history


I did not know that was the reasoning or logic behind the Java 8, 9, 10... numbering; that clears up so many things.


Also relevant is that Sun had pulled the same trick with Solaris a few years earlier - Solaris 2.6 was followed by Solaris 7. Bigger version numbers make for better marketing. I am skeptical that backwards compatibility was strongly involved.


Apple also did a similar thing with OSX/macOS a few years ago - instead of making everything 10.XX they bump the major version (first number) every year now, continuing on from the 10 that the X represented, as if each version is the same increment as the jump from Mac OS 9 to Mac OS X (which was a jump to an entirely new codebase)

Android did that too, much earlier starting with 5.0. Previously the major version was something of an indicator of a major visual/conceptual redesign. 3.0 was the tablet version, 4.0 was the move to the holo design language, 5.0 was material. Then they just kept bumping the major version every year since.

I also assume it's just for marketing reasons.


I’d argue that the “everything is 10.x” for Mac OS was also basically marketing. :)


This happened at Java SE 5 (1.5), after 1.4. That was very much a marketing decision.


It goes back even further: Java 1.2 was marketed by Sun as "Java 2".

https://web.archive.org/web/19991010063140/http://java.sun.c...


That was somewhat different from the way the internal stuff was numbered. You'd still see "1.5" and "1.6" everywhere when you asked the JVM for its version. 8 was when the JVM started matching the marketing (IIRC, might have been 9).


You might be overestimating the type of change required to break source compatibility. A benign example is adding a keyword. Let's say you want to add a new language feature and the community unarguably wants the feature and the right or only way to add it is with a new keyword. If you're not allowed to break source, then you can never add the feature.

I understand your argument for big things like changing the semantics of the language. But a backwards-incompatible change can also be rather benign.


It's not true that Go can't add new keywords. Now that we have Go modules, all Go code is now explicitly annotated with the version of Go it was written against. We can add a new keyword in a later version of Go as long as the compiler can still also compile code written for the older versions of Go. (The go command tells the compiler which version of Go to use for each package it compiles.)

What we're not going to do is abandon all the code written for older versions of Go, like Python 3 or Perl 6 did. Python is recovering now but it easily lost a decade to the Python 2 -> Python 3 transition, almost certainly unnecessarily. And Perl lost even more.


This is the TL;DR I wish was the first paragraph of the article!

Could this mechanism be used to patch up unfortunate evolutions in the standard library also? For example, all of the `WithContext` functions that could be folded into the (more common?) non-contextful versions?


Why can't it add keywords? Adding a new keyword doesn't break backward compatibility. It breaks "forward compatibility."


New keywords are like the textbook example of a backwards compatibility problem. It's probably why C overloads "static" so many different ways.


You can sort of add new keywords backwards-compatibly using a trick called "contextual keywords": you require that they be placed in a syntactic position in which no identifier could legally go, and you maintain them as legal identifiers for compatibility. C++ used this trick to introduce "final" and "override" by moving them before the opening "{".


You mean new reserved words? For example, I'm quite sure when C# added "record" it didn't break backward compatibility, as old code that uses "record" as a variable name still compiles.


Go has made changes like that by adding new predeclared identifiers ("any" is an example, I think?) but there's a distinction between predeclared identifiers and keywords.
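That distinction is what preserves compatibility: a predeclared identifier lives in the universe scope and can be shadowed, so old code that used the name keeps compiling. A quick sketch:

    func demo() {
        any := 42 // legal: shadows the predeclared identifier "any"
        _ = any
    }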


Old code becoming a compiler error sounds like a backwards compatibility issue to me.


I guess it's a terminology thing. As someone from a C# background, not all keywords are reserved words. Only new reserved words break backward compatibility.

C# has added some keywords (record, and, or) without breaking backwards compatibility.


I don't understand your example; plenty of languages add new keywords without breaking backwards compatibility. It's removing a keyword that would cause such an issue.


I have named a function 'foo' in the current version of the language. A future change makes 'foo' a keyword. My code was broken by adding a keyword.
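In Go terms (purely hypothetical; "async" is not a planned Go keyword):

    // Legal Go today. If a future version reserved "async" as a keyword
    // without version gating, this file would stop parsing.
    func async() {}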


This is not theoretical: Python broke a lot of async packages when they made “async” a keyword!


I guess some languages get around this by having a distinction between functions and keywords, with keywords not taking the () parens in the syntax. But really, if you're defining functions as keywords, you should just put them in the standard library.


I’d argue Rust’s editions are a good counter argument to that. The differences between editions really aren’t huge despite being breaking changes.

In theory I like the idea of backwards compatibility never changing but in reality some breaking changes really do make sense and being permanently on the hook for a language feature that didn’t take X or Y into account when it was created doesn’t feel like a win.


The article does state that at the end:

> The answer is never. Go 2, in the sense of breaking with the past and no longer compiling old programs, is never going to happen. Go 2 in the sense of being the major revision of Go 1 we started toward in 2017 has already happened.


It becomes a semantic difference at that point. If Go is doing semver, and there are going to be no backward-incompatible changes, there's no reason to ever increment the major version. Everything is a minor version (compatible additions) and patch version (bug fixes).


I'm not sure that's a relevant distinction. If you take the stance that a major version has to mean breaking API compatibility with the previous major version, semver style, then their statement is equivalent to saying "there will never be a go2". If you don't take that stance, then their statement leaves open the possibility that, fifty years from now, we'll be at go1.102 and someone will say "hey, these numbers are getting pretty big, maybe we should just call this next release go2"; and that's fine. That's literally and exactly what Linux does; when the number gets big, it becomes easier to type a smaller first number, so they rename the version to a smaller first number. It's not semver, but semver doesn't have a monopoly on how software must be versioned, and leaving room in the language today to do that is totally cool.


> I wonder: why not go further and say "there will never be a Go 2"

Pretty sure they've said this in the past.


I write a lot of Go and I can’t tell you how much this warms my heart.

Compatibility probably isn’t much fun for the language team, always having to keep one foot firmly in the “distant” past. But for those of us that have to maintain large Go systems it’s such a gift.


I worked on the Go compiler for a couple of years, and it wasn't a big deal. We just thought carefully about things, and dealt with a lot of rejection of ideas. If we couldn't make it fit, it wasn't right, and we'd try again. If we still couldn't make it fit, we probably didn't have a good handle on the problem, and it was right to stew on it longer.

Frankly, I truly appreciated working with people who thought carefully and tried to make sure the right ideas were in. I appreciated Russ being a BDFL, Ian, Rob, and Rob. I'm glad I did it, and it made me a much better engineer.


Related:

Forward Compatibility and Toolchain Management in Go 1.21 - https://news.ycombinator.com/item?id=37122932 (no comments yet, but some will probably show up)


> Boring is good. Boring is stable. Boring means being able to focus on your work, not on what’s different about Go.

This really resonates with me. I work with NodeJS and the JS ecosystem in general on my day job and I have to tell you, the struggle is real. The ecosystem is fragmented, everyone is doing their own thing, which is hard to make things stable. Don’t get me wrong, I still enjoy this work, but I really wish the JS ecosystem could have a stable modern foundation we could rely on.


The JS eco-system (npm, React, etc)? Sure. Let us also acknowledge that JavaScript, the language, has been prioritizing backwards compatibility before golang even existed.


Well, it is a bit easier if you have no stdlib to speak of


JS has a stdlib, there's just nothing in it.


I wonder why Go isn't the new Java/.NET (yet?).

Clearly a lot of tools and APIs have been written in it, many would describe not needing a separate runtime on the target system as a big plus and the language seems simple enough to learn and utilize (with VSC support and GoLand both being good), even the typical complaints like the error handling don't seem like dealbreakers.

I wonder what's missing for Go to become a mainstay of development for the decades to come, or at least take up a huge chunk of the job market instead of being considered a niche language in some places.


Why would it be the new Java/.NET? You vastly overestimate that separate runtime as being a huge positive; it is almost irrelevant in most niches where these two are most common: servers, especially on the bigger side of things. For a devops team with a proper CI/CD pipeline, monitoring, whatnot, installing a runtime is beyond trivial, especially as much of it is container-based.

So even if all else were equal, Go would need many more positives to even start turning the wheel towards itself; momentum is huge in the industry (old COBOL systems are still ticking along in places, even if they do so in a VM, as the hardware they are hardcoded against is too old now). Especially since it is not at all a net positive in many people's eyes:

- it is very verbose (yes, it is in fact more verbose than Java, that had been bullied by everyone forever for being verbose..)

- has terrible expressivity (java streams/.net linq)

- smaller ecosystem (java is much larger than even .net, let alone go)

- slow reflection (on the more enterprise-y end of the industry you sometimes need more dynamic workloads)

Also add that both Java and .NET have native AOT compilation, so even that small benefit you mention may not be a dealbreaker, even if those are not as smooth a ride as Go's.


Whenever someone mentions Java AOT, I look it up, and it’s a nightmare. Has it changed recently?

I developed server software in Java for two decades, and I can tell you that the huge JVM was always a PITA for us, even after Docker became a thing. All those frameworks? We ended up throwing them out. And the slow startup speed of Java apps made our tooling sluggish. It was also a pain to ship tooling to non developers, for all the same reasons.

Using Go after all this time was like a breath of fresh air.

It’s fine for you to assert your own preferences and biases but you don’t represent all enterprise server developers.


The only change is that now there are free beer AOT compilers; commercial AOT compilers have always been an option, since around 2000.


OK... and these AOT compilers work identically to JIT compilers in every way, with no caveats other than needing to specify the architecture in advance.

Right..?


Naturally it depends on the use case, yet they work well enough to have been in business for 20 years.

https://www.ptc.com/en/products/developer-tools/perc

https://www.aicas.com/wp/products-services/jamaicavm-tools/

https://www.codenameone.com/

Android 5 & 6 (only changed back into JIT/AOT due to long compile times), https://www.infoq.com/news/2014/07/art-runtime/

Unfortunately the best-known one, Excelsior JET, is no longer in business, most likely due to GraalVM and OpenJ9 being available as free beer, while PTC, Aicas, and Codename One are safe in their domains.

There is also RoboVM (https://github.com/MobiVM/robovm) as free beer, however it actually started as a commercial product, and the acquisition from Xamarin kind of stagnated it (naturally).


I think you missed my point.

Java AOT does not, and will probably never, offer a comparable experience with Go. Yes, it’s possible. But it’s not easy.


Naturally having to pay for compilers isn't something that Go folks would ever do.


You’re deliberately ignoring my point and then throwing in an ad hominem for good measure. It has nothing to do with free or non free.

I ran a Java enterprise company for twenty years. I looked at AOT on multiple occasions, starting with gcj 20 years ago, and most recently, GraalVM. There were plenty of commercial compilers in between.

If there had been a non free AOT compiler that didn’t come with a bunch of compatibility and licensing complexities we would certainly have considered it. Just like we bought non free IDEs.

But such a tool didn’t exist. The cost in missing features and added complexity was always much greater than the sticker price.

Java AOT is an afterthought, it is not at all comparable with Go AOT, which is a core feature of Go, expressed for example in the Go team’s explicitly designing the language itself to support fast compilation.

There are plenty of conversations to be had where we can compare the two languages, Go will not always win those discussions, but one of the inarguable features of Go is that it is AOT from the ground up, and that is a feature that’s really valuable to me.


First you accuse me of ad hominem, and then confess that you only cared about free AOT compilers for Java: gcj, whose only feat was being able to compile Eclipse with some nurturing, was quickly abandoned in 2009, while you ignore the commercial offerings that have been available for 20+ years.

Good job.


WTF? I said “from” gcj, the earliest AOT compiler that I know about, “until” graalvm, the most recent that I know about, and that “There were plenty of commercial compilers in between”.

Please stop.


> Why would it be the new Java/.NET?

Because it's a seemingly "boring" technology that's not too hard to learn, seems to have a decent community/ecosystem in the works (let's see where it is in 5-10 years) and overall could hit the sweet spot of just being able to get things done with it in a 9-5 by a bunch of regular developers.

> You vastly overestimate that separate runtime as being a huge positive, it is almost indifferent to most niches where these two are most common: servers, especially on the bigger side of things. For a devops team with a proper CICD pipeline, monitoring, whatnot, installing a runtime is beyond trivial, especially that many of it is container-based.

I use containers and that won't change anytime soon, they're great! That said, CLR and JDK have some space overhead, even if a counterpoint could be made that storage is on the cheap side nowadays. However, there are also those who don't use containers and don't always have 1:1 reproducible environments (even though they should).

I've personally seen a difference in the MINOR version for JDK bring down production by having some sort of an issue that caused the performance under load to decrease 10x in an enterprise project. It would be nice to get rid of that risk and just ship the whole thing, much like how a fat .jar instead of configuring an application server like Tomcat separately makes things easier, just a step further.

Now, Java has some solutions for that, but I think that Go is in an even better position in that regard!


Well said. In addition, golang seems to shun everything under the guise of "keeping things simple", so you don't see frameworks like Spring or ASP.NET.

Of course, pretending the issue doesn't exist doesn't make it so. At one employer, they reinvented a dependency injection + application framework, but poorly of course compared to the extremely mature offerings on the JVM and .NET, not to mention the millions of dollars sunk into maintaining it.


Java DI isn't good either. It runs a full DAG on boot.


There is an entire second generation of "cloud-ready" frameworks for Java that do it at compile time instead.


I think Go is on that Java/.NET adoption curve, but it's climbing it slowly because backend programming, as a whole, is a lot bigger, mature and diverse today, than it was when Java/.NET emerged.

I think there's a decent chance that, `[java.age - go.age = 15 years]` from today, Go is high up on the totem pole. From my perspective, its ecosystem is vibrant but still young – we still need to decide on the Go equivalent of Flask, Django, Spring, etc.


There are plenty of Flask equivalents in Go IMO. Django... not so much.


Yep! I think, at this stage, `plenty` is the point of my comment. While there are many competing and quality options (vibrant), none of them are the de facto leader (young).

I think, given where we are in 2023, it'd be difficult for a Django (i.e. ORM + templates + web framework all-in-one) to emerge in Go – it's possible we never end up with one, and that's OK. [I don't think there's much stomach for good people to work on sprawling projects like that anymore – we're in a season of backend development that favors separation (vs bundling) of concerns, from my perspective].

I'm not sure if it's a unique feature of the Go ecosystem that there isn't one clear winner in the "minimalist + pluggable web framework" or "ORM" categories, or if we just need to wait for the winner to emerge. Ironically, I think the quality of `net/http` and `database/sql` might have been an anti-catalyst for the development of leading libraries in those verticals.


> we're in a season of backend development that favors separation (vs bundling) of concerns

I work in a small team (5 developers), on multiple projects that span from 3 months to 1 year of work. While I like some of Go's qualities (like speed, types, low memory, easy deployment,...), it would be hard for me to introduce Go to the team. The thing is that with Django (or Laravel, or Rails, or any "opinionated" framework) I can point the team to a nice single documentation website and associated framework that covers probably 90 to 95% of our needs and gets us right into the business logic real fast. There's real value in framework integration for teams like us (and for this very reason, we don't use Flask either; way too much fiddling). Also, the feature set in these solutions, while maybe "out of season", is fine for most of our projects.

At this point, should I want to push Go to the team, I would have to integrate libraries myself and document them... so basically starting my own version of a "framework". Like you said, it's a sprawling project. But hey... isn't that how Django started? Maybe one day...

Meanwhile, I'll stick to using Go in my personal projects, until I have a very clear picture of the ecosystem.


You also don’t need to have frameworks for everything when the std library delivers


Because it's very literally decades behind Java on almost every front. It was designed for system programming whereas Java is an enterprise platform. On observability Java stands alone. Far beyond anything else (including .net). E.g. check out OpenTelemetry to get a sense of how far ahead Java is on that front. It does all of that without impacting performance in a significant way.

In terms of 3rd party tools: NPM has more packages, but they are simplistic toys for the most part. Java has a fantastic number of packages to solve every niche problem you can think of. These packages are at a maturity level that no one can compete with. Hibernate is so far ahead in the ORM field that there's no point comparing it to anything else...

The same can be said for Spring, it is massive. In a bad way as well... But that mass is without competition. You need to integrate with something, there's already someone who built that and it probably works with Spring.

Then there's scale... Horizontal and vertical scaling and the set of tooling to measure that it works properly. The question isn't why don't people switch to go, the question is why does anyone use Go to begin with?

To a large part it's a combination of ignorance about Java due to the vast amount of stuff that's already out there, hostility towards the language (which is redundant since there's Kotlin, Scala, etc.), or problems with Oracle, which is something I actually get... I use OpenJDK but Oracle does loom. Go feels like a toy. If I were to build a system level solution I would use Rust, which seems superior in every way. For high level stuff the JVM is without competition.


> I wonder why Go isn't the new Java/.NET (yet?).

Go is moving too slowly (ex: most of the SDK still doesn't support generics). Meanwhile, Java is moving relatively quickly, as are new contenders like Rust.


Go is moving slowly, or trying not to move at all, on purpose, and I have to say, as a C# developer having to constantly learn new syntax, it seems refreshing. At the rate it's adding new syntax, C# may fall apart under its own weight and become the next C++.


But Go started at Java 1.1 levels, for no good reason. If you want a language with a conservative growth, Java is your choice.


Moving towards what?

I use it to build services that can serve lots of requests, I don’t need the language to be moving fast underneath me


> Moving towards what?

Toward a better stdlib? E.g. we had to wait years to get Min/Max.


I think the stdlib is pretty good! Was anyone really waiting for min/max?


The SDK is where most movement is needed.


Go is spreading like wildfire in microservice and devops space and Rust is following closely behind.

It's over.


Maybe the decline of desktop applications (Java, C#) and Android (Java)? And then Go coming out with some killer frameworks? I have to stretch my imagination to imagine anything causing Go to outpace C#'s or Java's ecosystem.


I was a Java dev full time and went to full time Go dev. I think a lot of it boils down to wide employment opportunities with Java.

Some people also like the idea of creating OO package private final monstrosities.


Go is popular in dev ops right? It could theoretically languish there for a while, like Python did, before becoming popular for other things.


Because the overall experience is lacking in language features, tooling, and ecosystem.


The Go ecosystem is a fraction of what .NET/Java is, both in quality and size. Dart, for instance, is a better language. It is also from Google. It also compiles to binaries. Why is that not the new...

The sweet spot for Go is the devops tools space, where Go may or may not survive, considering the march of Rust.


JavaScript is famously backwards compatible. That’s exactly why it’s the mess you describe.


> I really wish the JS ecosystem could have a stable modern foundation we could rely on.

I actually think we do now. ES modules, ES2020 code. Both supported by Node and major browsers. Node even has a built in test runner now! The problem is getting everyone up to this bar. Once we’re there I think things are going to feel a lot better.

I think part of the problem is that the JS ecosystem also encompasses frontend UI work and there are so many different applications for it that multiple implementations is inevitable. Desirable, even.


Moving some code from Python to Go was instrumental in helping me scale. I am so glad to read they are going to stick to their core commitment of backwards compatibility.


Ah, parsing IPs. Just how exactly was BSD's original inet_aton written, I wonder? atoi/atol and sscanf with %d/%u always parse exactly decimal integers; it would have had to use either %i or strtoul with base 0 to get this silly effect.


No need to guess. My man page for inet_aton says it comes from 4.3BSD: https://github.com/dank101/4.3BSD-Reno/blob/master/lib/libc/...

The earlier inet_addr from 4.2BSD has the same logic: https://github.com/dank101/4.2BSD/blob/master/lib/libc/inet/...

inet_aton and inet_addr parse addresses the obvious way. Using something like strtoul or especially sscanf would be stilted. The beauty of C pointers is that it makes simple parsing tasks very easy--perhaps too easy.


O_O

They've intentionally coded it that way? This is atrocious. And this hand-rolled mess doesn't even parse numbers correctly! It would parse "099" as 81 and "99999999999999999" as whatever that is modulo (ULONG_MAX+1), without any overflow detection. Well, at least they don't accept negative numbers; that's something.

And mind you, the beauty of C pointers has nothing to do with either of these two bugs, nor with the original decision to support octals and hexadecimals.
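
To make the "099" case concrete, here is a small Go sketch (mine, not from any BSD source) contrasting an unvalidated accumulate-by-base loop with strconv's stricter base-0 parsing:

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        // BSD-style: a leading 0 selects base 8, but the digits are never
        // validated against the base, so "099" accumulates to 9*8 + 9 = 81.
        s, val := "099", uint64(0)
        for _, c := range s[1:] { // skip the leading 0 that selected octal
            val = val*8 + uint64(c-'0')
        }
        fmt.Println(val) // 81

        // strconv with base 0 also infers octal from the leading 0,
        // but rejects '9' as an invalid octal digit.
        _, err := strconv.ParseUint("099", 0, 32)
        fmt.Println(err) // strconv.ParseUint: parsing "099": invalid syntax
    }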


I laughed when I read that. “Back in the day” I decided to “clean up” my /etc/hosts by zero padding the quads.

Anyway, the outcome was that I had to go back and remove the zeroes.


As a language designer, I respect the decisions they made here, including that you never get a real Go 2.

I'm also going to pilfer their techniques for ensuring compatibility.


If Go 2 won't break Go 1 programs, shouldn't it be Go 1?

I get not using semantic versioning for end user packages like a web browser, but for backend systems and APIs, it still makes sense.

I don't see why Go didn't use full semantic versioning (x.y.z), where any update that didn't break anything moved Z; new APIs or minor changes to behavior/compilation, such as in the article, moved Y; and large changes to the core language or core libraries moved X.

By semantic versioning, they aren't moving off "Go 1" any time soon, no matter what it's called.


> If Go 2 won't break Go 1 programs, shouldn't it be Go 1?

Yes, that's what the article is saying.


Ah. So to phrase it differently,

"There will never be a Go 2 unless marketing says we should renumber."

Just stop referencing "Go 2" as if they ever think it will exist, then!


Agreed. It’s confusing as hell


I do hope they will eventually introduce sum types and a better way of handling errors, but keeping old code just working is frankly more important on the grander scale.


BTW, I've read the Go GC can be disabled, but what happens if you do? Wouldn't you have to avoid memory leaks?


If you turn off the GC then no memory is collected. In general this is not something people actually do. The only time I've ever even seen it mentioned is in an old blog post by ESR, and between him and me, at least one of us didn't understand what he was saying.


There are popular problem domains where turning off the GC is useful. The best example is Lambda functions/FaaS. Most invocations are short-lived enough that the GC never triggers anyway, but being able to turn it off can help smooth down your P99s where it would trigger, since the world is going to be thrown away in 50 milliseconds anyway. This is a known, not-uncommon performance optimization in other languages on Lambda, but I have less experience using Go on Lambda.


It's not even exclusive to GC'd languages. A toy raytracer I've written in the past got notably faster if I just didn't deallocate anything. Memory management is expensive, and for unix-y software that runs on some input and then quits it can be worth it to just let the OS do the cleanup.


That doesn’t seem like a real thing. Lambda and every other FaaS that I’m aware of will reuse a single instance of the function for multiple invocations. It is not started from scratch with each invocation. Each instance can live for hours, handling a very large number of (sequential) invocations during that time.

If you’re not GCing, you will almost certainly run into OOMs.


If the FaaS can catch the OOM, restart the instance and re-run the request, the visible effect would be somewhat greater latency for that request. If the service is configured to automatically kill and restart instances after some time or some number of requests, it seems like a reasonable tradeoff. Am I missing something?


> If the FaaS can catch the OOM, restart the instance and re-run the request, the visible effect would be somewhat greater latency for that request.

I’m not aware of an FaaS that would hide the OOM… that falls under the “not a real thing” category I mentioned previously. It would just be a failed request, which is a very visible effect.

Would you like to link me to the Lamda docs that say it will automatically retry the request in case of an OOM?

Also consider that code which OOMs will do so unpredictably. It may have completed half of a task, leaving that task in a corrupt state that requires manual intervention from a human to recover from. If everyone wrote fully idempotent code that can somehow skip the already-completed updates and continue where the OOM occurred, this wouldn’t be a problem, but that isn’t what everyone does. This is certainly a large reason why FaaS do not retry requests automatically at all, in most cases.

> If the service is configured to automatically kill and restart instances after some time or some number of requests, it seems like a reasonable tradeoff.

I don’t believe Lambda has any configuration parameters for those things. That’s not how this is meant to work. Lambda will kill the instance when it has been idle for too long, and that’s the primary factor.

You could manually exit the process when your conditions are met, but why? The benefits of turning off the GC are likely to be negligible. This is a lot of complexity for no real gain. It would take some seriously huge gains demonstrated by well-written benchmarks to convince me that any of this is worthwhile for a FaaS function.

Grug would rather fight t-rex.[0]

If you need absolute performance and control over garbage collection, it’s better to just write the lambda in Rust than to try to hack together a solution by turning off the GC and hoping it doesn’t blow up at the wrong moment.

[0]: https://grugbrain.dev/


https://docs.aws.amazon.com/lambda/latest/dg/invocation-retr... Lambda retry docs. I still would strongly advise against intentionally disabling the GC, and explicitly said not to above, for the same reasons you're concerned.


Glad you generally agree.

From that link, here are two choice quotes that I think are highly relevant:

> When you invoke a function, two types of error can occur. Invocation errors occur when the invocation request is rejected before your function receives it. Function errors occur when your function's code or runtime returns an error.

> […]

> When you invoke a function directly, you determine the strategy for handling errors related to your function code. Lambda does not automatically retry these types of errors on your behalf. To retry, you can manually re-invoke your function, send the failed event to a queue for debugging, or ignore the error. Your function's code might have run completely, partially, or not at all. If you retry, ensure that your function's code can handle the same event multiple times without causing duplicate transactions or other unwanted side effects.

So, it won’t automatically retry if a function is being directly invoked and OOMs, which I believe was the context of my most recent reply.

There are a few limited scenarios where certain AWS services that are invoking a Lambda asynchronously will decide to retry a couple of times, because it assumes that this type of function will be okay to call multiple times with the same input. Definitely not something worth relying on.


Do not do this in AWS Lambda or OpenFaaS. Lambda processes are reused in "warm" invocations and are only short-lived in cold, infrequent use cases (read: once every > 45 minutes). Disabling GC is useful only when the process dies at the end of every request (in which case you're relying on the OS to do the cleanup for you).


Also things like compilers...which tend to make lots of garbage but the processes are so short lived who really cares? All the memory is getting released in no more than a few seconds, worst case.


Unless you're writing C++...


There is, of course, one notable use for turning off garbage collection:

https://devblogs.microsoft.com/oldnewthing/20180228-00/?p=98...


That seems like something that would eventually cause problems once someone takes the software, already designed and paid for, and slaps it onto a missile with a longer range...


I'm sure all missiles will eventually cause problems for someone, somewhere


If you have a short lived CLI tool, disabling the GC might be useful but that’s likely an exceptional case.


It can, by setting GOGC=off or from within the program. And yes, you would. It's essentially like writing in C but not being able to call free(); i.e., allocating a fixed amount of resources at the start of the program and working on them. Which is also a recommendation[1] for safety-critical C code made by a NASA person, incidentally, heh.

[1]: https://en.wikipedia.org/wiki/The_Power_of_10:_Rules_for_Dev...
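
A minimal sketch of the in-program knob (debug.SetGCPercent is the documented equivalent of the GOGC environment variable):

    package main

    import (
        "fmt"
        "runtime/debug"
    )

    func main() {
        // A negative percentage disables the collector entirely,
        // just like running the binary with GOGC=off.
        old := debug.SetGCPercent(-1)
        defer debug.SetGCPercent(old) // restore on the way out

        fmt.Println("GC disabled; every heap allocation now lives forever")
    }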


Most of that seems reasonable, but the "do not use function pointers" boggles my mind. I'm pretty sure the alternative is a bunch of conditionals wherever you would otherwise use a function pointer. I've definitely seen some really ugly code written with this axiom which would certainly be a lot cleaner if rewritten to use function pointers. I'm curious if anyone can make a compelling argument against function pointers?


It's in the original:

> […] Similarly, function pointers should be used only if there is a very strong justification for doing so because they can seriously restrict the types of automated checks that code checkers can perform. For example, if function pointers are used, it can become impossible for a tool to prove the absence of recursion, requiring alternate guarantees to make up for this loss in checking power.

In other words, static analysis beats code cleanliness, in this perspective.


If your program is short-running, then it may be more efficient to disable GC and let the dying process take care of it all.


There are many tales about programs foregoing any kind of memory collection and relying upon the business process itself to resolve the issue.

My favorite is a missile. Evidently the program leaked memory like crazy. The solution was to determine how much memory would be required for the platform to fly to its farthest possible target, plus some margin.

Alternatively, there are the stock trading platforms written in Java. They disable the GC entirely because trading hours are only a limited portion of the day, and restart the program daily.


I like to think the stock market thing is actually the reverse, and they shut it down every night because all the programs that keep it running were written in a memory-leaking tech stack that needs restarting


> My favorite is a missile.

This might be apocryphal, as software for this class of embedded system almost certainly doesn't dynamically allocate anything; probably doesn't even use pools.

I have a friend who works on missile software at a big defense contractor, and they do actually clean up their memory.


I found a source, quoting the quote, so provenance is what it is

> This sparked an interesting memory for me. I was once working with a customer who was producing on-board software for a missile. In my analysis of the code, I pointed out that they had a number of problems with storage leaks. Imagine my surprise when the customers chief software engineer said "Of course it leaks". He went on to point out that they had calculated the amount of memory the application would leak in the total possible flight time for the missile and then doubled that number. They added this much additional memory to the hardware to "support" the leaks. Since the missile will explode when it hits its target or at the end of its flight, the ultimate in garbage collection is performed without programmer intervention.

[0] https://devblogs.microsoft.com/oldnewthing/20180228-00/?p=98...


Nobody said it was a good missile.


The Go compiler performs escape analysis, which determines whether a particular object "escapes" the function it was created in - and if it doesn't, the compiler allocates it on the stack instead of the heap, thereby removing the need for GC.

So, as long as you write Go in malloc-less C-style (which, admittedly, is a non-insignificant restriction), your program will be able to run just fine forever.
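
You can watch the compiler make these decisions with its -m flag; a minimal sketch:

    package main

    // Build with: go build -gcflags=-m .
    // The compiler reports which values escape to the heap and
    // which stay on the stack.

    type point struct{ x, y int }

    func onStack() int {
        p := point{1, 2} // does not escape: stack-allocated, zero GC work
        return p.x + p.y
    }

    func escapes() *point {
        p := point{3, 4}
        return &p // escapes to heap: the pointer outlives the frame
    }

    func main() {
        _ = onStack()
        _ = escapes()
    }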


> So, as long as you write Go in malloc-less C-style (which, admittedly, is a non-insignificant restriction), your program will be able to run just fine forever.

What happens if you write your Go functions in this malloc-less C style and they have to grow their stack? Doesn't this mean the old stack is now leaked memory?


I see no reason why de-allocating a stack would require action from the garbage collector - after all, once a goroutine grows its stack and copies the contents of the old stack to the new stack, there aren't any reachable objects left on the old stack, so it can be deallocated without any GC intervention.

But I don't know enough about Go internals to be 100% sure.


The purpose of escape analysis is to avoid situations where other code or data structures could potentially reference stack-allocated memory. IOW, all stack-allocated data is only referenced from the local function invocation frame, presumably using stack relative addressing, but if not at least the scope of pointer updates is substantially narrowed.

Relatedly, unlike languages like Java, Go doesn't have a moving GC--once something is allocated on the heap, the address is fixed.

I'm not sure how realistic it is to code in an entirely malloc-less style in Go, though, given the limited ability to pass references to stack-allocated objects.


Yeah, I think you’re right that stack allocated data can’t be pointed to by the heap. I don’t know what you mean about “the limited ability to pass references to stack allocated objects” though—you can always pass stack allocated references, you just can’t return them (or they will escape to the heap).


A reference can escape not only up, as a return value, but also down, as a function parameter. I'm not sure how sophisticated the static analyzer and inliners are, but escape analysis can only go so far; e.g. stopping at a method call on an interface type for which it cannot discern the concrete implementation. In fact, I believe at least in early Go releases merely passing an object reference as a parameter to a simple leaf function qualified as an escape, forcing a heap allocation.

Note that the decision to force heap allocation is done at compile time, partly because of the semantics of goroutine stack reallocation as the previous poster hinted at. In principle Go could conditionally copy to heap at runtime, but that's just not Go's style of engineering--that's more in the vein of the JVM and similar environments which push more complexity into the runtime rather than compile time. Though maybe it will or has already begun to go down that road.


Yeah, good point--I wasn't thinking about interface types.


https://go.dev/doc/gc-guide has the docs on mucking with the GC, including disabling it.

> Wouldn't you have to avoid memory leaks?

I've not played with it, but I'd expect that every heap allocation would essentially be a memory leak in that mode


If you allocate everything in advance and use free lists, you can probably do it. I believe that is one of the few ways TinyGo handles memory (so embedded as a use case).

Note that GC can be triggered manually.

I've had to do that using the wasm target so that I'd only trigger go GC cycles when the browser was idle. Eventually that use-case might disappear as the integration with a garbage-collected wasm gets deeper.
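
(A sketch of that pattern; hostIsIdle is a made-up stand-in for whatever idle signal the host actually provides:)

    package main

    import (
        "runtime"
        "runtime/debug"
    )

    // hostIsIdle stands in for real idle detection, e.g. a
    // requestIdleCallback bridge under the wasm target.
    func hostIsIdle() bool { return true }

    func main() {
        debug.SetGCPercent(-1) // no automatic cycles
        for i := 0; i < 3; i++ {
            _ = make([]byte, 1<<20) // stand-in for work that allocates
            if hostIsIdle() {
                runtime.GC() // runs a full collection now, synchronously
            }
        }
    }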


There are at least two cases where GOGC=off could make sense for increasing performance:

1. You know you’re definitely not allocating on the heap;

2. Your workload is bursty and you can simply keep everything until the process dies and memory is released.


If you're definitely not allocating on the heap, setting GOGC=off should make no difference, right?


You might be allocating on the heap, but if you know in advance that the allocated memory never goes out of scope, then switching off the gc is quite reasonable.


You will leak memory when you disable it.

However, you might not care (the app runs too briefly), or you might turn it off only temporarily (say, for a time-critical section you don't want interrupted by GC).


This is something I've complained about before on HN, but this rings really hollow when Go 1.17 arbitrarily updated the crypto/x509 package to suddenly stop supporting TLS hostname validation based on the CN field because it was deprecated in the X.509 specification.

My browser doesn’t care, why should Go?


Found the guy who only read the title.


Golang's forward compatibility and static compilation let developers quickly download and use the latest Go release without the upgrade pain of interpreted or VM-dependent languages.


The leading zeroes in net.ParseIP change broke some of our unit tests too.
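
(A quick repro for anyone who hasn't hit it - since Go 1.17, ParseIP rejects dotted-decimal components with leading zeros instead of guessing between octal and decimal:)

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        fmt.Println(net.ParseIP("192.168.0.1"))     // 192.168.0.1
        fmt.Println(net.ParseIP("192.168.000.001")) // <nil> on Go 1.17+
    }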


Looking forward to Go 1.99999999999999....


Pokemon Go 2 the polls


Nice article Russ.


Darn, I was hoping for at least a hint or mention of when Go 2 might be released... no dice


Go 2 is coming out every six months as continuous Go 1.x releases.


The article ends with a definitive answer to that question.


I would expect a Go 2 to dispense with accumulated API cruft and to overhaul syntax with something radically different. But to otherwise maintain compatibility.


I started using Go at work ~2 years ago and I love it, especially for the backwards compatibility. But my personal projects up until then were node.js backends written with Typescript. Those personal projects are essentially stuck in time because of the whole ESM/commonjs mess that is modern javascript. Some of the npm packages I use have been updated to only support ESM modules, while others will never be updated, and some are half and half. It's in such a bad state that I've decided if it ever fully breaks I'm just going to rewrite the backend in Go, and I know I'm not the only one frustrated with the modern javascript ecosystem, with projects stuck on old node.js and npm package versions because of it.


The whole Node.js ESM thing is the result of a bunch of different parties - Node, Typescript, Babel/Webpack/etc - with a lot of influence over the ecosystem and conflicting priorities. This is the price of truly "open" standards. One of the reasons it's such a mess was bending over backwards to maintain backwards compatibility.

It's a mess. It's frustrating as a user/developer. Go has it significantly easier by having just one central organising factor behind the ecosystem. If there were multiple Go implementations, some moving faster on the spec (or even ahead of it) than others, then you would find the same pain there also.


I've been maintaining a fairly popular (~40k downloads/mo) react component library for about 4 years now. It is a clusterf'ck to keep things up to date with changes over time. React itself has broken things several times.

I knew in advance that this would be a challenge, so when I started the project, I wrote a lot of unit tests. The first test for each component was just a snapshot output of it. Then I had other tests for functionality and have added tests over time for bugs. That has saved my ass so many times.

I don't know how anyone maintains a JS project without comprehensive tests. Actually, I do... as you said, they never upgrade anything.


Babel was the most enterprise thing that could happen to the JS ecosystem.

Gone are the simple distribution channels, and all build pipelines have 100s of megabytes of legacy crap that nobody actually ever executes.

If you want to tell me your page is working in IE6, you're lying. It will break apart in all places.

Anybody remember bower and pikapkg? That was the peak in my opinion.


Oh my.... yes. For me, that would have been the Angular 1.x days. And while it was neat having a tool that could pull a deterministic flat list of dependencies, I wouldn't go back to those days. I do wholeheartedly agree that injecting core-js into every single build, for the rarest of cases where you'd need to support a browser that doesn't support generators or some other feature added around 2015, is just plain silly. I cringe when I look at a dependency graph and see all those polyfills in my builds!!


In your position I'd probably do the same thing; no doubt the ESM/commonJS situation is a mess and a huge PITA.

All that said I'll probably continue with my TypeScript/NodeJS backends since I just absolutely love writing TypeScript and I love sharing code between the front and backend. I use JS/TS in my day job (frontend only) as well so it's super nice to be able to benefit from that knowledge/practice. I also use TypeScript to write a few apps (some for work, some for my side project, some for personal use) so staying in the TypeScript ecosystem for almost everything I do is really nice.

Again, I'm not defending the annoying/bad parts of TS and certainly not saying I've escaped the ESM/commonJS hell unscathed, just that I use TS everywhere and I generally have a lot of fun writing/running it.


> ESM/commonjs mess that is modern javascript

No - only a mess in the node/npm scope.

JavaScript modules are just fine if you stay away from npm and node. Node and npm are a total dumpster fire, but that is not all of JavaScript.


Agree that node/npm are the dumpster fire of JS, but ESM is still going through growing pains too. I have switched off of Node over to Deno, which natively supports ES modules. But most of the time when I try to consume an ESM, the code was designed to load only in a browser. It gets especially complicated if the ES module is using something like WASM - using FFmpeg via an ESM that uses WASM is the current use case I'm working on.


Never breaking compatibility is hardly a new idea.

Java and C++ have invested tremendous amounts of effort into preserving backwards compatibility for decades, often requiring bending over backwards and very suboptimal designs for new features, especially in C++.

But it keeps users happy. At least the ones with large old code bases.

I think Rust has one of the cleanest models for this, which keeps complexity low: editions.

The compiler will always support old code written against older editions, but the language can still introduce breaking changes in newer editions. Developers can either opt in to new editions and migrate their code, or do nothing, and things continue to work fine. There is seamless/transparent interoperability.

Of course Rust is still pretty young, and if this model will work long term remains to be seen. For example, I imagine this causes a lot of implementation complexity in the compiler, the standard library is much more restricted and must remain stable across editions, and major overhauls probably won't be possible.

But so far it has worked out really well, allowing the team to fix warts and mistakes in the language.


Did you read the article?

> The compiler will always support old code written against older editions, but the language can still introduce breaking changes in newer editions. Developers can either opt in to new editions and migrate their code, or do nothing, and things continue to work fine. There is seamless/transparent interoperability.

This is exactly what the GODEBUG scheme described in the article is intended to allow.
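
Concretely, a main package can pin a single old behavior with a //go:debug line; a minimal sketch using the panicnil setting (assuming Go 1.21):

    //go:debug panicnil=1

    package main

    import "fmt"

    func main() {
        defer func() {
            // With panicnil=1 this prints <nil>, the pre-1.21 behavior.
            // Without the directive, Go 1.21 converts panic(nil) into
            // a *runtime.PanicNilError.
            fmt.Println(recover())
        }()
        panic(nil)
    }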


Edit: I had some wrong assumptions here.

The Go approach does indeed seem to enable similar functionality. Although I'm not quite sure I understand how the individual settings and the module level version declarations compose, and how much complexity that introduces.

With both module-level versions and GODEBUG, the whole thing does seem quite a bit more complicated than editions.


> The setting is opt-out, requiring manual intervention.

No. If a module's go.mod file declares go 1.21, the compiler will use go 1.21 semantics when building it, even if 1.22 has already been published and the module is being used from a 1.22-enabled program. This is explicitly mentioned in the blog post.
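
In go.mod terms, that's just (a sketch; the module path and versions are made up):

    module example.com/teama/service

    // Language/compatibility version: this module gets go 1.21
    // semantics even when a newer toolchain compiles it.
    go 1.21

    // Optional, also new in 1.21: require at least this toolchain,
    // which the go command will fetch automatically if missing.
    toolchain go1.21.1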


It's basically a much more featured, flexible, extendable, and usable version of editions. Anything you could do with editions you could do with this; you could implement "editions" via this functionality.


Yes, but that's sort of my point regarding complexity.

"This crate is edition X, and an edition comes out every 3 years or so" is much easier to deal with.

It reminds me a bit of Haskell language pragmas, which can be toggled per source file (module), and which are a huge mess.


Can you explain in what way is that different from e.g. C++, where a compiler will take "-std=<C++ revision>" parameter which chooses the "edition" of the language?

And even link different versions together?

Because it sounds like Rust uses the same approach with the benefit of not hitting all the problems this causes... yet.


C++ unfortunately does textual inclusion for header files. So "-std=" does not automatically work for those. You need to sprinkle in #ifdefs rather than getting automatic forward compatibility.


I would also be interested in it. As far as I know, editions only allow for syntax changes; the semantics can't change incompatibly, especially since Rust doesn't link across different versions in the same way C++ does.


I would upvote this twice if I could.

"We will never break compatibility" is code for "we will never fix warts and mistakes in the language".


It obviously isn't, as the text of the article repeatedly observes.


You are right. I hadn't read the article and just assumed. It's actually about how they enabled introducing "breaking" changes without actually breaking backwards compatibility with existing code. Which is the best possible outcome.


Interesting to see the first two comments in this thread have the opposing opinions on backward compatibility. I guess it depends on how deeply Python cut you.


> That raises an obvious question: when should we expect the Go 2 specification that breaks old Go 1 programs?

> The answer is never. Go 2, in the sense of breaking with the past and no longer compiling old programs, is never going to happen. Go 2 in the sense of being the major revision of Go 1 we started toward in 2017 has already happened.

Why is backwards compatibility such a religious sin for Go? Python made it through a backwards-incompatible source change. I understand breaking backwards compatibility would be difficult on a short timescale, but on a long-term time scale people would migrate. Source rewriters would handle most of the transition. Over 3-5 years I could see a source-breaking change play out positively for some of these newer languages like Go and Rust. And arguably, they sorely need it.


Go developer experience: * install the newest compiler * clone the code * go build

Python developer experience: - clone the code - setup virtual env (otherwise you will break your system) - install very specific version of interpreter because lib x supports only Python from 2020, because Python breaks compatibility in minor versions - no, you can't just update lib x without updating Python and other deps - install deps (hopefully the author of the code pinned everything, otherwise you're fucked) - never update any deps or you will suffer


As somebody who writes a lot of python, I'm going to quibble, but your statement isn't unfair. :)

My quibble would be "don't use lib x". 99% of the time, you don't need an unsupported 3rd party library from 2020. In the worst case, you can copy the subset of lib x that you actually need; the copied code will typically just work, verbatim, in a later python version. In summary, the "common" path travelled by footgun-aware python programmers is not this bad.

But yeah, it'd be nice if the language constraints meant you couldn't end up in a situation like this, and it'd be nice if we didn't have to learn-by-footgun as much in the python community.

TBF, Python is just an old language that comes with early-mover advantages and disadvantages. Like an old house, we have just learned to live with (i.e. avoid) certain floorboards.


Why the extra "clone the code" step in the Go developer experience? Been using Go since 1.0, never had to do that.


Let's say I described steps to hack on some Go package. When installing libs/programs, go get/install is enough :)


> Go developer experience: * install the newest compiler * clone the code * go build

HN screwed up your formatting, FYI.


I'm clueless when it comes to HN formatting ¯\_(ツ)_/¯


Using extra newlines liberally (when in doubt, add a newline) is good working advice


> I understand breaking backwards compatibility would be difficult on a short timescale, but on a long-term time scale people would migrate.

It's funny to see people expressing this view because the "2 to 3 migration problems" is still a lively conversation in the Python world. I happen to agree with you and view it as a price of success - but at the same time I think it's obvious that you will pay an outsized price in community sentiment for even reasonable timescales for EOL'ing old versions.


> Python made it through a backwards-incompatible source change

The transition took more than a decade and was kind of a mess for a long while. For other languages, like Perl, it really didn't work out.


"Why is backwards compatibility such a religious sin for Go?"

I think the best way to understand it is that it is a first-class feature of the language. Thus, for Go specifically, it is like asking "Why is it a statically-typed language?" or "Why is the language compiled rather than interpreted?"; these are not bad questions, but to a large degree the answer is that it was a major choice made at the start of the language. And "why won't they get rid of it?" is in the same class of questions as "Why won't Python just become a statically-typed language?", which is, again, not intrinsically a bad question, but one that is certainly in a different category from "why won't Python adopt this particular bit of syntax sugar in their next version?"

That the designers of Go would consider that a "feature" and so many programmers probably find classifying "backwards compatibility" as a feature a completely befuddling concept ("that's not what a feature is!") is probably a pretty good microcosm of the difference between the gestalt of programmers as a whole and the Go designers.


Has Python really, though? My company still has a bunch of 2.7 lying around that no one is touching. I would like to flip your question on its head and ask why any language ever needs a breaking change. Might as well create a new language in that case.


While I can't say for sure, one of the reasons I seem to recall from the Python 3 transition was that the Python 2 design was pretty much a dead end. There were so many limitations and wrong design choices that they would keep the language from moving forward. That does seem a little aggressive, but it does feel like Python picked up a lot of steam once Python 3 was viable (something that happened way earlier than many care to admit).

Our code base wasn't huge at the time, a few hundred thousand lines of code. Getting everything running was a week of work for three people. Sure, many had way more complicated code, and some depended on idiosyncrasies of Python 2 that made things more complex, but a lot of people acted like the Python developers shot their dog. Mostly these people either simply didn't want to do the work, or their particular niche was made ever so slightly more complex... or they depended on unmaintained libraries, which is bad in its own way. Python 3 was always going to be able to do everything Python 2 could, yet a large group of people acted as if that wasn't the case.

Still, it was not the best transition ever devised; we had to wait until 3.2 for the speed to reach a point where it wasn't an issue for all but the largest users.


The Python 3 upgrade process for many projects was incredibly painful. "Mercurial’s journey to and reflections on Python 3" should be required reading for anybody with rose-tinted glasses of the migration.

https://gregoryszorc.com/blog/2020/01/13/mercurial%27s-journ...

There was, of course, a Hacker News thread discussing the article, and a fair few people decided to blame the Mercurial developers for handling the migration inelegantly. Because that's how you win over an audience of developers - reassuring them that if Python has a backwards-compatibility break, Python fans will go out of their way to try and blame you for writing bad code. And not, perhaps, the fact that Python was missing things like a u string prefix and % bytestring formatting until 3.3 (2012) and 3.5 (2015!!!) respectively.

If I sound peeved, I really loved Python in the 2.x days, and the way the 3.x transition was handled broke my heart and prevented me from using the language for pretty much an entire decade. There are lessons to be learned from the transition, but not if we ignore the real problems that the transition caused. More importantly, we need to recognize that Python is not the Python we know today because of how "well" the transition was handled, but because Numpy and Matplotlib swooped in and gave Python new niches to excel in at just the right time.


All well and good when you have an active dev team who knows the code. Have fun walking into a code base that has just been running for the last 5 years and all the consultants that created it have left.


Guido has admitted several times now that the 2-3 transition was poorly handled.

Transition costs are huge, which is precisely the reason that Go developers take this so seriously.


> why does any language need a breaking change ever

That's easy - because it's impossible to design everything right the first time, and for many things it's also impossible to make them right later without breaking compatibility, while those improvements are valuable.

A new language for each breaking change also doesn't make sense when there is a lot of continuity.


Incompatibility, lack of stability and other churn inducing changes are the nemesis of software maintenance.

The implied cost is immense and soul sucking.

In many cases it’s pure vanity as well.

It’s fine for early languages, research languages and toy languages. But outside of these categories it’s not worth the cost.


Python programmers just self-selected into a set of programmers who care about backwards compatibility less than other programmers. You can see this attitude all over the Python ecosystem. And even in standard Python it's not just one backwards-incompatible change, but a series still made from time to time. Programmers from the C++, Java, and Go worlds wouldn't accept it as easily.


> Python made it through a backwards-incompatible source change.

It took them a decade and permanently tarnished their perception with many programmers. They "made it through" in a similar way to how cancer survivors enter remission.


It's probably because Google has no interest in rewriting the hundreds of thousands of lines of Go code they have internally and don't want to expend the resources to maintain a v1 and a v2.


> Python made it through a backwards-incompatible source change.

That's skipping a few facts, isn't it? Let's call it what it was. Python was the victim of a cataclysmic software disaster of biblical proportions. It spent 10 years in rehab afterwards but miraculously made it through, yes.


> Python made it through a backwards-incompatible source change.

It made it, but it was a rough few years, and the string model changes, while mostly welcome (though not perfect), were a pain; we still find bugs from time to time.

Things also got a lot better once the community figured out how to do proper multiversion sources, even though it was more limiting.

A statically typed language would have it a lot easier, by virtue of both the API and the semantics changes being much more flagrant, as well as the compiler making it easier to actually mix different versions of the language across different packages (or even source files).


This always seemed like a nightmare to me. Even today, years after Python 2 was officially sunset, outdated documentation is still all over the web and may never catch up.

Yes, technically the language got over it, but I would hold this up as an example of a reason not to break backwards compatibility. Having to manage multiple interpreter versions when I'm just trying to run software on my computer - what a pain in the neck.


> Python made it through a backwards-incompatible source change.... Source rewriters would handle most of the transition.

It seems we have collectively learned literally nothing from the failed plans to change Python incompatibly.


Backwards compatibility matters to the Go developers because all of them have worked with the same codebases for 20+ years. I remember Rob Pike showed me a source file that had been written before I was born (1973) but is still part of (I think) Plan9.


Especially for a compiled language, I would think it is significantly easier to build automated code porting that is guaranteed to be correct. Or at least enough to handle the 99% of a code base that is likely not impacted.


Which is exactly what Swift did when it shipped early source-incompatible revisions.


Python is definitely the exception, not the rule.


Swift shipped source-breaking updates earlier in its lifetime. I honestly don't really know of a story where a source-incompatible update killed a language. People grumble, then they move on.


> I honestly don't really know of a story where a source-incompatible update killed a language.

Perl, with the Perl 5 -> Perl 6 (later Raku) transition? Fortran, with the F77 -> F95 transition?


The pain of breaking changes is proportional to the volume of code affected by the breakage. This usually means breakages early in the lifetime are easy because there's relatively little code. For mainstream languages that have a decade or more of widespread usage, a breakage is a big deal.

> I honestly don't really know of a story where a source-incompatible update killed a language.

Perl. I mean, Perl isn't strictly dead, but its share of the market plummeted. Python almost certainly would have suffered a similar fate if it weren't for the explosion of interest in scientific/numeric computing (which more than made up for massive attrition to other languages, including Go).


> I honestly don't really know of a story where a source-incompatible update killed a language.

Death may not come as swiftly (heh) as you think. I know a few people (including myself) who, when deciding which language to add to their toolbox, had decided against Swift because of its reputation as “that language that always breaks code”, among other reasons. There are just a few anecdotal data points, of course, but I don't think it's controversial to say that a history of messy updates definitely makes new people less likely to learn a language without an absolute necessity.


Killed, in the sense of no one ever migrated? Certainly not. Killed, in the sense of hugely impacting the eco system and the community? Quite regularly. Python seems to finally mostly have made the transition, but there is just a lot of software around which will never be ported to Python 3. Perl almost completely went away, until finally development of Perl 5 gained some traction, but also with promises of maintaining compatibility.


Python was a disaster. I still miss print statements and encoding-agnostic strings. Good on Go for doing the right thing.


Print statements were a mistake. If you forbid breaking changes, you have no way to fix past mistakes in the language design. There's nothing inherently "right" in refusing to do breaking changes.


Removing the print statement was the mistake to me. It violated Python's own stated principle of practicality over purity.


There was nothing practical about the print statement.

The print function is easier to use, meshes a lot better with the language, and can actually be extended without C++-style syntactic nonsense.

The print function is one of the best things to come out of p3.


Having two ways to do the same thing is the mistake, and print() was always more consistent with how other built-in functions are invoked.


I think if you'd ask any Python programmer today, they'd say Python is very much healthy and the community has successfully transitioned.


Python was my first language and I still use it sometimes. Still run into 2 vs 3 issues often.


> encoding-agnostic strings

Do you mean byte strings vs. strings in Python 3, or did you actually like string handling in Python 2? Because that was basically "Tell me that you only allow ASCII without telling me that you only allow ASCII". I can see the issue with byte strings in Python 3 - it's annoying that you have to think about both strings and byte strings - but for dealing with actual text, having everything just be Unicode was reason enough to upgrade from 2 to 3. We deleted so much code dedicated to dealing with encodings during our switch to Python 3; everything just became better.


I had an IRC markov-like bot in Python. It used to be so simple in Python 2:

- the bytes come in from the IRC server

- they go in a string in the log, I can print this log to the console even if it has some IRC control codes (there will be a garbage character here or there but that's OK)

- if I want to make a bold/color/etc I can just throw \xwhatever in the string. if a word from the log had bold in it and it's repeated back it will also have the bold in it.

Then I reluctantly ported it to Python 3 and it was awful. Python now had to micromanage every single string to ensure I don't have any "naughty" bytes in it. Conversions to/from byte/string everywhere. Massive headache every time I wanted to read or write IRC encoded strings to a txt file or print them to the console or anything.

In Python 2 the encoding only lived outside the program, in my terminal, text editor, and IRC clients. Bytes went in, bytes went out, and everyone was happy. Python 3 decided it needed to know exactly what was going on every step of the way and didn't trust me anymore, forcing me to do an elaborate song and dance to get anything done.

I prefer my tools to have as little an opinion as possible. Let me open the door while I'm driving, let me run my web browser as root, and let me print \xDE\xAD\xBE\xEF to the console.


> Python now had to micromanage every single string to ensure I don't have any "naughty" bytes in it.

Oh, that is annoying; our use case just fitted better into how Python 3 works. We went the opposite way. We micromanaged strings all over the place to ensure that encodings would always be correct. We needed Unicode but also connected to Windows systems, which use their own weird codepage system. Encoding and decoding was everywhere, along with encoding detection. Python 3 made all of that go away.


Maybe my case was exceptional and most cases were like yours. That would explain why I'm in the minority preferring old strings.


To coin a phrase, "Go 2 considered harmful" :)



I knew it was too good to have been original.


Call me when we get some sugar for `if err != nil`.

The signal to noise ratio when reading Go code is not great.


I did some regex analysis on my codebase of over 37000 lines. In exactly 566 occurrences of "if err != nil", there were only 154 that just returned the error directly. The other instances are doing some form of wrapping to add more context to the error, or collecting errors from multiple operations, or wrapping the error with a status code, or a bunch of other things.

The problem with every error handling suggestion I've seen is that it only handles the one case

    if err != nil {
        return nil, err
    }
But that's only the case 27% of the time! The other cases are harder, and require more thought anyway. Why optimize for a rare case that's already easy?
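
For comparison, the wrapping shape usually looks something like this (a sketch; loadConfig is made up):

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    // loadConfig is a hypothetical helper, purely for illustration.
    func loadConfig(path string) ([]byte, error) {
        b, err := os.ReadFile(path)
        if err != nil {
            // Wrap with context instead of returning err bare; %w keeps
            // the original error reachable via errors.Is/errors.As.
            return nil, fmt.Errorf("loading config %q: %w", path, err)
        }
        return b, nil
    }

    func main() {
        _, err := loadConfig("does-not-exist.toml")
        fmt.Println(err)
        fmt.Println(errors.Is(err, os.ErrNotExist)) // true: wrapping preserved the cause
    }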


Thanks for looking this up. I'm not just referring to blocks that only return the error, though; even then, your 154 occurrences lead to 462 lines of noise - noise that could be avoided with sugar like `?`. I'm referring to all the code blocks that start with `if err != nil {`. Wrapping and adding context could be done at the end of the previous function call, and we wouldn't have to read boilerplate over and over.


Since Go 2 is never going to happen, maybe it's time to rename the "Go2" label on GitHub and remove the "Go 2:" prefix in the template for language change proposals. I think this confuses people into believing that Go 2 is a thing.


Based on the discussions I've seen, Go2 is considered to be the name for the process of extending the language, not an exact release version.

With that said, the label seems to “work” by allowing people to first think of a backwards-incompatible way to solve a problem and then, possibly, arrive at a much more compatible solution in the process of discussion.


> Based on the discussions I've seen, Go2 is considered to be the name for the process of extending the language, not an exact release version.

I know, but many people obviously don't know that, so it might be a bad name. Just label it LanguageChange or Incompatible.


Go 2 is the Zeno's-paradox goalpost that will never be reached, while continual progress is made toward it.


Here’s hoping they do all the wild changes they want in go 2 and keep go 1.x versions compatible with each other. That gets you the best of both worlds: legacy code can run the legacy version of go and the newer iteration of the language will break compatibility and thereby shed any tech debt and cruft as it learns from any previous mistakes or compromises.


Seems you didn't get to the end: https://go.dev/blog/compat#go2


Oh my bad. Hmm. Well not what I wanted but probably the best choice I guess.


I suggest Go and Dart morph into one lang. Then you could use the power of Go with the more logical syntax of Dart.


So you want them to completely break every pre-existing Go program, for no practical benefit?

That seems dumb.


No. It would only be a flag to allow obsolete Go code in practice I guess.

EDIT: C++ did not break C. The people managing Go have full understanding of dependencies even though they made breaking changes.


Could some of the geniuses down-voting argue for why Google needs both of these more or less dying langs?


Go is dying?


Since they announced adding malware to the toolchain (sorry, ahem, telemetry) I have somewhat lost interest in Go.

How do I know which version has spying turned on, or whether this was abandoned?

edit: I wonder if this was mandated by the Chinese government, as certain tools that help circumvent censorship are written in Go, and maybe the government wants to know who compiles them.


Compatibility or not, if the object field of an interface is set to nil, then myVar == nil should return true, unlike the bizarre behavior now!

Right now it's possible to get a nil value from one func and return it from another, resulting in a hidden conversion, and this returned value fails the nil check. This leads to hidden bugs, and code like this:

    p = makeP()
    if p == nil {
        return nil
    }
    return p
and all this could be avoided by not requiring both fields of the interface to be nil for a successful nil check.

I sure hope this will be fixed at least in 2.

Also, we finally got max/min. Maybe it's time for, you know, sets? Ordered maps? Hashable slices? Groundbreaking, I know.
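
(A minimal reproduction of the typed-nil gotcha, for anyone who hasn't been bitten:)

    package main

    import "fmt"

    type myErr struct{}

    func (*myErr) Error() string { return "boom" }

    // makeErr returns a typed nil: the *myErr pointer is nil, but once
    // it is boxed into the error interface, the interface value carries
    // a non-nil type descriptor - so the interface itself is not nil.
    func makeErr() error {
        var e *myErr
        return e
    }

    func main() {
        err := makeErr()
        fmt.Println(err == nil) // false (!)
        fmt.Printf("%T\n", err) // *main.myErr - the type word makes it non-nil
    }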



