Rust seems to change constantly precisely because it's changing so slowly. The 1.0 release held back a lot of stuff to avoid stabilizing the wrong things, and it's now slowly backfilling the missing features bit by bit.
Instead of starting with a fat standard library, it started with a minimal one, and now each release adds two or three functions that could have been there in 2015. But none of it is an idiom-changing revolution. These are mostly convenience functions, and you don't have to use them all just because they're there.
Every addition to the language is bikeshedded to death and goes through betas, revisions, and refinements before being stabilized years later. The first experimental release of async was four years ago, and it landed in stable only last November. So you may hear about new features a lot while they churn in nightlies, but actual big changes reach stable rarely.
Apart from async/await, since 2015 Rust has made only two changes to how idiomatic code looks (? for errors, and 2018 modules), and both of them can be applied to code automatically. I maintain over 60 Rust crates, some of them as old as Rust itself. All work fine as they are, and running `cargo fix` once was enough to keep them looking idiomatic.
> it's now slowly backfilling the missing features bit by bit
This isn't slow.
Java went three years, from December 1998 to February 2002, without any language changes. And to be honest, even that wasn't that slow!
> Every addition to the language is bikeshedded to death
Not to death, because many of them don't die!
What I find annoying about Rust's changes is that some of them are so trivial: they change the language for such small improvements. async/await is not in that category, for sure. But Ok-wrapping actually is. The need to Ok-wrap results is a minor annoyance at most. Ditto for a bunch of the problems solved by the match ergonomics changes.
I think the root of the problem is that the changes are driven by a small core of Rust programmers, many working on Rust itself or some of its flagship projects, who are actively involved in the discussions around Rust. They tend to view change as natural, and the cost of change as low. I believe there is a larger group of Rust programmers who are not actively involved in these discussions, and are maybe not spending 100% of their time on Rust, for whom the cost is higher. But we don't hear as much from them.
> Java went three years, from December 1998 to February 2002
Oh hell no.
Sun was widely criticized for that boneheaded move, and for the whiplash everyone experienced in Java 5 when they changed that policy too fast.
The mass of changes in 5 left a lot of tools and thus people stuck in 1.4 for years. I started on a project a bit before 6 came out. It was processing data and I couldn’t use the collections API, because it had to run on an embedded system and Aicas was taking their sweet time catching up with all those changes. Four years later I was still using arrays for everything.
One problem with not changing at all for a long time is that everybody forgets what it means to change. Teams stop planning for it, and developers get too used to things being the way they've known them over the years.
Then you get huge pills to swallow, like Java 8, which took forever to be adopted in large corporations.
I have witnessed most of Java history play out, starting with Java 1.3 in 2000.
Looking back, I wish we had regular change cadence where new features are being slowly added at a regular pace so the versions don't feel like a completely new language and developers have time to digest the new possibilities.
It is interesting when you spend multiple years on various projects, each being in a state of migrating from 1.6 to 8. Or coming to interviews and the first question being asked is "Which version of Java are you working on right now?".
Similar problem with Ada. The 2012 standard came out and for a long time only AdaCore supported it. Now, going into 202x, I'm inclined to advocate only small changes that don't reduce ease of reading code.
`match` ergonomics don't affect already working code.
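For concreteness, here is a minimal before/after sketch (function names are mine): the pre-1.26 style still compiles today, which is the point.

```rust
// Before match ergonomics (Rust 1.26), matching through a reference
// required explicit dereferencing and `ref` patterns:
fn first_len_old(v: &Option<String>) -> usize {
    match *v {
        Some(ref s) => s.len(),
        None => 0,
    }
}

// With match ergonomics, the reference "soaks into" the bindings,
// so `s` is automatically a `&String`:
fn first_len_new(v: &Option<String>) -> usize {
    match v {
        Some(s) => s.len(),
        None => 0,
    }
}

fn main() {
    let v = Some(String::from("hello"));
    // Both styles coexist and behave identically.
    assert_eq!(first_len_old(&v), first_len_new(&v));
}
```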
Ok-wrapping isn't a change that's been made. At best it's a hypothetical change that one person wanted to have a calm, rational discussion about with the community. Instead we get this.
> What I find annoying about Rust's changes is that some of them are so trivial: they change the language for such small improvements. async/await is not in that category, for sure. But Ok-wrapping actually is. The need to Ok-wrap results is a minor annoyance at most. Ditto for a bunch of the problems solved by the match ergonomics changes.
So what? Then don't use them. The whole point of full backward compatibility is that you don't have to use new features. This perceived need to always use the newest feature is only in the head of some developers.
Neither in the blog post nor here do I see any reason to not make the experience better for people who want it. Even if it's just a "minor" (very subjective, I love the match ergonomics enhancements) annoyance that's fixed by it.
You still have to understand all the new stuff to read other people’s code. And the old way, so you can read older code. And if there are other ways to do something that are or have recently been in common use, those too. You end up memorizing a bunch of crap for one actual operation.
I think ruby (which in general I still love) has become a major offender of this. "If you don't like it don't use it" is not a good answer for exactly the reasons you explain.
I sense goalposts moving here. The point of the blog post I answered to was: I have to redo all my programs to follow the "idiomatic" way.
Now we are at "other programmers could use new features, so I still have to learn them". And yes, that is true. And that will always be true as long as a language has new releases. Even if all those releases were library changes and not syntax changes/additions. The only way to stop this would be to stop all language development.
But you don't have to change your code unless it's an improvement to you. Somebody else may do it if they care, and you may accept OR reject the PR with no bad feelings.
Agreed. I know plenty of codebases still written in C++03 style that work fine. I'm still using 2015-edition Rust crates and they work fine. I won't say there's no cost to evolving idioms, but on this front it's more of an affront to aesthetics than to productivity or anything else actually important to business goals, IMO.
The one exception perhaps being async/await, in that a bunch of crates are making breaking API changes to accommodate it and bumping their major versions (because it's bloody useful and a bit more than your typical syntactic sugar), which means changing your code if you want to keep getting patches.
I can’t force other people not to use changes, which means I will always need the latest version of rustc installed if I want to be able to build a relatively complete selection of the corpus of Rust code, which in turn makes it very difficult if not impossible for rust software to be distributed in typical OS package managers.
I.e., imagine how Debian or Ubuntu could package things if the definition of C++ changed every six weeks, given that the compiler version should change as little as possible during a major OS version cycle? This is a major blocker holding Rust back from mainstream adoption, IMO.
Can't package managers distribute binaries? Besides, maybe the fact that a package can fall three years behind the stable release is a problem with package management, not the stable release.
You still need to know about the feature and understand how it works or eventually you'll only understand your own code as everybody else is using stuff you haven't learned.
From my experience of trying to learn Rust (got almost halfway through the manual and then put it on hold), I would add that having all these ways of doing the same thing is also confusing from a beginner's perspective. As a noob, I have no idea what the timeline is of when feature A was introduced that does the same thing as feature B. Also, I kept having to dig around to see if feature A added any type of improvements (other than syntactic sugar) over feature B.
• try!() has been replaced by ?. You don't even need to know — the compiler will tell you.
• futures 0.1 has been replaced with std::futures.
This is it. This is the exhaustive list of A->B feature changes in the last 42 releases of Rust. Now you know them all.
Rust is aggressive with keeping everyone using the current stable release. In C++ you may need to keep track of feature history to know if something is too new or too old for your compiler/project. There's no such thing in Rust. If you know a feature exists, you can use it.
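To make the one real migration concrete, here's a minimal sketch of `try!` versus `?` (the path is illustrative):

```rust
use std::fs;
use std::io;

// Pre-1.13 style: the try! macro propagated errors up the call stack.
// (`try` is a reserved keyword in the 2018 edition, so the old form is
// shown commented out.)
//
//   fn read_config(path: &str) -> Result<String, io::Error> {
//       let text = try!(fs::read_to_string(path));
//       Ok(text)
//   }

// Current style: the ? operator does exactly the same thing.
fn read_config(path: &str) -> Result<String, io::Error> {
    let text = fs::read_to_string(path)?;
    Ok(text)
}

fn main() {
    // A nonexistent path yields an Err we can inspect.
    assert!(read_config("/definitely/not/a/real/path").is_err());
}
```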
> I believe there is a larger group of Rust programmers
Is there? This is a genuine question. I tried Rust back in 2013 and have been following it from the outside, not using it for anything serious since I haven't had a use case. My impression is that it's still a niche language, and its user base seems to be very involved and engaged. I don't have the impression that there are a lot of companies that are "just doing their job with Rust and not making a lot of noise about it". If a company is using Rust for something, it's probably writing an engineering blog post about it, and the guy who spearheaded it is probably active within the community. I'm just basing this on the amount of Rust posts vs. the amount of Rust use/job postings in those quiet, doing-their-thing companies.
But that's my point: I knew about every one of those except Discord just by following Rust marginally. Also, Microsoft and Google etc. reported using it for various bits and pieces.
My impression is that if a company does something in Rust there's going to be blogposts about it and the engineers involved are active community members.
OP argued that there is a silent majority working along on Rust as is and getting caught unexpectedly with the changes - I'm just not seeing that.
It's a legitimate question. I can't be sure there is such a silent majority. I do know there are a dozen people using Rust at my company, none well-known, and we haven't blogged about it or publicised it.
> My impression is that if a company does something in Rust there's going to be blogposts about it and the engineers involved are active community members.
You must recognise that this is pure availability bias, right?
>You must recognise that this is pure availability bias, right?
I'm comparing to stuff I hear from the local dev community and job postings. But again, this is just my impression; I'd like to see some actual stats on this.
I _really_ felt that in JavaScript, especially ES2015. So many updates came out at once that people didn't realize many of the benefits they received from new features were now much less important because of other features that were included around the same time.
There was a huge rush to use classes, which were sugar around the prototype system, but that happened when we also got spread operators, Object.assign, and property name shorthand. We could already express the concept of a class far more easily than we had been, but because it all came out at the same time it didn't really click for the community at large, and a lot of what became idiomatic didn't really feel important enough for a language update.
Ditto with generators and async functions; an async function is merely a very specific application of a generator, yet is far better known. Again, not sure we needed all this extra syntax so you can say `async function foo() { await bar() }`, when `co(function* foo() { yield bar() })` was already available (if co were simply added to the Promise object). We're also now tying the language to a very specific implementation, when the co approach is explicit about what's responsible for taking control from the yielded function (a Promise).
I don't get this perspective. Are there only three good features invented every decade? Shouldn't it be the quality of the features that counts, not their rate of introduction?
Also, I think C might be the only language to get 1.5 features per decade. Fortran has seen a lot of recent editions, and so has Java. Even x86 gets new instructions with every arch bump. Are there any other languages that change as slowly as C? If not, it may be too elitist to claim that there is only one language in the world that's not a child's toy.
> Are there any other languages that change as slowly as C?
ANSI Common Lisp has had the same spec since 1994.
(This is not a problem, because Common Lisp gives you enough high and low level power to do (almost) whatever you want in user space, instead of relying on the language designers to add a particular feature.)
If this were true we wouldn't have done so much research into programming languages over the last 50 years.
PL design is not a problem to solve; it's the study of ways to express problems and their solutions. As long as there are new problems and new ways to solve them, we'll find new ways to improve PL designs.
The best answer I've seen is the Lisp Curse. It's basically the idea that Lisp is so flexible that every project becomes its own DSL / idiomatic version of Lisp. Imagine what's being discussed here, except that every programmer actually has the power to just make the change rather than have to submit it to the central Rust team for consideration and eventual implementation.
I had the same thought. Lisp puts an end to the bikeshedding; everyone can build their own shed and paint it whatever they want, without having to break upstream.
There are a multitude of reasons, but I think the biggest ones are (1) the unfamiliar syntax; (2) the fact that Lisp runtimes tend to want all code that interacts with it to also be Lisp instead of focusing on FFI and interoperability; (3) Unix and C were free and so went viral.
I spent today updating some production Rust code from 2016. This was a partly automated process thanks to 'cargo fix'.
Some thoughts:
1. Most of the superficial changes, like 'dyn' and Rust 2018, are easily handled using automatic tools.
2. The original Rust 'Error' trait was clunky, and it lacked useful features like backtrace support. This led to a series of experiments and fads in error handling (e.g., error-chain, failure), which are only starting to calm down after recent enhancements to 'Error'. More conservative Rust users avoided this by sticking with 'Error' and living with the limitations.
3. Updating dependencies has been surprisingly easy, with several notable exceptions. Very early adopters of 'tokio' 0.1 had a fair bit of work to do once 'async' stabilized. And OpenSSL has been a constant thorn in my side, despite being a native C dependency.
One handy tip: Don't force yourself to constantly update your code to use all the latest features. Not everything needs to be async. And it's OK to have some rarely-touched modules that are written like it's 2015.
> And OpenSSL has been a constant thorn in my side, despite being a native C dependency
If it's acceptable for your use case, it might be worth taking a look at rustls. I'm not sure how far along it was back in 2016, but it's in a pretty good state right now. I consider it much easier to deal with than OpenSSL, and the fact that a lot of legacy crypto that I have no need for isn't even supported in the library is appealing at least to me.
As someone newer to Rust who has been using error-chain because it was what I found at the time I'd be curious to hear what your preferred solution to errors in modern rust is.
I think the current rough consensus is anyhow/thiserror, depending on whether you're writing an application or a library. I haven't actually used them myself, though. You don't have to keep up with the cool new libraries.
Agree that there’s consensus on anyhow for applications, but I think many folks now prefer vanilla std::error::Error for libraries (which itself got better as a result of all the experiments in the ecosystem).
I used to maintain a custom derive crate for errors before failure came out, but these days I just use manual impls of `std::error::Error` for libraries and `Box<dyn std::error::Error>` for applications. I can't be bothered to keep up with the Rust error-library-du-jour game, even if it so happens that today's darling custom derive (thiserror) would emit the same `std::error::Error` impl that I write by hand.
Manual impls may be tedious to write, but they change rarely after they've been written, enough that the con of adding one more dependency that may go away tomorrow outweighs the con of writing them manually.
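For anyone curious what those manual impls amount to, here's a minimal sketch (the type name and message are made up):

```rust
use std::error::Error;
use std::fmt;

// A minimal hand-written error type of the kind described above.
#[derive(Debug)]
struct ConfigError {
    message: String,
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "config error: {}", self.message)
    }
}

// Display + Debug are the only required bounds; the trait methods
// all have defaults.
impl Error for ConfigError {}

// A library function returns the concrete type...
fn load() -> Result<(), ConfigError> {
    Err(ConfigError { message: "missing key".into() })
}

// ...while application code can erase it behind Box<dyn Error>,
// relying on the blanket From impl that ? uses.
fn run() -> Result<(), Box<dyn Error>> {
    load()?;
    Ok(())
}

fn main() {
    assert_eq!(run().unwrap_err().to_string(), "config error: missing key");
}
```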
This matches my experience as well. I want to write code that works for years (dependencies make this hard) and is understood by everyone. So explicit is best.
Unless you want all I/O errors to be indistinguishable to your callers, you should use more specific variants than a single `Io(std::io::Error)`, like `ReadConfig(std::path::PathBuf, std::io::Error)` and `ConnectToServer(std::net::SocketAddr, std::io::Error)`. And if you do so, the `From` impl becomes untenable and you have to fall back to `.map_err(...)?`.
In general, I feel `?`'s automatic `From::from` conversion leads to bad errors, because the programmer is incentivized to have type-specific context via implementing `From<>` rather than operation-specific context, even though operation-specific context leads to better error messages. Consider the difference between:
  failed to initialize library
  caused by: an I/O error occurred
  caused by: no such file or directory (2)

and

  failed to initialize library
  caused by: could not read config file at /path/to/config.toml
  caused by: no such file or directory (2)
Edit: I should add that it is possible to use `From<>` when the type-specific contexts and operation-specific contexts form a bijection. In the above example, if there were dedicated `struct ReadConfigError(std::path::PathBuf, std::io::Error)` and `struct ConnectToServerError(std::net::SocketAddr, std::io::Error)` types, then having the higher-level `Error` impl `From<ReadConfigError>` and `From<ConnectToServerError>` would be perfectly fine. Of course, that would just shift the issue down to those types since they would not be able to impl `From<std::io::Error>`. Also, in my experience, libraries rarely bother having such individual operation-specific error types anyway.
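A sketch of the operation-specific variants described above, with illustrative names; note that a blanket `From<io::Error>` impl is impossible here precisely because two variants wrap `io::Error`, so `map_err` attaches the context instead:

```rust
use std::io;
use std::net::SocketAddr;
use std::path::PathBuf;

// Operation-specific error variants, each carrying its own context.
#[allow(dead_code)]
#[derive(Debug)]
enum Error {
    ReadConfig(PathBuf, io::Error),
    ConnectToServer(SocketAddr, io::Error),
}

fn read_config(path: &PathBuf) -> Result<String, Error> {
    // No `?`-driven From conversion possible; map_err names the operation.
    std::fs::read_to_string(path).map_err(|e| Error::ReadConfig(path.clone(), e))
}

fn main() {
    let err = read_config(&PathBuf::from("/no/such/config.toml")).unwrap_err();
    match err {
        // The caller can tell exactly which operation failed, and on what.
        Error::ReadConfig(path, _) => {
            assert_eq!(path, PathBuf::from("/no/such/config.toml"))
        }
        Error::ConnectToServer(..) => panic!("unexpected variant"),
    }
}
```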
The selectors approach is clever, given that it works without a `map_err` closure but still supports borrows through the `Into` conversion. Unfortunately, all the libraries before yours have ruined the chances of me using anything outside of libstd as far as error-handling is concerned ( https://news.ycombinator.com/item?id=22820661 ).
> The crate I would recommend to anyone who likes failure’s API is anyhow, which basically provides the failure::Error type (a fancy trait object), but based on the std Error trait instead of on failure's Fail trait.
I feel like error handling in Node hit a nadir in 10.0, due to async function stack traces being so truncated. I started in the industry when Java was young, and while I appreciated stack traces plenty, I hardly ever had to work without them again, and you really do come to depend quite a lot on this behavior.
Officially we are upgrading to 12 for perf, but I was the first (and will probably be the primary) contributor to that work because I want everyone to have decent stack traces again. Our code has too many layers of abstraction, and working without full traces is adding insult to injury.
I feel that the changes to date haven't been that significant. Adding `dyn` was almost completely automated, removing the need for `extern crate` was welcome and very easy to apply, likewise `try!()` to `?`. The changes to futures were a bit of a pain but futures were always known to be in flux. `impl Trait` is useful but failing to use it where you could is not ugly enough to irritate.
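For readers who haven't met it, this is roughly what `impl Trait` in return position buys you (a toy sketch):

```rust
// `impl Trait` in return position (stable since Rust 1.26): callers get
// "some iterator of u32" without the closure-bearing concrete type ever
// being named in the signature.
fn evens(limit: u32) -> impl Iterator<Item = u32> {
    (0..limit).filter(|n| n % 2 == 0)
}

fn main() {
    let v: Vec<u32> = evens(10).collect();
    assert_eq!(v, vec![0, 2, 4, 6, 8]);
}
```

Before this existed you had to box the iterator or spell out a deeply nested adapter type, so code not using it is merely wordier, not wrong.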
There are some upcoming improvements --- that I'd like to see! --- that are bigger and thus may be more challenging:
-- Better `const fn` will make many existing `lazy_static`s become unidiomatic. Worth it, but annoying.
-- Const generics will make a lot of existing code non-idiomatic. Again, well worth it, but annoying.
-- Generic Associated Types ... probably similar.
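As a toy sketch of the first point, assuming the functions involved are `const fn`: a table that once needed `lazy_static` (or similar runtime initialization) can become a plain compile-time constant.

```rust
// With const fn, this helper is usable in constant expressions.
const fn square(n: u32) -> u32 {
    n * n
}

// Built entirely at compile time; previously a lazy_static! block with
// runtime initialization on first access would have been the idiom.
const SQUARES: [u32; 5] = [square(0), square(1), square(2), square(3), square(4)];

fn main() {
    assert_eq!(SQUARES, [0, 1, 4, 9, 16]);
}
```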
The question is, if people decided those important features don't belong in Rust because they would cause too much churn in code that wants to be idiomatic ... then what? That would almost guarantee a new language --- perhaps a fork of Rust, perhaps not --- would eventually fill that niche. Wouldn't everyone be better off if that language was Rust itself, and people who don't want to write that "modern code" ... don't?
> Wouldn't everyone be better off if that language was Rust itself, and people who don't want to write that "modern code" ... don't?
Agreed. The author's whole argument of not wanting "having to make these superficial changes again and again" doesn't make sense. There isn't any requirement for all code to follow the latest and greatest idioms. Rust guarantees backwards compatibility, and the author even acknowledges that in the article.
It would be pretty sad if the reason for no new features or syntax in Rust was due to programmers not wanting to feel like their code was no longer idiomatic.
I think it depends on whether you are working full time in Rust or as a hobbyist trying to break in.
I gave up on EmberJS around the same time because I only got a five hour chunk of time every couple of weeks and they were making semantic changes so frequently then that it felt like I spent half of my time upgrading my code to work with the changes. It ended up being a major factor in killing that project for me and I’m just now getting back to it.
Low-investment hobbyist Rust projects don't need to be idiomatic. Rust hasn't broken backwards compatibility except for a few soundness fixes that broke hardly anyone. It sounds like your EmberJS experience was pretty different.
I've been using Rust since 2015 and never had to update anything.
The Rust "language version" is a per translation unit setting, and Rust supports having multiple library versions in a binary.
So when I want to use a new language feature, or want to use a new library or library updated to use a new language feature, I just do so in the next translation unit I create. Creating a new translation unit always defaults it to the latest language version (unless you specify otherwise), so this is a quite laidback low-friction approach. I tend to create at least one translation unit per month.
If I had to update all my code to use the latest language features I wouldn't be doing anything else full time. This has nothing to do with Rust. I also use C++, Javascript, ... I don't think I would manage.
> The question is, if people decided those important features don't belong in Rust because they would cause too much churn in code that wants to be idiomatic ... then what? That would almost guarantee a new language --- perhaps a fork of Rust, perhaps not --- would eventually fill that niche. Wouldn't everyone be better off if that language was Rust itself, and people who don't want to write that "modern code" ... don't?
A lot of these things feel like cases where Rust isn't really advancing the state of the art and could have just followed what existing languages do. Like, Rust takes some inspiration from OCaml, but OCaml already had good answers to these questions (railway-oriented-programming style composition for error handling, LWT for async, first-class modules for existentials). If Rust had been designed more wholeheartedly as "OCaml but replace garbage collection with a borrow checker", I think it would have got to a pretty similar place to where it is now (not in terms of the specific approaches, but in terms of how good the language is at solving those problems) more quickly and with less churn. Over the years I've concluded that the correct number of major innovations in a new language is one; Rust's is the borrow checker, so it really doesn't need to also try to reinvent error handling from the ground up.
At a more specific level, if the language put more effort into making it easier to manipulate first-class functions (while handling ownership correctly) rather than language-level control flow keywords, then a lot of this churn could have happened "in userspace": different approaches to async or error handling would be library changes rather than language changes.
I don't think it makes sense to design a language by picking the closest existing suitable language and adding one major innovation. You want to do the minimal innovation necessary to achieve your goals --- no more, no less.
Rust designers did consciously try to minimize innovation. To that end, Rust error handling is linguistically parsimonious: the standard library Result type used idiomatically, plus the ? operator (itself very simple). To the extent it innovated, it did so by excluding dynamic exception throw/catch and dedicated error result syntax from the language.
Rust async/await is also not unnecessarily innovative. At heart it is the same as async/await in JS and C#. Getting it to work with the Rust ownership model was hard and did require some innovation. LWT was dropped from Rust because it was not compatible with Rust's goals.
Likewise, higher-kinded types aren't especially innovative but will require significant work to flesh out for Rust because of the ownership system.
The way I see it, Rust's day-1 error handling story was significantly worse than that of most recent languages (e.g. Swift, Typescript, Kotlin, Dart, just going down Wikipedia's list of significant languages). And it's not clear what Rust gains from that - after a lot of a churn it's not come up with a significantly better error-handling paradigm than what was already out there. So it just feels like a waste.
> Rust's day-1 error handling story was significantly worse than that of most recent languages (e.g. Swift, Typescript, Kotlin, Dart, just going down Wikipedia's list of significant languages). And it's not clear what Rust gains from that...
Not sure I understand... All these other recent languages you mentioned were released after Rust, so Rust couldn’t have benefited from them on day 1. The opposite happened: most of these other languages took design lessons from the languages that preceded them, including Rust.
The big problem is that there isn't a single error type that every function can return. Take this example:
  fn read_i32_from_file(path: &str) -> Result<i32, ???> {
      let bytes = std::fs::read(path)?;          // may return std::io::Error
      let text = std::str::from_utf8(&bytes)?;   // may return std::str::Utf8Error
      Ok(text.parse()?)                          // may return std::num::ParseIntError
  }
What do you enter as error type in the type signature? In most cases, it would be sufficient to use Box<dyn Error>, but that requires boxing and thus an allocation, which is something that many libraries want to avoid to be useful for no_std or resource-constrained usecases. An answer to this could be to define a custom enum with one variant per constituent error type, and then you have to provide a bunch of trait implementations for that new error type (esp. to convert the constituent error types into it). That's a whole lot of boilerplate, and most of the error handling crates are about cutting down on this boilerplate.
Another problem with the std::error::Error trait is that until 1.30 it didn't provide a nice way to traverse error chains. Even now, the newer source() method may not be implemented by all the error types from all the libraries you care about.
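To make the boilerplate concrete, here is roughly what such a hand-rolled enum and its From impls look like (names are illustrative, not from any crate); this is exactly what the error-handling crates generate for you:

```rust
use std::{io, num, str};

// One possible error type for the read_i32_from_file example above.
#[derive(Debug)]
enum ReadI32Error {
    Io(io::Error),
    Utf8(str::Utf8Error),
    Parse(num::ParseIntError),
}

// One From impl per constituent error type, so that `?` can convert.
impl From<io::Error> for ReadI32Error {
    fn from(e: io::Error) -> Self { ReadI32Error::Io(e) }
}
impl From<str::Utf8Error> for ReadI32Error {
    fn from(e: str::Utf8Error) -> Self { ReadI32Error::Utf8(e) }
}
impl From<num::ParseIntError> for ReadI32Error {
    fn from(e: num::ParseIntError) -> Self { ReadI32Error::Parse(e) }
}

fn read_i32_from_file(path: &str) -> Result<i32, ReadI32Error> {
    let bytes = std::fs::read(path)?;        // via From<io::Error>
    let text = std::str::from_utf8(&bytes)?; // via From<str::Utf8Error>
    Ok(text.parse()?)                        // via From<num::ParseIntError>
}

fn main() {
    // A nonexistent path surfaces as the Io variant.
    assert!(matches!(read_i32_from_file("/no/such/file"), Err(ReadI32Error::Io(_))));
}
```

(A full version would also impl Display and std::error::Error; this sketch stops at the From impls that make `?` work.)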
> What do you enter as error type in the type signature? In most cases, it would be sufficient to use Box<dyn Error>, but that requires boxing and thus an allocation, which is something that many libraries want to avoid to be useful for no_std or resource-constrained usecases.
Okay, so you are in a resource-constrained environment. What would you do in C or C++? You can't do exceptions in C++ here, because there's allocation and other ugliness for embedded or whatever. You can always have your rust code return an int, like we do in C if the boilerplate is too much for you.
On the other end, you have Kotlin and Swift Results. Kotlin type-erases the error completely. And I'm pretty sure Kotlin's Result allocates on failure. I'm not sure about Swift. But are you running Swift, with its heap-allocated `class`es and reference counting, on embedded? You're surely not running Kotlin.
> You can always have your rust code return an int, like we do in C if the boilerplate is too much for you.
Sure, if you have complete control over all the source code. I'd like to use my standard set of libraries regardless of whether the code in question is resource-constrained.
(Side-note: I specifically said "resource-constrained" instead of "embedded" because you can have resource-constrained code in an otherwise not-constrained application, e.g. in tight computation loops where you want to avoid heap usage to benefit cache locality).
It's true that the preferred idiomatic approach to implementing error types hasn't stabilized. However, that "churn" has all been in third-party crates, so
> a lot of this churn could have happened "in userspace": different approaches to async or error handling would be library changes rather than language changes.
is exactly what has been happening, for error handling.
Composing first-class functions is much more idiomatic in typescript, because there's only one function type, so you can do the railway-oriented programming style. In a sense Rust doesn't quite have first-class functions yet (and I'd argue that NLL actually made functions less first-class than they were before): you can't always pull out a given block of code into a closure, because the resulting code may not pass the borrow checker. Whereas in Typescript (() => x)(), (a => x)(a) etc. are always equivalent to x.
That's true but that seems to be an inevitable consequence of the Rust ownership model. Which means this is just another example of where the "one big innovation" in Rust just isn't orthogonal to other language design issues, as you suggested it should be.
To a certain extent yes, but it's a problem that they've made much harder for themselves by trying to support both OCaml-style first-class functions and C/Java-style imperative control flow. If the NLL work around control-flow keywords had gone into first-class functions, I suspect we'd see a good solution by now; conversely if they'd committed to C++/Java style with what works there (i.e. exceptions) that would also probably make for a result that was easier to use.
They do belong in Rust, but they don't belong in a minor version release. There was Rust 2015, and then there was Rust 2018, we can put them in Rust 2021 (assuming 3 year cycle). Don't put major new forward compatibility breaking features in a minor release.
I missed it myself but have heard that people had to torture some of their code to deal with the object lifetime limitations in older versions. I’m sure that echoes of this still show up as quirkiness in various libraries.
async is huge, and I'm worried about future similar changes.
Suddenly "x.y" means member access.. except for x.await, which has a magic different meaning I still don't really understand. And adding 'await' to a function declaration magicly mangles it in various ways.
Honestly I haven't found those things to be problems at all. ".await" really isn't confusing. I think you meant "adding 'async'" to a function declaration --- it seems fine, but I personally don't use it.
The .await is another thing one needs to teach, and fairly early on, so that people reading code examples don't get confused about where the "await" member of the structure is. It just seems like a really bizarre break in what was fairly clean notation.
For me, it helps that ".await" appears as a keyword because of syntax highlighting. It is an odd syntax, but the alternative way of making await a prefix operator instead of a postfix operator would have been way worse for readability. Compare
let bar = list_foos().await.first().list_bars().await.first();
vs.
let bar = (await (await list_foos()).first().list_bars()).first();
I do agree. The syntax they chose was not my favorite option. I wish they at least had put the stupid parentheses, so you can think of awaiting as an operation that the compiler "magically" puts on Futures in async contexts.
I know that's not technically right, but it still seems a better approximation to reality.
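For the curious, `.await` really is a postfix expression, not a struct field: it chains like a method call. The sketch below demonstrates this with only the standard library; `block_on` and `list_len` are invented names, and the hand-rolled executor is a toy that only works for futures that never actually suspend.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// `.await` is a postfix expression: here it chains after an async block
// just like a method call would, not like a field access.
async fn list_len() -> usize {
    let items = async { vec![1, 2, 3] }.await;
    items.len()
}

// Toy executor: polls the future in a loop with a no-op waker. Fine for
// futures that never suspend; a real program would use a proper runtime.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn raw_waker() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            raw_waker()
        }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is shadowed by its pinned form and never moved again.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    println!("{}", block_on(list_len())); // prints 3
}
```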
This is a key difference between the Go community and the Rust community. Go is like C 2.0: it tries to be minimal and stable. Go usually ships with no language changes between releases. Rust is like C++ 2.0: it is constantly being revised and expanded, supposedly to improve power, performance, and ergonomics. When a new version of Rust is released, the core Rust devs write a blog post gleefully explaining all the new "features" the current version has. What these devs perhaps fail to appreciate is that not having new features can itself be a feature.
I'm a Go fan and I've used it tons; I'm Rust-curious, but I've only made toy programs w/ it. While there is a germ of truth to this -- C++ and Rust are both complex -- I think it carries w/ it a lot of connotations which are untrue.
To wit, C++ is a brutal, terribly designed mess that's absolute torture to work with.
Rust is extremely well-designed and a pleasure to work with.
The "edition" model is a big point in Rust's favor. They can make "backwards incompatible" syntax changes without infringing on the ability to compile older code.
Pre-C++11 was. Modern C++, not so much. Things have drastically improved, and dare I say it's more appropriate to call C++11 and later C++ v2. That said, the tooling around build/test/dependency management/versioning leaves much to be desired. But a combination of meson (for build), Catch (for test), and CMake (for dependency management) can go a long way.
If you are using makefiles, you either write them manually or use a generator like CMake or the autotools. Tools or scripts to automate the build are required in all languages once you hit a certain level of heterogeneity/complexity.
Common ground and uniformity: if every library were using its own build system, it would become very hard to combine libraries. See C++ for example: when including a library you always have to either wrap the build, rewrite it for your build system, or find a way to prebuild. And every library is unique. Contrast this with other systems where you just add a line to your package.json file. A big red flag is that people come up with those header-only libraries or single .c-file libraries just to get around the problem of package management.
Language rules define dependency rules that the build system is not aware of. For example, transitive #include chains, requiring stuff like `gcc -MMD` to generate dependency rules for make. This also improves the IDE experience, with suggestions on where to find imports instead of elementary error messages like "header file not found".
It has a lot to do with the language because the ease of adding external libraries has an enormous impact on the design choices made when writing code.
Also, the need for header files and disgusting hacks like unity builds just adds unnecessary friction.
> ... language because the ease of adding external libraries has an enormous impact on the design choices...
even if you could incorporate external libraries with zero impedance mismatch, you still have to _use_ said libraries.
while there might be a _marginal_ element of truth to that statement, the ease of stitching external libraries into your application has no bearing on either the programming language or the act of programming the application itself.
I don't understand what you're trying to say. Of course the effort to include the library in the project and keep it up to date has an impact on the decision whether I use it or whether I just roll my own. There is a reason that my Rust projects have dozens of dependencies whereas in C++ I basically restrict myself to boost or other header-only libraries.
50% of the popularity of modern languages like golang and rust comes from the fact that building and dependency management is light years ahead of older languages.
Your original argument was that the ease of dependency management doesn't change programming or the act of programming. I feel it encourages a compartmentalized approach to writing programs, pulling in dependencies as needed rather than relying on a giant library like Boost. Thus it does actually change the "act of programming", which is a vague term that can mean anything.
This is not to mention all the upsides of an easy build system, from learning and community standpoints.
That's my opinion for a systems language. And even if purely superficial, it already has more of an argument than the previous comment.
But yes, if your language claims to be a modern systems language, you should. Not being able to write generic data structures or linear algebra containers in 2020 is a shame, yes.
We hear this narrative really often even though it's not accurate: Go had its share of outrage about a change (amusingly, in error handling too) last year, and the proposal (the try keyword) even had to be withdrawn because it was considered too revolutionary.
Go modules are a huge change in how Go dependency management works, and it's comparable, or even bigger in scope, to the Rust changes in the 2018 edition regarding modules. And in the near future, Go's generics (contracts) are going to be a dramatic change, on a scale Rust arguably hasn't faced since 1.0, even though there are some similarities with the impl Trait features.
In fact, Go and Rust are both evolving fast compared to C (and maybe even to C++) and interestingly enough, most of the time the evolution comes in similar domains (error handling, modules, generics).
The big difference is the release schedule: Rust releases small changes often while Go releases big changes rarely. The 6-week schedule has advantages (each version is quite small, so it's easier to test and bugs are found earlier) but it adds a feeling of churn which can be harmful.
"Too fast" is relative but the Rust language has had few changes since the 2018 edition (and even that can work alongside the 2015 edition). The pace of change has slowed significantly in recent years. The standard library has had more additions (but Go has had changes there too).
The blog post we're all commenting on started because someone took umbrage at an idea someone else expressed in another blog post. Not because of any actual language changes, not even a formally proposed change, just an idea in a blog post. All the actual changes mentioned ("try!() -> ?, the addition of impl trait, and the dyn keyword") happened years ago. And not all at once. It took years from adding `?` to adding `dyn`.
My point is that Rust doesn't really change more than Go does. Rust used to change a lot after 1.0, but for the past 3 years it hasn't been the case if you compare it to how much Go changed.
Rust has only become more uniform and contains fewer special cases than just a couple of years ago.
The rate of adding features is quite slow, as demonstrated elsewhere in this thread. Important features spend a long time baking, and a long time in testing before they are piecewise stabilized.
> When a new version of Rust is released, the core Rust devs write a blog post gleefully explaining all the new "features" the current version has.
Of course when someone talks about a CHANGELOG, they'll talk (rather happily) about the things that have changed/improved.
They are new features, so in some sense, they're "special cases", but even then, okay, so that's adding two special cases. Rust is still removing special cases all the time.
Take the Rust 1.42 release, for example. Way back in Rust 1.31, we removed the need for you to write `extern crate`. But when you were writing a procedural macro, you still had to write "extern crate proc_macro;". With Rust 1.42, you no longer have to, and it just works, like everything else. This removed a special case.
Rust has been doing a lot of the "this was special cased to ship, now we are making it more general like we always planned" lately.
What the GP means there is that when the list of new "features" comes out with each Rust release, if we discard the features that are just new stdlib APIs (~75% of the total count for an average release), most of the remaining "features" are just tweaks to existing language features that remove edge cases, e.g. non-lexical lifetimes.
As for your second question, async/await is absolutely the biggest thing added since 1.0 (and is an outlier in new features in that regard); the ? operator is just an improved version of the try! macro that shipped with 1.0.
The way I see it NLL added a huge number of edge cases. Pre-NLL it was clear what kind of lifetimes were possible; now I have no clue what the different lifetimes are and every keyword interacts with lifetimes in a different way.
Pre-NLL made the language incredibly clunky to use, with a lot of code acrobatics required to do something you know should work logically according to ownership but the scope lifetime rules wouldn't allow.
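To make that concrete, here's a sketch (the function name is invented for illustration) of code that the old lexical borrow checker rejected but that NLL accepts, because a shared borrow now ends at its last use rather than at the end of the block:

```rust
// Pre-NLL, `r` lived to the end of the block, so `v.push` conflicted with
// it and you needed an artificial `{ }` scope around the borrow. With NLL
// the borrow ends at its last use, and this compiles as written.
fn push_after_read(mut v: Vec<i32>) -> Vec<i32> {
    let first = v[0]; // copied out; fine either way
    let r = &v[1]; // shared borrow of `v`
    let second = *r; // last use of `r`; NLL ends the borrow here
    v.push(first + second); // mutable borrow; rejected pre-NLL
    v
}

fn main() {
    println!("{:?}", push_after_read(vec![1, 2, 3])); // [1, 2, 3, 3]
}
```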
I don't disagree. But IMO a lot of clarity and comprehensibility has been sacrificed in the current implementation. I'd rather have seen a style that de-emphasised control flow keywords and made it easier to use closures, or simply blocks, to express the behaviour that people wanted. I've yet to see a case where what was really needed was NLL rather than better support for lexical lifetimes.
I don’t know any other “systems” language whose language definition changes every six weeks. When you say Rust changes slowly, what are you comparing it to?
IMO Go is more like C people took a stab at making a Java. I really like go, and it’s ethos, even if I haven’t used it much in production. Go is not however a replacement for C in the way C++ and Rust can be. Of course Rust and C++ aren't a replacement for C in some key ways, so maybe I don't have a point here?
It's been mentioned from time to time that Go is a successor of C. I recently had a look into the Pascal/Oberon family, and it looks like Go is no less influenced by Oberon than by C. The CSP paradigm likely comes from Rob Pike's previous work, and the syntactical elements, such as the operators and punctuation, follow the C tradition. But the overall program structure (including the approach to OOP and probably the 'evolving-by-reduction-rather-than-addition' philosophy) makes it seem closer to the Oberon lineage.
Active Oberon also had co-routines in the form of active objects.
Also note that the 'evolving-by-reduction-rather-than-addition' philosophy is only part of Wirth's Oberon lineage.
Oberon-2 (which Go got its method syntax from), Active Oberon, Component Pascal and Zonnon are all Oberon descendants from ETHZ as well, which move in the direction of mainstream languages, including support for generics or more low-level programming capabilities.
Yes, that's the original inspiration. But Rob Pike has been implementing CSP ideas in different languages for a few decades now, and all that work has highly influenced the implementation in Go.
C++ has had decades to replace C, but it’s really stalled and even lost ground in embedded systems. I can’t see Rust faring much better. C might end up being an eternal language until there is a dramatic enough shift in operating systems to merit replacing it.
In my opinion, there are two main factors driving this:
- Lower spec devices have a hard cap on overall code complexity imposed both by available ROM or flash constraints (big projects literally won't fit on the chip), and by time constraints (if your chip is running at <= XX MHz when it's active, you don't have time to run very many functions between events or interrupts). Most projects for these devices won't grow to the point where you really need the code organization benefits that C++ provides.
- It's a lot less effort to port or implement a C toolchain for your chip than it is to port or implement a toolchain for a more complex language like C++, Rust, or even Ada. It's not just the compiler - you also have to have a working standard library (even if some functions are just stubs), an interactive debugger, and integrations with IDEs (if you already provide that for C). All that software engineering is expensive, and you have a much smaller market of developers to amortize that cost over.
These constraints aren't as binding for high-volume, higher-spec devices like popular families of ARM Cortex-M chips, so usage of C++ seems to be relatively more popular for those devices. Even then, embedded work normally requires more of a "C with classes" or "C plus the &lt;algorithm&gt; library" approach, which is different from C++ projects you'd see that target servers.
>IMO Go is more like C people took a stab at making a Java.
I haven't used Go beyond tutorials and some very basic programs, but I am pretty comfortable with Java. What about Go makes you feel it's relatable to Java? From what I've read the lack of generics is a hot topic in the Go community, but they're pretty crucial to most programs written in Java. Is this still true?
As a side note, Java also didn't ship with generics. Java was released in the mid 90s and didn't have generics until about 10 years later with version 5. This timing should feel familiar - Go first appeared ~10 years ago in 2009.
You may remember that in Java 1.4, `get`ting from collections (e.g. ArrayList) always returned Object, which you were expected to cast to its runtime type (or a class that its runtime type inherits from). I was young at the time, so correct me if I'm wrong. Contrast this with Go's solution, which seems to be user-inaccessible compiler magic. I much prefer Go's solution to Java's, but I also like generics.
By compiler magic I really mean syntax reserved to the language itself; for example, Java's String having extra operator superpowers or Go's generics that are only available to a chosen set of standard collections.
I think you can do the same in Go, using the empty interface (which can represent any type) and then having the user cast it to the correct type. Unfortunately this is not type-safe, and type-safe generics in Go, such as those for maps, are only available to the compiler.
The language and tooling are designed for development at scale. Lots of pragmatic trade-offs were made. It's not flashy at all, and could even be accused of being a bit boring. It is also appropriate for a similar domain.
Yep, Go was labeled as a replacement for C or Python, and while it in no way replaces C, as for Python it generally only does when Python wasn't the right choice from the start.
In reality Go can fully take over Java's problem space.
Not really, unless it provides generics, JEE and Spring like frameworks, supports all embedded CPUs, Oracle/SQL Server/DB2, mainframes, has an OS of its own, a GUI half as good as Swing,....
There is no way that's happening. Outside of Silicon Valley practically no one is using Go; there are millions of Java developers who are quite content with the language...
Stability is THE most important feature. Rust needs to mature a little more and then mostly just stop. I seriously hope that it doesn't follow the C++ path.
Just today someone posted in /r/rust about how they took a two year old project, compiled it with the latest compiler, no issues. Other than it magically took less time to compile and the end result ran faster.
This is a pretty complete misunderstanding of what the previous commenter was saying. You can probably find 20+ year old C++ code that will compile just fine with the latest compiler and it will take less time to compile and the end result will run faster. But that's not what people are complaining about when they talk about C++'s ever growing scope and the insanity of trying to cope with the 1001 ways of doing something in the language.
I have it straight from the horse's (Bjarne's) mouth that there will be no more editions of "The C++ Programming Language" because the language is too big and writing a book too time consuming to actually cover the language as it currently exists and what it is going to become. Its own creator admits that the language is too big to document in book form. How does one ever expect a beginner to get to grips with it in that case!?!
Obviously Rust isn't there as it doesn't have 30 years of evolution and development like C++ does. But the rate at which it is growing and changing means it won't actually take 30 years to become the new C++. And that would be a real tragedy. What's the use of all these awesome safety features if they're in a language almost no one will be able to fully comprehend or understand?
The parent made two claims, I only responded to one of them. You seem to think I responded to the other one. I'm sorry for not being more clear about that!
> But the rate at which it is growing and changing means it won't actually take 30 years to become the new C++.
I very, truly, seriously doubt this is true. First of all, as I said elsewhere in the thread, Rust has been changing very slowly lately. But beyond that, a significant reason for C++'s complexity is that it's sort of two, possibly three different languages: Modern C++ is very different than C with Classes. It is a miracle that they retrofitted a new language on top of an old one, but there's no indication that Rust will ever do that. The quantity of change is one thing, but the qualitative aspects matter here too, and I feel like you're only considering the quantitative aspect (which, I also disagree with, to be clear.)
20-year-old C++ has absolutely no chance of compiling with a modern compiler unless it has a compatibility option for 20-year-old C++ and you know how to enable it. I've had to fix 2-year-old code when upgrading gcc.
Pretty sure gcc doesn't enable trigraphs unless you pass a command-line option to do so (and maybe it's included in -std or -ansi or something like that).
I've had issues with templates, but please forgive me if I don't provide specifics, because it was ages ago. I stopped doing C++ long before auto was even proposed, so I can't comment on that.
That's correct. Rust is still a pretty young language; it's only been stable for five years. This doesn't mean that two years is meaningful in a broader sense, that is, this is one random anecdote. I know of a crate that still maintains Rust 1.13 compatibility, and that's about three and a half years. Still not actual evidence, of course.
Code that doesn't rely on soundness bugs written in 1.0 should generally compile on the latest stable release without issue. The vast majority of Rust users in our annual survey report that their code never breaks, and of those who have seen breakage, the majority say it was trivial or easy to fix.
We put a tremendous amount of work (and spend a lot of money!) into ensuring this. If upgrading your Rust compiler is a significant issue for you or anyone else, please report these things to us.
Upgrading my compiler isn't an issue. I haven't opted to transition to Rust at all in the first place, until things calm down. I've yet to feel that my career has ever been impacted by not learning it.
I'm in hardware land, not JavaScript frontends. Innovation there is more about the product features, sensor capabilities, power consumption. Generally solving customer issues.
Selling adoption of a new language to superiors, or to interviewers at a new job, isn't about compiler upgrades or the merits of cargo fix. They are more concerned with how many people with extensive expertise in the language are available to hire (making hiring hard) and with whether support is still in flux. And that is what I'm reporting: tooling is about more than the compiler; the world around it, and human issues (particularly non-technical humans), also matter.
And until there is stability and a language people don't feel is a moving target to learn, that will remain the chicken and egg scenario.
> When a new version of Rust is released, the core Rust devs write a blog post gleefully explaining all the new "features" the current version has. What these devs perhaps fail to appreciate is that not having new features can itself be a feature.
Rust is trying to solve a set of problems, some combination of which exist in pretty much every programming language that exists today (including Rust). "Have you considered giving up?" is obviously not a very interesting question and not one that's going to get much traction.
From using Rust, it’s clear there is still a lot of work to be done. Someday Rust will be stable and mature, but it isn’t now, and it doesn’t look like it will be a year from now either. The plasticity of the language honestly is what allows it to continue to innovate where others do not. Let’s let Rust find its niche and then lock it down, and not rush it or we’ll be regretting it in a few years.
The only languages that are “finished” are the ones nobody uses. Heck, C’s latest standard is from 2018. And there’s probably gonna be a new one next year. Most changes are minor, but sometimes they’re bigger. Just like any language. It moves much slower than most languages, but it’s still moving.
C's 2018 standard cannot be compared to Rust's development in good faith. C didn't add anything new - it just clarifies edge cases, and does not change downstream C code in meaningful ways. And that constitutes 7 years of C development.
C 2018 was a small release, sure. There are bigger ones. C11 added an entire memory model.
> it just clarifies edge cases, and does not change downstream C code in meaningful ways.
This is exactly the same as Rust. (Again, with the small exception of soundness fixes, which, depending on impact, we sometimes leave a year of warnings in before actually changing.)
I would strongly disagree with that. C is finished, and what they've been working on for the past decade doesn't change that. I don't think you're arguing in good faith if you claim that the C development lifecycle is even remotely comparable to the Rust development lifecycle, and I'm not going to continue entertaining this.
I don't think that you're arguing in good faith when you claim that Rust putting out new versions means "it's not done" but C putting out new versions means "it is done."
My argument is not that they are comparable. My argument is that it's a difference of degree, not of kind.
The difference is obviously in what kind of changes are included in each "release". C has a specification (something Rust lacks) and C18 simply clarified unclear edge cases in the standard. No new features were added. Rust releases add new features and semantics, Q.E.D.
New Rust versions don't change downstream Rust code either, except in edge cases. And those edge cases are only to fix potential security vulnerabilities.
Only if you feel like you need to constantly update your code to be idiomatic, which is absolutely unnecessary. No one has to change their code to conform to whatever the new version of "idiomatic" is. Their code will continue to work, and it won't be "bad".
I don't think it's about giving up, but accepting that churn has costs.
Of the first 15 or so Rust releases, 4 of them caused library code I maintain (timely dataflow) to publicly break. They were all minor "paper cut" breaks that one could fix by changing the syntax, but several were in the public APIs (most were around convenience features, clashing in namespaces). The stated position of the Rust team was that it is ok to break code in this way.
I brought this up with some of the team; I do think they are more aware of it now; many are still more excited to land new things than preserve the experience of the users (I'm ok with that, but it is what it is).
> Rust is trying to solve a set of problems, some combination of which exist in pretty much every programming language that exists today (including Rust).
There's no reason monads shouldn't be a zero-cost abstraction, to the same extent as any other trait (i.e. using them in cases where the type is not statically known would require boxing, but the abstraction is not really a cost overhead in that case, since you simply couldn't write that code at all without it).
Haskell currently uses a garbage collector, as do most languages that we could look to for inspiration, but I don't see that that's an essential barrier. The part I would be worried about doing in the presence of a borrow checker would be being able to form first-class functions, especially closures, but Rust already has those! AFAICS the only other "hard" part is having higher-kinded types and preserving good type inference, but the solution to that is already known (Hindley-Milner).
Here's one simple part of this very large and thorny problem: each of Rust's monad instances does not share the same type signature, even for similar methods. This is because Rust has more novel features that appear in type signatures.
To reduce the problem to a simpler (but useful) one: what would it take to be able to write a generic "compose" function (on function values) that would be polymorphic across at least Fn/FnMut/FnOnce? This is something that people already write the specific versions of; obviously the general version requires some form of HKT, but it's not clear (at least to me) what's blocking that.
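For context, here is what the single-bound version people write by hand looks like today (a sketch; `compose` is not a standard-library function). The pain point is exactly that you must commit to one of Fn/FnMut/FnOnce per definition, or write three near-identical copies:

```rust
// One of the three near-identical versions written by hand today; being
// generic over Fn/FnMut/FnOnce at once is what would need richer
// type-system machinery.
fn compose<A, B, C>(
    mut f: impl FnMut(A) -> B,
    mut g: impl FnMut(B) -> C,
) -> impl FnMut(A) -> C {
    move |x| g(f(x))
}

fn main() {
    let mut add_then_double = compose(|x: i32| x + 1, |y| y * 2);
    println!("{}", add_then_double(3)); // prints 8
}
```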
When I've seen discussions of HKT in Rust it feels like there's an awkward circularity where if you talk about HKT people say the use cases aren't there, but if you talk about things that need HKT people say they can't be implemented because they need HKT. And I'm worried that the conversation about extending the type system has reached a place where adding ad-hoc special cases is seen as the "cautious" approach, when the result is a funny subset of a more general system that's both more complex and less powerful than biting the bullet and implementing the general system.
I am not an expert in this area, but the problem is that these things all have very different kinds of permissions, in a sense... I'm not sure what being polymorphic over ownership looks like.
I don't think that anyone who understands this stuff deeply denies that there's good use cases for HKT. There's just not a design, and we're not sure that one actually exists. And a general perception that GATs will give us most of the big benefits of HKT in a way that feels more Rust-like. That being said, there have been a few attempts at this. We'll see.
Being polymorphic over ownership is the most important problem Rust should be looking to solve, IMO: it would make a lot of other problems trivial. I don't necessarily think it's easy, but I do think it's tractable.
The choice to focus on GATs is exactly what I was thinking of. I think Rust will come to regret it, as it comes with most of the costs of full HKT but leads one into working with a limited subset that's not so theoretically well-founded, and I'm not convinced the forward-compatibility is as good as claimed (the semantics may extend to full HKT, but it pushes syntax in a direction that seems likely to make for a frustrating syntax for full HKT). I hope it ends up working out.
How are GATs any less theoretically well founded than HKT?
I can see how maybe working toward that would be a priority in a PLT research sense, but Rust isn't that kind of language anymore. (This is my own opinion and I'm not on the lang team, mind you.)
> How are GATs any less theoretically well founded than HKT?
I'm not aware of their having been studied to the same extent, so I don't have the same level of confidence that they are well-behaved. For example, is there a known algorithm for doing perfect type inference in the presence of GAT? What about when we add in subtyping, or higher-rank types? Maybe it's all going to be fine, but the further you stray from the path that's been trodden by PLT research, the more I'd worry about falling down a hole.
Self-selection? There are already many other programming languages that appeal more to those who value elegance or minimalism, so those using Rust either lack the organ that senses elegance, or simply picked Rust anyway because of some other reasons.
Go is more like static Python rather than C 2.0. I wish Go was C 2.0, but it isn't with its crazy reflection where one can do basically whatever they want, and garbage collector.
When companies like F-Secure use it to write bare-metal security firmware, CoreBoot uses it for UEFI firmware, or Google creates a hypervisor, TCP/IP stack and GPGPU debugger with it.
The linked article is literally a list of things that are changing too rapidly in Rust to keep up with. Maybe those don't qualify as "big" changes, but then most of C++'s evolution was an aggregation of features that were in isolation fairly small and focused.
* .await. This feature was in development for almost five years. Landed November 2019. This is a legit new feature, with significant ramifications on the language.
* try!() -> ? . This feature was in development for two years, and landed in November 2016. This largely consists of the transformation "try!(x)" to "x?".
* impl trait: this feature was in development for two years, and landed in May of 2018. This has two major parts: one that is very rarely used that is mostly sugar, and another that's pretty useful, and is a legit new feature.
* dyn: this feature was in development for two years, and landed in June of 2018. This is largely optionally adding `dyn` in some places. You don't have to do this. You can turn a lint on that will tell you exactly where to do this, and it can be automatically added to your code.
So that's four features. One of them three years ago, two two years ago, and one last year.
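For anyone who hasn't used it, the try!/? change in particular is a purely local rewrite. A sketch (the function name is made up for illustration):

```rust
use std::num::ParseIntError;

// Before the `?` operator landed you wrote:
//     let n: i32 = try!(s.parse());
// `?` is the postfix spelling of the same early-return-on-Err expansion.
fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.parse()?;
    Ok(n * 2)
}

fn main() {
    println!("{:?}", parse_and_double("21")); // Ok(42)
    println!("{}", parse_and_double("nope").is_err()); // true
}
```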
That's awful hard. People have to learn an entire new feature each year? :D
I don't get why people complain. C# is pumping features at a much higher rate and most people are glad.
Those features do solve some issues or make it easier to solve them. If you don't have a use case for them, then you don't need to learn them.
Not necessarily. The way that new features fit together with existing features has a big impact on how useful and beneficial adding those features are.
In C++, using modern style is a nicer experience, but at the cost of having to stay away from any pre-C++11 code. This is not because the C++ designers are bad, but rather because they have more cruft, due to existing for longer. Time will tell if Rust goes down that same road, but right now new features fit with the rest of the language and are usually extensions of things that already exist but weren't allowed in some places.
But... again... the contention of the linked article, and one I share, is that these new features don't "fit with the rest of the language" and keeping up with them represents needless cognitive churn.
I mean, sure, that's an opinion thing, and obviously you don't share it. But to me, watching from outside the core rust community, it sure seems like exactly the same thing that happened to C++.
It really, really, really needs to be stated that the thing that triggered this whole discussion is not a "new feature", it is not even an RFC, it is a blog post written by a prolific Rust developer who also turned the idea into a library, and wanted to share the merits of the idea.
The fact that it's controversial and "doesn't fit with the rest of the language" means it will very likely never become an RFC in its current state, much less become part of the language.
There is no need for the author to make constant changes to his code. It will continue compiling just fine. Moreover, new "editions" come along with nice documentation on what changed and tooling to automatically fixup code, and are totally opt-in. You can simply wait for a new edition to come along to update your code -- or not, it doesn't ultimately matter that much.
I mostly wrote C++98, or at least, C++ of that vintage, so I can't really speak to how hard it is to keep up with later C++ standards. Just providing more context for that specific list of changes, and their frequency.
I don't think it's really fair to talk about pre-1.0 language changes -- by definition, being pre-1.0 typically implies a lack of stability. One of the major editions you mention is the 1.0 release.
So, there has been 1 new edition since 1.0, and, as the parent comment says, the only really groundbreaking change since 1.0 is the stabilization of async support.
Other changes, such as the `dyn` keyword and match ergonomics are mostly superficial, and things such as NLL and additional lifetime elisions are just simple quality of life improvements that don't fundamentally change the language.
Not only is it unfair, it's counterproductive - the lesson future languages will learn is to not work in public and to not improve. Rust pre-1.0 was a mess and nobody would have been better off if it had gotten frozen with all the not-so-great stuff officially supported.
I've been writing Rust since before 1.0, and I maintain at least one piece of 5-year-old production Rust code.
In general, old Rust code continues to work almost completely unchanged. There have been 3 or 4 cases where I couldn't compile an ancient dependency because it was relying on something weird like a soundness bug. The fix was usually to update the dependency to the latest minor revision.
Rust 2018 fixed a few minor syntactic issues, adding the `crate` keyword for imports from the current crate, and `dyn` in front of dynamically dispatched traits. These can both be updated using `cargo fix` in about 20 minutes. Rust 2018 modules can use Rust 2015 modules almost completely seamlessly, even when macros are involved.
Is it perfect? No. But updating older code is something I spend a day or two a year on.
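The `dyn` rewrite in particular is purely mechanical; here's a minimal sketch of the before/after (the trait and names are made up for illustration):

```rust
// Rust 2015 accepted a bare trait name as a trait-object type;
// Rust 2018 wants an explicit `dyn`, which is what `cargo fix` adds.
trait Greet {
    fn hi(&self) -> String;
}

struct En;
impl Greet for En {
    fn hi(&self) -> String {
        "hello".into()
    }
}

// 2015 style (now deprecated): fn greeter() -> Box<Greet> { Box::new(En) }
// 2018 style, as rewritten by `cargo fix`:
fn greeter() -> Box<dyn Greet> {
    Box::new(En)
}

fn main() {
    assert_eq!(greeter().hi(), "hello");
}
```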
If by "its early days" you want to talk about pre-1.0 Rust, then I think a fair comparison is pre-C++98 C++, from the original Cfront. At that time C++ was just C with classes. It didn't have templates, namespaces, and plenty of other indispensable features in modern C++.
Bryan Cantrill's talk about Rust addresses when they removed green threads, and how much of a bold, disruptive change that was. He remarks on how that impressed him with the Rust community and made him follow the language more closely - had they been too scared to change the language in drastic ways, we would still be stuck with green threads.
Sorry, I can't remember the timestamp right now but the entire thing is worth a listen:
You mean pre-1.0. There has been one major edition afterwards, Rust 2018, which is optional but has awesome new features like non-lexical lifetimes, async/await keywords and I guess `dyn`.
In the parlance of the Rust developers, a "feature" is often just the addition of a single function to the standard library. The only remotely paradigm-shifting feature to be added since 1.0 was async/await; the other things mentioned in the blog post are minor QOL nice-to-haves.
Interesting. I witnessed a constructive collaboration by which the Rust community chose (mainly) the syntax for its async features. I thought of it as amazing, thoughtful, rather democratic and on point. Nothing war-like to me.
To be honest, reading mainly blog posts, RFCs and summarisation comments, it did look like a thoughtful and organised process. However, there has been a constant problem of tiresome work and "intellectual and emotional labour": reading thousands of pieces of feedback, some of which are emotionally charged, many of which are duplicates or re-statements of similar ideas, and many of which contain misunderstandings or mis-valuations of other ideas. I've heard that collecting and editing this huge amount of unstructured feedback into comprehensive and balanced summaries has been a huge drain on the participants' mental energy.
I wasn't involved at all in the discussion, but having dug deep into async over the past few months since async/await was stabilized, I'm definitely happy with the result of that process. Writing code with postfix `.await` feels very natural to me, and it fits in much better with the surrounding code I write than a prefix version would. I can sympathize that participating in that process might not have been fun, but I really hope the results of future discussions make me as happy as this one has.
> Writing code with postfix `.await` feels very natural to me, and it fits in much better with the surrounding code I write than a prefix version would.
That’s your subjective point of view; not everyone agrees with it. Part of the problem is that the Rust team failed to find a solution good enough for most people, and failed to communicate this decision. In particular, the switch to `.await` syntax happened just a couple of weeks before it was finally stabilized.
And await is not the only controversial decision made by the Rust team.
Even if 10% of people are unhappy, that’s too much, because the next controversial feature will make another 10% of people unhappy, and so on.
I might be wrong but for C++ for example every C++ fan seems to be happy about every new release (at least happy with new features, not lack of desired new features).
Same for Java, IIRC the only controversial decision they made was the introduction of modules.
Python 3 was a mistake, but generally Python users are happy with Python changes.
10% is not impossible. Even if this decision is "right", communication could have been better: more options to play with different syntaxes, and more time to get used to the new syntax before stabilizing, are just two possible ways to deal with the angry mob.
> I might be wrong but for C++ for example every C++ fan seems to be happy about every new release (at least happy with new features, not lack of desired new features).
I believe you are mistaken. The C++ community has a lot of diverse opinions about its features. As an old saying goes, it is possible to use only the good subset of C++ features; the problem is that every shop has its own subset. Even about something as basic as exceptions there is no consensus. Similarly, not everybody is happy about the "modern C++" movement. I guess these people are just confident that, thanks to a serious commitment to backwards compatibility, they will be able to continue writing code in their old grumpy ways, so there is no need to loudly express their dissatisfaction.
Rust community is much less fractured (in part because it is smaller and younger) and there is a sense that decisions made today will determine how everybody will write Rust code tomorrow. Combine that with efforts to encourage an inclusive discussion and you get a lot of disagreement visible in discussions, especially concerning aspects that are prone to bikeshedding.
> I might be wrong but for C++ for example every C++ fan seems to be happy about every new release (at least happy with new features, not lack of desired new features).
Coroutines were quite co_ntentious.
Maybe the difference is that very few C++ developers live on the bleeding edge. I don't use anything unless it's available from the default packages on an LTS release. So, I won't be using C++17 until I update to Ubuntu 20.04 later this year. And compared to the older developers in my lab, I'm an early adopter.
By the time new features reach the average C++ programmer, they're just how things are. The pain and the fighting of the standardization process is just distant memory that happened to somebody else.
> I might be wrong but for C++ for example every C++ fan seems to be happy about every new release (at least happy with new features, not lack of desired new features).
If anything if 10% of C++ users are happy with the new features in a release it's a great success. I see a lot of complaining about the committee, its decisions, and its decision-making process.
C++11 introduced std::async and std::future. Pretty much everybody uses something else nowadays (std::async is a deprecated footgun, std::future should have been a Concept instead of a concrete type, etc.) and some parts of this design impact new language features like C++ coroutines, which 50% of the people say are good enough, and the other 50% say "they are not zero cost and as fast as Rust's".
Major new features of this size in C++11 are lambdas, variadic templates, auto, and rvalue references (move semantics). C++14 adds generic lambdas. C++17 adds constexpr if and structured bindings. C++20 adds concepts, coroutines, and modules.
The syntax had been proposed long before then; I (along with others) had been advocating for it the prior year. The May 6 blog post was merely the point at which the blog post author relented to the rest of the lang team to bring it up for a vote.
As for your second sentence, RFCs are almost always accepted before they are implemented. There is an entirely separate round of review and approval before an accepted RFC can become a stable feature of the language.
I tried to contribute to the discussion and actually didn't find it to be super productive. We had thousands of comments just around syntax - from people who had never been involved in any async/await design or usage before. It was super hard for everyone involved to just keep up with the comments.
And while the syntax discussion saw a ton of comments, the more important underlying semantic aspects of async/await got a lot less attention - even though they also had and still have a few issues to resolve (e.g. around cancellation, thread safety, extensibility).
That's not really what happened: the core team decided they'd use `.await` instead of a regular prefix await (with good rationale), and many people were pretty uneasy about that, to say the least: most people considered it ultra ugly and confusing for new Rust users (and I'm still in that camp, even though I'm glad the discussion is finally over).
I used to dislike the "field-like access" syntax because I was used to "await" being a prefix in other languages, but using await many times in longer function call chains with multiple await calls has totally changed my mind. No need for nested parentheses. No need for splitting up the code just to make it more readable.
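To illustrate the chaining point, here's a hedged sketch: the hand-rolled busy-polling executor and the two async functions below are all made up for the example (a real program would use a runtime like tokio), but the `pipeline` function shows how postfix `.await?` reads left-to-right where a prefix form would need nested parentheses:

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A do-nothing waker: enough to drive futures that are immediately ready.
fn noop_raw_waker() -> RawWaker {
    unsafe fn clone(_: *const ()) -> RawWaker {
        noop_raw_waker()
    }
    unsafe fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    RawWaker::new(std::ptr::null(), &VTABLE)
}

// Minimal single-threaded "executor": busy-polls the future to completion.
fn block_on<F: Future>(fut: F) -> F::Output {
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

// Two made-up async steps standing in for real I/O.
async fn fetch() -> Result<String, ()> {
    Ok("  42  ".to_string())
}
async fn parse(s: String) -> Result<i32, ()> {
    s.trim().parse().map_err(|_| ())
}

// Postfix `.await?` chains read left-to-right; a prefix `await` would
// force something like `(await parse((await fetch())?))?`.
async fn pipeline() -> Result<i32, ()> {
    Ok(parse(fetch().await?).await?)
}

fn main() {
    assert_eq!(block_on(pipeline()), Ok(42));
}
```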
Syntax is something very emotional and beauty is very subjective. Thanks to the Rust team, the discussion around await syntax was mostly focussed on rational arguments, not around perceived beauty. The outcome is great in my opinion.
I guess the syntax was already a given, so they just did it regardless of the criticism. I still think there were other, better, postfix options.
I also guess that, since part of the implementation was sponsored by parties other than Mozilla itself, the syntax choice was likewise not led by Mozilla itself (nor by the overall community).
I could be wrong - and I only superficially followed the discussions - but this is the impression that is left on me after all that.
edit: but I think that this Ok-wrapping feature is different from the await one, since now there is no more deadline-like pressure on the Mozilla/lang team. So I guess that now, something that is generally disliked (if it is) won't actually get into the language.
> I need to now change my code in order to keep it idiomatic
I'm not a Rust person so maybe I'm missing something, but why does the OP need to keep the code idiomatic? Can't the changing fashions in what's idiomatic be ignored?
> Can't the changing fashions in what's idiomatic be ignored?
They absolutely can, but you might be stuck on an older "edition" of Rust. I've stuck with `try!()` for error-handling, because I think error-handling deserves more prominence than a single character. But that means my code is stuck on Rust 2015. If something I need is added to Rust 2018 or a later edition, I'll be forced to update or backport.
> If something I need is added to Rust 2018 or a later edition
Most new language features added to Rust are added to the Rust 2015 edition too (e.g. NLL). If you need a language feature that is added to a later edition only (like the `try` keyword), that's because the feature is incompatible with code used in earlier editions (which may have declared a macro with that keyword's name).
You can continue using the macro by just using raw-identifier syntax.. but my old Rust code tends to "just work", so I don't really need to update it. My crates are also very small, so if I ever needed an update, it wouldn't take too long - but I haven't had to do that yet, so can't comment.
I tend to write most of my new code on new crates, so I default them to whatever edition was the default when I create them, and never look back.
I have a lot of 2015 crates in my dependency tree, and that works just fine.
It looks like they might need to use "r#try!(...)" in Rust 2018 (a trivial search replace, 'cargo fix' might do it too, not sure), but the macro is still available:
Thanks. Didn't know about the raw identifier syntax. That's kinda ugly, so I'd probably still go with copying it to all of my projects and giving it a new name if I were to update to Rust 2018.
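For anyone else who hadn't seen it, the raw-identifier escape hatch looks like this (a toy example; the same `r#` prefix is what makes the old macro callable as `r#try!` on Rust 2018):

```rust
// `match` is a keyword, but the `r#` prefix lets it be used
// as an ordinary identifier anyway.
fn r#match(needle: &str, haystack: &str) -> bool {
    haystack.contains(needle)
}

fn main() {
    assert!(r#match("oo", "foo"));
    assert!(!r#match("zz", "foo"));
}
```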
`try` became a reserved keyword in Rust 2018 and the `try!()` macro was dropped (edit: not dropped, see @boardwaalk's comment). I could copy the old macro to all of my code bases and give it a new name. The two things stopping me from doing that are (1) I don't have any reason to update to Rust 2018, and (2) I can't think of a good name for a replacement macro. I'm thinking `check!()`, but not sold on the name.
> (not a Rust user)
One thing to know about Rust 2015 vs 2018 vs future "editions" is that they're a distinct versioning mechanism from Rust 1.23, 1.24, etc. The latest version of the Rust compiler still supports Rust 2015 and I believe it's been promised to be supported in perpetuity. So it's not like I run a risk of being without a Rust 2015 compiler available.
Just to expand on the editions things, editions are local to the crate. A Rust 2015 crate can freely use a Rust 2018 crate and the reverse. So davidcuddeback isn't being locked out of using up to date dependencies either.
If you don't update over time there's a danger that you won't be able to interoperate with other software, or you might not be able to take advantage of new features. It's easier to make small changes over time than to do a big-bang upgrade with several years of changes.
Rust 2015 and Rust 2018 have no trouble interoperating with each other in both directions. If you want to update, that's great, and there are automatic tools to help you, but you don't need to feel compelled to do it by the pace of language development.
Even with the Futures ecosystem having a massive change first with the std::future impls and then with async/await, there are backward compatible libraries that make it possible to inter-operate between the two.
You probably don’t want to live like that forever, but it’s a way of not being “required” to upgrade large code bases.
Even if you do believe that it's for "no practical benefit", the value of _x_ in Rust's case is ~156. (I.e. roughly once every three years, when a new "edition" is stabilized.) And the transition is mostly automated.
Speaking as somebody who has maintained several medium sized Rust code bases over the years, I can comfortably say that it's not a big deal.
What _has_ been a big deal, on the other hand, is getting excited about what I can do with extremely unstable libraries, and then spending a lot of time keeping up with their breaking changes. But I'd be a bit silly to complain about this; in each case, I opted in to an immature ecosystem very early on because I was excited about being part of its early growth. I don't expect to have my cake and eat it, too.
I'm getting seriously concerned about feature-creep in Rust. I understand that there was an initial period of rapid growth as the community figured out what was needed, and that some of Rust's syntax sugar makes a very real difference in productivity.
But Rust is already not a simple language. It already has long compile times, and some of its syntax sugar already makes it harder to understand what's really going on when you invoke it. I think it's important that the Rust community start reeling in frivolous sugar, otherwise it may just become C++ all over again.
> Can you name some examples of "frivolous sugar"?
The one mentioned in the article seemed fairly frivolous to me. I don't know if I can name one that's already been accepted into the language which I would call frivolous, per se, though they're all varying degrees of necessary. But most of the "sugar" syntaxes have made the language harder to understand for newcomers, even the ones that are arguably necessary. They all come with a cost.
The ? syntax for Results, for example, rubs me the wrong way a little bit. In that case it really does eliminate a large amount of code and is probably worth the trade-off. But now instead of Results just being a monad that can be picked apart like any other data structure, they become a Language Feature that gets special, magical treatment not expressible by the type system. You just have to know what the question mark translates to underneath. Worthwhile or not, this makes me sad. Async/await is a similar case.
> Also compile times have been a big focus of the compiler team, and have been going down YoY.
Yes, but adding complexity to the language permanently increases the difficulty of that task. Even if only by a little bit, it adds debt that will have to be reckoned with for the rest of the language's lifetime.
> But now instead of Results just being a monad that can be picked apart like any other data structure, they become a Language Feature that gets special, magical treatment not expressible by the type system.
I believe, but please correct me if I'm wrong, that the try operator (`?`) does nothing which lies outside of the type system. It just transforms
let v = expr?;
into
let v = match expr {
    Ok(v) => v,
    Err(e) => return Err(e.into()),
};
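A concrete pair of functions showing the two spellings side by side (the function names are made up for illustration):

```rust
use std::num::ParseIntError;

// With the `?` operator:
fn parse_sum(a: &str, b: &str) -> Result<i32, ParseIntError> {
    Ok(a.parse::<i32>()? + b.parse::<i32>()?)
}

// The same function with the match written out by hand:
fn parse_sum_explicit(a: &str, b: &str) -> Result<i32, ParseIntError> {
    let x = match a.parse::<i32>() {
        Ok(v) => v,
        Err(e) => return Err(e),
    };
    let y = match b.parse::<i32>() {
        Ok(v) => v,
        Err(e) => return Err(e),
    };
    Ok(x + y)
}

fn main() {
    assert_eq!(parse_sum("2", "40"), Ok(42));
    assert!(parse_sum_explicit("2", "oops").is_err());
}
```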
Maybe "by the type system" was the wrong way to say that. What I meant is that if I wanted to write my own Result, say Result2, it wouldn't be included in the question mark syntax. Perhaps I could write a macro for it, I guess, but the real one is not implemented as a macro and if I wanted to go use my knowledge of the language fundamentals to read the standard library and figure out what it does, I couldn't do that. It just is that way.
This is being worked on. See the Try trait in std::ops. It's implemented for Result, Option and Poll (the `try!` macro only ever worked with Result). Implementing it for your own types is currently unstable because sorting out all the edge cases is proving to be difficult.
But the intent is definitely to stabilize it once a good general solution is found.
That's good to know. And I suppose there are other cases where "special language features" can be hooked into via the (wonderful) traits system, like Drop, so I guess that does go some way toward relieving the sting of having magical syntaxes.
'?' is an almost completely useless language feature IMO.
The second block is much clearer and uniform. When reading the first you implicitly read it as the second, which introduces mental overhead.
The only upside of having '?' is to write less code, which is the worst kind of syntactic sugar: It makes the writer type less for a couple of seconds, but the result is harder to read.
Now even if one doesn't agree with the above it is still a bad feature. Why? Because the upside of it just doesn't outweigh the cost of having any syntactic language feature like this.
And I don't just mean the energy invested in introducing it into Rust, but also the continuous cost of having to deal and respect it in future changes.
This is a bad case of hyper-optimization within a very narrow scope combined with bikeshedding.
This is all subjective of course. I for example feel the exact opposite of what you described, and I think the overwhelming majority of Rust coders prefers it to explicit matching (which I find is just code noise).
I’m not sure how much Rust code you’ve actually written, but almost uniformly everyone in the Rust community prefers `?` to `try!` and the explicit matching that you suggest. The fact that other languages are adopting this [1] suggests that your preference is not the popular one.
And I’m pretty sure there isn’t “continuous energy” being invested in supporting this. It’s primarily a one and done feature...
I made it very clear that this is my personal opinion. I'm aware of the fact that the feature made it into the language, because it had widespread support.
I still don't think it is a good feature.
> I’m not sure how much Rust code you’ve actually written
Only as much as someone who finds time on the side to learn the language. There are many things that make this process nice, for example the compiler messages, clippy, traits, the functional API on iterators and so on.
But I've read substantially more code than I wrote. And as I stated the ? operator seems to be primarily a feature that makes writing more convenient. Not easier, not clearer, just shorter.
The pattern matching syntax expresses what it does very clearly and explicitly in a nice, tree branching like structure. And it is also documenting the code more directly.
Again, even if you and "almost uniformly everyone" disagrees with this. There is still the more general point of: How useful does a feature need to be to be considered at all? "Most people find it nicer to write" is in my opinion too weak.
> And I’m pretty sure there isn’t “continuous energy” being invested in supporting this. It’s primarily a one and done feature...
To be frank I find this to be shortsighted. Every syntactic feature adds baggage, that has to be dealt with continuously on some level. The feature already was limiting the syntactic possibilities of async/await for example, which is in my opinion a much more important and impactful feature. This is the kind of evolution that is painful to see, especially since this is a repeating pattern with evolving languages.
I like, even admire a lot of things about the language but I feel the weak point of Rust is its syntax. Especially for such a young language there are already too many ways to express the same thing. And this might be the fundamental point where we disagree.
I don't completely buy the argument for `?`. Yes, you have to know what it compiles down to, but the same goes for any other part of `std::ops` (like Deref, Index, etc.). In that way the addition of `?` as `std::ops::Try` doesn't imply "special, magical treatment not expressible by the type system", but gives it the deserved representation in the type system that was possible anyway, and hooks it up with a syntax that is only achievable with compiler support.
Yes. I've used Rust (in a hobby capacity) for a couple years now, I've written a fair amount of C++ and have a clear understanding of how pointers and references work, yet these two cases you mention sometimes remain mysterious to me. I've developed an intuition for "how to use them" but I don't always know exactly what they're going to do, because their meaning shifts under my fingers to try and match up with what's most convenient. I've just learned how to fiddle with them until the compiler accepts them. There's a fog over the code when they come into play.
Lots of operations on Option and Result, for example, and more and more added constantly. They don’t make code readable, and it’s hard to remember them all; often a simple match is easier to write than trying to remember which suitable function exists for the operation.
> Lots of operations on Option and Result for example, and more and more added constantly.
You mean methods? I actually don't have any issue with adding more methods and trait implementations, because those can be understood within the existing framework of language-level concepts. Of course the docs also need to highlight the "normal" usage with examples, but those do exist for Result, just not on this page:
> See the std::result module documentation for details.
std::result has quite a good explanation of "what are Results for and how are they normally used".
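For what it's worth, both styles remain available; a small made-up example of the same fallback logic written with combinators and with explicit matches:

```rust
// Combinator style: chain `and_then` and `unwrap_or`.
fn port(cfg: Option<&str>) -> u16 {
    cfg.and_then(|s| s.parse().ok()).unwrap_or(8080)
}

// Explicit-match style: every case spelled out.
fn port_match(cfg: Option<&str>) -> u16 {
    match cfg {
        Some(s) => match s.parse() {
            Ok(p) => p,
            Err(_) => 8080,
        },
        None => 8080,
    }
}

fn main() {
    assert_eq!(port(Some("3000")), 3000);
    assert_eq!(port(None), 8080);
    assert_eq!(port_match(Some("oops")), 8080);
}
```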
Yes, this is my concern as well. I used Rust semi-seriously around the time it launched 1.0 (the cake was delicious!) but as an ex-professional C++ dev who loathes C++ very very much, I have a hard time finding Rust any more palatable these days. I'd probably still choose C if I needed to.
Not really going to address the rest of the post but I wanted to point out that `impl Trait` hasn't really changed what's idiomatic. People still use generics in argument position, and while I've seen a couple instances of people using impl Trait in argument position it's not enough to call it an idiom shift.
impl Trait in return position has changed how people code; but that's because it made certain things suddenly possible, which isn't an idiom change as much as obsoleting some old bad workarounds.
I'm curious to know why you felt like you had to change your code to be more idiomatic with impl Trait.
----
Nor does it seem like Ok wrapping is the kind of feature that would change what idiomatic Rust is.
This is an anecdote, but I'm someone who keeps up rather zealously with language updates, and even though I write a lot of generic-heavy code, I don't think my code contains a single instance of `impl Trait` in either argument or return position, across over 60,000 lines of rust code
Do you ever write code that returns, e.g., an `Iterator`? For example:
fn my_map<T: Mul<Output = T> + Copy, I: Iterator<Item = T>>(it: I, x: T) -> impl Iterator<Item = T> {
    it.map(move |i| i * x)
}
`impl Trait` in return position is the only way to write that kind of code, because you can't name closures. You can workaround this by re-implementing a custom `Map` iterator... Or if you are dealing with `Future`s, by writing a new custom `Future` type, but with `impl Trait`, you don't need to.
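Spelling that sketch out as a compilable example (with the bounds the closure actually needs; the names are illustrative):

```rust
use std::ops::Mul;

// Returning `impl Iterator` hides the unnameable closure type
// inside `Map`, which is exactly why this signature needs it.
fn my_map<T, I>(it: I, x: T) -> impl Iterator<Item = T>
where
    T: Mul<Output = T> + Copy,
    I: Iterator<Item = T>,
{
    it.map(move |i| i * x)
}

fn main() {
    let doubled: Vec<i32> = my_map(1..=3, 2).collect();
    assert_eq!(doubled, vec![2, 4, 6]);
}
```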
I use it a lot. What before required a lot of boilerplate, now is a one liner.
> impl Trait in return position has changed how people code; but that's because it made certain things suddenly possible, which isn't an idiom change as much as obsoleting some old bad workarounds.
this isn't a new idiom. this is code that wasn't possible before suddenly becoming possible, and people using it.
> The .await war was the final straw to make me stop watching the Rust community beyond what comes in the release notes.
Regarding the error-checking syntax the article may be right, but as far as I saw, the await addition to Rust went perfectly: it became stable when the decision was made (and a state-of-the-art memory model was created for async calls), and nobody had to use it (or even follow it) before that.
I was referring to the fact that it is the only async implementation that I know of that compiles to a state machine that doesn't need to allocate new memory for each continuation. I think it will translate to extremely fast web services in practice.
Pinning is of course part of the elegant solution. There are some great videos about it:
> I think it will translate to extremely fast web services in practice.
In practice, it's probably not going to matter all that much because the average web service does way more time-expensive things like disk IO or SQL queries. The more significant effect is that async-await being really really fast enables you to use it in more places. For example, if you have a tight computation loop where different aspects of the logic are meshed together in a hard-to-read way, now you could try detangling it into several async actors talking to each other to make it more readable, without utterly destroying the performance characteristics.
I actually haven't heard of any - and I'm pretty deep into the async things. Rust's async model just follows the synchronous model. Nothing is thread-safe unless it implements the `Send` or `Sync` marker traits. Futures which can be awaited are either not thread-safe - which means they can only be executed by a single-threaded executor - or, if they are `Send`, they can be migrated between threads and executed by a multithreaded executor - like tokio.
But those are just thread-safety markers, not a memory model.
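A small sketch of how those markers surface in practice (the `assert_send` helper is made up for the example):

```rust
use std::rc::Rc;

// A helper that only compiles when its argument is `Send`.
fn assert_send<T: Send>(_: &T) {}

// Holding an `Rc` across an `.await` point makes the whole future
// `!Send`, so a multi-threaded executor would reject it at compile time.
async fn not_thread_safe() -> i32 {
    let local = Rc::new(41);
    let step = async { 1 }.await;
    *local + step
}

fn main() {
    let fut = not_thread_safe();
    // assert_send(&fut); // error: `Rc<i32>` cannot be sent between threads
    let _ = fut; // a single-threaded executor could still poll this

    let plain: i32 = 2;
    assert_send(&plain); // ordinary `Send` data passes the check
}
```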
Not sure I agree there. The language progresses with features that OP himself says he likes, and he also says that they DO maintain backwards compatibility, what more to ask for?
The author would be better off revisiting their code on a more relaxed cycle, for example every 18 weeks. Or whatever works for them - or just let go of the need to be using 100% of the latest best practices.
One of the main features I look for in a language is stability. I just don't want it to change that much other than adding functionality to the standard library.
I know that's not the fashion currently, but it's probably one of the most valuable things to me.
Have you used Rust? Rust has made a strong guarantee to always be backward compatible.
We don’t want dead languages, look at Java. It was stagnant for a long time, but has now shifted to a better delivery methodology of new language features. Rust is the same and continues to improve. Continues to make things easier and better to use without giving up its speed and low overhead goals.
Why does a language need changes to be alive? We don’t ship languages, we ship code written in them. It’s like claiming wood is dead because we have vinyl.
The comment you're replying to gave a good example - the history of Java.
If you're shipping code and modified code, you care about the language in which you can make those modifications. A frozen language means that either those modifications are harder (because potential improvements to the language are ignored) or they require writing components in an entirely new language and figuring out interop.
(If you're not shipping modified code, then you don't care if the language changes after you ship, anyway. You shipped, and then you're done.)
> A frozen language means that either those modifications are harder
I don't agree that it's harder. What is definitely harder is not being able to ship bug-fixes or modifications without ripping everything up because the language has moved on since your last release. And that is very common when developing for, as an example, iOS, since Swift is a fast-moving language that doesn't maintain backwards compatibility. The benefits of having some new language feature in Swift are far outweighed by the downside of existing codebases being invalidated. The various languages in the Javascript family suffer from this as well. The Python 2 -> Python 3 debacle was another example of this.
I have dusted off 20-year-old Java code which compiled and ran just fine. That is extraordinarily valuable to me, and requires a lot of discipline from the language maintainers. In fact, the new faster pace of Java iteration could be its downfall; time will tell.
A last note: how many language features from the past 20 years really matter? How many really speed up development, improve maintainability, etc. I would say that there are very few. In fact, perhaps the only one that passes that bar might be async/await type threading advancements.
Yeah, there's a big difference when you're targeting platforms (iOS apparently and to a lesser extent the web) that move. But if you're writing Python, Python 2 works better today than it did five years ago. The most recent Python 2-compatible version of every library that existed five years ago works at least as well as it did then, if not better.
People move to Python 3 not because Python 2 is unusable - it was perfectly usable five years ago and the bits haven't disappeared from our world - but because there's lots of small things that make development easier, faster, more pleasant, and more robust. I don't think there's any single feature you can point to, but there have certainly been countless little things where, when I work on a Python 2 codebase these days, I say "I wish this were Python 3."
Anyway, Rust in particular committed to indefinite backwards compatibility when they released 1.0, and the "epochs" system has been a good (post-1.0) implementation of this. They realized they wanted some keywords that they didn't reserve, some syntax that they didn't define, etc., so they said there are two epochs, 2015 and 2018. The compiler handles both, but the newer syntax only works in the 2018 epoch. If you have code that was written pre-2018, it'll keep working indefinitely, even with new compilers.
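For anyone who hasn't seen the mechanism: what shipped is called "editions," and it's just a per-crate setting in Cargo.toml. A sketch, with a made-up crate name:

```toml
[package]
name = "my-old-crate"   # hypothetical crate
version = "1.0.0"
edition = "2015"        # keep compiling under the 2015-era rules indefinitely
```

Crates on different editions can depend on each other freely, which is why old code keeps working alongside new code.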
Because pre-C++11 is a nightmare to work with, but since C++11 and especially C++17, it became a somewhat enjoyable language. Similarly, pre-ES6 javascript is utter crap, but since then it became quite nice.
As a software engineer, I recognize that my own code is incomplete. Will it ever be complete? Will it have all the needed features to meet ever changing demands? As the system it’s built to support grows in use, will those demands change?
Languages are the same. Decisions in the past may be discovered to be incorrect in the future. Some features make the language far easier to work with and built better more stable software.
In Rust there are some big ones. The ? operator, for easier early returns when errors are encountered, simplified my code and made it easier to read. async/await vastly simplifies Future-based code; based on my own experience, there was about a 30% reduction in code when I converted old hand-written Futures to the new model. These are big, awesome features in Rust that most people who use them find to be great improvements for productivity.
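For anyone who hasn't seen the ? operator, it replaces the old match-on-every-call pattern. A minimal sketch (the function names are made up):

```rust
use std::num::ParseIntError;

// Without ?: every fallible call needs an explicit match.
fn double_verbose(s: &str) -> Result<i32, ParseIntError> {
    let n = match s.parse::<i32>() {
        Ok(n) => n,
        Err(e) => return Err(e),
    };
    Ok(n * 2)
}

// With ?: the Err case returns early automatically.
fn double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.parse()?;
    Ok(n * 2)
}
```

The two functions behave identically; the second just pushes the error-propagation boilerplate into the language.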
Your attempt at an analogy is off, because neither wood nor vinyl are living things. I think if we turn to other engineering fields it's more obvious. For example, batteries. We've had batteries since nearly the time that electricity was discovered. If they did not improve over the years, would we be able to build high-performance long-range cars with them? E-bikes with them?
Or something else: hammers are fine for nailing things, but nail guns are amazing. Have you ever seen someone lay a top-nailed floor with a nail gun? I have, and my jaw hit the floor as I watched how fast they were.
I want my languages to be the same. Improve things that make the language better at working in difficult engineering spaces (for us that might be embedded), and improve it to make it more productive so I can build better software faster.
My analogy was to compare wood and vinyl as building materials for flooring (I guess I omitted that) and asking if the existence of something newer invalidates something older.
I am skeptical that all language features make code better. It shifts the complexity from the code to the programmer. In order to read a piece of code, I need to potentially keep up with all of the new language features, which is a huge burden in C++.
Those new features also have a tendency to complicate otherwise simple things, like all of the edge cases that arise from combining move semantics and exceptions. Sure, the code might look simpler, but there is a lot more going on in the background that you have to keep track of.
When I write C++ now, it is typically as C with a few constexprs and templates thrown in. I try to avoid most of the new features because they just distract me from writing code that works.
I program to solve problems. Changes in a language or worse, "how it is done" mean that I must keep up with something else in addition to solving the problems before me. Waves of increasing syntactic sugar in a language begin to look like layers of frippery and frosting, and I can only remember that "fashion is a form of ugliness so intolerable that we have to alter it every six months." I do not want to look at some bits of upper-row "executable line noise" that is somehow called syntactic sugar and try to have to mentally project what surface lies under that frosting.
What I want is to be able to look at ten year old code and have it immediately be working and understandable. I don't want it to be out of style or "uncool", or to even have to think about these things.
By all means, fix bugs. Add libraries as new formats, protocols, and technologies emerge. Huge mistakes can be corrected in rare major version updates, which may have to count as actual forks in the language. When I first looked at Python, somewhere around 2004 or 2005 if I had to guess, I remember seeing that whole floor division bit and thinking, "Yeah, they're going to have to change that eventually and digging in their heels on that one was foolish." That's a rare exception, and it counts in the "why did this make it to 2.x?" category.
I feel a bit lonely in this, but I just cannot seem to embrace the churn.
Rust isn't even five years old yet (its birthday is in May). It's not entirely surprising that it has gained a few major features in its early years that weren't ready for the 1.0 release, or were found to be useful once more people had actually started using the language.
It's that sort of thing that has pushed me in the direction of, if there's no other pressing reason to adopt a language, selecting those which have been stable for a while.
I've been meaning to write a blog post on this topic. Maybe I'll do it tomorrow. But, I've been thinking about this question too, or at least a related one: how many new features does Rust get, and how often does it get them? I'd like to bring some data to this discussion.
In 2019, we had eight releases: 1.32 to 1.40.
- 1.32: one language adjustment, finishing off previous module work
- 1.33: const fn can do a few more things
- 1.34: you can do a bit more with procedural macros
- 1.35: a few minor details around closures
- 1.36: NLL comes to Rust 2015. Not really a new feature so much as backporting it to an older edition.
- 1.37: you can #[repr(align(N))] on enums
- 1.38: no real language changes
- 1.39: async fn! some adjustments to match. Some more borrow checker adjustments.
- 1.40: #[non_exhaustive]. The 1.39 borrow checker adjustments are ported to an older edition. Macros can go in a few more places.
Really, 1.39 was a huge release, and other than that... all language changes were very minor, mostly sanding off some edge cases, making some things more orthogonal, stuff like that.
Most releases these days add some standard library APIs, maybe we have some toolchain improvements... but for all of 2019, we had one feature I'd call truly major.
I suspect that things were a lot more hectic previously, but the language has slowed way, way down recently. We have had two releases so far this year, and I would argue that the only real language feature was an expansion around match patterns. Which again, isn't so much a new feature as it is adding a little bit to an old one.
Churn is real. It's important to keep in mind. We've said for a very long time that Rust's churn rate would slow down. It's really pretty low at this point. Or at least, in the language.
One person on the lang team writing two blog posts fleshing out an idea does not mean the sky is falling.
My main concern is with new syntaxes. Expanding use-cases for existing ones, loosening restrictions that can be safely loosened, etc. are all well and good. But layering on new sugar-syntaxes that aren't truly necessary makes the language much harder to mentally grok over time. All of a sudden you can't just derive what's happening by studying the code, you have to add new pieces of arbitrary information to your mental toolbelt first. The Ok(..) one mentioned, from a quick glance, seems really unnecessary.
Your general concern, which is that "new syntax makes things less explicit, and is therefore bad", isn't logical.
Adding new syntax has many costs (not only one), but it can also have benefits (hence why it is being proposed), and the real question is whether the benefits outweigh the costs.
That is, whether it is a trade-off worth making.
In the blog post, the author quantifies the actual costs of not having Ok-wrapping on their own code, as well as the costs of adding Ok-wrapping to the language, and for them, the numbers suggest that the trade-off is worth it - so they went ahead and implemented it as a proc macro, and are using it in their own code. They don't care whether anybody else uses it, and they don't have to.
You could have disagreed with their quantification, maybe you have different numbers, and that leads to a different outcome of whether the trade-off is worth making or not.
Instead, you just disagree, without making an argument, and others are doing the same.
In my eyes, this kind of hurts your position, since it gives me the impression that those against Ok-wrapping can't really argue why. Whether they just can't argue in general, lack data to back their arguments up, or maybe even whether they are Rust users at all, is anybody's guess.
I can't imagine a Rust application writer that is using Result idiomatically and not doing _a lot_ of manual Ok-wrapping all over the place. I can't imagine anybody actually enjoying that manual Ok wrapping. Looking at servo, the Rust compiler, and all my applications, what people end up doing is writing their own one-shot local macros to hide that manual Ok wrapping. Having written some of these myself, it definitely did not happen because I was "enjoying it too much". On the contrary, once or twice per function is ok, but when I had to do it dozens of times per function, it was just too painful both for the people writing it and those maintaining that code.
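For readers outside the debate, "Ok-wrapping" refers to the fact that a function returning Result must wrap its happy-path value explicitly. A minimal illustration (the function and error type are made up):

```rust
// Hypothetical error type, just for illustration.
#[derive(Debug, PartialEq)]
struct ConfigError(String);

// Every successful exit point needs an explicit Ok(..) wrapper,
// even when the function can no longer fail at that point.
fn validate_port(port: u32) -> Result<(), ConfigError> {
    if port > 65535 {
        return Err(ConfigError(format!("port {} out of range", port)));
    }
    Ok(()) // <- the manual "Ok-wrapping" under discussion
}
```

The proposal is about making that final `Ok(())` implicit; the semantics don't change either way.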
it doesn't seem to me that your reading of brundolf's post is very generous. the post doesn't mention explicitness at all, but says that increasing the number of syntaxes that can be used for the same purpose increases mental overhead for reading code. also, the post says that this particular proposal "seems really unnecessary," strongly implying that the author does not believe the tradeoff is worth it.
personally, while I can't claim to love writing Ok around return values, I definitely value the enhanced readability that comes with it. rust is the most readable language I've ever coded in, and it's not even close. when I see the "throws" syntax that was proposed, I expect it would make it harder to read the code because the Result type is erased from the type signature. To me, whatever tradeoff I would gain in ease of writing code from this would be far outweighed by the cost in reading that code.
C# became that way to me. Some of the new syntactic sugar was redundant and wound up making code less easy to follow. I didn't adopt it all myself, but it made me less interested in parsing through others' codebases.
I still typically remove the LINQ defines from my own new code, and that seems to be one of the more acclaimed language features.
Given that rust is changing, I think they are managing the change well.
The question is: should it be changing?
I think the answer is yes. Rust is a new kind of language that is showing people (like me) things I never would have thought were practical. It's inevitable that ergonomics are going to evolve to make everything flow together.
For those not familiar with the Rust drama, it should be emphasized that this blog post is very much a reaction to the linked blog post advocating for "Ok-wrapping": https://boats.gitlab.io/blog/post/why-ok-wrapping/
This is a subject that has provoked much drama in the past and looks like it will again.
In general, programming languages will change over time, so what matters to me is how that change is managed. Cautious, small, gradual changes over time seem safer than big releases every year or so, particularly because of Rust's approach to stability. When Rust has a new feature implemented, it does not immediately become part of the stable language. Instead, Rust only commits to stability for a feature once it has been properly tested. It's something I think PHP (which I have been a significant contributor to) and other languages could learn from, because there's no substitute for real-world experience to decide if there are remaining rough edges on a feature, or whether it is a good idea at all.
Moreover, unlike some other languages, Rust has chosen not to force people to adapt their code to new syntax and features! They can stay on an old version (edition) of the language forever, yet still use an up-to-date toolchain and have others be able to make use of their code. I wish every language was like that. Imagine if Python 2/3 had never happened.
Yes, large projects like Firefox and Chrome are constantly updating their code. The rate at which that change happens varies though.
Sometimes you do that because the new stuff is a significant improvement in its own right, but you also need to do it because using an "obsolete" dialect of C++ makes it difficult to incorporate third-party code, makes it unattractive to new contributors, and is generally just a code smell.
I appreciate that the language I'm using is evolving, as well as the libraries and frameworks in the ecosystem. If I fail to keep up because I'm focusing on making the most out of the tooling I committed to, that leaves opportunity for growth. I care about creating solutions, not keeping up with the ambitions of a large, diverse group of high achievers.
Maybe it's just me, but every time I look at Rust I'm either turned off by the community or by this constantly shifting idea of what is best practice.
A lot of comments have talked about the idea of writing Rust like it is X year, which seems weird to me, as I would imagine that a language with a strong ecosystem would not create a situation where five-year-old code that was considered ok then is now looked back on as bad code and a reason for derision. Besides the obvious learning better ways to do things as time goes on, shouldn't code written five years ago still be good if it works? Is code being a few years old the only excuse we need to rebuild it?
Maybe I'm wrong here, but what I want is to not have to think to myself a year after I build something, "well, that was built to last year's standard, time to rebuild to this year's standard."
The "community" you're paying attention to is made up of enthusiasts, so it's to be expected that they tend to be in the camp of adopting new shiny things as soon as they're released. The same thing happens in C++; you'll often see reddit / stackoverflow telling you to stop using new/delete in favor of make_shared and make_unique, stop using loops in favor of functions from `<algorithm>`, and so on.
Regardless of whether the new additions make the language better or not (I believe they do, in both Rust's and C++'s case), adopting them immediately in the codebases you are responsible for is your decision. You have to decide by yourself whether the pros from adopting the changes are enough to offset the cons of making changes. If you conclude that the cons outweigh the pros, and if it bothers you that the "community" derides you for not tweaking your code, then a valid solution is to stop paying attention to the "community" and get on with your life.
Are you reacting to the talk about "editions," e.g., Rust 2015 and Rust 2018? Those are actual concepts in Rust, similar to how C has C89, C99, and C11. I personally don't feel compelled to rewrite working code in newer editions. I have, however, received at least one PR from someone else who felt compelled to do so without asking.
Meanwhile typescript people be like: oh I see you're still using the old style mapped quantum ensemble types introduced five weeks ago rather than the retro-causal discriminated union types which will be introduced five hours from now - how quaint.
I can't keep up with it either, but that doesn't stop my code from doing what I want it to do. There are a lot of people who seem like they enjoy working on Rust as a hobby, the same way some people like playing video games. They still have a commitment to backward compatibility though, so who cares if a bunch of people geeking out on their favorite language are fine-tuning it and making superficial changes; the code I wrote before still runs the same as (or better than) it did when I wrote it. The compiler doesn't care if it's idiomatic, just that it's correct.
Yes, cargo fix just runs rustfix on all the files in your project and passes the same flags cargo does to rustc. (Similarly, cargo fmt runs rustfmt on all the files in your project.)
On the one hand, this is a predictable outcome if you are trying to shepherd a large codebase through a fast-moving language. Idiomatic Python 1.6 looks dramatically different to idiomatic Python 3.x.
On the other, Rust isn't the language I would want to write lots and lots of code in either. There are a few projects and organizations where it makes sense to do so (namely web browsers, databases, and other kinds of "deep backend, large surface area" types of projects), but most of the things it does well also act as a hindrance to feature development, compared with an idiomatic Java, C# or Go equivalent.
I recently had a thought that language teams should provide automated migration tools for such things. It would have made the Python 2 to Python 3 switch much easier and faster, and that applies to Rust even more.
just a big FAT WARNING here. C++ suffered for a long time from staleness because people detested change.
I think tools and technologies should prioritise the future and new users rather than focusing on the past and long-time users. It is what is best for the survival of the language.
If you don't want to deal with the changes, then don't update your code base. But don't update the compiler and then resent the fact that it has been updated.
Not even close. C++ is a language that is over 30 years old. In comparison, Rust is way younger and has had many more changes and features added in a way shorter period of time.
"The problem with C++ is all the bending backwards they do to keep old code compiling and working"
Uh, that's not a problem, it's (as you say) the main reason people use it and why they burn out on flavor of the day languages like Rust and Go (as the OP illustrates).
This is funny, because with Rust you can almost certainly compile code from Rust 1.0, unless it was relying on a soundness bug. You can also seamlessly use dependencies that use newer editions of Rust; you don't have to adjust your code to do this.
>Even if you add the recent additions, it is still less stuff than Rust already has. For instance, async.
C++ has been trying to add generators and async functions (`co_yield` and `co_await`) for a while now. I'm almost certain they had started being discussed and were even available as preview features in some C++ compilers before Rust started implementing them.
So what? Did you ever look at a production codebase? You will be hard pressed to find an idiomatic codebase and nobody in the world is going to update anything to be "idiomatic" every 6 weeks. I mean maybe your manager doesn't fire you right away if you try, but you better be doing this in your spare time lol.
For academic and self-learning use? Sure go ahead, but then please don't complain that you can't keep up with it. Nobody should be using a feature for the sake of using a feature. If you don't need it, don't use it. If its old and not "idiomatic" there is literally ZERO reason to change it unless you ain't got anything better to do.
This is what happens when a bunch of web developers are in the core team of a supposedly "systems programming" language. Web developers are used to moving to a new framework every week.
Personally, I stick with K&R C. I haven't gotten around to updating all my function signatures. What's the thing with putting (void) all over, or specifying the return type of my functions? Really, these explicit function declarations are just code duplication that serious programmers don't need, as they know their code.
(/s)
This arrogance of “I need the one true language that was perfectly designed from the start and never needs to change! If a new feature is added at all ever after initial release, it must mean the language is useless. Why don’t we all use Go? That’s the perfect language. It’s just like C.”
Yawn.
The borrow checker is a major leap forward, even if it’s from the 80s. And, with the incredibly expressive power and concise syntax Rust offers, it’s an incredibly compelling option.
Plus, Rust has a guarantee of backwards compatibility.
Fold isn't an operation on a specific monad, it's a generic operation over any monad. Adding it to Option defeats the whole point of thinking about Option as a monad, i.e., the ability to apply generic algorithms that work on any type constructor that follows the monad laws.
(If the question is why no generic fold over any monad exists in Rust, the short answer is that Rust isn't Haskell. Which is great, it means Rust can specialize in things Haskell can't and vice versa. Personally my favorite example is that Haskell avoids shared mutable state by avoiding mutable state, and Rust avoids shared mutable state by avoiding shared mutability. Both have their downsides but each works better for different applications.)
> Fold isn't an operation on a specific monad, it's a generic operation over any monad. Adding it to Option defeats the whole point of thinking about Option as a monad, i.e., the ability to apply generic algorithms that work on any type constructor that follows the monad laws.
Having the same function exist with the same name helps communicate with other programmers, even if you can't express the fact that it's actually the same function at the language level. Just like in a language without generics you might still have add(int, int), add(decimal, decimal), add(string, string) and add<T>(list<T>, list<T>) functions. Rust already offers a function called "and_then" on most of its monadic types, and a programmer can see and understand that these are all in some sense "the same function" even though there's no way to abstract over that.
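To make that concrete, here is "the same function" showing up on two different types in today's Rust (the helper functions are made up for illustration):

```rust
// and_then on Option chains a step that may itself be absent...
fn half(n: i32) -> Option<i32> {
    if n % 2 == 0 { Some(n / 2) } else { None }
}

// ...and and_then on Result chains a step that may itself fail.
fn checked_div(a: i32, b: i32) -> Result<i32, String> {
    if b == 0 { Err("division by zero".into()) } else { Ok(a / b) }
}
```

So `Some(8).and_then(half)` and `Ok(8).and_then(|n| checked_div(n, 2))` compose the same way and share a name, but no trait in std lets you write code that is generic over both.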
But 'fold' isn't very useful over option types as there aren't really any non-trivial operations that you can perform on something that's either Nothing or Just x. I think OP was probably thinking of folding over a list of option types, which you can do easily enough in Rust.
That's the opposite of my experience; fold is probably the main way I consume options in Scala. It's pretty much the only thing you ultimately want to do with an option, after you've finished transforming and composing.
I'm really puzzled by this. Which Haskell 'fold' function would I ever want to apply to a Maybe type? I think there must be some kind of terminological variation in the use of 'fold' here.
edit: Ah yes, I see that Scala chooses to use the name 'fold' for what is essentially the 'maybe' function in Haskell.
I don't think there is any very useful generic notion of 'fold' that corresponds to this usage. (I see how the operation is a kind of fold, but the type signature of the function isn't flexible enough to be used for any foldable type.) So that is why Rust and Haskell don't name this operation 'fold'.
Actually, in Haskell, given that Maybe is Foldable, you can technically implement the maybe function as
maybe' def f value = foldr (const . f) def value
Although in practice almost no one uses Maybe's Foldable instance (and it also doesn't scale to other datatypes, like Either, nearly as well, because of the multiple type parameters).
And here we see exactly why implementing a specific function on a specific instance of a monad isn't the answer - two users of two different, well-respected functional languages can't communicate.
That's completely backwards - the confusion here is precisely because the function is not implemented; if it were present in Rust (with a type signature) then there would have been no misunderstanding. (There was also a mistake in the original statement - fold has nothing to do with monads.)
The function is implemented for option types in Rust, it's just not called 'fold', because there's no universal convention according to which functions with that signature are folds.
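Right; in std the operation exists as map_or, which plays the role of Scala's Option#fold and Haskell's maybe. A tiny sketch (the wrapper function is made up):

```rust
// map_or(default, f): return default for None, apply f for Some.
fn doubled_or_zero(opt: Option<i32>) -> i32 {
    opt.map_or(0, |x| x * 2)
}
```

Same signature shape as Haskell's `maybe def f`, just without the universal name.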
Ah, indeed. That said I do occasionally use foldLeft, foldMap, and sum (which I think is what Haskell calls fold?) on options, so I'd say there are a handful of legitimate use cases for most kinds of folds.
Sure, but Scala supports monads via the 'for' construction and yet, no monad trait or class is defined in the base language at all. Even if they are defined ad-hoc on each type, there's value there.
Now imagine when it gets to 40 years of existence like C++, or 25 like Java, or 20 like .NET; then one learns why enterprises value backwards compatibility.
More breaking changes are on the way, until error handling and async/await get properly integrated into std, instead of having a couple of incompatible crates each going its own way.
Please allow me to share my view of the conversation up to this point:
Boats: Here is an approach I use in my personal projects that has proven to be ergonomic, consistent with other language features, and could possibly even unlock future optimization opportunities long down the road.
OP: That's it, I'm DONE. The language moves too quickly and I can't keep up.
You: More breaking changes are on the way. They never learn.
-----
Would you agree that this escalated way, way too quickly?
Boats didn't "move" the language with a blog post, it's not even a pre-RFC. As pointed out elsewhere in this thread, the language hasn't been moving at all since 1.0 - you could say the frontier has been inching forward, but that's what frontiers do; nobody's forced to fight on the front lines all the time. And then the cherry on top - "more breaking changes on the way", when 1) prior changes weren't breaking; 2) it's not clear there will be any new ones; 3) if there will, those won't be breaking either.
Please let us know how async/await is not going to be a breaking change, when even Fuchsia just decided to create their own async runtime, because surely we still don't have enough of them.
So I wonder how std will incorporate a runtime implementation that happens to be compatible with all ongoing runtime flavours, so that those breaking changes don't happen.
Well, either the language will leave async runtimes in the domain of third-party libraries, which is the current approach. Or, the potential blessed one in std will be an alternative that you may or may not want to use in your code.
When I think of breaking changes, I think removing sun.misc.Unsafe. Which, yes yes, was never supposed to be used, but still was. Evolution of best practices and introduction of alternative implementations doesn't break anything.
I might be misunderstanding, but the third party libraries won’t be going anywhere, right? No one will be forced to use the runtime in std, if it’s added.
That is correct. It's not clear that we'll add one to libstd, but even if we do, it will have zero impact on folks using external runtimes. If we do add one to libstd, it will probably be a very simple one, mostly for convenience and prototyping, rather than something that tries to compete with the bigger runtimes.
It depends. The major pain point right now is that there's no common API for spawning new tasks. Not every library does that though.
It's still not clear what this has to do with being a "breaking change" though. Like, if I were to use the serde-json crate in my project, the fact that the json crate exists, but has a slightly different API, does not mean that somehow this is a "breaking change." Moving between two or three different libraries is not the same thing.
Well it all depends on how interoperability will work across runtimes, otherwise you get silos depending on which runtime each library decides to use for their async/await code.
Right now Rust is looking to have as many async/await variants as there are C++ string libraries, with the same amount of interoperability headaches.
That is a very idiosyncratic definition of "breaking change."
Interoperability is good! We have more work to do to make async more interoperable, absolutely. That doesn't mean that "async/await is a breaking change."
No, it's just that a prolific rust contributor made a blog post and then a follow up and things have been heated in the community a bit (I lurk a bit).
The fuss isn't over what the syntax should be, but whether it should be changed at all. The proposal isn't even an RFC, just a blog post and some discussion threads. I think the proposal made good points about the ergonomics of error handling. I think the community might just be a little stir crazy with the quarantine, and talking about a syntax change that upends the much loved Result idiom rustled some feathers.
FWIW it is a purely syntactical change that adds sugar, the underlying mechanism does not change. The "hacks" for not having exceptions are strictly superior to having exceptions, at least in my opinion because the error monad is both opt-in on the caller side and opt-out on the callee, it does not include non local returns (errors don't cause the program to jump up the stack, like C++ exceptions, or any need to pass down the exception handler or pass up a continuation), they support all sorts of wonderful combinators that exceptions don't play nicely with, and unless explicitly disabled by a binary author the stack will always unwind and RAII patterns observed. The issues with exceptions are well documented.
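A small sketch of the kind of combinator composition meant here (the function name and error strings are made up):

```rust
// Errors are plain values, so they compose with ordinary combinators
// instead of non-local control flow:
fn port_from_config(raw: Option<&str>) -> Result<u16, String> {
    raw.ok_or_else(|| "port not set".to_string()) // Option -> Result
        .and_then(|s| {
            s.parse::<u16>()                      // fallible parse
                .map_err(|e| e.to_string())       // translate the error type
        })
}
```

Try writing that pipeline with exceptions and you end up with nested try/catch blocks instead of a flat chain.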
No it didn't. Rust error handling is glorious compared to exceptions.
The Rust community is just full of perfectionists, and we love to argue endlessly about how to improve stuff even more, which is very refreshing. In many other PLs people just keep their heads down and crawl in the mud.
> In many other PLs people just keep their heads down and crawl in the mud.
Or, as I would put it, programming is a job for them and not also a hobby. They have other things to spend time on, so banging out an ugly codebase in JavaScript is "good enough" to let them clock out.
Either is fine, but I don't think it's realistic to assume everyone can or should seek perfection in their tooling.
I think that's mostly because the end desire is to have them forced to be dealt with, like Java's checked exceptions, which most people have a problem with. It's hard to make forced handling easier for developers, but I would argue it's better than having unknown failure modes in functions you call.
Perhaps in terms of verbosity/tedium, but at least they don't have the problem of silently breaking functions between the exception thrower and catcher.
Sad to see that Rust is heading in the wrong direction. I think everyone in the core team should stop what they are doing and write Haskell for a year. That way they can appreciate the benefits of Haskell and know what to avoid when they implement those ideas in Rust. And anyone trying to make the language and its philosophy more like C++ or Java should be banned from the Rust community.
I'm sure they do, but even you must admit that Rust has gotten less 'functional' in nature as time goes on. New syntax or ad-hoc abstractions are being created to handle problems that have well-known FP solutions. Take this matching on an Option business. The easy solution is to just add a fold method on the option type. Problem solved, now you never will need to match an option ever again! The harder solution would be to implement HKT and add a real monad trait, but let's not get into that. Another example is the async/await syntax. Why does this syntax need to exist? What value does it provide? Why can't a future just be a regular old class?
A long time ago I thought Rust was going to be a flexible and innovative general purpose language. While it still is undoubtedly innovative, it seems like the target audience has shrunk considerably. And as GCs improve and get more tunable, I think that target audience is going to get even smaller in the future.
> even you must admit that Rust has gotten less 'functional' in nature as time goes on.
I don't think this is true, especially when you look at long timescales.
> New syntax or ad-hoc abstractions are being created to handle problems that have well known FP solutions.
They have known solutions when you have a GC and no control over memory layout. They do not have known solutions in a language like Rust.
> Take this matching on an Option business. The easy solution is to just add a fold method on the option type. Problem solved, now you never will need to match an option ever again!
Sorry, I do not understand what you're talking about. (I coded in Haskell most of a decade ago; I'm not on the lang team.) What would fold do on an option, specifically? Fold is usually a list operation, but an option is a list of zero or one... so it would be a no-op?
> The harder solution would be to implement HKT and add a real monad trait, but let's not get into that
Again, this is "we don't know if this is possible in a language like Rust" territory. Which is why you don't want to get into that.
> Another example are the async/await syntax. Why does this syntax need to exist?
Specifically because of the previous statement. It is not clear that a more abstract option is possible. We could wait for more years until maybe it's proven possible, or we could implement a useful feature today that we know is.
Rust doesn't have classes. Futures are a regular old typeclass.
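A minimal sketch of that point: `std::future::Future` is an ordinary trait that any ordinary type can implement. The `noop_waker` helper here is just boilerplate of my own so we can call `poll` by hand, without any executor or special "class" machinery.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A plain struct implementing the plain Future trait.
struct Ready(u32);

impl Future for Ready {
    type Output = u32;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        Poll::Ready(self.0)
    }
}

// A do-nothing waker, only so we can call poll() manually.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = Ready(7);
    // Polling is just a trait method call on a regular value.
    assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Ready(7));
}
```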
> While it still is undoubtedly innovative, it seems like the target audience has shrunk considerably.
We have seen a massive uptick in adoption over the past few years. async/await, for example, has been a feature that a lot of folks have said "I'll start using Rust once that hits stable."
> What would fold do on an option, specifically? Fold is usually a list operation, but an option is a list of zero or one... so it would be a no-op?
Presumably the fold takes advantage of the fact that the default, "accumulator" value is chosen if the Option is empty and the collapse function just runs what would be the Some arm of the match statement.
If the team had just written Haskell for a year they would have picked the obviously superior "fold" name for this operation. (I think you can actually use fold here if you really wanted it, since Option implements IntoIterator?)
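That parenthetical does work in practice: `Option<T>` implements `IntoIterator`, yielding zero or one items, so `Iterator::fold` runs the closure for the `Some` case and falls back to the accumulator for `None`. A quick sketch:

```rust
fn main() {
    // Option implements IntoIterator (an iterator of zero or one items),
    // so Iterator::fold works on it directly.
    let some: Option<i32> = Some(5);
    let none: Option<i32> = None;

    // The closure runs once for Some(5); the init value survives for None.
    let doubled = some.into_iter().fold(0, |_acc, v| v * 2);
    let default = none.into_iter().fold(0, |_acc, v| v * 2);

    assert_eq!(doubled, 10);
    assert_eq!(default, 0);
    println!("fold on Some(5) -> {doubled}, fold on None -> {default}");
}
```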
For people coming from a functional programming background, `fold` might be the superior name. But setting monads aside, I find `map_or` expresses the intent much better.
The function in Option linked above by steveklabnik has signature
fn map_or<U>(self, default: U, f: impl FnOnce(T) -> U) -> U
a fold has the signature
fn fold<U>(self, init: U, f: impl FnMut(U, T) -> U) -> U
The difference is in the closure: in map_or the closure is called at most once with one argument; in fold it can be called many times with two arguments. This makes `map_or` and `fold` distinct functions. While you could argue that the distinction between FnOnce and FnMut is Rust-specific, the difference in number of arguments should also exist in Haskell. And in fact, in Haskell the `map_or` function is called `maybe`:
(I'd argue the name `map_or` is more descriptive than `maybe`, although it does lift the result out of the Option monad which is unconventional for a map.)
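The at-most-once, single-argument shape is easy to see in use. A minimal sketch (the Haskell comparison in the comment is `maybe 0 length name`):

```rust
fn main() {
    let name: Option<&str> = Some("Rust");

    // map_or: apply the closure to the Some value, or return the default.
    // The closure takes one argument and runs at most once.
    assert_eq!(name.map_or(0, |s| s.len()), 4);
    assert_eq!(None::<&str>.map_or(0, |s| s.len()), 0);

    println!("lengths computed via map_or");
}
```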
Besides the GC that is already mentioned, Haskell has lazy evaluation which gives you coroutines for free. Coroutines are more or less synonymous with async, so in Haskell the implementation is a no-op.
From what I read, much of the challenge of adding async to Rust was in designing the coroutine mechanism, which is a substantial language extension.
What do you do if someone writes Haskell for a year and concludes "Actually, C++ and Java are better at this than Haskell"? Force them to write more Haskell until they reach the right conclusion?
(I personally love Rust because I found Haskell amazing but ill-suited to the work I wanted to do and I'm glad to have an option that's not C++ but is suited to the same sorts of applications as C++.)