I haven't had much time in the past month to help out with Rust, but I'm excited at how things are proceeding. Conceiving, proposing, and implementing I/O reform in the span of a single six-week alpha cycle is very impressive, to say nothing of the sea of tweaks and refinements that have appeared in the same period. That said, I see this as basically just a "heads down, working furiously, nothing-to-see-here" release... six weeks from now is when the real action starts. :)
I must say what worries me the most is what happens once 1.0 ships, in terms of language changes and compatibility with previous versions. I've observed, and of course I understand, the huge number of changes the language has undergone until now, but I worry it will continue at this pace right after 1.0.
Currently, I'm on the verge of choosing the language for an important project I'm about to start. I have a good knowledge of Rust and I think it's a great language, but despite all of this I think I will pass on Rust for this project, because I'm not sure I'll be able to keep up with this level of change.
Rust follows SemVer, so since we're still pre-1.0, everything is allowed to change. Once 1.0 final lands, we will remain backwards compatible until a theoretical 2.0. Every six weeks there will be a 1.x release, and each one is intended to be a drop-in upgrade.
Personally, I prefer languages that make small incremental breaking changes, as it prevents cruft from accumulating over time.
The important thing is to provide a migration path (e.g., begin by marking something as deprecated). Then provide refactoring tools to help migrate or interface with legacy code.
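To make that concrete, here's a minimal sketch of that pattern in Rust, using the `#[deprecated]` attribute; the function names here are invented for illustration:

```rust
use std::io;
use std::time::Duration;

// Old API, kept around for compatibility but marked deprecated,
// with a note pointing users at the replacement.
#[deprecated(since = "1.2.0", note = "use `connect_with_timeout` instead")]
pub fn connect(addr: &str) -> io::Result<()> {
    connect_with_timeout(addr, Duration::from_secs(30))
}

// New API that the deprecation note points toward.
pub fn connect_with_timeout(addr: &str, timeout: Duration) -> io::Result<()> {
    let _ = (addr, timeout); // real implementation elided
    Ok(())
}

fn main() {
    // This still compiles, but rustc warns:
    // "use of deprecated function `connect`: use `connect_with_timeout` instead"
    let _ = connect("127.0.0.1:8080");
}
```

Old callers keep working for a release or two while the warning nudges them onto the new API; removal only happens after everyone has had a migration window.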
I think C++ would have been much better off had it been willing to do this. Paraphrasing Stroustrup: C++11 has within it a smaller more elegant language that's trying to break free.
Being able to run old legacy code is one of the most important arguments for C++. Every new feature is an add-on (not an "upgrade" and not a "migration").
Given the fast pace at which Rust is developing, I hope the language will have a good way to handle Rust 2.0 code alongside Rust 1.x code. IMHO, providing practical tools for an upgrade path is something that should be thought through before releasing 1.0.
Objective-C is a good example of successfully migrating the language forward. Breaking changes were introduced slowly. Library APIs are marked as deprecated several releases prior to removal. The compiler warns you of their use. Automatic reference counting can be turned off for legacy files, but a tool is provided that will help you convert a class to use ARC.
Code written in ObjC a couple of years ago will likely not run today without modifications. But overall this has been a net positive for the language. It's actually become really pleasant to use, and that is in part why Apple's platform has thrived.
I think optimizing for the past is the wrong thing to do. If your language is successful, then most of the code is yet to be written. So if your choice is between making future code easy to write, or not breaking legacy code, then you should err on breaking legacy code and provide deprecation/migration tools.
The concern isn't that the pace of breaking changes will mean Rust breaks semantic versioning, but that the duration between Rust 1.x and 2.x will be short. In practice, it doesn't matter what number is assigned to a release if the period of major-version stability turns out to be short.
Can you comment on that -- possibly entirely incorrect -- concern I have about Rust's development? Are we going to see Rust 2.x popping up in six months?
We have no current timeline for a 2.0. It certainly will not be on the order of months; I would prefer on the order of a decade; I'd bet on the order of years.
Furthermore (and this is speculation, since again we haven't talked about it as a group), I wouldn't imagine a Rust 2.0 where breaking changes work the way they do today. I would imagine a very long period of deprecations, a nice upgrade path, and all that jazz. Nobody likes when the entire world changes out from under them all at once, and sudden, massive changes are something we're trying to avoid with the release train model.
Will it really cease making all breaking changes? In the past, the Rust team has been open to the possibility of making changes that technically break backwards compatibility but that they view as unlikely to cause many problems in the real world. That is still different from ceasing breaking changes in the SemVer sense.
Compatibility and stability is always somewhat fuzzy. For example, strictly speaking in JavaScript, it's basically impossible to make any semver-compatible change to any library; adding a method "foo" could break anybody who was counting on monkey-patching in a method called "foo". But people still get a lot of mileage out of semver in the node.js/io.js community as a promise that we intend not to break code. Likewise, in Rust we might break some subtle details of behavior that we explicitly left unspecified, such as type inference or heap layout, but we'll do our best to avoid breaking real-world uses of the language or libraries.
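For a Rust flavor of that hazard, here's a hypothetical sketch (all names invented): a library adding a defaulted trait method in a minor release can collide with a method that downstream code already defined, even though the addition looks semver-compatible.

```rust
// Library trait as of "1.0":
trait Logger {
    fn log(&self, msg: &str);
    // Suppose "1.1" adds this defaulted method:
    fn flush(&self) {}
}

// Downstream code had already bolted on its own `flush`
// via an extension trait with a blanket impl:
trait LoggerExt {
    fn flush(&self) {}
}
impl<T: Logger> LoggerExt for T {}

struct Stdout;
impl Logger for Stdout {
    fn log(&self, msg: &str) { println!("{}", msg); }
}

fn main() {
    let s = Stdout;
    s.log("hi");
    // After the "1.1" addition, this call no longer compiles:
    // error[E0034]: multiple applicable items in scope
    // s.flush();
}
```

The fix is a one-line disambiguation at the call site, which is why changes like this tend to be accepted as "minor" despite being technically breaking.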
The changes are not happening in spite of 1.0, but because of it. After lots of development work, these are the features the devs think are good for the base language. There will definitely be language and library additions in future releases, but there shouldn't be any breaking changes.
It's true that lots of changes are still happening now, but this is because breaking changes won't be accepted post-1.0 and so everyone's getting it out of their system. :)
I really don't think you need to worry about breaking changes post-1.0. The current concern (or battle) comes from people who have gripes with the language as it currently stands, lamenting that they appear to have already run out of time to get changes made.
Awesome. With their announcement that "all major API revisions are finished", this seems like the turning point where I will finally invest my time into thoroughly learning the language. I'm really excited!
Rust doesn't currently enforce pure functions at all; there is no 'pure' keyword. Here is an old mailing list post related to this decision - note that because it is 2 years old and Rust has been in a state of constant flux for most of that time, it may not fully reflect current thinking.
Years ago we had pure functions, which have since been replaced with our current function model. I'm betting that we are still reserving "pure" from that time. I haven't heard of anyone proposing to use it for something, so we should probably unreserve it.
I recently attended a conference where there was a D talk, followed almost immediately by a Rust talk.
This let me compare these two interesting languages in quite a detailed way.
While Rust seems to be safer, in the sense of trying to prevent as much as it can that you shoot yourself in the foot, one could argue that the way it achieves this is too extreme. I've read that programmers new to the language invariably struggle with the lifetime errors the compiler keeps raising.
I'm sure this is a good thing. But I'm not sure the ways to fix those issues, at least in a number of examples I've seen, are pretty. Most of them involved abusing the "*" and "&" symbols, which puts you in territory where you seem to be dealing with pointer arithmetic again, like in low-level C, something I thought Rust was trying to escape from.
One thing I really liked about D is the string mixins, which could allow for what has become common these days in F#: type generation at compile time (which at the end of the day allow you to, for example, parse XML or JSON with static APIs, hence allowing intellisense/codeCompletion/youNameIt).
I'm really excited about what these kinds of hybrid high-and-low-level languages will allow. I hope they become mainstream soon and surpass other less interesting (but trendy) ones like Go.
> Most of them involved abusing the "*" and "&" symbols, which puts you in territory where you seem to be dealing with pointer arithmetic again, like in low-level C, something I thought Rust was trying to escape from.
First off, Rust doesn't have pointer arithmetic.
Moreover, though, Rust is not trying to "escape from" pointers in C and C++; it's a manually memory-managed language, not a garbage-collected one. For Rust's domain (systems programming without automated memory management), learning about pointers is a necessity. What Rust saves you from is making dangerous mistakes with memory management.
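As a minimal illustration (names invented), `&` and `*` in Rust are the borrow and dereference operators, with no arithmetic anywhere in sight:

```rust
// A mutable borrow (`&mut`) and a dereference (`*`): no arithmetic involved.
fn add_one(n: &mut i32) {
    // `*n` reaches through the reference; the compiler has already
    // proven it points at valid, exclusively borrowed memory.
    *n += 1;
}

fn main() {
    let mut x = 41;
    add_one(&mut x); // lend `x` out temporarily, without copying or moving it
    println!("{}", x); // prints 42
}
```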
> One thing I really liked about D is the string mixins, which could allow for what has become common these days in F#: type generation at compile time (which at the end of the day allow you to, for example, parse XML or JSON with static APIs, hence allowing intellisense/codeCompletion/youNameIt).
Rust has macros and compiler plugins, which allow you to do the same thing.
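For instance, here's a minimal `macro_rules!` sketch of compile-time code generation (names invented). Reading external XML/JSON at compile time, the way F# type providers do, would need a compiler plugin rather than a plain macro, so this only shows the code-generation half:

```rust
// `record!` expands at compile time into an ordinary struct
// definition, so the generated fields are fully type-checked.
macro_rules! record {
    ($name:ident { $($field:ident : $ty:ty),* }) => {
        #[derive(Debug)]
        struct $name {
            $($field: $ty),*
        }
    };
}

// Generates `struct Point { x: f64, y: f64 }` before type checking runs.
record!(Point { x: f64, y: f64 });

fn main() {
    let p = Point { x: 1.0, y: 2.0 };
    println!("{:?} has x = {}", p, p.x); // `p.x` is statically typed
}
```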
> It should have avoided the "*" and "&" symbols then, IMNSHO.
Those operators aren't for pointer arithmetic…
> C# does this partially by using the "ref" keyword.
But C#'s reference parameters have no corresponding dereference operator, because they aren't first class and are limited to function parameters. As a result, most programs cannot be written without using the garbage collector in C#. In Rust, however, references can appear as first-class values, which makes them much more flexible—enough to supplant a garbage collector when combined with smart pointers—but requires that you annotate where they are created. `&` and `*` are well-known operators for this.
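A minimal sketch of what "first class" means here (names invented): a Rust reference can be stored in a struct or returned from a function, neither of which C#'s `ref` allows:

```rust
// A reference stored in a struct: the lifetime `'a` records that the
// struct must not outlive the data it borrows.
struct Highest<'a> {
    score: &'a u32,
}

// A function returning a reference into its argument: no copy, no GC.
fn highest(scores: &[u32]) -> Highest {
    Highest { score: scores.iter().max().expect("non-empty slice") }
}

fn main() {
    let scores = vec![12, 97, 43];
    let h = highest(&scores);
    println!("highest: {}", *h.score); // `*` reads through the stored borrow
}
```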
Those operators aren't for pointer arithmetic in any language I know of, including Rust, C, C++, or Go. The operators for pointer arithmetic are plus and minus (technically square brackets too in C)--that's why it's called pointer arithmetic.
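To make that concrete, a minimal sketch: actual pointer arithmetic in Rust requires a raw pointer and an explicit `unsafe` block; it never goes through plain `&`/`*` on references:

```rust
fn main() {
    let arr = [10, 20, 30];
    // Safe references can't be offset; arithmetic needs a raw pointer
    // and an explicit `unsafe` block.
    let p: *const i32 = arr.as_ptr();
    let second = unsafe { *p.offset(1) }; // the moral equivalent of C's p[1]
    println!("{}", second); // prints 20
}
```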
Ok sorry, I misused the term "pointer arithmetic" then. I think I was referring to memory indirection and the like. I find "ref" and "->" less cryptic than "&" and "*".