The TL;DR of the alpha is basically this:
1. The concept of a six-week release cycle begins today, with the first beta coming in March.
2. Breaking changes will basically cease, with the exception of a list of libraries that are still unstable and features that may be tweaked (https://github.com/rust-lang/rust/wiki/Anticipated-breaking-...).
3. Given the aforementioned degree of remaining instability, users should still probably stick to the nightly releases in order to help keep their code up to date and weed out bugs in the compiler.
As ever, it deserves to be reiterated that the 1.0 release does not represent the language being "finished" in any way, only that things will stop breaking. The language will continue evolving rapidly after 1.0, and even the 1.0 release will contain several known (and sometimes rather unfortunate) restrictions that will be backwards-compatibly lifted over time.
Post-1.0, I expect there to be a large community outreach to determine which work to prioritize (for example, I foresee a great clamor for making macros more usable). With developer help I intend to publish a blog post before then detailing exactly which deficiencies Rust 1.0 will contain, and the use cases that they currently either prevent or make awkward.
> Breaking changes will basically cease, with the exception of a list of libraries that are still unstable
It's worth noting that this is a significant point; probably 40-50% of the standard library is 'unstable'.
You should expect breaking changes often during the alpha. It's just ridiculous to pretend otherwise.
I'm not really a fan of these 'artificial deadlines'. There was no reason today had to be the alpha, other than that it was previously said that it would be. There's been a lot of great work going into this release, but the standard library is not ready; it's going to undergo some major changes over the next few weeks.
I don't really see the point in hitting the 1.0 alpha now, with the library in its current state.
The 45% number is misleading, because that's a per-item count. The majority of almost every module is stable; just some of the internals haven't been marked as such yet. In addition, the three modules which will see changes have RFCs that are in their final stages, so they will also stabilize soon.
The alpha was always about language items, and an initial commitment to some degree of stability. Beta is the release where things are expected to be in their final form.
If the majority of the modules are already stable, why not wait until they are marked as stable, with a more realistic set of #[unstable] in the standard library for the alpha release?
Anyone using the alpha now is being smashed with pointless unstable warnings until they add #![allow(unstable)]
...which basically negates the point of having the lint at all.
Surely a better approach would be: if 'we have no idea where it's at at the moment', don't tag the API as unstable. Tag things which won't make it into 1.0 as unstable, so people start getting meaningful warnings about using API features that won't make it into 1.0.
Seriously, what tangible benefit do 90 warnings about every single API have when you run a compile?
Even if that number slowly drops over the coming weeks, it's still going to be entirely meaningless as an indicator of what will or will not be broken once the beta hits. You're also going to get a lot of rubbish feedback from people asking for certain APIs (which will be stable for 1.0) to be marked stable, because of warnings that they're unstable; std::fmt, for example.
...while anyone using, say, std::raw::TraitObject or alloc::heap::allocate needs to get a heads-up now that they need to say something and get involved if they want those APIs to make it into 1.0.
Practically speaking, it feels like you need to roll back to 'unstable' and 'experimental' tags, with the lint warning about experimental, and ignoring unstable.
i.e.
'unstable' <--- probably in 1.0, not final yet; don't lint these.
'experimental' <--- won't be in 1.0; lint on these.
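In other words, something like this sketch (the attribute names here are the proposal's own, not necessarily anything rustc actually accepts):

#[unstable]     // probably in 1.0, API not final; no lint by default
pub fn probably_shipping() {}

#[experimental] // won't be in 1.0; the lint fires here
pub fn probably_not_shipping() {}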
Traditionally, alpha just means something works. Proofs of concept are often called alpha. Beta means things mostly work, although there can still be many known blocking bugs. Late in the beta series you would expect true stability to emerge. The release candidates are the first signal of actual stability, maybe.
Finally! I hadn't used Rust for two weeks, and when I downloaded the nightly last night, code from two weeks ago was breaking. Feature stability will help.
Yes, this is a big deal. I tried looking into Rust a couple of months ago, but the speed of change meant that blog posts and tutorials from just weeks before were sometimes completely broken.
Getting to 1.0 means that the ecosystem has a chance to grow properly, which means "lesser" coders such as myself have a chance to learn and get involved.
And most are dated and versioned, but a changing API still meant that the corpus was not expanding and knowledge was being lost. Tutorials got out of date so quickly it was hard to get a foothold when trying to learn.
The evolution is important, and I don't want to seem like I'm saying it shouldn't have been done the way it was; it was very well managed, from what I saw while keeping an eye on the project. I am just happy that it will now be stable, letting the ecosystem grow and feed off itself.
The last week has been extra intense. We set the date for alpha around the time we thought we'd be able to ship everything we needed, and we hit those deadlines! But that meant that the past seven days has seen a _lot_ of stuff land.
Yep, and for those who had to stay on master it was a pretty nasty time: one day you'd patch a library, the next day you'd all but revert yesterday's patch.
Coincidentally, I used Rust for a little board game solver this week and have been delighted by its performance and typechecker feedback!
My Scala v1 was very concise but took ~3 seconds to simulate a whole game. The naive Rust rewrite did it in 0.7 seconds, and my current version churns them out at 0.015s each!!
Performance is an area where Rust still has a lot of low-hanging fruit to pick, so I'm happy to hear that you managed to make your version fast. Did you have to make any design compromises to that end? We're very interested in optimizing the typical use cases.
I've currently got some bit-swizzling [1] going on to fit the concept of a Piece into a u8 (one of 6 colours and one of 6 shapes). I'd like to turn this back into a nice enum if I can!
Introducing laziness by writing an iterator was actually one of the biggest single improvements (I couldn't figure out the syntax for a while, but lifetimes worked much better than I was expecting)!
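For the curious, the packing being described might look roughly like this (the exact bit layout is my guess):

// Pack a colour (0-5) and a shape (0-5) into one byte:
// low 3 bits = colour, next 3 bits = shape.
#[derive(PartialEq, Copy, Clone)]
struct Piece(u8);

impl Piece {
    fn new(colour: u8, shape: u8) -> Piece {
        Piece((shape << 3) | colour)
    }
    fn colour(self) -> u8 { self.0 & 0b111 }
    fn shape(self) -> u8 { self.0 >> 3 }
}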
FYI both kaoD's and Kibwen's posts seem to be dead because they included a link to a URL shortener (seems a little melodramatic on HN's part...), so I'll reproduce it here w/o the URL shortlink and see if that helps.
>Hm, an enum like `enum Color { R, O, Y, G, B, I, V }` is represented by a u8 at runtime (see for yourself here[1]), which means that you're only saving a single byte in your `Piece` struct by doing that manual optimization. What was the magnitude of the speedup that you saw?
Ah that's good to know, looks like I should switch it back! The manual u8 stuff was pretty marginal. (It was also before I discovered the `PartialEq` and `Copy` traits, so please forgive my Rust inexperience!)
There have been a few proposals for advanced bit-packing stuff, but most of it has been postponed, since it can be added backwards-compatibly later.
That said, by default enum/struct layout is undefined, so it's possible the compiler could be taught to optimize your usecase correctly. For instance Option<&T> is the same size as &T because &T is strictly non-zero, and we can therefore use 0 for None.
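Both claims are easy to check with `std::mem::size_of`; a quick sketch:

use std::mem::size_of;

#[allow(dead_code)]
enum Color { R, O, Y, G, B, I, V }

fn main() {
    // A fieldless enum with 7 variants fits in a single byte...
    assert_eq!(size_of::<Color>(), 1);
    // ...and Option<&T> costs nothing extra, since the all-zeroes
    // bit pattern is free to encode None.
    assert_eq!(size_of::<Option<&u32>>(), size_of::<&u32>());
}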
Taking a quick look, it seems the program starts up, runs once and then quits. I'm wondering if the JIT actually compiled all of the app. Being JVM-based, Scala is very profile-guided, so normally for benchmarks you want to repeat the same operation in a loop until the elapsed time stabilises.
That said, if your use case involves running once for a few seconds and then quitting, ahead-of-time compilers like rustc will always beat a profile-guided JITC. So this may be an unfair point.
Wow. Anyone remember Rust pre-0.1? When it had typestate, ML syntax, garbage collection, etc.? So nice to see how the language slowly evolved and was molded to fit, as well as possible, the problem they were trying to solve.
It's been a fascinating metamorphosis, undeniably. I'm going to attempt to turn it into an interesting talk for the tentative Rust conference that may be happening this year. :)
Yes, I worked on GC as a summer project. Turned out to be way harder than anyone had thought because LLVM isn't really set up to handle high performance GC. I have a talk on it here:
Haven't used Rust, but I wonder what you could use in place of the phantom types? Are Traits capable of filling this role? Am I completely misreading what Traits are?
Can Rust still do phantom types, and I just can't see them anywhere in the language reference?
You should still be able to use phantom types with empty traits or enums.
Here's an article[0] (admittedly from ~6 months ago) about using phantom types in Servo.
Phantom types aren't part of the language because they don't need to be. They're just empty types used as type parameters for structs, where the type parameter isn't actually used as a variable in the struct. It is more of a design pattern than a language feature.
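For example, a minimal sketch of the pattern (my own illustration; modern Rust also wants a PhantomData marker for the otherwise-unused parameter):

use std::marker::PhantomData;

// Empty types used purely as compile-time tags.
enum Metres {}
enum Feet {}

struct Length<Unit> {
    value: f64,
    _unit: PhantomData<Unit>,
}

// Adding lengths only compiles when the units match;
// mixing Metres and Feet is a type error with no runtime cost.
fn add<U>(a: Length<U>, b: Length<U>) -> Length<U> {
    Length { value: a.value + b.value, _unit: PhantomData }
}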
I've been incredibly impressed by the willingness of the Rust team to take a step back and re-evaluate old decisions. I think this 1.0 release will be much better for all the iterations put into e.g. dynamically sized types and other things that never quite fit in their first half-dozen iterations of the design. Congrats all!
Yeah, that was what impressed me at the beginning when I was starting. For example: ripping out classes just after they were documented and implemented, because structs/traits were enough.
The one thing that would put Rust over the top right now is something along the lines of gofmt - something simple, with zero configuration, that can be run on commit or even on save.
This makes me think: since the world of coding has already reached the point of using zero-configuration, fully automated code-formatting tools on save/commit, why not take the final logical step and enforce coding style in the language syntax itself, and be done with almost all the extra burden of coding conventions.
Right now almost every company, and even separate teams, develop their own coding-conventions document and configure their tools and infrastructure for it. But if all programmers agree that "coding style does not matter as long as it's consistent", and everyone is willing to accept a project's coding style when joining that project, why not pick one style for all and put it into the programming languages themselves? I imagine a lot of paperwork and human time would be saved.
They're actually RFCs. And the results are in a repository. I'm on mobile or I'd link you. But they generally codify already-existing style, rather than mandating it, although sometimes we just need a decision to tear down a bikeshed...
I was on the Rust mailing list for a little while about a year ago. Around that time there was discussion of this exact subject. Most of the discussion was around exactly how to break parameters beyond a certain length and so forth, but it was there.
Ugh. I hate gofmt. It's a good idea in principle, but even with "go" there are times when you want things spaced out to align columns, or hide a distracting error handling case on one line (instead of making it take 3 lines of precious vertical screen real estate).
I find gofmt to be too opinionated about certain things and I will never use it for my go programs.
Why is it a good idea in principle then? Automatic reformatting of the source code always seemed like a bad idea to me for precisely the sort of reason you describe.
It focuses developers' attention on the things that really matter - what the code does, and how it does it - by removing how it looks from the equation.
The point of an automatic formatter isn't to make the code look pretty. It's to make the code non-ugly, with as little effort as possible. Obviously, there will be cases where the formatter screws up and writes out something pretty ugly. The point is to train yourself to ignore these, because honestly time spent looking at, angsting over, and fixing bad formatting is about the worst possible use of your development time. Instead you could be thinking about solving a problem that hasn't been solved before, or making the product easier for a user to use, or refactoring semantic issues in the code that trip up developers.
It's good enough when you can configure it and apply it selectively. If I want to clean up a function in my code base, I can e.g. select it and run "perltidy" over it, which has a configuration dot file for the company code standard (or CPAN guidelines etc.).
Or just clean up some nags so that version control behaves better. But if the only option is not running it at all or having all the code automatically fit to whatever the people On High have deemed to be visually pleasing? Yeah...
It's irksome when the auto-formatter does something ugly, though I think the issue is far worse with perltidy. With gofmt, my view is more "I don't like this bit but fuck it." Probably because Go has a much simpler syntax.
Both languages (being bastard children of C) often run into exactly the same problems where the syntax doesn't differ greatly. If you are accustomed to, e.g., indentation without tabs or margins within function calls, or prefer non-tabular variable declarations, you can have that in C, Perl, Pike, Java, etc.
1) A lot more code isn't read in vim or emacs nowadays. We don't have the equivalent of indent on the GitHub web UI.
2) Everyone and their mother is doing code reviews now, and a lot of code reviews get (very visibly) bogged down by style disputes.
3) indent and its modern friends (e.g. clang-format) generally only answer trivial whitespace questions, and not even things like method capitalization. gofmt is the obvious extension of indent.
Whether it's called indent, gofmt or Visual Source Formatter 2051 doesn't really change its availability. Nor does it really matter a lot how much of the source is munged -- every forced change is probably annoying to someone.
The main issue for me is that your code reviewer isn't looking at the code from my company. So why should I give a damn what they think about how spaces around the parens of a function call should look in the code I'm editing?
I'm not convinced that this should leave the bounds of a project. Within that, sure! Let's put a .rustformat right next to .gitignore, and whatever your source discombobulation utility is called just takes its hint from there. Certainly nothing wrong with a language having a good indenter, doc tool and linter in the core distribution. But one language, one style, one tab width!?!
This is especially annoying when you've got a pretty unified style across all other algol-/C-ish languages, yet can't keep this in your newest one. For the sole reason of pleasing some community that'll never see one line of your code.
> This is especially annoying when you've got a pretty unified style across all other algol-/C-ish languages, yet can't keep this in your newest one. For the sole reason of pleasing some community that'll never see one line of your code.
It's rarely a good idea to reproduce the same patterns in two different languages (except for things that are common anyway, like indentation etc.).
And the main argument for a language-global code style is that you don't need to teach newcomers (to your company/project) which code style you follow, you don't need long discussions about whether article #36 is being respected in that last commit, or about whether the version of the code style you had is up to date, etc.
One great example: Python and PEP8. It's the standard, everybody accepted it. That means that all libs use it AND (almost) all companies use it internally, which makes friction between different modules from different sources minimal.
That depends on your purpose and the problem you are trying to solve. If the problem is style bikeshedding, that can easily be solved through autoformatting and configurations for the standard for that project, company, etc. Write however you want, run it through the auto-formatter with the right config before commit (or add it to commit hooks). Problem solved.
If you want to solve the problem that people are writing in different styles and there should be one true style for a language, why? You're bound to get something wrong in your style initially, and it will be annoying from then on, and hard to change. It actually seems the antithesis to how Rust was developed, which was to keep trying things to see what worked.
Indeed, I am missing said point. Version control snafus aren't really an issue outside of my project, so it all comes to the endless repetition of "bikeshedding".
indent(1) is not idempotent, at least not in the configuration I tested, which makes it practically useless as an editor save hook or an SCM pre-commit hook.
Of all the C pretty-printers I tried (about a year ago), clang-format came closest, but none were idempotent on the codebase I threw at them.
go fmt is idempotent, which means I use it in an editor save hook. I got used to the convenience, so now I miss an automatic formatter when writing other languages.
Are you sure you're not exaggerating the benefits of an automatic formatting tool? I'm thinking that a large number of libraries, usage by multiple high-profile companies, a killer app, etc. would be better for Rust at this point.
I feel like this is the moment that so many devs who have observed rust over the past couple of years have been waiting for. I for one am looking forward to diving in.
The standard library is _not_ "batteries included," on purpose. Given that we have Cargo, and it works well, tying package updates to the language version has quite a bit of downside, and very little upside.
That said, the Rust team maintains and provides a number of packages on Crates.io ourselves. Many of these were pulled _out_ of the standard library over the past few months.
The upside to a batteries-included stdlib is, of course, that you don't have to go fishing around for the best lib to do $WHATEVER_PARTICULAR_TASK, and you also don't have to wonder whether whatever lib you eventually choose will be abandoned by its developer next month.
At such a young stage of language development, I agree that it makes more sense to let the community develop libraries in order to foster competition and quality. But eventually (read: no less than a year or two from now) I think it will be up to the project maintainers to officially endorse certain third-party packages and commit to their maintenance. At a certain point, the advantages of a library that's well-known, well-maintained, and well-documented outweigh the disadvantages of de jure ossification.
It's been my experience that the standard library is _never_ the place for the best lib to do $WHATEVER_PARTICULAR_TASK, but different folks have different preferences. We'll see how it all shakes out!
I think Python and Go are two examples where the standard library has very much proven invaluable.
Without it, it's difficult to be confident in the portability of components, and it quite frankly makes the language less attractive for use.
Like the other poster mentioned, I'm happy for the Rust folks to take a "wait-and-see" approach, but at some point, I believe "blessed" components are going to be expected and strongly desired.
Python is a prime example of a rotten standard library.
Take urllib/urllib2 (use requests instead), unittest (use py.test or nose), os (too low-level, hence arcane usage) or time as examples (use pytz for anything serious), and of course Tkinter.
Take all of the modules that solve minor/niche tasks and could easily have been put in a separate library (e.g. wave), that are usually a bad idea to use (e.g. pickle), that contain copy/pasteable functions the authors deemed useful only as documentation comments (itertools), or that have documented bugs with copy/pasteable workarounds (csv).
Oh, and the way they do exceptions is a mess.
Oh, and don't try to read the Python standard library's source code. It's ugly.
No, standard libraries should constrain themselves to providing a good foundation for library designers.
(I still like Python, and use it a lot.)
(Since you mentioned Go: one thing that I love about Go is that every interaction with the underlying OS goes through the syscall package. Also, its designers went for more minimalism, and they didn't have to worry about design mistakes made 25 years ago. Python is old. I'm not a fan of Go's compatibility promise: it means Go 1 will rot away too. But they're in a much better starting position.)
It all depends what you're using it for. If I'm writing code that's going to be around a while and can tolerate some dependencies, I'll totally use, say, requests. But I use urllib2 from the REPL all the time, especially if I'm on a machine that isn't mine. The fact that any Mac already has the tools on it to grab some JSON from an API, parse it, and do something useful with it from the command line, without having to download or install anything, is immensely useful, even if the API is a bit suboptimal. The same applies to quick and dirty things I'm shoving into a Gist to share with colleagues. Not all things have to be elegant to be useful.
(That said, those kinds of usecases aren't really in Rust's wheelhouse, so I still think in Rust's case having batteries not included is probably the right call.)
It's possible to achieve the best of both worlds. We could introduce the concept of a standard distribution instead of a standard library. A release of Python should ship with specific versions of requests, nose, and other packages that have a broad community consensus. Those packages could be individually upgraded later, but every install of Python 2.7.9, for instance, would have requests 2.5.1 or greater installed. That would avoid the stagnation packages see when they enter the standard library, and would maintain the benefits of universal availability.
I agree about urllib and unittest. However, over the last decade I have still found Python to be one of the best "batteries included" languages/platforms.
That is no small thing, either. It makes getting started easier, which in turn gets more people to use it.
Here is a list of modules I used and was happy there were in stdlib:
socket, shelve, cPickle, tarfile, urllib[2], Tkinter, time, ctypes, subprocess, asyncore, json, SimpleXMLRPCServer, wave (sorry, I did use it many times ;-) ), timeit, syslog and many others.
While I'll readily agree that some parts of the standard library are rotten, that's not sufficient justification to say that there shouldn't be one.
I should also clarify my expectations about a standard library; to me, a standard library should have all of the basics covered (interaction with the underlying system, I/O, networking, etc.) and anything that benefits from better integration with the runtime (think data types such as those found in Python's collections module).
If anything, I'd argue that the main problem with Python's standard library is not the library itself, but the lack of more focused curation.
Note that I never said that I expect all functionality to be available in a language's standard library; for me personally, Go's standard library has roughly the right balance.
The other thing that really needs to be in the standard library is protocols & interfaces that will be implemented by a number of userspace libraries. Go benefits immensely from having standard io.Reader and io.Writer types and most people implementing them instead of defining their own. Similarly, most of its web frameworks use http.ResponseWriter and http.Request instead of defining their own. Python's unittest module may be a mess as a test framework, but all the major Python unittest frameworks take a unittest.TestCase, which keeps tests portable among the different systems.
The worst case is exemplified by pre-STL C++, which didn't even have a string type in the stdlib. As a result, every project and library wrote their own, which meant that you basically had to choose a C++ ecosystem and develop for it rather than write libraries that are portable across multiple C++ projects.
... and a sufficiently large C++ application would use 10 different string classes in various parts of it, likely with different text encodings too, with lots of fun converting between them. Even a crappy UTF-16 based String like Java's is better than such a mess.
What's wrong with the exceptions? I never noticed that problem but agree completely on all your other points. (Especially itertools – why aren't the "recipes" defined in the module?!)
In my experience, the included batteries are one of the best features of Python.
The standard library is fine for simple and small scripts, especially in restricted environments without pip or sudo rights. There are a lot of awesome Python libs (like requests), but it is awesome to have the stdlib available everywhere and be able to depend on it.
The opposite is also true; a good package manager can keep the stdlib from bitrotting à la Python, because you can easily swap out an old module by simply repackaging it as a third-party dependency.
On the contrary, the benefits of having a standard set of batteries, especially those upon which others might be developed, are fundamental to sane ecosystem development. For instance, a standard framework for async I/O programming is important, because then people can write thousands of protocols that are fully interoperable; if 2-3 different frameworks arise, each one will develop its own ecosystem of protocols and libraries (not interoperable), and it's hard to believe that each one would be as rich as the single ecosystem in the former scenario. The same can be said for an HTTP library, a threading/concurrency library, an XML/JSON marshaling library, and so on.
I think that crates.io still has a ways to go before the situation is ideal. Discoverability of libraries is the first pain point to look at. After that, I'd like to see crates.io automatically parse an uploaded package's docs (thank you, rustdoc!) and host them on the site itself.
I strongly disagree with that assertion; while I readily agree that some parts of the standard library are less maintained than others, as a consumer of many third-party components, I can say without a doubt that items in the core library are generally better maintained (from a security perspective) and easier to deal with.
The ease of installation has nothing to do with the desire for core components. Core components generally bring certain expectations/guarantees about security, reliability, and support.
That's just not true. Standard library components are not better maintained, not more secure, and not easier to deal with. There are too many examples to even count of each of those -- you can take a look at PEP 476 for just one recent example.
You're arguing with a general observation. The fact that it's not universally true is unsurprising. If you take the average quality of third-party libraries and the average quality of libraries in the standard library, I think you will find that the standard library is generally pretty good.
Whether it's better than the average is irrelevant, that isn't the benchmark. It needs to be better than or equal to all of the third party libraries for it to be worthwhile, otherwise, why does it exist? When you pick a library you don't pick all of them, you pick just the best one that meets the criteria you need, and the standard library can't beat the flexibility that other people will have to better meet those criteria without needing to partake in the process upstream.
(It's also not true that the standard library is generally pretty good IMHO, but that will start to veer us into subjectives [which I think the comment above my original post has already done anyhow]).
> When you pick a library you don't pick all of them, you pick just the best one that meets the criteria you need, and the standard library can't beat the flexibility that other people will have to better meet those criteria without needing to partake in the process upstream.
So, you're using three libraries that all depend on something like urllib - and they each use the library that fits their usecase best. Now you need to debug/review/depend on updates for 6 (7 if you also use the stdlib for something) foreign codebases rather than 3 + the standard library.
It's a trade-off between old/new, good/best, and simple/complex (or complicated/complected). A standard lib that needs to maintain stability for 10+ years can never be "best" for all that time. But by being "good enough" it can often still be the best choice overall when the life cycle of a project is considered.
Sorry; I'll have to disagree. As part of a group that maintains Python packages for an operating system distribution, my experience has generally been more positive for the standard library compared to third-party components. Particularly when it comes to security issues.
> I think Python and Go are two examples where the standard library has very much proven invaluable.
C#. The amount of time I've wasted on Java programming teams while arguing over things like which of three quirky XML parsing implementations[1] was the One To Use, while the .NET team powered off and built useful functionality...
C# seems to be in a similar position when it comes to JSON. There's both System.Runtime.Serialization.Json and System.Web.Script.Serialization built-in plus a whole bunch of third-party libraries (e.g. Json.NET seems to be popular). I'm still not sure which one to use.
Having a standard library for a specific task doesn't forbid alternatives from being implemented and used. But it discourages them enough, which is a very good thing for consistency (especially in reading code), lowers the barrier to entry (e.g. I don't have to learn how to parse JSON with the specific library the project I'm contributing to is using), reduces binary size (I don't link three different HTTP libraries: mine and those chosen by my dependencies), and generally reduces the time wasted by humanity on useless duplication of effort. When the standard sucks, well, good alternatives will appear.
Also, the sad state of Python packaging is the reason why the stdlib bitrotted; technically, your package manager could take care of backward compatibility: if, e.g., at some point in the future you want to switch from one JSON library to a (far) better one, more in line with how the idiomatic language has evolved, you just need to repackage the old one as a third-party package and make sure the package manager brings it down for the user automatically.
The useful thing about a standard library is that it provides a common vocabulary for code, which is a great help for interoperability. For example: C++ doesn't have a 3D vector class in its standard library, so pretty much every 3D-related library invents its own, and you end up having to write lots of useless glue code to convert from one to the other.
Yes, this is very true, and one of the reasons that we have a pretty big set of things like collections. Common traits are an excellent thing to put into a standard library.
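For instance (a sketch of my own, not from the parent comment): because the `Read` trait lives in the standard library, any byte source can plug into any byte consumer without glue code:

use std::io::{self, Read};

// A toy reader that yields zero bytes forever.
struct Zeroes;

impl Read for Zeroes {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        for b in buf.iter_mut() { *b = 0; }
        Ok(buf.len())
    }
}

// Anything generic over R: Read (files, sockets, decompressors,
// this toy) can be passed in without conversion.
fn first_byte<R: Read>(mut r: R) -> io::Result<u8> {
    let mut buf = [0u8; 1];
    r.read_exact(&mut buf).map(|_| buf[0])
}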
I really love your usage of the phrase "common vocabulary" to describe the utility of the standard library. I'll be using it in every discussion of this sort from now on.
For what it's worth, while a lot of people complain about the Ruby standard library, when I want to reach for an HTTP library that is properly threadsafe, properly supports encodings (and in general is properly integrated with whatever version of the language the user happens to be using), Net::HTTP is a godsend.
People who complain about the ossification are often talking about the aesthetics of the API (since API tastes change, long-term stdlibs tend to feel outdated), or the level of abstraction (you have to do a lot to use Net::HTTP, at least in the past), and not about the functionality. I'd rather people build nice abstractions on top of Net::HTTP than tell me to use something else because the API is prettier.
Yeah, while that's kinda true, it's also incredibly hard to patch bugs. Contributing to Net::HTTP means dealing with a whole lot of stuff that you don't have to when contributing to a random project, and (as I'm sure you know, hahah) it's easy to use my custom patched random gem with Bundler, whereas I have to wait a year to get my Net::HTTP fix.
Yes, but the cost is that those gems have to worry about supporting a lot of different versions of Ruby. Writing a fix for Net::HTTP only involves making it work on master ;)
By extension, language builtins should also never be the place for the best facilities to do $WHATEVER_PARTICULAR_TASK. The line between builtins and standard libraries is extremely variable between languages.
There are languages with strong standard libraries: Erlang (both its normal libraries and OTP), Objective-C (Cocoa) and Mathematica are three that come to mind.
I think the stdlib is not the way to go. Give me an example of any ecosystem where the standard library is the de facto best solution?
But I would agree that the only thing Rust might need is officially supported crates, while keeping the language itself fully separate.
The language is then completely free of the hassles of maintaining a stdlib that probably isn't used by most people anyway. But at the same time, officially supported packages give developers and newbs a place to start, and give the community some confidence that libx will be maintained in the future. I really think this solution is the best for separation of concerns and for maximizing developer productivity and effort.
For things that the standard library can do, it is usually the best choice. It's code that's already sitting on the user's machine, why use something else?
And also, those are generally relatively straightforward things whose design is hard to get wrong (but there are exceptions, e.g. Python's urllib). Being a standard library and not a standard framework helps.
For what it's worth, I use Java's standard library whenever possible. It's quite extensive and the fact that the documentation is top-notch makes it a joy to use. That being said, there are still a lot of supplemental things you might want to do, which is what things like Google Guava or Apache Commons are for. But they are to supplement, not replace the standard library.
By the way, some parts of Apache Commons would be a good example of how not to design a standard library; I don't want to piece 10 objects together to make a simple utility function call.
>I think the stdlib is not the way to go. Give me an example of any ecosystem where the standard library is the de facto best solution?
Python, Go, C++'s STL, Java SDK, to name a few.
They might not be 100% perfect, but they are good and reliable, and always there for you. Hunting for the latest "best" library that gets abandoned after a year (as often happens in JavaScript and Ruby, and in Go too) gets old quickly.
Of course there could be a compromise approach, as you say.
A minimal standard lib for Rust PLUS a "blessed" set of Cargo packages that represent the batteries (Haskell's "Platform" is like that, IIRC).
I love Python and Python's standard lib, but it's hardly the de facto best solution.
urllib gets trumped by requests, ujson is much much better than the stdlib json module, hardly anyone still uses distutils over setuptools for example.
Woah, that's like a list of what not to do with standard libraries, particularly Python. Have you seen the datetime module? httplib? Even Go has its weird database module.
I think <stdio.h> is a good example. It gets the job done; it's minimal and simple. That's what a standard library should be.
Hardly. See Joda Time (it only took 10 years for java.time to catch up) and Apache Commons (the situation is much better now, but who hasn't turned to Apache Commons due to shortcomings in Java's standard library?).
In any case, I definitely agree with the frustrations of trying to find libraries for Ruby.
Again, people seem to have missed the point I was making.
I wrote that those batteries "might not be 100% perfect" -- and as a Java programmer back in the '00s, I know the specific problem points with Java's time handling that Joda tried to solve (btw, it inspired the time lib in the latest SDKs, IIRC).
The thing is, the JDK APIs, with all their issues, have been a genuine force in Java adoption, and something every Java programmer relies on. Of course there'll be some pain points, but it's nothing like having to hunt for a collections, DB abstraction, string manipulation, XML, etc. lib each and every time you start a project.
Having an official batteries API doesn't preclude you from using external libs (like Joda) when they are better -- whereas not having one is a genuine loss.
If I have turned to Apache Commons that was overwhelmingly for things MISSING from the SDK, not for things the SDK already did.
And don't get me started on the Commons code quality, with BS reinventions of the wheel, lame FactoryFactoryProxyFactorySingletonFlyweightFacades and the like, and code that was blissfully ignorant of encoding issues...
"Best" always depends on your criteria. Very often, the standard library is a "good but not best" solution from a pure performance or features point of view. However, it often wins for ease of deployment (nothing to install), ease of maintenance (the project is not likely to be abandoned), documentation, and standardization.
IMHO, C++ is an example where the standard library (and its laboratory, Boost) is often a very good solution for the few things it covers. It's standardized, well-documented, and not that bad from a performance point of view (of course, it's not perfect).
In addition, you expect your newly hired developers to know the standard library, but not every small library from GitHub...
>Give me an example of any ecosystem where the standard library is the de facto best solution?
Almost all of them? Unless the stdlib is utterly, utterly terrible, the difference in quality between the stdlib and a library which does the same thing is not worth the cost of the extra dependency.
> Give me an example of any ecosystem where the standard library is the de facto best solution?
In Go that's certainly the case, and in my experience it's a great thing; only time will tell whether it ends up suffering stagnation, as has perhaps happened to Python.
Ironic that you've both picked languages that are used quite effectively, if unsexily, to solve problems. Both Python and Java are at the "plateau of productivity" - there's not a whole lot of innovation going on, but they're stable, robust, and have a large ecosystem of tools that let you get work done quickly. A good portion of this is because of the stability of a large standard library that everyone can depend upon.
> The standard library is _not_ "batteries included," on purpose. Given that we have Cargo, and it works well, tying package updates to the language version has quite a bit of downside, and very little upside.
With a good package management system, a batteries-included standard library could just be a set of packages with a guarantee: a version of each, backward-compatible with the one provided at language release (in SemVer terms, a package with the same major version number as the one shipped with the language release), would be available and maintained for the maintenance life of the language version. That wouldn't have to prevent bug-fixes to packages out-of-band with language version upgrades, or even out-of-band new major versions that support the existing language version, so long as the old major version was maintained in parallel for as long as the language version was.
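In Cargo terms the guarantee might look something like this (package names here are hypothetical; a bare version in Cargo is already a caret requirement, i.e. "any backward-compatible version"):

[dependencies]
# "2.5" means ^2.5: any 2.x >= 2.5 may be selected, never 3.x.
# A blessed distribution would promise that some such 2.x stays
# available and maintained for the life of the language version.
http = "2.5"
json = "1.0"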
A language where the standard is "we download the packages we need" (somewhat like npm on node) feels like it has a lot less mental overhead to it because the default is "users expect to get dependencies easily", and you don't have to spend so much time trying to target older but "default" installations of things.
Which, with a language like Rust that's capable of producing static binaries, feels like a better place to be anyway, since it's a lot easier to replace one executable than a whole net of shared dependencies.
I've always thought a good compromise between these approaches would be to have a minimal stdlib plus a broader set of basically LTS versions of exceptional packages, which are expected to have important fixes backported to a stable version alongside each version of the runtime itself. (For a while it kinda looked like at least Rubinius would go to something like this in the Ruby world, but efforts to modularize the Ruby stdlib seem to have fizzled somewhat.)
This would still allow for pulling some of those blessed packages up to the bleeding edge, or simply not using them, if you choose, by specifying them in the Cargo manifest, but it would still provide guidance on basic useful packages.
I think it'd also be easier to boot stale packages in new releases (e.g. the eternal security fail that is the YAML lib in Ruby's stdlib).
For someone that is new to Rust how do I discover where the libraries are for the things I need to do?
In the past year I have needed libraries for HTTP, XML, JSON, CSV, arg parsing, image manipulation, PDF, RDBMS, a trie (and other data structures), async i/o, threads, files, ZIP, and others.
While others mentioned http://crates.io/, you might also be interested in http://rust-ci.org/ (it has categories, and includes build status and quite often hosted documentation).
Being a game developer, I consider inheritance a really important language feature, though I'm not one to abuse the power. Currently, for school, I've been working on an OpenGL game engine in C++. It's a component-based system. The only inheritance that's really important to me is allowing the user of the engine to create any object and make it inherit from GameObject (for example, Duck would inherit the members and methods of the GameObject class). Everything else is a component that plugs into GameObjects (Mesh component, Transform component, etc.)
Over Christmas break, I played around with Rust and I'm really enjoying it. However, I can't figure out a nice way to inherit members from other structs. My current idea, like many others, is to keep a pointer to a "parent" object. So, Duck would have a GameObject, rather than be a GameObject.
Does anyone have any recommendations for better ways I can achieve what I would like to do? I will also accept the fact that inheritance is not needed in a language, but it does make a few situations easier.
I'm a game developer and I don't use inheritance a lot, and when I do use it it's almost never for virtual dispatch. (I wouldn't mind it being added to Rust, but I don't expect I would use it).
Anyway, if you're already doing a component based system, why do you need inheritance? Just do a normal ECS. You don't subclass GameObjects in most implementations of ECS (and this is a good thing).
I guess I should do some more research regarding that. Although I just liked the idea of inheriting from a base "empty" game object; it makes it easy to have lists of GameObjects. Also, all of my components inherit from a Component class, which makes it possible to AddComponent() and GetComponent(). That way, a user of the game engine can create a new type of component and easily add it to any game object. However, I may be overcomplicating that as well...
Honestly, it sounds like you are over-engineering this... Forgive me if I'm wrong, but the impression I'm getting here is that you're writing the engine before the game.
Never do this. Just don't. What you should do instead is write a game, and while writing that game, write its engine at the same time. (Or even: write a game, and then refactor the engine out as you go.)
Then, after you're done, that engine can then be extracted and made to be more generic. This is basically how every engine used in the game industry was made (although in many cases the game the engine was written with never shipped).
Trying to make a generic engine from the start will be worse in basically every measurable way. It will take longer to write, take more code, use more memory, and be slower...
Again, sorry if I misjudged your comments. Anyway. On to what you said specifically:
Having an empty base object so you can have lists of GameObject (presumably lists of `GameObject*` in reality) is basically going to destroy performance and the cache. A reasonable rule of thumb is that a read from memory will take about 100 times longer than, say, a float multiplication, unless you know it will be in the cache. Then, to operate on these game objects, you'll probably use virtual methods. Another rule of thumb is that vtables are basically never in the cache (and they're also unpredictable branches).
Really what you want to have is several flat arrays of the data each part of the engine needs to operate on. This is also good from an encapsulation standpoint, because then each part of the engine only can see what it needs, and not necessarily the whole game object. Then, the way you'd implement a component system in this style is that you'd make that array the canonical place the data lives.
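Concretely, a sketch of that layout in Rust (names invented for illustration):

// Instead of a Vec of heterogeneous boxed GameObjects, each subsystem
// owns flat, cache-friendly arrays of exactly the data it needs.
struct Transforms {
    positions: Vec<[f32; 3]>,
    velocities: Vec<[f32; 3]>,
}

impl Transforms {
    fn integrate(&mut self, dt: f32) {
        // A straight-line loop over contiguous memory; no vtable calls.
        for (p, v) in self.positions.iter_mut().zip(&self.velocities) {
            p[0] += v[0] * dt;
            p[1] += v[1] * dt;
            p[2] += v[2] * dt;
        }
    }
}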
This can work well for some games but isn't worthwhile for every game. (Generally I actually think the biggest benefit is that it makes gameplay and tools for non-developers easier to write.)
> Forgive me if I'm wrong, but the impression I'm getting here is that you're writing the engine before the game.
Your impression is a little wrong, see below.
> Never do this. Just don't. What you should do instead, is write a game, and while writing that game, write its engine.
I'm not making a game. We have an OpenGL game engine class in my Game Programming program at college. I'm creating my game engine with the idea that someone could simply include it as a library and then use its features to develop a game. This game engine class is now over; however, I'm still developing my engine purely for personal learning purposes. Yes, I am creating demos to test features of my engine, but I'm not writing anything that would be considered a game.
Regarding the rest of your comment, I totally agree with and understand what you are saying. This actually makes sense; it's just not the way that we've been taught so far in my program. Being students, we are familiar with the practices that our teachers use. This makes us somewhat close-minded, but it's really nice when people (like you) give a completely different way of doing something.
Thanks for the help! I'll look into these different methods of structuring my engine.
Examples != a game, and given that scenario, I would recommend trying to make a game at this point, but I'll leave it be.
And that's fair. I didn't think this way until after working in industry for a while, and my code from when I was at school was very high level and OO.
If you're interested, Mike Acton (lead at Insomniac, and one of the smartest people in the industry) gave a good talk at CppCon, which you can find online, about data-oriented design[0]; it's basically what I'm talking about (he's a bit more extreme than I am). I'd also recommend looking at his 'Typical C++ Bullshit' slides[1] for a shorter and more amusing take on the issue.
I guess I should mention... I did write a maze game for a final project in a different class with it. This semester, I was one of the few that actually wanted to work hard on my engine. I just find it really fun because there are so many problems you have to think about when developing an engine, and even more solutions. It's very rewarding for the brain.
This semester, I used an app to keep track of how much time I put into every class.
- game engine: 106 hrs
- AI: 10 hrs
- physics: 10 hrs
- ogre: 12 hrs
As you can tell, my game engine was the thing I worked on nearly every day, mostly late at night. If I had to guess at my class's average hours put into their game engines, I'd say a safe figure is around 15 hours, if we don't include my time and that of the 2 others who also put an insane amount of time into their engines (it was pretty much a competition between 3 friends to outdo each other).
Back to making my maze game... We had the entire semester to work on either a solar system or a maze game. I pumped out the maze game in 3 hours with my engine on the day it was due. The thing is, I was confident with my engine. I had put so much work into it, and understood so well how it worked under the hood, that I knew I could produce something very quickly and easily with it.
The game itself was simple. Have a first-person camera walk around the maze, pick up a key, and then go to the maze exit and open the door. Even though I did it in 3 hours, I also included the ability to pick up a gun and attach it to the camera like in an FPS, added a skybox, added a simple sine-wave twirl for objects sitting on the ground, and a few other small details here and there.
Anyways, sorry I went on for a little bit there. I'm just really happy with how much my engine is progressing, but it still needs a lot of work and polish. I'm way more open-minded now after this discussion. I feel confident about not needing inheritance now, especially since I feel like in a couple of months I might port everything to Rust.
I also realize that it was very risky leaving that project until the last minute. If I had run into an issue, it could have severely messed up my mark. Making a game at the same time as an engine does indeed help with the engine development; I see the benefits of doing so. When I get a chance to continue working on the engine, I will probably continue developing the maze game alongside it.
Anyways, thanks for those links, I'll be checking them out tonight!
Something that addresses your concern is coming post-1.0. Servo needs something like inheritance to model the DOM efficiently. It may or may not end up looking like inheritance, though. We determined that our possible solutions are backwards compatible, so we are doing it post-1.0.
(Oh, and it's not that Rust gets everything Servo needs; what I mean to say is "Servo has demonstrated that there is a real-world need for something like inheritance, and so we will add it.")
That's exactly what I wanted to hear! I heard some mumbles about inheritance coming back in one way or another, mostly on GitHub issues I believe. Can't wait to see what comes of that!
> Duck would have a GameObject, rather than be a GameObject
Duck should not be a class; it should be a factory function that creates a generic GameObject and configures it with the set of components that allow it to look, walk and quack like a duck.
Once you move to components, these implement all the game-related behaviour and the GameObject becomes a simple piece of scaffolding to hold components together. You don't gain anything by making different GameObject code-level classes that only differ in the components they contain (there may be a debate in the case of languages that support mixins).
It's also much easier to data-drive entity types; at some point you will even do away with factory functions for GameObject types and describe these types in data files loaded at runtime. This opens up options for designer-friendly editing tools and even 3rd party modding.
Solution: Don't inherit from GameObject. Make GameObject "final" and simply be a container of components. Move all object-specific behaviour into the components.
But I still need inheritance: DuckComponent would need to inherit from Component in order for my GameObject to add it to the list of attached components.
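In Rust that shape falls out of trait objects rather than inheritance; a minimal sketch (all names invented, using the pre-1.0 `Box<Trait>` spelling):

trait Component {
    fn update(&mut self);
}

struct QuackComponent;

impl Component for QuackComponent {
    fn update(&mut self) { println!("quack"); }
}

// GameObject is final-by-construction: just a bag of components.
struct GameObject {
    components: Vec<Box<Component>>, // Box<dyn Component> in today's Rust
}

impl GameObject {
    fn add_component(&mut self, c: Box<Component>) {
        self.components.push(c);
    }
}

// "Duck" is a factory function, not a subclass.
fn make_duck() -> GameObject {
    let mut duck = GameObject { components: Vec::new() };
    duck.add_component(Box::new(QuackComponent));
    duck
}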
If you want closer performance characteristics and simpler use patterns, in the meantime, storing the parent GameObject by value rather than through a pointer might be easier.
Stupid question, but how can a web developer relate to Rust? I had to look up 'systems programming' and Wikipedia basically told me it is writing software for certain hardware components.
My background is primarily web development, however I've been writing a text editor in rust for a few months now. Prior to that, I had never written any software outside of the web.
Honestly, it's not such a big change once you get into it. I'd encourage you to just give it a go and build something just for fun with it, be it a command-line tool, a game, whatever. My experience has shown me that having only been involved in web development by no means limits you from systems/low-level development!
Even assuming all those things are complete, it's not necessarily the best fit for your typical consumer web application. Not unlike how you wouldn't generally turn to C++ for one.
Congrats to the Rust team on all their hard work paying off.
Looking forward to watching Rust expand into almost every corner of the development world: web, applications, systems, embedded, safety-critical, games, hard real-time, on so on!
Hehe, I also don't understand why people always talk about summer and winter like they're universal things. Heck, there's no such thing as summer or winter in some countries!
Well here we are. I think this is what a lot of folks were looking for, anything before Alpha just seems too bleeding edge. Will be interesting to see what the next year holds for this language.
Congratulations to the Rust team. I've been having fun in the language for a while; I can't wait to start developing serious tools now that we have a stable release :)
Pretty nice! I've mostly used Rust nightlies till now. The language, the ecosystem, etc. seem really mature now. I've been following language development for a while, and I've never seen something that is a genuinely new language get this far before 1.0, or in such a short time.
The language developed rapidly, without shying away from reinventing things and changing opinions a lot. I am not sure how they did that, but it's really impressive. Usually languages lack documentation, stability, performance, etc., have lots of rough edges, and have no users or libraries, but none of that is true for Rust.
I am really curious about how this was achieved. Maybe someone involved could describe how that was possible. I am sure I'm not the only one interested in this.
A year ago there was an article about using Rust for an undergraduate class on operating system development. Rust was 0.7 back then, and it was very different and way less mature.
I just checked out the GitHub repo on a quad-core 3.4GHz i7 with 16GB RAM, and make -j8 took 37 minutes -- the C/C++ stuff (like LLVM) built in parallel, but all the Rust stuff did not, such that 7 of 8 cores (HT) were idle for the bulk of the build.
Rather than trying to run the compiler itself on a given platform, I'd recommend cross-compiling binaries from a known platform. I'm not an expert in this aspect of Rust, but here's some example Rust code that targets the PSP: https://github.com/luqmana/rust-psp-hello (AIUI the magic is in the psp.json.in file, which communicates a target specification to the compiler).
Feel free to come ask in #rust on irc.mozilla.org if you need some experts to consult!
I have a VPS running x86_64 Debian unstable so I installed Rust on it now.
I compiled a hello world on the VPS, the content of which is:
fn main() {
println!("hello world");
}
using
rustc main.rs
and the resulting binary ran on the VPS and said
hello world
I went to the example Rust code you linked that targets the PSP and had a quick look at it, but it was kind of a lot at once, so I went looking for alternatives and found https://github.com/japaric/ruststrap, which seemed promising, but the README was not very clear and the archive hosted by that person is from 2014-12-17. I cloned it to my VPS and attempted to run ruststrap.sh, which it didn't want to do unless it was root, so I let it be root, but it ended with
+ apt-get install -qq g++-arm-linux-gnueabihf
E: Unable to correct problems, you have held broken packages.
So I'm not sure if I was supposed to run that on x86_64 or not or if maybe I was supposed to run something else first. My VPS is a bit of a mess so that could be the reason also.
I am going to go back now to the example Rust code for PSP you linked and look more at it, it seems to be the most promising at this point (though it will be a bit inconvenient for me in the long run to do any development on the VPS instead of locally.)
For a long time only 32-bit archs were supported for iOS, but just yesterday a PR landed with preliminary support for 64-bit archs. As Rust uses LLVM for codegen, there might be a couple of minor glitches, but overall it should be stable.
Considering that in February Apple will start rejecting apps which do not support arm64, it is just in time :-)
So I'd say that iOS is almost a first-class citizen - the only drawback is that it is not in the build bots, and therefore `master` can sometimes be broken; in that case you can check https://github.com/vhbit/rust, which may lag a bit but is always buildable for iOS.
Hi everyone,
Have you guys already built any cool projects you'd like to share with us?
I just quickly made a site where I would love to showcase your Rust projects: http://builtwithrust.com/.
Thanks all,
It would be more useful in a format which works on Kindle, since that's by far the most common. It wouldn't hurt to support both, and also PDF. If there were a single-page view of the book, it would be quite trivial to export it to a single ebook file.
Open-source tooling for creating MOBI files is nowhere near that for EPUBs, and Calibre does a good job of converting EPUB to MOBI, so I don't mind when projects offer only EPUB, although I'm a heavy MOBI user.
One reason is that let takes a pattern, so you can do things like:
let (x, mut y) = ...
Another reason is that we feel 'mut' more cleanly communicates mutability than 'var'. We also prefer immutability by default, and let/var doesn't communicate that as nicely as let and let mut.
There are some discussions about this in the ML archives, the RFC repo, or discuss, if you're interested.
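To illustrate both points (a tiny sketch):

let (x, mut y) = (1, 2); // one pattern introduces two bindings
y += x;                  // fine: y was declared mutable
// x += 1;               // error: cannot assign twice to immutable x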
'mut' conveys the same thing you are trying to convey with 'var' in a much better way. The term "variable" in programming languages doesn't carry any connotations of mutability or immutability. All it traditionally means is "named value" or "named memory address".
It's actually entirely reasonable to have an "immutable variable" -- Rust uses this phrase in its error messages and it's perfectly sensible. For example, consider this snippet:
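// (The original snippet didn't survive; something in this spirit,
// where y is immutable yet takes a new value on every call:)
fn is_even(y: u32) -> bool {
    if y == 0 {
        true
    } else {
        !is_even(y - 1)
    }
}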
Not very idiomatic, forgive me, but would you say that y "varies"? I would say yes, it varies for each invocation of is_even(). If y didn't vary, "if (y == 0)" would be a nonsense statement. y is certainly immutable -- you can't go assigning new values to it -- but it's definitely a variable.
The opposite of "variable" is "a constant," not immutable. 'mut' means "mutable" and "mutable vs. immutable" is the choice here. Rust got this right, I think.
They fill the same role as concepts - i.e. bringing type checking to the call site when using parametrized types, rather than using a duck-typed approach, which leads to the big template stack traces that you get in C++.
There are some technical details that make them different, if I recall correctly. But they're kinda similar. They're also close to Haskell's typeclasses.
// Old-style generics; monomorphized
// with S as an "input" type of MyTrait
fn foo<S, T: MyTrait<S>>(elem: T) { ... }
// Where-clause-style generics; monomorphized,
// with S as an "output" (associated) type of MyTrait
fn foo<S, T>(elem: T)
where T: MyTrait<Thing = S> { ... }
// Trait objects; dynamic dispatch
fn foo(elem: Box<MyTrait>) { ... }
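For completeness, a sketch (mine, not the parent's) of the two trait shapes being compared above:

trait InputTrait<S> {      // S is an "input": the caller picks it
    fn consume(&self, s: S);
}

trait OutputTrait {        // Thing is an "output": each impl fixes it
    type Thing;
    fn produce(&self) -> Self::Thing;
}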
To be fair, there were _some_ languages and frameworks which did not have generics at their 1.0 release. It is a bit strange to assert that a new language should.