I'm curious what Rust says about this. Does Rust have a memory model like C11/C++11? I'm curious whether Rust (and C11/C++11 for that matter) will evolve to have primitives like what the Linux kernel currently defines and uses.
We do not have a formal spec yet, and so do not technically have a memory model like C or C++ do. However, since we're built on LLVM, we do have some semantics that are pretty much based on it. Our atomics are pretty much LLVM's. A lot of the work going into forming said models is determining how much of LLVM's semantics we've let leak into the language in the meantime, and where it's feasible to diverge.
As part of defining the Rust semantics they will certainly tackle the question of the memory model. Dreyer and company have a track record of providing semantics and reasoning principles for weak memory models (like C11).
You might have to wait a while for a rigorous semantics though.
I'm curious: do you mean that C++11 has a standard spec for the memory model you describe, but since Linux doesn't use it, Linux doesn't actually adhere to the C++11 standard? Does that mean the compilers used to build Linux are non-standard in general?
Let me clarify first that when I say "Linux" in this post, I'm specifically referring to the Linux kernel, not user-space software that runs on Linux.
The Linux kernel is written in C, but uses lots of GCC-specific features and inline assembly to address things that aren't covered by the C standard. Among these, the Linux kernel has its own set of atomic types and memory barriers. It documents them here: https://www.kernel.org/doc/Documentation/memory-barriers.txt
So even if Linux is being compiled on a conforming C11 compiler (the Linux kernel is written in C, not C++), it uses its own atomic primitives instead of the ones provided by the language. Hope that helps.
I think you are confusing specification with implementation. The C++ standard does not dictate how a compiler implements a feature, just how it must behave to match the standard. There are several compilers that adhere 100% to the full standard, including the guarantees it specifies in the form of memory and time complexity.
Oh, I know that specification != implementation, but IIRC both Clang and GCC have certain behaviours that do not conform to the standard. I can't find examples right now, though, so I'll drop it here :)
As the HN crowd seems to have quite a lot of Rust supporters: would it be a good selling point in a job description?
i.e., if (this is just a personal hypothesis for now) a company were considering (re)writing some part of its REST-ish microservices, had chosen Rust as the language, and was looking for people to help with that, would it earn an `interesting++` in your mind?
for real services used by real people, at a not-so-startup company, in Europe.
edit: At my previous company I already deployed some microservices in Rust to production (with a very strictly limited scope, and with everyone's approval), and it was quite a success. So I'm more and more convinced that Rust is now developed enough to fit the market of languages for microservices, since the work is more or less "understand HTTP, read/write from Redis/PostgreSQL/MySQL/memcache, do some transformation in between", and Rust now supports these operations quite well.
Yes it would. People love to be trained on a new software stack at someone else's expense, especially if they're not having to take a seniority hit to go there.
It just seems like an odd use case for rust. I can see how it would be attractive as an alternative to c++ or other systems language. But for shuffling data?
Also, since you mentioned PostgreSQL, if you can make the db handle json the shuffling part should be really trivial. Give node.js + pg-promise a spin for comparison.
But then again, I haven't used Rust for anything serious, so am happy to be wrong.
1. Rust handles errors in a VERY expressive way, and I want my services to log ANY error and to be sure not to miss one (kind of like what Go permits too, but Rust errors are more flexible). With Rust it's hard to forget an error case and end up with "errr, my API returned a 500 but I don't know why".
2. It's a (nearly) self-contained binary with low memory overhead; I can run dozens of microservices like this on my workstation / CI environment without making the machine cry.
So I can deploy it on cloud provider X or workstation Y without worrying whether it has the version of Node / Python / PHP I need.
3. It produces very small Docker images if you're using Docker (so it beats Node/Python/PHP here).
4. If I want to make my services multi-threaded and keep some internal cache for some reason, I'm sure the compiler won't let me create concurrent-access problems that I would otherwise only detect in production, when it's too late. (This beats every other language, including Go; the Go race detector is nice, but it works at runtime, so it's not in the same league.)
5. No nil-pointer-exception business (beats other languages here too).
6. Strong but flexible typing (no, Go, I don't want to cast everything to interface{} every time I need to make things a bit more generic, though I know that's just a personal opinion), because no, I don't want my developers to either write an "is_true" function or, worse, rely on type casting for an if to work.
7. The compiler is my first set of tests: there's a whole category of things you don't need to write tests for, because they're already handled by the compiler.
8. High-level when you need it, low-level when you want it. I know the language is not my bottleneck (or if it is, I've reached the point where I don't care because my system is fast enough), and if I need to serialize my struct to JSON, I can do it in a few lines of code.
9. Plenty of other little things: variable scoping, immutability by default, the option to have your SQL requests checked at compile time if you use the PostgreSQL plugin, first-class debuggers for stepping through code when you need to, and pretty easy interfacing with C libraries.
Your reasons for Rust above basically all apply to Haskell too, except that it's higher level and garbage collected, which is why I think Haskell is the best language to write a web service in.
But, you could write embedded software, device drivers and even operating systems in Rust, that's the domain where Rust could and hopefully will dominate.
Like all things in life, the higher abstraction level of Haskell is not free. The price is a more complex language (which many people don't have the capacity to properly learn) and less predictable runtime behaviour.
As a self-taught programmer who's getting pretty comfortable with Haskell and also starting to dive into low-level languages (C, x86, Rust), I don't think Haskell is that conceptually complex. It's very abstract, but it's also quite consistent. Because its semantics are designed around mathematical laws rather than shuffling bits between CPU registers and RAM, it doesn't have a lot of gotchas, WTFs, special cases, or safety rules to keep in your head. In Haskell, everything is an expression, all functions are closures, and all bindings are recursive. That means you can inline or extract expressions with incredible freedom, which makes refactoring so much easier.
Yes the runtime behavior is definitely less predictable with lazy evaluation, which is fine for web servers but terrible for systems programming.
I've always liked the idea of using "systems" languages for data shuffling, because of the ability in some cases to stretch hardware several orders of magnitude further. I read this a while ago and just dug it up:
It's a bit silly, but I do think that Rust could be a very good solution here. For one thing, in my experience it is much easier to write than C++, especially for something you'd feel comfortable pointing at users.
It would be cool to see some articles that are the reverse of "scale all the things," something like "sympathize with all the caches."
Writing a web server from scratch in an unsafe language seems like an invitation to get hacked to me. So maybe in Rust this would make sense, but C++ is a dangerous proposition, I think.
People often talk about scaling in a lopsided manner, as if scaling only means scaling up. It works in both directions: for an architecture or runtime to be scalable, it also needs to shrink as the demands placed on it fall.
Yes and no. Realistically you often have other constraints. E.g. for a lot of code I wrote at my last^2 job there would be no value in scaling below a single micro EC2 instance (I ran each service on its own instance and got value out of doing so), and almost all languages can scale down that far.
I agree here. I'm often attracted to new stuff, so working with something new (like Rust) would be a plus in a job description, but I also think it's the wrong tool for this kind of task, which would yield a minus for the job description.
This is the classic domain of scripting languages (PHP, Ruby, Node, ...), where a huge ecosystem exists for exactly this kind of task. But yes, type safety and error handling are not the best there.
If a good type system and a good ecosystem are desired, then F# or a JVM language (Scala with the Play Framework, Kotlin, ...) could be used, which from my perspective would give a more productive setup for this task.
I don't want to say that Rust is not good, but (just like C++) I think its best use cases are applications other than web servers, e.g. high-performance audio and video processing or bare-metal software.
It would be for me, but like dikaiosune I've drunk the Rust koolaid almost fully by now.
But honestly, even if I hadn't, it'd be a selling point to me, because it tells me that you're a place that isn't stuck writing everything in a single language, but is willing to try new things and to use what seems to be the right tool for the job.
Well, sometimes, depending on company size, number of developers, the developer pool's skill set, and many other factors, having a single language (or very few) is itself a factor in being the right tool. It's better when that's not the case, though.
A friend of mine asked me about this last weekend, and I basically said that I've drunk enough kool-aid at this point to do anything to use it as my main work language. I'm lucky enough right now to have a lot of autonomy for my projects, and I'm using that to build some tools in Rust, but it's different than being able to focus on it full time.
I know the feeling. I'd sacrifice my own free time to fill the gaps I know it has compared to Rails; that's only fair, as I'd operate 10x faster with tools I'm comfortable with and a full toolbox. I'm considering Rust, but I'd prefer Swift or TypeScript.
Because he doesn't want a package manager that always fetches HEAD?
I'm not saying Go is a poorly designed language where the answer to every decision is the easiest one for the implementors. I'm just saying there's a legitimate case to be made.
That's a great write-up on NULL. I've always done things to avoid it, but never thought deeply about why, past the obvious stuff. Learned something, especially with the Option stuff, that might pay off in the future. Thanks. :)
Have you actually had problems with nil pointers in Go? I've written it for years without facing it so perhaps you're doing something radically different if you managed to come across it.
My experience is the same. Go mitigates pointer problems very well by having zero values and explicitly separating error returns.
If anyone complains about having nil in Go, I usually assume they're either forced to work on a really bad codebase or lack experience in Go.
I've worked on a Go codebase for ~2 years, and I'm pretty sure the only nil pointer dereference I've seen was due to a concurrency bug, where an object cache (implemented as an array, not sync.Pool) was not locked properly.
The problem with having nil is when you expect a value to be either a valid pointer or nil to indicate an error, and you forget to check for an error. In Go, in my experience, explicit error checking and multiple return values makes that a non-problem, however ugly some people consider it to be. If I see error assignments to _, that makes me raise my eyebrows.
How do zero values help here? Zero values would seem to me to make nil pointer panics easier to run across, because it means the language can silently insert nil into your data structures or assign nil to your variables without any syntactic indication.
> The problem with having nil is when you expect a value to be either a valid pointer or nil to indicate an error, and you forget to check for an error.
That's not the problem with null pointers. Null is used all over the place for non-error conditions. The problem is when an object legitimately might or might not be present, but the type doesn't encode that fact, so some code mistakenly assumes that the object is always present when it isn't.
> The problem is when an object legitimately might or might not be present, but the type doesn't encode that fact, so some code mistakenly assumes that the object is always present when it isn't.
Yes, reworded slightly: types encode value invariants, and nullable types can't encode one of theirs.
My complaint with nil in Go is that there are different kinds of it: a nil pointer to a struct is not equal to a nil value of the interface it implements, and code like https://play.golang.org/p/NwKvIztXwP prints false.
This is the reason for the rule of always returning the error type in Go, not a pointer to a detailed error struct. That leads to ugly code where a check for a non-nil err is followed by a cast.
Hey, main author of iron here, I'd love to hear more about your use of iron! It's always nice to hear from users and your feedback is crucial to driving the project forward.
Feel free to respond here or reach out (contact details in my profile, you can find me as reem on the mozilla irc network).
Awesome! I'm working on a home automation platform as well! Not close to being done (started last week). I'd love to see some code or examples of what you've come up with!
This isn't the case. We have web backend libraries (servers, database thingies, template engines, I/O, etc) and it's totally possible to work on backend stuff in Rust. Rust doesn't "not target" much, really. Most of what you do in Python/Go/Ruby is also something we want Rust to work for. There is a false dichotomy of "fast language or safe language" which Rust does away with -- it's not only for situations where you would usually use C++.
> Until the rise of scripting languages we would use compiled languages for everything.
That was because at the time, forking was expensive, threading was too hard, and memory leaked like a sieve. Rust addresses all of these problems, and does it nicely.
>Almost every programming language can have a REPL, it is called an interpreter. Anyone with a CS background should know this.
That's the theory though. The practice is (was) that Lisp had an excellent REPL, and most languages didn't have one at all, or had a crappy one.
Some languages lend themselves to a REPL by making it much easier to program one. iPython is great, the JS ones are decent. Go attempts at REPLs, on the other hand, are mostly BS. C++ REPLs (Cling, etc.) are neither very handy nor very popular. Etc.
The truth is, some languages handle the REPL situation much better than others. And "scripting" languages are usually better than others at that.
And a REPL is not really just an interpreter. You can have an interpreter without having a REPL, or without the kind of full-featured REPL we're talking about. Most BASICs ran on an interpreter but didn't have a REPL, for example.
I am pretty comfortable with Lisp, which isn't a scripting language at all; it offers a very good REPL experience and the option of compiling to native code, both at the REPL and AOT.
Also, BASICs not only had an interpreter; they also had AOT compilers. Visual Basic had a REPL; it was called the Immediate Window.
Scripting languages don't have AOT compilers to native code, nor is Lisp a scripting language, so I don't get the point of the comment I was replying to.
On error handling: in Go, in most libraries and even some parts of the standard library, errors are constant strings, which means that, for example, you can only do
`err == foo.DUPLICATE_KEY_ERROR`
but you can't extract which key. So you either use an external library to do some string masking/unmasking, which complicates things and adds one more dependency, or you say "screw it, I will just log 'duplicate key error'", which is suboptimal.
I really did try Go, to the point that I even proposed a fix in the std library for a long-standing bug in the xml library (https://go-review.googlesource.com/#/c/15684/), but for all the reasons stated above, while Go is "yes, sure, an improvement over PHP/Python/C++", Rust is an even bigger improvement.
If I couldn't find a job writing Scala or OCaml or F# (all of which I'd consider better options for rest-ish backends) then I'd look for one writing Rust.
As I got more into this kind of language I found that the language tools gave me very strict isolation, which lowered the value of separating everything at the process level. So I've shifted towards a more monolithic style of architecture.
Maybe to illustrate the difference between programming for pay and programming for fun that many students might eventually experience. They use an extreme example to scare away those who aren't committed enough. ;)
Actually managed to sneak some Rust into production. Nobody cares what language you wrote the DLL in, just that it does what the docs say it should do.
Wonderful! Are you able to say which company you deployed this in, and if you've run into any trouble? And by DLL, was this in Windows? I can't recall if we've heard of any Windows users putting rust into production yet. We're quite interested in making sure our production users don't run into any problems. If you'd prefer to not speak in public, you could reach out to our private channel, community-team@rust-lang.org. Best of luck!
Truth is, it's a developer tool that just provides basic logging for debugging. It's not sexy, and it's not actually shipping out of the shop. Just a few members of my dev team have adopted it. And it was written on company time, so the company technically owns it.
I work on an aging framework for electronics testers written in a combination of C#/VB/LabVIEW; its highly asynchronous nature makes debugging the system an issue.
"And it was written on company time, so the company technically owns it."
I figured this and you possibly breaking the rules were the reasons for discretion. I also figured you were forced to use aging crap that motivated you to try alternatives. A painful, but common, thing in industry. Hence, me reminding the other commenter not to expect a source reference on it. ;)
"I work on an aging framework for electronics testers written in a combination of C#/VB/LabVIEW; its highly asynchronous nature makes debugging the system an issue."
I usually try to drop papers or articles at moments like these but have nothing relevant. There's only a few articles in my collection on asynchronous systems (outside I/O) because they're so hard to verify. I did find a few resources. Would have to know if the problem is communications, data, what before I could attempt suitable references for you.
You probably already do one of my base recommendations here: tracing flows and input validation on them. That is, the equivalent of asserts on input, and/or monitors with read access to global state comparing data/states against ranges or rules in a policy. Taint methods are also helpful, where you tag data in the datatype with details about where it's been and what's happened. That's more advanced, and I'm not sure there are libraries available for those platforms. The other technique they can easily handle: I did it in VB long ago.
"Screw diamonds: legacy systems are forever."
That's friggin' great. Need to make a meme image out of that with some COBOL or RPG on it. :)
> There's only a few articles in my collection on asynchronous systems (outside I/O) because they're so hard to verify.
GRR, it isn't hard, just time consuming. Yes, it's not fully verified.
>I also figured you were forced to use aging crap that motivated you to try alternatives.
I'm actually a polyglot programmer trying to pay the bills. Working in aging tester frameworks pays a lot more than web development. I like Rust because it's basically C-with-a-type-system. It's still very easy to reason about how the Cee-LangVM will handle your code.
But yes. Working in a code base from the 80's and switching to Rust is lovely.
:.:.:
The core purpose is exception logging. A lot of the exceptions will get mutated/modified before being reported. "Servo drive failed." is a lot less useful than "Servo communication fault 0x274077343", which I've memorized to tell me that a power surge knocked a COM port offline.
"GRR, it isn't hard, just time consuming. Yes, it's not fully verified."
I believe it now after pulling papers. I even know exactly how hard it is and what parts are decidable.
"Working in aging tester frameworks pays a lot more than web development."
Makes sense. It's why I advise against commodity jobs. The niche or unpopular stuff usually pays better. Not always, but usually.
"The core purpose is exception logging. A lot of the exceptions will get mutated/modified before being reported. "Servo drive failed." is a lot less useful than "Servo communication fault 0x274077343", which I've memorized to tell me that a power surge knocked a COM port offline."
Hmm. That's more straightforward than most async. Just detail-oriented work, as you said. The taint idea of tracking the exact sequence of mutations, combined with design-by-contract checks of function-call contexts, would certainly help. How much, I can't say, as your combination of tech obscures it; I don't know enough LabVIEW, mainly. Microsoft has things like Spec# and verifiers for the C# part, with VB6 easy to ignore if you keep logic out of it.
So, you're in better shape than many doing multi-language, legacy work. At least as far as verification concerns.
Don't learn it. It's a terrible language, and a worse IDE. Yeah, you'll earn over six figures with two years' experience, but you'll waste a lot of time debugging issues with the runtime itself, and the language isn't consistent, which can lead to extreme headaches and development lag. But NI will compensate for lost development time with free hardware, so companies love it.
The tools for tracing/testing in LabVIEW are annoying and kind of primitive, hence why a separate callable logger is useful. Mainly, in LabVIEW all functions start out as heap-allocated monads wrapped in a mutex, so basically every function has state initially. You then have to opt out of this to get a pure function.
>Hmm. That's more straight-forward than most async
Well, yes. We're not exactly concerned with verification of the asynchronous system, just making sure everything happens in the right order and each thread is actually doing what it's commanded to do.
At a certain level you can abuse asynchronous systems to make certain measurements easier. Just assume your input has a half-Gaussian input-lag distribution, and suddenly a 200-sample moving average becomes sufficient for smoothing data, instead of jumping through hoops of fire doing weird data synchronization.
"But NI will compensate lost development time with free hardware so companies love it."
WOW! I never even thought about that business angle to dealing with shoddy tech. Thanks for the warnings.
"Well yes. We're not exactly concerned with verification of the asynchronous system. Just making sure everything happens in the right order, and each thread is actually doing what its commended to do."
Glad you at least have an easy approach to dealing with that nonsense. :)
OK. So, thanks to your comment, I looked up a bunch of CompSci and stuff on asynchronous verification to see what old and cutting edge methods are. Looks like it's a mostly solved problem at the conceptual level. Even found a method from NASA for Globally-Asynchronous, Locally-Synchronous (GALS) which I keep stumbling into in hardware.
So, good news, there's a number of techniques for various aspects of this at a range of mathematical abilities. Might be able to make an informal, knock-off of one or more for your use case. The bad news is that I found more papers than I wanted to thoroughly read. Not filtering them myself. If you want, I'll drop you a link to an archive or links to individual papers for you to skim at your own pace. If not, that's cool too as I needed them in my collection anyway for high-assurance, asynchronous systems. Especially given all the uptake on async in mainstream.
So, it was great you replied with those details regardless. Might have inspired a future, bullet-proof system in an unusual instance of the Butterfly Effect. ;)
It's a fair point, but there's a lot of write-only Perl, PHP, and C out there. Let's make sure we don't hold new Rust code to a higher standard than that stuff.
Ideally we should hold all new code to a higher standard than that. All programmers spend much more time reading code than writing it.
Likewise, Rust is not finalized as a language yet. That means whoever has to maintain the original commenter's code is likely to be in for a rough time if and when they ever attempt to bump the Rust version.
> if and when they ever attempt to bump the Rust version.
If the parent is using a stable release, like this 1.6 is, then bumping the version should be all they need to do. (Unless they're relying on a safety hole for some reason; we do fix those)
Rust development was done in public (unlike Go or Swift which started out privately), so we had a very long 0.x period which was public and had tons of breakages where the language designers tinkered with just about every tradeoff.
So it's easy to see where you get the impression that Rust is prone to breakages, but that doesn't happen anymore.
No worries. Every 1.x+1 is backwards compatible with 1.x, modulo some details. Usually breakage is mentioned in the context of the nightly ecosystem, which has no particular guarantee. We also reserve the right to fix soundness bugs, though we try to make upgrading easy in those situations if and when they occur (we've only had one or two so far).
Rust is not as finalized as any of the languages whose semantics are completely defined by a standard document. There is no definitive answer to the question of what a great many pieces of Rust do beyond checking out rustc from the commit date and examining the LLVM IR.
There are a very small number of languages which have those things. They are useful and we want them, but I think it's _generally_ an unreasonable standard overall.
Not if you weight by use. C, C++, Java, JavaScript, and C# all have specs. Among the most popular languages, it's really only the scripting languages that have the "as our implementation does" perspective on specification. I've always used languages that have standard documents, and communities that think those standard documents are very important, so rather than being unreasonable, I think of complete language documentation as the natural state of the world.
Incidentally, this isn't really a criticism, but it's precisely because I don't see Rust as finalized that it isn't. A language that's finalized but not specified is just poorly documented.
Most of those languages had no clear, formal spec early on. Especially C and C++: one got started with UNIX and ad hoc code, and the other started as C with Classes plus some extra features from other languages. IIRC, Java's initial mandates hurt it because they were terrible quality in a number of areas and took up to a 15x (yes, times) performance hit in some measurements I saw. JavaScript was two competing implementations with a ship-first mentality, with a bit of it standardized later and vendor-specific stuff left in. C# did have a language specification at v1.0, whose quality I have no knowledge of; I do know they internally beta'd it starting in the late 90's, so it was done similarly to Rust, but privately, for years before that spec.
So, I'm not seeing your examples as relevant or as a critique. Most were in use and changing [to their benefit] before any spec was created: three of those very publicly, one privately. Another sucked partly thanks to formalizations, with lots of money driving adoption. I think you should read Gabriel's Worse is Better essays... several, rather than just the first... to see why pushing a partly done or evolving system is the right approach for growth and accelerated improvements. Lipner at Microsoft, inventor of the successful Security Development Lifecycle, incidentally thought the same thing.
At least the Rust team is trying to fix problems and evolve in a robust way. That's quite rare for mainstream stuff, in my experience.
To be honest, I'm not seeing your pointing this out as relevant. I never claimed Rust should have a spec at this point in its life, or claimed any other language had a spec at an equivalent point in its life, and I explicitly disclaimed any criticism. I objected only to the statement that Rust is "just as finalized as any other language". Talking about other languages' youth and whether being specified early is a good thing is especially disingenuous, since the topic of the thread was whether Rust was a good candidate to use now, compared against other languages as they exist now, not making some kind of "fair" comparison against their early versions.
"I objected only to the statement that Rust is "just as finalized as any other language"."
I might have been too hasty there. Your context and this quote make much more sense; I say something similar myself. Still, they've already entered a 1.0 mode where they're not breaking stuff, with much of the language documented in guides. That's not formal, but it's pretty final on the language itself. Standard libraries and other tooling are where most of their work is right now.
So I don't feel it's an accurate statement, but it's close if we're talking about the language. I prefer to say the core language is pretty stable, or something like that.
Yes. Those are all old, established languages. And while they do have a _lot_ of use, there are also a significant number of heavily used languages that have no spec. And as you mention, it's not exactly a criticism; it depends on where you fall on the maturity scale of languages you wish to use.
It's only unreasonable due to Rust's young age. At Rust's age, none of those languages had a specification either.
Indeed, I'm hoping that Rust begins a push for standardization no later than five years from now, with a formal spec ratified no later than ten (which may sound like a long timeframe, but that would still put it ahead of the average standardization curve). And we're already making a head start on a few necessary aspects of a specification, such as formally proving that our type system is actually as strong as we think it is.
Developing a standard for Rust would have incredibly few practical consequences, given that there are no alternative implementations. It would, however, consume a lot of people's time. That time could be spent on things with practical impact, such as compilation speed improvements, stabilization of features, and better optimization.
The reason why Rust was stabilized in advance of a standard is that stabilization had an immediate practical consequence: ceasing to break code. A standard, by contrast, would have virtually no practical use.
Congratulations! I'm loving Rust, it's my go-to default language now. I'm going to start messing around with piston.rs to make some basic 2d games. I already wrote an irc bot with Rust.
For those wondering, "unstable" just meant that the APIs defined for interacting with some libraries were subject to change. It wasn't a problem with using the APIs; the developer just has to know that new releases might change how they work, or whether they'll even be available in the future.
String manipulation is good in Rust, but Rust strings are Unicode and characters are Unicode code points. IRC is an ASCII protocol, so I went with byte slices (`&[u8]`) for parsing.
For network stuff I used openssl[0] and std::TcpStream [1]
I didn't use a GUI. I plan on using piston.rs[2] to build games but there are Qt, GTK+ and even Cocoa integrations for rust.
Yes, it's a pity that Rust requires strings to be UTF-8. Go's designers got it right: their strings are just immutable sequences of bytes, making them very suitable for network protocols and binary data.
One cannot use the string library in Rust against byte slices. For example, regex works only against strings, making it unsuitable for processing, say, log files that may contain non-UTF-8 bytes.
This is somewhat incorrect, as Rust doesn't have a standard regex library. You're likely thinking of burntsushi's regex crate, which made its own decision to be Unicode-aware. There's no reason that someone couldn't come up with their own regex library (or fork burntsushi's, or just wrap an existing C regex library) that works as you desire.
Yea, also found that limitation of the regex library pretty annoying. But conceptually this has nothing to do with string slices being different from byte slices... It might encourage it though.
FWIW, I think getting the underlying string representation right (and, relatedly, how it interacts with byte buffers/slices) is really hard. So far, I think Perl 6 probably has the most well-thought-out structure in that regard, but they took a decade and a half and lots of testing to come up with it. Then again, how it works in Perl 6 is also a function of how the language itself works, so it wouldn't be a direct mapping of concepts, but there are some really good ideas there.
There are wrappers for basically any C/C++ library you can think of. I found string support to be pretty great from the standard library (compared to C++), such as UTF-8 and string interpolation.
In terms of networking, I threw together a working console IRC client in about 60 lines of Rust (using standard library threads and TcpStreams). And I'm very much a Rust beginner. And Rust forced me to consider error handling in a way I wouldn't have otherwise.
I should also say that the community is unusually supportive: I've asked some questions on the IRC channel and got really helpful replies (and haven't noticed any of the snark that comes with a lot of IRC channels).
irc client: Everyone's least favorite Gtk project. For whatever reason, at one time there were seven active projects to develop a Gtk irc client. Used to deride poor judgement in choosing projects or lack of knowledge about what's going on. Often used self-deprecatingly, as in "I know, I'll write an irc client."
An IRC bot is a great way to learn a new programming language. You have to learn how to use threading or asynchronous operations to get multiple connections, you have to learn how to use TLS for secure connections, you have to learn how to manipulate strings, how to read configuration files. It's a super good project for learning.
I've poked at the language a bit over time and while I don't think I'll ever "get" rust, I can say the folks in #rust on irc.mozilla.org are friendly and helpful, which can't be said for all languages.
The docs stomped me, too. I sent suggestions for metaphors and such that might help confused people. They're hard at work on the issues from what I've seen, though. So don't give up yet: check back in a year or so to see if revisions bring you extra clarity. :)
This will let you have just one [dependencies] block, which is a bit nicer looking. I've been changing examples to use it instead of the older style; they're equivalent, though, so use whichever you prefer.
(Life will be easier for you in the future if you avoid `*` constraints by using things like `router = "0.1"` instead. This means authors can make breaking changes to their crates without breaking you; e.g. router might make a major change and release 0.2.0, but you'll still be able to run `cargo update` without risking picking that up.)
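For illustration (crate name and versions are just examples), the difference looks like this in Cargo.toml:

```toml
[dependencies]
# "0.1" means ">=0.1.0, <0.2.0": bug-fix releases arrive via
# `cargo update`, but a breaking 0.2.0 will not be picked up.
router = "0.1"

# By contrast, "*" accepts any version at all, including future
# releases with breaking changes:
# router = "*"
```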
For many projects (that aren't depended on by other projects), adding versions in Cargo.toml is just more work without payoff. Specifically, if one is already shipping Cargo.lock (in a way that it is used), `cargo update --precise` (in some form) allows the same granular control.
I'd also note that while theoretically restricting versions works, in practice having multiple versions of a single crate within a rust project often leads to build failures due to crate A having a public API that uses types from crate B, and crate C using both, but not getting the same version of B that A got. This leads to type errors at build time.
With `cargo add` it's even easier than adding them as "*" dependencies. Just `cargo install cargo-extras` to get `cargo add`, and then `cargo add my_dependency` will add the dependency with the most recent version as the requirement.
Yeah, the Cargo.lock file means that * constraints aren't a problem until you need to update a library, e.g. to get an emergency bug fix. And, you are right that cargo update's --precise option allows manually getting the same benefits of explicit requirements, but even doing this once seems like more work than just including the right version, especially with the risk of `update`ing wrong (although, again, Cargo.lock can save you here, if you're tracking it in version control). The combination of `cargo search` (built-in) and `cargo add` (external, others have linked it) makes it pretty easy to just add the right versions at the outset.
> I'd also note that while theoretically restricting versions works, in practice having multiple versions of a single crate within a rust project often leads to build failures due to crate A having a public API that uses types from crate B, and crate C using both, but not getting the same version of B that A got. This leads to type errors at build time.
This is a problem, yes, but focusing on it by itself is somewhat of a red-herring: if A doesn't restrict B's version there's no guarantee that A compiles against whatever version of B that happens to be chosen, i.e. you don't even get to the point where there could be problematic interactions.
In any case, there has been serious discussion about how cargo can solve the problem of crates exposing other crates in their API, and the current proposed solution is to make cargo more strict about allowing multiple versions of such crates. #2064 is the relevant issue.
Specifically, there are two 'weak lang items' that need to be defined. Lang items are not yet stable. So while you can build libraries with only core, the final binary still needs to be compiled with nightly.
For now.
Furthermore, you can use a no_std library with an application that uses std just fine. The use case not quite covered is "I want an application with no standard library at all, period."
'lang item' does roughly mean symbol, but they have their own Rust syntax to communicate their existence to the compiler. Lang items are special functions / types that can only be defined once globally, must be defined for certain features to work. They are how the standard library hooks into the Rust language for a variety of purposes, and to a large extent what makes it possible for Rust to define much of its functionality in libraries instead of in the language.
$ cargo new example --bin
$ cd example
$ cat > src/main.rs
#![no_std]
fn main() {}
^D
$ multirust run stable cargo run
Compiling example v0.1.0 (file:///home/steve/tmp/example)
error: language item required, but not found: `panic_fmt`
error: language item required, but not found: `eh_personality`
error: aborting due to 2 previous errors
Could not compile `example`.
In general a "lang item" is some portion of Rust that the language expects to be defined in some manner. They must all be defined for compiling to work. `libcore` defines many of these language items, but does not for these last two. `libstd` does.
FWIW, the stable release purposely having this functionality removed is pretty annoying. Using periodic versioned releases is much nicer than sporadic snapshots (I've avoided the issue by not mucking with STM32 stuff in a while). A flag like #![use_unstable_features] would be friendlier.
The stable release does not have functionality removed; it was never stabilized in the first place. As in, this functionality was never present in any previous stable release.
Sure, but the functionality was present back when snapshots were the only game in town. I'd hope progress to versioned releases meant that users could make their life easier by using them rather than having to stay using snapshotted nightlies.
I'm glad now libraries are at least supported, which is good enough for my purpose (linking done with gcc, no idea if that's still state of the art for rust-on-stm32 or what), but it had been an annoyance. But I believe procedural macros are still in the same boat.
Somewhat tangential, this is the kind of blind spot encouraged by the rustup.sh degeneracy.
When snapshots were the only game, it was all unstable. There's no difference from today.
EDIT: sorry, it seems I've mis-read you. I get it now. Unfortunately, we have a strong compatibility story, and so must take strong measures to enforce it. You can use the nightly released on the same night to get a build that's like a particular stable, but allows features.
> Unfortunately, we have a strong compatibility story, and so must take strong measures to enforce it. You can use the nightly released on the same night to get a build that's like a particular stable, but allows features.
Unfortunately, the crates.io system does not follow the same policy, and crates regularly require feature flags.
Yup! The majority of crates do build with stable, but many require features. We've been talking about a way to expose this in the UI so you can tell, but it's slightly more complex than you'd think at first, and nobody has put the time in yet.
Sure, that is simply stating it a different way. With the resulting implication that if one wants procedural macros, the stable 1.x releases have unfortunately been empty progress.
Breakage from forwards-incompatibility isn't my problem, it's potentially having to tackle it every morning rather than in batch mode when there's a new release.
edit: If this compatibility story really dictates that releases can't contain beta features, even with a #![unstable_beta_features_1_6] directive, then a separate beta release would be nice. But now it looks like I'm asking for something additional.
Oh yeah, forgot about that. Different purpose than what I'm talking about though. Perhaps mine would be more appropriately named "1.6 with alpha features".
I think the issue here is the difference between "is" and "essentially". Having a guaranteed matched release with alpha features would apparently be useful, for at least one person. Not being guaranteed could cause some real headaches under certain circumstances.
What are those circumstances? There are no stability guarantees for unstable features, so I don't see how a nightly build that happens to correspond to a major version has value over any other nightly build. The stability you would get by sticking to "1.6 + features" seems precisely equal to the stability you would get from picking a recent nightly and sticking to that build.
It sounds to me like the real issue here is that procedural macros aren't stable, not anything about the release process.
Stabilizing procedural macros would fix this one issue, but IMHO a different release process would have fixed the pain as well. My case, specifically:
Wanting hard-to-stabilize features like procedural macros and no_std.
Using external dependencies that work with stable, but that expect you to be updating nightly when used with unstable features.
Not updating nightly due to a wacky idea that people should control tools and not vice-versa. The first time I open a project in a week is not the time I want it to fail compiling due to language/library changes.
Maybe my impression is caused by the previous rapid change of nightlies (renaming in libraries, etc), and using month-old snapshots wouldn't be too bad these days. But still after several major releases I'd hope the stable tarballs would have been suitable for general use.
Instead of removing not-yet-stable features, they should be put behind a feature gate. With the idea that anybody who explicitly opts-in to unstable features and then complains when they're not forward-compatible can simply be told to pound sand.
By complaining that when you update nightly you have to update your code that uses unstable features, you are in effect asking us to stabilize everything. Unstable means unstable.
> With the idea that anybody who explicitly opts-in to unstable features and then complains when they're not forward-compatible can simply be told to pound sand.
This never works in practice. We'd have too many crates that depended on nightly features. There would be 10x the number of complaints here on HN if we did what you suggested and enabled unstable features on stable. The only option would be to de facto stabilize everything.
Again, it really sounds like your complaint is just that procedural macros aren't stable. I'm sorry they've taken longer than you wanted to finish, but we have lots of work on our plate.
> By complaining that when you update nightly you have to update your code that uses unstable features
This isn't it. I feel like, when using unstable, that I've had to update packages that would not have needed updating were I using stable (and still would have worked fine). Maybe I just got a bad impression and switched away from nightlies around the time wider-demanded features started to stabilize.
> Again, it really sounds like your complaint is just that procedural macros aren't stable.
No, this really isn't it. I fully expect that code using unstable features is going to require changes when I update. I think the root of my gripe comes about from intermediate versioning being punted to "just run nightlies", meaning that rather than being able to reference older versions of packages that work with my compiler version, I'm effectively forced to upgrade and create breakage in my code right then.
> I'm sorry they've taken longer than you wanted to finish, but we have lots of work on our plate.
Please take whatever time is required to get them "right". And thank you in general.
But a new nightly shouldn't be breaking stable features, only unstable ones. That's why I'm confused as to why upgrading nightly could possibly break packages, aside from changing unstable features, which you shouldn't be using if you want stability.
I'm trying to remember/piece together what happened.
I believe I added a dependency that wouldn't compile with the snapshot I was running (10JUL2015?). So then I updated to a new snapshot (14AUG2015?), and a previous dependency would no longer compile (since it hadn't been updated). I think this could have been due to libc changes, which I guess were marked unstable (and this was the result of them being removed from the main rustc distribution?). I did some manual tweaking, and/or the problem fixed itself in a few days.
I next came back to rust a few months later, where after that experience, I went ahead and updated to the 1.4 release. I figured libraries should generally aim to keep working with numbered releases, regardless of the upgrade treadmill. I guess (in your framework) that's applying unstable expectations to numbered releases, which is what I've been doing this whole thread - I don't care that code will break at future point, but I do desire consistency in the meantime!
I was next surprised to find out that the syntax extensions for eg serde were used completely differently with the numbered releases because procedural macros were disabled (I hadn't yet touched my own projects that use procedural macros or libcore).
The snapshot story is probably better these days, given that libraries should be more stable. It has always been annoying feeling like my rust is continually "out of date" and dependencies could break and require me to update, involuntarily breaking my code. This has been a problem all along, and I had hoped it was over with numbered releases. Alas.
From my perspective the main source of pain is that Rust eschews traditional numbered unstable releases, instead creating a harsh dichotomy between uber-stable releases and uber-changing snapshots. One can alleviate some of that pain only if they're willing to take the web 2.0 give-up-control-of-your-computing plunge and run e.g. rustup.sh, which of course helps the fundamental problem to persist. Granted, the problem will be moot as Rust libraries become more stable and the stable distribution becomes more featureful. We're apparently just not there yet for my purposes.
Cuil, a downvote for detailing my perspective? I only bring up this meta issue because it's another symptom of the same problem.
If one is onboard with the butt model of constant updates, then of course my problem looks odd and self-imposed. But such assertions that one's own perspective is universal is a source of many of the world's problems.
> By complaining that when you update nightly you have to update your code that uses unstable features, you are in effect asking us to stabilize everything.
No, I think the point is to just have a specific unstable release that corresponds to a specific stable one as closely as possible, and is designated as such. Conceivably, crates that needed unstable features, but not bleeding edge, could require at least that unstable release, while others that really need bleeding-edge features could use nightly. The issue here is that your release procedure treats unstable features differently than everything else, making any crate or code that wants to use any unstable feature resort to using a nightly of some sort. Admittedly, I can see reasons why this might be by design.
You might have good reasons why you think that won't work or be useful (I can think of a few, such as not wanting to promote people building on a specific implementation of an unstable feature). At a minimum, it is at least slightly more work to make sure this new item is released, depending on the release process.
P.S. The prevailing wisdom so far when wanting to use unstable features from a point release is to use a nightly from the same day, but the release policy seems to indicate that the nightly from that time is actually two release versions ahead of the stable that was released the same day:
> This process happens in parallel. So every six weeks, on the same day, nightly goes to beta, beta goes to stable. When 1.x is released, at the same time, 1.(x + 1)-beta is released, and the nightly becomes the first version of 1.(x + 2)-nightly.[1]
I'm a bit unclear on what the best thing to do would be if I wanted to use features from something that is as close as possible to a particular stable release.
I had the same response when I realized a feature I really wanted was only on nightly (and would probably still be interested). However, in the last month-ish I've been using a fresh nightly every day, multirust has made it super easy to update, and I've only had breaking changes to my dependencies once.
I only switched to building releases at 1.4 because I thought the nightly treadmill could finally be over. Alas I guess I'll skip installing 1.6 and just switch back to sporadically-updated checkouts whenever some dependency has breaking changes (ooh, such stable).
I'm not looking to get the rats on my face and switch to butt-based development ala rustup/multirust/etc.
Really? Last time I checked it still didn't work. Will have to try when I get home.
The problem was that the config is compiled into a dynamic library and gets passed the config struct. The old alloc method kind of died on me when the dynamic library was trying to alter that struct. So I had to enable jemalloc to keep it from crashing.
Hmm. Yeah, I mean, jemalloc is the default allocator, so something must be wonky here. You don't need to explicitly use jemalloc. That's what you get by default.
Quite the opposite; jemalloc has been the default, and using the system allocator has been the "hassle" since well before 1.0.
However a few key platforms have jemalloc disabled because it's buggy (deadlocks or worse). I think as of a few days ago it's pretty much universally off on windows.
Also whether jemalloc is used depends on how you build your thing -- dylibs use the system allocator (because they're subordinate), static libs (rlibs) inherit from the thing they're linked into, and executables use jemalloc (because they're in control).
I have a question for Rust fans: how do you deal user interaction? Do you have a favorite user interface library? Do you separate the UI from the program and communicate either via IPC or http+html? If you don't care about cross platform capabilities, is there a great library on the platform you do care about?
Haven't tried it yet (backwaters don't usually look kindly on AWS :) ), but that looks like what you're talking about. There are a ton of libraries for Rust these days (relative to the amount of time since 1.0, at least).
Rusoto maintainer here: The different AWS services are added to Rusoto through code generation based on service definitions from the Python project botocore. We're in the process of moving our code generation from Python to Rust, and once that's in place, we'll be adding remaining services pretty quickly.
It's too bad that compiler plugins / libmacro are still so experimental. Once they're stable, the kind of code generation you're doing as a separate step could be done entirely at compile time. I did some preliminary tests with an eye towards an AWS crate, but decided that a crate that only works with nightly and frequently breaks is of dubious value. Still, it's cool that the Rust compiler will eventually be extensible enough to compile json directly.
Anyways, thanks for Rusoto. It's going to make a project I'm starting next month significantly simpler!
Yes, I've been following this issue closely. Nick Cameron is working on a revamped macro system that will replace the current compiler plugin system for the purposes of syntax extensions. It's probably still gonna be a while, unfortunately, since it'll take some time to get that kind of stuff vetted and baked into the language.
Here are some recent posts by Nick on his experiments and progress (in order of publication):
Go lacks an elegant and predictable syntax, unlike Python and Rust. Also, Rust is more potent as a language, it's safer, and it has crates - I prefer a centralized package management as it helps discovery, rating, analytics, and other things we know work great with RubyGems, PyPI, and NPM.
Yes, that was part of it. It was a complicated situation, from which we all learned a lot.
(For those not familiar: the libc crate went from 0.1 to 0.2. Many people were depending on libc="*". Cargo allows for multiple copies of a library with incompatible versions. But the releases were incompatible for a reason... it caused problems. This fix is one of the things we're doing to address it, there's more coming in the future.)
After trying to understand some Rust, it seems to me that it's just as complicated as C++, from the programmer's perspective. Was I mistaken in thinking that being simpler to program in than C++ was one of the goals?
I don't use C++ or Rust in a daily context, but I can say without a doubt that Rust is the easier language to grasp. I don't have to decide anything about headers, linking, Makefiles, etc. to get things up and running; I can declare dependencies in Cargo.toml and it just works.
The Rust standard library feels so much more modern and easy to use than the STL. The documentation is fantastic, too.
C++ pointers, casting, OO, templates, and all the billion other language features feel like traversing down a rabbit hole. Rust is small enough to grok in a day. The syntax is incredibly expressive and a joy to use.
I would say it is far easier to learn how to write good systems code in Rust than C++, but saying you can grok it in a day is a bit optimistic. On the surface level, perhaps. But the semantics of the type system can take longer to internalise, depending on one's background.
I've never done anything with Rust, but as a long-time C++ programmer, I expect they have done away with a whole lot of C++ complexity. For example:
Don't forget your & in your function declaration or you'll silently copy objects.
Don't forget to make your destructor virtual in your base class (don't forget to add it if the class wasn't originally going to be a base class!) or you get fun crashes on delete every once in a while.
Use "explicit" with your one-parameter constructors to avoid fun with automatic conversions.
std::string is practically useless
Use "." with classes/structs/references and "->" with pointers (why am I the compiler?!)
Think you know what "a = b;" does? Think you know what the =, ==, !=, <, >, ^, *, +, -, /, *=, +=, -=, /=, ^= operators do? Not unless you've checked to see if someone overrode them!
Ever tried to read some code that uses templates?
If you declared Class(int x) and Class(char x) and then do Class('a'), it silently gets constructed with the integer version (or maybe it's with Class(const char x), can't remember, which should say something right there.)
Quick, what does "const int const x;" mean?
Quick, what does "int* a, b;" do, and is that what you expected?
Is it safe to do "a = b"? You don't know until you read through all the class hierarchies to see if the object has a pointer, and if so, if it has an assignment operator that works properly ("properly" being somewhat relative to the purpose of your object).
Fun times debugging stuff like
    if (...)
        statement1;
        statement2;
Can you call a virtual function from a constructor? Which version of C++ are you using?
I have a soft spot in my heart for C++, but boy is there a lot to remember, and the mistakes/typos can kill a good half-day, minimum. I'm excited for the day I have a project I can use Rust on, looking forward to something I don't have to be always on my guard with!
I know very little C++ and a bit of Rust so there might be mistakes:
> Don't forget your & in your function declaration or you'll silently copy objects.
I think this still exists in Rust to a limited extent but it's mitigated:
* Rust uses move semantics by default, so unless the type was specifically marked as Copy[0] it will be moved into the function which should be noticeable
* references are actual types, `&Foo` can't be confused with `Foo` and (outside of method receivers) references are created explicitly, so the callsite knows whether it's passing a reference
> Don't forget to make your destructor virtual in your base class (don't forget to add it if the class wasn't originally going to be a base class!) or you get fun crashes on delete every once in a while.
Rust has almost no inheritance support, and the realms of static and dynamic dispatches are pretty well separated (trait objects are dynamic, the rest is mostly static) so that should not be an issue.
> Use "explicit" with your one-parameter constructors to avoid fun with automatic conversions.
Rust has very little implicit behaviour (auto-deref is the main one I think), and I don't think it has implicit/automatic conversions.
> std::string is practically useless
Without more information as to its failing, I can't say whether String (and &str) is more useful. IME it certainly is useable and useful.
> Use "." with classes/structs/references and "->" pointers (why am I the compiler?!)
Yup, this is done through auto-deref (of types implementing the relevant trait, of course), though Rust does have a separate `::` for other contexts.
> Think you know what "a = b;" does? Think you know what the =, ==, !=, <, >, ^, *, +, -, /, *=, +=, -=, /=, ^= operators do? Not unless you checked to see if someone overrode them!
Can't override assignment itself, you can override other operators[1] and there is an accepted RFC for overloading augmented assignments[2]. Though these overrides are trait-based which should limit the shenanigan space somewhat.
> Ever tried to read some code that uses templates?
Rust's generics are slightly simpler, but they're lacking features (no template specialisation currently) so… not sure.
> If you declared Class(int x) and Class(char x) and then do Class('a'), it silently gets constructed with the integer version (or maybe it's with Class(const char x), can't remember, which should say something right there.)
See item above, the very declaration doesn't exist in Rust, you'd need to have two separate factory functions.
> Quick, what does "const int const x;" mean?
Should be clearer in Rust; type declarations are fairly regular.
> Quick, what does "int* a, b;" do, and is that what you expected?
I think that's equivalent to `int* a; int b`?
Type information is an optional postfix addendum in Rust, so the ambiguity doesn't exist, but if you want explicit types you don't get the convenience of shortened declaration. The closest to this code would be
let (a, b): (Box<u32>, u32)
which isn't much of a gain over
let a: Box<u32>;
let b: u32;
if you're not unpacking an existing structure
> Is it safe to do "a = b"? You don't know until you read through all the class hierarchies to see if the object has a pointer, and if so, if it has an assignment operator that works properly ("properly" being somewhat relative to the purpose of your object).
See above, assignment itself can't be overloaded (at least at the moment)
> Fun times debugging stuff like
I think the only places where braces are optional in Rust are non-ambiguous 1-expression lambdas (`|| {expr}` can be written as `|| expr`) and 1-expression match arms e.g.
match foo {
v1 => { expr1 },
v2 => { expr2 },
}
can be written
match foo {
v1 => expr1,
v2 => expr2,
}
> Can you call a virtual function from a constructor?
Rust doesn't have constructors so no. As for the intent, since Rust doesn't have constructors you can't have partially built values (unless you forcefully partially build them) so the answer should still be no.
I think the GP was, but I read the parent comment as just a stab at comparing and contrasting C++ and Rust. I found it useful and interesting in that respect.
Rust's memory management strategy has been described before as "you have to think about it, but you don't have to worry about it", which I think applies to the language more broadly in terms of a comparison vs. C++. The latter is very flexible and powerful, but generally assumes the programmer is perfect, meaning mistakes often won't be caught at compile time and are often subtle to track down, while Rust has the assumption that humans are fallible, so many mistakes are caught. The fact that they're caught means learning Rust front-loads a pile of things, because the compiler complains about them, which can easily make Rust seem more complicated despite the core concepts of ownership/borrowing really being a subset of modern C++.
Multiple startups have wanted to or used Rust for systemsy components because their devs were mainly from dynamic language camps and they felt that in the process of learning Rust you end up learning systems programming, which is less so the case for C++.
Rust does have a lot of features, but for most programming often you don't have to dabble in the more advanced features, which is great. Whilst working on Servo I mostly stuck to the basic features which I was familiar with for a long time and only learned the complicated stuff after a few months out of interest.
One major difference is that Rust forces you to learn about ownership and borrowing up front. This is cognitive load you would have to take in any systemsy language at some point when learning about memory management, Rust just makes you learn it early.
You might be thinking of different start-ups, but I was under the impression that start-ups are using Rust not because they want to learn system programming, but because they need the low-level control but aren't confident using C++ and find Rust's compiler-driven safety very compelling.
Yeah, that's one part of it, but someone (I think Steve?) mentioned that some startups have realized that the process of learning Rust automatically teaches low-level programming concepts, whereas with C++ that's less so. Or something like that.
We have a number of similar features, but ours are generally more straightforward since we didn't have to worry about backwards compatibility. We also have some features that C++ does not, and they have some we do not.
When I've dabbled in C++ in the past, worrying about safety (does my custom trie implementation segfault? is this iterator still valid? can I trust the code I just vendored into my repo?) was the main source of complexity, both in initial writing of my code and in debugging it. Working in Rust has been a comparative (and simple) delight for me.
Every bit helps, seriously. No contribution is too minor, in my opinion. People think that only big stuff counts, but my career is literally built off of paragraphs of doc revisions.
So, I've recently tried to figure out why Rust has been getting lots of attention on HN.
From what I can tell, it allows you to have both safety and control.
I thought that was very neat.
I've got three questions for experts.
One, what type of applications is Rust intended for?
Two, I like JS because I can code in the client, and server in one language. Will there ever be a web server framework for rust and an api that allows me to modify the dom?
Three, what are your predictions for the future of rust?
1. Rust was designed for low-level systems programming, and in particular the multi-threaded kind, where performance and memory control are important. So things like web browsers (it's been developed alongside development of the Servo web browser), games, operating systems, embedded systems etc. It's basically a better C++.
2. There are already several web frameworks in various early stages of maturity; see [1]. I don't think it will ever be a good idea to write Rust client-side, though (even though I believe there is an Emscripten backend for it, so it's possible).
3. I see a very bright future for Rust. It could use some ergonomic improvements (though it's miles ahead of C++ even now, mind you), but it is otherwise a very well designed language that fits very snugly in its niche, and it has strong backing from Mozilla. I expect it will slowly take over large amounts of territory from C++, and possibly from C, as those developers start to realize the benefits of Rust's safety guarantees. It's also quite unique in that its safety entices many higher-level (Python, Ruby, C#, etc.) developers into trying systems programming, which could be really transformative for the systems programming scene. For example, one thing that has already come out of this is the Cargo package manager, which is similar to npm, bundler, etc., a style of tooling that (before Cargo) had yet to find its way down to systems programmers. Many (former?) C++ developers actually tout it as one of the major benefits of Rust.
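For anyone who hasn't seen Cargo: dependencies are declared in a small manifest and fetched automatically, much like npm's `package.json`. A minimal sketch (the crate name and dependency here are made up for illustration):

```toml
[package]
name = "hello"
version = "0.1.0"

[dependencies]
rand = "0.8"  # hypothetical dependency; `cargo build` fetches and compiles it
```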
With WebAssembly coming in the near future, Rust will actually be a very strong choice. Right now they are targeting C/C++, but Rust will of course be a great alternative.
Do note that, for many things on the web, JavaScript is PLENTY fast. It's when you are doing crazy things like CAD in the browser that it'll be amazing.
Is there a good way to look up no_std crates? For crates written with no_std, is there a keyword we should be tagging things with?
I have several crates providing access to GPIO/SPI/I2C under Linux and would like to put together a roadmap for having the interface to these types of devices that is portable to various platforms and devices (e.g. Linux as well as Zinc, ...).
I'm not using Rust for heavy/intense db/API stuff currently; I use Go. But in the future, I'm looking at using it as the main language for our APIs. I like how safe it is, and its explicitness.
I have looked at both Rust and Go. What I have felt is that Rust is too restrictive: you have to fight the compiler a lot harder than you do in Go. Sometimes that's great, for example if you are writing device drivers or real-time embedded programs.
But for web services? I think it is overkill. I think Go strikes a nice balance. I would love to be convinced otherwise though. So please tell me, what am I missing?
C11 (and C++11) defined a memory model and atomic operations for shared-state lock-free concurrency. But that model and the atomic operations aren't being used by Linux, because they didn't match up with the semantics of the operations that Linux uses. (See https://lwn.net/Articles/586838/ and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p012...).
I'm curious what Rust says about this. Does Rust have a memory model like C11/C++11? I'm curious whether Rust (and C11/C++11 for that matter) will evolve to have primitives like what the Linux kernel currently defines and uses.
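For what it's worth, Rust's standard library atomics already expose the C11-style ordering vocabulary (`Relaxed`, `Acquire`, `Release`, `AcqRel`, `SeqCst`). A minimal sketch of the classic release/acquire "publish" idiom (the `publish_and_read` function name is made up for illustration):

```rust
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// A Release store "publishes" data; the matching Acquire load on another
// thread guarantees it then sees everything written before the store.
fn publish_and_read() -> usize {
    let data = Arc::new(AtomicUsize::new(0));
    let ready = Arc::new(AtomicBool::new(false));

    let (d, r) = (Arc::clone(&data), Arc::clone(&ready));
    let writer = thread::spawn(move || {
        d.store(42, Ordering::Relaxed);   // write the payload
        r.store(true, Ordering::Release); // publish it
    });

    while !ready.load(Ordering::Acquire) {} // spin until published
    let seen = data.load(Ordering::Relaxed); // guaranteed to see 42
    writer.join().unwrap();
    seen
}

fn main() {
    assert_eq!(publish_and_read(), 42);
}
```

These orderings map essentially onto LLVM's (and hence C11's), which is consistent with the point above that Rust's atomics semantics are inherited from LLVM rather than from a Rust-specific formal memory model.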