Is it me (and have I been hiding/asleep under a rock for a long time) or is Linus' tone vastly (and pleasantly) different from what his previous persona is known to be?
I actually loved (I mean really loved) reading the response as it not only was encouraging but also highly respectful and almost glowing with a trained sort of nuance! Just wow!
From what I've seen, Linus has always been mostly quite reasonable and constrictive, even before his 2018 "reflection" that some others mentioned. Most of the time he was direct, but helpful, polite, and constructive. This is probably a big contributor to Linux's success.
It's just that on occasion he would rant and rave to people and call them retards or whatnot. People get a bit of a skewed perception because only those messages make the news and are famous, but that's not really representative of all the messages.
And when he did rant and rave, it was pretty much always towards established contributors who had screwed up somehow; people who in his opinion ought to know better, and most of the time he did have a good point. I don't approve of his style, but it's not like he would randomly call people idiots. It's not as if you ever ran the risk of being scolded by Linus if you were a new contributor sending a patch or anything.
tl;dr: Linus has always been like this, and was just occasionally an asshole.
That's the thing. I am very sure my perception was tainted by my own laziness and by "falling for" populist PR, and it took pure coincidence to see the real side! Sigh.
"News" by definition is biased towards both rare and negative events, because common everyday things don't tend to be news. "New contributor gets a very helpful response from Linus" isn't news, and neither is "man spills someone's pint in a pub, they apologize and everything is okay", "woman walks home at night nothing happened", etc.
Quite frankly I consider most of the news to be worse than useless and actively harmful (and news is different from journalism; journalism is great).
(also, I see now I misspelled constructive as "constrictive" in my previous comment, d'oh facepalm)
I don't read the lkml enough to truly know, but in 2018 he made a public apology and (IIRC) took some time off to go work on his behavior. So maybe that's it!
I agree that I found his messages here to be quite pleasant.
His infamous rants are outliers. Caricatures of Linus as some whackjob who always goes on lengthy, profanity-ridden tirades at the slightest provocation are vastly overstated.
I think he probably makes thousands of comments, yet the controversial ones make it out to the general community and form his reputation.
Also, I think what's really interesting is: given his feedback, could the language actually be adapted to the kernel?
This would be fascinating because traditionally the code (the kernel in this case) has to adapt to the foibles of C. This could be the reverse, where not only could the language adapt, it might make the kernel technically better, and easier to read and modify.
> Is it me (...) or is Linus' tone vastly (...) different from what his previous persona is known to be?
Indeed, and I hate it. It's actually very scary, a creepy change in personality. Like the last scene in "One Flew Over the Cuckoo's Nest", where the main character is lobotomized and loses the spark in his eyes. Deeply, deeply troubling to behold. I wonder if the real Linus still exists behind all this or if he's gone forever.
He's really changed. There is a subreddit /r/linusrants that documents his rants. In the last year there have been 9 posts, but most of those are reposts from previous years or not really rants or aimed at technologies, not people. Three years ago there were 27 posts.
And what makes this even more impressive is that he managed to do it despite being, in many people's eyes, a very nice and reasonable person to start with. It wasn't like everything was falling apart around him and everyone was leaving their jobs as maintainers, AFAIK; he just decided it would be better for the project if he left this part of his personality behind.
RFC 2116 addresses part of the issue. The other part of the problem would be to support disabling the non-fallible APIs at compile time.
Also, RFC 2116 assumes that it's sufficient to provide (for instance) Vec::try_reserve, and require callers to always call that before calling anything that might expand the Vec. That wouldn't eliminate the runtime panics. It might be necessary to go a step further, and actually provide fallible versions of individual Vec methods. (Or there may be other potential solutions.)
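A runnable sketch of that split on current stable Rust (`try_reserve` is the real RFC 2116 API; `append_fallibly` is a made-up helper for illustration):

```rust
use std::collections::TryReserveError;

// `try_reserve` reports allocation failure as a value instead of aborting;
// the following `extend_from_slice` then stays within the reserved capacity,
// which is exactly the "call try_reserve first" discipline described above.
fn append_fallibly(v: &mut Vec<u8>, data: &[u8]) -> Result<(), TryReserveError> {
    v.try_reserve(data.len())?; // Err(TryReserveError) on allocation failure
    v.extend_from_slice(data);  // does not reallocate: capacity was secured
    Ok(())
}
```

The weakness discussed here is that nothing in the type system forces later `Vec` calls to stay inside that reservation, which is why per-method fallible variants keep coming up.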
Yeah, `try_reserve` is ~~racy and~~ (edit: strikethrough) awkward to use. And having a bunch of panics which "should" be dead code because of the preceding `try_reserve` at best adds a bunch of noise burdening static analysis.
A bunch of us, anticipating a reaction like Linus's, have been arguing that we need `try_` versions of everything and a way to prevent the other ones from being used. (Cargo features?) I sincerely hope we finally get that.
> A bunch of us, anticipating a reaction like Linus's, have been arguing that we need `try_` versions of everything and a way to prevent the other ones from being used.
This has been anticipated since panic on OOM was introduced.
Adding `try_` versions of everything is a horrible solution.
What you want is for the normal APIs to return Result<R, E> where E = ! under panic-on-OOM, and E = some OOM error otherwise.
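A hedged sketch of that shape on stable Rust, using `Infallible` as a stand-in for the never type `!` (all the function names here are hypothetical):

```rust
use std::convert::Infallible; // stable stand-in for the never type `!`

// Hypothetical: the same signature under a "panic on OOM" policy, where the
// error type is uninhabited, so the Err arm is statically unreachable and
// callers pay nothing for the Result.
fn push_checked(v: &mut Vec<u8>, x: u8) -> Result<(), Infallible> {
    v.push(x); // with E uninhabited, failure is defined away
    Ok(())
}

fn caller(v: &mut Vec<u8>) {
    match push_checked(v, 7) {
        Ok(()) => {}
        Err(never) => match never {}, // the compiler knows this arm is dead
    }
}
```

Under a fallible-allocation build, the same signature would instead carry a real OOM error type, and the caller's match arm would become live code.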
The largest irony of all time is that, while returning an error on OOM works well on Windows, it makes little sense on Linux because overcommit is often enabled by default. Linux and Linux users are directly responsible for the "panic on OOM" behavior that now prevents liballoc from being used in the Linux kernel.
The obvious fix here is for the kernel to use overcommit internally \s
But it's still non-local. One has to manually account for what allocation will be done and keep the reservation in sync across refactors. This isn't a race, but it still scares me.
Which would be a race. That is not `try_reserve`'s fault, and it's a rather easy-to-spot example, but I wonder if there are variations on this which are easier to miss.
They probably meant it is a race, not racy. Although I don't think it's a race? The method will attempt to reserve or return Err, just like malloc would (it's not a can_reserve check).
let mut v = vec![0; 42];
v.try_reserve(1).unwrap(); // reserve room for one more element
frob(&mut v);              // I expect this not to expand the vector
v.push(1);                 // may still allocate (and panic), because frob took the space
Note that this requires passing a mutable reference to frob. Absent an explicit contract in the api documentation I wouldn't expect a function that takes a mutable reference to a vec not to mutate it arbitrarily.
One option for avoiding this would be to pass a mutable reference to a slice, which allows frob to mutate elements of the vec without allowing it to push.
frob(&mut v[..])
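The compiler does enforce that property: a `&mut [T]` lets the callee rewrite elements but gives it no way to grow the vector. A small illustration (`frob` here is a made-up stand-in):

```rust
// A &mut [i32] can rewrite elements, but slices have no `push` or `reserve`:
// the length (and hence the Vec's spare capacity) cannot change through it.
fn frob(s: &mut [i32]) {
    for x in s.iter_mut() {
        *x += 1;
    }
    // s.push(1); // would not compile: no such method on slices
}
```

So after `v.try_reserve(1)`, passing `&mut v[..]` instead of `&mut v` statically rules out the "frob took the space" scenario.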
It can't be a race in the traditional sense, as the borrow checker will enforce only one person being able to write at a time.
Even for normal user space code, having a way to detect panic at compile time and abort compilation would be great. Ideally together with a way to wrap functions that do panic by catching the panic and returning an error instead, so the panic infrastructure can still be used. I sketched that out on HN the other day:
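For reference, the runtime half of that idea can be sketched today with `catch_unwind` (this does not give the compile-time guarantee being asked for, and it only catches unwinding panics, not panic=abort; `run_guarded` is a made-up name):

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};

// Convert a panic at a boundary into a Result. A runtime stopgap: the
// compiler still cannot prove the wrapped code is panic-free.
fn run_guarded<T>(f: impl FnOnce() -> T) -> Result<T, String> {
    catch_unwind(AssertUnwindSafe(f)).map_err(|payload| {
        payload
            .downcast_ref::<&str>()
            .map(|s| (*s).to_string())
            .unwrap_or_else(|| "non-string panic payload".to_string())
    })
}
```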
That is still something that happens at runtime, not compile-time. What the kernel would need is not only for the fallible alloc APIs (like Box::try_new) to be expanded to cover all their needs, but also to be able to remove the ability to use the infallible/panicky ones (like Box::new) to ensure, at compile time, that panic will never be called.
The second half doesn't exist either. I want to disallow panic and turn it into an error at compile time. The panic unwinding would be replaced with an error check by the compiler.
> Note that this function may not catch all panics in Rust. A panic in Rust is not always implemented via unwinding, but can be implemented by aborting the process as well. This function only catches unwinding panics, not those that abort the process.
The most notable way this can go wrong in pure Rust code is that panic-while-panicking aborts. So you have to be careful that your destructor can never panic.
It's kind of a funny space; right now Rust handily gives you "no allocations" (this is where I live) or "infallible + fallible allocations" (this is alloc/std by default) but not "only fallible allocations". This sort of thing is basically filling out the quadrant of options.
Yup, it is. If the plan linked above were to be implemented, you would get the same behaviors by default, but with a new setting you'd get some APIs removed. That's backwards compatible.
I was thinking that when reading the exchanges. This is a really good interaction. Linus and the kernel have strong requirements for good reasons, and the rust teams are trying to address them in useful ways.
Rust in the kernel is not a simple thing, but I think both rust and the kernel will benefit.
Do you think a more ergonomic `#[no_panic]`[0] would require Rust to wait for an entire effect system, or would it be valuable enough to add to the compiler as a one-off?
Yeah!
Even he kind of appreciates one of my favorite Rust features.
> So "Result<T, E>" is basically the way to go, and if the standard Rust library alloc() model is based on "panic!" then that kind of model must simply not be used in the kernel.
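A tiny illustration of the `Result<T, E>` style being endorsed, with `?` propagating the error as a value instead of unwinding (`DriverError` and `double_first` are made up for the example):

```rust
#[derive(Debug, PartialEq)]
enum DriverError {
    BadInput,
}

// Errors are ordinary values; `?` hands them to the caller with no
// unwinding machinery involved.
fn double_first(raw: &[u8]) -> Result<u32, DriverError> {
    let first = raw.first().ok_or(DriverError::BadInput)?;
    Ok(u32::from(*first) * 2)
}
```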
This is the problem with SW developers: when something starts working, they want to "improve" it until it doesn't work. See Firefox for a nice example (it is so sloooow). (No, I do not have an SSD, but earlier it was reasonable. Now the change to Rust is making it slower without any visible improvement in security; I still have to update every couple of weeks because of CVEs.)
It comes from the ASCII C0 [transmission] control code NAK, which means "negative acknowledgment"--something was wrong with the last transmitted data block or command, or the reception of it. As opposed to "ACK", which means a block or command was successfully received and accepted. In the original 1963 standard the mnemonic was "ERR", but it was changed to "NAK" in the 1965 standard.
In programmer lingo it can mean something similar to the original meaning--rejecting a request for faultiness or incompleteness--or it can mean something more like "no"--answering in the negative.
> We decided to go with Rust's idiomatic style, i.e. keeping `rustfmt` defaults. For instance, this means 4 spaces are used for indentation, rather than a tab. We are happy to change that if needed -- we think what is important is keeping the formatting automated.
Hahaha, preemptively offering a compromise on tabs vs. spaces warms my soul, good stuff.
I also like the strategically elevated abstraction. Doesn't even matter how many warriors are sent into this battlefield; we're not even gonna show up. Although not really novel these days with respect to formatting. Perhaps IT is growing up?
> The project is still in an early phase with the goal to compile the official Rust test suite. There are no immediate plans for a borrow checker as this is not required to compile rust code and is the last pass in the RustC compiler.
This is essentially a reimplementation of rustc. I'm not going to say "don't boil the ocean" but it will likely be a while before this is at feature parity with the main compiler. Even with a full-time developer.
There is a project to use gcc as a backend for rustc, which might be a more tractable option to removing the llvm dependency in the short term:
> That company would have been better off paying for that.
This is only the case if you assume their primary goal is to get a reimplementation as fast as possible. In my understanding that is not their primary goal. Their primary goal is to not require a Rust compiler in the build process. This means that the approach you suggest doesn't fit requirements.
"Their primary goal is to not require a Rust compiler in the build process"
Do you mean their goal is not to require `rustc` in the build process? Because I'm not sure how they're going to compile rust code without a rust compiler.
They wouldn’t need to compile Rust code, because said compiler would be implemented in C++. It would produce a Rust compiler, but would not require one to exist in order to do so.
mrustc for bootstrapping, rust_codegen_gcc (which doesn't seem like it would make stage1 much harder to compile) once the rest of the compiler has been brought up, isn't that the more realistic plan?
The goal for the dev is to be able to compile rust code in governmental code base. As all the static checking will still be done with rustc, it doesn't need to be a full compiler.
If you're talking about gccrs, no, it's a full reimplementation. I think your parent is saying developers can use rustc (including its borrow checker) while iterating, then produce the release builds using gccrs from code that is known to borrow check.
I think the page is just saying they are so far from being at the borrow checker stage they aren't going to worry about it initially. But it sounds like eventually they'd implement it.
This is as much good news for Linux as it is for Rust. In the long run, there is a nontrivial risk that Linux could be replaced by an OS written in a safer language. Moving to a safer language within Linux is likely to ensure Linux's long-term survival in smartphones and servers alike.
The good news is that Rust is going to need to get serious about first-tier support for all architectures supported by the Linux kernel if this is to go anywhere.
There's also rust_codegen_gcc, which I'm much more hopeful about. It uses GCC as the code generation backend, but keeps the existing Rust frontend, rather than duplicating code and potentially diverging.
I say "risk" because any such OS would probably be available under a license more permissive than the GPL. The result would almost certainly be that large parts of the OS would become proprietary (NVIDIA has already tried this with Linux's limited module support).
Of course, moving to Rust is also a risk in this respect, since the LLVM toolchain is BSD-licensed, but hopefully Rust support will be added to GCC in the not-too-distant future.
Except, of course, the GPL is why Linux remains so popular. A Linux killer cannot be a Linux killer unless it is GPL or similar; otherwise BSD would have overtaken Linux a long time ago.
On the server side I think Linux will be replaced with something more resembling a hypervisor for WASM rather than another OS. Of course even here Linux will die at mainframe speeds if not slower. As a toolbox for IT experts it may live on forever if it doesn't calcify.
Does Rust support every architecture that the kernel supports? Seriously asking, not rhetorical.
I get that new Rust code will be limited to modules, but here's my concern:
I do work for embedded systems based on the PPC architecture. I'm not sure if Rust supports PPC, but let's just assume we're talking about an architecture that's not supported.
Suppose some company creates a PCIe Ethernet adapter and writes their driver in Rust. If Rust doesn't support PPC, then I won't be able to use that hardware even though there's no physical or technical reason why it wouldn't work. If it was written in C like everything else, then there's a good chance it would just work (assuming they didn't make any endianness assumptions).
In fact, I'd bet that the majority of the drivers written for the kernel are really written for X86 and ARM, but just so happen to work for PPC as well for free. I'd be concerned about getting left in the dust, so to speak.
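The endianness point is the kind of thing Rust makes easy to state explicitly; for example, decoding a big-endian device field portably (a generic sketch, not taken from any real driver):

```rust
// Reads a 32-bit big-endian field from a device buffer. Explicit byte order
// gives the same result on little-endian x86/ARM and big-endian PPC alike.
fn read_be_u32(buf: &[u8]) -> u32 {
    u32::from_be_bytes([buf[0], buf[1], buf[2], buf[3]])
}
```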
What's really cool about Rust is not that it's a technical improvement on (Ada, C, Cobol, C++, Fortran, Lisp, Haskell, ...) because it isn't yet. It has some really great features, and a lot of potential. It's still young. gccrs is required. It has quite a way to go before it can rival some of the most important _languages_ we use. It's getting there.
What's really cool is the humility, flexibility, and creativity, of the community who develop Rust.
What's really, really cool is that Linus is holding a door open into the kernel.
Rust is absolutely a technical improvement on some of the listed languages (C, C++, maybe FORTRAN and COBOL). The borrow checker is truly novel outside of some research languages that precede Rust. The other languages you list are sufficiently different from Rust that it’s hard to make a comparison.
Of course Rust is a technical improvement over substantially older languages. To claim otherwise is basically to claim that the twin fields of programming language theory and language design have discovered and achieved nothing in the last 40 years, which is a strange thing to believe.
As explained in the thread, Rust the language itself doesn't know what the heap is. The heap is a library-level concept. You can use Rust perfectly well in heap-less/stack-only environments.
When we see more Rust implementations, are they going to inherit Rust's package manager?
I like the security ideas of Rust, but I'm very much against a compiler including a package manager that points at a particular central repository by default, with all the political problems that that entails, not to mention longevity issues.
wrt the discussion about colored unsafe for interrupt safety, would it be possible to have functions that sleep (or whatever) take a zero-sized struct called NotInterrupt or something? Then entry points to Rust code that are known to not be called from interrupts (unsafely) construct a NotInterrupt, and the rest of the code just passes it around. I think with some clever use of phantom references, you can convince the borrow checker to prevent a NotInterrupt from outliving the context it was created for, and as it's a ZST, it should be zero-cost.
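A rough sketch of what that token could look like (all names are made up, and the lifetime and auto-trait details would need more care in a real design):

```rust
use std::marker::PhantomData;

// Zero-sized "evidence" token: holding a &NotInterrupt proves, by
// construction, that we are in a context where sleeping is allowed.
pub struct NotInterrupt<'ctx> {
    _not_send: PhantomData<*mut ()>,                   // makes it !Send + !Sync
    _invariant: PhantomData<fn(&'ctx ()) -> &'ctx ()>, // invariant lifetime, pins the scope
}

impl<'ctx> NotInterrupt<'ctx> {
    /// # Safety
    /// The caller asserts this code is not running in interrupt context.
    pub unsafe fn new() -> Self {
        NotInterrupt { _not_send: PhantomData, _invariant: PhantomData }
    }
}

// A function that may sleep demands the token as a parameter.
pub fn might_sleep(_proof: &NotInterrupt<'_>) {
    // ... free to block here ...
}
```

Because the struct is zero-sized, passing it around compiles away entirely; the cost is purely in the API signatures.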
So the question is the one you asked above, but also, more generally, "is it possible to statically disallow panic on OOM?" and related things.
Rust the language itself, that is, the language + the core library, doesn't know anything about allocations at all. There are none. Concept doesn't exist. So that's fine.
Rust also provides the "alloc" library, which can be layered on top of core. This gives you standard APIs for allocation. They also contain several data structures that use allocation internally. These data structures have APIs that may panic on OOM.
So, Linus saw those things, and naturally asked questions about how required they are. The answer is "not required, but it's work to get rid of them, there's a few options, and we didn't want to do that work until we got a higher-level gut check from you."
Does this RFC include all the requisite “safe” interfaces to enable writing a fully featured driver in Rust? If not, is it theoretically possible to create those safe interfaces within Linux’s API driver infrastructure? Or will some unsafe API usage always be necessary in Rust?
I remember there was an issue writing a rust-safe wayland compositor on top of wlroots because wlroot’s ownership model was simply not able to be cast in terms of interfaces that rust was able to prove were safe. [1] Is this not an issue in Linux?
I haven't done any Linux work (except for some stuff with device tree that I've mostly forgotten), but that post I made about Way Cooler may not be generalizable to all C APIs. It's also possible that I simply was not creative enough with my API design - since that blog post a few people have reached out with alternative designs that avoid some of the issues I ran into. I haven't dug into them however, since I'm no longer interested in working on Way Cooler.
It's not so much "Rust can't represent this ownership model" as "this ownership model is basically orthogonal to Rust's so you have to put much more effort in to write idiomatic Rust code compared to writing it in C". I would love for someone to come along and prove me wrong with a better wlroots wrapper.
Also, Wayland compositors are much, much simpler than the Linux kernel, so that train of thought doesn't necessarily scale out.
Yes I saw that but it wasn’t clear on whether it was possible or if there was a clear plan to make it 100% safe.
You’re saying some unsafe code is always necessary; why should that be the case? I think the big issue with wlroots was its callback-based API, which is common with Linux-internal APIs as well. On a theoretical basis I’m not sure what this means for Rust. Is it simply not possible in the abstract to cast these types of APIs in a form that can be statically proven to be safe? Or is this a deficiency in the current design of Rust? Are all “callback-based” APIs inherently unsafe in Rust? Is it always theoretically possible to recast those APIs in a form that Rust can prove is safe? I would just want to understand exactly why it’s possible to write 100% safe Rust programs in user space and not in Linux kernel space.
These are the types of questions I would ask when evaluating whether or not it’s worth investing in and using Rust for my Linux driver.
It is impossible to make it 100% safe because in order to do that, Rust's language semantics would have to include the semantics of the underlying hardware APIs. Imagine I'm writing a text-mode VGA driver. To do this, I have to write at the memory starting at 0xB8000, because it's a memory-mapped device. The only way this would be safe is if Rust the language understood the VGA spec, because otherwise, it looks like you're accessing arbitrary memory. The only way to call the wfi instruction would be if Rust the language understood ARM semantics directly. Etc etc etc.
The callback thing is a red herring; it's not the fundamental issue here. Rust code can use callbacks just fine, in the general case. It also wasn't the fundamental issue with that API either.
Would svd2rust (or a project like it) be usable for that (assuming device manufacturers displayed some competency and emitted consumable descriptions of their HW APIs instead of reference manuals)?
Those projects generate code with unsafe inside (and sometimes outside). They are very useful for getting safe interfaces, but don't help remove the concept of unsafe entirely.
I see. When writing the VGA driver, writing to memory space between 0xB8000 and the upper bound is technically safe because we know nothing else is mapped there. So you could wrap writes to that region in a “SafeVideoMemoryWrite” function call, and designate that function as safe to call. I believe this is done in the standard library of Rust for efficiency purposes. Is there no way to designate user-level safe interfaces built on top of unsafe? Put another way, is there a way for a user to extend Rust’s notion of safety?
But honestly at that point, using Rust doesn’t seem to be very much different than using C in terms of safety guarantees. It still requires a programmer capable of competently ensuring required runtime properties.
You would do it conceptually in that way, yes; you'd provide a safe API, and then use unsafe inside of it to implement it. The standard library is just regular old Rust code, you can do the exact same thing. And in fact, you'd want to, in order to isolate the unsafety as much as you can.
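To make that concrete, here is a minimal sketch of the pattern: the unsafe constructor states the hardware assumption once, and the safe method bounds-checks every write (`MappedRegion` is invented for illustration; a real VGA driver would use base 0xB8000 and the layout from the spec):

```rust
pub struct MappedRegion {
    base: *mut u8,
    len: usize,
}

impl MappedRegion {
    /// # Safety
    /// `base..base + len` must be valid, writable memory (e.g. a device
    /// mapping like VGA text RAM) for the lifetime of this value.
    pub unsafe fn new(base: *mut u8, len: usize) -> Self {
        MappedRegion { base, len }
    }

    /// Safe API: the bounds check makes out-of-region scribbles impossible,
    /// so the unsafety is isolated to this one audited spot.
    pub fn write_byte(&mut self, offset: usize, byte: u8) {
        assert!(offset < self.len, "write outside mapped region");
        // SAFETY: in bounds, and `new`'s caller promised the region is valid.
        unsafe { std::ptr::write_volatile(self.base.add(offset), byte) }
    }
}
```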
> But honestly at that point, using Rust doesn’t seem to be very much different than using C in terms of safety guarantees.
The difference is that it's limited in scope, and auditable. Even in a kernel, if you do it right, unsafe is the vast, vast minority of code. Let's be extremely generous and put it at 10% (Redox, an OS in Rust, had about 2% unsafe last I checked). That means you still have a much, much smaller space in which to look for these bugs.
In a few words, when dealing with kernel modules and drivers (and I guess every low-level implementation), ensuring that the upper (Rust) layers are safe is still the job of a person: a person, with a great understanding of both worlds, writing the unsafe functions that glue safe-land and the rest of the kernel together.
> The difference is that it's limited in scope, and auditable. Even in a kernel, if you do it right, unsafe is the vast, vast minority of code.
That’s true, but it’s also slightly misleading. Any code that uses the unsafe wrappers technically must be checked, and all code that uses that code must be checked in turn, ad infinitum. Misuse of the unsafe wrapper can occur at any level. For instance, if you misuse a DMA command that corrupts memory, it’s not simply the DMA command wrapper that must be checked; the entire sequence of logic that led to the bad command being executed must also be checked.
> Any code that uses the unsafe wrappers technically must be checked
If that is the case, then your unsafe wrappers are unsound.
Safe functions need to be impossible to use in an unsafe way, or else they should be marked as unsafe.
That could take the form of a runtime check that the function's invariants are maintained or a proof that the function's invariants are always maintained.
That’s true, so now I see where the issue of unavoidable unsafe usage comes in. It’s not always possible to create a safe and general wrapper for all driver functionality, though maybe in special cases. I agree now that using Rust still offers something even if it can’t be guaranteed that the code is 100% safe. Thanks for explaining.
If you're a driver, at some point you'll probably need to write bytes into a device register, mapped to a raw address in physical memory. That's a fundamentally unsafe thing to do: it relies on you as a programmer setting up the structure of the data in those registers correctly. Get it wrong and you can scribble over some arbitrary piece of memory, for example. The compiler can't check your working. It doesn't know the specification of every piece of hardware - that's what drivers are for.
Because you have to interact directly with hardware at a low level at some point (poking registers, implementing syscalls, implementing process and memory address space isolation, context switching between rings, etc.), that will always be inherently unsafe.
Userland Rust depends on these guarantees to provide safe code, something has to implement them.
It will be interesting if this gains traction, while C++ in the kernel never did. Many, many bugs in the Linux kernel would have been prevented with C++ (the entire class of `goto cleanup;` bugs, for example).
Basically, we think (and we hope Linus agrees) that Rust has the relevant benefits of C++ without the downsides that have so far kept C++ from being adopted. (It has other great features that C++ doesn't have like compiler-enforced memory safety, too, but I agree with you that RAII is a pretty important and obvious win for the kernel just by itself.)
Yeah, it's possible to write a disciplined form of C++ that abides by those rules, but I think it's important that the idiomatic form of error handling in Rust is returning Result types instead of unwinding, that the standard library doesn't have coercions that allocate, etc. If you write some Rust code following a Rust tutorial, it would probably be kernel-suitable. That's also largely true of C but not of C++.
(And Linus noticed one of the big parts where Rust doesn't live up to this - the idiomatic thing for memory allocation failure is to unwind - and that's a solvable problem.)
If it "can be idiomatic", that means it's not currently idiomatic, and C++ isn't really heading in that direction. There's also a vast number of C++ programmers out there to whom it's very much not idiomatic and downright foreign. So if you're going to fight against how everyone else writes the language, it's best not to use the language.
No, by “can be idiomatic”, I meant “is already idiomatic in many codebases”. To name one prominent example among many, LLVM disables exceptions and requires that errors be returned as a value: https://llvm.org/docs/CodingStandards.html#do-not-use-rtti-o....
Disabling exceptions isn’t “fighting against how everyone else writes the language” - it’s pretty common and well-supported.
> I think that the standard Rust API may simply not be acceptable inside the kernel, if it has similar behavior to the (completely broken) C++ "new" operator.
I can't really do a "steel man" rebuttal because over the years Linus has argued against a language he calls C++, but which is a made up language invented for the purpose of hating it. A straw man, if you will. It is difficult and frankly not worth the trouble to try to rebut an argument that has never risen above the level of "crap", "broken", and "bullshit".
In the particular case of operator new, it's clear from his cumulative statements that Linus is unaware that the programmer can replace global ::new and make it do whatever the hell he wants it to do. You can also just forbid it and use placement new everywhere.
To be fair, I think most C++ programmers are unaware of replacing global ::new and most don't know about placement new either (from what I've observed). I think lots of people who write library style code do (whether it be for boost, stl, or in their company) but outside of that most C++ devs are not library style devs and are a bit out of the loop on these topics.
Also, I agree with what you said earlier about RAII solving entire classes of issues that C (and C++ without RAII) has.
They've already responded that their use of "alloc", which is designed more for userspace, is just a temporary thing that they plan to replace before any serious submissions.
The alternatives to `goto cleanup;` have drawbacks too, like being less readable (cleanup code sits before the main function body, or destructors do things implicitly).
The goto bugs are often failures to release locks in the error paths, or failure to free some other resource, or sometimes double frees. C++ programs that use RAII locks rarely have these types of issues.
Many casual users never notice these bugs, but if you put a Linux box under enough pressure that some kmalloc fails, it will fall to pieces because the error paths are full of bugs and little-exercised. C has probably the worst error-path control flow of any of the high-level languages.
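In Rust, the same class of bug is what RAII guards address; a small sketch of the lock case from the comment above (`record` is a made-up function):

```rust
use std::sync::Mutex;

// The MutexGuard releases the lock on *every* exit path, including the
// early error return: no `goto cleanup;`, nothing to forget.
fn record(m: &Mutex<Vec<i32>>, x: i32) -> Result<(), &'static str> {
    let mut guard = m.lock().map_err(|_| "poisoned lock")?;
    if x < 0 {
        return Err("negative input"); // lock released here by Drop
    }
    guard.push(x);
    Ok(())                            // ...and released here on success
}
```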
> Would this make Linux the first *NIX-based OS that is written in a language other than C? At least, with high usage...
It depends, but it's certainly not the first POSIX implementation in some high level language other than C. For example:
* Real-Time Executive for Multiprocessor Systems (RTEMS) is a real-time operating system (RTOS) designed for embedded systems. While it's often compiled down to as slim-as-possible it does support POSIX, and it's written in Ada. It's been used in a lot of projects, but because it's really an RTOS, RTEMS is typically embedded within dedicated systems and not something you'd normally interact with directly. https://en.wikipedia.org/wiki/RTEMS
* The BiiN system had an operating system written in Ada, and I think it implemented most of POSIX. However, the BiiN hardware never sold well, so it disappeared. I can't even find much about it on the web.
* BeOS implemented a lot of POSIX, and it was mostly C++. Haiku re-implements much of BeOS and is also written mostly in C++.
If you don't qualify it with "high usage", redox is that. There's also a semantic question; is Linux with some drivers done in Rust really an OS written in a language other than C? It's not like they intend to replace all the existing C with Rust.
"In 1969, Ken Thompson wrote the first UNIX system in assembly language on a PDP-7..." [1]
Half a century ago this living legend wrote an operating system. Can you imagine how much experience this single person holds today? It must be hard for him not to roll his eyes when talking to juniors.
Writing an operating system is not all that hard–thousands of undergraduates do so yearly. What's hard is coming up with UNIX and implementing it as something that others can use.
Following a tutorial with thousands of GitHub repos to look at, from the comfort of your favorite operating system, isn't even remotely comparable to writing Unix in 1969.
Those undergraduates do so with development tools that have had hundreds of thousands of man-hours invested in them. We can't pretend this doesn't make their job easier.
> "Please note that the Rust support is intended to enable writing drivers and similar "leaf" modules in Rust, at least for the foreseeable future. In particular, we do not intend to rewrite the kernel core nor the major kernel subsystems (e.g. `kernel/`, `mm/`, `sched/`...). Instead, the Rust support is built on top of those."
Might be useful to recall that Rust's 1.0 release was in 2015. It may be a little premature to reimplement parts of Linux in such a young language... Rust may be a fad and may not catch on. I have used it a bit and I don't think it offers many significant benefits over C++.
"A company like Microsoft" writes production code in literally every language.
The stakes are low and nobody cares if your little team writes code targeting a Brainfuck compiler. (Yes, this is a real thing; there's a Brainfuck compiler developed somewhere in the vast guts of Google. No, I don't have a link to share with you, sorry.)
No. There's all sorts of hoary crap in the core parts of Windows - Pascal, C++, C, C# and various dialects thereof, JavaScript, Visual Basic, Prolog. This is just off the top of my head.
It's not really an endorsement because the bar to getting into a Google or Microsoft or Facebook codebase is as low as it gets in this industry.
As someone who actually works at Microsoft, I can confidently say that a toy third-party language would have absolutely zero chance of getting approved for production code in any major shipping product. The first question that any manager will ask the developer pitching this is, "Where do I find the devs to maintain this codebase long term? And what happens if the language dies upstream?" - and you better have really good answers to those, or else a very convincing story explaining how the productivity boost is worth it.
Even Rust isn't all that easy. That it cleared the hurdle at all on so many teams already (and Windows especially!) is extremely impressive for a piece of tech so new.
The question wasn't whether Rust is good or not, but rather whether it's entrenched enough to no longer be considered a mere fad. Regardless of your opinion on the merits of Microsoft products or their development process, the point is that it's one of the largest software development companies and - as enterprises tend to be - is more conservative than most in the technology stacks it allows for production code.
And? Companies use immature technology all the time, for better or for worse. Why would a few Microsoft employees using Rust have any effect on its maturity?
I think there is a big ideological element to this. As a Rust outsider, I feel like it's the computer science embodiment of the current political discourse. There is something about the "one true cause" mentality, and the need to co-opt projects with other goals and steer them towards the cause.
Honestly, it is a really weird experience to have around a programming language, and to me is offputting for something that otherwise could have some very interesting merits.
I expect people won't agree with me, and I'm not arguing against the technical merits of the language. I'm just saying that it stands out because of the ideological following it has, and that is cause for concern if ideology is a big reason people push to integrate it into something as important as Linux.
Some of us are mentally scarred from decades of dealing with weak type systems, undefined behavior, and general inability to be sure of what's going on in gargantuan C++ codebases.
Rust was more or less specifically designed to appeal to people like us. Throw a rope to a drowning person and they'll grab onto it hard.
Rust won't make legacy code and legacy coders go away. In fact, as time goes on the amount of crappy legacy Rust code and low-skilled legacy Rust coders will only grow.
> In fact, as time goes on the amount of crappy legacy Rust code and low-skilled legacy Rust coders will only grow
But it will make those codebases a lot easier to deal with because the Rust compiler enforces correctness in a myriad of ways that other languages don't. I would much, much rather deal with a legacy Rust codebase than a legacy C++ one.
This isn't true. Or, rather, there are also a myriad of ways that C++ enforces correctness that Rust doesn't.
Legacy codebases are smelly not because of a lack of tools for enforcing correctness. They're smelly because a) programming is hard, and b) a boatload of things are more important than correctness, in the real world.
> I would much, much rather deal with a legacy Rust codebase than a legacy C++
Obviously, because right now legacy Rust codebases are only 5 years old, while legacy C++ codebases are 30+ years old.
But in 20 years it will make no difference. The Rust of 2031 that will need to accommodate 30 years of legacy backwards compatibility will be no prettier than C++ today.
> there are also a myriad of ways that C++ enforces correctness that Rust doesn't.
There are? Like what?
> But in 20 years it will make no difference. The Rust of 2031 that will need to accommodate 30 years of legacy backwards compatibility will be no prettier than C++ today.
I don't really believe this. There are 20 year old Java codebases, and yes they can be a mess and hell to work with, but they're not nearly as bad as C++ ones. And Rust is stricter than Java.
Exceptions are terrible for correctness. They introduce hidden control flow paths that developers forget about and fail to handle correctly. (Also they break the "pay for what you use" principle, so C++ is split into two ecosystems, one which uses exceptions and one which doesn't.)
Exceptions are better than magical return values to indicate errors, that's true, but Rust Results achieve the same thing without the downsides of exceptions.
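A minimal sketch of that difference: with Result, the error is an ordinary value declared in the function's signature, so there is no invisible unwinding path through the caller (function name illustrative):

```rust
// Errors are ordinary values: the caller must acknowledge them, and
// there is no hidden control flow path out of this function the way
// a thrown exception would create.
fn parse_port(s: &str) -> Result<u16, String> {
    s.parse::<u16>()
        .map_err(|e| format!("bad port {:?}: {}", s, e))
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("eighty").is_err());
}
```

The caller can't silently forget the failure case: ignoring a Result produces a compiler warning, and extracting the value forces a choice between matching, propagating, or unwrapping.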
> That's because Java doesn't attempt to solve difficult problems.
Java isn't my favourite language, but that's just silly. GraalVM will do as a counterexample.
> They introduce hidden control flow paths that developers forget about and fail to handle correctly.
The problem that exceptions solve involves control flow paths that aren't supposed to be handled. Exceptions are not for handling recoverable errors; they are for graceful aborts in a complex, layered, modular program (any multi-threaded server, for example).
> The problem that exceptions solve involves control flow paths that aren't supposed to be handled. Exceptions are not for handling recoverable errors; they are for graceful aborts in a complex, layered, modular program (any multi-threaded server, for example).
For control flow paths that are considered irrecoverable (ie. "this can never happen" branches), Rust has panic!(), which defaults to unwinding the stack, calling Drop implementations (destructors) along the way.
panic!() unwinding only kills the thread it occurs in, and Rust's thread-related APIs are designed to keep data that may have been left in an inconsistent state from being observed by other threads unless they explicitly acknowledge it, e.g. by handling a mutex that has set its "poisoned" flag.
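The poisoning behavior described above can be demonstrated in a few lines (function name illustrative):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// A panic unwinds only the thread it occurs in; a mutex held at the
// moment of the panic is flagged as poisoned so other threads can't
// silently observe possibly-inconsistent state.
fn poisoned_after_panic() -> bool {
    let data = Arc::new(Mutex::new(0));
    let clone = Arc::clone(&data);

    let handle = thread::spawn(move || {
        let _guard = clone.lock().unwrap();
        panic!("thread-local failure"); // unwinds only this thread
    });
    let worker_panicked = handle.join().is_err();

    // Subsequent lock() calls return Err(PoisonError), which must be
    // explicitly acknowledged before the data can be used again.
    worker_panicked && data.lock().is_err()
}

fn main() {
    assert!(poisoned_after_panic());
}
```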
For control flow paths that are considered recoverable, Result<T, E> is basically a way to get checked exceptions which work naturally with higher order functions and have a more concise "call the defined conversion to the specified error return type if necessary, and re-throw" syntax.
If you implement the From/Into interface to define how to convert the error type you received into the error type you're returning, the ? operator will do an "unwrap the Ok value or convert and do an early return of the Err value" in a single character.
A lot of people use the thiserror crate to define their custom error types; it has helpers to make implementing the From/Into interface trivial... possibly as trivial as annotating an enum variant with #[from], depending on what you want out of it.
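A hand-rolled sketch of the From + `?` pattern described above, without thiserror (the type and function names are illustrative):

```rust
use std::num::ParseIntError;

// A custom error enum; thiserror's derive macro would generate the
// From impl (and a Display impl) for you, but it's short by hand too.
#[derive(Debug)]
enum ConfigError {
    BadNumber(ParseIntError),
}

impl From<ParseIntError> for ConfigError {
    fn from(e: ParseIntError) -> Self {
        ConfigError::BadNumber(e)
    }
}

// `?` unwraps the Ok value, or converts the error via From and
// does an early return of the Err value, all in one character.
fn read_retries(raw: &str) -> Result<u32, ConfigError> {
    let n = raw.trim().parse::<u32>()?;
    Ok(n)
}

fn main() {
    assert_eq!(read_retries(" 3 ").unwrap(), 3);
    assert!(read_retries("lots").is_err());
}
```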
Also, this is from 2005 and chose a "considered harmful" title, but this article makes some good points:
Fortunately I am mostly able to avoid working on legacy C++ code these days (rr being the exception, and it's only 70K lines).
Rust code accumulates some cruft over time, but Rust's type system and safety guarantees put a floor on how crappy the code can be. The first Rust code I ever wrote is still part of my project five years later; it's less than perfect, but it's free of undefined behavior now just as it was then. The data structure parsers I wrote four years ago look a bit ugly to me now, but are free of exploitable security bugs, and always were. Etc.
What does that even mean in the absence of a formal standard and competing compiler implementations? (Hint: nothing.)
If you refuse to define anything then you automatically make the "undefinedness" problem go away. (But not the pain it causes.)
> Rust's type system and safety guarantees put a floor on how crappy the code can be
No. The floor on crappiness is defined by the problem domain. Powerful features allow for powerful takes on crappiness.
Unless you want Rust to stay a teaching language forever, you must necessarily introduce abusable features. (See Python for a real-time slow-motion elaboration of this train wreck if you don't believe me.)
> What does that even mean in the absence of a formal standard and competing compiler implementations? (Hint: nothing.)
Undefined behaviour is stuff that your compiler's optimizers have been promised can never occur, so they are allowed to transform your code based on that assumption.
Here's a post I made on Reddit in 2019 with a list of resources on what undefined behaviour is:
(TL;DR: It injects a call to EraseAll because calling Do while it's still null is undefined behaviour, and Do is a static, so the optimizer determines that the only possible answer within the rules it was given is that code outside that compilation unit will have called NeverCalled to set Do = EraseAll before invoking main().)
> What does that even mean in the absence of a formal standard and competing compiler implementations?
It means that when the language designers are asked "what is the behavior of this code (that compiles and doesn't use 'unsafe')?" they never throw up their hands and say "it's undefined behaviour, you must not write that code" (as the C++ definition often says).
They may say "oops, we're not sure what it should do, we need to clarify the language definition and write some tests to ensure the compiler does that". (This happens in C++ too.)
> Powerful features allow for powerful takes on crappiness.
I don't know what this means.
> Unless you want Rust to stay a teaching language forever
A lot of companies big and small are using Rust in production so this is not a compelling premise.
"Undefined behavior" in a C++ context is a legalese feature of the ISO standard. The standard defines, very precisely, the behavior of a compliant C++ compiler. In places where behavior cannot be specified (for logical or practical reasons), the behavior is marked as "undefined".
In absence of a standard effectively every language construct is 'undefined behavior'.
I have zero reason to believe that Rust won't meet Python's fate.
The Rust people don't have a standard and don't understand why they need one; in fact, their lack of standards is somehow touted as a benefit. Apparently, people think that if there is no standard for "defined" and "undefined" behavior that everything is "defined" by default.
(They're absolutely wrong, of course; it's actually the opposite.)
What you seem to be describing is "unspecified behaviour" or "implementation defined behaviour". If your program contains undefined behaviour the compiler makes no guarantees about how your program will behave.
No. If we accept your argument then this would be correct:
> In absence of a standard effectively every language construct is 'implementation defined behaviour'.
But the parent said:
> In absence of a standard effectively every language construct is 'undefined behavior'.
Undefined behaviour != implementation defined behaviour. Both of these things exist in C and are separate. You can rely on implementation defined behaviour giving some consistent result on a given compiler and hardware. You cannot rely on the behaviour of your program if you hit undefined behaviour.
I didn't say 'implementation defined behavior', which would encompass more than the Rust implementation. The Rust implementation defines the behavior; signed integer overflow, for example.
That's clear enough for programmers to rely on. It is nonsense to argue that this is equivalent to C's "the compiler may do anything it wants" just because the "Rust Book" is not called the "Rust Standard" or isn't "formal" enough.
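To make the overflow example concrete: Rust documents exactly what signed overflow does (a panic in debug builds, two's-complement wrapping in release builds), and the standard library provides explicit methods so code can state its intent either way:

```rust
fn main() {
    let x: i32 = i32::MAX;

    // Plain `x + 1` panics in debug builds and wraps in release builds,
    // but it is never a license for the optimizer to assume it can't
    // happen. The explicit methods make the chosen behavior unambiguous:
    assert_eq!(x.wrapping_add(1), i32::MIN); // defined two's-complement wrap
    assert_eq!(x.checked_add(1), None);      // overflow reported as None
    assert_eq!(x.saturating_add(1), i32::MAX); // clamp at the maximum
}
```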
It doesn't, no, but if I advocate for the language enough and get it used enough, then for the rest of my career I can maybe avoid ever having to write new projects in said languages. I can just avoid jobs where such languages are still commonly used.
Wait ten more years and Rust will become one of these 'said languages'. (Maybe even faster; the Rust people make it their principle to never learn from others' mistakes. There is no way they won't repeat them.)
You're just making stuff up. Most aspects of Rust's design are based on lessons learned from mistakes and good ideas in C++ and other languages. There was a conscious effort to avoid inventing more new things than necessary.
Is it an ideology, or is it just a growing consensus that Rust really is as good as everyone says it is? While I can see how it might look ideological, it doesn't "feel" ideological to me, at least. It feels like a few years ago we hit critical mass: enough people felt that Rust was never going away that they began to really rely on it, and once they showed that work to other people, those people liked what they saw and decided to give it a try. Rust is growing organically on the back of "doing the right things" and just generally being "a good language" for so many more things than people are used to.
Bryan is a hardcore C guy, and can write safe and reliable C code. But he has reached the limits of what C can offer in terms of composability and abstraction.
Like Linus, he wants and needs the kind of high performance and dependable behavior for systems programming that isn't possible with a garbage-collected language.
Agreed. Which is why I prefer garbage collected languages. Rustaceans, however, harbor an irrational fear of garbage collection pauses, hence "memory safety-first ideology" built on garbage collection is heresy. The one true path to memory safety nirvana is through the holy borrows checker. :)
I came to Rust for things like sum types, lack of unexpected null values, monadic error handling (Result<T, E>), and the borrow checker's ability to enable the typestate pattern.
(https://cliffle.com/blog/rust-typestate/ for more on what the typestate pattern is but the TL;DR: is "verifying correct traversal of a state machine at compile time". For example, making it a compile-time error to try to set an HTTP header after you've started streaming the body.)
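A stripped-down sketch of that HTTP-header example, with illustrative names: the response's current state lives in its type, so calling a method in the wrong state is a compile error rather than a runtime bug.

```rust
use std::marker::PhantomData;

// Zero-sized marker types representing states of the state machine.
struct WritingHeaders;
struct StreamingBody;

struct Response<State> {
    buf: String,
    _state: PhantomData<State>,
}

impl Response<WritingHeaders> {
    fn new() -> Self {
        Response { buf: String::new(), _state: PhantomData }
    }
    // Only callable while still writing headers.
    fn header(mut self, k: &str, v: &str) -> Self {
        self.buf.push_str(&format!("{}: {}\r\n", k, v));
        self
    }
    // Consumes the header-writing state and returns the body state;
    // the old value is moved, so it can't be used again.
    fn start_body(mut self) -> Response<StreamingBody> {
        self.buf.push_str("\r\n");
        Response { buf: self.buf, _state: PhantomData }
    }
}

impl Response<StreamingBody> {
    fn write(mut self, chunk: &str) -> Self {
        self.buf.push_str(chunk);
        self
    }
}

fn build_demo() -> String {
    let r = Response::new()
        .header("Content-Type", "text/plain")
        .start_body()
        .write("hello");
    // r.header("X-Too-Late", "1"); // compile error: no such method
    //                              // on Response<StreamingBody>
    r.buf
}

fn main() {
    assert!(build_demo().ends_with("hello"));
}
```

The borrow checker matters here because consuming `self` by move guarantees the old state can't be touched after the transition.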
Before Rust, I'd spent 15 years with Python as my preferred language, and I had experience with TypeScript, CoffeeScript, JavaScript, PHP, Bourne Shell, and the "used very little or very long ago, so I forgot" kind of experience with Lua, XSLT, Perl, C, C++, Visual Basic, QBasic, and DOS/Windows Batch Files.
For me, it's purely about Rust reducing the amount of time I have to spend writing Python unit tests to get the level of confidence I want in my codebases without having to put up with the quirks of a pure functional language like Haskell that doesn't put high value on long-term API stability. (And yes, I do use MyPy and type annotations heavily.)
It's also a big boost to the value proposition that, with no garbage collector of its own, it's easy to integrate Rust modules into my PyQt GUIs or Django+Celery web apps using rust-cpython or PyO3. Trying to hand off objects between multiple garbage collectors in the same address space without something like "serialize the whole thing, hand over ownership of the bag of bytes, then deserialize it" is a recipe for pain.
I also have extensive experience of every language you mention and I have coded a fair bit of Rust too. I have over 20 years of experience in Python, C, C++, ELisp, Java, JavaScript, etc. Rust forces you to spend time and effort thinking about something that is automatic in high-level languages, namely collection of garbage. It's silly to claim that forcing developers to spend more time on bookkeeping causes fewer bugs.
Testing is orthogonal to typing, and I believe that those who claim the latter can make up for the former simply do not understand how to test software. Correct typing is the foundation of a correct program, but the lack of type errors absolutely does not indicate a lack of bugs. Logic errors that are not caused by type errors are far more common, far more dangerous, and are not caught by type checkers.
> It's also a big boost to the value proposition that, with no garbage collector of its own, it's easy to integrate Rust modules into my PyQt GUIs or Django+Celery web apps using rust-cpython or PyO3. Trying to hand off objects between multiple garbage collectors in the same address space without something like "serialize the whole thing, hand over ownership of the bag of bytes, then deserialize it" is a recipe for pain.
I've written a binding for CPython in Factor. There is some plumbing work for sure, but no, it is not that complicated. You manage objects created by the foreign memory manager using special tokens. The difficult part is callbacks; host language calling CPython which calls back to the host language. rust-cpython's documentation doesn't mention callbacks at all so I guess it doesn't support it.
> I also have extensive experience of every language you mention and I have coded a fair bit of Rust too. I have over 20 years of experience in Python, C, C++, ELisp, Java, JavaScript, etc. Rust forces you to spend time and effort thinking about something that is automatic in high-level languages, namely collection of garbage. It's silly to claim that forcing developers to spend more time on bookkeeping causes fewer bugs.
Your mileage may vary but, on average, I find the time lost to bookkeeping is dwarfed by the time saved on not having to use testing to verify invariants which are upheld by the type system.
Granted, my coding style in Python was already quite similar to what Rust lends itself well to.
...plus, I just find it more relaxing to code in a language where I can delegate more of that to the compiler's type-checker.
> rust-cpython's documentation doesn't mention callbacks at all so I guess it doesn't support it.
First, rust-cpython is the older, less advanced binding. PyO3 forked off from it to explore more advanced API designs that, at the time, required API-unstable features only available in the nightly Rust but PyO3 now runs on stable Rust.
Second, I haven't tried closures with rust-cpython yet but, for functions, the py_fn! macro is how you wrap a Rust function into something you can inject into a namespace in the Python runtime.
As for calling Python functions from Rust in rust-cpython, you call the run or eval methods on the object which indicates that you've taken the GIL.
I suppose, if closures aren't supported, you could work around it by using methods on a py_class!-wrapped Rust object instead.
Third, I avoid C++ these days and limit my modern use of C to retrocomputing projects. It's just not worth the mental effort to write C in my projects, so I stick to binding things in ways where I get a memory-safe, type-safe binding and someone else can be responsible for stressing out over the correctness of the binding generator.
> Your mileage may vary but, on average, I find the time lost to bookkeeping is dwarfed by the time saved on not having to use testing to verify invariants which are upheld by the type system.
Memory management is orthogonal to type checking. I agree with you that static type checking has advantages but that is not what I meant by bookkeeping. The bookkeeping is Rust's borrow checker, explicit memory management in languages like C, or even weak references in some languages. This is time lost which wouldn't have been lost had the programmer chosen to use a language with tracing garbage collection.
In fact, you can think of automatic garbage collection as the language upholding certain invariants about memory that the programmer otherwise would be forced to ensure themself.
Thus a Java programmer has to think less about memory handling than a Rust programmer. Less to think about means fewer bugs. You may argue that it is worth it because Rust is faster than Java. I have not seen benchmarks that prove that but, even so, I can count on one hand the number of times Java's GC has caused significant problems even in performance-sensitive code.
> Second, I haven't tried closures with rust-cpython yet but, for functions, the py_fn! macro is how you wrap a Rust function into something you can inject into a namespace in the Python runtime.
Right, it's an engineering problem so I'm sure it's solvable in Rust. After all, it was solvable in Factor, which is a GC'd language. My point was that I doubt Rust's lack of GC makes it easier to embed CPython.
...and I'm saying that Rust is the only language where the benefit of using it for its type system has, for me and so far, outweighed the downsides of having to think about memory management.
Java? Pain in the ass with all those non-inferred type signatures and comparatively poor POSIX API integration. Also, I've yet to encounter a Java GUI, AWT, Swing, SWT, or otherwise, that wasn't buggy and sluggish under X11, and the startup time is, in my experience, even worse than the Python-based CLI utilities I'm often migrating to Rust to get improved startup time. (Also, checked exceptions don't compose well with higher-order functions. Monadic error handling does.)
C#? Not as bad as Java, but still doesn't have a value proposition that would make me switch away from Python.
C or C++? No. I use Rust for getting a stronger type system on top of something that's still memory-safe, not its performance. (Aside from the aforementioned "If Rust is offering, sure I'll take faster startup for my CLI tools AND monadic error handling AND explicit nullability".)
Vala? I forgot to mention that I played around with it and it's got all the ills you'd expect of a niche compile-to-C language.
TypeScript on Node.js? Worse than Python+MyPy in pretty much every way that matters to me except for having native sum types.
Haskell? Sorry. You'd have to pay me to code in a pure functional language, even without my dislike for its syntax and the ecosystem's philosophy of not being afraid to break APIs to advance the state of the art.
etc. etc. etc.
Still, as I've said before, I tended to already use Python for stuff that's well-suited to Rust. For example, so far, I've yet to need an Rc or Arc aside from the ones actix-web embeds in its data containers.
...and if I did, I certainly would want something along the lines of the borrow checker double-checking that I'm not introducing data races in threaded code, and a system akin to Rust's for compiler safety checks on non-memory resources managed through RAII.
...plus, it's shamefully rare to find things that match Serde for declarative serialization and deserialization, let alone exceed it.
That said, I'm a "right tool for the job" kind of guy and I still use Python for anything that involves SQL for want of a library like Django ORM or SQLAlchemy+Alembic which abstracts over the difference between SQLite and PostgreSQL DDL, doesn't use the database or a raw SQL file as the authoritative source of truth for the schema, and has a migrations system which auto-generates draft migrations by diffing the authoritative schema against the database.
Rust's memory safety features include "fearless concurrency".
Popular GC languages like Java or Go are not able to prevent a class of data race bugs that Rust prevents at compile time. Languages like Haskell of course handle it much better, but are for a multitude of reasons not popularly used in production.
And as the sibling comment points out, GC languages are not applicable everywhere. This discussion is in the context of the first not-C language being added to the Linux kernel. Can you imagine a GC language being similarly considered?
So yes, the borrow checker is amazing. That feeling of satisfaction and confidence when the program finally compiles after a round of serious coding or refactoring is unparalleled among the mainstream languages I've tried so far.
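A small illustration of what "fearless concurrency" buys: the snippet below compiles only because the shared counter is wrapped in Arc<Mutex<_>>. Handing a plain `&mut u64` to multiple threads would be rejected at compile time instead of racing at runtime (function name illustrative):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// The Send/Sync traits plus the borrow checker force shared mutable
// state into a synchronization primitive; forgetting the Mutex is a
// compile error, not a latent data race.
fn count_to(n: u64, threads: u64) -> u64 {
    let counter = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..n {
                    *c.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    assert_eq!(count_to(1000, 4), 4000);
}
```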
Well the set of all programs where garbage collectors can be used is smaller than the set of all programs where manual memory management can be used (either done by the compiler or manually by the programmer). This doesn't counter your point completely, but it bears thinking about.
I've complained about Apple, barely; about vscode, because I hate Electron; and about Rust, because reasons. I wasn't confrontational or a jerk, only expressing mild disappointment or dislike. All of those comments ended up with negative votes. Have you ever seen a comment complaining that something popular is bad, or that the commenter doesn't like it, with a positive score?
Rust seems neat but one thing I've always liked about Linux is how few dependencies it needs to compile. The current rust compiler is not small and depends on LLVM which is not great.
> LLVM does not target all of the architectures that Linux supports and just because a target is supported in LLVM does not mean that the kernel will build or work without any issues.
You can compile most common Linux kernel targets with clang, but the kernel was historically optimized with GCC in mind, so I believe there are still size and speed impacts from using LLVM.
Not to mention plain ol' inertia. The vast majority of people compiling the kernel are using GCC. Switching to a different toolchain or maintaining two toolchains (assuming there are no subtle ABI issues between a GCC kernel and LLVM driver) isn't necessarily trivial, and requiring that just to build a single driver might rub some people the wrong way. That's not a _problem_ with LLVM, but it is an inconvenience.
ARM seems to have pretty thoroughly won the category of “embedded targets that are capable enough to run Linux”, though. Other than ARM and (presumably in the future) RISC-V, what other architectures are still relevant for new embedded designs that would run Linux?
Obviously there is still significantly more diversity at the low end of the embedded world, but you wouldn’t run Linux on e.g. an MSP430, even if you somehow had a compiler that would support that.
This is why the idea for Rust in the kernel, for now, only targets drivers. Drivers are target specific, so you can use it only for drivers on architectures that are LLVM capable.
Note that the portability issue mentioned by many here is much less of an issue when writing drivers. After all drivers tend to be architecture specific.
This is why the idea is to start using Rust in the kernel for drivers.
> At the present time, we require certain nightly features. That is, features that are not available in the stable compiler. Nevertheless, we aim to remove this restriction within a year by either `rustc` landing the features in stable or removing our usage of them otherwise.
Using unstable features in something like the Linux kernel just seems like a bad idea. Why not wait until Rust is a little more matured, and doesn't need to use nightly features?
They might. There are five things that they're using that they can't work around:
One is going to be stable in Rust 1.52.0, the next release of Rust. This will happen before the next kernel release, so it barely counts. It's also a documentation tooling feature.
One is related to symbol mangling, and so in theory could be worked around I'd imagine, but is landing soon enough they don't think it's worth it.
The final three are related to each other; two of them look like they'll be stabilized pretty soon, and the third, while its timeline is less clear, is one pretty small thing.
So, the answer is basically that the exposure here is pretty small. How much that matters is a social question as much as a technical one. While "unstable" is a binary designation, within it lies a spectrum of instability, from "this feature has no path to stabilization" to "this will be stable in the next Rust" and everything in between. Maybe allowing some unstable things that are closer to the latter is acceptable, maybe not.
Sure, writing kernel code in a more modern language could be a big improvement in the short run (well, as soon as one can write the code, I guess). But what about the long(er) run? By the time this project gets ready for serious consideration (no dependencies on experimental features of Rust, support for all relevant architectures), there could well be a dozen different languages available that improve greatly on Rust, and transitioning away from (a mix of C and) Rust would be difficult compared to transitioning away from C.
I know that many think Rust is the be-all and end-all of programming languages, so much so that there's this effort to put Rust into Linux despite the requirement of nightly Rust versions and Rust's still-inadequate support for various architectures. But the hype around Rust has always seemed misguided to me. Rust is a language built on the (interesting) idea of ensuring memory safety without a garbage collector, which is cool, but I get the feeling that most Rust fans forget that the actual goal is correctness, not memory safety.
On another note, using C++ in the Linux kernel could give a similar kind of improvement, but with much lesser costs than with Rust.
> (no dependencies on experimental features of Rust,
Unclear if this is a hard requirement; even if it is, it's stated to be feasible on the timescale of "less than a year".
> support for all relevant architectures)
This is already fulfilled; this is for drivers only, which are inherently platform-specific, and so only drivers that are usable on the supported platforms would be written.
> there could well be a dozen different languages available that improve greatly on Rust
As mentioned above, I doubt that this is true within a year, but beyond that, any new language is also going to have all of these same growing pains, but be a decade behind. When they're ready, they should be considered, but "don't do a good thing now because maybe, some unknown amazing thing might appear" isn't usually a good way to make decisions.
> > there could well be a dozen different languages available that improve greatly on Rust
> As mentioned above, I doubt that this is true within a year, but beyond that, any new language is also going to have all of these same growing pains, but be a decade behind. When they're ready, they should be considered, but "don't do a good thing now because maybe, some unknown amazing thing might appear" isn't usually a good way to make decisions.
This is true, but Linux is a long-term project; we don't know what the future will bring, but I'd be surprised if it weren't still a mainstream kernel in 20 or even 40 years.
It seems to me it would be wise to look at the future as well, and perhaps things are the wrong way around; rather than asking "what existing languages could work for the kernel?" the question should be "what would the ideal language for the kernel look like?"
It always seemed to me that Rust is a bit of a strange choice for the Linux kernel; I can see why it has some appeal, but it's also a fairly large and complicated language.
How "complex" or "minimal" a language should be is an old debate that we've all done at least a few times, so don't want to repeat it here; for my part, I feel comfterable with both approaches; I'm just as happy programming in Ruby as in Go for example (even though they're polar opposites in many ways), but both approaches come with their own set of advantages and downsides.
I'm not so sure that a large and complex language is really the best fit for this particular purpose. For example, writing a C compiler for a new architecture is comparatively easy, whereas porting Rust to something new will be much harder.
Given Linux's long-term prospects it might be wise to invest in an "ideal" language rather than a "it'll work" language.
Then again, I don't work on Linux so what do I know :-)
Frankly, one of the best things we could do to help this hypothetical future language be adopted is to have precedent for Linux using another language.
We're really trying two things here - getting Linux to use some Rust, and getting Linux to use some other language, with an independent compiler. That second part is pretty significant by itself (we've had discussions about build systems, compiler ABI bugs, etc.), and even if we were just trying to add C Except Different, it would be a major project on its own.
Once we've figured out how to make them interoperate and what the ground rules are (architecture compatibility, where new development goes, whether the new language can be mandatory, release timeframes, etc.), a third language will be on much better-explored territory.
(I do actually think that Rust is pretty close to ideal for the Linux kernel as it currently exists, though! As I argue in https://ldpreload.com/p/kernel-modules-in-rust-lssna2019.pdf / https://www.youtube.com/watch?v=RyY01fRyGhM , a lot of Rust constructs happen to match idioms and abstractions that the kernel uses with C. But yes, there's a lot of research on both good kernels and good programming languages yet to be done.)
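One concrete version of "Rust constructs matching kernel idioms": the kernel is full of paired acquire/release patterns (spin_lock()/spin_unlock(), get_device()/put_device()) that rely on the programmer never forgetting the release. Rust's Drop trait expresses the same idiom but has the compiler insert the release. A minimal sketch, with a hypothetical `Lock`/`Guard` pair (not any real kernel API):

```rust
use std::cell::Cell;

// Hypothetical lock with paired acquire/release, mirroring the kernel's
// spin_lock()/spin_unlock() idiom — except the release is enforced by Drop.
struct Lock {
    held: Cell<bool>,
}

// The guard borrows the lock, so it cannot outlive it.
struct Guard<'a> {
    lock: &'a Lock,
}

impl Lock {
    fn new() -> Self {
        Lock { held: Cell::new(false) }
    }

    fn acquire(&self) -> Guard<'_> {
        self.held.set(true);
        Guard { lock: self }
    }
}

impl Drop for Guard<'_> {
    fn drop(&mut self) {
        // Runs automatically when the guard leaves scope: forgetting to
        // unlock simply isn't expressible, unlike in C.
        self.lock.held.set(false);
    }
}

fn main() {
    let lock = Lock::new();
    {
        let _g = lock.acquire();
        assert!(lock.held.get()); // held inside the critical section
    } // `_g` dropped here -> release runs automatically
    assert!(!lock.held.get());
    println!("released on scope exit: {}", !lock.held.get());
}
```

This is the same structural idiom the kernel already follows by convention; Rust just turns the convention into something the compiler checks.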
Right; going from n=1 to n=2 is usually a lot of work, but going on to n=3 and beyond after that is usually much easier.
It's good this is finally being considered; there have been projects like this for decades but they never made much headway. I remember seeing a presentation about a memory-safe driver framework over 10 years ago, and while it was functional (according to the authors) it never seemed to see much adoption (IIRC it generated C code; I can't recall the name of the project). There's also stuff like Cyclone[1], and I never understood why that never got any traction, as it embodies more or less the same ideas as Rust (the homepage even recommends Rust now). And there's of course D (which can run without a GC), although the lack of adoption there can probably be explained by the licensing.
Personally, I don't think C++ would give an improvement similar to Rust or a future competitor to Rust. C++ has a severe usability problem: many companies have to heavily restrict which subset of C++ is used based on their use cases. I don't know what contributing to the Linux kernel is like, but I imagine having readable code really helps. Paring C++ down to the subset that gives measurable improvements while also being readable seems daunting. Where do you draw the lines? Will concepts be allowed, what about template-heavy code and SFINAE? If you go down the C-with-classes route (which doesn't provide too much advantage), it might be easier for existing devs to work on it, but if you go too far the other way, you could alienate a lot of people who already contribute to the kernel and don't have the time or will to understand C++'s complex template deduction system.
Rust seems to be much more opinionated about how to do certain tasks, which might make it easier to use for the kernel, because there will be one obvious way to do a given task. That's my read on things at least.
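Error handling is a good illustration of that opinionatedness. In C drivers, errors are conventionally negative errno values that a caller can silently ignore; in Rust there is one idiomatic mechanism, Result, and the compiler warns when a Result is discarded. A sketch with a hypothetical `probe` function (illustrative names, not a real kernel API):

```rust
// A tiny stand-in for errno values; illustrative only.
#[derive(Debug, PartialEq)]
enum Errno {
    Inval, // EINVAL
}

// Hypothetical driver probe: failure is part of the return type,
// not a convention the caller has to remember to check.
fn probe(dev_id: u32) -> Result<String, Errno> {
    if dev_id == 0 {
        return Err(Errno::Inval);
    }
    Ok(format!("device{}", dev_id))
}

fn main() {
    // There is one idiomatic way to consume the result: handle both arms.
    // (Silently ignoring it triggers an `unused_must_use` warning.)
    match probe(0) {
        Ok(name) => println!("probed {}", name),
        Err(e) => println!("probe failed: {:?}", e),
    }
    assert_eq!(probe(7), Ok("device7".to_string()));
    assert_eq!(probe(0), Err(Errno::Inval));
}
```

The point isn't that C can't express this, it's that Rust leaves only one well-trodden path, which cuts down on the style debates a kernel with thousands of contributors would otherwise have.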
"There may be something better in the future, so we shouldn't change anything ever" is not a very strong argument.
No one thinks Rust is the end-all-be-all. They simply think that Rust is better than C, quite easily at that. And Rust does focus on correctness... If people only wanted memory safety, almost any non-C/C++ language would be good enough. Rust's correctness story is wonderful, which is why I myself don't focus on the garbage-collector thing when talking about Rust.
And C++ has been thoroughly rebuked by Linus, and his statements have only gotten stronger with time.
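To make the memory-safety-versus-correctness distinction above concrete: a program can be perfectly memory-safe and still wrong, and Rust's contribution to correctness is that its type system makes it cheap to encode invariants so some logic bugs fail to compile. A hypothetical example (the pricing function and newtypes are made up for illustration):

```rust
// Memory-safe but incorrect: both parameters are plain u32s, so the
// per-kg and per-km rates can be applied to the wrong operand and the
// compiler has no way to notice.
fn shipping_cost_buggy(weight_kg: u32, distance_km: u32) -> u32 {
    // logic bug: rates swapped (should be weight * 2 + distance * 1)
    distance_km * 2 + weight_kg * 1
}

// Newtypes encode the units in the type system, so mixing them up
// becomes a compile error rather than a silent mispricing.
#[derive(Clone, Copy)]
struct Kg(u32);
#[derive(Clone, Copy)]
struct Km(u32);

fn shipping_cost(weight: Kg, distance: Km) -> u32 {
    weight.0 * 2 + distance.0 * 1
}

fn main() {
    // The buggy version happily compiles and returns the wrong answer:
    assert_eq!(shipping_cost_buggy(10, 3), 16); // should have been 23
    // With newtypes, `shipping_cost(Km(3), Kg(10))` does not compile;
    // the correct call is unambiguous:
    assert_eq!(shipping_cost(Kg(10), Km(3)), 23);
}
```

So "Rust gives you correctness" is too strong, but "Rust gives you tools that make correctness cheaper to buy" is fair, which I think is the distinction the parent is drawing.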
This 100%. I've seen Rust fans quote "correctness" without knowing what that word means, saying Rust gives it to you for free. It does NOT, and claiming it does conveys the wrong message.
This comment getting downvoted is everything wrong with HN.
Can you link to Rust fans saying Rust gives you correctness for free? I've seen the opposite: Rust folks interested in proofs and property tests and fuzzing as well as what Rust gives you for free, which they understand is not "correctness".
I am pleased that these are all addressable things. We'll see!