
Big downside: you now have a dichotomy of functions that block using futures and functions that block at the OS level, with no sane way to intermix them. Rust essentially becomes two languages. Async/await sugar doesn't fix this.

Would be great if functions could be written in a general way for both IO models and users could select the implementation at their convenience.



  > no sane way to intermix them.
My understanding is that the idea is to put the blocking stuff in a thread pool with https://github.com/alexcrichton/futures-rs/tree/master/futur...
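
A minimal sketch of that pattern, assuming the futures and futures-cpupool crates (0.1-era API); the file name and thread count are illustrative:

    // Offload a blocking call onto a CpuPool so it becomes a future that
    // composes with the rest of a futures-based program.
    extern crate futures;
    extern crate futures_cpupool;

    use std::io;
    use futures::Future;
    use futures_cpupool::CpuPool;

    fn main() {
        let pool = CpuPool::new(4); // worker threads reserved for blocking work

        // spawn_fn runs the closure on a pool thread; the returned CpuFuture
        // resolves once the blocking call finishes.
        let read = pool.spawn_fn(|| -> io::Result<String> {
            std::fs::read_to_string("config.toml") // OS-level blocking I/O
        });

        // Compose or wait on it like any other future.
        match read.wait() {
            Ok(contents) => println!("read {} bytes", contents.len()),
            Err(e) => println!("error: {}", e),
        }
    }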


Right, there may be a lot of back-and-forth marshaling between using thread pools and not, depending on whether the library you're using is futures-based or not.

Maybe you use one library that is futures-based and one that isn't. Maybe the library you use is mostly non-blocking except for one use of sleep() or another esoterically blocking call. It's just annoying and prone to error. Most people may not even be aware of the subtly blocking nature of the code they use in their futures-based project.
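
For instance, here's a hedged sketch (futures 0.1 style; add_delay and the fetch future are illustrative names, not from any particular library) of how one stray blocking call can hide inside otherwise non-blocking code:

    // The closure runs on whichever executor thread polls this future, so the
    // thread::sleep parks that thread and stalls every other future on it.
    extern crate futures;

    use std::thread;
    use std::time::Duration;
    use futures::Future;

    fn add_delay<F>(fetch: F) -> impl Future<Item = String, Error = F::Error>
    where
        F: Future<Item = String>,
    {
        fetch.map(|body| {
            thread::sleep(Duration::from_secs(1)); // looks harmless, blocks the thread
            body
        })
    }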

This is what I mean by having two different languages. Libraries written for one aren't always/simply compatible with the other. You'll have a growing community of nominally "futures-based" rust libraries too.

This is why people use Go or Erlang. It just removes the need to think about this. Not saying they are generally better than Rust, and some Rust people may even like having a futures-based sublanguage, but I suspect most programmers will be loath to deal with the extra mental tax.


> Right, there may be a lot of back-and-forth marshaling between using thread pools and not, depending on whether the library you're using is futures-based or not.

So just like if you use cgo. You can't get away from having to deal with the issue entirely; the most you can do is to punt it to the FFI layer. There is the question of how much of the community is using blocking vs. nonblocking I/O, to be sure, but Go has a version of that too: how much of the community is using cgo vs. how much of the community is writing in pure Go.

> This is what I mean by having two different languages.

Calling them "two different languages" is a huge exaggeration. You simply block or switch to a thread pool: it's very easy.

> I suspect most programmers will be loath to deal with the extra mental tax.

I like having the low-level control over blocking vs. not, especially in situations where I can't use async I/O everywhere (for example, my work on Servo). In fact, it's essential.

Ultimately this is going to come down to "you should be willing to pay a performance and control tax for a more ergonomic model" vs. "you shouldn't give up performance and control for a small amount of ergonomics". Yes, there is a tradeoff here. That's fine. Taking Go's side of the tradeoff would make Rust unusable for my domain, and for many others (which is why M:N was the most controversial issue ever in the Rust community, with most of the community demanding that it be removed, while in Go nobody questions it). Some people may not want Rust's side of the tradeoff, and that's fine too.


I see your cgo analogy, but the issue is much less pronounced there since the programming interface is the same: the programmer is supposed to assume everything will work as it should (even if it doesn't always). In this case it's a different programming interface, and I think that stresses the issue.

Regarding your comment about preferring control over blocking/async code: I think that's right. At the same time, some C++ programmers would say they prefer having to think carefully about how memory is managed in their program (say, for the benefit of speed with no bounds checking). C++ draws a line, Rust draws a line, Go draws a line, Java draws a line, and Python draws a line. These lines are somewhat about technical superiority and somewhat about programmer identity/preference, but they are mostly about domain-specific constraints and necessary tradeoffs. This futures-based approach will be sufficient (if somewhat inconvenient) where Go/Erlang can't be used, e.g. where GC pauses are absolutely intolerable.


It's not just about GC pauses. Rust isn't "little Go" that you reach for only when you can't afford a GC. Many people choose Rust for the cargo package manager, generics, pattern matching, mature optimizer that prioritizes runtime speed over compilation speed, ability to write libraries callable by any language, fast FFI, compiler-enforced data race prevention, memory safety in multithreaded mode, etc. etc. These benefits apply to servers too. And many of these benefits are what lead to the futures model being more appropriate than the M:N model for the language.

Go has its benefits too, of course! One of those benefits is that blocking I/O is a simpler mental model. Both languages can happily coexist without one being in the shadow of the other.


That's true. But caring about the details is the bread and butter of the kinds of programs that Rust is targeted at. If you can get away with the reduced performance, abstracting away those details is a totally reasonable choice. Rust is not and never can be a programming language for all programmers, and that's super okay.


> Would be great if functions could be written in a general way for both IO models and users could select the implementation at their convenience.

We tried this with a compile-time switch between 1:1 and M:N threading in earlier versions of Rust and the results pleased nobody. It was slow, complex, and unwieldy.


It's my understanding that the alternative green thread runtime used stack swapping.

What I'm referring to here is the same futures method under the hood but transparent to the user.

It's also my understanding that there were unrelated engineering constraints that caused it to be unwieldy, such as binary size. I believe it's possible to provide an alternative runtime without it necessarily affecting the main configuration or limited environments like embedded devices.


> What I'm referring to here is the same futures method under the hood but transparent to the user.

CPS-transforming the entire program is possible in theory, but if you want the same zero-cost behavior you will run into the same issues I outlined elsewhere: higher-order control-flow analysis will be necessary, and it will fall down a lot.



