I'm generally a fan of Zig, but it's a little sad seeing them go all in on green threads (aka fibers, aka stackful coroutines). Rust got rid of their Runtime trait (the rough equivalent of Zig's Io) before 1.0 because it performed badly. Languages and OSes have had to learn this lesson the hard way over and over again:
> While fibers may have looked like an attractive approach to write scalable concurrent code in the 90s, the experience of using fibers, the advances in operating systems, hardware and compiler technology (stackless coroutines), made them no longer a recommended facility.
If they go through with this, Zig will probably top out at "only as fast as Go", instead of being a true performance competitor. I at least hope the old std.fs sticks around for cases where performance matters.
I'm not sure how you got the perception that we're going "all in" on green threads, given that the article in OP explicitly mentions that we're hoping to have an implementation based on stackless coroutines, based on this Zig language proposal: https://github.com/ziglang/zig/issues/23446
Performance matters; we're not planning to forget that. If fibers turn out to have unacceptable performance characteristics, then they won't become a widely used implementation. Nothing discussed in this article precludes stackless coroutines from backing the "general purpose" `Io` implementation if that turns out to be the best approach.
That is lovely to hear. I think the general consensus is that not a single programming language has done async right, so people are a little sceptical. But Andrew and the team so far seem to have a do-it-right mentality, so I guess people should be a little more optimistic.
The entire language is single-threaded, but I/O uses a separate thread pool.
> memory usage
Are you talking about the extra 120 bytes per Promise?
> function coloring
How does it manifest in JS? You can `await` a non-async function without any issues; anything potentially async is awaited, and if it doesn't end up doing anything async inside, there's no problem.
Does the BDFL want this though, or is it just one person's opinion that it might be nice? Given how he has been aggressively pruning proposals, I don't put any hope in them anymore unless I see some kind of positive signal from him directly.
e.g. I'd feel a lot more confident if he had made the coroutine language feature a hard dependency of the writergate refactor.
I'm aware, but Zig isn't a democracy where the core team votes, right? Has Andrew actually expressed that he wants the proposal? Without that we're left with scraps like this commit message where he seems ambivalent. https://github.com/ziglang/zig/commit/d6c90ceb04f8eda7c6b711...
Andrew, I know you read these threads sometimes, give us a sign so I can go down the mountain with my stone tablets and tell the people whether we'll have coroutines
We don't know whether or not we'll have stackless coroutines; it's possible that we hit design problems we didn't foresee. However, at this moment, the general consensus is that we are interested in pursuing stackless coroutines.
While Andrew has the final say, as Loris points out, we always work to reach a consensus internally. The article lists this as an implementation that will probably exist, because we agree that it probably will; nobody is promising it, because we also agree that it isn't guaranteed.
Also, bear in mind that even if stackless coroutines don't make it into Zig, you can always use a single-threaded blocking implementation of `Io`, so you need not be negatively affected by any potential downsides to fibers either way.
This new `Io` approach has made it strictly more likely than it previously was that stackless coroutines become a part of Zig's final design.
But how will that actually work? Your stackless coroutines proposal talks about explicit primitives for defining a coroutine. But what about a function that's not designed for any particular implementation strategy - it just takes an Io and passes it on to some other functions? Will the compiler have a way to compile it as either sync or async, like apparently it did before? It would have to, if you want to avoid function colors. But your proposal doesn't explain anything about that.
Disclaimer: I'm not actually a Zig user, but I am very interested in the design space.
Right, the proposal doesn't discuss the implementation details -- I do apologise if that made it seem a little hand-wavey. I opted not to discuss them there, because they're similar-ish to the way we lowered stackless async in its stage1 implementation, and hence not massively interesting to discuss.
The idea is that, yes, the compiler will infer whether or not a function is async (in the stackless async sense) based on whether it has any "suspension point", where a suspension point is either:
* Usage of `@asyncSuspend`
* A call to another async function (sketched briefly below)
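For illustration, here's roughly how that inference would play out on a tiny call graph. This is a sketch under the proposal's assumptions: `@asyncSuspend` is only a proposed builtin from ziglang/zig#23446 and its exact signature isn't settled, so it appears in comments only, and the function names are made up.

```zig
// Would be inferred async: it would contain a direct suspension point,
// e.g. something like `@asyncSuspend(...)` (proposed builtin, not in the
// language today, hence commented out).
fn waitForSocket() void {
    // @asyncSuspend(...);
}

// Also inferred async: no suspension point of its own, but it calls an
// async function, and asyncness propagates up the call graph.
fn handleRequest() void {
    waitForSocket();
}

// Stays synchronous: no suspension point anywhere beneath it, so it can
// be lowered exactly as it is today.
fn checksum(bytes: []const u8) u32 {
    var sum: u32 = 0;
    for (bytes) |b| sum +%= b;
    return sum;
}
```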
Calls through function pointers (where we typically wouldn't know what we're calling, and hence don't know whether or not it's async!) are handled by a new language feature which has already been accepted; see a comment I left a moment ago [1] for details on that.
If the compiler infers a function to be async, it will lower it differently: each suspension point becomes a boundary where any live stack-local state is saved to the async frame, along with an integer indicating where we are in the function, and execution jumps back to the right code once the suspended operation finishes and the function is resumed. The details depend on specifics of the proposal (which I'm planning to change soon) and sometimes melt my brain a little, so I'll leave them unexplained for now, but I can probably elaborate on them in the issue thread at some point.
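The general shape, though, is the classic state-machine lowering. Here's a hand-written approximation in today's Zig of what a frame for a function with a single suspension point could look like; the names (`Frame`, `step`, `compute`) and layout are purely illustrative, not the actual lowering the compiler will emit.

```zig
const std = @import("std");

// Hand-written approximation of lowering something like
//   fn f() i32 { const x = compute(); <suspend>; return x + 1; }
// into a state machine.
const Frame = struct {
    // Integer recording where we are in the function:
    // 0 = not started, 1 = paused at the suspension point, 2 = finished.
    state: u8 = 0,
    // Stack-local state that must survive across the suspension point.
    x: i32 = undefined,
    // Result slot, valid once `state == 2`.
    result: i32 = undefined,

    // Each call runs the function up to its next suspension point (or to
    // completion), then returns control to whoever resumed it.
    fn step(frame: *Frame) void {
        switch (frame.state) {
            0 => {
                frame.x = compute(); // code before the suspension point
                frame.state = 1; // remember where to pick up next time
            },
            1 => {
                frame.result = frame.x + 1; // code after the suspension point
                frame.state = 2;
            },
            else => unreachable, // resumed after completion
        }
    }
};

fn compute() i32 {
    return 41;
}

test "drive the frame to completion" {
    var frame: Frame = .{};
    frame.step(); // runs until the suspension point
    frame.step(); // resumes and runs to the end
    try std.testing.expectEqual(@as(i32, 42), frame.result);
}
```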
Of course, this analysis of whether a function is async is a little bit awkward, because it is a whole-program analysis; a change in a leaf function in a little file in a random helper module could introduce asynchronicity which propagates all the way up to your `pub fn main`. As such, we'll probably have different strategies for this inference in the compiler depending on the release mode:
* In Debug mode, it may be a reasonable strategy to just assume that (almost) all functions are asynchronous (it's safe to lower a synchronous function as asynchronous, just not vice versa). The overhead introduced by the async lowering will probably be fairly minimal in the context of a Debug build, and this will speed up build times by allowing functions to be sent straight to the code generator (like they are today) without having to wait for other functions to be analyzed (and without potentially having to codegen again later if we "guessed wrong").
* In Release[Fast,Small,Safe] mode, we might hold back code generation until we know for sure, based on the parts of the call graph we have analyzed, whether or not a function is async. Vtables might be a bit of a problem here, since we don't know for sure that a vtable call is not async until we've finished literally all semantic analysis. Perhaps we'll make a guess about whether such functions are async and re-do codegen later if that guess was wrong. Or, in the worst case... perhaps we'll literally just defer all codegen until semantic analysis completes! After all, it's a release build, so you're going to be waiting a while for optimizations anyway; you won't mind an extra couple of seconds on delayed codegen.
> a change in a leaf function in a little file in a random helper module could introduce asynchronocity which propagates all the way up to your `pub fn main`
If this doesn't make the argument that Zig has certainly not defeated function coloring, I don't know what would.
The fact that this change to how my program runs is hidden away in the compiler instead of somewhere visible is not an improvement.
> If this doesn't make the argument that Zig has certainly not defeated function coloring, I don't know what would.
if you're opting into stackless coroutines then yeah, you're opting into their viral nature, but the point is that you don't have to. as the application author, your dependencies can't forcibly opt you into stackless coroutines (or any single execution model), which is what currently happens in other languages.
this is what it means to defeat function coloring.
Small marketing suggestion: maybe "limit function coloring" instead of "defeat function coloring". I like Zig's approach so far, but clearer terms would help avoid pointless arguments and disappointments that a certain proglang supremacist community loves.
> it's safe to lower a synchronous function as asynchronous
Is it though? I believe this would be an issue if you want to pass that function as a function pointer to an FFI function, in which case it must be sync.
I’m confused about the assertion that green threads perform badly. 3 of the top platforms for high concurrency servers use or plan to use green threads (Go, Erlang, Java). My understanding was that green threads have limitations with C FFI which is why lower level languages don’t use them (Rust). Rust may also have performance concerns since it has other constraints to deal with.
Green threads have issues with C FFI mostly because the runtime can't preempt execution while the C code is doing something that blocks. This is a problem when you have one global pool of threads that executes everything. To get around it you essentially need to set up a dedicated thread pool to handle those C calls.
Which may be fine - Go doesn't let the user create thread pools directly, but it does create one under the hood for FFI interaction.
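For the curious, here's a minimal sketch of that workaround in today's Zig using `std.Thread.Pool`. The names (`blockingFfiCall`, `runFfiJob`) are made up, and the blocking C call is simulated with a sleep rather than a real `extern fn`, so the sketch stays self-contained.

```zig
const std = @import("std");

// Stand-in for a blocking C FFI call; a real program would declare an
// `extern fn` and link the C library. Sleeping simulates the block.
fn blockingFfiCall(arg: i32) i32 {
    std.Thread.sleep(50 * std.time.ns_per_ms);
    return arg * 2;
}

fn runFfiJob(arg: i32, out: *i32, done: *std.Thread.ResetEvent) void {
    // Runs on the dedicated FFI pool, so blocking here never stalls the
    // threads that schedule the green threads / event loop.
    out.* = blockingFfiCall(arg);
    done.set();
}

pub fn main() !void {
    // Dedicated pool reserved for blocking FFI work, separate from
    // whatever runs the cooperative tasks.
    var ffi_pool: std.Thread.Pool = undefined;
    try ffi_pool.init(.{ .allocator = std.heap.page_allocator });
    defer ffi_pool.deinit();

    var result: i32 = 0;
    var done: std.Thread.ResetEvent = .{};
    try ffi_pool.spawn(runFfiJob, .{ @as(i32, 21), &result, &done });

    // A real scheduler would keep running other tasks here; waiting
    // directly just keeps the sketch short.
    done.wait();
    std.debug.print("FFI result: {d}\n", .{result});
}
```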
The problem is when C calls expect to be on a particular thread. Either the main event thread, in a GUI context, or the same thread as some related previous call.
Different languages give different levels of control over that. There are languages with one main thread pool and perhaps some specialized ones that users don't have control over; Go would be an example of this.
It is also possible for languages to offer user-creatable thread pools, possibly even with affinity to cores, allowing fibers to run only on a single thread. Crystal is coming along that path. So far it seems to be coming along fairly nicely, but I haven't had to battle the GC in anger yet.
It actually has much the same benefits as Rust removing green threads and replacing them with a generic async runtime.
The point here is that "async stuff is IO stuff is async stuff". So rather than thinking of having pluggable async runtimes (tokio, etc.), Zig is going with pluggable IO runtimes (which is kinda the equivalent of "which subset of libc do you want to use?").
But in both moves the idea is to move the runtime out of the language and into userspace, while still providing a common pluggable interface so everyone shares some common ground.
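Mechanically, that pluggable interface is the same pattern `std.mem.Allocator` already uses: a type-erased pointer plus a vtable. Here's a toy version of the idea; `ToyIo`, `CountingStderr`, and `greet` are made-up names, and this is not the real `std.Io`, whose exact shape is still being worked out.

```zig
const std = @import("std");

// Toy stand-in for the idea behind `Io`: library code talks to this
// interface, and the application decides what implementation backs it.
const ToyIo = struct {
    ptr: *anyopaque,
    vtable: *const VTable,

    const VTable = struct {
        write: *const fn (ptr: *anyopaque, bytes: []const u8) anyerror!void,
    };

    fn write(io: ToyIo, bytes: []const u8) !void {
        return io.vtable.write(io.ptr, bytes);
    }
};

// One possible implementation: plain blocking writes to stderr, plus a
// byte counter to show that implementations can carry their own state.
const CountingStderr = struct {
    bytes_written: usize = 0,

    fn io(self: *CountingStderr) ToyIo {
        return .{ .ptr = self, .vtable = &.{ .write = write } };
    }

    fn write(ptr: *anyopaque, bytes: []const u8) anyerror!void {
        const self: *CountingStderr = @ptrCast(@alignCast(ptr));
        self.bytes_written += bytes.len;
        std.debug.print("{s}", .{bytes});
    }
};

// Library code only sees the interface; it has no idea (and doesn't care)
// whether the implementation blocks, queues work onto an event loop, or
// suspends a coroutine.
fn greet(io: ToyIo, name: []const u8) !void {
    try io.write("hello, ");
    try io.write(name);
    try io.write("\n");
}

pub fn main() !void {
    var impl: CountingStderr = .{};
    try greet(impl.io(), "world");
    std.debug.print("({d} bytes written)\n", .{impl.bytes_written});
}
```

The application picks the implementation at the top of the program; everything below just takes the interface value, which is also where the "potential vtable indirection" cost mentioned elsewhere in the thread comes from.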
That paper (P1364R0) was contentious, and I regard it as being severely motivated reasoning, published only to kill off competing approaches to C++ coroutines.
I have definitely gotten the impression that green threads will be the favored implementation, from listening to core team members and hanging around the discord. Stackless coroutines don't even exist in the language currently.
What does "favored" mean if event loop and direct blocking are relatively trivial and provided also/ If I can trivially use them, what do I care what Andrew or someone in core thinks? The control is all mine, and near zero cost (potential vtable indirection).
And would Rust be "all-in" if tokio was in std, so you could use its tasks everywhere? That would be a very similar level of "all-in" to Zig's current plan, but with a seemingly better API.
I understand the benefit of not being in std, but it's really not a fundamental issue, IMO.
In the 2026 roadmap talk, Andrew Kelley said that stackless coroutines with io_uring are the end goal here (but that requires an orthogonal improvement in the compiler for inlining that data to the stack where possible).
Oh man. I think the biggest mistake Rust has ever made is their async model. It’s been nothing short of a disaster. Zig supporting green threads and other options is spectacularly exciting. Thus far no one has “solved” async. So very exciting to see what Zig can come up with.
https://www.open-std.org/JTC1/SC22/WG21/docs/papers/2018/p13...