The occurrence of data races depends on the specific non-deterministic sequence in which concurrent codepaths execute. Just because you have 100% code coverage does not mean you've covered every potential execution sequence, and it's almost never practical to actually execute every possibility to ensure the absence of data races. Depending on the probability that your data race will occur, it could indeed be something you have to make the stars align for TSAN to catch.
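To make that concrete, here's a minimal C++ sketch (names made up): whether TSAN reports the race depends on whether the racy path even executes on the runs you happen to observe, so a clean run under -fsanitize=thread says little about the runs you didn't see.

    #include <random>
    #include <thread>

    // This program has a data race, but only on runs where the rarely-taken
    // branch executes. Line coverage is not interleaving coverage: TSAN can
    // only report conflicting accesses it actually saw.
    int shared = 0;

    int main() {
        std::random_device rd;
        bool rare = (rd() % 1000 == 0);  // stand-in for a rare runtime condition
        std::thread t([&] {
            if (rare)
                shared = 1;  // unsynchronized write, races with the one below
        });
        shared = 2;          // conflicting write on the main thread
        t.join();
    }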
Not to talk my own book, but there is a well-known alternative to C++ that can actually guarantee the absence of data races.
It "could" for some algorithms, yes, but for a lot of algorithms, that kind of star alignment simply isn't necessary to find all the data races, was my point. And yes, TLA+ etc. can be helpful, but then you have the problem of matching them up with the code.
I feel like in a subtle way you're mixing up data races with race conditions, especially given the example you cite about incrementing an atomic variable.
TSAN does not check for race conditions in general, and doesn't claim to: the documentation doesn't use the term "race condition" anywhere. TSAN is strictly for checking data races and deadlocks.
Consequently this claim is false:
>The issue is that even if it statically proved the absence of data races in the C++ sense, that still wouldn't imply that your algorithm is race-free.
Race-free code means absence of data races, it does not mean absence of the more general race condition. If you search Google Scholar for "race-free programming" you'll find no one uses the term race-free to refer to the complete absence of race conditions, but rather to the absence of data races.
There's "data race" in "C++ ISO standard" sense, and then there's "data race" in the general CS literature (as well as all the other terms). Two threads writing a value to the same memory location (even atomically) is usually a data race in the CS/algorithm sense (due to the lack of synchronization), but not the C++ sense. I'm not really interested in pedantic terminology here, just trying get a higher level point across about what you can & can't assume with a clean TSAN (and how not to clean your TSAN errors). Feel free to mentally rewrite my comment with whatever preferred terminology you feel would get my points across.
This isn't pedantry, if you're going to talk about how specific tools work then you need to use the actual terminology that the tools themselves use or else you will confuse yourself and anyone you talk to about them. If we were discussing general concepts about thread safety then sure we can be loose about our words, but if we're talking about a specific tool used for a specific programming language then we should make sure we are using the correct terminology, if only to signal that we have the proper domain knowledge to speak about the subject.
>Feel free to mentally rewrite my comment with whatever preferred terminology you feel would get my points across.
If I rewrite your comment to use data race, then your comment is plainly incorrect since the supporting example you give is not a data race but a race condition.
If I rewrite your comment to use race condition, then your comment is also incorrect since TSAN doesn't detect race conditions in general and doesn't claim to, it detects data races.
So what exactly am I supposed to do in order to make sense of your post?
The idea that you'd talk about the pros and cons of a tool like TSAN without knowing the difference between a race condition and a data race is kind of absurd. That you'd dismiss my clarification of these terms, made for the sake of better understanding your point, as a form of pedantry is sheer hubris.
Hold on, before attacking me. Say we have this Java program, and assume the semantics of the common JRE/JVM everyone uses. Do you believe it has a data race or not? Because the variable is accessed atomically, whether you mark it as volatile or not:
    class Main {
        private static String s = "uninitialized";

        public static void main(String[] args) {
            Thread t = new Thread() {
                public void run() { s = args[0]; }
            };
            t.start();
            System.out.println(s);
        }
    }
And I sure as heck have not heard anyone claim such data races are impossible in Java. (Have you?)
>When a program contains two conflicting accesses (§17.4.1) that are not ordered by a happens-before relationship, it is said to contain a data race.
Yes, your program contains a data race, by the definition used in the JLS. The set of outcomes you may observe from a data race are specified. I'm not sure if this choice was intentional or not, but there is a guarantee that you will either print the argument or "uninitialized" and no other behavior, because String relies on final field semantics. This would not be true in C/C++, where the equivalent code is undefined behavior and you could see any result.
In Java you can have a data race and use it productively for certain niche cases, like String.hashCode - I've also contributed some to the Guava concurrency library. This is not true in C/C++, where data races (by their definition) are undefined behavior. If you want to do the tricks you can in racy Java without UB, you have to declare your variables atomic and use relaxed memory order and possibly fences.
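For instance, the String.hashCode-style benign race, done legally in C++, looks something like this sketch: multiple threads may redundantly compute the hash, but every racing access is a relaxed atomic, so there's no UB.

    #include <atomic>
    #include <cstddef>
    #include <string>

    struct CachedHash {
        std::string data;
        mutable std::atomic<std::size_t> cache{0};  // 0 means "not computed yet"

        std::size_t hash() const {
            std::size_t h = cache.load(std::memory_order_relaxed);
            if (h == 0) {
                h = std::hash<std::string>{}(data);
                if (h == 0) h = 1;  // keep 0 reserved as the sentinel
                cache.store(h, std::memory_order_relaxed);  // racy but defined
            }
            return h;
        }
    };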
What you wrote is indeed a data race (it races on s), but you mention the semantics of the JRE, and I wonder if you actually know what those are, because that's crucial here.
You see, Java has a specific memory ordering model (many languages just give you a big shrug, including C before it adopted the C++11 model, but Java spells out what happens), and that model is sophisticated enough to have an answer for what happens here.
Because we raced s, we lose Sequential Consistency. In general this means humans struggle to understand what's going on in their program, which makes debugging and other software engineering impractical (this example is so trivial it won't matter). But unlike C++, loss of Sequential Consistency isn't fatal in Java. Instead we're promised that when s is observed in the main thread it will be either that initial "uninitialized" string or the args[0] value, i.e. the first command-line argument, because these are the only two values it could have; Java does not specify which of them is observed in this case.
You could think of this as "atomic access" and that's likely the actual implementation in this case, but the Java specification only promises what I wrote.
In C++ this is game over, the language standard specifically says it is Undefined Behaviour to have any data race and so the behaviour of your program is outside the standard - anything at all might happen.
[Edited: I neglected originally to observe that s is set to "uninitialized", and instead I assumed it begins as null]
> But unlike C++, loss of Sequential Consistency isn't fatal in Java
I have no idea what you mean here. Loss of sequential consistency is in no way fatal in C++. There are several access modes that are specifically designed to avoid sequential consistency.
Regarding the rest of your comment:
You're making exactly my point though. These are guaranteed atomic accesses -- and like you said, we are guaranteed to see either the old or new value, and nothing else -- and yet they are still data races. Anyone who agrees this is a data race despite the atomicity must necessarily understand that atomics don't imply lack of data races -- not in general CS terminology.
The only way it's correct to say they are mutually exclusive is when you define "data race" as they did in the C++ standard, to imply a non-atomic access. Which you're welcome to do, but it's an incredibly pedantic thing to do because, for probably >95% of the users of C++ (and probably even of TSAN itself), when they read "data race", they assume it to mean the concept they understand from CS. They don't know that the ISO standard defines it in its own peculiar way. My point here was to convey something to normal people rather than C++ committee language lawyers, hence the use of the general term.
> Loss of sequential consistency is in no way fatal in C++. There are several access modes that are specifically designed to avoid sequential consistency.
Sure, if you work really hard you can write a C++ program which doesn't meet the 6.9.2.2 intro.races definition of a data race but nevertheless loses sequential consistency, so it has in some sense a well-defined meaning in C++ even though humans can't usefully reason about it. You'll almost certainly trip and write UB when attempting this, but assuming you're inhumanly careful, or the program is otherwise so simple that you don't, it's an exception.
My guess is that your program will be miscompiled by all extant C++ compilers and you'll be unhappy with the results, and further that if you can get committee focus on this at all they will prioritize making your program Undefined in C++ rather than "fixing" compilers.
Just don't do this. The reason for the exclusions in 6.9.2.2 is that we want people to write correct SC code using primitives which can't themselves guarantee it, so the person writing the code must do so correctly. The reason is not that C++ programmers are somehow magicians and the loss of SC won't negatively impact the correctness of the code they attempt to write; quite the contrary.
>The only way it's correct to say they are mutually exclusive is when you define "data race" as they did in the C++ standard, to imply a non-atomic access.
A data-race does not imply a non-atomic operation, it implies an unsynchronized operation. Different languages have different requirements for what constitutes a synchronized operation, for example in Python all reads and writes are synchronized, in Java synchronization is generally accomplished through the use of a monitor or a volatile operation, and in C++ synchronization is generally accomplished through the use of std::atomic operations.
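To make "synchronized" concrete in C++ terms, a minimal sketch: the release store pairs with the acquire load to create a happens-before edge, and that edge is what makes the plain write to data race-free. Swap both memory orders to relaxed and the accesses to data become a data race.

    #include <atomic>
    #include <cassert>
    #include <thread>

    int data = 0;                    // plain, non-atomic
    std::atomic<bool> ready{false};

    int main() {
        std::thread producer([] {
            data = 42;                                     // plain write
            ready.store(true, std::memory_order_release);  // publish
        });
        while (!ready.load(std::memory_order_acquire)) {}  // wait for publish
        assert(data == 42);  // guaranteed by the release/acquire pairing
        producer.join();
    }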
The fact that in C++ atomic operations result in synchronization, while in Java atomic operations are not sufficient for synchronization, is not some kind of language lawyering or pedantry. It's because Java makes stronger guarantees about the consequences of a data race than C++, where a data race can result in any arbitrary behavior whatsoever. As such it should not come as a surprise that the cost of those stronger guarantees is that Java has stronger requirements for data-race-free programs.
But of course this is mostly a deflection, since the discussion was about the use of TSAN, which is a data race detector for C and C++, not for Java. Hence to the extent that TSAN detects data races, it detects them with respect to C and C++'s memory model, not Java's memory model or Python's memory model, or any other memory model.
The objection I originally laid out was to your example of a race condition, an example which can happen even in the absence of parallelism (i.e. on a single-core CPU) and even in the absence of multithreaded code altogether (your example can happen in a single-threaded application with the use of coroutines). TSAN makes no claim with regards to detecting race conditions in general; it only seeks to detect data races, and data races as they pertain to the C and C++ memory models.
I am not "deflecting" anything, I am going to the heart of the matter.
Let me lay this out very explicitly. This comment will likely be my final one on the matter as this back-and-forth is getting quite tiresome, and I'm not enjoying it, especially with the swipes directed at me.
For the sake of this discussion, assume the command-line arguments behave the same in both languages. (i.e. ignore the C++ argv[0] vs. Java args[0] distinction and whatnot.)
Somehow, you simultaneously believe (a) that the Java program contains a data race, while believing (b) that the C++ program does not.
This is a self-contradictory position. The programs are well-defined, direct translations of each other. They are equivalent in everything but syntax. If one contains a data race, so must the other. If one does not, then neither can the other.
This implies that TSAN does not detect "data races" as it is usually understood in the CS field -- it does not detect anything in the C++ program. What TSAN detects is only a particular, distinct situation that the C++ standard also happens to call a "data race". So if you're talking to a C++ language lawyer, you can say TSAN detects all data races within its window/buffer limits. But if you're talking to someone who doesn't sleep with the C++ standard under their pillow, they're not going to realize C++ is using a language-specific definition, and they're going to assume it flags programs like the equivalent of the Java program above, which has a data race but whose equivalent TSAN would absolutely not flag.
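For reference, the C++ side of that comparison looks something like the following sketch (not the exact program linked upthread, but the same shape): the publish and the read are unsynchronized, yet because they go through a relaxed atomic they don't meet the C++ definition of a data race, and, as noted above, a build with -fsanitize=thread reports nothing.

    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<const char*> s{"uninitialized"};

    int main(int argc, char** argv) {
        (void)argc;
        std::thread t([argv] { s.store(argv[0], std::memory_order_relaxed); });
        std::puts(s.load(std::memory_order_relaxed));  // prints either value
        t.join();
    }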
Yes, C++ and Java have different conditions for what a data race is.
That your position hinges on thinking all languages share the same memory model suggests a much deeper failure to understand some of the basic principles of writing correct parallel software. Numerous people have tried to correct you on this, but you still seem adamant on doubling down on your position, so I don't think there is much point in continuing this.
> That your position hinges on thinking all languages share the same memory model suggests a much deeper failure to understand some of the basic principles of writing correct parallel software. Numerous people have tried to correct you on this, but you still seem adamant on doubling down on your position, so I don't think there is much point in continuing this.
I never suggested "all languages share the same memory model". You're severely mischaracterizing what I've said and putting words in my mouth.
What I said was (a) data races are general properties of programs that people can and do discuss in language-agnostic contexts, and (b) it makes no sense to say two well-defined, equivalent programs differ in whether they have data races. Reducing these statements down to "all languages share the same memory model" as if they're somehow equivalent makes for an incredibly unfaithful caricature of everything I've said. Yes, I can see there's no point in continuing.
> data races are general properties of programs that people can and do discuss in language-agnostic contexts
"Data race" is a specific property defined by a memory model, which is normally part of a language spec; it's not usually understood as an abstract property defined in terms of outcome, at least not usefully. If you talk about data races as abstract and language-spec-agnostic properties, then yes, you're assuming a memory model that's shared across all programs and their languages.
> "Data race" is a specific property defined by a memory model, which is normally part of a language spec; it's not usually understood as an abstract property defined in terms of outcome, at least not usefully.
Really? To me [1] sure doesn't look useless:
> We use the standard definition of a data race: two memory accesses to the same address can be scheduled on different threads to happen concurrently, and at least one of the accesses is a write [16].
You're welcome to look at the [16] it cites, and observe that it is from 1991, entirely in pseudocode, with no mention of a "memory model". It so happens to use the phrase "access anomaly", but evidently that is used synonymously here, per [1].
> If you talk about data races as abstract and language-spec-agnostic properties, then yes, you're assuming a memory model that's shared across all programs and their languages.
No, nobody is assuming such a thing. Different memory models can still exhibit similar properties when analyzing memory accesses. Just like how different network models can exhibit similar properties (like queue size bounds, latency, etc.) when discussing network communication. Just because two things are different doesn't mean they can't exhibit common features you can talk about in a generic fashion.
Java defines what is or isn't a "data race" in one way, as part of its spec. C++ defines that same term "data race" in another way, as part of its spec. Your linked papers use a definition of "data race" which they define themselves, based on a claimed 'standard definition', and which is different from both the Java and C++ definitions of the same term. The point here is that the definition of "data race" is not universal or objective.

When you want to evaluate whether or not some bit of code exhibits a "data race", then without qualifying what you mean when you say "data race", that property is gonna be evaluated in the context of the language, not some higher-level abstract assumption. You can talk about whatever set of common properties of a "data race" that are invariant to language, that's fine, but you need to make that expectation explicit if you want to have a productive conversation with anyone else, because "data race" by itself is context-dependent.
> This implies that TSAN does not detect "data races" as it is usually understood in the CS field -- it does not detect anything in the C++ program. What TSAN detects is only a particular, distinct situation that the C++ standard also happens to call a "data race"
TSAN uses its own specific definition of "data race", one which is invariant to the languages that it evaluates. In particular the C++ definition of "data race" is more lenient than the TSAN definition of "data race"; C++ doesn't consider two atomic accesses of the same memory (at least one being a write) under memory_order_relaxed to be a data race, but TSAN does.
TSAN _could_ detect the C++ program as having a data race, for sure. And if it could evaluate Java programs, it _could_ also detect the Java program as having a data race, too.
> C++ doesn't consider two atomic accesses of the same memory (at least one being a write) under memory_order_relaxed to be a data race, but TSAN does.
I'm confused what you're saying here. If you take the program I linked, which uses relaxed ordering, and add -fsanitize=thread, TSAN doesn't flag anything. That seems inconsistent with what you're saying?
P.S. note that even using memory_order_seq_cst wouldn't change anything in that particular program.
First of all, TSAN can identify the presence of a data race, but it can't prove the absence of any data races. If -fsanitize=thread doesn't flag anything, that's insufficient evidence to say that there aren't any data races in the code, at least as TSAN defines data race, which is stricter than how C++ defines data race.
You've now received many comments from many different commenters that all kind of say the same thing, which is basically that your understanding of a "data race" is not really accurate. Those comments have included pretty detailed information as to exactly how and when and why your definition isn't totally correct. If I were you I'd take the L and maybe read up a bit.
> First of all, TSAN can identify the presence of a data race, but it can't prove the absence of any data races. If -fsanitize=thread doesn't flag anything, that's insufficient evidence to say that there aren't any data races in the code
I stated as much in my own first comment, with more details on when/why this does/doesn't occur.
> at least as TSAN defines data race, which is stricter than how C++ defines data race. [...] If I were you I'd take the L and maybe read up a bit.
Before assuming I haven't: I have. And the reading does not agree [1] [2] with your idea that TSAN sometimes considers relaxed atomic writes to be data races. Hence my replies. Remember, you wrote:
>> C++ doesn't consider two atomic accesses of the same memory (at least one being a write) under memory_order_relaxed to be a data race, but TSAN does.
I have not seen a single word anywhere -- not in the docs, not in example code, and (I think) not even from anyone here other than you -- suggesting that TSAN considers memory_order_relaxed writes to constitute a data race. It certainly does not flag them in the most trivial test I could think of, which I had already linked here [3].
If this is just my ignorance or misunderstanding, then instead of telling me to go do my own reading, please enlighten me and provide one link or example that demonstrates that TSAN considers atomic writes to the same memory to be a data race? I would be very glad to learn this, as I wasn't aware of this before, and am not seeing anything suggesting such.
> Those comments have included pretty detailed information as to exactly how and when and why your definition isn't totally correct.
I have linked to papers going back decades showing that "data race" has a general definition that doesn't entirely match up with what people here have said. I myself have also explained in great detail how the general definition differs from the C++ one. I don't know what else to possibly provide here, but I'm done.
I haven't participated in this thread yet, but I would like to drill down on your TSan example. It seems to me that the window of timing for TSan to catch it is _super_ tight, as the overhead of creating a thread is very large relative to the other operations.
For something like TSan, which allows programs to execute normally with additional instrumentation, this timing matters, and so it's not a great example. An equivalent program being simulated in something like Loom would be much more convincing.
I'm a little confused, as you agree with your parent commenter that TSan not raising a flag is not conclusive. But you also appear to be using TSan not flagging the program as some kind of evidence in the same comment.
Thanks for following along and trying to clarify this, I really appreciate it.
The answer to your questions is that timing is not the issue in my example. You can notice this easily if you strip std::atomic<> from the type. TSAN can and does catch it just fine. The atomicity itself is what tells TSAN to not consider this a data race.
What probably threw you off was that I (sloppily) used "timing" as a way to say "proximity in the history buffer". [1] It's not wall clock time that matters, it's the number of memory accesses that fit in TSAN's history buffer. (This should also explain your confusion w.r.t. "tight" timing.)
Hence, the conclusivity depends entirely on the reason it wasn't flagged. (This is why I explained the failure modes quite precisely in my very first comment: not all of them have randomness involved.) If it wasn't flagged because the history buffer wasn't large enough, then obviously it's not conclusive. But if it wasn't flagged because TSAN noticed it and deliberately exempted it, then obviously it doesn't consider it a data race.
> The answer to your questions is that timing is not the issue in my example. You can notice this easily if you strip std::atomic<> from the type. TSAN can and does catch it just fine.
If you strip the std::atomic from your example, then you obviously lose read/write atomicity on the value, which should be trivial for something like TSAN to detect and flag.
> The atomicity itself is what tells TSAN to not consider this a data race ... the conclusivity depends entirely on the reason it wasn't flagged. (This is why I explained the failure modes quite precisely in my very first comment: not all of them have randomness involved.)
"The conclusivity" can only ever be "inconclusive" or "has a data race", it can never be "does not have a data race", because that's just not how TSAN (or any similar tools) work. See [1]. In your original C++ program there is no happens-before relationship between the thread you spawn and the main thread, so there is a data race by the TSAN definition, even though the read and the write are atomic, and even if a given execution of that code by TSAN doesn't yield an error. It's not about timing, at least not exactly -- it's about the guarantees of the scheduler and its execution of threads, which is non-deterministic without explicit synchronization in the application (or something along those lines)..!
You've linked to papers that define "data race" in a specific way. That doesn't mean that when anyone says "data race" in any other context they are using any of those papers' definitions, for reasons that have been exhaustively explained.
Actually I don't think your C++ program contains a data race, because the writes that populated the argument string happened before the atomic read. If you copied the string on the other thread before writing the pointer, or you used a non-atomic variable, I bet TSAN would catch it.
> Actually I don't think your C++ program contains a data race, because the writes that populated the argument string happened before the atomic read
But the write to the static variable from the second thread is entirely unordered with respect to the read, despite the atomicity. If lack of ordering is your criterion for data races, doesn't that imply there is a data race between that write and that read?
No, because it's atomic. If that was a data race, and data races are UB, there would be no point in having relaxed atomics.
It's not my criterion, it's defined in the language standard. You can't have a data race on an atomic except for std::atomic_init. The reads on the string contents themselves are ordered because the args string is initialized before the thread is launched and the other one is static. If the launched thread allocated a new string and then populated the atomic, that would be a data race, unless stronger memory order was used by both the writer and the reader.
Never mind, I don't think we're disagreeing on what the C++ standard definition of data race is. I think I thought you were saying something different in your comment.
Also the programs are not direct translations of each other - a direct translation to Java would use VarHandle and opaque memory order (equivalent to relaxed), and then it would not contain a data race.
You can't precisely convert it to C++, because there's no C++ construct that precisely matches the Java behavior, namely atomic access with relaxed memory order but permitting aliasing and load/store optimizations. The spiritually closest thing would just be a regular variable, which would be a data race that TSAN would catch.
You can go the other way, porting that C++ to Java using VarHandle and opaque memory order.
The Java specification, which can be found here [1], makes clear that with respect to its memory model the following is true:
1. Per 17.4.5 your example can lead to a data race.
"When a program contains two conflicting accesses (§17.4.1) that are not ordered by a happens-before relationship, it is said to contain a data race."
2. Per 17.7 the variable s is accessed atomically.
"Writes to and reads of references are always atomic, regardless of whether they are implemented as 32-bit or 64-bit values."
However, atomic reads and writes are not sufficient to protect against data races. What atomic reads and writes protect against is word tearing (outlined in 17.6, where two threads simultaneously write to overlapping parts of the same object with the result being bits from both writes mixed together in memory). Even so, a data race involving atomic objects can still cause future reads of those objects to return inconsistent values, and this can last indefinitely into the future. This does not mean that reading from a reference will produce a garbage value, but it does mean that two different threads reading from the same reference may end up reading two entirely different objects. So you can have thread A in an infinite loop repeatedly reading the value "uninitialized" and thread B in another infinite loop repeatedly reading the value args[0]. This can happen because both threads have their own local copy of the reference which will never be updated to reflect a consistent shared state.
As per 17.4.3, a data-race free program will not have this kind of behavior where two threads are in a perpetually inconsistent state, as the spec says "If a program has no data races, then all executions of the program will appear to be sequentially consistent."
So while atomicity protects against certain types of data corruption, it does not protect against data races.
> Two threads writing a value to the same memory location (even atomically) is usually a data race in the CS/algorithm sense (due to the lack of synchronization), but not the C++ sense
You seem to conflate the concepts of "data race" and "race condition", which are not the same thing.
Two threads writing to the same memory location without synchronization (without using atomic operations, without going thru a synchronization point like a mutex, etc.) is a data race, and almost certainly also a race condition. If access to that memory location is synchronized, whether thru atomics or otherwise, then there's no data race, but there can still be a race condition.
This isn't a pedantic distinction, it's actually pretty important.
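A minimal C++ sketch of the second case, a race condition with no data race: every access to the counter is atomic, so TSAN has nothing to flag here, yet an increment can still be lost because the read-modify-write is split in two.

    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<int> counter{0};

    void buggy_increment() {
        int old = counter.load();  // threads A and B may both read 0...
        counter.store(old + 1);    // ...and both store 1: an increment is lost
    }

    int main() {
        std::thread a(buggy_increment), b(buggy_increment);
        a.join();
        b.join();
        std::printf("%d\n", counter.load());  // 1 or 2, depending on timing
    }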
> Two threads writing a value to the same memory location (even atomically) is usually a data race in the CS/algorithm sense (due to the lack of synchronization), but not the C++ sense
Not only are you incorrect, it's even worse than you might think. Unsynchronized access to data in C++ is not only a data race, it's explicitly undefined behavior, and the compiler can choose to do whatever it wants in response to an observed data race (by using the language you are promising that one isn't possible).
You are also misinformed about the efficacy of TSAN. Even with TSAN you have to run the code in a loop: if TSAN never observes the specific execution order that triggers a race, it'll remain silent. This isn't a theoretical problem but a very real one you must deeply understand if you rely on these tools. I recall a bug in libc++ with condition_variable where reproducing it required running the repro case in a tight loop like a hundred times to get even one report. And when you fixed it, how long would you run to have confidence it was actually fixed?
And yes, race conditions are an even broader class of problems that no tool other than formal verification or DST can help with. Hypothesis testing can help mildly but really you want at least DST to probabilistically search the space to find the race conditions (and DST’s main weakness aside from the challenge of searching a combinatorial explosion of states is that it still relies on you to provide test coverage and expectations in the first place that the race condition might violate)
TSAN observes the lack of an explicit ordering and warns about that, so in some sense it is better than just running normally in a loop and hoping to see a specific mis-ordering occur. But that part of it is a data race detector, so it cannot do anything for race conditions, and as soon as something is annotated as atomic, it cannot do anything to detect misuse. It can be better for lock evaluation, as it can check that locks are always acquired in the same order without needing to actually observe a conflicting deadlock occurring. But I agree you need more formal tooling to actually show the problem is eliminated and not just improbable.
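The lock-evaluation point is worth a sketch: the two threads below never overlap (the first is joined before the second starts), so no deadlock can occur on any run, yet TSAN's deadlock detector (which may need TSAN_OPTIONS=detect_deadlocks=1) can still flag the inverted acquisition order as a potential deadlock.

    #include <mutex>
    #include <thread>

    std::mutex m1, m2;

    int main() {
        std::thread a([] { std::lock_guard<std::mutex> l1(m1), l2(m2); });
        a.join();  // a is fully done before b exists
        std::thread b([] { std::lock_guard<std::mutex> l1(m2), l2(m1); });
        b.join();  // acquisition order m2 -> m1 inverts a's m1 -> m2
    }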
Geez. I'm well aware it's UB. That was just sloppy wording. Should've said "not necessarily", okay. I only wrote "not in the C++ sense" because I had said "even atomically", not because I'm clueless.
Joe Armstrong goes to lengths to describe the benefits of "privately addressable" actors in his thesis (though he uses different terminology). As far as I'm aware, Erlang actors are also privately addressable. cf:
> System security is intimately connected with the idea of knowing the name of a process. If we do not know the name of a process we cannot interact with it in any way, thus the system is secure. Once the names of processes become widely known the system becomes less secure. We call the process of revealing names to other processes in a controlled manner the name distribution problem — the key to security lies in the name distribution problem. When we reveal a Pid to another process we will say that we have published the name of the process. If a name is never published there are no security problems.
> Thus knowing the name of a process is the key element of security. Since names are unforgeable the system is secure only if we can limit the knowledge of the names of the processes to trusted processes.
That may be what is in the thesis, but in real Erlang, any process can list all processes on any node it can reach: https://www.erlang.org/doc/apps/erts/erlang.html#processes/0 (As the docs say, it lists "processes on the local node" but I'm fairly sure any process can RPC that to any connected node to get the local processes on that node) From there you've got a lot of introspection on the processes in question.
And beyond that, there is no sandboxing in Erlang that allows you to do anything like spawn a process that can't access the disk or network. So in practice this hardly buys you anything on a real system, because if you were somehow running unauthenticated Erlang code you've already got access corresponding to the OS permissions of the running Erlang process. (Though for those not familiar with Erlang, "somehow running unauthenticated Erlang code" is very unlikely, to the point that it's not a realistic threat. I'm just speaking hypothetically here.)
The thesis may cover how such systems could be turned to a more secure context but it does not correspond to current Erlang.
There are many layers of capabilities. Unguessable process IDs would be necessary for network capabilities. A sandboxing environment would be necessary for system or process level capabilities. It's still worth having the network security even if process security isn't there. Very few language implementations can provide that level of security.
My point here is merely to make sure that people do not come away from this thread thinking that Erlang has, well, anything like this at all. It isn't an especially insecure language as it lacks most of the really egregious footguns, but it isn't a specially secure one either, with any sort of "capabilities" or anything else.
This really only works if the process names are unguessable? Erlang PIDs are, if not predictable, at least apparently guessable, e.g. they look like this: <0.30.0>
100%. So tiring that the discourse around this is based on 15 minute demos and not actual understandings of the trade offs. Varun Gandhi's post that you link to is great.
Based on my experience with Rust, a lot of what people want to do with its "constant generics" probably would be easier to do with a feature like comptime. Letting you do math on constant generics while maintaining parametricity is hard to implement, and when all you really want is "a trait for a hash function with an output size of N," probably giving up parametricity for that purpose and generating the trait from N as an earlier codegen step is fine for you, but Rust's macros are too flexible and annoying for doing it that way. But as soon as you replace parametric polymorphism with a naive code generation feature, you're in for a world of hurt.
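For comparison, the "hash function with an output size of N" case is ordinary template machinery in C++, which sits at the comptime/codegen end of the spectrum: you can do arithmetic on N freely, precisely because there's no parametricity, and errors only surface at instantiation. A sketch (all names made up):

    #include <array>
    #include <cstddef>
    #include <cstdint>

    template <std::size_t N>
    struct Digest {
        std::array<std::uint8_t, N> bytes{};
    };

    // Arithmetic on the size parameter in a return type: trivial here,
    // hard to express with Rust's const generics today.
    template <std::size_t N>
    Digest<2 * N> concat(const Digest<N>& a, const Digest<N>& b) {
        Digest<2 * N> out;
        for (std::size_t i = 0; i < N; ++i) {
            out.bytes[i] = a.bytes[i];
            out.bytes[N + i] = b.bytes[i];
        }
        return out;
    }

    int main() {
        Digest<16> a, b;
        Digest<32> c = concat(a, b);  // sizes checked at compile time
        (void)c;
    }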
Implementations are not exported or public at all: they are used in functions and those functions are exported. For correctness, you want those implementations to be resolved consistently (this is what coherence is). This post gives the example of unioning two sets: you need to know that they're ordered the same way for your algorithm to work.
So the problem isn't that the implementation is public, it's that it's used somewhere by a function which is public (or called, transitively, by a public function). For a library, code which is not used by a public function is dead code, so any impl that is actually used is inherently public.
You might say, okay, well can binaries define orphan impls? The problem here is that we like backward compatibility: when a new impl is added to your dependency, possibly in a point release, it could conflict with your orphan and break you. You could allow users, probably with some ceremony, to opt into orphan impls in binaries, with the caveat that they are accepting that updating any of their dependencies could cause a compilation failure. But that's it: if you allow this in libraries, downstream users could start seeing unsolvable, unpredictable compilation failures as point releases of their dependencies introduce conflicts with orphan impls in other dependencies.
It would still be consistent; everything within my crate resolves `impl Foo for Bar` to what I define, everything within the other crate resolves `impl Foo for Bar` to what they defined, and any other crate would have a compilation error because those crates didn't `impl Foo for Bar`.
If I for some reason exported a method like `fn call_bar(foo: Foo) -> Bar` then I think it would use my `impl Foo for Bar`, since the source code for the trait impl was within my crate. What happens if instead I export something like `fn call_bar<F: Bar>(foo: F) -> Bar` is probably a bit more up for debate as to whose trait impl should be used; probably whichever crate where F = Foo is originally known.
I think they did say binaries can define orphan impls; and the only way somebody should be able to break your code is by changing the trait definition or deleting the implementing type. Otherwise your implementation would override the changed implementation. This seems fine because even if I locally define `Foo`, which lets me `impl Foo for Bar`, if you then delete `Bar` my code breaks anyway.
Of course it can change, that's what removal of coherence does.
It seems to me to be a logical impossibility to allow orphan implementations, and allow crate updates, and not have trait implementations changing at the same time. It's a pick-two situation.
Your conclusion is correct. I'm very happy with the two that Rust picked and tired of people pretending that there will be a magical pick three option if we just keep talking about it.
I also think Rust has picked the right default, but I wouldn't mind having an opt in to the other pair of trade-offs. There are traits like `ToSql` that would be mostly harmless. Serde has tricks for customizing `Serialize` on foreign types, and this could be smoother with language support. Not every trait is equivalent to Hash.
Consider Java for example. In Java, interfaces are even more restrictive than traits: only the package which defines the class can implement them for that class, not even the package which defines the interface. But this is fine, because if you want to implement an interface for a foreign class, you create a new class which inherits from it, and it can be used like an instance of the foreign class except it also implements this interface.
In Rust, to the extent this is possible with the newtype pattern, it's a lot of cruft. Making this more ergonomic would ease the burden of the orphan rule without giving up on the benefits the orphan rule provides.
This post is written by a fan of implicits, so it frames it as "better" than traits, though at the end it admits it is in fact a complex trade off, which is the truth. In my opinion, the trade off favors traits, but others may feel differently.
The core difference between traits (also called type classes) and ML modules is that with traits the instance/implementation has no name, whereas for ML modules they do. The analogy here is between Rust/Haskell's traits/typeclasses and ML's signatures and between Rust/Haskell's impls/instances and ML's structures. In Rust/Haskell, implementations are looked up by a tuple of types and a trait to determine the implementation. The advantage of this is that you don't need to name the impl and then invoke that name every time you use it; since we usually don't think of "Hash for i32" as something which has a meaningful name beyond the relationship between Hash and i32, this is quite nice.
But coherence requires that instances resolve consistently: if I hash an integer in one code location to insert into a map and then hash it again in a different location to do a lookup on the same map, I need to hash integers the same way each time. If you care about coherence, and the correctness property it implies, you can't allow overlapping impls if impls aren't named, because otherwise you aren't guaranteed a consistent result every time you look up the impl.
This introduces another problem: you can't see all the impls in the universe at once. Two libraries could add impls for types/traits in their upstream dependencies, and the incoherence won't be discovered until they are compiled together later on. This problem, called "orphan impls," causes its own controversy: do you just let downstream users discover the error eventually, when they try to combine the two libraries, or do you prohibit all orphan impls early on? Rust and Haskell have chosen different horns of this dilemma, and the grass is always greener.
Of course with implicits, this author intends a different solution to the problem of resolving instances without naming them: just allow incoherence (which they re-brand as "local coherence"). Incoherent impls are allowed and are selected based on proximity to the code location.
As the post eventually admits, this does nothing to solve the correctness problem that coherence is meant to solve, because code with different nearest impls can be compiled together, and in Rust such a correctness problem could become a memory safety problem. How you figure out whether the impl you've found for this type is actually the nearest impl to your code is left as an exercise for the reader. But sure, once you've rebranded incoherence as "local coherence" you can do some juxtaposed wordplay and call coherence a "local maxima", because achieving it has the downside that you can't have arbitrary orphan impls.
I read through this, and thought to myself: "wow, what a response that elucidates the PL design tradeoff space while giving real world examples of languages that occupy various points on that space; all as concisely and economically as possible."
And then I read the user name. Of course it's boats!!
Thank you for all your work! I want to say that especially since I've noticed a lot of shallow dismissal of your work recently (shallow because the dismissal often doesn't engage with the tradeoffs of whatever alternative solution it proposes in the context of Rust, among other things), and would like you to know there's a lot of us who are very very grateful for all the productivity and empowerment you've enabled through your contribution to Rust.
Let's assume for the sake of argument that the standard library didn't implement Hash for i32.
You could then have two crates, A and B, with different implementations of Hash for i32, and both could instantiate HashMap<i32>.
This can be made to work if we recognize the HashMap<i32> in crate A as a different type than the HashMap<i32> in crate B.
This only really works if orphan implementations are exported and imported explicitly to resolve the conflict that arises from a crate C that depends on A and B.
If C wants to handle HashMap<i32>, it needs to decide whether to import the orphan implementation of Hash for i32 from crate A or B (or to define its own). Depending on the decision, values of type HashMap<i32> can move between these crates or not.
Basically, the "proximity to code location" is made explicit in a way the programmer can control.
This makes type checking more complex, so it's not clear whether the price is worth it, but it does allow orphan implementations without creating coherence problems.
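For what it's worth, C++ sits at roughly this point in the design space already: the hash "impl" is an ordinary named type passed as a template parameter, so two implementations coexist by producing two distinct map types that can't be mixed up. A sketch:

    #include <cstddef>
    #include <string>
    #include <unordered_map>

    struct TrivialHash {
        std::size_t operator()(int x) const { return static_cast<std::size_t>(x); }
    };

    struct FnvHash {  // FNV-1a over the four bytes of an int
        std::size_t operator()(int x) const {
            unsigned long long h = 1469598103934665603ull;
            auto ux = static_cast<unsigned>(x);
            for (int i = 0; i < 4; ++i)
                h = (h ^ ((ux >> (8 * i)) & 0xffu)) * 1099511628211ull;
            return static_cast<std::size_t>(h);
        }
    };

    int main() {
        std::unordered_map<int, std::string, TrivialHash> a;  // "impl" chosen by name
        std::unordered_map<int, std::string, FnvHash> b;      // a coexisting "impl"
        a[1] = "one";
        b[1] = "uno";
        // a and b are distinct types; passing one where the other is
        // expected is a compile error, so the hashers can't be confused.
    }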
Implementations are not imported at all, because they are not named. Like I wrote, named implementations (a la ML modules) are a valid alternative, but one with a much greater annotation burden.
You could imagine having named impls that are allowed to be incoherent as an additional feature on top of coherent unnamed impls, but to use them you would need to make any code that depends on their behavior parameterized by the impl as well as the types. In fact, you can pretty trivially emulate that behavior in Rust today by adding a dummy type parameter to your type and traits.
Right, but what I'm describing is a tradeoff point that's between the extremes, where implementations are unnamed but can still be explicitly imported.
Making my example more explicit, you'd need syntax along the lines of
    // inside crate C
    use A::impl std::hash::Hash for i32;
This syntax would explicitly be limited to orphan implementations.
I suppose to further clarify, there's still some coherence requirement there in that crate C can't import the conflicting implementations from both A and B. Which could then perhaps be worked around by adding syntax to spell types along the lines of
    HashMap<i32 + A::impl Hash, V>
Which you could argue is a form of naming implementations, I suppose? I'm not familiar with ML. You could maybe also think of it as a more ergonomic way of doing (more or less) those wrapper types.
In any case, the annotation burden only exists where it's actually needed to enable orphan implementations.
And either way, multiple different impls can safely coexist within the overall set of code that's linked together, with everything being statically checked at compile time.
I think rather than being at odds with what without.boats is saying, this is very much aligned with what they are suggesting. While not literally a name, `use A::impl std::hash::Hash for i32` is for all intents and purposes naming the impl.
Similarly, `HashMap<i32 + A::impl Hash, V>` is what they are talking about when they refer to parameterizing code on the impl chosen.
Essentially, yes. What I don't see is their claim that it's a "much greater annotation burden". Compared to what? Rust today just doesn't allow this at all, and if you use a wrapper type to simulate it, you definitely end up with more "annotations" (boilerplate).
FWIW, it's not at all clear to me how this requirement would be implemented in practice: "This syntax would explicitly be limited to orphan implementations."
Maybe I'm missing something, but the compiler can tell whether an implementation is an orphan. That's how you get an error message today if you try to write one. So I don't know what difficulty you have in mind.
I'm pretty sure the article resolves the implicit dependencies at the point of the declaration. (Did I misunderstand it?)
So, you don't have a `data HashMap datatype`, you have a `data HashMap hashAlgo datatype`, where hashAlgo is decided implicitly by the context. That's the entire reason it's called "implicit".
Every other usage of the data knows how to hash your values because of that `hashAlgo` parameter. It doesn't matter where it happens.
Great write up, and you're absolutely right that implicits are moving towards ML modules. Quite possibly a production system would end up being synonymous with ML modules out of the need for named impls.
Small nit on terminology: the implicits described are coherent. Part of their value over previous implicits work is that they are coherent and stable. I have generally seen the property you're referring to called canonicity, which they do lack.
This is nothing to do with async Rust; monoio (and possibly other io-uring libraries) are just exposing a flawed API. My ringbahn library written in 2019 correctly handled this case by having a dropped accept future register a cancellation callback to be executed when the accept completes.
You're right. Looking at my actual code, instead I stored the accept to be yielded next time you call accept and only cancel an accept call if you drop the entire listener object mid-accept.
The solution proposed in this post doesn't work, though: if the accept completes before the SQE for the cancellation is submitted, the FD will still be leaked. io-uring's async cancellation mechanism is just an optimization opportunity and doesn't synchronize anything, so it can't be relied on for correctness here. My library could have submitted a cancellation when the future drops as such an optimization, but couldn't have relied on it to ensure the accept does not complete.
> You're right. Looking at my actual code, instead I stored the accept to be yielded next time you call accept and only cancel an accept call if you drop the entire listener object mid-accept.
This is still a suboptimal solution, as you've accepted a connection, informing the client side of this, and then killed it rather than never accepting it in the first place. (Worth noting that Linux, presumably as an optimisation, accepts connections before you call accept anyway, so maybe this entire point is moot and we just have to live with this weird behaviour.)
Now it's true that "never accepting it in the first place" might not be possible with io_uring in some cases, but rather than hiding that under drop, the code should be up front about it and prevent dropping (not currently possible in Rust) in a situation where there might be uncompleted in-flight requests, until you've explicitly made a decision between "oh okay then, let's handle this one last request" and "I don't care, just hang up".
If you want the language to encode a liveness guarantee that you do something meaningful in response to an accept rather than just accept and close you do need linear types. I don't know any mainstream language that encodes that guarantee in its type system, whatever IO mechanism it uses.
This all feels like the abstraction level is wrong. If I think of a server as doing various tasks, one of which is to periodically pull an accepted connection off the listening socket, and I cancel that task, then, sure, the results are awkward at best and possibly wrong.
But I’ve written TCP servers and little frameworks, asynchronously, and this whole model seems wrong. There’s a listening socket, a piece of code that accepts connections, and a backpressure mechanism, and that entire thing operates as a unit. There is no cancellable entity that accepts sockets but doesn’t also own the listening socket.
Or one can look at this another way: after all the abstractions and libraries are peeled back, the example in the OP is setting a timeout and canceling an accept when the timeout fires. That's rather bizarre; surely the actual desired behavior is to keep listening (and accepting when appropriate) and to do the other timed work concurrently.
It just so happens that, at the syscall level, a nonblocking (polled, selected, epolled, or even just called at intervals) accept that hasn’t completed is a no-op, so canceling it doesn’t do anything, and the example code works. But it would fail in a threaded, blocking model, it would fail in an inetd-like design, and it fails with io_uring. And I really have trouble seeing linear types as the solution — the whole structure is IMO wrong.
(Okay, maybe a more correct structure would have you “await connection_available()” and then “pop a connection”, and “pop a connection” would not be async. And maybe a linear type system would prevent one from being daft, successfully popping a connection, and then dropping it by accident.)
> maybe a more correct structure would have you “await connection_available()” and then “pop a connection”
This is the age-old distinction between a proactor and reactor async design. You can normally implement one abstraction of top of the other, but the conversion is sometimes leaky. It happens that the underlying OS "accept" facility is reactive and it doesn't map well to a pure async accept.
I’m not sure I agree. accept() pops from a queue. You can wait—and-pop or you can pop-or-fail. I guess the former fits in a proactor model and the latter fits in a reactor model, but I think that distinction misses the point a bit. Accepting sockets works fine in either model.
It breaks down in a context where you do an accept that can be canceled and you don't handle it intelligently. In a system where cancellation is synchronous enough that values won't just disappear into oblivion, one could arrange for a canceled accept that succeeded to put the accepted socket on a queue associated with the listening socket, fine. But in general, the operation "wait for a new connection and irreversibly claim it as mine" IMO just shouldn't be done in a cancellable context, regardless of whether it's a "reactor" or a "proactor". The whole "select and, as one option, irrevocably claim a new connection" code path in the OP seems suspect to me, and the fact that it seems to work under epoll doesn't really redeem it in my book.
This is a simple problem I have met and dealt with before.
The issue is the lack of synchronization around cancellation, combined with not handling cancel failure.
All cancellations can fail, because there is always a race between calling cancel() and the operation completing.
You have two options: synchronous cancel (block until we know whether the cancel succeeded) or async cancel (callback or other notification).
This code simply handles the race incorrectly, no need to think too hard about this.
It may be that some io_uring operations cannot be cancelled; that is a Linux limitation. I've also seen that there is no async way to close sockets, which is another issue.
> You have two options: synchronous cancel (block until we know whether the cancel succeeded) or async cancel (callback or other notification).
> This code simply handles the race incorrectly, no need to think too hard about this.
I still think the race is unnecessary. In the problematic code, there’s an operation (await accept) that needs special handling if it’s canceled. A linear type system would notice the lack of special handling and complain. But I would still solve it differently: make the sensitive operation impossible to cancel. “await accept()” can be canceled. Plain “accept” cannot. And there is no reason at all that this operation needs to be asynchronous or blocking!
(Even in Rust’s type system, one can build an “await ready_to_accept()” such that a subsequent accept is guaranteed to succeed, without races, by having ready_to_accept return a struct that implements Drop by putting the accepted socket back in the queue for someone else to accept. Or you can accept the race where you think you’re ready to accept but a different thread beat you to it and you don’t succeed.)
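In RAII terms that guard looks something like the sketch below (all names hypothetical): the event loop parks accepted fds in a queue, and the guard's destructor returns an unclaimed fd to the queue, so a waiter that gets cancelled never silently leaks a connection.

    #include <deque>
    #include <mutex>
    #include <optional>
    #include <utility>

    class AcceptQueue {
        std::mutex mu;
        std::deque<int> ready;  // fds already accepted by the event loop
    public:
        void push(int fd) { std::lock_guard<std::mutex> l(mu); ready.push_back(fd); }
        std::optional<int> pop() {
            std::lock_guard<std::mutex> l(mu);
            if (ready.empty()) return std::nullopt;
            int fd = ready.front();
            ready.pop_front();
            return fd;
        }
    };

    class AcceptGuard {
        AcceptQueue* q;
        int fd;
    public:
        AcceptGuard(AcceptQueue* q, int fd) : q(q), fd(fd) {}
        AcceptGuard(AcceptGuard&& o) noexcept : q(o.q), fd(std::exchange(o.fd, -1)) {}
        int take() && { return std::exchange(fd, -1); }  // irrevocably claim it
        ~AcceptGuard() {
            if (fd >= 0) q->push(fd);  // never claimed: put it back for others
        }
    };

    int main() {
        AcceptQueue q;
        q.push(7);  // pretend fd 7 was just accepted by the event loop
        if (auto fd = q.pop()) {
            AcceptGuard g(&q, *fd);
            // dropping g without calling std::move(g).take() puts fd 7 back
        }
    }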
TCP connections aren’t correct representations of the liveness of sessions. The incorrectness is acute when it’s mobile browsers connecting over LTE to load balanced web servers. That’s why everyone reinvents a session idea on top of the network.
> Worth noting that Linux, presumably as an optimisation, accepts connections before you call accept anyway, so maybe this entire point is moot and we just have to live with this weird behaviour.
listen(2) takes a backlog parameter that is the number of queued (which I think means ack'd) but not yet popped (i.e. accept'ed) connections.
> if the accept completes before the SQE for the cancellation is submitted, the FD will still be leaked.
If the accept completes before the cancel SQE is submitted, the cancel operation will fail and the runtime will have a chance to poll the CQE in place and close the fd.
The rest of this blog discusses how to continue processing operations after cancellation fails, which is blocked by the Rust abstraction. Yes, not everyone (probably very few) defines this as a safety issue; I wrote about this at the end of the blog.
I don't consider Yosh Wuyts's concept of "halt safety" coherent, meaningful or worth engaging with. It's true that linear types would enable the encoding of additional liveness guarantees that Rust's type system as it exists cannot encode, but this doesn't have anything to do with broken io-uring libraries leaking resources.
Continuing to process after cancellation failure is a challenge I face in my actual work, and I agree that "halt-safety" lacks definition and context. I have also learned a lot from and been inspired by your blogs; I appreciate it.
Agree. When I hear “I wish Rust was Haskell” I assume the speaker is engaged in fantasy, not in engineering. The kernel is written in C and seems to be able to manage just fine. Problem is not Rust. Problem is wishing Rust was Haskell.
Well, it's "about" async Rust and io-uring inasmuch as they represent incompatible paradigms.
Rust assumes as part of its model that "state only changes when polled". Which is to say, it's not really "async" at all (none of these libraries are), it's just a framework for suspending in-progress work until it's ready. But "it's ready" is still a synchronous operation.
But io-uring is actually async. Your process memory state is being changed by the kernel at moments that have nothing to do with the instruction being executed by the Rust code.
You are completely incorrect. You're responding to a comment in which I link to a library which handles this correctly; how could you persist in asserting that they are incompatible paradigms? This is the kind of Hacker News comment that really frustrates me: it's like you don't care if you are right or wrong.
Rust does not assume that state changes only when polled. Consider a channel primitive. When a message is put into a channel at the send end, the state of that channel changes; the task waiting to receive on that channel is awoken and finds the state already changed when it is polled. io-uring is really no different here.
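Concretely, a toy one-shot channel (everything simplified, no real executor): the sender mutates shared state at a moment that has nothing to do with polling, and the receiver just finds the state already changed the next time it is polled.

    use std::future::Future;
    use std::pin::Pin;
    use std::sync::{Arc, Mutex};
    use std::task::{Context, Poll, Waker};

    struct Shared<T> {
        value: Option<T>,
        waker: Option<Waker>,
    }

    struct Receiver<T>(Arc<Mutex<Shared<T>>>);

    impl<T> Future for Receiver<T> {
        type Output = T;
        fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<T> {
            let mut shared = self.0.lock().unwrap();
            match shared.value.take() {
                // The state was changed by the sender before this poll.
                Some(value) => Poll::Ready(value),
                None => {
                    shared.waker = Some(cx.waker().clone());
                    Poll::Pending
                }
            }
        }
    }

    fn send<T>(shared: &Arc<Mutex<Shared<T>>>, value: T) {
        let mut guard = shared.lock().unwrap();
        guard.value = Some(value); // state changes *now*, not at poll time
        if let Some(waker) = guard.waker.take() {
            waker.wake(); // schedule the receiver to be polled
        }
    }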
What you're describing is a synchronous process, though! ("When a message is put..."). That's the disconnect in the linked article. Two different concepts of asynchrony: one has to do with multiple contexts changing state without warning; the other (what you describe) is about suspending thread contexts "until" something happens.
Again you are wrong. A forum full of people who just like to hear themselves talk. I guess it makes you feel good in some way?
With io-uring the kernel writes CQEs into a ring buffer in shared memory and the user program reads them: it's literally just a bounded channel, the same atomic synchronizations, the same algorithm. There is no difference whatsoever.
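Schematically, the consumer side of such a ring (illustrative layout and names, not the real io-uring ABI; the producer, i.e. the kernel, writes entries and Release-stores `tail` from the other side of the shared memory):

    use std::sync::atomic::{AtomicU32, Ordering};

    struct CompletionRing<T> {
        entries: Box<[T]>, // power-of-two capacity
        head: AtomicU32,   // consumer position (ours)
        tail: AtomicU32,   // producer position (the kernel's)
    }

    impl<T: Copy> CompletionRing<T> {
        fn pop(&self) -> Option<T> {
            let head = self.head.load(Ordering::Relaxed);
            // Acquire pairs with the producer's Release store of `tail`,
            // making the entry's contents visible to us.
            let tail = self.tail.load(Ordering::Acquire);
            if head == tail {
                return None; // ring is empty
            }
            let mask = (self.entries.len() - 1) as u32;
            let entry = self.entries[(head & mask) as usize];
            // Hand the slot back to the producer.
            self.head.store(head.wrapping_add(1), Ordering::Release);
            Some(entry)
        }
    }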
The io-uring library is responsible for reading CQEs from that ring buffer and then dispatching them to the task that submitted the SQE they correspond to. If that task has cancelled its interest in this syscall, the library should instead clean up the resources owned by that CQE. According to this blog post, monoio fails to do so. That's all that's happening here.
> If that task has cancelled its interest in this syscall, they should instead clean up the resources owned by that CQE.
So, first: how is that not consistent with the contention that the bug is due to a collision in the meaning of "asynchronous"? You're describing, once more, a synchronous operation ("when ... cancel") on a data structure that doesn't support that ("the kernel writes ..." on its own schedule).
And second: the English language text of your solution has race conditions. How do you prevent reading from the buffer after the beginning of "cancel" and before the "dispatch"? You need some locking in there, which you don't in general async code. Ergo it's a paradigm clash. Developers, you among them it seems, don't really understand the requirements of a truly async process and get confused trying to shoehorn it into a "callbacks with context switch" framework like rust async.
> Developers, you among them it seems, don't really understand the requirements of a truly async process and get confused trying to shoehorn it into a "callbacks with context switch" framework like rust async.
This is an odd thing to say about someone who has written a correct solution to the problem which triggered this discussion.
Also, you really need to define what truly async means. Many layers of computing are async or not async depending on how you look at them.
Saw this show up after the fact. Maybe it's safe enough for me to try to re-engage: The point I was trying to make, to deafening jeering, is that the linked bug is a really very routine race condition that is "obvious" to people like me coming from a systems programming background who deal with parallelism concerns all the time. It looks interesting and weird in the context of an async API precisely because async APIs work to hide this kind of detail (in this case, the fact that the events being added to the queue are in a parallel context and racing with the seemingly-atomic "cancel" operation).
APIs to deal with things like io-uring (or DMA device drivers, or shared memory media streams, etc...) tend necessarily to involve explicit locking all the way up at the top of the API to make the relationship explicit. Async can't do that, because there's nowhere to put the lock (it only understands "events"), and so you need to synthesize it (maybe by blocking the cancelling thread until the queue drains), which is complicated and error prone.
This isn't unsolvable. But it absolutely is a paradigm collision, and something I think people would be better served to treat seriously instead of calling others names on the internet.
Hi, I’m also from a systems programming background.
I’m not sure what your level of experience with Rust’s async model is, but an important thing to note is that work is split between an executor and the Future itself. Executors are not “special” in any way. In fact, the Rust standard library doesn’t even provide an executor.
Futures in Rust rely on their executors to do anything nontrivial. That includes the actual interaction with the io-uring api in this case.
A properly implemented executor really should handle cases where a Future decides to cancel its interest in an event.
Executors are themselves not implemented with async code [0]. So I’m not quite able to understand your claim of a paradigm mismatch.
[0]: subexecutors like FuturesUnordered notwithstanding.
I think we just have to end this, your tone is just out of control and you're doing the "assume bad faith" trick really badly. But to pick out some bits where I genuinely think you're getting confused:
> Rust has ample facilities for preventing you from reading from the buffer after cancellation
The linked bug is a race condition. It's not about "after" and if you try to reason about it like that you'll just recapitulate the mistakes. And yes, rust has facilities to prevent race conditions, but they're synchronization tools and not part of async, and lots of developers (ahem) seem not to understand the requirements.
Based on this post, when you drop a monoio TcpListener nothing happens. If there is an accept in flight, when it completes the reactor wakes your task, which ignores the wake-up and goes back to sleep. INSTEAD when you drop the TcpListener it should cancel interest in this event with the reactor, and when the event completes the reactor should clean up the state for the completed event (which means closing the newly opened file descriptor in this case).
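A sketch of the drop side of that (all types and bookkeeping hypothetical):

    use std::collections::HashMap;
    use std::sync::{Arc, Mutex};

    // One entry per in-flight SQE.
    enum OpState {
        InFlight,  // a live task will claim this operation's CQE
        Cancelled, // the task is gone; the reactor must clean up
    }

    struct Reactor {
        ops: Mutex<HashMap<u64, OpState>>,
    }

    impl Reactor {
        fn cancel_interest(&self, op_id: u64) {
            if let Some(state) = self.ops.lock().unwrap().get_mut(&op_id) {
                *state = OpState::Cancelled;
            }
        }
    }

    struct TcpListener {
        reactor: Arc<Reactor>,
        accept_op: Option<u64>, // id of an in-flight accept SQE, if any
    }

    impl Drop for TcpListener {
        fn drop(&mut self) {
            // Tell the reactor nobody will claim the accept's CQE, so
            // when it arrives the reactor closes the new fd instead of
            // waking a dead task.
            if let Some(op_id) = self.accept_op {
                self.reactor.cancel_interest(op_id);
            }
        }
    }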
Does this involve synchronization? Yes! Surprise surprise, when you share state between concurrent processes (whether they be tasks, threads, processes, or userspace and the kernel) you need some form of synchronization. When you say things like "Rust's facilities to prevent race conditions [are] synchronization tools and not part of async" you are speaking nonsense, because async Rust in all its forms is built on these synchronization primitives, whether they be atomic variables or system mutexes or what have you.
To the moderators (dang), do people get to keep their account here just because they're a "famous" poster despite writing the way they're doing all over this post? I'm assuming other posters have been banned for substantially less aggressive behaviour...
> Again you are wrong. A forum full of people who just like to hear themselves talk. I guess it makes you feel good in some way?
I think you're being unduly harsh here. There are a variety of voices here, of various levels of expertise. If someone says something you think is incorrect but it seems that they are speaking in good faith then the best way to handle the situation is to politely provide a correct explanation.
If you really think they are in bad faith then calmly call them out on it and leave the conversation.
I've been following withoutboats for ~6 years and it really feels like his patience has completely evaporated. I get it though, he has been really in the weeds of Rust's async implementation and has argued endlessly with those who don't like the tradeoffs but only have a surface level understanding of the problem.
I think I've read this exact convo maybe 20+ times among HN, Reddit, Github Issues and Twitter among various topics including but not limited to, async i/o, Pin, and cancellation.
I freely admit I'm frustrated by the discourse around async Rust! I'm also very frustrated because I feel I was iced out of the project for petty reasons to do with who my friends are, and the people who were supposed to take over my work have done a very poor job, hence the failure to ship much of value to users. What we shipped in 2019 was an MVP that was intended to be followed by several improvements in quick succession, which the Rust project is only now moving toward delivering. I've written about this extensively.
My opinion is that async Rust is an incredible achievement, primarily not mine (among the people who deserve more credit than me are Alex Crichton, Carl Lerche, and Aaron Turon). My only really significant contributions were making it safe to use references in an async function and documenting how to interface with completion based APIs like io-uring correctly. So it is very frustrating to see the discourse focused on inaccurate statements about async Rust which I believe is the best system for async IO in any language and which just needs to be finished.
> So it is very frustrating to see the discourse focused on inaccurate statements about async Rust
> No, ajross is very confidently making false descriptions of how async Rust and io-using operate. This website favors people who sound right whether or not they are, because most readers are not well informed but have a ridiculous confidence that they can infer what is true based on the tone and language used by a commenter. I find this deplorable and think this website is a big part of why discourse around computer science is so ignorant, and I respond accordingly when someone confronts me with comments like this.
They had an inaccurate (from your point of view) understanding. That's all.
If they were wrong that's not a reason to attack them.
If you think they were over-confident (personally I don't) that's still not a reason to attack them.
Again, I think ajross set out their understanding in a clear and polite manner. You should correct them in a similar manner.
> has argued endlessly with those who don't like the tradeoffs but only have a surface level understanding of the problem
But that's really not what's going on here.
ajross has an understanding of the fundamentals of async that is different to withoutboats'. ajross is setting this out in a clear and polite way that seems to be totally in good faith.
withoutboats is responding in an extremely rude and insulting manner. Regardless of whether they are right or not (and given their background they probably are), they are absolutely in the wrong to adopt this tone.
>ajross has an understanding of the fundamentals of async that is different to withoutboats'.
ajross has an understanding of the fundamentals of async, but a surface-level understanding of io-uring and Rust async. It's 100% what is going on, and again, it's something I've seen play out hundreds of times.
>Rust assumes as part of its model that "state only changes when polled".
This is fundamentally wrong. If you have a surface-level understanding of how the Rust state machine works, you could make this inference, but it's wrong. The premise is wrong, so ajross' mental model is flawed, and withoutboats is at a loss trying to educate people who get the basic facts wrong and has defaulted to curt expression. And I get it: you see it a lot with academic types when someone with a Wikipedia overview of a subject tries to "debate". You either have to give an impromptu lecture on 101-level material that is freely available or you just say "you're wrong". Neither tends to work.
I'm not saying I condone withoutboats' tone, but my comment is really just a funny anecdote because withoutboats engages in this often and I've seen his tone shift from the "try to educate" to the "you're just wrong" over the past 6 years.
No, ajross is very confidently making false descriptions of how async Rust and io-using operate. This website favors people who sound right whether or not they are, because most readers are not well informed but have a ridiculous confidence that they can infer what is true based on the tone and language used by a commenter. I find this deplorable and think this website is a big part of why discourse around computer science is so ignorant, and I respond accordingly when someone confronts me with comments like this.
Alternatively there's a problem with being "really in the weeds" of any problem in that you fail to poke your head up to understand other paradigms and how they interact.
I live in very different weeds, and I read the linked article and went "Oh, yeah, duh, it's racing on the io-uring buffer". And tried to explain that as a paradigm collision (because it is). And I guess that tries the patience of people who think hard about async[1] but never about concurrency and parallelism.
[1] A name that drives systems geeks like me bananas because everything in an async programming solution IS SYNCHRONOUS in the way we understand the word!
The post only talks about "future state"; maybe I didn't point this out clearly. With epoll, the accept syscall and the future's state change happen in the same poll, which is not the case with io_uring. Once the accept syscall is complete, the future has conceptually already advanced to completion, but in real-world Rust it actually has not at that moment.
It's true, there's a necessary layer of abstraction with io-uring that doesn't exist with epoll.
With epoll, the reactor just maps FDs to Wakers, and then wakes whatever Waker is waiting on that FD. Then that task does the syscall.
With io-uring, instead the reactor is reading completion events from a queue. It processes those events, sets some state, and then wakes those tasks. Those tasks find the result of the syscall in that state that the reactor set.
This is the difference between readiness (epoll) and completion (io-uring): with readiness the task wakes when the syscall is ready to be performed without blocking, with completion the task wakes when the syscall is already complete.
When a task loses interest in an event in epoll, all that happens is it gets "spuriously awoken," so it sees there's nothing for it to do and goes back to sleep. With io-uring, the reactor needs to do more: when a task has lost interest in an incomplete event, that task needs to set the reactor into a state where instead of waking it, it will clean up the resources owned by the completion event. In the case of accept, this means closing that FD. According to your post, monoio fails to do this, and just spuriously wakes up the task, leaking the resource.
The only way this relates to Rust's async model is that all futures in Rust are cancellable, so the reactor needs to handle the possibility that interest in a syscall is cancelled, or else the reactor is incorrect. But it's completely possible to implement an io-uring reactor correctly under Rust's async model; this is just a requirement for doing so.
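To make that concrete, a sketch of the reactor's completion dispatch (hypothetical bookkeeping; assumes the libc crate for close):

    use std::collections::HashMap;
    use std::sync::Mutex;
    use std::task::Waker;

    enum OpState {
        Waiting(Waker), // a task will claim this CQE when woken
        Cancelled,      // interest was cancelled; reactor owns cleanup
        Complete(i32),  // CQE result parked for the task to pick up
    }

    struct Reactor {
        ops: Mutex<HashMap<u64, OpState>>,
    }

    impl Reactor {
        // Called for each CQE read off the completion ring.
        fn dispatch(&self, op_id: u64, result: i32) {
            let mut ops = self.ops.lock().unwrap();
            match ops.remove(&op_id) {
                // Normal path: park the result and wake the task; it
                // finds the syscall already complete when next polled.
                Some(OpState::Waiting(waker)) => {
                    ops.insert(op_id, OpState::Complete(result));
                    waker.wake();
                }
                // The future was dropped: the resources owned by this
                // CQE are the reactor's to release. For accept, a
                // non-negative result is a brand-new file descriptor.
                Some(OpState::Cancelled) => {
                    if result >= 0 {
                        unsafe { libc::close(result) };
                    }
                }
                _ => {} // unknown or already-claimed op: nothing to do
            }
        }
    }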
To be fair, I’m not sure if there exists any zero cost IOCP library.
The main way people use IOCP is via mio, via tokio. To make IOCP present a readiness interface, mio introduces a data copy. This is because tokio/mio assume you're deploying to Linux and only developing on Windows, and so optimize performance for epoll. So it's reasonable to wonder if a completion-based interface can be zero cost.
But the answer is that it can be zero cost, and we’ve known that for half a decade. It requires different APIs from readiness based interfaces, but it’s completely possible without introducing the copy using either a “pass ownership of the buffer” model or “buffered IO” model.
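The ownership-passing shape looks roughly like this (hypothetical trait; compare tokio-uring, whose I/O methods take the buffer by value and return it alongside the result); `async fn` in traits is fine on stable Rust these days:

    use std::io;

    // "Pass ownership of the buffer": the kernel may write into the
    // buffer for the entire duration of the operation, so the caller
    // hands the buffer over and receives it back with the result. No
    // intermediate copy, and the buffer can't be freed mid-operation.
    trait OwnedRead {
        async fn read_owned(&self, buf: Vec<u8>) -> (io::Result<usize>, Vec<u8>);
    }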
Either way, this is unrelated to the issue this blog post identifies, which is just that some io-uring libraries handle cancellation incorrectly.
These features are slow to be accepted for good reasons, not just out of some sort of pique. For example, the design space around combining `if let` pattern matching with boolean expressions has a lot of fraught issues around the scoping of the bindings declared in the pattern. This becomes especially complex when you consider the `||` operator. The obvious examples you want to use work fine, but the feature needs to be designed in such a way that the language remains internally consistent and works in all edge cases.
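A concrete taste of the scoping question (assumes a recent compiler; `&&` chains were stabilized for the 2024 edition):

    fn main() {
        let a: Option<i32> = Some(1);
        let b: Option<i32> = None;

        // `&&` chains are well-defined: a binding introduced on the
        // left is in scope on the right and in the body.
        if let Some(x) = a && x > 0 {
            println!("positive: {x}");
        }

        // `||` is the fraught case: if only one arm matches, which `x`
        // (if either) is bound in the body? The language rejects this
        // rather than pick an answer:
        // if let Some(x) = a || let Some(x) = b { println!("{x}"); }
        let _ = b;
    }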
> Pin didn't take much work to implement in the standard library. But its not a "lean" feature. It takes a massive cognitive burden to use - to say nothing of how complex code that uses it becomes. I'd rather clean, simple, easy to read rust code and a complex borrow checker than a simple compiler and a horrible language.
Your commentary on Pin in this post is even more sophomoric than the rest of it and mostly either wrong or off the point. I find this quite frustrating, especially since I wrote detailed posts explaining Pin and its development just a few months ago.
I agree with that assessment of Pin. That's why the second post I linked to presents a set of features that would make it as easy to use as mutability (pinning is really the dual of immutability: an immutable place cannot be assigned into, whereas a pinned place cannot be moved out of).
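A tiny illustration of that duality (PhantomPinned just opts the type out of Unpin):

    use std::marker::PhantomPinned;
    use std::pin::Pin;

    struct Addressed {
        data: String,
        _pin: PhantomPinned, // makes the type !Unpin
    }

    fn main() {
        let place = String::from("immutable");
        // place = String::from("other"); // ERROR: cannot assign to
        //                                // an immutable place
        println!("{place}");

        let pinned: Pin<Box<Addressed>> = Box::pin(Addressed {
            data: String::from("pinned"),
            _pin: PhantomPinned,
        });
        println!("{}", pinned.data); // reading through the pin is fine
        // let inner = Pin::into_inner(pinned); // ERROR: requires
        //                                      // `Addressed: Unpin`,
        //                                      // i.e. cannot move out
    }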
> Your commentary on Pin in this post is even more sophomoric than the rest of it and mostly either wrong or off the point. I find this quite frustrating, especially since I wrote detailed posts explaining Pin and its development just a few months ago.
To me, this sounds as if the Pin concept is so difficult to understand that it's hard to even formulate correct criticism about it.
I get that Pin serves a critical need related to generators and async, and in that it was a stroke of genius. But you as the creator of Pin might not be the right person to judge how difficult Pin is for the more average developers among us.
If you actually read my posts you would see that I acknowledge and analyze the difficulty with using Pin and propose a solution which makes it much easier to deal with. My understanding is that the Rust project is now pursuing a solution along the lines of what I suggested in these posts.
> The obvious examples you want to use work fine, but the feature needs to be designed in such a way that the language remains internally consistent and works in all edge cases.
True. How long should that process take? A month? A year? Two years?
I ask because this feature has been talked about since I started using Rust, which (I just checked) was at the start of 2017. That's nearly 8 years ago now.
Do I have too high expectations? Is 6 years too quick? Maybe a decade is a reasonable amount of time to spend to really talk through the options? Apparently 433 people contributed to Rust 1.81. Is that not enough people? Do we need more people, maybe? Would that help?
Yes, I do feel piqued by the glacial progress. I don't care about the || operator here - since I don't have any instinct for what that should do. And complex match expressions are already covered by match, anyway.
Rust doesn't do the obvious thing, in an obvious, common situation. If you ask me, this isn't the kind of problem that should take over 6 years to solve.
> Your commentary on Pin in this post is even more sophomoric than the rest of it and mostly either wrong or off the point. I find this quite frustrating, especially since I wrote detailed posts explaining Pin and its development just a few months ago.
If I'm totally off base, I'd appreciate more details and less personal insults.
I've certainly given Pin an honest go. I've used Pin. I've read the documentation, gotten confused and read everything again. I've struggled to write code using it, given up, then come back to it and ultimately overcame my struggles. I've boxed so many things. So many things.
The thing I've struggled with the most was writing a custom async stream wrapper around a value that changes over time. I used tokio's RwLock and broadcast channel to publish changes. My Future needed a self-referential type (because I needed to hold a RwLockGuard across an await point). So I couldn't just write a simple, custom struct. But I also couldn't use an async function, because I needed to implement the Stream trait.
As far as I can tell, the only way to make that code work was to glue async fn and Futures together in a weird frankenstruct. (Is this a common pattern? For all the essays about Pin and Future out there, I haven't heard anyone talk about this.) I got the idea from how tokio implements their own stream adaptor for broadcast streams[1]. And with that, I got this hairy piece of code working.
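For anyone who hasn't seen it, the pattern looks roughly like this (modeled loosely on tokio-stream's BroadcastStream; assumes the tokio and futures-core crates, and the real adapter also handles the Lagged error, which this sketch skips):

    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    use futures_core::Stream;
    use tokio::sync::broadcast::{self, error::RecvError};

    // The future produced by the async fn below, boxed so the struct
    // containing it stays nameable and movable.
    type RecvFuture<T> = Pin<Box<
        dyn Future<Output = (Result<T, RecvError>, broadcast::Receiver<T>)> + Send,
    >>;

    // The async fn owns the receiver for the duration of the recv and
    // then hands it back, so no self-referential struct is needed.
    async fn recv_once<T: Clone>(
        mut rx: broadcast::Receiver<T>,
    ) -> (Result<T, RecvError>, broadcast::Receiver<T>) {
        let result = rx.recv().await;
        (result, rx)
    }

    struct MyBroadcastStream<T> {
        inner: Option<RecvFuture<T>>,
    }

    impl<T: Clone + Send + 'static> MyBroadcastStream<T> {
        fn new(rx: broadcast::Receiver<T>) -> Self {
            Self { inner: Some(Box::pin(recv_once(rx))) }
        }
    }

    impl<T: Clone + Send + 'static> Stream for MyBroadcastStream<T> {
        type Item = T;
        fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<T>> {
            let fut = self.inner.as_mut().expect("polled after end");
            match fut.as_mut().poll(cx) {
                Poll::Pending => Poll::Pending,
                Poll::Ready((Ok(item), rx)) => {
                    // Re-arm with a fresh future for the next recv.
                    self.inner = Some(Box::pin(recv_once(rx)));
                    Poll::Ready(Some(item))
                }
                Poll::Ready((Err(_), _)) => {
                    // Simplified: treat Closed (and Lagged) as the end.
                    self.inner = None;
                    Poll::Ready(None)
                }
            }
        }
    }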
But who knows? I've written hundreds of lines of code on top of Pin. Not thousands. Maybe I still don't truly get it. I've read plenty of blog posts, with all sorts of ideas about Pin being about a place, or about a value, or a life philosophy. But, yes, I haven't yet read the 9000-word essay you linked. Maybe if I do I'll finally, finally be enlightened.
But I doubt it. I think Pin is hard. If it was simple, you wouldn't have written 9000 words talking about it. As you say:
> Unfortunately, [pin] has also been one of the least accessible and most misunderstood elements of async Rust.
Pin foists all its complexity onto the programmer. And for that reason, I think it's a bad design. Maybe it was the best option at the time. But if we're still talking about it years later, if it's still confusing people so long after its introduction, then it's a bad part of the language.
I also suspect there are way simpler designs which could solve the problems that pin solves. Maybe I'm an idiot, and I'm not the guy who'll figure those designs out. But in that case, I'd really like to inspire smarter people than me to think about it. There's gotta be a simpler approach. It would be incredibly sad if people are still struggling with Pin long after I'm dead.
I don't deny that Pin is complicated to use as it stands (in fact, that is the entire thrust of my blog posts!); what I deny is that there is some magical easier solution involving Move and changes to the borrow checker. You wrote something on the back of a napkin and you imagine it's better, whereas I actually had to ship a feature that works.
The state of async Rust is not better because no one hired me to finish it past the MVP. I have solutions to all of your problems (implementing a stream with async/await, making Pin easier to use, etc.). Since I am not working on it, the project has spun its wheels on goofy ideas and gotten almost no work done in this space for years. I agree this is a bad situation. I've devoted a lot of my free time in the past year to explaining what I think the project should do, and it's slowly starting to move in that direction.
My understanding is that if let chaining is stalled because some within the project want to pretend there's a solution where a pattern matching operator could actually be a boolean expression. I agree that stalling things forever on the idea that there will magically be a perfect solution that has every desirable property in the future is a bad pattern of behavior that the Rust project exhibits. Tony Hoare had this insightful thing to say:
> One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies.
> The first method is far more difficult. It demands the same skill, devotion, insight, and even inspiration as the discovery of the simple physical laws which underlie the complex phenomena of nature. It also requires a willingness to accept objectives which are limited by physical, logical, and technological constraints, and to accept a compromise when conflicting objectives cannot be met. No committee will ever do this until it is too late.
Thank you for all your hard work on this. I'm sorry my post is, in many ways, dismissive of the huge amount of work that you and others have poured into Rust, async, Pin, explaining Pin in detail over and over again, and all of the other things I take for granted in the compiler constantly.
But appreciation does little to temper my frustration. Watching the rust project spin its wheels has dulled any enthusiasm I might have once had for its open, consensus based processes. I could get involved - but I worry I'd be yet another commenter making long issue threads even longer. I don't think Rust has a "not enough cooks in the kitchen" shaped problem.
I love that quote. I agree with it: at some point, like with Pin and the "foo.await" vs "await foo" discussion, you just have to pick an answer, any answer, and move forward. But the siren song of that "simple and elegant" solution still calls. Alan Kay once made a similar observation. He pointed out that it took humanity thousands of years (and two geniuses) to invent calculus. And now we teach it to 8th grade children. How remarkable. Clearly, the right point of view is worth 80 IQ points.
I look forward to reading your blog posts on the topic. I suspect there's lots of workable solutions out there in the infinite solution space. Research is always harder and slower than I think it should be. And this is very much a research question.
You seem very convinced that replacing Pin with Move would be a mistake. Maybe! I wouldn't be surprised if the Move vs Pin question is a red herring. I suspect there's an entirely different approach which would work much better, something like, as I said in my post, attacking the problem by changing the borrow checker. Maybe that wouldn't be viable for Rust. That's fine. There will be more languages following in its footsteps. I want them to be as good as possible.
And I swear, there's a better answer here somewhere.
> I've devoted a lot of my free time in the past year to explaining what I think the project should do, and its slowly starting to move in that direction.
That's very intriguing. Do you have any examples? Willing to learn more.
> True. How long should that process take? A month? A year? Two years?
If you want a feature that everyone complains about, like Pin or async Rust, then yes, that is how long the process should take.
If you don't want a feature that everyone uses as their stock example for why language designers are drooling morons, and the feature has any amount of complexity to it, then the process should probably take over a decade.
There's a commonality to the features you're complaining about: they're cases where the desire to push an MVP that satisfied some, but not all, use cases overrode the time necessary to fully understand the consequences of the decisions, not just for implementing the feature itself but for its necessary interactions with other features, present and future.
I do appreciate the irony, though, of you starting out by complaining about Rust moving too slowly before launching into detailed criticism of a feature that most agree is (at least in part) the result of Rust moving too quickly.
> before launching into detailed criticism of a feature that most agree is (at least in part) the result of Rust moving too quickly.
Is Pin the result of moving too quickly? Maybe.
Personally, I'm not convinced that it's generally possible to explore the design space properly by having long conversations. At some point, you have to ship. Figure out if it's a good idea with your feet. Just like Pin did.
I don’t claim to be smarter than anyone on the rust team who worked on this feature before it was launched. Only, now it’s launched and people have used it, I think we should go back to the drawing board and keep looking for other approaches.
As someone who has worked a lot to get if let chains stabilized (but so far hasn't achieved the goal), there are surprisingly few blockers: only an ICE (internal compiler error). But the ICE fix requires some breaking changes, so it's being phased in as part of the 2024 edition. The alternative to making breaking changes would be to make if let chains behave differently from if let, which wouldn't be nice.
Hopefully we'll have stable if let chains soon-ish. But note that nowadays it's mostly volunteers working on the Rust language, so things might not move as fast any more.
In any case, writing a language from scratch is going to be ten times more involved than targeting nightly Rust where if let chains are available.
> The obvious examples you want to use work fine, but the feature needs to be designed in such a way that the language remains internally consistent and works in all edge cases.
?? Then why did the language team put it on the 2024 roadmap? Am I looking at something different? (Specifically, under the 'Express yourself more easily' (1) goal, which links to the RFC issue (2).)
It certainly looks like the implementation is both complete and unblocked, and actively used.
It looks more like the issue is that (despite being put on the roadmap and broadly approved as a feature) it's being argued about because of the alternative proposal for 'is' syntax.
I.e., if you want to generalize, then yes, there are features which are difficult to implement (yeah, I'll just make a Move trait... yeah... no. It's not that easy).
BUT.
That's not a problem.
A lot of clever folk can work through issues like that and find solutions for that kind of problem.
The real problem is that RFCs like this end up in the nebulous 'maybe maybe' bin, where they're implemented, have people who want them, have people who use them, and have, broadly, the approval of the lang team (it's on the roadmap).
...but then, they sit there.
For months. Or years. While people argue about it.
It's kind of shit.
If you're not going to do it, make the call, close the RFC. Say "we're not doing this". Bin the code.
Or... merge it into stable.
Someone has to make the call on stuff like this, and it's not happening.
This seems to happen to a fair few RFCs to a greater or lesser extent, but this one is particularly egregious in my opinion.
Given that your proposal is backwards compatible, what is preventing it from moving into the standard language faster? Especially if it improves the situation drastically.
Also, why would pinned be syntactic sugar for Pin and not the other way around?
I disagree. Some features are more complex than others and design has little to do with that complexity.
Async is a good example of a complex feature that needs a fairly detailed blog post to understand the nuances. Pretty much any language with coroutines of some sort will have one or many blog posts going into great detail explaining exactly how those things work.
Similarly, assuming Rust added HKT, that would also require a series of blog posts to explain as the concept itself is foreign to most programmers.
Languages using pure versions of the pi calculus support concurrency without any of the usual headaches.
Async is a great example of this problem. It is way more cumbersome in Rust than it could be, in a different universe where Rust concurrency made different choices.
> A properly designed feature shouldn’t require an entire blog post, let alone multiple, to understand.
After reading through the wiki about pi calculus and looking up the few languages that support it, I would be pretty shocked to find that a language that added a pi-calculus feature wouldn't need several blog posts explaining what it is and how to understand it.
It's a stretch to argue that Go's concurrency model is pi calculus. Go supports lexical closures, and even the initial body of a goroutine can close over variables in the parent scope.
Go's concurrency model is fundamentally lexical closures[1] and threading, with channels layered on top. Lexical closing is, after all, how channels are initially "passed" to a goroutine, and for better or worse it's not actually a common pattern to pass channels through channels. And but for Go hiding some of the lower-level facilities needed for thread scheduling, you could fully implement channels atop Go's lexical closures and threading.
I think the similarity to pi calculus is mostly coincidence, or perhaps convergent evolution. The choice not to make goroutines referenceable as objects, and the fact channels can be communicated over channels, makes for a superficial similarity. But the former--lack of threads as first-class objects--comes from the fact that though the concurrency model is obviously threading, Go designers didn't want people to focus on threads, per se; and also it conveniently side-steps contentious issues like thread cancellation (though it made synchronous coroutines problematic to implement as the GC has no way to know when a coroutine has been abandoned). And the ability to pass channels through channels is just consistent design--any object can be passed through a channel.
[1] Shared reference--non-copying, non-moving--closures. Though Go's motto is "share memory by communicating" as opposed to "communicate by sharing memory", Go comes to the former by way of the latter.
Just to add, Go was inspired by Hoare's CSP paper [1]. Hoare came up with the ideas of CSP separately from Milner [2], even though they have some crossover concepts. The two collaborated later on, but really had somewhat independent approaches to concurrency.
To respond to the OP: Go's concurrency model absolutely has multiple blogs written about it explaining how it works. It's actually a little funny that OP thought Go was based on the pi calculus when it was actually based on CSP. That goes to my original disagreement: good features need explanation, and they don't become "bad" just because they require blog posts.
Do you even know what the pi calculus is? Like, you can implement the pi calculus (or the lambda calculus) by explicitly rewriting names, but that's rarely done in practice. Any practical implementation would have a set of channels possibly shared by different processes, and that's not very different from the free-threading model with channels. By disallowing any other communication methods you effectively end up with the actor model; was that what you were arguing for?
Basically every programming concept requires the equivalent of a blog post to understand. Remember learning pointers? Remember learning inheritance? Remember literally every programming tutorial you ever read when you were starting out? I don't understand why people reach a certain level of proficiency and then declare "I shouldn't have to work to learn anything ever again!".
Rust's async model can support io-uring fine, it just has to be a different API based on ownership instead of references. (That's the conclusion of my posts you link to.)
Yes, this is the actual reason. In Java you're restricted to one implementation of an interface for a type by syntactic construction (classes list their interfaces in their header and each interface can only appear once). In Rust there is a similar restriction (called coherence), but it takes into account all of the parameters to a trait, including its generics.
An illustrative example of the difference: `AsRef<T>` and `Deref` have almost identical signatures, except that the target type for `AsRef` is a parameter and for `Deref` is an associated type. `String` implements `AsRef<str>`, `AsRef<Path>`, and so on, but only `Deref<Target = str>`.
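Concretely:

    use std::ops::Deref;
    use std::path::Path;

    fn main() {
        let s = String::from("hello");

        // `AsRef<T>`'s target is a type *parameter*, so coherence
        // permits many implementations for one type:
        let as_str: &str = s.as_ref();
        let as_path: &Path = s.as_ref();
        let as_bytes: &[u8] = s.as_ref();

        // `Deref`'s target is an *associated type*, so a type gets
        // exactly one implementation: for String, `Target = str`.
        let deref: &str = s.deref();

        println!("{as_str} {:?} {:?} {deref}", as_path, as_bytes);
    }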
The blog post's meandering description of the difference between static and dynamic dispatch has no relevance whatsoever.