> that async programming aims to avoid (performance, deadlocks, your business logic being cluttered with low level implementation details).
I disagree with you: my code looks safe and simple with explicit blocking threads, and at the same time it is much easier to reason about what is going on and to tune, in contrast to async frameworks, which hide most of the details under the hood.
You can argue about performance, that async/epoll/etc. lets you avoid spawning thousands of threads and removes some overhead, but there aren't many benchmarks on the internet (per my research) showing that this overhead is actually large.
If you are using explicit blocking, sharing data between threads, and have not run into deadlocks, then your application is trivial (which is great if it solves your problem).
You can minimize sharing data between threads because it's easier to give data affinity to a thread (ie only thread A ever reads or writes a given piece of data). You can still access that data from multiple modules, because with async the thread as a whole is never blocked waiting for IO. An extreme example is nodejs, where you have only one thread, can do thousands of things concurrently, and never have to coordinate data access (ie via mutexes).
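Roughly, as a minimal sketch (hypothetical code, not from any particular framework): one thread owns the data outright and other threads post closures to its queue, so only the queue ever needs a lock, never the data itself.

    #include <condition_variable>
    #include <functional>
    #include <memory>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <unordered_map>

    // Only the owner thread ever touches its state; other threads post
    // closures to its queue, so the state itself needs no mutex.
    class OwnerThread {
    public:
        OwnerThread() : worker_([this] { run(); }) {}
        ~OwnerThread() {
            post([this] { done_ = true; });  // runs on the owner thread, no race
            worker_.join();
        }
        // Any thread may call post(); the task executes on the owner thread.
        void post(std::function<void()> task) {
            {
                std::lock_guard<std::mutex> lock(m_);
                q_.push(std::move(task));
            }
            cv_.notify_one();
        }
    private:
        void run() {
            while (!done_) {
                std::function<void()> task;
                {
                    std::unique_lock<std::mutex> lock(m_);
                    cv_.wait(lock, [this] { return !q_.empty(); });
                    task = std::move(q_.front());
                    q_.pop();
                }
                task();  // exclusive access to owner-local data, by construction
            }
        }
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<std::function<void()>> q_;
        bool done_ = false;
        std::thread worker_;
    };

    int main() {
        OwnerThread owner;
        // shared_ptr keeps the map alive until queued tasks finish; only the
        // owner thread ever reads or writes it, so no lock around the map.
        auto counts = std::make_shared<std::unordered_map<std::string, int>>();
        owner.post([counts] { ++(*counts)["requests"]; });
    }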
It's not either or, you can combine the two. I've worked on a system that did real time audio mixing for 10000s of concurrent connections, utilizing >50 cores, mostly with one thread each. Each thread had thread-local data, was receiving/sending audio packets to hundreds/thousands of different IP addresses just fine without worrying about mutexes at all. Try that with tens of thousands of actual OS threads and the associated scheduling overhead.
Having data affinity to cores is also great for cache hit rates.
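A generic illustration of one of those effects (my own example, not code from the runtime): if per-thread counters share a cache line, two cores writing their "own" counters still ping-pong that line between them (false sharing); padding each worker's state to a cache line avoids it.

    // Pad per-thread stats to a typical 64-byte cache line so worker i's
    // writes to stats[i] never invalidate the line holding stats[i + 1].
    struct alignas(64) PerThreadStats {
        long packets_mixed = 0;
        long bytes_sent = 0;
    };

    // One slot per worker thread; worker i only ever writes stats[i].
    PerThreadStats stats[64];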
Here is part of the C++ runtime this is based on: https://github.com/goto-opensource/asyncly. I was the principal author of it when it was created (before it was open sourced).
> Each thread had thread-local data, was receiving/sending audio packets to hundreds/thousands of different IP addresses just fine without worrying about mutexes at all.
It doesn't sound like they are really sharing data with each other; it looks like your logic is nicely linearizable and data-localized. You can't implement access to some global hashmap that way, for example.
> Try that with tens of thousands of actual OS threads and the associated scheduling overhead.
I run this (10k threads blocked on DB access) in prod and it works fine for my needs. There are lots of statements on the internet about the overhead, but not many benchmarks showing how large it actually is.
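The cost is also easy to measure yourself. Here's a toy sketch (not a rigorous benchmark; the sleep stands in for a thread parked on a DB call, and you may need to raise ulimits for 10k threads):

    #include <chrono>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        constexpr int kThreads = 10'000;
        std::vector<std::thread> pool;
        pool.reserve(kThreads);
        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < kThreads; ++i) {
            // Each thread blocks, as it would while waiting on a DB.
            pool.emplace_back([] { std::this_thread::sleep_for(std::chrono::seconds(5)); });
        }
        auto spawn_ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                            std::chrono::steady_clock::now() - start).count();
        // Note: default thread stacks are mostly virtual, not resident memory.
        std::printf("spawned %d threads in %lld ms\n", kThreads, (long long)spawn_ms);
        for (auto& t : pool) t.join();
    }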
> Here is part of the C++ runtime this is based on
yeah, I need one runtime on top of another runtime, with unknown quality, support, longevity and number of gotchas.
> It doesn't sound like they are really sharing data with each other; it looks like your logic is nicely linearizable and data-localized. You can't implement access to some global hashmap that way, for example.
Yes, because data can have thread affinity. Data doesn't need to be shared by _all_ connections, just by a few hundred/thousand. This enables connections to be scheduled to run on the same thread so that they can share data without synchronization.
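As an illustrative sketch (my own simplification, not asyncly's actual API): hash whatever key those connections share, say a conference id, to pick the worker, so everything needing the same state lands on the same thread.

    #include <cstddef>
    #include <functional>
    #include <string>

    // All connections with the same group key (e.g. a conference id) map
    // to the same worker thread, so they share that group's data lock-free.
    std::size_t pick_worker(std::size_t num_workers, const std::string& group_key) {
        return std::hash<std::string>{}(group_key) % num_workers;
    }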
> I run this (10k threads blocked on DB access) in prod and it works fine for my needs. There are lots of statements on the internet about the overhead, but not many benchmarks showing how large it actually is.
> A wiki page doesn't mean it is well researched; where can I see results of overhead measurements on modern hardware?
Here is how this works: at the bottom of the wiki page, there are referenced papers. They contain measurements on modern hardware. You read those, then perhaps go to Google and see if there is any newer research that cites those papers.
I spent a short time looking and found that most papers on that page are very outdated or don't have relevant info (no measurements of the overhead). Give a specific paper and citation, or we can finish this discussion.
Maybe you should just take a college computer architecture course along the lines of Hennessy/Patterson. This is nothing new, I learned much of this in college 15 years ago. The problem has only gotten worse since then, computers have not become more single threaded.
My reading is that the graphs in that post were just invented by the author to illustrate his idea and are not backed by any benchmarks or measurements; at least, I don't see any links to code in the article, nor any mention of what logic he actually ran or how many threads/connections he spawned.
> The problem has only gotten worse since then, computers have not become more single threaded.
Computers can now handle 10k blocking connections with ease.
> yeah, I need one runtime on top of another runtime, with unknown quality, support, longevity and number of gotchas.
It's a library. It solved our problems at the time, years ago. It's still used in production, piping billions of audio minutes per month. You don't have to use it; I merely referred to it as an example. A similar library is proposed for inclusion in C++23: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p23...
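Assuming the proposal in question is the senders/receivers design, here is roughly what the model looks like, written against the stdexec reference implementation (names may differ between paper revisions):

    #include <exec/static_thread_pool.hpp>
    #include <stdexec/execution.hpp>

    int main() {
        // Work is described as a lazy "sender" pipeline; nothing runs
        // until it is awaited, and no thread blocks mid-pipeline.
        exec::static_thread_pool pool(4);
        auto sched = pool.get_scheduler();

        auto work = stdexec::schedule(sched)                     // start on the pool
                  | stdexec::then([] { return 41; })             // runs on a pool thread
                  | stdexec::then([](int x) { return x + 1; });  // continuation

        auto [result] = stdexec::sync_wait(std::move(work)).value();
        return result == 42 ? 0 : 1;
    }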