These days in modern C++ you have std::async (built on promise/future), std::thread (pthread-style), and coroutines. My understanding is that std::async is a simpler API for parallel code and std::thread gives finer-grained control, while a coroutine runs on one core/CPU by design, so it needs a thread pool (i.e. using std::thread) or something similar to utilize all cores, unlike std::async or std::thread, which are both multi-core ready.
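To make the contrast concrete, here's a minimal sketch (illustrative only, not a recommendation) that sums the two halves of a vector, once with std::async and once with std::thread:

    #include <cstdio>
    #include <future>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Sum the two halves of a vector on separate cores: std::async hands the
    // result back through a future, std::thread leaves the plumbing to you.
    int main() {
        std::vector<int> v(1'000'000, 1);
        auto mid = v.begin() + v.size() / 2;

        // std::async: the launch policy requests a new thread; the result
        // arrives via the future.
        auto first_half = std::async(std::launch::async, [&] {
            return std::accumulate(v.begin(), mid, 0);
        });

        // std::thread: we manage the result variable and the join ourselves.
        int second_half = 0;
        std::thread t([&] { second_half = std::accumulate(mid, v.end(), 0); });

        t.join();  // join before reading second_half to avoid a data race
        std::printf("%d\n", first_half.get() + second_half);
    }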
Don't forget the upcoming "executors" concept. Good talk w/Eric Niebler on why promise/future is not so great[1] and P0443R14[2]. Seems like it won't make it for C++23, though.
Good to know. std::async for simple and less intensive cases; for anything serious std::thread might be the way to go, at least until executors are real.
I'm assuming you're wondering about the process rather than why specifically this piece of work was not prioritised enough to get done (which you'd have to ask the relevant people about).
C++ is a JTC1 standard, which means it needs to use ISO processes. That means producing a CD (Committee Draft), which then goes through formal international processes in which, in principle, there might be objections from ISO's members (which, remember, are national standards agencies from around the world, acting on behalf of sovereign entities). Those objections get addressed, and only then, months later, is it published as a standard.
So in practice that means WG21 (the C++ committee) needs to have more or less final text for C++23 by July 2022, i.e. the C++23 standard is in effect already decided.
Coroutines are a tool for concurrency, threads a tool for parallelism - see e.g. https://stackoverflow.com/questions/1050222/what-is-the-diff... . In my new code coroutines have entirely replaced futures; they make the code much more readable. They can also be used where parallelism is irrelevant, for instance for generators.
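For instance, a hand-rolled generator as a minimal sketch (IntGenerator is a made-up name for illustration; std::generator itself only lands in C++23):

    #include <coroutine>
    #include <cstdio>
    #include <exception>

    // Minimal generator type, just enough for the example.
    struct IntGenerator {
        struct promise_type {
            int current = 0;
            IntGenerator get_return_object() {
                return IntGenerator{std::coroutine_handle<promise_type>::from_promise(*this)};
            }
            std::suspend_always initial_suspend() noexcept { return {}; }
            std::suspend_always final_suspend() noexcept { return {}; }
            std::suspend_always yield_value(int v) noexcept { current = v; return {}; }
            void return_void() noexcept {}
            void unhandled_exception() { std::terminate(); }
        };

        explicit IntGenerator(std::coroutine_handle<promise_type> h) : handle(h) {}
        IntGenerator(IntGenerator&& other) noexcept : handle(other.handle) { other.handle = {}; }
        IntGenerator(const IntGenerator&) = delete;
        ~IntGenerator() { if (handle) handle.destroy(); }

        // Resume until the next co_yield; false once the coroutine has finished.
        bool next() { handle.resume(); return !handle.done(); }
        int value() const { return handle.promise().current; }

        std::coroutine_handle<promise_type> handle;
    };

    // The coroutine body: effectively a state machine that produces one value
    // per resumption, with no threads or parallelism involved at all.
    IntGenerator squares(int n) {
        for (int i = 0; i < n; ++i)
            co_yield i * i;
    }

    int main() {
        auto gen = squares(5);
        while (gen.next())
            std::printf("%d\n", gen.value());  // prints 0 1 4 9 16
    }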
Think of coroutines as a language-level way to turn procedural-looking call trees into state machines where the caller can control when and how the transitions of the state machine happen.
What's the difference between concurrency and parallelism in present lingo?
We didn't distinguish between them in the early days (early 1990s) of working on parallelizing compilers and proliferating shared-memory multiprocessors, as I recall. I first heard someone say they meant different things about 12 years ago, from a person roughly 15 years younger than me.
I was also working on early multiprocessors in the early 90s, and it's true that the terms were often treated as synonyms then, but for at least half of the time since then the distinction has been clear and pretty well agreed upon. Concurrency refers to the entire lifetimes of two activities overlapping, and can be achieved via scheduling (preemptive or cooperative) and context switching on a single processor (core nowadays). Parallelism refers to activities running literally at the same instant on separate bits of hardware. It doesn't make a whole lot of sense etymologically, might even be considered backward in that sense, but it's the present usage.
Note: I deliberately use the generic "activity" to mean any of OS threads, user-level threads, coroutines, callbacks, deferred procedure calls, etc. Same principle/distinction regardless.
If I'm interpreting your question correctly, yes. Two concurrent activities can run sequentially or alternately on a single core, via context switching. Or they can run in parallel on separate cores. It shouldn't matter; either way it's concurrency with most of the associated complexity around locks and most kinds of data races. OTOH, the two cases can look very different e.g. when it comes to cache coherency, memory ordering, and barriers. I've seen a lot of bugs that remained latent on single-core systems or when related concurrent tasks "just happened" to run on the same core, but then bit hard when those tasks started jumping from core to core. This stuff's never going to be easy.
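As a sketch of the kind of issue meant here: the classic flag-and-data handoff, which a sloppier version can appear to get away with when both tasks land on one core, but which needs explicit release/acquire ordering once they run on different cores (toy example):

    #include <atomic>
    #include <cassert>
    #include <thread>

    int data = 0;
    std::atomic<bool> ready{false};

    void producer() {
        data = 42;                                     // plain write
        ready.store(true, std::memory_order_release);  // publish the flag
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire)) // pairs with the release above
            ;                                          // busy-wait, fine for a toy
        assert(data == 42);                            // guaranteed by release/acquire
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
    }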
Concurrency is a bit of an overused term; early on it referred to parallelism too. Anyway, yes, I know the difference between pthreads and coroutines.
My problem with coroutines is how to use them on multi-core systems. My current thought is to have a pthread pool sized to the number of cores, with each thread running multiple coroutines, since again a coroutine by itself seems not an ideal fit for leveraging multiple cores. The pthread-pool + coroutine approach gives me a combination of simpler code and multi-core usage.
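Roughly what I mean, as a toy sketch only (ThreadPool, FireAndForget and schedule() are made-up names, not any standard API): a fixed pool of std::thread workers that resume coroutine handles, plus an awaitable that hops the current coroutine onto the pool.

    #include <condition_variable>
    #include <coroutine>
    #include <cstdio>
    #include <deque>
    #include <exception>
    #include <mutex>
    #include <thread>
    #include <vector>

    // Fixed-size thread pool that resumes queued coroutine handles on its workers.
    class ThreadPool {
    public:
        explicit ThreadPool(unsigned n) {
            for (unsigned i = 0; i < n; ++i)
                workers_.emplace_back([this] { run(); });
        }
        ~ThreadPool() {
            { std::lock_guard lk(m_); done_ = true; }
            cv_.notify_all();
            for (auto& w : workers_) w.join();  // workers drain the queue before exiting
        }
        void post(std::coroutine_handle<> h) {
            { std::lock_guard lk(m_); queue_.push_back(h); }
            cv_.notify_one();
        }
        // Awaitable that reschedules the awaiting coroutine onto a pool thread.
        auto schedule() {
            struct Awaiter {
                ThreadPool* pool;
                bool await_ready() const noexcept { return false; }
                void await_suspend(std::coroutine_handle<> h) { pool->post(h); }
                void await_resume() const noexcept {}
            };
            return Awaiter{this};
        }
    private:
        void run() {
            for (;;) {
                std::coroutine_handle<> h;
                {
                    std::unique_lock lk(m_);
                    cv_.wait(lk, [this] { return done_ || !queue_.empty(); });
                    if (queue_.empty()) return;  // done_ is set and nothing left to run
                    h = queue_.front();
                    queue_.pop_front();
                }
                h.resume();  // the coroutine continues on this worker thread
            }
        }
        std::vector<std::thread> workers_;
        std::deque<std::coroutine_handle<>> queue_;
        std::mutex m_;
        std::condition_variable cv_;
        bool done_ = false;
    };

    // Fire-and-forget coroutine: starts eagerly and frees its own frame at the end.
    struct FireAndForget {
        struct promise_type {
            FireAndForget get_return_object() { return {}; }
            std::suspend_never initial_suspend() noexcept { return {}; }
            std::suspend_never final_suspend() noexcept { return {}; }
            void return_void() noexcept {}
            void unhandled_exception() { std::terminate(); }
        };
    };

    FireAndForget work(ThreadPool& pool, int id) {
        co_await pool.schedule();  // hop onto a pool thread
        std::printf("coroutine %d running on a pool thread\n", id);
    }

    int main() {
        unsigned n = std::thread::hardware_concurrency();
        ThreadPool pool(n ? n : 2);
        for (int i = 0; i < 8; ++i)
            work(pool, i);
        // ThreadPool's destructor drains anything still queued before joining.
    }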
There's decades of research on this; search for M:N threading. The best approach for scheduling coroutines depends massively on the application domain in practice.