Erlang operates on a higher level, with much, much larger chunks. Any sensible modern system will, for instance, have some sort of execution context (thread/green thread/continuation/task/whatever) associated with a single incoming HTTP request. That's very nice, and not going anywhere.
However, Erlang has very little to say about parallelizing loops, or about anything in the levels between a single loop and an HTTP request.
Nor would it be a good base for such things; if you're worried about getting maximum parallel performance out of your CPUs, you pretty much need to start from a base where single-threaded performance is already roughly optimal, such as C, C++, or Rust. Go at the very outside, and even that is a bit of a stretch in my opinion. BEAM does not have that level of single-threaded performance. There's no point in making BEAM fully utilize 8 CPUs for this sort of parallel work when all that does is get you back to roughly where a single thread of Rust already runs.
(I think this is an underappreciated aspect of trying to speed things up with multiple CPUs. There's no point straining to get 8 CPUs running in some sort of complicated perfect synchronization in your slow-ish language when you could just write the same thing in a compiled language and get it on one CPU. I particularly look at the people who think that GIL removal in Python is a big deal for performance and wonder what they're thinking... a 32-core machine parallelizing Python code perfectly, with no overhead, might still be outperformed by a single-core Go process and would almost certainly be beaten by a single-core Rust process. And perfect parallelization across 32 cores is a pipe dream. Unless you've already maxed out single-core performance, you don't need complicated parallelization, you need to write in a faster language to start with.)
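To make the back-of-envelope arithmetic behind that claim concrete, here is a minimal sketch. The per-core slowdown factors are illustrative assumptions for CPU-bound pure-Python code, not measurements; the point is only how the ratio works out even under impossibly perfect scaling:

```python
# Assumed per-core slowdown of pure-Python CPU-bound code relative
# to the equivalent compiled code. These numbers are hypothetical,
# chosen in the commonly quoted 30-60x range, not benchmarks.
assumed_slowdown = {"Rust": 60, "Go": 40}

CORES = 32  # the 32-core machine from the paragraph above

for lang, factor in assumed_slowdown.items():
    # Grant Python perfect, zero-overhead scaling across all cores
    # (Amdahl's law guarantees reality is worse). Its speed relative
    # to ONE core of the compiled language is then cores / factor.
    relative = CORES / factor
    verdict = "faster" if relative > 1 else "slower"
    print(f"32-core Python vs 1-core {lang}: {relative:.2f}x ({verdict})")
```

Under these assumed factors, even the idealized 32-core Python run comes out behind a single compiled core; with real-world synchronization overhead the gap only widens.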