You can get within 20% of the performance for 1/10th of the cost by using a modern fast language (Rust, D, Nim, Go), and within 50% for even less by using C# or Java.
In my short encounter with "modern" webdev, I found that running out of RAM happened far sooner than running out of CPU, even with every possible trick thrown at increasing GC aggressiveness.
RAM is by far the most discriminately priced resource on all these new "cloud" hostings, and it has the most unpredictable performance change as you scale it up. Even on real hardware, high-RAM servers are quite expensive.
While CPU or I/O saturation naturally throttles itself, RAM exhaustion is rarely pretty and is hard to proof your software against. The most disconcerting part is that your RUST, GO, or the TRUE ENTERPRISE JAVA® doesn't really use that RAM at all: most of what these "modern languages" keep in RAM is just zeroes and empty buffers.
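For what it's worth, the "GC aggressiveness" knobs I mean look roughly like this in Go; a minimal sketch, with the limits purely illustrative rather than recommendations:

```go
package main

import (
	"runtime/debug"
)

func main() {
	// Soft ceiling for the runtime's total memory use (Go 1.19+).
	// The GC works increasingly hard as usage approaches this limit,
	// so exhaustion shows up as GC pressure instead of an OOM kill.
	debug.SetMemoryLimit(512 << 20) // 512 MiB, illustrative

	// More aggressive GC: collect when the heap grows 50% past the live
	// set instead of the default 100%. Trades CPU time for a smaller
	// resident size.
	debug.SetGCPercent(50)

	// ... application code ...
}
```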
Also, as the TechEmpower benchmarks show, in the "webdev" field C/C++ has no proper/official Postgres driver with pipelining support, so it even loses to Java in the fortunes benchmark.
Nowadays, to compete, the C/C++ entries use a fork of libpq with a batch API that hasn't been merged upstream for 6 years.
So the lack of good library support in the "webdev" field puts C/C++ at an extreme disadvantage compared to other, more "webdev"-friendly languages.
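For contrast, here is roughly what that kind of query pipelining looks like from a language with a maintained driver. A minimal sketch using Go's pgx; the fortune table/columns are borrowed from the benchmark, and the connection string in DATABASE_URL is assumed:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// Queue several queries; they go out together and are answered in
	// order, so the client pays one network round trip instead of five.
	batch := &pgx.Batch{}
	for _, id := range []int{1, 2, 3, 4, 5} {
		batch.Queue("SELECT message FROM fortune WHERE id = $1", id)
	}

	results := conn.SendBatch(ctx, batch)
	defer results.Close()

	for i := 0; i < 5; i++ {
		var msg string
		if err := results.QueryRow().Scan(&msg); err != nil {
			log.Fatal(err)
		}
		fmt.Println(msg)
	}
}
```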
> The system [LMAX] is built on the JVM platform and centers on a Business Logic Processor that can handle 6 million orders per second on a single thread. The Business Logic Processor runs entirely in-memory using event sourcing.
Your GP didn't just mention Rust, they also mentioned C# and Java, so when your parent refers to GC the more charitable interpretation is that they were using C# or Java and responding to that part of the message.
Is there an example of a task like that? Specifically, one that doesn't talk to a database/files or another service, in which case the talking side's performance becomes irrelevant unless it is really crawling. Most code "we" write is scheduling queries, gluing datasets together and jsoning the results into a socket through some stream library. I/O takes 98% anyway, rerouting and checks the other 2%. Personally I'm fluent in a range of languages, but wouldn't ever think of writing networking in C or a similar low-level environment. A mountain of work and skill for something expressible in just a few lines of python/perl/js/ts/lua/sql, zero economy. (Okay, maybe an nginx plugin in a critical case when multiplying instances and their costs doesn't help.)
That's the crux of the issue. Most of webdev is just managing a huge number of very simple pieces of code, where I/O from somewhere to somewhere dominates.
One particular issue I remember from talking with Alibaba engineers, back when I worked for a subcontractor on a custom DC project, was the "1 second kill".
That's the phenomenon where some super good deal is posted on the front page of Taobao, and the servers get squished as people from all over China smash F5 and click on the deal.
From the DB side, a purchase is a lock-and-write task, and in 2016 the whole of Taobao.com was tied to a single-point-of-failure MySQL cluster abused to the maximum.
They went to the Computer Science people, who only said that there is no way around locking and a single DB write origin.
External contractors wrote a super-duper performant "database gateway" which organised and queued purchase reservations to the stock database at around 50 Hz.
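The shape of the idea, as I understood it, was roughly this. A hypothetical sketch with all names illustrative: requests pile up in a queue and are flushed to the stock DB in batches about 50 times per second, so the single write master sees a calm, bounded stream instead of the stampede:

```go
package main

import (
	"fmt"
	"time"
)

type Reservation struct {
	UserID int64
	SKU    string
}

// flushToDB stands in for the single batched lock-and-write transaction
// against the stock table.
func flushToDB(batch []Reservation) {
	fmt.Printf("committing %d reservations in one transaction\n", len(batch))
}

// gateway absorbs the flash-sale traffic and only touches the DB on a
// ~50 Hz tick, regardless of how many requests arrive in between.
func gateway(requests <-chan Reservation) {
	ticker := time.NewTicker(20 * time.Millisecond) // ~50 Hz
	defer ticker.Stop()

	var pending []Reservation
	for {
		select {
		case r := <-requests:
			pending = append(pending, r) // queue; don't touch the DB yet
		case <-ticker.C:
			if len(pending) > 0 {
				flushToDB(pending)
				pending = pending[:0]
			}
		}
	}
}

func main() {
	reqs := make(chan Reservation, 1024)
	go gateway(reqs)

	// Simulated flash-sale stampede.
	for i := int64(0); i < 10_000; i++ {
		reqs <- Reservation{UserID: i, SKU: "hot-deal"}
	}
	time.Sleep(100 * time.Millisecond)
}
```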