It stops working when you need to connect to any external resource. Databases, HTTP clients, etc. maintain connection pools to skip the initial connection phase, which can be costly. That's why you usually need a long-running web application process.
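To make that concrete, here's a minimal Go sketch: database/sql's *sql.DB is itself a connection pool, opened once at startup and shared by every request. The driver and DSN below are hypothetical placeholders, not a recommendation.

    package main

    import (
        "database/sql"
        "log"
        "net/http"

        _ "github.com/lib/pq" // hypothetical driver choice
    )

    func main() {
        // Opened once at startup: sql.Open returns a pool, not one connection.
        db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        db.SetMaxOpenConns(20) // pooled connections are reused across requests
        db.SetMaxIdleConns(10)

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // Each request borrows a warm connection instead of paying
            // the TCP + TLS + auth handshake a fresh process would pay.
            var n int
            if err := db.QueryRow("SELECT 1").Scan(&n); err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            w.Write([]byte("ok"))
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

Under a CGI-style process-per-request model, that handshake cost lands on every single request instead of once per process lifetime.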
At my last job, a lot of our web services also benefited immensely from in-process caches and batching (to be fair, some of them were the cache for downstream services), and their scaling requirements pretty much dominated our budget.
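Roughly, the kind of thing a persistent process gets for free looks like the sketch below (names are illustrative, not from any particular service); the whole point is that the map survives between requests, which process-per-request can't offer.

    package main

    import (
        "sync"
        "time"
    )

    type entry struct {
        val     string
        expires time.Time
    }

    // Cache is a per-process in-memory cache guarded by a mutex.
    type Cache struct {
        mu sync.RWMutex
        m  map[string]entry
    }

    func NewCache() *Cache { return &Cache{m: make(map[string]entry)} }

    // Get returns a cached value if present and not yet expired.
    func (c *Cache) Get(k string) (string, bool) {
        c.mu.RLock()
        defer c.mu.RUnlock()
        e, ok := c.m[k]
        if !ok || time.Now().After(e.expires) {
            return "", false
        }
        return e.val, true
    }

    // Set stores a value with a TTL; entries accumulate across requests.
    func (c *Cache) Set(k, v string, ttl time.Duration) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.m[k] = entry{val: v, expires: time.Now().Add(ttl)}
    }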
I can totally see how the cgi-bin process-per-request model is viable in a lot of places, but when it isn't, the difference can be vast. I don't think we'd have benefited from the easier concurrency either, but that's probably just because it was all golang to begin with.
My understanding is that DuckLake, while being an open format, is not compatible with Iceberg, since it addresses some of its shortcomings, such as storing metadata in blob storage (DuckLake keeps it in a SQL database instead).