Most of the interesting things I’ve worked on would fall down if you couldn’t cache some stuff across requests, like database connections. If every request requires a new TCP connection, even if to something lightweight and fast like a local pgbouncer instance, requests are going to be a lot slower (and the time spent context switching from userspace to kernelspace and back a lot higher) than if you could reuse one from an in-memory pool.
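To make the pattern concrete, here's a minimal sketch of per-process connection reuse (assuming Python with psycopg2 and a local pgbouncer; the DSN, pool sizes, and function names are illustrative, not from the original comment):

```python
# Module-level pool created once at process startup, then reused across requests.
# Assumes psycopg2 and a local pgbouncer listening on 127.0.0.1:6432 (illustrative DSN).
from psycopg2 import pool

_pool = pool.SimpleConnectionPool(
    minconn=1,
    maxconn=10,
    dsn="host=127.0.0.1 port=6432 dbname=app user=app",
)

def handle_request(user_id):
    # Borrow an already-open TCP connection instead of dialing a new one per request.
    conn = _pool.getconn()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT name FROM users WHERE id = %s", (user_id,))
            return cur.fetchone()
    finally:
        # Hand the connection back so the next request can reuse it.
        _pool.putconn(conn)
```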
I love stateless stuff and have started moving a lot of things to Lambdas where appropriate, but I wouldn’t use a completely stateless setup for handling large numbers of requests per unit time.
Sure, but then it's no longer purely stateless, and honestly no more isolated than, say, a Django app (assuming it hasn't been deliberately broken to share state across requests).
It's still completely stateless from the dev standpoint (even if it's not truly stateless under the hood). It's more like a cached connection, if that makes sense.
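The Lambda version of that is usually just caching the connection in module scope so warm invocations reuse it. A rough sketch (assuming Python and psycopg2; the handler name, env var, and query are illustrative assumptions):

```python
# Created during a cold start and cached for as long as the execution environment lives.
# Warm invocations reuse it; the handler logic itself stays stateless. (Illustrative DSN.)
import os
import psycopg2

_conn = None

def _get_conn():
    global _conn
    # Reconnect on a cold start or if the cached connection has gone stale.
    if _conn is None or _conn.closed:
        _conn = psycopg2.connect(os.environ["DATABASE_URL"])
    return _conn

def lambda_handler(event, context):
    with _get_conn().cursor() as cur:
        cur.execute("SELECT 1")
        return {"ok": cur.fetchone()[0] == 1}
```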