Hacker News | ivovk's comments

It stops working when you need to connect to any external resource. Databases, HTTP clients, etc. maintain connection pools to skip the initial connection phase, which can be costly. That's why you usually need a long-running web application process.


At my last job, a lot of our web services also benefited immensely from in-process caches and batching (to be fair, some of them were the cache for downstream services), and their scaling requirements pretty much dominated our budget.

I can totally see how the cgi-bin process-per-request model is viable in a lot of places, but when it isn't, the difference can be vast. I don't think we'd have benefited from the easier concurrency either, but that's probably just because it was all golang to begin with.


You can solve that with a sidecar: a dedicated process (or container) that pools connections for you. PgBouncer is one example.
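As a rough sketch of what that looks like, a PgBouncer sidecar sits between the short-lived workers and Postgres, holding a small pool of server connections open. The values below are illustrative (database name, paths, and sizes are assumptions, not recommendations):

```ini
; pgbouncer.ini (sketch)
[databases]
; workers connect to PgBouncer as if it were the database
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling lets many short-lived clients share
; a small number of long-lived server connections
pool_mode = transaction
default_pool_size = 20
max_client_conn = 200
```

Each cgi-bin style process then connects to `127.0.0.1:6432`; the cheap local connection replaces the expensive remote one, while PgBouncer keeps the real connections warm.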


Great: more things to maintain that can break, all to work around the original sandcastle instead of fixing the root issue.


My understanding is that DuckLake, while an open-source format, is not compatible with Iceberg, since it addresses some of Iceberg's shortcomings, such as metadata being stored in blob storage.

