
No worries, entirely valid question. There may be ways to tune the page cache to behave more like this, but my mental model for what we've done is that it effectively makes reads and writes transparently redirect to the equivalent of a tmpfs, up to a certain size. If you reserve 2GB of memory for the cache, and the CI job's read and written files total less than 2GB, then _everything_ stays in RAM, at RAM throughput/IOPS. When you exceed the limit of the cache, blocks are moved to the physical disk in the background. It feels like we have more direct control here than with the page cache (and the page cache is still helping out in this scenario too, so it's more that we're using both).
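
To make that concrete, here's a minimal sketch of the idea as I understand it (this is not their actual implementation; the class, the 2GB default, and the oldest-first eviction policy are all made up for illustration): written blocks stay in RAM up to a byte budget, the oldest blocks spill to disk in the background once the budget is exceeded, and reads check RAM first.

    import threading
    from collections import OrderedDict

    class RamOverflowCache:
        """Toy block cache: recently written blocks live in RAM up to
        `budget` bytes; the oldest blocks are flushed to a disk file in
        the background once the budget is exceeded."""

        def __init__(self, disk_path, budget=2 * 1024**3):
            self.budget = budget
            self.used = 0
            self.blocks = OrderedDict()      # offset -> bytes, oldest first
            self.disk = open(disk_path, "w+b")
            self.lock = threading.Lock()

        def write(self, offset, data):
            with self.lock:
                old = self.blocks.pop(offset, None)
                if old is not None:
                    self.used -= len(old)
                self.blocks[offset] = data   # stays in RAM for now
                self.used += len(data)
                if self.used > self.budget:  # spill in the background
                    threading.Thread(target=self._evict, daemon=True).start()

        def read(self, offset, length):
            with self.lock:
                if offset in self.blocks:    # RAM hit: no disk I/O at all
                    return self.blocks[offset][:length]
                self.disk.seek(offset)       # miss: fall through to disk
                return self.disk.read(length)

        def _evict(self):
            with self.lock:
                while self.used > self.budget and self.blocks:
                    offset, data = self.blocks.popitem(last=False)
                    self.disk.seek(offset)
                    self.disk.write(data)
                    self.used -= len(data)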


> reads and writes transparently redirect to the equivalent of a tmpfs, _up to a certain size_

The last bit (emphasis added) sounds novel to me; I don't think I've heard of anybody doing that before. It sounds like an almost-"free" way to get a ton of performance ("almost" because somebody has to figure out the sizing, though I bet you could automate that by having your tool export a "desired size" metric equal to the high-water mark of tmpfs-like storage used during the CI run).
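
For what it's worth, here's roughly what I mean as a hedged sketch (the mount path and the polling approach are my assumptions, not anything the tool actually does): sample usage of the tmpfs-like mount during the run and report the peak at the end as the suggested reservation for the next run.

    import os
    import threading
    import time

    def watch_high_watermark(mount="/mnt/ci-cache", interval=1.0):
        """Track peak bytes used on `mount` (path is hypothetical).
        Returns a function that stops sampling and reports the peak,
        i.e. the 'desired size' metric to export for the next run."""
        peak = 0
        stop = threading.Event()

        def sample():
            nonlocal peak
            while not stop.is_set():
                st = os.statvfs(mount)
                used = (st.f_blocks - st.f_bfree) * st.f_frsize
                peak = max(peak, used)
                time.sleep(interval)

        threading.Thread(target=sample, daemon=True).start()

        def report():
            stop.set()
            return peak

        return report

    # usage: report = watch_high_watermark(); ...run the CI job...; print(report())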


Just to add: my understanding is that unless you also tune how your workload performs writes, the page cache will not skip backing storage for writes, only for reads. So it does make sense to stack both, provided you're fine with not being able to rely on persistence of those writes.
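
Concretely (a hedged sketch; the paths are illustrative and this assumes Linux, where /dev/shm is a tmpfs):

    import os

    # On a regular filesystem this write lands in the page cache first,
    # but kernel writeback will still push the dirty pages to the device
    # on its own schedule, even if we never call os.fsync(); the page
    # cache only avoids the device for reads that hit cached pages.
    with open("/var/tmp/scratch.bin", "wb") as f:
        f.write(os.urandom(1 << 20))

    # On tmpfs the same write never generates disk I/O: the in-memory
    # pages are the backing store, so durability is simply not on offer.
    with open("/dev/shm/scratch.bin", "wb") as f:
        f.write(os.urandom(1 << 20))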



