
Regarding writes, the expectation is most likely that you'll update your entire data set periodically in a batch job, or export and cache data from another authoritative data store. I understood the atomic-replace feature as a convenient way to flip to a new version of the database after a batch job has produced and distributed it. It sounds like the "cdbmake" tool is designed to facilitate this.
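As a rough illustration of that flip, here's a minimal Python sketch of the write-temp-then-rename pattern. The serialization is a placeholder rather than the real cdb format, and the function and path names are hypothetical; it only shows why readers never see a half-written database.

    import os

    def rebuild_and_swap(records, db_path):
        """Build a fresh copy of the data set, then atomically swap it in.

        `records` is an iterable of (key, value) byte-string pairs and
        `db_path` is the file readers open. Hypothetical names; the body
        only illustrates the write-temp-then-rename pattern.
        """
        tmp_path = db_path + ".tmp"
        with open(tmp_path, "wb") as f:
            for key, value in records:
                # Placeholder serialization; cdbmake would emit the real
                # constant-database format here.
                f.write(len(key).to_bytes(4, "little"))
                f.write(len(value).to_bytes(4, "little"))
                f.write(key)
                f.write(value)
            f.flush()
            os.fsync(f.fileno())       # make the new file durable first
        # rename() is atomic on POSIX: readers see the old file or the new
        # one in its entirety, never a partially written mix.
        os.rename(tmp_path, db_path)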

A lot of data doesn't change very often, and a lot of data can tolerate propagation delay when it does change. When you have an extremely high read volume against such data sets, it can be economical to cache the entire data set and distribute it periodically to the fleets of machines that need it, rather than servicing individual reads over a network. This gives you lower cost and latency, a higher sustainable read volume, and higher availability, at the expense of engineering effort and propagation delay.
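On the reader side, picking up the newly distributed snapshot can be as simple as noticing the file changed and reopening it. A hedged sketch, with hypothetical names; a real reader would mmap and parse the cdb file rather than slurping bytes:

    import os

    class SnapshotReader:
        """Serve reads from a local copy of a periodically distributed data set."""

        def __init__(self, path):
            self.path = path
            self.stamp = None
            self.data = None
            self.maybe_reload()

        def maybe_reload(self):
            # Called on whatever schedule the propagation-delay budget allows.
            st = os.stat(self.path)
            stamp = (st.st_ino, st.st_mtime_ns)  # atomic rename gives the file a new inode
            if stamp != self.stamp:
                with open(self.path, "rb") as f:
                    self.data = f.read()         # placeholder for real parsing
                self.stamp = stamp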


