
> To be clear, the above specs would be pointless for most databases, as almost nothing scales to handle this kind of hardware well — and almost nobody tries.

That strikes me as the more interesting takeaway sentence.

It's not that much RAM if you think of it as 24 8-core machines.




For a database? At Expensify scale?

It still feels like a lot.


What is Expensify scale?


Ten million users [1] maybe averaging 4 or so interactions per year (filling out an expense report isn't a common task for a lot of those users)? Napkin math of 1.2 qps. Even if you 10x that for backing workflows, and double it because users are more active than expected, that's still under 30 qps.

[1]: https://www.sec.gov/Archives/edgar/data/1476840/000162828021...
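Spelling that napkin math out (a sketch; the 4-interactions-per-year figure is my guess above, not a published number):

  # napkin math for Expensify query load (assumed figures)
  users = 10_000_000          # ~10M users per the S-1 [1]
  interactions_per_year = 4   # assumed: expense reports are infrequent
  seconds_per_year = 365 * 24 * 3600

  base_qps = users * interactions_per_year / seconds_per_year
  print(f"base:   {base_qps:.1f} qps")    # ~1.3 qps

  # 10x for backing workflows, 2x for more-active-than-expected users
  padded_qps = base_qps * 10 * 2
  print(f"padded: {padded_qps:.0f} qps")  # ~25 qps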


Given the time, effort, and hard-dollar expense they put into developing BedrockDB, I have to imagine they do more than 1 QPS.

Why else would they benchmark to show they can achieve 4,000,000 QPS if all they need is 1 QPS?


While I think the figures might be a little conservative (or might work for the mean but not the peak), it is a little odd to imagine why an expenses app would need a database that synchronises via a private blockchain to track expenses. It would be interesting to understand the rationale.


Sunk cost fallacy?



