I think all modern systems, even ScyllaDB, commit in batches with no fsync on every write; you either get throughput or per-write durability, and you can't have both. The only thing Redpanda claims is that replication has to happen before fsync, so your data is not lost if the node that took the write dies due to a power failure. This is how Scylla and Cassandra work too, if I'm not wrong: even if a node dies before the batch fsync, replication from the memtable will already have happened, so the other nodes provide durability, and "data loss" is no longer true in a replicated setup. A single node? Obviously 100% data loss. But that's the trade-off a high-TPS system makes versus a durable single-node system. It's a matter of how you want to operate.
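For anyone unfamiliar, the batching trick usually goes by "group commit". A minimal sketch of the idea in Go (illustrative only, not how any of these engines actually implements it): writes are queued, one fsync covers the whole batch, and acks only go out after that fsync.

    package main

    import (
        "os"
        "time"
    )

    type write struct {
        data []byte
        done chan error // receives once the covering fsync completes
    }

    type wal struct {
        f  *os.File
        ch chan write
    }

    // run batches incoming writes: many logical writes share one fsync.
    func (w *wal) run(maxBatch int, maxWait time.Duration) {
        for {
            batch := []write{<-w.ch} // block for the first write
            deadline := time.After(maxWait)
        fill:
            for len(batch) < maxBatch {
                select {
                case wr := <-w.ch:
                    batch = append(batch, wr)
                case <-deadline:
                    break fill // bound the added latency by maxWait
                }
            }
            var err error
            for _, wr := range batch {
                if _, e := w.f.Write(wr.data); e != nil && err == nil {
                    err = e
                }
            }
            // One fsync durably covers every write in the batch.
            if e := w.f.Sync(); e != nil && err == nil {
                err = e
            }
            for _, wr := range batch {
                wr.done <- err // ack only after the group fsync
            }
        }
    }

    func main() {
        f, err := os.OpenFile("wal.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
        if err != nil {
            panic(err)
        }
        w := &wal{f: f, ch: make(chan write)}
        go w.run(128, 2*time.Millisecond)

        wr := write{data: []byte("hello\n"), done: make(chan error, 1)}
        w.ch <- wr
        if err := <-wr.done; err == nil {
            println("acked: durable up to the group fsync")
        }
    }

Replication-before-fsync is the same shape: swap the Sync call for "wait until a quorum of replicas has the batch in memory".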
Rapid is excellent. It also integrates with the standard library's fuzz testing, which is handy for persisting a high-priority corpus of inputs that have caused bugs in the past.
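Concretely, the integration is about one line; a sketch using pgregory.net/rapid (rapid.MakeFuzz is the bridge, if I remember the API right; the reverse-twice property is just an example):

    package example

    import (
        "testing"

        "pgregory.net/rapid"
    )

    // Example property: reversing a slice twice yields the original.
    func prop(t *rapid.T) {
        xs := rapid.SliceOf(rapid.Int()).Draw(t, "xs")
        rev := func(s []int) []int {
            out := make([]int, len(s))
            for i, v := range s {
                out[len(s)-1-i] = v
            }
            return out
        }
        got := rev(rev(xs))
        for i := range xs {
            if got[i] != xs[i] {
                t.Fatalf("mismatch at index %d", i)
            }
        }
    }

    // Plain property-based test.
    func TestReverse(t *testing.T) { rapid.Check(t, prop) }

    // Same property driven by `go test -fuzz`; failing inputs land in
    // testdata/fuzz and get replayed on every future run.
    func FuzzReverse(f *testing.F) { f.Fuzz(rapid.MakeFuzz(prop)) }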
testing/quick is adequate for small things and doesn't introduce new dependencies, but it's also frozen: many years ago, the Go team decided that PBT is complex enough that it shouldn't live in the stdlib.
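For small things it still works fine; the frozen API is essentially quick.Check (example property, idempotence of ToUpper):

    package example

    import (
        "strings"
        "testing"
        "testing/quick"
    )

    // quick generates random string inputs; returning false reports
    // the input as a counterexample.
    func TestUpperIdempotent(t *testing.T) {
        prop := func(s string) bool {
            u := strings.ToUpper(s)
            return strings.ToUpper(u) == u
        }
        if err := quick.Check(prop, nil); err != nil {
            t.Error(err)
        }
    }

No shrinking, though, which is the big thing rapid gives you over this.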
Sibling comments have already mentioned some common strategies - but if you have half an hour to spare, the property-based testing series on the F# for Fun and Profit blog is well worth your time. The material isn’t really specific to F#.
Sometimes, sure - but sometimes, passing around a fat wrapper around a DB cursor is worse, and the code would be better off paginating and materializing each page of data in memory. As usual, it depends.
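By "paginating" I mean something like keyset pagination, where each page is fully materialized before it's handed to the caller. A sketch (assumes a hypothetical users table with an indexed integer id, and database/sql):

    package example

    import "database/sql"

    // eachPage materializes one page at a time instead of threading a
    // live cursor through the rest of the code. Keyset pagination
    // (WHERE id > last seen) keeps each query cheap even deep into the
    // table, unlike OFFSET.
    func eachPage(db *sql.DB, pageSize int, handle func(names []string) error) error {
        lastID := 0
        for {
            rows, err := db.Query(
                `SELECT id, name FROM users WHERE id > $1 ORDER BY id LIMIT $2`,
                lastID, pageSize)
            if err != nil {
                return err
            }
            var page []string
            for rows.Next() {
                var id int
                var name string
                if err := rows.Scan(&id, &name); err != nil {
                    rows.Close()
                    return err
                }
                lastID = id
                page = append(page, name)
            }
            rows.Close()
            if err := rows.Err(); err != nil {
                return err
            }
            if len(page) == 0 {
                return nil // no more rows
            }
            if err := handle(page); err != nil {
                return err
            }
        }
    }

The cursor lives and dies inside this function; everything downstream only ever sees plain slices.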
Very cool! Using “object storage for primary durability” seems difficult for any OLTP workload that’s latency-sensitive - there’s a fundamental tradeoff between larger batch sizes to control write costs and smaller batches to reduce latency. This hurts OLTP workloads especially badly because applications often make multiple small writes to serve a single user-facing request. How does EloqKV navigate this tradeoff?
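Back-of-envelope, assuming roughly $0.005 per 1,000 S3 PUTs (standard us-east-1 pricing, if I have it right):

    10,000 writes/s, 1 write per PUT:    ~864M PUTs/day -> ~$4,300/day
    10,000 writes/s, 100 writes per PUT: ~8.6M PUTs/day -> ~$43/day,
        but every write now waits up to a full batch interval before
        it's durable and can be acknowledged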
Also, I’d love to see:
- A benchmark that digs into latency, throughput, and cost for a single workload. Most of the benchmarks I saw are throughput-only.
- Some explanation of the “patented 1PC protocol.” Your website [1] suggests that you treat single EBS volumes as high-durability, replicated storage, which seems unusual to me - apart from io2 volumes, EBS is designed for less than 3 nines of durability [2].
These are great questions. I appreciate you carefully reading through the documents. For the first question, we have detailed benchmarks of EloqKV, which has the same architecture (but with a Redis API), on our blog, and we will soon publish more about the performance characteristics of EloqDoc. Overall, we achieve about the same performance as with a local NVMe SSD, even when we use S3 as the primary storage, and the performance often exceeds the original database implementation (in the case of EloqDoc, the original MongoDB).
As for the durability part, our key innovation is to split state into three parts: in memory, in the WAL, and in data storage. We use a small EBS volume for the WAL, and data storage is on S3. So durability is guaranteed by (Storage AND (WAL OR Mem)): unless Storage (S3) fails, or both the WAL (i.e. the EBS volume is lost) AND Mem fail (i.e. the node crashes), persistence is guaranteed. You can see the explanation in [1]
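In code form, the invariant is just this (simplified illustration, not our actual implementation):

    package main

    import "fmt"

    // Acknowledged data survives as long as the checkpointed state in
    // S3 survives AND at least one of {WAL on EBS, node memory} still
    // holds the tail of recent writes.
    type parts struct {
        mem     bool // recent writes in memory (lost on node crash)
        wal     bool // recent writes in the WAL on a small EBS volume
        storage bool // checkpointed data in S3
    }

    // durable encodes Storage AND (WAL OR Mem).
    func (p parts) durable() bool {
        return p.storage && (p.wal || p.mem)
    }

    func main() {
        // Node crash: memory gone, but the WAL can be replayed.
        fmt.Println(parts{mem: false, wal: true, storage: true}.durable()) // true
        // Crash AND EBS lost: the tail of recent writes is gone.
        fmt.Println(parts{mem: false, wal: false, storage: true}.durable()) // false
    }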
I'm no expert in corporate finance, but whether or not OpenAI goes bankrupt feels like the wrong question to me (in thinking about this loan). Wouldn't a bank be more concerned with (1) the likelihood that OpenAI can raise another round of financing from which to repay the bank, and (2) the likelihood that OpenAI will have assets worth >10B when/if they do eventually declare bankruptcy?
The bank's risk seems quite a bit lower than the VC's risk.
Also, 5% would be a ridiculously low rate for this sort of corporate finance. You would expect more like 8-12%, I think?
Plus, the post seems to include only one year of interest.
Unless we know the terms, I don't think we can necessarily calculate EV from JP Morgan's perspective. I would say that they aren't usually carelessly giving away money though... They probably have terms where they can get out early if OpenAI's position weakens etc.
I agree but had different questions. TFA mentions the consideration of whether failure cases are correlated, but of course if OpenAI wins big, there's a good chance this directly or indirectly creates much instability and uncertainty in many other loans/partners. What's the EV on whether that is net-positive considering this is a loan at 5% and not an investment?
On the other side, if OpenAI crashes hard, is it really such a sure thing that Microsoft will be on the hook to pay off its debts? Setting aside whatever the lawyers could argue about in a post-mortem, are they even obligated to keep their current stake / can they not just divest / sell / otherwise cut their losses if the writing is on the wall?
JPMorgan Chase might not mind ending up owning much of OpenAI's IP if it defaults on the loan. Banks have largely been locked out of making equity investments in OpenAI so far, so perhaps they see this as the next best alternative?
TFA goes into this in some depth: there's an option to subscribe for one month with a one-time payment. After the month is up, your account automatically reverts to the free plan and you get an email with your fonts attached.
The docs explicitly state that clusters do not provide strong consistency and can lose acknowledged data.