This doesn’t seem to be what was suggested? Using an up-to-date database that has had its distributed mechanisms tested - even if the test was a few versions back - is a lot better than something that uses completely untested bespoke algos.

As for verifying Jepsen, I’m not entirely sure what you mean? It’s a non-deterministic test suite that reports the infractions it finds, and the infractions it reports are obviously correct to anyone in the industry who works on this stuff.

Passing a Jepsen test doesn’t prove your system is safe, and nobody involved with Jepsen has claimed that anywhere I’ve seen.
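To make that concrete, here’s a toy Python sketch of the idea (Jepsen itself is a Clojure framework, and nothing below is its real API): run concurrent clients against a system with an injected fault, record the history, then check the history for safety violations afterwards. Finding a violation proves a bug; a clean run proves nothing, because the interleavings and faults are random.

    import random
    import threading

    class FlakyRegister:
        """Toy register whose reads are occasionally corrupted (the injected fault)."""
        def __init__(self):
            self._value = 0
            self._lock = threading.Lock()

        def write(self, v):
            with self._lock:
                self._value = v

        def read(self):
            with self._lock:
                if random.random() < 0.02:  # fault: return a value nobody wrote
                    return -random.randint(1, 10**6)
                return self._value

    def run_once(clients=4, ops=100):
        reg = FlakyRegister()
        history = []  # (client id, op, value) tuples
        lock = threading.Lock()

        def client(cid):
            for i in range(ops):
                if random.random() < 0.5:
                    v = cid * 10_000 + i
                    reg.write(v)
                    rec = (cid, "write", v)
                else:
                    rec = (cid, "read", reg.read())
                with lock:
                    history.append(rec)

        threads = [threading.Thread(target=client, args=(c,)) for c in range(clients)]
        for t in threads: t.start()
        for t in threads: t.join()

        # Safety check on the recorded history: every read must return the
        # initial value (0) or a value some client actually wrote.
        written = {v for _, op, v in history if op == "write"} | {0}
        return [rec for rec in history if rec[1] == "read" and rec[2] not in written]

    violations = run_once()
    print(f"{len(violations)} violation(s) this run; an empty result is not proof of safety")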




As a fork, it is essentially a new version of an already Jepsen-tested system. That meets your definition of "a lot better" than "untested bespoke algos".


I can’t tell if you are trolling me, but obviously “we made this single-threaded thing multithreaded” implies new algorithms that need new test runs.
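For what it’s worth, here’s a hypothetical Python illustration of why that matters: a read-modify-write that is trivially correct single-threaded silently loses updates once the same logic runs from several threads, which is exactly the kind of new failure mode that needs its own test runs rather than inherited ones.

    import threading

    counter = 0

    def bump(n):
        global counter
        for _ in range(n):
            tmp = counter       # fine when only one thread ever runs this
            counter = tmp + 1   # not atomic across threads: updates can be lost

    threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()

    # Expected 400000; usually prints less, because concurrent bumps overwrite
    # each other. The single-threaded version of the same code never fails.
    print(counter)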


I am just evaluating your logic in the form of a reply, so you can see it at work. The same issue you describe happens often across multiple versions of a system.


The parts of Redis that were Jepsen-stressed are Redis-Raft, more recently, and Redis Sentinel, where the numerous flaws found were pretty much summed up as “as designed”. No part of KeyDB has gone through a Jepsen-style audit; its additions are all untested bespoke algos.


Chrome is based on a fork of WebKit. Are you going to trust it because Safari passed a review before the fork?


Ask one level higher in the thread.



