This doesn’t seem to be what was suggested? Using an up-to-date database that has had its distributed mechanisms tested - even if the test was a few versions back - is a lot better than something that uses completely untested bespoke algos.
As for verifying Jepsen, I’m not entirely sure what you mean? It’s a non-deterministic test suite that reports the infractions it finds; the infractions found are obviously correct to anyone in the industry who works on this stuff.
Passing a Jepsen test doesn’t prove your system is safe, and nobody involved with Jepsen has claimed that anywhere I’ve seen.
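To make the asymmetry concrete, here's a toy sketch (hypothetical, not Jepsen's actual Clojure API) of the kind of check such a suite performs: record a history of register operations, then search for a sequential order that explains the observed results. Finding no valid order proves a bug; finding one proves nothing, because the schedule that triggers the bug may simply not have occurred in that run. Real checkers like Jepsen's are also stricter, constraining the search by real-time invocation/response bounds.

```python
# Toy consistency check over a single register (illustrative only; real
# Jepsen checkers such as Knossos additionally respect real-time order).
from itertools import permutations

def explainable(history):
    """history: list of (op, value) pairs observed by clients, e.g.
    [("write", 1), ("read", 1)]. Return True if SOME sequential
    ordering of the ops is consistent with register semantics
    (a read returns the most recent write, or None before any write)."""
    for order in permutations(history):
        reg = None
        ok = True
        for op, val in order:
            if op == "write":
                reg = val
            elif op == "read" and reg != val:
                ok = False  # this ordering can't explain the read
                break
        if ok:
            return True  # one valid ordering exists: no violation *found*
    return False  # no ordering works: provable consistency violation

# A passing run only means no violation surfaced in the histories tested:
explainable([("write", 1), ("read", 1)])  # no infraction found
explainable([("write", 1), ("read", 2)])  # provable infraction
```

The asymmetry is the point: `explainable(...) == False` is hard evidence of a bug, while `True` just means this particular non-deterministic run didn't catch one.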
As a fork, it is essentially a new version of an already Jepsen-tested system. That meets your definition of "a lot better" than "untested bespoke algos".
I am just evaluating your logic in the form of a reply, so you can see it at work. The same issue you describe happens often across multiple versions of a system.
The parts of Redis that were Jepsen-stressed are Redis-Raft more recently, and Redis Sentinel, for which the numerous flaws found were pretty much summed up as "works as designed". No part of KeyDB has gone through a Jepsen-style audit; all of its distributed mechanisms are untested bespoke algos.