Nice to see you are serious about testing; however, this PR page has something that strikes me as strange:
As a rough measure of the effectiveness of our simulation testing framework, Quicksand has only ever found one bug.
To me, this means your test is useless, not that your software is reliable.
Generally speaking, simulating hardware faults is not very useful, because you're really testing the software layer below you: you don't talk to the hardware directly, you talk to the kernel, which talks to the hardware.
(Unless your filesystem and network code is in kernel land in which case I will retract this comment and go die in a dark corner of Earth. ;) )
Quicksand (the physical fault testing environment) doesn't find our bugs because our simulation-based testing finds them first!
Our deterministic simulations cover all of the failure cases that Quicksand can test, and many more. The main reason we built Quicksand is precisely to find out whether our assumptions about the layers below us are wrong, since the simulator can't test its own assumptions. For example, there are hardware/software combinations out there where fsync() can't be trusted...
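To make the fsync() point concrete, here's a minimal sketch (my own illustration, not the project's code; the "journal" filename is made up) of the standard POSIX durable-write pattern. A simulator has to assume fsync() does what it promises; Quicksand exists to catch the stacks where it doesn't:

    /* Sketch of the classic durable-write pattern whose underlying assumption
     * hardware-level testing is meant to check: this code is only as correct
     * as the kernel/firmware's handling of fsync(). */
    #define _POSIX_C_SOURCE 200809L
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char *data = "committed state\n";

        /* Write the record and ask the kernel to flush it to stable storage. */
        int fd = open("journal", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (write(fd, data, strlen(data)) != (ssize_t)strlen(data)) { perror("write"); return 1; }
        /* fsync() can return 0 and still not persist on a lying hw/sw stack */
        if (fsync(fd) != 0) { perror("fsync"); return 1; }
        close(fd);

        /* fsync the containing directory too, so the file's name is durable. */
        int dfd = open(".", O_RDONLY | O_DIRECTORY);
        if (dfd < 0) { perror("open ."); return 1; }
        if (fsync(dfd) != 0) { perror("fsync ."); return 1; }
        close(dfd);
        return 0;
    }

A deterministic simulation can inject every failure this code handles, but it can't detect a storage stack that acknowledges the flush without performing it; only running on real hardware can.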