I can't imagine running (or investing in) a software-based company here without also having an automatic model-checking layer for verifying all runs + 24/7 monitoring for disabling any live contract. If you're going to put $10M+, years of your life, and who knows how much customer money into this, why not spend $500K of it so you're running with the blockchain equiv of CI testing? You'd be able to deploy faster, with more confidence, less stress, and fewer of these weird midnight Europe phone calls.
Viable model checkers for basic software contracts have existed since the '80s, and the modern incarnations are insanely powerful (Z3, ...) + quite approachable (Rosette, ...). They're used to tackle software verification problems orders of magnitude harder than "money can only go from here to there in this tiny software contract": race detection in distributed file systems, bugs in hardware circuits, security holes in big JavaScript libraries, etc. I think of these same not-very-secret tools every time I see one of these articles, and yet the engineering fails keep happening.
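To make it concrete, the cheap end of this is tiny. Here's a hypothetical sketch using Z3's Python bindings (the z3-solver package) against a made-up transfer guard with the classic unsigned-underflow bug; the solver hands back a concrete exploit input:

    from z3 import BitVec, Solver, UGE, ULT, sat

    W = 256  # EVM-style word width; toy example, not a real contract
    bal, amt = BitVec('bal', W), BitVec('amt', W)

    s = Solver()
    s.add(UGE(bal - amt, 0))  # buggy require(balance - amount >= 0): always true in unsigned math
    s.add(ULT(bal, amt))      # ...yet the requested amount exceeds the balance
    if s.check() == sat:
        print('exploit input:', s.model())  # e.g., bal = 0, amt = 1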
A few teams deploy tech here, including some built on the above tools, but it seems like most do not. I'd say mind-blowing, but at this point... mind-numbing?
I do appreciate the author being frank about how bad the status quo is.
EDIT: To give a sense of this -- the same people will talk about meticulous cold-storage key exchanges with someone always there to watch, driving into the desert to bootstrap secrets, and then, for their actual operations, deploy unverified contracts.
I hate to be that guy, but you are vastly underestimating the challenge of formally verifying these software systems. Blockchains are highly adversarial, open source, and innovating constantly, which means nobody has ever tried to verify that type of system before.
Model checkers can tell you things like 'there are no underflows' and 'these two pieces of code are identical', but if you want to know whether there is no arbitrage or front-running, you're well past the capabilities of the state of the art. It's not merely a matter of spending $500k on CI and auditing.
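(To be fair, the first category really is cheap. A hypothetical few-line sketch with Z3's Python bindings, proving an 'optimized' shift identical to the original multiply for all 2^256 inputs:)

    from z3 import BitVec, Solver, unsat

    x = BitVec('x', 256)    # illustrative only: compare two versions of the same computation
    s = Solver()
    s.add(x * 2 != x << 1)  # ask for any input where the two versions disagree
    print('identical' if s.check() == unsat else 'differ: %s' % s.model())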
And then you've got a separate issue, which is that the space is super competitive and moves extremely fast. If you spend 6 weeks getting your new contract audited, you may well miss the window where people will care about the project you launched. I don't think this is a healthy culture, but it is one that many teams are trying to compete in. And therefore they ARE willing to bet millions of dollars without taking any time to audit, because the expected value of deploying faster is higher than the expected value of deploying more safely.
For projects that are comfortable moving more slowly, formal verification IS a big focus, and the cryptocurrency industry has been a material driving force behind many security-related technologies such as reproducible builds (Gitian), reproducible bootstrapping (Guix), and software verification methodologies.
For background, I've built verifiers for harder languages, reviewed papers for crypto systems, seen the inside of crypto operations & crypto security startups, and helped build software for adversarial, investigative, & high-performance scenarios. I'm not an expert in blockchain stuff, but I'm also not unfamiliar with the software challenges.
* I would agree that paying consultants to audit contracts is prohibitively expensive. It's the equivalent of paying pen testers to do your unit testing & security engineering -- that's a costly way to do your basics.
* I disagree that model checkers can't check for stuff like front-running. It's not textbook, but close: the first papers on model checking were specifically about temporal logic for ordering issues like this. That was ~35 years ago! Contracts are similar in size, and both computers + solvers have gotten exponentially better since. For my day job, we do TFLOPS for $0.20/hr, in Python. (See the sketch after this list.)
* Reproducible builds, bootstrapping, etc. are real... but they're the 20%, skipping the 80% I'm talking about: verifiable VM IRs + verifiable contract-language subsets + contracts verified against them. Yes, we've seen software supply-chain attacks against some projects. More than that? Buggy contracts, buggy contract libs, & buggy blockchains.
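To ground the front-running point, here's a toy sketch: a made-up market with linear price impact, where the solver itself produces the ordering attack. All names and the pricing rule are illustrative, not any real contract:

    from z3 import Int, Solver, sat

    def buy(price, qty):
        # toy linear price impact: pay qty at the current price, price moves up
        return price + qty, price * qty  # (new price, cost)

    p, q_att, q_vic = Int('p'), Int('q_att'), Int('q_vic')

    p1, _ = buy(p, q_att)              # ordering 1: attacker buys first...
    _, cost_frontrun = buy(p1, q_vic)  # ...then the victim
    _, cost_fair = buy(p, q_vic)       # ordering 2: victim buys first

    s = Solver()
    s.add(p > 0, q_att > 0, q_vic > 0)
    s.add(cost_frontrun > cost_fair)   # any ordering where the front-run hurts the victim?
    if s.check() == sat:
        print('front-running witness:', s.model())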
I get that crypto startup people don't know this stuff, but you can hire 1-2 devs (= $500K) who can. Even if verifying against full abstraction is likely out of reach due to the security mess that is the ETH VM & friends, chiseling out subsets and running the model-checking equiv of fuzzers isn't hard. The status quo of not doing it makes it look like an industry of folks not running unit tests before pushing to prod. (See: article.) It's not that hard. As more money gets into any company here, my expectations go higher, even if that industry's haven't.
-- Modeling: You start with basics like using the small universe assumption to bound checking to X transactions. I'd expect most front-running to show up as small cycles here, so the typical case is a super small X. Later, you might get into a time-cost semantics to better tune what you consider interesting, but almost no one in the crypto space is at that level of modeling maturity. I'd predict a team's time is way better spent building up a stdlib of contract checks, verified contract helpers, & whitebox attack heuristics/guides.
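At its smallest, that's just exhaustive enumeration over the bounded universe -- the model-checking equiv of a fuzzer. A throwaway sketch with made-up numbers:

    from itertools import permutations

    def run(txs):
        # same toy market as above: each buy pays the current price, which then moves
        price, paid = 1, {}
        for who, qty in txs:
            paid[who] = paid.get(who, 0) + price * qty
            price += qty
        return paid

    txs = [('victim', 3), ('attacker', 1)]  # X = 2 transactions
    costs = {run(list(order))['victim'] for order in permutations(txs)}
    if len(costs) > 1:
        print('victim cost depends on ordering:', sorted(costs))  # [3, 6]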
-- Modeling II: Also, in verification, it's way better (e.g., realizes more of the ROI) to verify the program has the properties you want ("money goes from a->b without getting stuck"). You can dream up individual attacks and model those one by one ("front-running where ..."), but then you potentially miss some, or some aspect of one. That's basically the difference between verification and testing. You still do stuff like check sample scenarios & individual attacks, but that's more about testing the verification conditions & model fidelity.
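In solver terms: assert the negation of the property and read unsat as "holds for all inputs". A minimal sketch for the a->b case, this time with a correct require(amount <= balance) guard:

    from z3 import BitVec, Solver, ULE, UGT, unsat

    W = 256  # illustrative toy transfer, not a real contract
    bal, amt = BitVec('bal', W), BitVec('amt', W)

    s = Solver()
    s.add(ULE(amt, bal))        # corrected guard: require(amount <= balance)
    s.add(UGT(bal - amt, bal))  # negated property: the balance "grows" after a debit
    print('holds for all inputs' if s.check() == unsat else 'bug: %s' % s.model())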
-- Fixes: A good (while still cheap & easy) checker gives you summarized examples of attacks. Likewise, it's interactive, so you can tune what you consider in/out of scope. More R&D-level verifiers suggest patches (verification and synthesis are two sides of the same coin), but that's not necessary. If your idea sucks or the attack is unavoidable, the verifier isn't the problem, and if you decide to proceed anyway with the now proven-bad idea, you can at least price the risk in.
In the article the attacker simply pays a higher fee to have their call executed first. How does verification help you in this case?
It sounds a bit like it would just tell you that your design sucks and you need to change it, but that's not really helpful if it does that for all designs you can come up with.
I'm not convinced you can spend 500k and make the problem go away. If it turns out the problem can only be fixed by changing the underlying platform, rather than your contracts, you will spend years talking to stakeholders and advocating for the necessary changes. Which you still have to come up with yourself. Unless your solver somehow finds the correct solution?
Another reason why that budget is suspect is that you'd have to develop most of that from scratch. There certainly isn't an existing set of mature tools like there might be for verifying properties of C++ code.
Unless you make the problem go away, you are not going to be better off hiring people. Front-runners let you know there is a problem just as well as a verification consultant does.
Proving that an attack is unavoidable might at least save some time. Proving that a specific solution doesn't work doesn't really help you find the correct one (?)
Nope! I actually hedged by ~10X :) In reality, I'd advocate building successively better verifiers as more & more money flows through, with the first solid prototype being $20K-$50K.
One good MS/PhD student in the verification community can build a decent toolkit over a summer (= $20-50K). The reason is that tools like those I mentioned earlier are intentionally language-agnostic and part of a 15+ year movement of building out lightweight generic toolkits for this stuff.
Think of it like a CI system: you get most bang for the buck by building out basic unit tests early on, and as your system becomes worth more $, get into integration testing, and one day, chaos engineering. Same for different levels of verifiers.
* Some problems are inherent to all designs... yet you're better off hiring people? That doesn't make sense to me. What are the expensive per-contract outside people going to do if they can't fix the bug? That's the worst of both worlds!
When a verifier flags the issue, if a team can't figure out a fix, at least they can now mitigate the risk (e.g., shut it off, only put in so much money, get insurance, hedge/diversify, monitor for the exploit happening, ...).
My broader statement is that verification tech is increasingly accessible, and building out some of it for an org deploying contracts is similar to a utility co building out monitoring or a software shop building out CI. Not for the weekend coder, but it should be basic engineering for a professional shop.
If you can't fix it, maybe you shouldn't deploy it. It's like knowing you have a bug that will result in a 50/50 chance of losing customers' money. It would be irresponsible to deploy code like that.
There should actually be a big market for formal verification tools: malicious users can use them to find and exploit buggy contracts.
I imagine this will happen after the low-hanging fruit (the front-runners described in this article) is gone.
Providing formal security proofs may be forever out of reach, but if the tools get expressive enough, eventually it'll be a battle of who can throw the most CPU at the solver, to the point where no cost incentive remains.
Either way, it will spur developers to use these tools before their attackers do.
Runtime Verification (no affiliation, I just know some people who have worked with or for them) is working on tools and providing services for things like this.
> the same people will talk about meticulous cold-storage key exchanges with someone always there to watch, driving into the desert to bootstrap secrets, and then, for their actual operations, deploy unverified contracts
Although they, or their predecessors, didn't necessarily do the former in the first few years of Bitcoin. Lots of exchanges, including the very biggest, were compromised and robbed.
Maybe there's a cycle where particularly terrible outcomes help to create a new consensus on basic safety precautions.
You can validate your contract all you want, but if the order of execution matters, you're still subject to the whims of the underlying blockchain.
That's something you can verify and fix, or otherwise decide to include in your risk calc.
But yep, after looking at the hoops verification folks are having to jump through to run safe contracts on insecure blockchain VMs, maybe doing something else with your $ can also make sense.