Part of the linked paper's point is that because computer systems involve many "levels of failure", even a more careful programmer cannot usually rule out every class of errors. The DAO hack, yes, possibly.
> Part of the linked paper's point is that because computer systems involve many "levels of failure", even a more careful programmer cannot usually rule out every class of errors.
But -- and this is the crucial point -- that doesn't mean we shouldn't strive to be better than we are now.
The impossibility of perfectly modeling the world hasn't prevented us from making enormous progress on software safety and security over the past 30-odd years. Today, if you care to, you can easily write code that is free of buffer overflows and command injection attacks. In the '00s, SQL injection vulnerabilities were extremely easy to find; now they're comparatively rare.
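To make the SQL injection point concrete, here's a minimal sketch of the vulnerable pattern next to the fix, using Python's stdlib sqlite3 as a stand-in for any SQL driver (the table and attacker input are invented for illustration):

```python
import sqlite3

# Toy database: one table, one row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

attacker_input = "' OR '1'='1"

# Vulnerable: string interpolation lets the input rewrite the query,
# so the WHERE clause becomes a tautology and every row comes back.
vulnerable = f"SELECT name FROM users WHERE name = '{attacker_input}'"
print(conn.execute(vulnerable).fetchall())  # → [('alice',)]

# Safe: a parameterized query treats the input strictly as data,
# never as SQL, so no row matches the literal string.
safe = "SELECT name FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # → []
```

This is exactly the kind of known attack class that tooling (linters, ORMs, taint analysis) has made easy to rule out wholesale.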
Smith's paper tells us that code-as-law is probably a bad idea. But it is not -- and wasn't intended to be -- an indictment of static analysis or model-based engineering more generally. Every structural engineer knows the difference between a bridge and a model of a bridge; a paper pointing out the difference without substantively critiquing the practical usefulness of a particular type of model would probably elicit eye-rolls. I'm not sure why these mundane observations receive such attention in computer science. Maybe because with software the model and the system look so similar.
But to be sure, the impossibility of codifying human morality is a pretty lame excuse for failing to use static analysis tools or better languages or quality frameworks to prevent known classes of attacks. So I doubt that's what Smith is advocating.
> How many "more careful" Ethereum programmers also check the compiler for correctness?
Yes, we should obviously check compilers for correctness, and we're making slow but sure progress toward a world where our critical infrastructure comes with strong -- though of course never perfect -- correctness guarantees.
> But -- and this is the crucial point -- that doesn't mean we shouldn't strive to be better than we are now.
Agreed! And I also agree that we are really making progress. But we're far from a world where people can crank out provably correct code (let alone the proofs).
> I'm not sure why these mundane observations receive such attention in computer science. Maybe because with software the model and the system look so similar.
Excellent point, and yes, I think that is the problem. Modeling the world in code is not much different from... er... modeling the world in code :-)
> Part of the linked paper's point is that because computer systems involve many "levels of failure", even a more careful programmer cannot usually rule out every class of errors.
The programmer's only obligation is to comply with the specification, which is a finite syntactic entity. If the specification fails to capture what the user really wants, or compliance is undecidable, or <insert problem beyond the programmer's control here>, then the fault lies with the specification writer, not the programmer.
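A minimal sketch of that division of responsibility (the `meets_spec` checker and the sort example are hypothetical, chosen only to illustrate the point):

```python
from collections import Counter

def meets_spec(xs, ys):
    """The specification, as a finite syntactic artifact:
    ys must be xs rearranged into ascending order."""
    same_multiset = Counter(xs) == Counter(ys)
    ascending = all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))
    return same_multiset and ascending

def my_sort(xs):
    # The programmer's artifact. It complies with meets_spec...
    return sorted(xs)

data = [3, 1, 2]
assert meets_spec(data, my_sort(data))  # compliance holds
# ...but if the user actually wanted a *descending* sort, the fault
# lies with whoever wrote the spec, not with the programmer who met it.
```

The spec-vs-world gap Smith describes lives entirely in the docstring above, not in the code that satisfies it.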
But, for example, people have also lost money due to bugs in the Solidity compiler: https://np.reddit.com/r/ethtrader/comments/5foa5p/daily_disc... How many "more careful" Ethereum programmers also check the compiler for correctness?
Another paper in this vein is James Fetzer's "Program Verification: The Very Idea". http://lore.ua.ac.be/Teaching/SSPEC2LIC/critique2.pdf