It's clearly true that a language that forces an explicit ending to an if block would have prevented the goto fail bug.
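For reference, a minimal sketch of the shape of that bug (the names and stubs here are illustrative, not the actual SecureTransport code; the hash-update calls are reduced to stubs that always succeed):

    #include <stdio.h>

    static int hash_update(void) { return 0; }   /* stub: always succeeds */

    /* Returns 0 if the signature "verifies", nonzero otherwise. */
    static int verify(int signature_is_valid)
    {
        int err;

        if ((err = hash_update()) != 0)
            goto fail;
            goto fail;                      /* duplicated line: always taken */
        if ((err = hash_update()) != 0)     /* never reached */
            goto fail;

        /* the real comparison: skipped entirely because of the bug */
        err = signature_is_valid ? 0 : -1;

    fail:
        return err;                         /* err is still 0: "success" */
    }

    int main(void)
    {
        printf("verify(bogus signature) = %d\n", verify(0));  /* prints 0 */
        return 0;
    }

Because indentation has no meaning in C, the second goto always executes, so everything after it, including the signature comparison, is dead code. A language requiring braces or an explicit end keyword makes that particular mistake unrepresentable.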
But is there any actual evidence that code written in modern languages has fewer bugs overall? Or is it all, "let's focus on this one example"?
As another commenter mentioned, the goto fail bug would have been utterly trivially caught by any unit test that even just tested that it reached the non-error case in any way (you don't even need to test the correctness of the non-error code).
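A sketch of the kind of test I mean, written against the verify() stub above: it only checks that the comparison code is reached and has any effect at all, and with the duplicated goto in place the second assertion fires, because a bogus signature is never rejected.

    #include <assert.h>

    void test_verify_reaches_comparison(void)
    {
        assert(verify(1) == 0);   /* valid signature accepted */
        assert(verify(0) != 0);   /* bogus signature rejected -- fails here */
    }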
I would like to see data before I believe that "errors that would have been prevented by non-C semantics" constitutes a significant fraction of all bugs, or that they aren't just replaced by bugs unique to the semantics of whatever replacement language you're using.
The modern-language communities tend to be more open to static code analysers and unit tests than the C community, even though C has had lint since its early days.