My point is that not all bugs lead to data loss or loss of privacy. But yes, I agree.
That said, I feel that we're now talking about memory safety vs semantic safety.
We can certainly prove a program is entirely memory safe. Rust's type system has been proven sound, so the remaining work is to prove the unsafe code safe (an area of active research).
There is no way to prove all semantics of a program (Rice's theorem). Therefore I would argue that a bug due to a semantic issue is not necessarily negligent, whereas we could easily see memory safety issues from using C as a case of negligence.
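To make the distinction concrete, here's a minimal Rust sketch (purely illustrative, not from any real codebase): the borrow checker rejects the memory-safety bug at compile time, while the semantic bug compiles and runs without complaint.

    fn main() {
        let v = vec![1, 2, 3];
        let first = &v[0];
        // Memory-safety bug: freeing `v` while `first` still borrows it.
        // Uncommenting the next line is a compile error, not a runtime crash:
        // drop(v); // error[E0505]: cannot move out of `v` because it is borrowed
        println!("first: {first}");

        // Semantic bug: compiles and runs fine, but computes the wrong thing.
        // The type system can't know we *meant* sum / len (Rice's theorem).
        let average: i32 = v.iter().sum();
        println!("average: {average}"); // prints 6, not 2
    }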
But in terms of liability they would likely both fall into the same bucket.
Negligence in the legal sense of the word revolves around care. If someone is maintaining some codebase and causes a memory-safety-related bug, you would have to look at that bug in isolation and at what the person did to avoid it. Saying 'you should have rewritten this in Rust in order to avoid liability' is not a standard of care that anybody will ever be held to.
So the ideal as you perceive it is so far from realistic that I do not believe pursuing that road is fruitful.
What we can do is classify bugs (or rather, the results of bugs: the symptoms that cause quantifiable losses) in general as a source of (limited) liability. This would put out of business those parties that are unable to get their house in order, without mandating one technology over another (a mandate would definitely be seen as giving certain parties a competitive edge, something I don't think will ever be done).
So that's why I believe that solution is the better one.
But in a perfect world your solution would - obviously - be the better one; unfortunately, we live in this one.
> Saying 'you should have rewritten this in Rust in order to avoid liability' is not a standard of care that anybody will ever be held to.
Why is 'you should not have exposed C code to the internet' an unreasonable basis for care?
I'm not saying your approach is wrong; I think they're not mutually exclusive - your solution would provide an incentive not to use C in a technology-agnostic way. But at some point isn't it just irresponsible to write critical infrastructure, or code that exposes user information, in a language that has historically been a disaster for security?
> Why is 'you should not have exposed C code to the internet' an unreasonable basis for care?
Because it does not mesh with reality. There are hundreds of millions of lines of C code exposed to the internet in all layers of the stack. So if you start off with a hardcore position like that, you virtually ensure that your plan will not be adopted.
It's the legacy that is the problem, not the future.