There's a benefit to types beyond correctness: they help make code understandable locally. And that speeds up development, by making changes to code much faster.
I think this might be the real value of formal methods. Not correctness, but speed of iteration. That does affect what kinds of formal methods do and don't help. Which really matches what the article said about delivering value with just a small amount of effort.
Often, a program can accumulate so many types that little logic is present in the code proper, but now you need an encyclopedic knowledge of the types and all their dependencies to work with them, since the complexity has just been shifted and not removed.
I'm mainly thinking of the gnarly template errors you'll see in C++, or trait errors in Rust, caused by heavy use of type-based logic. Poor error formatting can exacerbate this, but ultimately you still need an understanding of all the type constraints to do anything, and documentation is often unhelpful since it's diced up into a dozen little pieces.
That is to say, the local optimum of "making changes to code much faster" can be reached pretty early in the process of increasing specification, and many existing projects (IMHO) have overshot that mark to little benefit. So I'm inclined to be skeptical about FM's value proposition, unless it can be applied in a very progressive manner, in places where extra specification makes the most sense.
I've been looking at incremental / partial formal verification based on 'assert'-like statements. Kani for Rust has an interface close to what I imagine. But I'd like to use deductive verification (something like Hoare / separation logic) instead.
This means the developer gets to decide what is important enough to try to assert. It's likely the developer will need to add sub-lemmas to get the assert proven, and I suspect some of these will need to be in the form of 'assumptions / requirements' at the start of a function, which then need to be confirmed at all call-sites.
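A minimal sketch of what I mean, in plain Rust (the function name and property are made up for illustration): the assert at the top of the function plays the role of a requirement that a deductive verifier would turn into an obligation at every call-site, and the assert at the call-site is the developer-chosen property worth proving. Here they are only checked at runtime; a tool in the Kani mold would instead try to discharge them for all inputs.

```rust
/// The leading assert states a precondition ("requires: non-empty").
/// A deductive verifier would ask each caller to prove it holds.
fn average(xs: &[i64]) -> i64 {
    assert!(!xs.is_empty(), "requires: non-empty slice");
    let sum: i64 = xs.iter().sum();
    sum / xs.len() as i64
}

fn main() {
    let data = [2, 4, 6];
    // Call-site obligation: `data` is non-empty, so the precondition holds.
    let avg = average(&data);
    // Developer-chosen property: the average lies between min and max.
    assert!(avg >= 2 && avg <= 6);
    println!("avg = {avg}");
}
```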
A big downside here is that some things you'd like to prove often don't fit into boolean expressions in the underlying language. For example, universal / existential quantifiers aren't available (but are implicit in the formal prover, so can be hacked into). You also don't have predicates for 'this address has been allocated / initialized' nor any other predicates that let you say a value is constant. Similarly, there are no temporal quantifiers available.
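To make the quantifier limitation concrete, here's a hedged sketch of the usual workaround: bounded "forall" and "exists" encoded as ordinary boolean expressions via `all` / `any` over a finite range (the function names are mine, not from any tool). This works for finite data, but unbounded quantification has no direct encoding in the host language's booleans.

```rust
// forall i in 1..len: xs[i-1] <= xs[i], as a plain boolean expression
fn is_sorted(xs: &[i32]) -> bool {
    (1..xs.len()).all(|i| xs[i - 1] <= xs[i])
}

// exists i: xs[i] == 0
fn has_zero(xs: &[i32]) -> bool {
    xs.iter().any(|&x| x == 0)
}

fn main() {
    assert!(is_sorted(&[1, 2, 2, 5]));
    assert!(!is_sorted(&[3, 1]));
    assert!(has_zero(&[4, 0, 7]));
}
```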
I'm hoping that the above is not too limiting in practice. The fact that developers already use 'assert' statements does suggest that normal boolean expressions are already valuable.
> Your last point is what I was trying to get at — eg, reasoning just about your math expressions.
I think it's borderline criminal that programming languages haven't added units of measure to their type systems yet, and then required every numerical type to be associated with a unit. We've known how to do this for decades now. All numerical code has latent units: either a specific unit, or polymorphic over the unit (like scalar transforms). Even memory copies and moves operate on byte units.
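For what it's worth, you can retrofit a crude version of this today with phantom type parameters; the unit names below are invented for the example, and a language with built-in units of measure (F# is the usual citation) does this far more ergonomically, including unit arithmetic like metres/second.

```rust
use std::marker::PhantomData;
use std::ops::Add;

// A numeric value tagged with a zero-sized unit type.
struct Qty<U>(f64, PhantomData<U>);

struct Meters;
struct Seconds;

impl<U> Qty<U> {
    fn new(v: f64) -> Self {
        Qty(v, PhantomData)
    }
}

// Addition is only defined for matching units, so adding metres to
// seconds is a compile-time type error rather than a latent bug.
impl<U> Add for Qty<U> {
    type Output = Qty<U>;
    fn add(self, rhs: Self) -> Self {
        Qty(self.0 + rhs.0, PhantomData)
    }
}

fn main() {
    let a = Qty::<Meters>::new(1.5);
    let b = Qty::<Meters>::new(2.5);
    let sum = a + b;
    assert!(sum.0 == 4.0);
    // let t = Qty::<Seconds>::new(1.0);
    // let bad = sum + t; // does not compile: mismatched unit types
    println!("{} m", sum.0);
}
```

The obvious gap is exactly the one mentioned above: without language support you have to hand-write every unit combination, and nothing forces other numeric code to participate.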
Great news if true, but I'm not holding my breath. Just adding units support doesn't quite go far enough, but C++'s backwards compatibility prevents it from enforcing the stronger requirement I mentioned.