Coq-of-rust: Formal verification tool for Rust (github.com/formal-land)
159 points by todsacerdoti 32 days ago | 42 comments



> Even if Rust's type system prevents many mistakes, including memory errors, the code is still not immune to vulnerabilities, such as unexpected panics or wrongly implemented business rules.

> The way to go further is to mathematically prove that it is bug-free: this is named "formal verification" and what coq-of-rust proposes! This is the only way to ensure your code contains no bugs or vulnerabilities, even against state-level actors.

I like formal verification. But I consider this a misrepresentation of what it offers. It lets you mathematically prove that your code implements a specification correctly. It doesn't prove that your specification is bug-free & vulnerability-free. It doesn't prove that your specification correctly implements your business rules.

Formal verification is still extremely useful. It's often much easier to define, in a specification language, the invariants your code needs to obey than to express everything only in the implementation language. To me that's a much better, more honest selling point for formal methods. You have to write more code, and in a different language, but unlike just testing you can actually prove that invariants hold for all possible inputs.
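As a purely illustrative sketch (the names are mine, not from coq-of-rust), here is the kind of invariant you would state as an executable predicate and then prove, rather than merely spot-check:

    // Implementation under verification: insert into a sorted vector.
    fn insert_sorted(mut v: Vec<i32>, x: i32) -> Vec<i32> {
        let pos = v.partition_point(|&e| e < x);
        v.insert(pos, x);
        v
    }

    // The invariant: for any *sorted* input, the output is sorted and
    // contains exactly the old elements plus `x`. A test can only sample
    // this; a proof covers every possible `v` and `x`.
    fn invariant_holds(input: &[i32], x: i32, output: &[i32]) -> bool {
        let is_sorted = |s: &[i32]| s.windows(2).all(|w| w[0] <= w[1]);
        if !is_sorted(input) {
            return true; // precondition: only sorted inputs are constrained
        }
        let mut expected = input.to_vec();
        expected.push(x);
        expected.sort();
        is_sorted(output) && expected.as_slice() == output
    }

    fn main() {
        let input = vec![1, 3, 5];
        let out = insert_sorted(input.clone(), 4);
        assert!(invariant_holds(&input, 4, &out)); // one sampled check, not a proof
    }

A property test samples a handful of (input, x) pairs against this predicate; a proof assistant discharges it for all of them.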


It's very useful for things where the specification is much simpler than the implementation. Databases and file systems are good examples. A database can be specified as giant arrays with linear search. Now prove that the database with all its optimizations and concurrency behaves the same as the simple spec.
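A minimal sketch of that shape (illustrative only; a real database spec would also cover transactions, ordering, failure, and so on):

    use std::collections::HashMap;

    // The specification: a key-value store as a flat list with linear search.
    // Deliberately slow and obviously correct.
    struct SpecStore(Vec<(String, String)>);

    impl SpecStore {
        fn put(&mut self, k: String, v: String) {
            self.0.retain(|(key, _)| key != &k);
            self.0.push((k, v));
        }
        fn get(&self, k: &str) -> Option<&String> {
            self.0.iter().find(|(key, _)| key == k).map(|(_, v)| v)
        }
    }

    // The "real" implementation: here just a HashMap, standing in for a store
    // with indexes, caches, concurrency, and so on.
    struct FastStore(HashMap<String, String>);

    impl FastStore {
        fn put(&mut self, k: String, v: String) { self.0.insert(k, v); }
        fn get(&self, k: &str) -> Option<&String> { self.0.get(k) }
    }

    fn main() {
        let (mut spec, mut fast) = (SpecStore(Vec::new()), FastStore(HashMap::new()));
        for (k, v) in [("a", "1"), ("b", "2"), ("a", "3")] {
            spec.put(k.into(), v.into());
            fast.put(k.into(), v.into());
        }
        assert_eq!(spec.get("a"), fast.get("a")); // both must say "3"
    }

The verification obligation is then a refinement statement: for every sequence of operations, the optimized store is observably indistinguishable from the spec store.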


Even the trivial specification "given a valid input, the program exits successfully with some arbitrary output" will already get you very far: to prove this trivial specification as correct, you need to prove the absence of panics, undefined behavior or infinite loops. Doing that for a complex program will require formulating useful invariants and proving them. So even with only the trivial specification for the overall program, you'll likely find that the proof requires you to write genuinely useful specifications for several of its components: the code using a component typically relies on that component's correctness, so proving the caller cannot crash forces you to state what the component guarantees.
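A tiny hypothetical example of that chain of reasoning, just to show the shape: the "trivial" no-panic spec for the caller already forces you to state an invariant about the component it uses.

    const LEN: usize = 8;

    // A small component: a cursor that is supposed to always stay below LEN.
    struct Cursor { pos: usize }

    impl Cursor {
        fn new() -> Self { Cursor { pos: 0 } }
        // The component invariant we must state and prove: pos < LEN holds
        // after every call.
        fn advance(&mut self) { self.pos = (self.pos + 1) % LEN; }
    }

    // Caller code: the indexing panics unless `pos < LEN`, so even proving
    // "this never panics" requires proving the cursor invariant above.
    fn read(buf: &[u8; LEN], c: &Cursor) -> u8 {
        buf[c.pos]
    }

    fn main() {
        let buf = [0u8; LEN];
        let mut c = Cursor::new();
        c.advance();
        let _ = read(&buf, &c);
    }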


> to prove this trivial specification as correct, you need to prove the absence of panics, undefined behavior or infinite loops.

Note that Rust supports this already for a significant subset of code, namely 'const fn' code. This is also the subset of Rust that could most feasibly be extended to support something akin to theorem proving, since it's supposed to be safely evaluated "at compile time", just like a Coq proof term or script. If Rust had some simple but comprehensive support for raw "proof terms" (namely, expressions evaluated at compile time which either result in () or abort the build), I assume that many of the "convenience" features of an actual theorem prover could be implemented as macros and live in custom crates. But we don't know what a raw "proof term" could look like in Rust for useful program properties.
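A small piece of this already works on stable Rust today: a panic during const evaluation aborts the build, so simple compile-time "proof obligations" over concrete values can be written now. A minimal sketch:

    // A compile-time assertion: if it fails, the panic happens during const
    // evaluation and the build aborts.
    const _: () = assert!(u32::BITS == 32);

    // The same idea applied to a user-defined const fn.
    const fn gcd(mut a: u64, mut b: u64) -> u64 {
        while b != 0 {
            let t = a % b;
            a = b;
            b = t;
        }
        a
    }

    // "Proof by evaluation" of one concrete fact, checked at build time.
    const _: () = assert!(gcd(12, 18) == 6);

    fn main() {}

What's missing, as the comment says, is a way to state and discharge properties over all inputs rather than over concrete constants like gcd(12, 18).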


Using (formal) specification languages doesn't guarantee a specification is "correct" either.

> You have to write more code, and in a different language, but unlike just testing you can actually prove that invariants hold for all possible inputs.

Formal verification is needed to show that the code actually implements a specification. If anything, a formal specification is really good for generating test suites for the implementation code.
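For example, the spec can serve directly as a test oracle. A dependency-free sketch of the spec-as-oracle pattern (a hand-rolled generator stands in for a proper property-testing library with shrinking):

    // Specification: the obvious, slow definition of population count.
    fn popcount_spec(mut x: u32) -> u32 {
        let mut n = 0;
        while x != 0 {
            n += x & 1;
            x >>= 1;
        }
        n
    }

    // Implementation under test (standing in for something genuinely optimized).
    fn popcount_fast(x: u32) -> u32 {
        x.count_ones()
    }

    fn main() {
        // Tiny linear congruential generator so the sketch has no dependencies.
        let mut seed: u64 = 0x9E3779B97F4A7C15;
        for _ in 0..10_000 {
            seed = seed
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            let x = (seed >> 32) as u32;
            assert_eq!(popcount_spec(x), popcount_fast(x), "spec and impl disagree on {x}");
        }
    }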


> It doesn't prove that your specification is bug-free & vulnerability-free.

There are formal development methodologies where you start with a small set of very simple axioms and iteratively refine them. Eventually, you end up with code, and all steps are formally verified along the way. This addresses your (valid) concerns. For example, many railway control systems have been developed this way; see e.g. references [1,2] for some more in-depth discussion. Some of these ideas might make a comeback now that code generation is cheap, though still quite unreliable.

[1] https://raisetools.github.io

[2] https://raisetools.github.io/material/documentation/raise-me...


Yeah, that's fine if you can do it. His point is that most software is not that simple. I work in RISC-V verification, and the spec is full of requirements that are ambiguously worded, easy to misinterpret, or just plain unspecified. Even if you formally verify your chip against a formal specification, there's a huge gap between the asciidoc ISA manual and that formal specification where bugs can exist.

Something like compression is the absolute best case - you have a trivial property that covers everything. Mostly it isn't like that though.
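For compression, that trivial-but-complete property is the lossless round trip: for every input, decompress(compress(input)) == input. A toy sketch with run-length encoding (a real compressor is vastly more complex, but the property has exactly this shape):

    // Toy run-length encoding, standing in for a real compressor.
    fn compress(data: &[u8]) -> Vec<(u8, u8)> {
        let mut out: Vec<(u8, u8)> = Vec::new();
        for &b in data {
            match out.last_mut() {
                Some((byte, count)) if *byte == b && *count < u8::MAX => *count += 1,
                _ => out.push((b, 1)),
            }
        }
        out
    }

    fn decompress(runs: &[(u8, u8)]) -> Vec<u8> {
        let mut out = Vec::new();
        for &(byte, count) in runs {
            out.extend(std::iter::repeat(byte).take(count as usize));
        }
        out
    }

    fn main() {
        // The single property that covers everything: a lossless round trip.
        // A test checks it for one input; a proof would cover all of them.
        let input = b"aaabbbbcc".to_vec();
        assert_eq!(decompress(&compress(&input)), input);
    }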


AWS says that when code is covered by a good set of formally verified properties, they can be as aggressive as possible in performance optimization without having to worry that they are breaking the logic of the code. This helped them double the performance of the IAM ACL evaluation code.


> It doesn't prove that your specification is bug-free & vulnerability-free.

"Mathematically prove" might be a little strong, but if the implementation agrees with the specification then it is proven in the more general sense of the word. Similar to double-entry accounting, there are two sides to the coin and both have to agree, else you know there is a problem somewhere. Making the very same mistake twice is statistically unlikely, so when they do agree there is sufficient evidence to believe it is proven.

What is more likely is that one does not take the time to truly understand what they need from the software and introduces features that, in hindsight, they wish they hadn't. But that is done with more intentionality and isn't really the same thing as a bug or vulnerability in the traditional sense.


You are describing "validation": the process of verifying that your spec really says what you mean.


He is saying that original spec can be incorrect.


Yes. And the post you're replying to is pointing out that deciding that is part of validation. Verification is confirming that the software meets the spec. Validation is confirming that the software meets the actual desired behavior.


I think the comment was way shorter than it currently is when I replied. It is sometimes difficult to say what is meant with "validation". Many also use it to mean "functional testing", for example, which is incorrect.


Based on what amw-zero stated, they're using it in the formal QC sense, like ISO 9001. In that context the two words have distinct meanings.

I do agree that colloquially they can be used in different ways, but as we're talking about formal verification as a concept, in this case verification aligns with that definition.


Yes - that is what [Validation](https://en.wikipedia.org/wiki/Verification_and_validation) is for. I'm not saying it's easy (it's not). But there is a term for this.


I sort of imagine (but I'm not experienced enough to actually state) that you could create a negative specification. Something like: this program will certainly not open the file flag.txt

This doesn't guarantee the program is correct. But it's not nothing.
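One hypothetical way to phrase such a negative spec (this is just my sketch, not something coq-of-rust provides out of the box): model the program's file-system effects as a trace and state a predicate that must hold for every possible trace.

    // The effects the program can perform, as data.
    enum Effect {
        Open(String),
        Write(String),
    }

    // The negative specification: no execution ever opens "flag.txt".
    // A prover would establish this for every reachable trace; here we can
    // only check one recorded trace.
    fn never_opens_flag(trace: &[Effect]) -> bool {
        trace.iter().all(|e| !matches!(e, Effect::Open(p) if p == "flag.txt"))
    }

    fn main() {
        let trace = vec![
            Effect::Open("config.toml".into()),
            Effect::Write("log.txt".into()),
        ];
        assert!(never_opens_flag(&trace));
    }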


I learned recently that the Coq project was renamed to Rocq: https://rocq-prover.org


Not even a wink or a nudge about why they renamed: https://rocq-prover.org/about#Name


The linked discussion is upfront about the name change, if it wasn't already obvious.

https://coq.discourse.group/t/coq-community-survey-2022-resu...


Ah, the community voted on it. The reason why the survey was proposed shall, however, remain a mystery.


> how different Coq users use Coq differently and express different needs

> applying Coq to do software verification

> encourage others to learn and use Coq

To be clear: some people giggle when they read or hear sentences like the above, and that is the reason for the rename.


I was not part of the survey, but count me as someone who felt uneasy whenever I had to say it in a public or professional setting.


According to the post, 24% expressed discomfort ("it is uncomfortable or awkward to have to use the name Coq").

PS: I am impressed by the time and effort that went into the fancy graphs, regressions, tests, etc.


Speaking of https://www.why3.org/

For completeness https://github.com/creusot-rs/creusot

The French were right about so many things.


Probably the right move.


Definitely not the right move. The old name was perfectly fine and allowed for harmless fun. They can obviously do what they wish, but IMO there's no good reason to destroy good clean fun just because it might upset the occasional person.


The developers of a serious academic tool may not have wanted their image to be "harmless fun," especially of a sort that can reasonably offend people and make it hard to talk about in formal settings. "Let's try [something that sounds like rock]" is going to be an easier sell to your boss than "Let's try [something that sounds like cock]."


I am just starting out with the Ironclad kernel and the Gloire OS built on top of it [1,2]. From the web page:

  "Ironclad is a formally verified, real-time capable, UNIX-like operating system kernel for general-purpose and embedded uses. It is written in SPARK and Ada, and is comprised of 100% free software."
I recall AdaCore working with Ferrous Systems a while back. Did anything come of it for rust? I believe one of the non-starters is rust does not have a formal standards specification like C, Common Lisp, or other PLs. It's hard to develop formal verification systems for a moving target.

SPARK has formal verification tools and a legacy of high-integrity applications. The Ironclad kernel is progressing towards a substantially, formally verified kernel written in SPARK. Gloire is an OS implemented on top of Ironclad for PoC.

I have sort of given up on Rust because I just don't find it enjoyable. I am working with Zig. This would be a good direction for Zig to take. Glad to see the effort here. I'll have to check it out a bit more.

[1] https://ironclad.nongnu.org/

[2] https://github.com/Ironclad-Project/Gloire?tab=readme-ov-fil...


> Did anything come of it for rust?

They parted ways, and both are selling Rust compilers.

> I believe one of the non-starters is rust does not have a formal standards specification like C

It's not an issue in practice.


Related: Verus (verified Rust): https://github.com/verus-lang/verus

It's aimed more at full-systems verification (it's been used to build verified filesystems, Kubernetes controllers, etc.).
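For a flavor of it, this is roughly what a Verus postcondition looks like (written from memory of the Verus tutorial style, so check the repo for the exact current syntax):

    use vstd::prelude::*;

    verus! {

    // A toy function whose postcondition Verus checks statically.
    fn max_u64(a: u64, b: u64) -> (r: u64)
        ensures
            r >= a,
            r >= b,
            r == a || r == b,
    {
        if a >= b { a } else { b }
    }

    fn main() {}

    } // verus!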


Does anyone know of any good resources explaining how a theorem prover like Coq is actually used to prove safety properties of software? All the resources I’ve found thus far have been more in the pure mathematics domain, and not so much about applying it to software.


Colleagues of mine are using it to verify concurrent programs: https://iris-project.org/tutorial-material.html


Software Foundations is quite a comprehensive series on program verification.


should've been called roost


Do that for Zig.


Rust’s “mutable-xor-aliased” model makes it particularly amenable to developing verification tools on top of it [1]. Without this model, verification becomes just as hard — and just as practically intractable — as all the verifiers for existing languages.

1: https://graydon2.dreamwidth.org/312681.html
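A small illustration of why exclusivity matters to a verifier (my example, not from the linked post): a fact established through a &mut reference cannot be invalidated by a call that doesn't receive that reference.

    // Some function whose body the verifier treats as opaque.
    fn do_other_work(other: &mut Vec<i32>) {
        other.push(42);
    }

    fn example(x: &mut i32, other: &mut Vec<i32>) {
        *x = 5;
        do_other_work(other);
        // With unrestricted aliasing, a verifier would have to ask whether
        // do_other_work could reach and modify *x through an alias. Rust's
        // exclusive &mut rules that out by construction, so the fact
        // `*x == 5` survives the call with no extra heap reasoning.
        assert_eq!(*x, 5);
    }

    fn main() {
        let (mut x, mut v) = (0, Vec::new());
        example(&mut x, &mut v);
    }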


> Without [mutable-xor-aliased], verification becomes just as hard

I get Graydon's point, but as a counterpoint: at the point where you're adding a verifier to X-lang, you might as well put mutable-xor-aliased in the verifier instead of in the compiler.


> Rust supports it better than many other imperative systems languages -- even some impure functional ones!

I am surprised that this blog post does not mention Ada / SPARK at all though.


While this post does not mention it, Rust vs. Ada comes up in many other discussions. My (late) introduction to Ada was via one of the Rust forums.


The link returns 403.


It (still) works for me.




