
That Coq has a stronger logic is an argument I hear often, but is that really so important when the result is that it's so hard to use that no real-world software engineer is ever going to bother?

Maybe with HOL I will not be able to prove some deep theoretical mathematical result that I could with Coq's Calculus of Constructions, but if I can otherwise much more easily prove >99% of my code to be bug-free, then I would argue that perhaps it's the better "tool" for the job...




The issue is not the edge cases of logical independence, but rather how easily you can express certain theorems. Most verification work is about verifying programming languages themselves (e.g. the POPLmark Challenge [1]), and in this setting sometimes what you want to prove is more convenient to express in the CoC. You can also express it in HOL, but not quite as easily.

All that said, automation of provers based on the LCF approach with a non-constructive logic is much superior as of December 2017. With this in mind I was surprised to see that L. de Moura based the new Lean prover [2] on a variant of the CoC. He thinks that all automation that we currently have for HOL (i.e. Isabelle/HOL) can be brought to Lean.

J. Blanchette [3] seems to think the same.

[1] https://www.seas.upenn.edu/~plclub/poplmark

[2] https://leanprover.github.io

[3] https://people.mpi-inf.mpg.de/~jblanche


> sometimes what you want to prove is more convenient to express in the CoC. You can also express it in HOL, but not quite as easily.

> All that said, automation of provers based on the LCF approach with a non-constructive logic is much superior as of December 2017.

This is kind of the point I am making. Is the fact that some things are more convenient to express worth the substantial loss of automation?

> With this in mind I was surprised to see that L. de Moura based the new Lean prover [2] on a variant of the CoC. He thinks that all automation that we currently have for HOL (i.e. Isabelle/HOL) can be brought to Lean.

> J. Blanchette [3] seems to think the same.

Interesting, thanks! I hope they are right and in the future this kind of automation does come to Coq and Lean.


> Maybe with HOL I will not be able to prove some deep theoretical mathematical result that I could with Coq's Calculus of Constructions, but if I can otherwise much more easily prove >99% of my code to be bug-free, then I would argue that perhaps it's the better "tool" for the job...

Is this true though? What program properties can Isabelle easily automate the proofs of that Coq cannot?


> Is this true though? What program properties can Isabelle easily automate the proofs of that Coq cannot?

Almost all?

Granted, I haven't used Coq very much, so correct me if I'm wrong, but isn't it true that in Coq, with the exception of basic proofs like simple arithmetic expressions, in order to prove a theorem you have to explicitly know and write the names of the lemmas needed to prove it (on top of knowing how they fit together)?

In Isabelle/HOL, many times you literally only need to write "try" and Isabelle will find the proof of the theorem for you, including the lemmas needed to prove it and how they fit together (assuming these lemmas have already been proven). If it turns out that your theorem is false, it can show you an example of inputs for which it fails.

This auto-proof functionality is mostly thanks to Sledgehammer [0]; the automatic counterexample finding is due to Nitpick (and another tool whose name I can't remember). There are a few other automation tools that are also very useful.

Granted, you still have to have a high-level understanding of how you want to prove your code, but I would argue this eliminates a lot of the work and (wasted?) time you need to accomplish the final goal...
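
To make that concrete, here is a rough sketch of what such a session looks like (the goals and the suggested one-liners are just illustrative; the exact output depends on the Isabelle version and on which external provers are installed):

  lemma "set (rev xs) = set xs"
    try
    (* "try" runs Sledgehammer, simp/auto and friends; on success it prints
       a one-liner you can paste in, e.g. "by simp" or "by (metis set_rev)" *)

  lemma "rev xs = xs"
    nitpick
    (* for a false conjecture, Nitpick reports a counterexample,
       e.g. xs = [a1, a2] *)
    oops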

[0] http://isabelle.in.tum.de/website-Isabelle2009-1/sledgehamme...


"Granted, I haven't used Coq very much, so correct me if I'm wrong, but isn't it true that in Coq, with the exception of basic proofs like simple arithmetic expressions, in order to prove a theorem you have to at least explicitly know and write the name of the lemmas that are needed to prove it (apart from how they fit together)?"

Coq has automated tactics like "auto" that search for proofs based on a configurable database of "hints". On the other hand, if it doesn't succeed, it leaves the proof state unchanged and gives no hints as to what to do next. But on the other, other hand, if it succeeds, it gives no hints as to how it succeeds. So it's less useful than it might be.

Edit: Also, Coq has a dandy baked-in search system for finding lemmas and so forth that would be useful in a given situation. For a similar flavor, look on Hoogle for functions transforming one list into another list.

I need to play with Isabelle.


Do you have any example proof scripts of some basic algorithms where the proofs can be almost fully automated in Isabelle?


See [1] for an example (edit: although this may not be the best example nowadays, see below):

Quoting:

> Consider the conjecture "length(tl xs) <= length xs", which states that the length of a list’s tail (its "cdr") is less than or equal to the length of the entire list.

> Thanks to Vampire, Sledgehammer finds the following proof: by (metis append_Nil2 append_eq_conv_conj drop_eq_Nil drop_tl tl.simps(1))

Those are all the lemmas that were needed to prove the theorem. They were selected completely automatically.

AFAIU the "metis" procedure tries to fit the lemmas together so that they prove the theorem, and I believe the HOL kernel gives high assurance that if "metis" succeeds, then the theorem was proven correctly (but my understanding may be wrong here).

Note that Sledgehammer may even automatically find and return a human-readable proof; for the example above, it returns something like this:

  proof -
    have "tl [] = []" by (metis tl.simps(1))
    hence "∃u. xs @ u = xs ∧ tl u = []" by (metis append_Nil2)
    hence "tl (drop (length xs) xs) = []" by (metis append_eq_conv_conj)
    hence "drop (length xs) (tl xs) = []" by (metis drop_tl)
    thus "length (tl xs) <= length xs" by (metis drop_eq_Nil)
  qed
Edit: That said, note that nowadays Isabelle/HOL seems to be able to prove the above theorem by simplification alone ("by simp"), so this may not be the best example.
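
I.e., the whole proof nowadays collapses to something like:

  lemma "length (tl xs) <= length xs"
    by simp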

[1] http://people.irisa.fr/Thomas.Genet/ACF/BiblioIsabelle/isabe...


Metis is a first-order automated theorem prover. The metis tactic handles encoding the goal as a first-order problem (using ~just the rules you provide it), invoking the actual Metis prover, and reconstructing a proof in the kernel logic from the output proof. So it is impossible to prove things that are false using the metis tactic (i.e. Metis is entirely outside the TCB).

Re: the simplifier, I feel like it's a bad example, because coming up with good simp rules, and knowing when to add them and when not to, is almost an art when writing new code in Isabelle/HOL.
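
For what it's worth, the mechanics themselves are simple; the art is in deciding which rules to register. A rough sketch (this particular rule is already in the library, it's only for illustration):

  (* register a rule with the simplifier at the point where it is proved *)
  lemma my_rev_rev [simp]: "rev (rev xs) = xs"
    by (induct xs) auto

  (* ... or switch it on/off later, globally or locally *)
  declare my_rev_rev [simp del]

  lemma "rev (rev (xs @ ys)) = xs @ ys"
    by (simp add: my_rev_rev)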


One example is nominal techniques [1]. While they are constructive, they have so far only found a substantial implementation in Isabelle/HOL [2]. The reason is that the natural implementation is non-constructive and leads to only a small blowup in proof complexity, which Isabelle/HOL's proof automation can easily handle. In contrast, constructive implementations of nominal techniques seem to require a considerable blowup in logical complexity, which the existing proof automation in Curry-Howard based provers doesn't seem to handle gracefully.
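
For a flavour of what this looks like in practice, declaring the untyped lambda calculus in Nominal Isabelle goes roughly like this (from memory, the exact binding syntax may be slightly off):

  atom_decl name

  nominal_datatype lam =
    Var "name"
  | App "lam" "lam"
  | Lam x::"name" t::"lam"  binds x in t
  (* terms come quotiented by alpha-equivalence, and the package derives
     strong induction principles that let you assume the bound name is
     fresh for whatever context you need *)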

Another issue is that most of the automation that is known as "hammer" is based on non-constructive automatic provers.

[1] https://www.cl.cam.ac.uk/~amp12/papers/nomlfo/nomlfo-draft.p...

[2] https://nms.kcl.ac.uk/christian.urban/Nominal


Do you know why that is the case? Nominal logic is usually presented as a sheaf model (i.e., as the internal language of the Schanuel topos), which models a constructive dependent type theory. Is there a problem when constructing a universe?


I'm probably the wrong person to discuss this question with; I don't know more than I've written above. All I know about this is from conversations with Christian Urban, who implemented Nominal Isabelle. It was a while back; maybe the problem of implementing nominal techniques in constructive type theory in an effective way (i.e. one that doesn't put additional burden on automation) has been solved by now.

The problem as I understand it is this. Let's say you have a theory T in the formal language given by signature S.

- In HOL you extend T to T' and S to S', but the deltas between T and T', and between S and S', are small and largely generic. This is possible because almost everything handling nominal can be done using double negation.

- In CoC you extend T to T' and S to S', but the deltas between T and T', and between S and S', are huge and largely non-generic.


Is there a motivating example that compares nominal techniques in Isabelle with standard techniques in Coq or Lean over the same simple theorems?


Not that I'm aware of. It would be nice to have one. The number of people who understand nominal, HOL, CoC and implementations of provers is vanishingly small.



