On a related note, I had a very early account (probably one of the first 1000 on the site). Years later reddit said they had lost a laptop with account info, so they reset all the passwords. My account was from before accounts even had email addresses. My attempts to get the account back were all met with responses saying "use email reset" that seemed to imply I'd done something wrong. That was my thanks for being a loyal user for over a decade.
I was surprised to learn recently that there is a cheap and easy action you can take to reduce your exposure to air pollution while driving: replace your passenger-compartment air filter with a HEPA filter! Most cars made in the last 15 years or so have an air filter right before the AC. You can, for slightly more money, replace this with a HEPA filter. I haven't tested it, but it will probably be very effective, since the car is such a small space and there is so much pollution outside.
Replacing the filter is an incredibly easy job on most cars. You don't need any tools or even to open the hood. You just pull out your glove box, and then the filter slides in and out.
I honestly think that in terms of health, this might be the action with the highest ROI (amount of health improvement per unit of time and money invested).
I was a little apprehensive, but decided to try this. I called my representative as well as both senators. In every case (3:30pm on a Thursday) I just got voicemail. I left a short message each time. Nothing could be easier.
Software gets more complicated, so bugs become more common. Hardware gets faster, so speed pressure on software eases, so software gets more layered and hence slower, and also more complicated. More layers of abstraction means more things to get wrong, and the world is increasingly reliant on software, so correctness becomes a serious concern.
That sums up the "software crisis" as it relates to correctness. Most theorem provers have been swept up in it. They are written in high-level languages and frameworks made by people who gave no thought to verification and only slightly more than most to correctness. What's more, even some languages that emphasize correctness have dubious semantics: Rust and Dafny both make a big deal about formal correctness but have no formal spec (Rust is working on one). Coq is not known to be consistent (the literature is full of "here is a proof that some subsystem of Coq has this and that property," but no one can handle the whole thing), not to mention that it has had a few soundness bugs in its history, one of which (Reals violate EM) wasn't even clearly a bug, because no one knew what the model was supposed to be.
Agda is a free-for-all as far as I can tell, and Isabelle has some strange thing with axiomatic typeclasses that takes it out of the HOL world. HOL Light is pretty decent metatheory-wise, and the logic at least has been formalized inside itself, but the implementation has not, and OCaml's Obj.magic is still there, ready to cause inconsistency if you feel like using it.
Lean has not had any soundness bugs to date, and the theory is well understood, but it is a massive C++ application, and there are doubtless dozens of implementation bugs in there waiting to be found.
None of the big theorem provers is really a good candidate for being proved correct, because either it is not known what that would even mean, or it is trivially false and we all try to pretend otherwise, or it is an intractable open question.
These are the paragons of correctness in our world.
Can we do better? What does it even mean to do better? This paper aims to find out.
Lean has independent proof checkers, and Coq could have, too. The fact that these are big systems doesn't harm the de Bruijn principle: you only need to trust the checkers…
(btw I couldn't find the source code for the proof checker in the paper, is it available?)
The fast proof checker I talk about is mm0-c: https://github.com/digama0/mm0/tree/master/mm0-c . It is (deliberately) bare bones, and I'm planning on formalizing approximately the assembly you get from running it through gcc.
There is a second proof checker for MM0 in mm0-hs: https://github.com/digama0/mm0/tree/master/mm0-hs . It has a few more features, like the MM1 server mode for interactive feedback, as well as a verifier that reads a text version of the proof file (the .mmu format, not described in the paper).
The footnotes are supposed to be links to the github repo but I got the versions mixed up and submitted a partially anonymized version to arXiv.
Do we even know if it's possible to prove the correctness of a theorem prover, in a fully general sense? At some point, that would require some theorem prover to prove its own correctness (either by proving itself directly, or through a loop of theorem provers). Intuitively, this seems like it would start to run up against Gödel-style incompleteness problems.
It does, but only if you state the theorem in a particular way. Let's say you have a verifier A that checks theorems of Peano Arithmetic (PA). Inside that logic, you can define what a computer is, how it executes, what PA is, what a verifier is, and what it means for a program to verify theorems of PA. Then you can take a sequence of bytes (verifier B) and prove that it represents a program that verifies theorems of PA.
What have you proven? Well, verifier A (using PA) proves that "verifier B proves theorems of PA", i.e. "if B says phi is true, then PA |- phi". We would have a hard time actually running verifier B inside the logic of verifier A (that entails replaying every step of B's computation as theorem applications inside A), but if we did, we would obtain a proof of "PA |- phi". If we assume A correctly implements PA, that means PA |- "PA |- phi". In general, this is the best we can do. If we further assume that PA is a sound axiom system, i.e. when it proves facts about numbers we won't find concrete counterexamples, then this strengthens to PA |- phi, and then to phi itself, so we've learned that phi is true.
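Spelled out a little more formally (this is just my restatement of the chain above, writing Prov_PA for PA's provability predicate; it is not notation from the paper):

    \begin{aligned}
    &\text{A proves: } \forall \varphi.\ \mathrm{Accepts}_B(\varphi) \rightarrow \mathrm{Prov}_{PA}(\ulcorner\varphi\urcorner)\\
    &\text{if A correctly implements PA: } \mathrm{PA} \vdash \mathrm{Accepts}_B(\varphi) \rightarrow \mathrm{Prov}_{PA}(\ulcorner\varphi\urcorner)\\
    &\text{running B on } \varphi \text{ inside A: } \mathrm{PA} \vdash \mathrm{Prov}_{PA}(\ulcorner\varphi\urcorner)\\
    &\text{if PA is sound: } \mathrm{PA} \vdash \varphi\text{, and, by soundness again, } \varphi \text{ itself.}
    \end{aligned}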
The plan is to prove inside verifier A (that is, MM0) the statement "Here is a sequence of bytes. It is an executable that, when executed, has the property that if it succeeds on input "T |- phi", then T |- phi." The bootstrapping part is that the sequence of bytes in question is in fact the code of verifier A. To support statements of the form "here is a sequence of bytes", the verifier also has the ability to output bytes as part of its operation (it's not just a yes/no oracle) and to assert facts about the encoding of those bytes in the logic.
Seems like the ideal would be a number of theorem provers, each capable of verifying the next, with the first one simple enough to be verified with pen and paper.
Provers in the LCF tradition have tiny cores that avoid having to trust lots of code. There are also reflective provers that can admit self-verified extensions. One project, called Milawa, had several layers of this.
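As a toy sketch of the LCF idea (hypothetical OCaml, nowhere near a real kernel like HOL Light's, just to show where the trust boundary sits): the theorem type is abstract, so the only way any code outside the module can obtain a theorem is through the handful of inference rules the kernel exports.

    (* Toy LCF-style kernel for the implicational fragment of propositional logic.
       The type thm is abstract, so client code can only build theorems through
       the axiom schemes and modus ponens below; everything layered on top can be
       arbitrarily buggy without compromising soundness. *)
    module Kernel : sig
      type form = Var of string | Imp of form * form
      type thm                                   (* abstract theorem type *)
      val concl   : thm -> form                  (* inspect, but never forge *)
      val axiom_k : form -> form -> thm          (* |- p -> (q -> p) *)
      val axiom_s : form -> form -> form -> thm  (* |- (p->(q->r)) -> ((p->q) -> (p->r)) *)
      val mp      : thm -> thm -> thm            (* from |- p -> q and |- p, get |- q *)
    end = struct
      type form = Var of string | Imp of form * form
      type thm = Thm of form
      let concl (Thm p) = p
      let axiom_k p q = Thm (Imp (p, Imp (q, p)))
      let axiom_s p q r =
        Thm (Imp (Imp (p, Imp (q, r)), Imp (Imp (p, q), Imp (p, r))))
      let mp (Thm pq) (Thm p) =
        match pq with
        | Imp (a, b) when a = p -> Thm b
        | _ -> failwith "mp: second theorem does not match the antecedent"
    end

    (* Example: derive |- p -> p using only the kernel's rules. *)
    let () =
      let open Kernel in
      let p = Var "p" in
      let s  = axiom_s p (Imp (p, p)) p in   (* (p->((p->p)->p)) -> ((p->(p->p)) -> (p->p)) *)
      let k1 = axiom_k p (Imp (p, p)) in     (* p -> ((p->p)->p) *)
      let k2 = axiom_k p p in                (* p -> (p->p) *)
      ignore (concl (mp (mp s k1) k2))       (* p -> p *)

No matter how elaborate the proof tools above such a module become, every thm they hand back was constructed by those few trusted lines, which is why the trusted core can stay tiny.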
(I'm the author of the paper BTW.) Regarding TLA+ and CakeML: TLA+ seems like a good idea, and Leslie Lamport talks a lot of sense in most of his papers. I especially recommend "Composition: A Way to Make Proofs Harder" (https://lamport.azurewebsites.net/pubs/lamport-composition.p...). It sums up a lot of my feelings about "simplifying" a problem by adding more layers and generalizing the layers so that they have more work to do individually. I'm not sure I'm totally on board with the idea of using temporal logic; my plan is to just use plain first-order logic to talk about the states of a computer.
Regarding CakeML, I'm really impressed by the work they've done, and I think they, more than anyone else, are leading the charge on getting verification as pervasive as possible at every level (especially the low levels, where historically it has been difficult to get past the closed-source barrier). That said, I think they are not doing an optimal job at being a bootstrap. They have a system that does that and so much more, so if you just want a bootstrap it's way overkill. And you pay for that overkill: 14-hour compilation times per target (see https://cakeml.org/regression.cgi).
(I edited my comment to remove my questions about TLA+ and CakeML, when I got to the "Related work" part of the paper. I was also wondering about Milawa but couldn't remember the name! Kudos sir!)
Honest question: what if those self-driving cars have multiple people in them? With 4 people you quadruple the throughput, and this seems technically achievable. Minivans and the like go further. I certainly haven't run the numbers vs. rail, but in most regions it seems like 4x-ing the highways would meet current needs comfortably.
I sympathize, but do keep in mind that it would be very difficult to teach all the math at the same time. Imagine trying to run a course in French on analytic philosophy. If your students didn't know much French coming in, you'd be in a tough spot.
I understand. I see so many people joining the ML/AI bandwagon, and I wonder whether they really understand the maths behind it or whether it's as simple as calling a function.
Any way you could make this available outside the Mac App Store? Apple seems to have decided I did something horrible and unforgivable by moving to a different country after creating an account, thus making it impossible for me to use the store.
To be a little optimistic, it does seem to me that the ESTA system (where an electronic check is done for visitors looking to use the visa waiver program before they travel) has greatly reduced these types of situations. However, ESTA is only used for visitors arriving by air/sea and not for people driving or walking over the border.
In any case, I myself (an American citizen) have been treated with extreme disrespect and a total lack of professionalism when crossing the border at this same checkpoint by car from the Canadian side to the American side. The particularly ridiculous part of this was that after I waited an hour for them to tear everything apart in my car looking for contraband -- at which point it had been verified that I was doing nothing wrong -- they continued to be completely antagonistic.
Edit: I filed a complaint about this, which did lead to a supervisor calling me back and talking to me on the phone for probably 30 minutes. The supervisor was reasonably pleasant, but unapologetic. (Is it reasonable that someone be forced out of their car in the middle of winter and not allowed to get their coat?) He claimed that he put a "warning" on the officer's record, though this didn't seem credible. He again and again seemed to imply that I might have been smuggling, after which I had to remind him that my car was extensively searched and cleared.
> The particularly ridiculous part of this was that after I waited an hour for them to tear everything apart in my car looking for contraband -- at which point it had been verified that I was doing nothing wrong -- they continued to be completely antagonistic.
> He again and again seemed to imply that I might have been smuggling, after which I had to remind him that my car was extensively searched and cleared.
No no, you're definitely guilty. But if they can't find the contraband, you're one of the smart drug smugglers, and thus even worse than the dumb smugglers they catch every day.
To build on fny's answer, there is one school of statistics (Bayesian statistics) where there basically is a "right" way to do analysis for any problem, provided you specify the necessary assumptions (likelihood and prior) correctly. However, the most common statistical concepts (e.g. p-values or confidence intervals) are not part of the Bayesian school.
Credible intervals and highest-posterior-density intervals are arguably much closer to what most people think a confidence interval is.
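To make that concrete with a toy example (my own choice of a flat prior, purely for illustration): with a binomial likelihood and a Beta(1,1) prior, observing k successes in n trials gives the posterior

    p(\theta \mid k, n) = \mathrm{Beta}(\theta;\, k+1,\, n-k+1),

and a 95% credible interval is any interval [a, b] satisfying

    \int_a^b \mathrm{Beta}(\theta;\, k+1,\, n-k+1)\, d\theta = 0.95.

That really can be read as "given the model and prior, theta lies in [a, b] with probability 0.95", which is the statement people usually (and incorrectly) read into a frequentist confidence interval.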
I'm not sure there is a single right way to solve any given problem in a Bayesian way, but it does force you to think more about the problem at hand and make your assumptions explicit.