This strikes a nice balance between pop-level postmodernism ("there is no truth, only interpretation") and authority-based philosophies ("such-and-such holds the key to absolute truth and cannot be questioned"). I've seen the same reasoning given other names, but for someone versed in formal logic and probability theory, this is a very nice presentation of the concepts.
Key points:
- in Bayesian reasoning, beliefs are not held as absolutes, but as probabilities.
- observations / measurements change those probabilities (using Bayes' formula). As a special case, certain measurements may set certain probabilities to zero; viewing only the "zero/nonzero" dichotomy is equivalent to classical logic.
- as a counterpoint to Holmes' "When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth", beware that you may have treated something as impossible which is merely highly improbable. There is a significant difference between actually zero and merely very close to zero.
- A implies B does not mean B implies A. But "A makes B more likely" does mean "B makes A more likely"; this is a consequence of Bayes' formula. (This does not imply causation. It's just a recognition of correlation -- if two factors are correlated, the presence of one increases the likelihood of the presence of the other.)
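The points above can be sketched numerically. This is a minimal illustration of a Bayesian update with made-up probabilities chosen only for the example; the symmetry follows because P(A|B)/P(A) = P(B|A)/P(B).

```python
# A minimal sketch of a Bayesian update, with hypothetical numbers.

def bayes(p_b_given_a, p_a, p_b):
    """Bayes' formula: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

p_a = 0.1            # prior probability of A
p_b_given_a = 0.8    # A makes B more likely ...
p_b = 0.2            # ... than B is overall: P(B|A) > P(B)

p_a_given_b = bayes(p_b_given_a, p_a, p_b)  # ~0.4

# The symmetry from the bullet point: since P(B|A) > P(B),
# observing B must raise the probability of A.
assert p_a_given_b > p_a
```

The update is symmetric by construction: dividing Bayes' formula by P(A) shows that the likelihood ratio P(B|A)/P(B) equals P(A|B)/P(A), so whichever direction you condition in, "more likely" is preserved.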
The point that beliefs are held as probabilities requires amplification. In classical logic, if you have a set S={X,Y,Z,...} of n variables, your state of knowledge is a map from S to {true, false}. Thus there are 2^n possibilities.
The usual idea is to say that false = 0 and true = 1, and then to model uncertainty with intermediate numerical values, replacing 2^n discrete possibilities with an n-dimensional space. This doesn't work, because one number per variable cannot capture correlations between the variables; Bayesian reasoning does something else.
With n variables there are 2^n basic conjunctions. This will be familiar from writing out truth tables, or perhaps from not writing out truth tables because they contain too many lines. The Bayesian approach assigns a probability to every basic conjunction and thus lives in a 2^n dimensional space.
How can Bayes' rule extend logic when it appears to be simpler? I think the answer is that it works on a kind of "exploded" representation.
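The "exploded" representation can be made concrete with a small sketch (hypothetical numbers): with n variables, the Bayesian state of knowledge is one probability per basic conjunction, i.e. a point in a 2^n-dimensional space, and hard evidence zeroes out the incompatible conjunctions.

```python
from itertools import product

n = 3  # variables X, Y, Z
worlds = list(product([False, True], repeat=n))  # 2^n = 8 basic conjunctions
uniform = {w: 1 / len(worlds) for w in worlds}   # start fully uncertain

def condition(dist, index, value):
    """Observing variable `index` = `value`: zero out the incompatible
    conjunctions and renormalize the rest (Bayes' rule for hard evidence)."""
    kept = {w: p for w, p in dist.items() if w[index] == value}
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

after = condition(uniform, 0, True)  # learn that X is true

# Looking only at which conjunctions have nonzero probability recovers
# classical logic: the X=false worlds are eliminated, the rest survive.
assert len(after) == 4
```

Viewing only the zero/nonzero pattern of this distribution is exactly the "special case equivalent to classical logic" from the key points above.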
"When you have eliminated the impossible, whatever remains is often more improbable than your having made a mistake in one of your impossibility proofs." - The Black Belt Bayesian
A few years back, a new proof came out regarding an algorithm for finding large primes. There was a slow, exact variant and a faster, statistical variant with a small chance of error (like 1 in 10^20).
"Of course, this means to be really sure you have to use the slower algorithm or go through a rigorous proof. On the other hand, the probability that you'd mess up a proof is much higher than that."
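The comment above doesn't name the algorithm; the standard statistical primality test with this flavor is Miller-Rabin, sketched here for illustration. Each passing round cuts the error probability by at least a factor of 4, so k rounds bound the error by 4^-k.

```python
import random

def miller_rabin(n, rounds=40):
    """Probabilistic primality test. A `False` answer is certain;
    a `True` answer is wrong with probability at most 4**-rounds
    (40 rounds puts the bound below 10^-20)."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # random witness candidate
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # definitely composite
    return True  # prime, up to the 4**-rounds error bound
```

This is exactly the situation in the quote: the residual error bound is far smaller than the chance of a mistake in a hand-checked proof, so "statistically prime" is, in practice, the more trustworthy verdict.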
There is a significant difference between actually zero and merely very close to zero.
Does Bayesian reasoning draw a distinction between exactly zero and asymptotically zero? My understanding is that probability density functions are a way to dodge the problem in the frequentist approach, so I assume the Bayesian approach likewise treats anything whose probability is zero in the limit as impossible and just makes sure not to consider such cases.
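One way to see the distinction the question is after: for a continuous distribution, every single point has probability zero as a limit of shrinking intervals, yet the density still separates "possible" points from "impossible" ones. A minimal sketch with the uniform distribution on [0, 1]:

```python
def prob_interval(lo, hi):
    """P(lo <= X <= hi) for X uniform on [0, 1]."""
    lo, hi = max(lo, 0.0), min(hi, 1.0)
    return max(hi - lo, 0.0)

# A tiny interval around 0.5 has small but nonzero probability...
inside = prob_interval(0.5 - 1e-6, 0.5 + 1e-6)   # 2e-6, shrinking to 0
# ...while any interval around 2 has probability exactly zero.
outside = prob_interval(2 - 1e-6, 2 + 1e-6)      # exactly 0.0

# The density keeps the distinction the point probabilities lose:
# it is 1 at x = 0.5 (possible outcome) and 0 at x = 2 (impossible).
```

So the continuous formalism doesn't collapse "zero in the limit" into "impossible": impossibility shows up as zero density (or zero probability on every surrounding interval), not merely as a point probability of zero.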
This is one of the main reasons I like Tao. The guy is a terrific and prolific mathematician but he always strives to reach the larger public with his writing.