We don't really have any evidence at all of exactly what nature computes or how it does it.
We do have models that mimic observations in nature, and those models do include some very difficult calculations. However the map is not the territory. We can't be sure that the models' inner workings mimic nature's inner workings, any more than you can conclude that two watches have identical mechanisms because they keep the same time. So there's the possibility that any or all of that difficulty could turn out to be epicycles.
Exactly -- nature isn't doing any calculations. The correct statement would be "it takes difficult calculations to describe what nature does using numbers."
Actually, if you look at the formal definition of computation (see for example "Introduction to the Theory of Computation" by Michael Sipser), a lot of natural processes are in fact computations. An atom is a computer. A photon may hit an electron, giving it extra quanta of energy, and the electron shifts orbit (it skips further away from the nucleus); or the atom may emit a photon and the electron shifts orbit closer to the nucleus. So, basically, the atom changes states predictably as it "sees" symbols (photons). It could be said that it recognizes a language whose alphabet is photons. The entire universe can be thought of as a computer.
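To make that concrete, here's a toy sketch (my own illustration, not from Sipser; the two states and the "absorb"/"emit" symbols are invented for the example) of a two-level atom as a deterministic finite automaton: a finite set of states plus a transition function over an input alphabet of photon events.

    # Toy model: a two-level atom as a deterministic finite automaton.
    # States = electron orbits; input alphabet = photon events.
    # "absorb"/"emit" are invented symbols for illustration, not physics.
    DELTA = {
        ("ground", "absorb"): "excited",   # photon absorbed, orbit shifts outward
        ("excited", "emit"): "ground",     # photon emitted, orbit shifts inward
        ("ground", "emit"): "ground",      # nothing to emit in this toy model
        ("excited", "absorb"): "excited",  # already excited in this toy model
    }

    def run(photons, state="ground"):
        """Feed a string of photon symbols to the atom-automaton."""
        for symbol in photons:
            state = DELTA[(state, symbol)]
        return state

    print(run(["absorb", "emit", "absorb"]))  # -> excited

The point isn't the physics (a real atom has many levels and probabilistic transitions); it's that "changes state predictably as it sees symbols" is, almost word for word, Sipser's definition of a finite automaton.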
Oh, I heartily second super_mario's recommendation. You won't regret it.
I had the pleasure of taking Sipser's class a few years ago, and the man could explain things so clearly. We used his book as our textbook, and it was just as clear.
You should also check out Scott Aaronson's blog[1] if you're into this sort of thing.
Occam's Razor itself actually says that you mustn't multiply entities beyond necessity. It gives no clue as to what is necessary, and so is, IM[not very popular it seems]O, of absolutely no worth.
Of course, if you can prove whether an entity is necessary to describe an outcome, then you have no use for Occam's Razor, so it seems rather to excise itself from being useful.
Ockham basically meant that the "simplest" explanation that fits the facts is the most likely. And as a matter of fact, we do have a precise definition for "simplicity": http://en.wikipedia.org/wiki/Kolmogorov_complexity
Anyway, it all boils down to http://en.wikipedia.org/wiki/Bayesian_probability , with what we commonly call "Occam's prior". Probability theory is wonderful, but to use it, you have to start from a set of prior probabilities. When you have zero knowledge, starting with probabilities "inversely proportional" to Kolmogorov complexity seems the most reasonable thing to do.
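To make the "Occam prior" concrete, here's a rough sketch (my own, not from the Wikipedia pages above). Such a prior is usually taken as proportional to 2^-K; since K itself is uncomputable, the sketch substitutes a crude upper bound, the zlib-compressed length in bits:

    # Sketch of an Occam-style prior over hypotheses-as-strings.
    # True Kolmogorov complexity K is uncomputable, so we substitute a
    # crude upper bound: the length in bits of the zlib-compressed string.
    import zlib

    def k_upper_bound(hypothesis: str) -> int:
        """Upper bound on K(hypothesis), in bits, via compression."""
        return 8 * len(zlib.compress(hypothesis.encode()))

    def occam_prior(hypotheses):
        """Prior probability proportional to 2 ** -k_upper_bound."""
        weights = {h: 2.0 ** -k_upper_bound(h) for h in hypotheses}
        total = sum(weights.values())
        return {h: w / total for h, w in weights.items()}

    # A repetitive (simple) hypothesis compresses well and so gets far
    # more prior mass than a random-looking one.
    print(occam_prior(["ab" * 30, "q7#xL!v9@pRz&wT2^mK4*bN8(jF6)dHs$cY1%gU5~eW3+aQ0-iO"]))

With actual data you would then multiply this prior by likelihoods in the usual Bayesian way; the compression step is the only hand-wavy part.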
Occam tells you nothing of truth. It simply says that your knowledge of a situation may be limited. Which seems as close to a truism as any aphorism could get.
I'll say it again: Occam's Razor (as told by Ponce at least) has nothing to say on whether one knows the truth, neither whether one has simplified sufficiently nor whether one has failed to add a necessary entity.
You appear to say here that the ability to calculate the Kolmogorov complexity, K, is necessary to establish the simplicity of a given form/function/algorithm/state, and so is an entity essential to applying Occam's razor. However, we know that K can't be calculated in general, and so, it seems, Occam's razor as modified by your requirement to determine the simplest explanation is itself insufficient.
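In fact the failure is provable, not merely practical. Suppose some program computed K; then you could write a short program that searches all strings and prints the first one with K > n. For large n, that searcher (about log2(n) bits plus a constant) would itself be a description of the string shorter than n: a contradiction. So all we ever get are upper bounds, e.g. from compressors, and those bounds can be arbitrarily loose. A toy demonstration (my own sketch, using zlib and the digits of sqrt(2)):

    # Compressors only give upper bounds on K, and the bound can be very
    # loose.  The digits of sqrt(2) have tiny K -- this short script
    # prints them -- yet to a generic compressor they look nearly random.
    import zlib
    from decimal import Decimal, getcontext

    getcontext().prec = 10000            # 10,000 digits of sqrt(2)
    digits = str(Decimal(2).sqrt())

    upper_bound = 8 * len(zlib.compress(digits.encode()))
    print(f"{len(digits)} chars; zlib upper bound on K: {upper_bound} bits")
    # The true K is at most the size of this script (a few hundred bytes),
    # but the compressor reports tens of thousands of bits.

So a compressor can certify "at most this complex", never "at least this simple", which is exactly the gap your objection points at.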
>When you have zero knowledge, starting with probabilities "inversely proportional" to Kolmogorov complexity seems the most reasonable thing to do.
For example, take the current situation with particle physics. It looks like a particle soup: very complex, with varied interactions. But more knowledge - perhaps entities which currently appear unnecessary to create a working theory - could well precipitate a far simpler theoretical model that revolutionises the analysis of particles and their interactions (a fully working unifying string theory, maybe).
To recapitulate, Kolmogorov complexity appears to assume that you know everything and therefore are certain that you're providing the best simplification. You don't, and you're not. Occam's Razor has no truth-generating/revealing ability.
As I'm sure is clear, I've not studied Kolmogorov or BLC before. WRT Occam's Prior, how do you judge the K of different entity types (e.g. are more spatial dimensions somehow less complex than more axiomatic constants)?