Pi Is Wrong (utah.edu)
78 points by llambda on Jan 27, 2012 | 64 comments


Instead of whether we should use Pi or 2*Pi, I find much more interesting R. Buckminster Fuller's statements that "Pi is operationally irrelevant" and that "Nature is not using Pi".

"Inasmuch as the kind of mathematics I had learned of in school required the use of the XYZ coordinate system and the necessity of placing π in calculating the spheres, I wondered, "to how many decimal places does nature carry out π before she decides that the computation can't be concluded?" Next I wondered, "to how many aribtrary decimal places does nature carry out the transcendental irrational before she decides to say it's a bad job and call it off?" If nature uses π she has to do what we call fudging of her design which means improvising, compromisingly. I thought sympathetically of nature's having to make all those myriad frustrated decisions each time she made a bubble. I didn't see how she managed to formulate the wake of every ship while managing the rest of the universe if she had to make all those decisions. So I said to myself, "I don't think nature uses π. I think she has some other mathematical way of coordinating her undertakings.""

From http://content.stamen.com/buckminster_fuller_and_the_beauty_...


Hogwash. Mathematics is a way of describing nature, not the other way around. And doesn't Gödel's incompleteness theorem make it clear that any such attempt must necessarily be flawed?


http://en.wikipedia.org/wiki/Heisenberg_uncertainty_principl... describes nature's limit on how precisely the position and velocity of an object can be measured. In physics, it does not make sense to discuss the position (or boundaries) of an object with infinite or absolute precision.

There are suggestions that http://en.wikipedia.org/wiki/Planck_distance is the smallest distance (or size) nature handles.

You could say nature has no concept of real numbers.


> In physics, it does not make sense to discuss position (nor boundaries) of object with infinite or absolute precision.

No, no, no, you are over-interpreting. The fact that you cannot measure with infinite precision doesn't mean it doesn't even make sense to talk infinitely precisely about positions. You can, for instance, make thought experiments: imagine an initial state infinitely precisely, then conclude that subsequent measurements will give such and such results, this time with finite precision.

According to current models, nature runs on infinitely precise mathematics. (And according to most sensible current models, that math is also deterministic.) The fact that your access to that math lacks infinite precision doesn't mean the math itself is imprecise.


Actually, it's you who is doing the over-interpreting. The Heisenberg uncertainty principle is a physical constraint that wave functions obey. If you define a state with an infinitely precise location, then the range of momenta is infinite. This result is fairly easy to reproduce, so it would probably be instructive to try it.

The measurement interpretation of the Heisenberg uncertainty principle is both a useful way to understand it and, historically, a way to ease experimentalists into accepting and believing it.
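(A sketch of the standard textbook calculation, added here for reference: the momentum-space wave function is the Fourier transform of the position-space one, so a perfectly localized state has a completely flat momentum distribution.)

    \psi(x) = \delta(x - x_0)
    \;\Longrightarrow\;
    \tilde\psi(p) = \frac{1}{\sqrt{2\pi\hbar}} \int \delta(x - x_0)\, e^{-ipx/\hbar}\, dx
                  = \frac{e^{-ipx_0/\hbar}}{\sqrt{2\pi\hbar}}
    \;\Longrightarrow\;
    |\tilde\psi(p)|^2 = \frac{1}{2\pi\hbar} \ \text{for every } p

Fixing the position exactly leaves every momentum equally likely, which is the limiting case of \sigma_x \sigma_p \ge \hbar/2.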


Just to be sure: the wave function is still a distribution of complex amplitude over a configuration space, right? If I understand correctly, you are saying that if we have a Dirac peak at some point, then the momentum spreads everywhere (infinite momentum spread). I don't dispute that.

I only meant that we could describe (in principle) a hypothetical wave function with infinite precision. Meaning, at each exact point in our hypothetical configuration space, we would specify an exact amplitude. (With the usual caveats due to the fact that we're talking about a distribution, not a function.) From there, we could predict the experimental results, which are bound to yield finite precision.

Now, current physics says we will never measure our wave function with infinite accuracy. But it also says that this wave function behaves lawfully, with infinite precision, from some (I think?) unknown initial state. Is there a problem left?

Edit: it just hit me that there is a problem if you don't believe in Many Worlds. Just know that I think the Copenhagen interpretation is crazy (I don't believe in the collapse of the wave function), and that currently, I find some form of the Many-Worlds Interpretation most likely.


I'm not sure what your argument really is here. Your very invocation of "the wave function" implies continuity in the evolution of the state vectors, described by the wave equation - it matters not that macroscopic measurements will only produce quantized results. You still used the continuity of the more abstract (but no less "real") state vector evolution.


The quote is not hogwash; in fact, it looks to me like you do not oppose its thesis.

Gödel's incompleteness theorems are only applicable to a very specific type of mathematical system: formal axiomatic systems of a specific power (allowing (+), (-), succ, ( * ), =, and a few more axioms relating these). Gödel says:

a) Within your formal system some statements simply cannot be proven or disproven. Kinda like the junk DNA of your formal system. Very, very roughly, imagine a bunch of islands connected by bridges. Some bridges lead nowhere. Getting from A -> B is a proof of something's truth. The set of bridges and islands is your deductive system. Gödel's first incompleteness theorem says there are some islands of legend where you cannot show that no bridges lead to them, nor can you find a path to get to them. They are effectively unreachable, "independent". You need a boat or plane to prove that the islands are even real instead of just a trick of fog.

b) The second incompleteness theorem follows from the first and says you cannot prove the consistency of your full formal system within that system.

Notice that our scientific theories thus far have been neither formal, consistent, nor complete. But what about nature? For Gödel to apply to nature, the question basically is: does a sufficiently powerful formal system underlie nature? That is, is the universe a Turing machine? Buckminster Fuller seems to answer yes, or something even weaker. He is basically saying that nature does not compute with arbitrary reals (well, what he is saying is actually stronger, since he disallows even computable reals). That is, hypercomputers do not exist in reality. A very reasonable stance, and one I agree with. http://en.wikipedia.org/wiki/Hypercomputation
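(For reference, a compact way to state the two theorems, for any consistent, effectively axiomatized formal system F containing enough arithmetic:)

    \text{GIT1:}\quad \exists\, G_F \ \text{such that}\ F \nvdash G_F \ \text{and}\ F \nvdash \neg G_F
    \text{GIT2:}\quad F \nvdash \mathrm{Con}(F)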

-----------

The next lines of musing leave the realms of fact and edge into philosophy.

If nature is Turing-equivalent, and hence axiomatically encodable by a formal system or as a computable function, then is it complete and consistent? Is a mind + universe a subset or superset of the universe? A system can be complete and consistent without us having access to a stronger system to prove it. So it is possible that the universe is complete. It could also be incomplete; we may never know. Something interesting is that Heisenberg's original terminology translates to indeterminacy, not uncertainty. Works attempting to bridge to Gödel typically do so by leveraging Kolmogorov randomness.


"Hogwash"

I wouldn't dare to say that about Dr Fuller...

"Mathematics is a way of describing nature, not the other way around."

He doesn't say otherwise; he says that our description of nature (namely, that we use pi) is flawed.


"he says that our description of nature (that we are using pi) is flawed"

Every description we'll ever have will probably be flawed to some degree.

I concur that it's hogwash.


Agreed. Forget Pi, maybe the way we count is flawed!


I think any way of counting will have the same basic axioms (http://en.wikipedia.org/wiki/Peano_axioms)

Also see Dedekind's "What are numbers, and what is their meaning?": http://www.math.uwaterloo.ca/~snburris/htdocs/scav/dedek/ded...


This is not mathematics, though, nor science; it's just the meta-mathematical philosophical opinion of a guy (mostly self-taught, by the way, and known for his cool inventions and ideas, not for his deep mathematical insights).

We also use Pi in computers, e.g. to draw circles, and computers (and displays) are even more discrete than nature.

Has he come up with something better to use in Pi's place?

That we don't use "all" of Pi to draw a circle doesn't matter; the "Pi" kind of circle is like the perfect archetype. We don't make the "perfect bridge" or the "perfect car" either, but that doesn't mean the concept of bridges and cars is useless.


Yes, it's "only" philosophy, but that doesn't mean it's utter BS.

Look, we either talk about the exact pi, or not about pi at all. In my computer there's no pi.

As for the alternative, at least he tried something:

"Fuller also claimed that the natural analytic geometry of the universe was based on arrays of tetrahedra. He developed this in several ways, from the close-packing of spheres and the number of compressive or tensile members required to stabilize an object in space. One confirming result was that the strongest possible homogeneous truss is cyclically tetrahedral."

http://en.wikipedia.org/wiki/Buckminster_Fuller#Philosophy_a...


> Look, we either talk about the exact pi, or not about pi at all. In my computer there's no pi.

You'd be surprised to hear about the wonderful concept of approximation.

As for the "alternative", sounds like bogus science to me...


> Mathematics is a way of describing nature, not the other way around.

I think this comes down to whether you are of the "realist" or the "Platonist" school of thought:

[1] http://en.wikipedia.org/wiki/Philosophy_of_mathematics


The description of a thing is not the thing itself.


Except in the case of performatives!

http://en.wikipedia.org/wiki/Performative_utterance


Not exactly.

Austin's proposition is that the statement, "I promise," is a promise, not a proposition.

But that does not make it a description of a state of affairs [or a picture of reality, per Wittgenstein]. A description would be, "You promised," and that is clearly not a promise.

YMMV.


Austin doesn't say performatives aren't propositions. Note that there are still truth-evaluable performatives, and that as Austin and others continued down the il/perlocutionary rabbit hole, they came to regard all language as essentially performative.

Don't get me wrong though, I see the point you want to make; but it misses the mark in statements like Rimbaud's "Je est un autre" ("I is another"). Writers, poets especially, do this a lot, pushing performativity to some limit where the form accomplishes what the meaning merely asserts.

Which totally reminds me of a line from Marshall McLuhan:

Just before an airplane breaks the sound barrier, sound waves become visible on the wings of the plane. The sudden visibility of sound just as sound ends is an apt instance of that great pattern of being that reveals new and opposite forms just as the earlier forms reach their peak performance.


Wait, what? "I promise" is as much a description as "You promised". A description can be made in the first person.


Austin's argument is that "I promise" is unique because saying "I promise" is the act of promising.

"You promised" is not a promise, it is a description of the act of promising (carried out by someone else).

[see How to Do Things with Words: http://www.amazon.com/How-Do-Things-Words-Lectures/dp/067441... ]


He didn't say "You promised" is also a promise. Rather, he said "I promise" is also a description.


Isn't that what I said?


My understanding of Gödel's theorems is that they only imply that a super-system will always be required to prove certain results in a base system.

Which does not refute our ability to model nature.


I never said it refutes our ability to model nature. I said it seems to refute the idea that nature itself runs on mathematics. Because what is the super system of nature?


Well, according to http://en.wikipedia.org/wiki/Pi

"Practically, one needs only 39 digits of π to make a circle the size of the observable universe accurate to the size of a hydrogen atom."

In my opinion, trying to answer the question "to how many decimal places does nature carry out π before she decides that the computation can't be concluded" naturally leads us to what is known as Heisenberg's uncertainty principle.
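(A rough back-of-the-envelope check of that figure, using order-of-magnitude values assumed here for the diameters of the observable universe and a hydrogen atom; it lands within a couple of digits of the quoted 39.)

    import math

    # Error in circumference = diameter * (error in pi), so keeping a
    # universe-sized circle accurate to one hydrogen atom needs roughly
    # d_universe / d_atom relative precision in pi.
    d_universe = 8.8e26   # metres, assumed diameter of the observable universe
    d_atom = 1.0e-10      # metres, assumed diameter of a hydrogen atom

    digits_needed = math.log10(d_universe / d_atom)
    print(round(digits_needed))  # ~37 significant digits, same ballpark as 39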


All the evidence is that nature does implausibly difficult computations all the time. Multiplying by pi has nothing on calculating the evolution of a quantum system.


We don't really have any evidence at all of exactly what nature computes or how it does it.

We do have models that mimic observations in nature, and those models do include some very difficult calculations. However the map is not the territory. We can't be sure that the models' inner workings mimic nature's inner workings, any more than you can conclude that two watches have identical mechanisms because they keep the same time. So there's the possibility that any or all of that difficulty could turn out to be epicycles.


Exactly -- nature isn't doing any calculations. The correct statement would be "it takes difficult calculations to describe what nature does using numbers."


Actually, if you look at the formal definition of computation (see for example "Introduction to the Theory of Computation" by Michael Sipser), a lot of natural processes are in fact computations. An atom is a computer. A photon may hit an electron, giving it an extra quantum of energy, and the electron shifts orbit (it skips further away from the nucleus); or the electron may emit a photon and shift its orbit closer to the nucleus of the atom. So, basically, it changes states predictably as it "sees" symbols (photons). It could be said that it recognizes a language whose alphabet is photons. The entire universe can be thought of as a computer.
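(A toy sketch of that framing, with an entirely made-up two-level "atom": the states are energy levels, the input symbols are photon events, and the transition table plays the role of the machine's program.)

    # Hypothetical two-level atom modelled as a finite-state machine.
    TRANSITIONS = {
        ("ground", "absorb_photon"): "excited",
        ("excited", "emit_photon"): "ground",
    }

    def step(state, symbol):
        """Advance one step; symbols with no defined transition leave the state unchanged."""
        return TRANSITIONS.get((state, symbol), state)

    state = "ground"
    for symbol in ["absorb_photon", "emit_photon", "emit_photon"]:
        state = step(state, symbol)
    print(state)  # prints "ground"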


I like this. Sipser is already on my reading list. I'll make sure I get to it once I finish working my way through SICP.


Oh, I heartily second super_mario's recommendation. You won't regret it.

I had the pleasure of taking Sipser's class a few years ago, and the man could explain things so clearly. We used his book as our textbook, and it was just as clear.

You should also check out Scott Aaronson's blog[1] if you're into this sort of thing.

[1] http://www.scottaaronson.com/blog/


> We can't be sure that the models' inner workings mimic nature's inner workings

William of Ockham says we have reasonable hope of coming damn close. http://en.wikipedia.org/wiki/Occam%27s_razor

Anyway, infinite certainty doesn't exist. Not even for 2+2=4. http://wiki.lesswrong.com/wiki/Absolute_certainty


Occam's Razor itself actually says that you mustn't multiply entities beyond necessity. It gives no clues as to what is necessary and so is, IM[not very popular it seems]O, absolutely of no worth.

Of course, if you can prove whether an entity is necessary to describe an outcome then you've no use for Occam's Razor, so it seems rather to excise itself from being useful.


Ockham basically meant that the "simplest" explanation that fits the facts is the most likely. And as a matter of fact, we do have a precise definition for "simplicity": http://en.wikipedia.org/wiki/Kolmogorov_complexity

You will note that this is quantitative reasoning, not qualitative. An equivalent way to come up with the same results is http://en.wikipedia.org/wiki/Inductive_inference

Anyway, it all boils down to http://en.wikipedia.org/wiki/Bayesian_probability , with what we commonly call "Occam's prior". Probability theory is wonderful, but to use it, you have to start from a set of prior probabilities. When you have zero knowledge, starting with probabilities "inversely proportional" to Kolmogorov complexity seems the most reasonable thing to do.
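(The prior being gestured at here is usually written, up to normalization, as the Solomonoff-style prior; "inversely proportional" is loose shorthand, since the weight actually falls off exponentially in the complexity.)

    P(h) \;\propto\; 2^{-K(h)}

where K(h) is the Kolmogorov complexity of hypothesis h (the length in bits of the shortest program that outputs h), so simpler hypotheses get exponentially more prior weight.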


Occam tells you nothing of truth. It simply says that your knowledge of a situation may be limited. Which seems as close to a truism as any aphorism could get.

I'll say it again: Occam's Razor (as told by Ponce at least) has nothing to say on whether one knows the truth. Neither whether one has simplified sufficiently nor whether one has failed to add a necessary entity.

You appear to say here that the ability to calculate the Kolmogorov complexity, K, is necessary to establish the simplicity of a given form/function/algorithm/state and so is an entity essential to applying Occam's razor. However, we know that we can't calculate K in general, and so, it seems, Occam's razor as modified by your requirement to determine the simplest explanation is itself insufficient.

>When you have zero knowledge, starting with probabilities "inversely proportional" to Kolmogorov complexity seems the most reasonable thing to do.

For example, take the current situation with particle physics. It looks like particle soup: very complex, with varied interactions. But more knowledge - perhaps entities which currently appear unnecessary to create a working theory - could well precipitate a far simpler theoretical model that revolutionises the analysis of particles and their interactions (a fully working unifying string theory, maybe).

To recapitulate, Kolmogorov complexity appears to assume that you know everything and therefore are certain that you're providing the best simplification. You don't, and you're not. Occam's Razor has no truth-generating/revealing ability.

As I'm sure is clear, I've not studied Kolmogorov or BLC before. WRT Occam's prior, how do you judge the K of different entity types (e.g. are more spatial dimensions somehow less complex than more axiomatic constants)?


Feynman said in his talk on quantum computing (IIRC) that this bugged him -- how according to our best understanding of physics it takes an infinite amount of computation to predict what happens in an arbitrarily small volume. And that was part of what set him to thinking about quantum-mechanical computers. (I should look up the actual quote, though.)


I'm a dodgy mathematician at best, but doesn't the simple elegance of Euler's formula suggest that π is beyond merely "relevant" and indeed one of a handful of the most fundamental building blocks of the universe?


Euler's formula is a mathematical law, not a physical one. There is no proof that the Earth is perfectly round, and plenty of evidence to the contrary.


exp(i tau) = 1
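(Both identities are the same Euler formula evaluated at different angles:)

    e^{i\theta} = \cos\theta + i\sin\theta,
    \qquad e^{i\pi} = -1,
    \qquad e^{i\tau} = e^{2\pi i} = 1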


Why does he assume nature requires decimal representation for her calculations?


He doesn't; the entire argument is base independent.


He assumes nature needs to calculate Pi out to a certain number of decimal places. The flaw in that reasoning is base independent.


See also [1], where Terence Tao suggests that 2 pi i may be even more fundamental than both 2 pi and pi, and [2] for some explanation of why.

[1] http://blog.computationalcomplexity.org/2007/08/is-pi-define...

[2] http://qchu.wordpress.com/2011/03/14/pi-is-still-wrong/


Of course tau * i is what's really fundamental; however, it's much more convenient to have the notation refer to a real constant. Firstly, one might be working in a context where complex numbers are not present, and having to use them just to refer to a real constant would be an annoyance. Secondly, if one defines a real constant tau, it is then easy to talk about the imaginary version i * tau; whereas if one defined an imaginary constant, having to divide by i or multiply by -i to get the real version would be somewhat annoying. Thirdly, if one went with the complex version, there'd be the whole "i or -i" problem due to the symmetry of the complex numbers -- OK, I guess this is not really an actual problem, but it would be slightly annoying, especially in contexts where dealing with complex numbers isn't really necessary. Whereas defining tau to be a positive real number gets rid of that problem.


Vi Hart explains the issue much more enjoyably:

https://www.youtube.com/watch?v=jG7vhMMXagQ


A slightly link-bait-esque title. Can we rename it to e.g. “Tau: Why 2π Would Be a Better Constant than Pi”?


Tau is pretty much designed for link bait. It's worse than the programming flamewars, and on even shakier ground when people claim that it matters. Just wait until people start calculating the amount of energy saved if people used tau instead of 2 pi.


Bob Palais' original article title, "Pi Is Wrong!", was intentionally provocative. When I suggested the use of tau for 2 pi in The Tau Manifesto, I followed his usage for the sake of continuity.

"Pi is wrong!" also makes for a pithier rallying cry. Can you imagine Braveheart yelling "Tau: Why 2π Would Be a Better Constant than Pi!" Neither can I.


Unfortunately, mathematical notation has tons of historical cruft like this. And there will be people who will fight tooth and nail to preserve all those artificial complexities.

Personally, I have always believed that improving the notation would have huge benefits in the future.


But then you have a different problem. See http://xkcd.com/927/ (it's the one about standards.)


Notation aside, I like the term "turn" used for the whole unit. If you say 2 pi or 1 tau, it's still cloaked in traditional mathspeak. That kills opportunities for intuitive understanding, which can be a foothold for some.


Its tendentious title aside, all very useful and reasonable, up to but not including the "\newpi" symbol at the end of the article, which IMHO is an abomination with three legs.


I tell people it's half right :-)



If only more articles like this made it to the front page... Instead, articles about the MPAA and RIAA seem to make it there more and more.


Still have the "tau circle" printout nailed to my cube wall from the last time HN hashed this issue out. (Rather like the idea...)


I've seen articles like this before, but wondered if pi originates from astronomy, working with a more or less 180-degree horizon.


Whether pi or 2pi or 4pi is more convenient depends on whether you're working with circumference, area, or solid angle. Circumference was probably the easiest thing to measure.
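(For concreteness, the same three quantities written with both constants, taking tau = 2 pi:)

    C = 2\pi r = \tau r,
    \qquad A = \pi r^2 = \tfrac{1}{2}\tau r^2,
    \qquad \Omega_{\mathrm{sphere}} = 4\pi = 2\tau \ \text{steradians}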



I'd rather type 2\pi than \frac{\tau}{2}.


Pi is exactly 3!


You can downvote me, but this is overkill.

All gung-ho just because pi sounded like pie, and it went viral just like Kolaveri Di.



