In fact, I wasn't sure about the full story, but the fact that Wikipedia repeatedly stated that Plouffe alone discovered the formula makes me think there is a story behind that.
Not this one, but algorithms that can calculate specific digit positions, and nothing else, are indeed used for world records, mainly for verification. If two radically different algorithms agree on the same digits at something like the trillionth position, then you can have high confidence in the rest of the digits.
Yes, that fact gave rise to projects like PiHex [1]; hexadecimal digits of pi around the 10^15-th position were already known in 2000, while the current full-digit record covers only the first 10^14 decimal digits.
Decimal is the only tricky one. Given that tau = 2·pi, simply use f(n-1) in the original binary formula for pi, and I'm sure there's something just as trivial for hex.
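To spell out why: in binary, π = 11.001001000011111… and τ = 2π = 110.01001000011111…, so doubling is just a one-place shift of the binary point, and the nth bit of τ is a bit of π at the neighbouring index (hence the f(n-1)). In hex it's almost as cheap: the nth hex digit of τ is (2·d_n + c) mod 16, where d_n is the nth hex digit of π and the carry c is 1 exactly when the next digit of π is 8 or more, so two extracted digits of π suffice.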
Yes, and there's also bold (𝛑), bold italic (𝝅), sans-serif bold (𝝿), and sans-serif bold italic (𝞹), all distinct from the run-of-the-mill Greek Small Letter Pi (π), which is often rendered quite differently from the "standard mathematical pi", in addition to, of course, being semantically different. There's a lot of fun stuff in the Mathematical Alphanumeric Symbols block (https://unicode-table.com/en/blocks/mathematical-alphanumeri...)
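If you want to poke at these yourself, here's a quick Python one-off (code points taken from the Unicode charts; the block runs U+1D400 to U+1D7FF):

    # Pi variants in the Mathematical Alphanumeric Symbols block, plus plain Greek pi.
    variants = {
        "greek small letter pi":  "\u03c0",      # U+03C0
        "bold":                   "\U0001d6d1",  # U+1D6D1
        "italic":                 "\U0001d70b",  # U+1D70B
        "bold italic":            "\U0001d745",  # U+1D745
        "sans-serif bold":        "\U0001d77f",  # U+1D77F
        "sans-serif bold italic": "\U0001d7b9",  # U+1D7B9
    }
    for name, ch in variants.items():
        print(f"U+{ord(ch):04X}  {ch}  {name}")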
Mathematicians (and people in other math-heavy fields) use an interesting convention, something like a "semantically local variable": a symbol like n can be reused multiple times, and as long as the context is clear, you're allowed to do that.
Yes, it looks like you're right. But I don't understand the reasoning. Is he actually saying you can use this formula to compute pi^n to n decimal places? So it doesn't produce arbitrary digits on its own, but excellent approximations to pi^n?
And this was by taking 4 terms of the infinite product expansion of the zeta function. 2·n digits would presumably be given by taking more terms (but quite a few, one for every prime under 100, so in some sense this method has bad convergence when you want the mth digit of pi^n and m >> n).
Seems like the real breakthrough was computing large Bernoulli numbers without a very precise value for pi, and this is a relatively easy corollary for someone to spot and get a paper out of.
IOW, the arXiv title might have lost the "nth" because of some formatting thing. The title inside the PDF says "A formula for nth digit of \pi and \pi^n"
Is there a physical limit to how many digits of pi can ever be computed/represented in the universe?
For example, let's say we need one atom for each digit of pi that we want to store; the max number of digits of pi would then be something like the total number of atoms in the universe, minus the atoms required to compute and store the digits.
What do you mean that we "only need" 63 decimal places?
A computer can calculate and hold a lot more than 63 digits.
So I guess my question is more like: if the universe was just one big computer dedicated only to calculating (and storing) digits of pi, how many digits could it get to calculate, max?
There's no possible circle in our universe that needs more than those digits of pi for accurate calculations. Obviously it's fascinating to study pi for theoretical reasons, and I'm sure someone can tell me there are also practical reasons to do so. I was just surprised to find out that, given the Planck length, the practical need for precision is astonishingly low compared to what's been calculated, even more so compared to your theoretical upper limit!
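For anyone who wants the back-of-envelope version of that claim: the diameter of the observable universe is roughly 8.8 × 10^26 m and the Planck length is roughly 1.6 × 10^-35 m, a ratio of about 5 × 10^61. So around 62 significant digits of π already pin down the circumference of a universe-sized circle to within one Planck length, which is where figures like "63 decimal places" come from.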
It is all curious, but it seems that a procedure that calculates the nth digit using some other functions that require O(n) calculations (e.g. Bernoulli numbers) is not that exciting, as it's just a speedup compared to a naive calculation (maybe a big one, but still).
Although, because pi is a transcendental number, maybe it is impossible to have an algorithm that returns the nth digit in O(1) operations? Does anyone know?
Upd: thinking more about it, just to read all the digits of the number n, one needs log(n) operations, so I guess anything that has performance of O(log n) can be considered easy (e.g. the digits of rational numbers can be computed in O(log n)).
The normal method for computing digits of pi is not O(n), so this could still be interesting. I think it's probably better to compare this with the BBP formula [1]. The commonly used algorithm for pi, I think, still takes O(n log^3 n) time [2].
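For reference, here's a minimal Python sketch of BBP-style hex digit extraction (the standard spigot technique, not the formula from this paper); float rounding limits it to positions up to around 10^7:

    def bbp_hex_digit(n):
        # Hex digit of pi at position n+1 after the point, via the BBP formula:
        # pi = sum_k 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))
        def S(j):
            # fractional part of 16^n * sum_k 1/(16^k * (8k+j))
            s = 0.0
            for k in range(n + 1):
                s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
            k, t = n + 1, 0.0
            while 16.0 ** (n - k) / (8 * k + j) > 1e-17:
                t += 16.0 ** (n - k) / (8 * k + j)
                k += 1
            return s + t
        x = (4 * S(1) - 2 * S(4) - S(5) - S(6)) % 1.0
        return "0123456789abcdef"[int(16 * x)]

    # pi = 3.243f6a8885a3... in hex
    print("".join(bbp_hex_digit(i) for i in range(12)))  # 243f6a8885a3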
Even rational numbers, which have eventually repeating digits, are not O(1), because in order to calculate the nth digit you need to compute n modulo k, where k is the length of the repeating block, and that requires reading the entire input number n (thus it runs in O(log n)).
The only exceptions are rational numbers where k is a power of two.
Take 0.1010101010…: you can get the nth digit by reading off the lowest bit of n and checking whether it is 0 or 1.
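A small sketch of the general case (names are just for illustration):

    def nth_digit_repeating(prefix, cycle, n):
        # nth digit (1-indexed) after the point of 0.<prefix><cycle><cycle>...
        # The n % len(cycle) step is what forces reading all O(log n) bits of n.
        if n <= len(prefix):
            return prefix[n - 1]
        return cycle[(n - len(prefix) - 1) % len(cycle)]

    print(nth_digit_repeating("", "10", 7))     # '1' -- 7th digit of 0.101010...
    print(nth_digit_repeating("0125", "0", 3))  # '2' -- 3rd digit of 0.0125000...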
There are transcendental numbers where the nth digit can be computed in O(log n), like the Champernowne constant, though.
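For instance, a sketch for the Champernowne constant 0.12345678910111213…, whose loop runs O(log n) times:

    def champernowne_digit(n):
        # nth digit (1-indexed) after the point of 0.123456789101112...
        length, count, start = 1, 9, 1  # block of `count` numbers with `length` digits
        while n > length * count:
            n -= length * count
            length += 1
            count *= 10
            start *= 10
        num = start + (n - 1) // length  # which number the digit lands in
        return str(num)[(n - 1) % length]

    print("".join(champernowne_digit(i) for i in range(1, 16)))  # 123456789101112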
ChatGPT, please write a function in Python for the nth digit of pi.
Sure, here is a function in Python that calculates the nth digit of Pi using the Chudnovsky algorithm:
from decimal import *

def nth_digit_of_pi(n):
    getcontext().prec = n + 1
    C = 426880 * math.sqrt(10005)
    M = 1
    L = 13591409
    X = 1
    K = 6
    S = L
    for i in range(1, n):
        M = (K ** 3 - 16 * K) * M // i ** 3
        L += 545140134
        X *= -262537412640768000
        S += Decimal(M * L) / X
        K += 12
    return +str(S / C)[n]
Interesting! If you correct the obvious errors, then it appears to calculate 1/pi:
>>> from decimal import Decimal, getcontext
>>> import math
>>>
>>> def nth_digit_of_pi(n):
... getcontext().prec = n + 1
... C = Decimal(426880 * math.sqrt(10005))
... M = 1
... L = 13591409
... X = 1
... K = 6
... S = L
... for i in range(1, n):
... M = (K ** 3 - 16 * K) * M // i ** 3
... L += 545140134
... X *= -262537412640768000
... S += Decimal(M * L) / X
... K += 12
... return str(S / C)[n]
...
>>> "".join([ nth_digit_of_pi(i) for i in range(50) ])
'0.318309886183790698041462054251035408427213165074'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'manojlds' object has no attribute 'humour'. Did you mean: 'joke'?
I don't understand this claim on the bottom of page 1 (emphasis added):
> It is easy to verify that with n = 1000, the error on π is of the order of 0.293193 × 10^-303, which is less than 2^-1000. So the thousandth position of this expression is the 1000th bit of the number π.
How does that follow? Take the number 1 - 2^-n. The difference between this number and 1 is 2^-n by definition, so it can be made arbitrarily small by varying n, but all bits are wrong. Addition can propagate all the way to the "decimal" point (and even beyond), so an error bound doesn't normally say anything about individual bits or digits. What am I missing here?
Wanted to compliment this paper for being well written. I'm not a practicing mathematician but I was able to easily follow along here and that was a cool feeling.
Could someone who is a practicing mathematician speak to the practical application of this? From what I understand from reading it, this seems like an interesting curiosity, but the Chudnovsky formula it refers to seems better at doing the same thing for any practical purpose.
>Does anyone know a practical application where something like this would help?
It's another way to help verify that super long calculations of pi are correct, and in turn I guess one basic "practical application" of calculating pi to many digits is as part of the suite for verifying new hardware. How do you know that fancy fresh new silicon is actually crunching the numbers correctly, not producing garbage in some subtle way at enough significant digits? While there are lots and lots of checks used to avoid a repeat of hardware bugs of days past (like the forever infamous Pentium FDIV), one simple sanity check/stress test is calculating out numbers like pi a bunch of different ways to huge numbers of digits and making sure the result is always correct. If it's not, there's clearly a problem somewhere.
I believe it can be used to verify calculations of new digits of pi. For example, if you tell me you just computed five trillion digits of pi, I can ask you for specific digits near the end and check that they match what this formula produces.
That's an interesting point. You could use this to hash values with infinitely scaling difficulty by adding an offset to n (i.e. n' = n + difficulty). The only issue is collisions, since the output is always 0-9, but if your PoW check is against multiple digits then it might work? Then again, you could still have potentially infinite repetitions of the same sequence of digits at different inputs.
Basically the work would be: give me a starting index from which the next N digits are "45334138023580". Finding that index can take ages, while verification is O(1)-ish.
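A rough sketch of the verification side (nth_digit is assumed to be any single-digit extractor for pi, e.g. a BBP-style one; the names are hypothetical):

    def verify(claimed_index, target, nth_digit):
        # Check that pi's digits starting at claimed_index spell out target.
        # Cost: len(target) digit extractions, regardless of how long the search took.
        return all(nth_digit(claimed_index + k) == int(d)
                   for k, d in enumerate(target))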
Imagine using that to provide storage systems similar to IPFS, but where the blockchain network only stores the metadata and no data!
The very first denominator seen is (2π)^(2n), the last denominator on that line is the same, the first denominator of the second line is π^(2n+1), and the second denominator in the second line is the same. Are you not seeing that?
Are you talking about the inequality on the first page? That is at the end of a sentence that starts "The calculation is made from the two inequalities ...". The next sentence is "By isolating π in both cases, we can derive an approximation of the latter."
Look at the first denominator seen in both lines of the equation, the last denominator in the first part of the equation, and the second denominator in the second part of the equation.
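For context, those denominators look like the classical Euler-type identities (standard facts; whether the paper uses exactly this form is my assumption):

    ζ(2n) = Σ_{k≥1} 1/k^(2n) = (-1)^(n+1) · B_{2n} · (2π)^(2n) / (2·(2n)!)
    Σ_{k≥0} (-1)^k/(2k+1)^(2n+1) = (-1)^n · E_{2n} · π^(2n+1) / (4^(n+1)·(2n)!)

The first ties Bernoulli numbers to (2π)^(2n); the Euler-number analogue is where π^(2n+1) shows up.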
So does it mean that these pi-calculating competitions & records are now going to devolve into special olympiads of pointless storage capacity? What will the supercomputers do in view of this discovery? Fascinating times for humans as well as for the machines...
Any file can be represented as a binary string, and a binary string is just an integer value - which is to say binary strings are simply indexes of files in binary space (i.e. the set of all possible permutations of bits).
Looks to me like nfs is simply transforming a binary space index into a pi-space index. Some files may compress to a smaller value than they are in binary space (if you get lucky), but make no mistake, some files will be much much larger (i.e. the files you're trying to store don't show up in pi until an index value that is a virtually infinite number of digits long).
It's actually not known whether pi contains every combination of digits (in any base 2 or greater) or not. It feels likely, but really all we know is that it's transcendental and seems pretty random from the parts we've generated.
For me, it stimulates a mental shift that could be good exercise elsewhere: instead of storing the string, you find the string somewhere and store its index.
It also stimulates the imagination: what other transcendental numbers might this work with? How long do you have to search in the digits to find your string? What can you say about the size of the index (how far you searched) vs. your string length? Etc. It's patterns all the way down.
Yes, we are too focused on the reverse problem.
We know exactly how much time we need to compute the nth digit of pi. But how much time do we need to find a specific string of digits? Seems like a more interesting question.
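A brute-force sketch of that search, using mpmath for the digits (and note that if pi behaves like a normal number, a string of length L should first appear around index 10^L, so the index you'd store is roughly as long as the string itself):

    from mpmath import mp

    def find_in_pi(s, ndigits=100_000):
        # Scan the first `ndigits` decimals of pi for the digit string s.
        mp.dps = ndigits + 10                   # working precision
        decimals = mp.nstr(mp.pi, ndigits)[2:]  # strip the leading "3."
        return decimals.find(s)                 # -1 if not found in this window

    print(find_in_pi("45334138023580"))  # most 14-digit strings won't show up in 10^5 digits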
Why would it? Calculating the first few trillion digits may be (and probably is) a lot faster than doing a trillion calculations of individual digits, but even if it isn't, people will simply raise the bar and compute even more digits.
> What will the supercomputers do in view of this discovery?
Whatever they do now. It's not like this is what supercomputers are built for. Computations like these are used more to gain confidence that the hardware works.
Nothing will change there. There are already much more efficient methods of calculating π. What this lets you do is jump straight to the nth digit without calculating all the ones before it.
Couldn't you "cheat" a bit in a timed competition if you knew exactly how many digits you would reach, and kick this off in parallel to extract a few more to tack onto the end? Or would the parallel job be too slow for that? Or do the competitions constrain CPU or other resources to a ceiling?
There is nothing of note here. This paper should be in 'general math', not number theory. You need to know the Bernoulli numbers in order for this to work, which is more computationally difficult than computing pi. So what? Yeah, Plouffe is a famous person in computer science and math, but this does not measure up to the hype. It reminds me of the stuff I tinkered with in high school when I first learned infinite series... but not publishable-level, sorry.
I would think the "explicit expression" part is new and enough to make it publishable, and I fail to see why not being useful in practice should ever be a factor in determining whether something is number theory.
Now, as for the hype, I don't see any from the authors.
The only possible "last digit" it could not have is zero: if the last digit were to be a zero, that digit would be superfluous and could be discarded, making the digit before that the real "last digit".
*sigh* It was said in jest, in response to another comment also said in jest.
That aside, the apparent "empty space" on either side of a number in reality consists of infinite zeroes.
Just because we typically choose to "display" most numbers without those zeroes doesn't mean they aren't there in a very real, practical and important sense.
They are there, because if they aren't there, then something else might be, and then all our numbers would have to be assumed to be wrong or incomplete... so instead, we assume the zeroes.
The terrible reality is that the zeroes extend off infinitely in either direction, and we use empty space as shorthand for this so we don't have to spend longer than the age of the universe writing a single number with full accuracy.
3 recurring represents a particular quantity continuing forever.
0 recurring represents the end of a quantity, and the absence of any further quantity, forever.
Eg: 0.012500000000000...
The significant portion is 0.0125 - the recurring zeroes serve a mathematical role akin to that of a full-stop in a sentence. Hence zero being (jokingly, but in a sense truthfully) always the "last digit".
This is just an artifact of representing it in base 10. In base 3, 0.0125 has the same value but would have a non-terminating representation: 0.00010001…
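(To check that: 0.0125 = 1/80 = 1/(3^4 - 1), and 1/(3^4 - 1) = Σ_{k≥1} 3^(-4k), which is exactly 0.000100010001… in base 3.)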
Yes... proving that would require an infinite-capacity rounding mechanism, which cannot exist, because when trying to build it, you always run out of universe. Thusly, "impossible" is equivalent to "false", by default.
The paper seems absolutely brilliant, but the grammar is very strange (there's even what appears to be a typo in the paper where he says "rand n" instead of "rank n"). Odd that he wouldn't have worked with somebody with better written English before publishing.
If you can download it from arXiv, it is published. Researchers don't really care whether the paper went through the formal peer review and publication process in some journal, because that process is of little value: they can figure out that the author meant rank instead of rand, etc.
"Published" in this context is short for "published to a journal" or more completely for this thread "gone through the full edit cycle you would expect from a paper published to a journal".
> When an article is published, the author may wish to indicate this in the abstract listing for the article. For this reason, the journal reference and DOI (Digital Object Identifier) fields are provided for articles.
This can only make sense if "public abstract on arxiv" is not the same as "published" in the way you mean.
I understand what it means, which you could have seen by reading my comment carefully. My point is that this publication process is of little value these days.
I believe you when you say you understand, but your first sentence was "If you can download it from arxiv, it is published." Which is precisely what GP was responding to.
Furthermore, this thread started with someone complaining about the lack of polish which the publication process can provide.
But explaining or footnoting everything defensively, rather than pointing out misconceptions as they arise, seems excessive.
Further, someone may deliberately use a minority definition in order to stress a philosophical point. One valid viewpoint is that a publication is a publication is a publication. A preprint, a blog post, or a peer-reviewed journal publication should be given equal weight as being "published." I'll call this position #1.
Another valid point is that some works are incomplete, and may go through multiple drafts before reaching the final, "published" form, which it's best known by, and is likely the most polished of the versions. I'll call this position #2.
Often people want feedback, and one way to get feedback is by publishing a preprint. (There are others. I recall reading of a mathematician, about a century ago, who would first publish in his home country, and native language, to get friendly feedback from colleagues, before publishing in English. He's cited for his later publication.)
Someone who holds position #1 might fully understand that I use the dictionary with position #2, and still deliberately use position #1 in order to popularize that #1 dictionary. The difference isn't one of confusion or lack of knowledge, but one of viewpoints.
Let me be clear - I'm not saying that that's the case here. Instead, my example is meant to show it's not necessarily so simple as "shares the same dictionary" or not.
There is definitely a difference between "public(ly available on arXiv)" and "published (in a peer-reviewed journal)". Depending on what I want to do, I may prefer one or the other for my work as a researcher.
Sure, but if the paper is never published in a journal and just exists as a PDF on arXiv forever, you won't treat it any differently than if it was published. You'll still ignore it if it looks crap, still read it if it looks promising, still tell your friends about it if it has interesting results, still cite it, etc. In short, it doesn't matter much if the paper was formally published.
The "doesnβt matter much" is, I think, the crux of the matter.
commandlinefan's earlier negative aside concerned language quality.
IMO, people hold papers published in peer-reviewed journals to a (slightly?) higher language quality standard than what may be the first of several preprints. And I think anamexis was pointing out that difference.
As Wikipedia says: "The immediate distribution of preprints allows authors to receive early feedback from their peers, which may be helpful in revising and preparing articles for submission." https://en.wikipedia.org/wiki/Preprint
I expect that may include identifying and fixing typos.
You have no idea how atrocious the English is in papers that I see as a reviewer. And depending on journal I don't even get to reject it for that as long as the science is sound.
> And depending on journal I don't even get to reject it for that as long as the science is sound.
And you shouldn't. As long as it is somewhat legible, forcing people who are not native English speakers to conform 100% to another language in order for their science to be published is horrendous.
There was a time, 300 years ago, when great thinkers who did not speak French or German could not publish their thoughts and answers, and to us now that seems atrocious. Let us not go ahead and redo that with English.
The problem is that "somewhat legible" is not a given, and you don't know whether the version that comes out of language editing (if it is still done; a lot of journals skip it to save money) is still scientifically correct.
Edit: A possible solution would be to have "good idea, please language edit and send back for further review" as an option along with "reject" / "needs major revisions" / "needs minor revisions" / "accept".
Actually, sometimes "native" English is much harder for non-natives to read than non-native writing, because the latter tends to have 1. a smaller vocabulary, and 2. more straightforward sentence structure.
If you say so - but honestly, why not just publish it in French then (I think the author is French)? If I were publishing something in French - technical or scientific or otherwise - I'd want it to be reviewed by a native speaker.
I can list that in the feedback to the authors or recommend that in the confidential remarks to the editor. Whether they will listen (and force the authors if necessary) depends on the journal.
He used to teach at my university. He is a fantastic man. This is serious: he is crazy about numbers. He is one of the guys behind the OEIS (oeis.org).
That is not the first formula he has found for pi, and some of the previous ones have been used to break world records for the number of known decimals.