"It is not transparent to me why this definition is sensible."
Because if I have a six digit meter and I measure 5.0000 volts I know that it is 5.0000 volts. I don't see why that is confusing at all.
"Here, yyy has correct one and three significant digits relative to xxx, but incorrect 2 significant digits, since the truncation at x2x_2x2 and y2y_2y2 do not agree even to the first significant digit. "
This doesn't make sense. Y has the correct one?
I think you might be confused about the purpose of significant digits. Significant digits are intended to be paired with a tolerance. If there is no tolerance quoted with the number, it is meaningless to talk about significant digits. You arbitrarily defined the tolerance as +/-0.5 (half a unit). That is not always true, in fact almost never true. Consider test equipment: not every multimeter is going to be +/-0.5 mV; some might be +/-0.3 µV. It varies.
I agree that it’s better to state the uncertainty [1] together with the measurement, but when this isn’t done, there is a standard convention for significant digits, explained in ISO 80000-1:2009 [2], subclause 7.3.4:
When a number is given without any further information,
it is generally interpreted so that the last digit is
rounded with a rounding range equal to 1 in the last digit
(see Annex B). Thus, for example, the number 401 008 is
generally assumed to represent a value between 401 007,5
and 401 008,5. In this case, the maximum magnitude of the
error in the number 401 008 is 0,5.
This is consistent with the author’s “less than half a unit.”
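To make that concrete, here's a small Python sketch (my own helper, not anything from the standard) that computes the interval an unadorned number implies under that convention:

```python
from decimal import Decimal

# Interval implied by a number written with no stated uncertainty, per the
# ISO 80000-1 reading: the true value is within half a unit in the last place.
def implied_interval(s: str):
    d = Decimal(s)
    half_ulp = Decimal(1).scaleb(d.as_tuple().exponent) / 2
    return d - half_ulp, d + half_ulp

print(implied_interval("401008"))  # (Decimal('401007.5'), Decimal('401008.5'))
print(implied_interval("5.0000"))  # (Decimal('4.99995'), Decimal('5.00005'))
```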
Exactly this. It may be useful for OP to consider what the hypothetical opposite of significant digits would be: insignificant digits, i.e. digits that tell you nothing. A leading zero can be omitted from a number without any loss of information. A trailing zero, when correctly used in a position that is not below the tolerance, gives you significant information. On the other hand, if you start combining numbers with different tolerances but do not truncate the result, you will have a bunch of trailing digits that are meaningless in practice because they are below the combined tolerance.
I'm not entirely clear on the author's thinking. But consider the case of watching a real world voltmeter read 1.02v, 1.01v, 1.00v, 0.99v. Did we really lose a digit of precision there? The leading digit transition between 1 and 9 is a boundary condition that messes up that rule.
I think the catch is that you aren't keeping the number normalized. Significant digits somewhat assume scientific notation. So that last number is 9.9e-1. Two digits.
I'm not sure what you mean here, probably because I worded it poorly. Of course your instrument will have different scales, and you should use the appropriate one.
But the measurement was done with a number of significant figures; this is independent of how you write it. Nothing is "lost": if the measurement had 2 sig figs it is 0.99; if it had 3 it was 0.990. Or 0.991 or whatever.
They said if your device measures 1.00, it is three significant digits. If it measures 0.01, it is one significant digit. From the same measuring device.
So it is counterintuitive that the measurement just going down below 1 somehow causes it to "lose" significant digits. It didn't lose precision, though. Just the number of digits that are significant.
I'm not sure what you are saying. 0.01 has just one significant digit.
Consider you have a 10 meter stick with centimeter markings. This can give 0.01 as the smallest measure, a number with 1 significant digit. It can also give up to 10.00, a number with 4 significant digits. In both, the precision of the measurement is to a centimeter.
I think you missed what they found weird. The subject for that was that 0.0491 only has three significant digits. It is not obvious why, if you don't normalize the number.
There's also the thing where the 1 in 0.0491 has the exact same significance as the 2 in 0.0982.
IOW, the final significant figure has continuously variable significance (but discretely intervalled numerically) depending on the absolute value of what it is supposed to be significant to.
This is a feature of granularity.
And rounding error has a variable influence on the intervals and how "discreet" or blatant any error may appear.
Then you've got error propagation and pretty soon you're going to need something like Gustafson's unums:
No, what is weird is that the original author seems to think you can look at a number and know how many significant digits it has.
This is almost completely wrong: the number of significant digits is something given by metadata; you cannot look at a number and know that it has been properly rounded to the correct number of significant digits. You always need to be told this special circumstance, because most numbers you encounter have in fact not been properly rounded.
Whenever you've not personally done the error propagation and rounding yourself you shouldn't assume someone else has done it for you, and you should check if they have before trusting it.
For example, the number you are viewing could have been produced with finite precision floating point math and then just rounded to an arbitrary chosen (small) number of digits.
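A quick way to see that in practice (a toy example of my own, not from the article): the same underlying double prints with however many digits you ask for, and nothing in the printout tells you which of them are trustworthy.

```python
# Ten additions of 0.1 accumulated in binary floating point: mathematically 1.0.
total = sum([0.1] * 10)
print(f"{total:.3f}")   # 1.000
print(f"{total:.17f}")  # 0.99999999999999989 -- the trailing digits are rounding noise
```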
Not sure how you're supposed to do floating point math while following that rule... you'll have better luck with writing C code with all semicolons in place.
> "Here, y has correct one and three significant digits relative to x, but incorrect 2 significant digits, since the truncation at x_2 and y_2 do not agree even to the first significant digit. "
> This doesn't make sense. Y has the correct one?
I think that's a language issue, like "that x̂ agress to x upto p" in the next section. I think what they're trying to say is the following. If you round x = 0.9949 to 1, 2, or 3 significant digits, you get 1, 0.99, or 0.995 respectively. If you round y = 0.9951 to 1, 2, or 3 significant digits, you get 1, 1.0, or 0.995 respectively. So x and y look equal if you round to 1 or 3 significant digits, but not if you round to 2 significant digits.
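If you want to poke at it yourself, here's a quick sketch (mine, leaning on Python's %g formatting for significant-figure rounding) that reproduces exactly that pattern:

```python
# Round x = 0.9949 and y = 0.9951 to 1, 2 and 3 significant digits and compare.
def round_sig(v, p):
    return float(f"{v:.{p}g}")

x, y = 0.9949, 0.9951
for p in (1, 2, 3):
    print(p, round_sig(x, p), round_sig(y, p), round_sig(x, p) == round_sig(y, p))

# 1 1.0 1.0 True
# 2 0.99 1.0 False
# 3 0.995 0.995 True
```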
This is such a great example of how people that know math(s) really struggle to explain things to people who don't, because they struggle to realise how much lingo they are using.
The first point explains something as basic as absolute and relative errors. Great.
1 page down (s)he's casually thrown in function notation with R representing the real numbers with no explanation. Anyone who isn't familiar with this notation is immediately intimidated. But I'd wager almost anyone who is familiar with it didn't need to read the page before it. But also, I think it's just unnecessary to be this specific in the first place, if you're trying to quickly explain concepts.
It's actually a huge problem in maths and even Wikipedia (I feel) is pretty guilty of this.
If you take a (relatively) key mathematical concept, such as the Taylor series, then the opening gambit is completely incomprehensible to anyone who isn't already knowledgeable in maths: https://en.wikipedia.org/wiki/Taylor_series
This, to me, seems in huge contrast with other scientific fields. Medical Wikipedia, for example, caters to the "average reader" first and doctors later. This is entirely appropriate for an encyclopedia. For example: https://en.wikipedia.org/wiki/AV_nodal_reentrant_tachycardia...
One comment regarding Wikipedia: you may be aware of the “Simple English” language option, which can help with making some pages more accessible. It’s in the language menu, or you can just change the language prefix in the URL, such that your example above becomes https://simple.m.wikipedia.org/wiki/Taylor_series. Granted, it’s not immediately obvious that it’s there, but perhaps it’s one reason that the main article can be more precise off the bat rather than explanatory. Furthermore, I wonder if Wikipedia math editors are less concerned with the notion of an encyclopedia as learning device and treat an article as a canonical overview of a topic, with all of the baggage that comes with that.
I’ve been using wikipedia for years and never knew about this. I wish there was a prominent link on the page “Switch to simplified explanation” or something. As someone who knows barely enough maths to get by this would have helped me immensely over the years.
On the other hand, as someone who knows a bit of math, I love those Wikipedia articles that get straight to the point, and hate those where I first have to dig through a pile of (to me) useless "intuition".
I'm not sure if it is really possible to learn/teach math (like actual math, not just some loose intuitions, because those don't really help you with anything) without going through notation and prior knowledge. Notation exists for a reason.
I run into that same problem a lot while trying to find tutorials that target beginners. Absolute beginners, the kind showing up to my organization's Google Summer of Code chat rooms. It's rare to find tutorials that just keep it simple enough.
I wanted a regex one recently. First off, they all presumed mastery of the concept of a string, but more seriously, none of them had a really good definition of a regex! I still can't think of one myself, but I know it's probably not "pattern to match specific strings", because it uses an uncommon meaning of "pattern" and a jargon meaning of "match". And of course they all immediately used terms like "formal languages".
The perldoc page 'perlretut' did pretty well with "a template that is used to determine if a string has certain characteristics" - ok, a bit abstract for my taste, but let's keep going - then a bit further down they use "modifier" (flag) with no definition. But I really had to look hard for flaws in that one. I guess the only way to actually nail this is to supervise successive groups of students as they try to learn using your tutorial-in-progress...
Is encountering function notation and the symbol for the set of real numbers all that different from encountering unfamiliar vocabulary? It is slightly harder to Google for notation, but otherwise I do not see a major difference.
In the Wikipedia page you linked, I immediately encounter several terms that I do not know the meaning of. I must either click through or search separately to fill in my own gaps.
Your criticism is not totally disagreeable, but we have to wonder where to draw the line. Is a "Hacker's Guide to Numerical Analysis" obligated to hold your hand all the way through basic notation? The notation you mention in your comment is the "Hello world" of mathematics.
I have found undefined notation to be the bane of my existence when trying to teach myself a field of math. How do you Google mathematical notation? It can be a giant and opaque barrier to deepening one's knowledge when studying the subject on one's own.
Maybe this will help: https://detexify.kirelabs.org/classify.html Then you can e.g. use the name of the TeX command for the symbol to Google its meaning (Stackexchanges are often full of questions looking for the TeX command for a symbol, so you could use these to infer in the other direction)
I sort of agree. But there's no right answer here. We can't expect every Wikipedia article to take you through the basics of the subject from scratch. A certain level of knowledge just has to be assumed, and it's hard to know where to draw that line.
Are there any plans for a physical book? My attention span seems to be calibrated to focus for a prolonged period on paper but less so for information on screen.
Anytime you're doing a lot of floating-point math, the imprecision can bite you in non-intuitive ways. But you can mitigate that by fixing the order of operations to minimize error.
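A minimal sketch of the ordering point (my own numbers, not from the article): the same values summed in two different orders in plain double precision.

```python
# Adding each small term directly to a huge accumulator rounds it away;
# summing the small terms first, then adding the big one, keeps them.
big = 1e16
tinies = [1.0] * 1000        # contributes 1000 to the true sum

front = big
for t in tinies:
    front += t               # each 1.0 is only half an ulp of 1e16, so it rounds away

back = sum(tinies) + big     # small terms first, big term last

print(front)   # 1e+16
print(back)    # 1.0000000000001e+16
```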
I suppose one other specific example of a similar concept would be that when training deep learning models you're doing a ton of matrix multiplication and tweaking the weights constantly. Especially when using lower-precision numbers, you can hit issues where your coefficient gets too close to 0, and the next thing you know it IS 0, and everything it gets multiplied by also comes out to 0, which messes everything up. So you have to avoid getting too close to 0: https://pytorch.org/docs/stable/amp.html#gradient-scaling.
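A toy version of that failure mode (mine, separate from PyTorch's actual gradient scaler): repeatedly scaling a small half-precision value down until it underflows to exactly zero.

```python
import numpy as np

g = np.float16(1e-4)          # a small gradient-like value
for step in range(6):
    g = g * np.float16(0.1)
    print(step, g)
# Within a few steps g drops below float16's smallest subnormal and becomes
# exactly 0.0; from then on, everything multiplied by it is also 0.0.
```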
This can happen more seriously than with floating point, since the dynamic range of integer math is relatively much more limited for the same bit depth. It can be especially bad with division followed by inherent truncation.
The inability to represent very big values as well as very small values might need to be worked around.
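A minimal sketch of both traps (my own example, not from the parent): truncation when you divide too early, and the wrap-around you'd get from reordering in a fixed-width type.

```python
reading = 3                     # e.g. 3 ADC counts, to be scaled by 3/4
print((reading // 4) * 3)       # 0 -- dividing first truncates everything away
print((reading * 3) // 4)       # 2 -- multiply first, divide last

# The catch in a fixed-width integer type: multiply-first can overflow instead.
# Simulating a 16-bit signed register by hand:
def wrap16(v):
    return (v + 2**15) % 2**16 - 2**15

big = 30000
print(wrap16(big * 3) // 4)     # garbage, because 90000 doesn't fit in 16 bits
```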
I needed to overcome this back in 1982 (before 16-bit portable PC's, and PC's were not yet popular at all) to do equivalent calculations on an 8-bit TRS-80 in 0.5K of BASIC program memory, to what was officially developed by 1980 in many pages of 32-bit FORTRAN.
The algorithms had been well established for decades but always implemented using the equivalent of trig tables until the mainframe was capable of substituting (or even automatically printing the official tables!) after 1980.
One volume of the published tables was bigger and weighed more than the TRS-80 portable.
Ended up using something like the 21st century Gustafson approach for my little problem back then, now he is a mathematician pursuing a brilliant more universal solution.
By 1982 it didn't bother me that my integer algorithms were dramatically different than the official floating-point FORTRAN. I had gotten accustomed to morphing fundamental equations by computerizing much less onerous tables where the simple near-linear equation was useless because the published tables contained official rounding errors from the original slide-rule calculations. To match the official calculations that time, the convergence of manual and automatic rounding errors took major precedence over the underlying equation which actually was defined by physical properties.
My rule of thumb is that anything titled "...for Hackers" is crap. This link was not an exception.
> Since the relative error is invariant under scaling (x ↦ αx), we will mostly be interested in relative error.
No, No, No, No. This is very much dependent on what you are measuring/modelling, what is the tolerance you are willing to accept, the consequences and so on.
A 1% error in the change the cashier gave you back is maybe not a big deal. That same error in the national budget is equivalent to twice NASA's budget.
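To pin down what "invariant under scaling" means versus what you might actually care about, here is a tiny sketch with made-up numbers (a trillion standing in for a national budget): the relative error stays 1% at every scale, while the absolute error grows with the scale.

```python
def abs_err(approx, exact):
    return abs(approx - exact)

def rel_err(approx, exact):
    return abs(approx - exact) / abs(exact)

exact, approx = 100.0, 99.0     # 1% short
scale = 1e12                    # the same 1% shortfall on a trillion-sized quantity

print(rel_err(approx, exact), rel_err(scale * approx, scale * exact))  # 0.01 0.01
print(abs_err(approx, exact), abs_err(scale * approx, scale * exact))  # 1.0 1000000000000.0
```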
Although I do think that kind of rule of thumb is dumb. I've read plenty of materials with titles like that that have been informative and well written.
Digression: to all authors who want a minimalistic website, please don't push it too hard, as the author of this website did. The color scheme is nice, as is the font selection, but it lacks navigation links. Putting a simple link to the site's home page would do so much for explorers of the site.
Since I dumped on the author in another comment, let me defend him here: going up one level in the URL is really not a problem. It’s even easier if you use something like Vimperator, where you just need to type “gu”. And there is a nice table of contents.