
If you are really interested in approximating a cosine on the cheap with high precision, you should look into approximating polynomials.

The Taylor expansion produces approximating polynomials that aren't that good.

For instance, if you were to ask which degree-4 polynomial best approximates cos(x), you wouldn't end up with 1 - x^2/2 + x^4/24.

In fact, the best degree-4 polynomial is 0.99958 - 0.496393 x^2 + 0.0372093 x^4; it pretty much coincides[2] with cos(x) on the interval (-π/2, π/2), and the error is an order of magnitude smaller than the Taylor polynomial's (see [3] vs [4]).

How to do this? Linear algebra[1].

See, the polynomials form a Hilbert space (a vector space with an inner product), where a decent choice of inner product is

    <f, g> := ∫_{-π/2}^{π/2} f(x) g(x) dx

Do Gram-Schmidt on 1, x, x^2, ... to obtain an orthonormal basis, then use the inner product to project onto the first d+1 basis elements; that projection is the best degree-d polynomial approximation in this norm. Voila!
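Here's that recipe as a minimal numpy sketch (numerical integration on a grid stands in for the exact integrals, so the coefficients come out slightly approximate):

    import numpy as np
    from numpy.polynomial import Polynomial

    # Best least-squares degree-4 approximation of cos on (-pi/2, pi/2):
    # Gram-Schmidt on 1, x, ..., x^4, then project cos onto the result.
    a, b = -np.pi / 2, np.pi / 2
    xs = np.linspace(a, b, 20001)
    dx = xs[1] - xs[0]

    def inner(f, g):
        # <f, g> = integral of f*g over (a, b), via the trapezoidal rule
        y = f(xs) * g(xs)
        return (y.sum() - 0.5 * (y[0] + y[-1])) * dx

    basis = []
    for k in range(5):
        p = Polynomial([0.0] * k + [1.0])   # the monomial x^k
        for q in basis:
            p = p - inner(p, q) * q         # remove components along earlier vectors
        p = p / np.sqrt(inner(p, p))        # normalize
        basis.append(p)

    proj = Polynomial([0.0])                # projection of cos onto the basis
    for q in basis:
        proj = proj + inner(np.cos, q) * q

    print(proj.coef)  # roughly [0.99958, 0, -0.496393, 0, 0.0372093]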

Linear Algebra saves the day yet again.

[1] https://www.math.tamu.edu/~yvorobet/MATH304-503/Lect4-04web....

[2] https://www.wolframalpha.com/input/?i=plot+0.0372093+x%5E4+-...

[3] https://www.wolframalpha.com/input/?i=plot+abs%281-x%5E2%2F2...

[4]https://www.wolframalpha.com/input/?i=plot+abs%281-x%5E2%2F2...



Finding the least-squares best polynomial doesn’t get you the minimum possible worst-case error though, or the minimum worst-case relative error. For those you can use the Remez exchange algorithm, https://en.wikipedia.org/wiki/Remez_algorithm
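For the curious, here's a bare-bones sketch of the exchange iteration (illustrative only; it assumes the error curve already alternates deg+2 times, and a production implementation needs more careful extremum bookkeeping):

    import numpy as np

    def remez(f, deg, a, b, iters=8):
        # Minimax polynomial approximation of f on [a, b], basic Remez exchange
        n = deg + 2                          # reference: deg+2 alternation points
        # Initial reference: Chebyshev extrema mapped to [a, b]
        x = (a + b) / 2 - (b - a) / 2 * np.cos(np.pi * np.arange(n) / (n - 1))
        grid = np.linspace(a, b, 20001)
        for _ in range(iters):
            # Solve p(x_i) + (-1)^i * E = f(x_i) for the coefficients c and
            # the levelled error E (deg+2 equations, deg+2 unknowns).
            A = np.hstack([np.vander(x, deg + 1, increasing=True),
                           ((-1.0) ** np.arange(n))[:, None]])
            sol = np.linalg.solve(A, f(x))
            c, E = sol[:-1], sol[-1]
            # Exchange: between consecutive sign changes of the error, move
            # each reference point to the largest |error| in that segment.
            err = f(grid) - np.polynomial.polynomial.polyval(grid, c)
            breaks = np.where(np.sign(err[:-1]) != np.sign(err[1:]))[0]
            segs = np.split(np.arange(grid.size), breaks + 1)
            if len(segs) != n:               # sketch-level assumption violated
                break
            x = np.array([grid[s[np.argmax(np.abs(err[s]))]] for s in segs])
        return c

    print(remez(np.cos, 4, -np.pi / 2, np.pi / 2))  # coefficients, lowest power first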

And if you look at the NASA low-precision number routines, you’ll see that’s exactly what they did.


It is important to note two things, though, for those following along:

1. Approximations using orthogonal polynomial bases (the least squares method, and more generally the Chebyshev methods) are, for transcendental functions, typically as accurate as Remez exchange (including with range reduction) to 16 or so digits of precision. Remez exchange is more difficult to implement correctly than the simple linear algebra methods, the latter of which rely "only" on doing a comparatively simple change of basis. Practically speaking you gain nothing from using Remez exchange instead of Chebyshev to approximate exp(x), for example.

2. The Remez exchange (and more generally, the minimax methods) does not guarantee anything beyond minimizing the worst-case error. For many practical applications you don't care about worst-case error; you care about average-case relative error. It's not uncommon for the Remez exchange to produce polynomials that actually have worse average-case relative error.

This is also covered in considerable depth in Trefethen's Approximation Theory and Approximation Practice.
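To make point 1 concrete, here's a small numpy comparison for exp(x) on [-1, 1]; the Chebyshev interpolant is a one-liner and already near-minimax (error figures in the comments are ballpark):

    import math
    import numpy as np
    from numpy.polynomial import chebyshev as C

    deg = 8
    xs = np.linspace(-1, 1, 10001)

    # Degree-8 interpolant of exp at the Chebyshev points of the first kind
    coef = C.chebinterpolate(np.exp, deg)
    cheb_err = np.max(np.abs(np.exp(xs) - C.chebval(xs, coef)))

    # Degree-8 Taylor polynomial of exp about 0, for comparison
    taylor = sum(xs ** k / math.factorial(k) for k in range(deg + 1))
    taylor_err = np.max(np.abs(np.exp(xs) - taylor))

    print(f"Chebyshev max error: {cheb_err:.1e}")   # on the order of 1e-8
    print(f"Taylor    max error: {taylor_err:.1e}")  # on the order of 1e-6, worst near the endpoints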


Here’s a relevant example: http://www.chebfun.org/examples/approx/EightShades.html

Here are the first 6 chapters of ATAP http://www.chebfun.org/ATAP/atap-first6chapters.pdf

Also cf. https://people.maths.ox.ac.uk/trefethen/mythspaper.pdf

* * *

I’d say the bigger point is not about whether the error is marginally better or worse with a Chebyshev series vs. CF vs. Remez, etc., but rather that any of these will usually beat the others if you add just one more coefficient.

Usually convergence speed matters more than picking a number of coefficients and then exactly optimizing to the last bit.
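A quick illustration with Chebyshev interpolants of cos on [-1, 1] (even degrees only, since cos is even):

    import numpy as np
    from numpy.polynomial import chebyshev as C

    xs = np.linspace(-1, 1, 10001)
    for deg in (2, 4, 6, 8):
        coef = C.chebinterpolate(np.cos, deg)
        err = np.max(np.abs(np.cos(xs) - C.chebval(xs, coef)))
        print(deg, f"{err:.1e}")
    # Each step drops the max error by orders of magnitude; by comparison,
    # swapping the interpolant for the true minimax polynomial of the same
    # degree only buys a small constant factor.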

And as you say, running the Remez algorithm is probably a waste of time if you are trying to calculate a near-optimal approximation on the fly at runtime.


Sure, and thanks for the link!

The least-squares solution is just a step-up from Taylor series that doesn't require anything deeper than the notion of an inner product (the parent comment I was responding to didn't go beyond Taylor).


Here is the Apollo sine/cosine code, a 5th-degree polynomial whose coefficients were clearly optimized for minimax relative error (presumably by the Remez algorithm):

https://fermatslibrary.com/s/apollo-11-implementation-of-tri...

Here's a comparison of its relative error vs. a 5th-degree Taylor series (using high-precision arithmetic; I'm sure on the machine itself rounding errors made things a bit choppier). At the Taylor series' worst point, the optimized polynomial's relative error is about 45 times lower:

https://www.desmos.com/calculator/aelh9kf9ms


If anyone wants to do the thing romwell describes without an avalanche of math, try this one-liner:

    bash$ gnuplot -e 'set format x "%.15f"; set format y "%.15f"; set samples 100000; set table "cos.dat"; set xrange [-3*pi/8:3*pi/8]; plot cos(x); f(x) = a + b*x**2 + c*x**4; fit f(x) "cos.dat" using 1:2 via a,b,c'
It then gives you "optimized" values for a/b/c from your template formula, thus resulting in

    cos(x) ~= 0.999923 - 0.498826*x*x + 0.0391071*x*x*x*x
which is even more accurate, though only to about four decimal places.
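The same fit in Python, for anyone without gnuplot handy (a sketch using scipy.optimize.curve_fit with the same grid and model as the gnuplot command above):

    import numpy as np
    from scipy.optimize import curve_fit

    # Discrete least-squares fit of cos(x) to a + b*x^2 + c*x^4 on [-3pi/8, 3pi/8]
    xs = np.linspace(-3 * np.pi / 8, 3 * np.pi / 8, 100000)

    def model(x, a, b, c):
        return a + b * x ** 2 + c * x ** 4

    (a, b, c), _ = curve_fit(model, xs, np.cos(xs))
    print(a, b, c)  # roughly 0.999923, -0.498826, 0.0391071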



