People reach for the Taylor series because it’s what they remember from calculus, but the Remez algorithm finds the polynomial that minimizes the maximum error.
If you’re measuring “how good is my cosine function” by calculating the maximum error, then it makes sense to use an optimization algorithm that minimizes that error.
The Remez algorithm is rather elegant. The basic idea is that you come up with a polynomial, figure out where the local error maximums are, and then calculate a new polynomial with better local error maximums in the same locations. Repeat. With the right conditions it converges quickly and to a global minimum.
For example, say you’re building a quarter-wave cosine approximation. Your current polynomial has local error maximums at x=[0, 0.1, 0.3, 0.7, 1.4, 1.57], with various values of ∆y. You then construct a new polynomial that has errors of exactly ±∆y at those same x locations, where ∆y is a single level solved for as part of the fit, and the signs alternate as the polynomial crosses above and below the target function. But because this step is just a polynomial fit through those (x, y) coordinates, the new polynomial’s local error maximums land at different x coordinates. Note that the interval boundaries count as error maximums too. The degrees of freedom work out exactly: a degree-n polynomial has n+1 coefficients, plus the error level ∆y as an extra unknown, which matches the n+2 reference points (here, 6 points for a degree-4 polynomial).
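Concretely, that fitting step is just a small linear solve. Here’s a sketch of it in NumPy (this is not the code from the repo linked below; the name equal_error_poly and the setup are just for illustration):

    import numpy as np

    def equal_error_poly(f, xs, degree):
        # Solve for coefficients c[0..degree] and an error level E such that
        # p(x_i) - f(x_i) alternates in sign with a single magnitude |E| at
        # each reference point x_i. Needs exactly degree + 2 reference points.
        xs = np.asarray(xs, dtype=float)
        A = np.zeros((degree + 2, degree + 2))
        A[:, :degree + 1] = np.vander(xs, degree + 1, increasing=True)
        A[:, degree + 1] = (-1.0) ** np.arange(degree + 2)  # alternating-sign column for E
        sol = np.linalg.solve(A, f(xs))
        return sol[:degree + 1], sol[degree + 1]  # (coefficients, signed error level)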
Each time through the loop you get a new set of x coordinates. Under the right conditions, these converge. Eventually the ∆y errors are all nearly equal to ± some global error maximum, with alternating signs.
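The whole loop looks roughly like this in NumPy (again just an illustration of the idea, reusing equal_error_poly from the sketch above; a real implementation would be more careful about locating the extrema and checking that their signs alternate):

    import numpy as np

    def remez(f, degree, lo, hi, iters=16):
        # Start from Chebyshev-spaced reference points; equal spacing works too
        # but tends to converge more slowly.
        k = np.arange(degree + 2)
        xs = lo + 0.5 * (hi - lo) * (1.0 - np.cos(np.pi * k / (degree + 1)))
        grid = np.linspace(lo, hi, 20001)  # dense grid for locating error extrema
        for _ in range(iters):
            coeffs, _ = equal_error_poly(f, xs, degree)
            resid = np.polynomial.polynomial.polyval(grid, coeffs) - f(grid)
            # Candidate extrema: the boundaries plus the points where the slope
            # of the error changes sign.
            slope = np.diff(resid)
            turns = np.where(np.sign(slope[1:]) != np.sign(slope[:-1]))[0] + 1
            candidates = np.concatenate(([0], turns, [len(grid) - 1]))
            # Naively keep the degree + 2 largest-error candidates as the new
            # reference points. This assumes the error curve is well behaved,
            # which it is for smooth targets like cos on a quarter wave.
            keep = candidates[np.argsort(-np.abs(resid[candidates]))[:degree + 2]]
            xs = np.sort(grid[keep])
        return coeffs

For the quarter-wave cosine you’d call something like remez(np.cos, 4, 0.0, np.pi / 2) and then check the residual against np.cos on a dense grid.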
I used this to approximate sine and exp functions for an audio synthesis tool I’m working on. The nice thing about polynomials over lookup tables is that they’re easier to convert to SIMD.
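Once you have the coefficients, evaluation is just a chain of multiply-adds (Horner’s rule), and that same chain runs unchanged across a whole vector of inputs, which is what makes the SIMD port mechanical. A sketch:

    def poly_eval(x, coeffs):
        # Horner's rule: each step is one multiply-add, so it maps directly
        # onto fused multiply-add instructions, whereas a table lookup turns
        # into a gather. coeffs are in increasing order of power; x can be a
        # scalar or a NumPy array.
        acc = coeffs[-1]
        for c in coeffs[-2::-1]:
            acc = acc * x + c
        return acc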
Here is the NumPy code I used to find the coefficients. Note that this is not taken from any kind of numerical recipes cookbook and I can’t really vouch for the algorithm I used from a numerical standpoint—I only checked that it works empirically.
https://github.com/depp/ultrafxr/blob/master/math/coeffs/cal...