Are there any details available about the algorithm behind the erf function of Boost? The documentation of the module is not very precise. All I found out is that several methods are mixed. To me it looks like variations of Abramowitz and Stegun.
Which methods are mixed?
How are the methods mixed?
What is the complexity of the erf-function (constant time)?
Sebastian
The docs for the Boost Math Toolkit have a long list of references, among them Abramowitz and Stegun. The erf-function interface contains a policy template parameter that can be used to control the numerical precision (and hence its run-time complexity).
#include <boost/math/special_functions/erf.hpp>
namespace boost{ namespace math{
template <class T>
calculated-result-type erf(T z);
template <class T, class Policy>
calculated-result-type erf(T z, const Policy&);
template <class T>
calculated-result-type erfc(T z);
template <class T, class Policy>
calculated-result-type erfc(T z, const Policy&);
}} // namespaces
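For example, here is a minimal sketch of trading precision for speed through a policy (assuming a reasonably recent Boost.Math; policies::digits10 requests a given number of significant decimal digits):

#include <iostream>
#include <boost/math/special_functions/erf.hpp>
#include <boost/math/policies/policy.hpp>

int main()
{
    using namespace boost::math;

    // Default policy: full precision for the argument type.
    std::cout << erf(1.0) << '\n';                  // ~0.842700792949715

    // Custom policy: ask for only ~9 significant decimal digits, which
    // allows the implementation to select a cheaper internal approximation.
    typedef policies::policy<policies::digits10<9> > low_precision;
    std::cout << erf(1.0, low_precision()) << '\n';
}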
UPDATE:
Below is a verbatim copy of the section "Implementation" from the erf-function reference provided earlier:
Implementation
All versions of these functions first use the usual reflection formulas to make their arguments positive:
erf(-z) = -erf(z);
erfc(-z) = 2 - erfc(z); // preferred when -z < -0.5
erfc(-z) = 1 + erf(z); // preferred when -0.5 <= -z < 0
The generic versions of these functions are implemented in terms of the incomplete gamma function.
When the significand (mantissa) size is recognised (currently for 53-, 64- and 113-bit reals, plus single-precision 24-bit handled via promotion to double), a series of rational approximations devised by JM is used.
For z <= 0.5, a rational approximation to erf is used, based on the observation that erf is an odd function; erf is therefore calculated using:
erf(z) = z * (C + R(z*z));
where the rational approximation R(z*z) is optimised for absolute error: as long as its absolute error is small enough compared to the constant C, then any round-off error incurred during the computation of R(z*z) will effectively disappear from the result. As a result the error for erf and erfc in this region is very low: the last bit is incorrect in only a very small number of cases.
For z > 0.5 we observe that over a small interval [a, b) then:
erfc(z) * exp(z*z) * z ~ c
for some constant c.
Therefore for z > 0.5 we calculate erfc using:
erfc(z) = exp(-z*z) * (C + R(z - B)) / z;
Again R(z - B) is optimised for absolute error, and the constant C is the average of erfc(z) * exp(z*z) * z taken at the endpoints of the range. Once again, as long as the absolute error in R(z - B) is small compared to c then c + R(z - B) will be correctly rounded, and the error in the result will depend only on the accuracy of the exp function. In practice, in all but a very small number of cases, the error is confined to the last bit of the result. The constant B is chosen so that the left hand end of the range of the rational approximation is 0.
For large z over a range [a, +∞) the above approximation is modified to:
erfc(z) = exp(-z*z) * (C + R(1 / z)) / z;
The rational approximations are explained in excruciating detail. If you need more details, you can always look at the source code.
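As a quick numeric illustration of the observation behind the z > 0.5 branch, the following snippet (using std::erfc from the standard library, not Boost internals) shows that erfc(z) * exp(z*z) * z varies only slowly, so over a small enough interval it is nearly the constant c:

#include <cmath>
#include <cstdio>

int main()
{
    // The product drifts from ~0.48 to ~0.53 across this whole unit
    // interval; over the short intervals actually used, it is close
    // enough to constant for the C + R(z - B) form to work.
    for (double z = 1.5; z <= 2.501; z += 0.25)
        std::printf("z = %.2f   erfc(z)*exp(z*z)*z = %.6f\n",
                    z, std::erfc(z) * std::exp(z * z) * z);
}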
To my knowledge
(-1)^1.8 = [(-1)^18]^0.1 = [1]^0.1 = 1
Hope I am not making a silly mistake.
std::pow(-1, 1.8) results in nan. Also, according to this link:
If base is finite and negative and exp is finite and non-integer, a domain error occurs and a range error may occur.
Is there a workaround to calculate the above operation with C++?
std::pow from <cmath> is for real numbers. The exponentiation (power) function of real numbers is not defined for a negative base with a non-integer exponent.
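A quick demonstration (results are NaN per the quoted cppreference rule; error reporting details depend on math_errhandling):

#include <cmath>
#include <cstdio>

int main()
{
    std::printf("%f\n", std::pow(-1.0, 1.8));        // nan: negative base, non-integer exponent
    std::printf("%f\n", std::pow(-8.0, 1.0 / 3.0));  // also nan, even though the real cube
                                                     // root of -8 is -2
}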
Wikipedia says:
Real exponents with negative bases
Neither the logarithm method nor the rational exponent method can be used to define b^r as a real number for a negative real number b and an arbitrary real number r. Indeed, e^r is positive for every real number r, so ln(b) is not defined as a real number for b ≤ 0.
The rational exponent method cannot be used for negative values of b because it relies on continuity. The function f(r) = b^r has a unique continuous extension from the rational numbers to the real numbers for each b > 0. But when b < 0, the function f is not even continuous on the set of rational numbers r for which it is defined.
For example, consider b = −1. The nth root of −1 is −1 for every odd natural number n. So if n is an odd positive integer, (−1)^(m/n) = −1 if m is odd, and (−1)^(m/n) = 1 if m is even. Thus the set of rational numbers q for which (−1)^q = 1 is dense in the rational numbers, as is the set of q for which (−1)^q = −1. This means that the function (−1)^q is not continuous at any rational number q where it is defined.
On the other hand, arbitrary complex powers of negative numbers b can
be defined by choosing a complex logarithm of b.
Powers of complex numbers
Complex powers of positive reals are defined via e^x as in the section Complex exponents with positive real bases above [omitted from this quote]. These are continuous functions.
Trying to extend these functions to the general case of noninteger
powers of complex numbers that are not positive reals leads to
difficulties. Either we define discontinuous functions or multivalued
functions. Neither of these options is entirely satisfactory.
The rational power of a complex number must be the solution to an algebraic equation. Therefore, it always has a finite number of possible values. For example, w = z^(1/2) must be a solution to the equation w^2 = z. But if w is a solution, then so is −w, because (−1)^2 = 1. A unique but somewhat arbitrary solution called the principal value can be chosen using a general rule which also applies for nonrational powers.
Complex powers and logarithms are more naturally handled as single
valued functions on a Riemann surface. Single valued versions are
defined by choosing a sheet. The value has a discontinuity along a
branch cut. Choosing one out of many solutions as the principal value
leaves us with functions that are not continuous, and the usual rules
for manipulating powers can lead us astray.
So, before calculating the result, you must first choose what you are calculating. The C++ standard library has in <complex> a function template std::complex<T> pow(const complex<T>& x, const T& y), which is specified (through the definition of cpow in the C standard) to calculate:
The cpow functions compute the complex power function x^y, with a branch cut for the first parameter along the negative real axis.
For (-1)^1.8, the result would be e^(-iπ/5) ≈ 0.809017 - 0.587785i.
This is not the result you expected. There is no exponentiation function in the C++ standard library that would calculate the result that you want.
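Still, here is a minimal check of the principal value with <complex> (std::pow on std::complex follows the cpow semantics quoted above):

#include <complex>
#include <iostream>

int main()
{
    std::complex<double> base(-1.0, 0.0);
    // Principal value: exp(1.8 * log(-1)) = exp(1.8i*pi) = e^(-i*pi/5)
    std::complex<double> result = std::pow(base, 1.8);
    std::cout << result << '\n';   // prints (0.809017,-0.587785)
}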
How can I use Newton's method to find all the roots of a polynomial, not just one?
Link to code: http://quantcorner.wordpress.com/2012/09/14/an-implementation-of-the-newton-raphson-algorithm-in-c-cpp-and-r/
As an example I have this equation: x^2 - 12x + 34 = 0. When I use this formula I get only one root, 4.5857; how can I get the second root, 7.4142?
Restart your Newton-Raphson iteration from a different starting position.
Below is some code I implemented while developing a library for financial instrument pricing; it is my Newton-Raphson routine. One of the input parameters is a starting position, double start. You can, for example, start from different positions on a grid and compare each solution with the solutions you have already found.
#include "math.h"
#include "ExceptionClass.h"
template <class T, double (T::*Value)(const double) const,
double (T::*Derivative)(const double) const>
double NewtonRaphson(double Target, double start, double Tolerance,
size_t max_count, const T& Object)
{
size_t count = 0;
double diff = Target-(Object.*Value)(start);
double x = start;
double derivative = (Object.*Derivative)(start);
do{
count++;
if(fabs(derivative) < 1.E-6)
throw DivideByZeroException("Dividing by a derivative smaller than: 1.E-6!");
x = x - diff/(-derivative);
diff = Target-(Object.*Value)(x);
derivative = (Object.*Derivative)(x);
} while((fabs(diff)>Tolerance)&&(count < max_count));
if(count >= max_count)
throw("Newton-Raphson did not converge in the defined number of steps!");
else return x;
}
T is a class where you have defined functions to evaluate your (quadratic) equation (here referred to by double (T::*Value)(const double) const in the template) and the derivative of this equation (here referred to by double (T::*Derivative)(const double) const in the template).
Note that I have included an exception class, as Newton-Raphson has some issues:
Use the bisection method if you need a more stable algorithm.
In practice you can use thresholds smaller than 1.E-6.
Here, Value should be a pointer to a member function that evaluates your quadratic function, Target should be set to 0, and Derivative should be a pointer to a member function that evaluates the derivative of your function.
In your case my template code can be simplified:
Replace the code diff = Target-(Object.*Value)(x); with diff = (Object.*Value)(x);
Replace x = x - diff/(-derivative); with x = x - diff/derivative;. This form is more easily recognized as the Newton-Raphson algorithm.
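For the equation in the question, a hypothetical usage sketch of the original (unsimplified) template; the class Quadratic and the start values are mine, purely illustrative:

// x^2 - 12x + 34 = 0 has roots 6 ± sqrt(2); starting on either side of
// the vertex x = 6 finds both (exception handling omitted for brevity).
struct Quadratic
{
    double Value(const double x) const      { return x * x - 12.0 * x + 34.0; }
    double Derivative(const double x) const { return 2.0 * x - 12.0; }
};

int main()
{
    Quadratic q;
    double left  = NewtonRaphson<Quadratic, &Quadratic::Value,
                                 &Quadratic::Derivative>(0.0,  0.0, 1e-9, 100, q);
    double right = NewtonRaphson<Quadratic, &Quadratic::Value,
                                 &Quadratic::Derivative>(0.0, 12.0, 1e-9, 100, q);
    // left ≈ 4.58579, right ≈ 7.41421
    return 0;
}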
If you want to increase the speed of convergence, use Halley's (yes, the guy from the comet) method or Householder's method.
See Numerical Recipes in C++ for an alternative implementation; it is the book to go to for these kinds of questions.
As I showed in other questions, I'm currently implementing a C++ metaprogramming library which includes, among other things, a set of types and metafunctions for compile-time arithmetic.
My goal now is to implement the trigonometric functions sin and cos for my fixed point type.
My problem is that every paper I have found about trigonometric algorithms talks about CORDIC or some kind of Taylor series. The problem with CORDIC is that it needs a huge set of precomputed values in a lookup table, which I couldn't provide easily with TMP. Also, the point of CORDIC is to compute trigonometric functions on hardware which has no multiplier, and I'm perfectly capable of doing multiplications with my library.
So my question is: Is there any other simple alternative to CORDIC and Taylor Series to compute trigonometric functions?
Finally I have implemented the sin metafunction through a Taylor series, using a series of ten terms by default (this is configurable).
I have based my implementation on this interesting article.
My library includes an implementation of a TMP for-loop using iterators, and expression templates that allow writing complex expressions in a "clear" way (clear compared to the common template-metaprogramming syntax add<mul<sub<1,2>>>...). This allows me to literally copy-paste the C implementation provided by the article:
template<typename T , typename TERMS_COUNT = mpl::uinteger<4>>
struct sin_t;

template<typename T , typename TERMS_COUNT = mpl::uinteger<4>>
using sin = typename sin_t<T,TERMS_COUNT>::result;

/*
 * sin() function implementation through Taylor series (Check http://www10.informatik.uni-erlangen.de/~pflaum/pflaum/ProSeminar/meta-art.html)
 *
 * The C equivalent code is:
 *
 * // Calculate sin(x) using j terms
 * float sine(float x, int j)
 * {
 *     float val = 1;
 *
 *     for (int k = j - 1; k >= 0; --k)
 *         val = 1 - x*x/(2*k+2)/(2*k+3)*val;
 *
 *     return x * val;
 * }
 */
template<mpl::fpbits BITS , mpl::fbcount PRECISION , unsigned int TERMS_COUNT>
struct sin_t<mpl::fixed_point<BITS,PRECISION>,mpl::uinteger<TERMS_COUNT>>
{
private:
    using x     = mpl::fixed_point<BITS,PRECISION>;

    using begin = mpl::make_integer_backward_iterator<TERMS_COUNT-1>;
    using end   = mpl::make_integer_backward_iterator<-1>;

    using one   = mpl::decimal<1,0,PRECISION>;
    using two   = mpl::decimal<2,0,PRECISION>;
    using three = mpl::decimal<3,0,PRECISION>;

    template<typename K , typename VAL>
    struct kernel : public mpl::function<decltype( one() - ( x() * x() )/(two() * K() + two())/(two()*K()+three())*VAL() )> {};

public:
    using result = decltype( x() * mpl::for_loop<begin , end , one , kernel>() );
};
Here is the header of the implementation in the project repo.
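As an aside, on C++14 or later the same recurrence can be written as a plain constexpr function and still be evaluated at compile time. This sketch is for comparison only and is not part of the library:

#include <iostream>

// Same Horner-style recurrence as the article's C code; the loop in a
// constexpr function requires C++14.
constexpr float taylor_sin(float x, int terms = 10)
{
    float val = 1.0f;
    for (int k = terms - 1; k >= 0; --k)
        val = 1.0f - x * x / (2 * k + 2) / (2 * k + 3) * val;
    return x * val;
}

int main()
{
    static_assert(taylor_sin(0.0f) == 0.0f, "sin(0) must be 0");
    std::cout << taylor_sin(1.0f) << '\n';   // ~0.841471
}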
huge set of precomputed values through a lookup-table
How many is "huge"? Sounds like a one-time effort that would be fast as hell once you were done. My advice? Get a shovel and fill in that table. You'd have it done by the time you get another answer here.
I'm trying to solve aX^2 + bX + c = 0, but I can't seem to make it work without using the math header (which I'm not supposed to use).
printf("%E",(-b+(b*b-4*a*c)E0.5)/2a);
Use std::sqrt from header <cmath>. Also, you must write (2 * a), not 2a.
Another thing: don't use the textbook formula for solving quadratic equations; it suffers from catastrophic cancellation when b*b is much larger than 4*a*c. Use the numerically stable method described there.
If you can't use the math header, then you have to implement the square root yourself, e.g. as described there:
double my_abs(double x)
{
    return x > 0 ? x : -x;
}

double my_sqrt(double x)
{
    // Heron's (Babylonian) method: iterate u -> (u + x/u) / 2 until
    // successive iterates agree to a relative tolerance.
    static const double eps = 1e-12;
    double u = x, uold;
    do { uold = u; u = (u * u + x) / (2 * u); }
    while (my_abs(u - uold) > eps * x);   // keep looping while still converging
    return u;
}
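Putting it together, a sketch of the whole solver with no <cmath> at all (it assumes a != 0 and a non-negative discriminant; the coefficients are just an example):

#include <cstdio>

double my_abs(double x) { return x > 0 ? x : -x; }

double my_sqrt(double x)   // Heron iteration, as above
{
    static const double eps = 1e-12;
    double u = x, uold;
    do { uold = u; u = (u * u + x) / (2 * u); }
    while (my_abs(u - uold) > eps * x);
    return u;
}

int main()
{
    double a = 1.0, b = -12.0, c = 34.0;   // example: x^2 - 12x + 34 = 0
    double d = b * b - 4.0 * a * c;        // discriminant, assumed >= 0 here
    double r = my_sqrt(d);
    std::printf("%E %E\n", (-b - r) / (2.0 * a), (-b + r) / (2.0 * a));
    return 0;
}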
That is not at all how E works.
E is used in floating-point literals, to express a number in scientific notation (more or less).
// x will be the same as 0.00104
double x = 1.04e-3;
If you want to take a square root, then you should be using a sqrt function:
(-b + sqrt(b*b - 4*a*c)) / (2 * a)
Of course, since you can't use #include <cmath>, you'd have to roll your own!
You can't use E as pow in C/C++ (see for example mathematical power operator not working as expected). And the E in the printf format will print the number in scientific notation (like 3.9265E+2).
E only works when you're typing out a constant floating point, like 2.2E6. To compute exponentials, you need to use std::pow() from <cmath>. In this case, you could use std::sqrt().
I suppose with E you mean the power, but there is no such power operator in C++. Use either the pow function or the, in your case more appropriate, sqrt function. But these are both in <cmath>. If you cannot use <cmath> (homework assignment?), you might have to implement your own square root function.
I think you are confusing scientific notation (3.2E6 = 3.2 x 10^6) with exponentiation (sqrt(5) = 5^(1/2)), where I am using ^ for "raise to the power of". Unfortunately, C++, like C, doesn't have a built-in power operator. So you would normally use either sqrt(x) or pow(x,0.5) from the math library.
However, if you want to solve this without the math header, you'll have to find a different way to calculate square roots. You could write a subroutine to use the Babylonian or Heron method, for example...
I've just finished my second year at uni doing a games course, and it has always bugged me how math and game programming are related. Up until now I've been using vectors, matrices, and quaternions in games, and I can understand how these fit into games.
This is a general question about the relationship between maths and programming for real-time graphics; I'm curious how dynamic the maths is. Is it the case that all the formulas and derivatives are predefined (or semi-defined)?
Is it even feasible to calculate derivatives/integrals in real time?
Below are some of the things where I don't see how maths and programming fit together, as examples.
Maclaurin/Taylor series. I can see these are useful, but is it the case that you must pass your function and its derivatives, or can you pass it a single function and have it work out the derivatives for you?
MacLaurin(sin(x)); or MacLaurin(sin(x), cos(x), -sin(x));
Derivatives/Integrals. This is related to the first point. Is calculating the y' of a function done dynamically at run time, or is this something that is done statically, perhaps with variables inside a set function?
f = derive(x); or f = derivedX;
Bilinear patches. We learned this as a way to possibly generate landscapes in small chunks that could be 'sewn' together. Is this something that happens in games? I've never heard of it (granted, my knowledge is very limited) being used with procedural methods or otherwise. What I've done so far involves arrays of vertex information being processed.
Sorry if this is off topic, but the community here seems spot on with this kind of thing.
Thanks.
Skizz's answer is true when taken literally, but only a small change is required to make it possible to compute the derivative of a C++ function. We modify Skizz's function f to:
template<class Float>
Float f(Float x)
{
    return x * x + Float(4.0f) * x + Float(6.0f); // f(x) = x^2 + 4x + 6
}
It is now possible to write a C++ function to compute the derivative of f with respect to x. Here is a complete self-contained program to compute the derivative of f. It is exact (to machine precision) as it's not using an inaccurate method like finite differences. I explain how it works in a paper I wrote. It generalises to higher derivatives. Note that much of the work is done statically by the compiler. If you turn up optimization, and your compiler inlines decently, it should be as fast as anything you could write by hand for simple functions. (Sometimes faster! In particular, it's quite good at amortising the cost of computing f and f' simultaneously because it makes common subexpression elimination easier for the compiler to spot than if you write separate functions for f and f'.)
#include <iostream>

using namespace std;

template<class Float>
Float f(Float x)
{
    return x * x + Float(4.0f) * x + Float(6.0f);
}

// Dual number: carries a value x and its derivative dx through arithmetic.
struct D
{
    D(float x0, float dx0 = 0) : x(x0), dx(dx0) { }
    float x, dx;
};

D operator+(const D &a, const D &b)
{
    // The rule for the sum of two functions.
    return D(a.x+b.x, a.dx+b.dx);
}

D operator*(const D &a, const D &b)
{
    // The usual Leibniz product rule.
    return D(a.x*b.x, a.x*b.dx+a.dx*b.x);
}

// Here's the function Skizz said you couldn't write.
float d(D (*f)(D), float x) {
    return f(D(x, 1.0f)).dx;
}

int main()
{
    cout << f(0) << endl;
    // We can't just take the address of f. We need to say which instance of the
    // template we need. In this case, f<D>.
    cout << d(&f<D>, 0.0f) << endl;
}
It prints the results 6 and 4 as you should expect. Try other functions f. A nice exercise is to try working out the rules to allow subtraction, division, trig functions etc.
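As a sketch of that exercise (my addition, following the same pattern as the operators above), the subtraction and division rules would be:

D operator-(const D &a, const D &b)
{
    // Difference rule: (f - g)' = f' - g'
    return D(a.x - b.x, a.dx - b.dx);
}

D operator/(const D &a, const D &b)
{
    // Quotient rule: (f / g)' = (f'g - f g') / g^2
    return D(a.x / b.x, (a.dx * b.x - a.x * b.dx) / (b.x * b.x));
}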
2) Derivatives and integrals are usually not computed on large data sets in real time; it's too expensive. Instead they are precomputed. For example (off the top of my head), to render single-scattering media, Bo Sun et al. use their "airlight model", which consists of a lot of algebraic shortcuts to arrive at a precomputed lookup table.
3) Streaming large data sets is a big topic, especially in terrain.
A lot of the maths you will encounter in games is there to solve very specific problems, and is usually kept simple. Linear algebra is used far more than any calculus. In graphics (I like this the most) a lot of the algorithms come from research done in academia, and then they are modified for speed by game programmers; although even academic research makes speed its goal these days.
I recommend the two books Real-Time Collision Detection and Real-Time Rendering, which contain the guts of most of the maths and concepts used in game engine programming.
I think there's a fundamental problem with your understanding of the C++ language itself. Functions in C++ are not the same as mathematical functions. So, in C++, you could define a function (which I will now call a method to avoid confusion) to implement a mathematical function:
float f (float x)
{
    return x * x + 4.0f * x + 6.0f; // f(x) = x^2 + 4x + 6
}
In C++, there is no way to do anything with the method f other than to get the value of f(x) for a given x. The mathematical function f(x) can be transformed quite easily, to f'(x) for example, which in the example above is f'(x) = 2x + 4. To do this in C++ you'd need to define a method df(x):
float df (float x)
{
    return 2.0f * x + 4.0f; // f'(x) = 2x + 4
}
You can't do this:
get_derivative (f(x));
and have the method get_derivative transform the method f(x) for you.
Also, you would have to ensure that when you wanted the derivative of f that you call the method df. If you called the method for the derivative of g by accident, your results would be wrong.
We can, however, approximate the derivative of f(x) for a given x:
float d (float (*f) (float x), float x) // pass a pointer to the method f and the value x
{
    const float epsilon = 1.0e-3f; // some suitably small value
    float dy = f(x + epsilon/2.0f) - f(x - epsilon/2.0f);
    return dy / epsilon; // central-difference approximation of f'(x)
}
but this is very unstable and quite inaccurate.
Now, in C++ you can create a class to help here:
class Function
{
public:
    virtual float f (float x) = 0;   // f(x)
    virtual float df (float x) = 0;  // f'(x)
    virtual float ddf (float x) = 0; // f''(x)
    // if you wanted further transformations you'd need to add methods for them
};
and create our specific mathematical function:
class ExampleFunction : public Function
{
public:
    float f (float x) { return x * x + 4.0f * x + 6.0f; } // f(x) = x^2 + 4x + 6
    float df (float x) { return 2.0f * x + 4.0f; }         // f'(x) = 2x + 4
    float ddf (float x) { return 2.0f; }                   // f''(x) = 2
};
and pass an instance of this class to a series expansion routine:
float Series (Function &f, float x)
{
    return f.f (x) + f.df (x) + f.ddf (x); // series = f(x) + f'(x) + f''(x)
}
We're still having to create a method for the function's derivative ourselves, but at least we're not going to accidentally call the wrong one.
Now, as others have stated, games tend to favour speed, so a lot of the maths is simplified: interpolation, pre-computed tables, etc.
Most of the maths in games is designed to be as cheap to calculate as possible, trading accuracy for speed. For example, much of the number crunching uses integers or single-precision floats rather than doubles.
Not sure about your specific examples, but if you can define a cheap (to calculate) formula for a derivative beforehand, then that is preferable to calculating things on the fly.
In games, performance is paramount. You won't find anything that's done dynamically when it could be done statically, unless it leads to a notable increase in visual fidelity.
You might be interested in compile-time symbolic differentiation. This can (in principle) be done with C++ templates. I have no idea whether games do this in practice: symbolic differentiation might be too expensive to program right, and such extensive template use might be too expensive in compile time.
However, I thought that you might find the discussion of this topic interesting. Googling "c++ template symbolic derivative" gives a few articles.
There are many great answers here if you are interested in symbolic calculation and computation of derivatives.
However, just as a sanity check, this kind of symbolic (analytical) calculus isn't practical to do in real time in the context of games.
In my experience (which is more 3D geometry in computer vision than games), most of the calculus and math in 3D geometry comes in by way of computing things offline ahead of time and then coding to implement this math. It's very seldom that you'll need to symbolically compute things on the fly and then get on-the-fly analytical formulae this way.
Can any game programmers verify?
1), 2)
Maclaurin/Taylor series (1) are constructed from derivatives (2) in any case.
Yes, you are unlikely to need to symbolically compute any of these at run-time - but for sure user207442's answer is great if you need it.
What you do find is that you need to perform a mathematical calculation and that you need to do it in reasonable time, or sometimes very fast. To do this, even if you re-use others' solutions, you will need to understand basic analysis.
If you do have to solve the problem yourself, the upside is that you often only need an approximate answer. This means that, for example, a series-type expansion may well allow you to reduce a complex function to a simple linear or quadratic approximation, which will be very fast.
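For example, the classic small-angle reduction, sketched in plain C++ (the function name is mine, purely illustrative):

#include <cmath>
#include <cstdio>

// For |x| << 1, sin(x) ≈ x - x^3/6: a cheap cubic that can stand in
// for a full sin() call when the input range is known to be small.
inline float fast_sin_small(float x) { return x - x * x * x / 6.0f; }

int main()
{
    float x = 0.1f;
    std::printf("approx %f, exact %f\n", fast_sin_small(x), std::sin(x));
    return 0;
}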
For integrals, you can often compute the result numerically, but it will always be much slower than an analytic solution. The difference may well be the difference between being practical or not.
In short: Yes, you need to learn the maths, but in order to write the program rather than have the program do it for you.