Cholesky factorization of semi-definite matrices in C++

I need to implement a function that performs a Cholesky factorization of a positive semi-definite matrix in C++, and I am wondering if there is a library/anything out there that is already optimized. It should work as, or similarly to, what is described in this paper:
http://www.sciencedirect.com/science/article/pii/S0096300310012713
This is an example for positive definite matrices, but it doesn't work for positive semi-definite ones: http://en.wikipedia.org/wiki/Cholesky_decomposition#The_Cholesky.E2.80.93Banachiewicz_and_Cholesky.E2.80.93Crout_algorithms
The program must be in C++, with no C/Fortran libraries (think pointy-haired boss giving instructions), which means ATLAS, LAPACK, etc. are out. I have looked through MTL and Boost, but their routines only work for positive definite matrices. Are there any libraries I haven't found, or even single functions that have been written?

The problem with the Cholesky decomposition of semi-definite matrices is that 1) it is not unique and 2) Crout's algorithm fails.
The existence of a decomposition is usually proven non-constructively, via a limiting argument: if M_n -> M and M_n = U_n^T U_n, then ||U_n|| = ||M_n||^(1/2), where ||.|| is the Hilbert-Schmidt norm, so U_n is a bounded sequence; extract a convergent subsequence to find a limit U satisfying U^T U = M with U triangular.
I have found that in the cases I was interested in, it was satisfactory to multiply the diagonal elements by 1 + epsilon, with epsilon small (a few thousand times the machine epsilon), to obtain a perfectly acceptable decomposition.
Indeed, if M is positive semi-definite, then for each epsilon > 0, M + epsilon*I is positive definite.
As the scheme converges when epsilon goes to zero, you can contemplate computing the decomposition for several values of epsilon and performing a Richardson extrapolation.
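For what it's worth, here is a minimal sketch of that perturbation idea (my own code, not a library routine; row-major storage, and the function name and size of epsilon are just illustrative):

#include <vector>
#include <cmath>
#include <limits>

// Cholesky-Crout on a copy of M whose diagonal is scaled by (1 + eps), so that
// a positive semi-definite input becomes numerically positive definite.
// M is n x n, row-major; on success the lower-triangular L satisfies M' ~ L*L^T.
bool choleskyPerturbed(std::vector<double> m, int n, std::vector<double>& L,
                       double eps = 4096.0 * std::numeric_limits<double>::epsilon())
{
    for (int i = 0; i < n; ++i) m[i * n + i] *= (1.0 + eps);
    L.assign(n * n, 0.0);
    for (int j = 0; j < n; ++j) {
        double d = m[j * n + j];
        for (int k = 0; k < j; ++k) d -= L[j * n + k] * L[j * n + k];
        if (d < 0.0) return false;          // not even semi-definite numerically
        L[j * n + j] = std::sqrt(d);
        for (int i = j + 1; i < n; ++i) {
            double s = m[i * n + j];
            for (int k = 0; k < j; ++k) s -= L[i * n + k] * L[j * n + k];
            L[i * n + j] = (L[j * n + j] > 0.0) ? s / L[j * n + j] : 0.0;
        }
    }
    return true;
}

Running this for a few different epsilon values and extrapolating, as suggested above, only needs a thin loop around this function.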
As to the positive definite case, you could implement Crout's algorithm yourself (there is sample code in Numerical Recipes), but I would highly recommend against writing it yourself and advise using LAPACK instead.
This may involve having your boss pay for Intel MKL if he is concerned about potentially poor implementations of LAPACK. Most of the time I have heard such a speech, the rationale was "but we can't control the code; we want to write it ourselves so that we can debug it in case of a problem". Dumb argument. LAPACK is 40 years old and thoroughly tested.
Requiring not to use LAPACK is as silly as requiring not to use the standard library for sine, cosine and logarithms.

Related

Better precision to solve a geometric progression

Letting a1 be the first term, r the constant each term is multiplied by to get the next term, and n the number of terms, the geometric progression is ai = a1*r**(i-1); pn is the product of the n terms and sn the sum of the n terms.
I have the formulas to calculate these, but Fortran 95 (Plato 2) doesn't give me the range/precision I need. (For example, I can't get -1.234E+00567890 as a result.)
How can I extend double precision to work with these "huge" numbers?
Such huge numbers (your example -1.234E+00567890) are too large for any intrinsic numeric type supplied by the Fortran standard. They are also larger than the numbers used in physical and engineering applications. For example, my gfortran supports these types:
huge(1.0_real32) 3.40282347E+38
huge(1.0_real64) 1.7976931348623157E+308
huge(1.0_real128) 1.18973149535723176508575932662800702E+4932
As far as I know there is no Fortran compiler available with much larger intrinsic floating point types.
For specialized purposes such as yours, specialized libraries are needed. This site is not for software recommendations, so I will not recommend any particular one. Have a look at the list at http://crd-legacy.lbl.gov/~dhbailey/mpdist/, and of course there are more around (the GNU Scientific Library will have something, I am sure).
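If a full multiprecision library feels like overkill, one common workaround (my own sketch, written in C++ for consistency with the rest of this page; the same arithmetic translates directly to Fortran) is to carry base-10 logarithms and split the result into a mantissa and a decimal exponent. Since pn = a1^n * r^(n*(n-1)/2), we have log10|pn| = n*log10|a1| + n*(n-1)/2 * log10|r|:

#include <cmath>
#include <cstdio>

// Print the product of a geometric progression as mantissa and decimal exponent,
// without ever forming the (possibly astronomically large) value itself.
// Assumes a1 != 0 and r != 0. When the exponent is huge the mantissa is only
// good to a few digits, because the fractional part of log10p loses precision.
void printProduct(double a1, double r, long long n) {
    double log10p = n * std::log10(std::fabs(a1))
                  + 0.5 * n * (n - 1) * std::log10(std::fabs(r));
    double expo = std::floor(log10p);
    double mant = std::pow(10.0, log10p - expo);   // in [1, 10)
    // The product is negative iff it contains an odd number of negative factors.
    long long negatives = (a1 < 0 ? n : 0) + (r < 0 ? n * (n - 1) / 2 : 0);
    std::printf("%s%.3fE%+lld\n", negatives % 2 ? "-" : "", mant,
                static_cast<long long>(expo));
}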

Rounding error in dgesv?

I'm using the dgesv and dgemm Fortran subroutines from C++ to do some simple matrix multiplication and left division.
For random matrices A and B, I do:
A\(A\(A*B));
where * is defined using dgemm and \ using dgesv. Obviously, this expression should simplify to the identity matrix. I'm testing my answers against MATLAB and I'm getting more or less 1's on the diagonal but the other entries are very slightly off (the numbers are on the order of magnitude e-15, so they're close to 0 already).
I'm just wondering if this result is to be expected or not? Because if I do something like this:
C = A+B;
D = A*B;
D\(C\(C*C));
the result should come out to D\C. Basically, C\(C*C) is very accurate (matches MATLAB perfectly), but the second I do D\C I get something that's off by 1e-1 or even 1e+00. I'm guessing that's not supposed to happen?
Your problem seems to be related to the finite accuracy of floating point variables in C/C++. You can read more about it here. There are some techniques for minimizing that effect (some of them described in the wiki article), but there will always be some loss of accuracy after a few operations. You might want to use a third-party mathematical library that supports numbers of arbitrary precision (e.g. GMP). But still - as long as you stick to a numerical approach, the accuracy of your calculations will always be tainted.
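As a tiny illustration of the kind of last-bit noise involved (example mine, not from the question): the very same sum, grouped differently, already differs in the 16th significant digit.

#include <cstdio>

int main() {
    // Floating-point addition is not associative, so regrouping or reordering
    // the operations inside a matrix product or solve changes the trailing bits.
    double a = 0.1, b = 0.2, c = 0.3;
    std::printf("%.17g\n", (a + b) + c);   // 0.60000000000000009
    std::printf("%.17g\n", a + (b + c));   // 0.59999999999999998
    return 0;
}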

Bracketing algorithm when root finding. Single root in "quadratic" function

I am trying to implement a root finding algorithm. I am using the hybrid Newton-Raphson algorithm found in Numerical Recipes, which works pretty nicely. But I have a problem bracketing the root.
While implementing the root finding algorithm I realised that in several cases my functions have 1 real root and all the others imaginary (several of them, usually 6 or 9). The only root I am interested in is the real one, so the problem is not there. The thing is that the function approaches the root like a cubic function, just touching the y=0 axis...
The Newton-Raphson method needs a bracket where the function values have different signs, and all the bracketing methods I found don't work for this specific case.
What can I do? It is pretty important to find that root in my program...
EDIT: more problems: sometimes, due to really small numerical errors (say a variation of 1e-6 in some value), the "cubic" function does NOT have that real root; it is just imaginary with a negligible imaginary part... (checked with MATLAB)
EDIT 2: Much more information about the problem.
Ok, I need root finding algorithm.
Info I have:
The root I need to find is in [0, 1]; if there are more roots outside that interval I am not interested in them.
The root is real, there may be imaginary roots, but I don't want them.
Probably all the rest of the roots will be imaginary
The root may be a double root at that point, but I think that actually doesn't matter in numerical analysis problems
I need to use the root finding algorithm several times during the overall calculations, but the function will always be a polynomial
In one of the particular cases of the root finding, my polynomial will be similar to a quadratic function that just touches Y=0. Example of a real case:
The coefficients may not be 100% precise, and that really slight imprecision may make the function not touch the Y=0 axis.
I cannot solve for this specific case only, because in other cases the polynomial may be perfectly normal and not do any "strange" thing.
The method I am actually using is a Newton-Raphson hybrid, where if the derivative is really small it does a bisection step instead of Newton-Raphson (found in Numerical Recipes).
MATLAB's answer for the function in the image:
roots:
0.853553390593276 + 0.353553390593278i
0.853553390593276 - 0.353553390593278i
0.146446609406726 + 0.353553390593273i
0.146446609406726 - 0.353553390593273i
0.499999999999996 + 0.000000040142134i
0.499999999999996 - 0.000000040142134i
The function is a real example I prepared where I know that the answer I want is 0.5
Note:
I still haven't completely checked some of the answers you people have given me (thank you!); I am just trying to give all the information I already have to complete the question.
Assuming you have a one-dimensional polynomial problem (which I assume from the imaginary solutions) you can use Sturm sequences to bracket all real roots. See Sturm's theorem.
Welcome to the wonderful world of numerical methods. Watch your hairline; it might start receding as you pull your hair out in frustration.
First off, with numerical root finding, you are toast if you can't bracket the problem. Newton Raphson is nice for polishing off a solution once you get close, and it only works if the derivative near the root is well away from zero. You always need to have some slower technique at hand as a backup because Newton Raphson can send you off to never-never land (i.e., somewhere well outside the bracket). If your function is not a polynomial, the first thing to try is Brent's method. If your function is a polynomial, try Laguerre's method or Jenkins-Traub.
BTW, it sounds like you have a pathological problem. You shouldn't expect particularly good performance. Pathological problems are, well, pathological.
Addendum
If you are having problems with things that appear to be roots, but aren't, you need to take care how you evaluate your function. If you do have a polynomial, form each term of the polynomial, sort by absolute value, and add smallest to largest. This produces better accuracy most of the time, but fails if you have large terms whose sum is nearly zero. If that's the case, you might want to add those canceling terms separately, add the rest smallest to largest, and then compute a grand total -- and you're still kinda screwed. That big addition that nearly cancels loses a lot of precision. There's no escape other than extended precision arithmetic.
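A small sketch of that "sort by magnitude, add smallest first" evaluation (code and names are mine; coefficients stored lowest power first):

#include <vector>
#include <cmath>
#include <algorithm>

// Evaluate c[0] + c[1]*x + ... + c[n]*x^n by forming every term, sorting the
// terms by absolute value and accumulating from smallest to largest. This
// usually improves accuracy, but (as noted above) it cannot rescue a sum of
// large terms that nearly cancel.
double evalSortedTerms(const std::vector<double>& c, double x) {
    std::vector<double> terms;
    double xpow = 1.0;
    for (double ci : c) { terms.push_back(ci * xpow); xpow *= x; }
    std::sort(terms.begin(), terms.end(),
              [](double a, double b) { return std::fabs(a) < std::fabs(b); });
    double sum = 0.0;
    for (double t : terms) sum += t;
    return sum;
}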
Ander, thanks for responding to my question (about the interval); sorry for the delay in following up - I have been very busy with work. Also - before I found the additional information you've provided - I had in mind to explain quite a few things about how to handle this and was contemplating how to present them. However, I now believe your case is not too difficult and we can get at it without too much additional machinery, since you apparently have an explicit polynomial expression (coefficients of the various powers).
Let's start with a simple case, to pinpoint the approach.
Step 1.
If you have a 2nd degree polynomial, its derivative is first order and has a simple zero (which you can find by bracketing or simply by solving the equation explicitly). (Yes, I know there's a closed formula for the roots of a 2nd degree polynomial as well, but for the sake of the current argument, let us forget that.)
The zeros of the 2nd degree polynomial are then located one on the left side and one on the right side of the zero of the derivative. So, if you also have the interval where the roots of the original function (the 2nd degree polynomial) are to be found, you now have two intervals - left and right of the derivative zero - each containing one zero.
It is important to realize that the original function is MONOTONIC on each subinterval (decreasing on one of them, increasing on the other). Therefore, simply by checking the function values at the ends of the (sub)interval you can determine whether or not they actually bracket a zero. If not, there's a multiple zero (double, in this case) exactly at the zero of the derivative IF the function is zero there (otherwise, it is a double imaginary root of which you've now found the real part).
In case the zero of the derivative lies OUTSIDE the total interval, you will have at most one root inside your interval and you need to check only that particular (sub)interval.
Step 2.
Consider now a 3rd order polynomial.
Its derivative is 2nd order.
The derivative of THAT 2nd order polynomial is again 1st order and you proceed as before to get two subintervals to find the roots of the derivative of the original function. These two roots give you THREE (at most) intervals where you will find the 3 roots of the original (3rd order) function.
And also here, you will have intervals (3) where the original function is monotonic (alternatingly increasing/decreasing), making the analysis per subinterval quite easy.
Again, zeros may coincide (2 or even all 3) and may in addition turn out to be complex-valued (i.e. have non-zero imaginary parts). The analysis of the cases is straightforward: check function values at the borders of the intervals to assess whether or not there's a sign change (the function is monotonic on each subinterval) and/or whether the function is zero at one of the subinterval borders.
Step 3.
Generalize this with the known polynomial. Let's say - your example - it is 6th order:
a) construct the 5th derivative (i.e. reducing the original to a 1st order polynomial). Compute its zero (it is at precisely 0.5 in your example). In this case you're already done, but suppose you don't realize that. So you now have 2 intervals: 0..0.5 and 0.5..1.
b) construct the 4th derivative. Inspect its values at the subinterval-boundaries (0, 0.5, 1)
For each subinterval determine if it has a real zero inside. If so, you re-partition your original interval into 3 subintervals, using the two zeros found (you forget about the zero of the 5th derivative). If they coincide (at the previous cut, 0.5) you stick with that 0.5 (no matter whether you've found a true double zero of your 4th derivative there or a "double imaginary") and still have only 2 intervals, but for the sake of the argument let's say you now have 3.
c) construct the 3rd derivative and do likewise as before. You will then have 4 (at most) intervals.
d) And so on. After having processed the 2nd derivative in this fashion you have 5 (at most) intervals, and after processing the 1st derivative you have 6 intervals (or fewer). Knowing the function is monotonic on each subinterval, you'll quickly determine in each of them whether there's a real root, as always using the known monotonicity of the function in each of the final subintervals.
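A rough sketch of the whole recursive procedure (code entirely mine; coefficients stored lowest power first, tolerances illustrative). Note that, exactly as discussed above, a tangent root of even multiplicity produces no sign change, so it still has to be caught by checking whether the function is "close enough" to zero at one of the cut points:

#include <vector>
#include <cmath>
#include <cstddef>
#include <algorithm>

// p[0] + p[1]*x + p[2]*x^2 + ... evaluated with Horner's scheme.
double evalPoly(const std::vector<double>& p, double x) {
    double v = 0.0;
    for (std::size_t i = p.size(); i-- > 0; ) v = v * x + p[i];
    return v;
}

std::vector<double> derivative(const std::vector<double>& p) {
    std::vector<double> d;
    for (std::size_t i = 1; i < p.size(); ++i) d.push_back(i * p[i]);
    return d;
}

// Bisection on [a, b] where p is known to be monotonic; NaN if no sign change.
double bisect(const std::vector<double>& p, double a, double b, double tol) {
    double fa = evalPoly(p, a), fb = evalPoly(p, b);
    if (fa == 0.0) return a;
    if (fb == 0.0) return b;
    if ((fa > 0.0) == (fb > 0.0)) return std::nan("");
    while (b - a > tol) {
        double m = 0.5 * (a + b), fm = evalPoly(p, m);
        if (fm == 0.0) return m;
        if ((fm > 0.0) == (fa > 0.0)) { a = m; fa = fm; } else { b = m; }
    }
    return 0.5 * (a + b);
}

// Real roots of p inside [lo, hi], found by isolating the roots of p' first,
// which splits [lo, hi] into subintervals on which p is monotonic.
std::vector<double> realRoots(const std::vector<double>& p,
                              double lo, double hi, double tol = 1e-12) {
    if (p.size() < 2 || (p.size() == 2 && p[1] == 0.0)) return {};
    if (p.size() == 2) {
        double r = -p[0] / p[1];
        if (r >= lo && r <= hi) return {r};
        return {};
    }
    std::vector<double> cuts = realRoots(derivative(p), lo, hi, tol);
    cuts.push_back(lo);
    cuts.push_back(hi);
    std::sort(cuts.begin(), cuts.end());
    std::vector<double> roots;
    for (std::size_t i = 0; i + 1 < cuts.size(); ++i) {
        double r = bisect(p, cuts[i], cuts[i + 1], tol);
        if (!std::isnan(r) && (roots.empty() || r - roots.back() > tol))
            roots.push_back(r);
    }
    return roots;
}

For the example given in the question, the complex pair near 0.5 produces no sign change, so nothing is returned; 0.5 only shows up as a cut point where |p| is tiny, and that is exactly where the "is it zero within tolerance" test described above has to be applied.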
Adding a note on numerical accuracy at evaluating a function:
A first (probably sufficient, in this case) method to reduce noise is NOT to evaluate your function in the way suggested by the original form (i.e. a6*x^6 + a5*x^5 + ...), but to rewrite it as:
a0 + x*(a1 + x*(a2 + x*(a3 + x*(a4 + x*(a5 + x*a6)))))
So, in evaluating you proceed:
tmp = a6
tmp = x*tmp + a5
tmp = x*tmp + a4
etcetera.
In case this little rewriting is not sufficient for numerical stability, you should rewrite your polynomial as (for instance) a Chebyshev-polynomial expansion and evaluate that one with its recurrence relations. Both steps (getting the expansion and applying the recurrence relations for evaluation) are rather simple. I can explain if you need help, but I guess it won't be necessary here.
In all cases, you HAVE to allow for some inaccuracy, i.e. accept that a computation will, generally speaking, NEVER give you the mathematically exact function value. So the assessment whether the function is presumably zero at some point must include some "tolerance", there's no way around this, unfortunately; the best you can aim for is to minimize the noise.
Well, if your function touches zero but never crosses it, you seem to be looking for a minimum (or a maximum). In which case, you're better off telling the computer to do exactly that --- either find the root of the derivative (if you can calculate it analytically), or use a minimization routine. Then check that the function value at the minimum is 'close enough' to zero.
Just to reiterate what was already said by other people:
don't start with the Newton-Raphson method; it's almost always better to start with Brent's method or even straightforward bisection (provided you can bracket the root).
An instability where 'small numerical errors' of the order of 1e-6 have bad effects is worth investigating. Immediate suspects: mixing floats and doubles, loss of precision somewhere etc.
EDIT: So, depending on some parameters, your function has either a zero crossing, or a minimum with zero value, is this correct? In this case, what I'd do is this: use a simple and robust bracketing strategy (e.g. start from [-1, 1], multiply the endpoints by 1.1, check the signs, keep multiplying, something like this). If that succeeds, there's a zero crossing, use a root finding routine. If bracketing fails, use minimization.
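A tiny sketch of such a bracket-expansion attempt (code mine; the growth factor and iteration cap are arbitrary):

#include <cmath>
#include <functional>

// Try to grow [a, b] outward until f changes sign across it. Returns true if a
// sign change was found; otherwise fall back to a minimization routine.
bool expandBracket(const std::function<double(double)>& f,
                   double& a, double& b, int maxTries = 60) {
    double fa = f(a), fb = f(b);
    for (int i = 0; i < maxTries; ++i) {
        if ((fa > 0.0) != (fb > 0.0)) return true;
        // Push the endpoint whose function value is smaller in magnitude
        // further away from the other endpoint.
        if (std::fabs(fa) < std::fabs(fb)) { a += 1.1 * (a - b); fa = f(a); }
        else                               { b += 1.1 * (b - a); fb = f(b); }
    }
    return false;
}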
Using Newton-Raphson is an act of desperation. You are much better off finding the continued fraction that represents your function and calculating that. A CF will converge much faster and will produce the real root(s). Also, because the CF produces a ratio of two integers you have tight control over numeric precision and don't have to worry about accumulation of rounding errors and other similar hair-pulling-out problems.
To find the real roots of any polynomial function refer to "A Continued Fraction Algorithm for Approximating All Real Polynomial Roots" by David Rosen (1978).
------------ ADDENDUM 1 --- 11 OCT-----------------
Ok, you are solving a sextic. You have several options. The simplest is to use a Taylor approximation (say to the 3rd degree) in conjunction with Halley's method. This is much superior to Newton because it has cubic convergence and you can detect imaginary solutions. The disadvantage is that you will have rounding problems which may result in an incorrect answer.
The ideal option is to find the continued fraction that represents the monic root, because this CF will be computable as an integer ratio of any desired precision, thus eliminating the problem of rounding.
One approach to computing this CF is via the Jacobi-Perron algorithm. See the paper Hendy and Jeans: http://www.ams.org/mcom/1981-36-154/S0025-5718-1981-0606514-X/S0025-5718-1981-0606514-X.pdf. This paper shows the exact algorithm for computing cubic and quartic roots via CF approximation.
Note that if the sextic is reducible then it can be converted into a quartic and a quadratic: http://elib.mi.sanu.ac.rs/files/journals/tm/21/tm1124.pdf. The quartic is then solvable by the algorithm in the Hendy paper.
The general solution to generate a CF for a sextic can be done via the Rogers-Ramanujan CF. See the following paper for the method: http://arxiv.org/pdf/1111.6023v2. This will generate the CF for any sextic.
As in your case, you are interested in the real factorization of a real polynomial. One may see that all complex roots come in conjugate pairs, each of which corresponds to a real quadratic factor. By finding this real quadratic and completing the square to get the form (x-r)^2 + s, you will be able to see the "real" even-order root r with an "error" given by s. If s > 0 is too large, you may discard it as probably being complex. If s < 0 is also large, then you have two well-separated real roots given by x = r ± √(-s). If s is very small then you might suspect r is a real double root and keep it.
Finding such a quadratic factor may be done using Bairstow's method, which is essentially a two-dimensional Newton iteration. It gives x^2 + ux + v, so r = -u/2 and s = v - r^2.
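A minimal sketch of that last step (code mine; the tolerance is a placeholder): given the real quadratic factor x^2 + u*x + v, complete the square and classify the root pair.

#include <cmath>
#include <cstdio>

// x^2 + u*x + v == (x - r)^2 + s with r = -u/2 and s = v - r^2.
void classifyQuadraticFactor(double u, double v, double tol) {
    double r = -u / 2.0;
    double s = v - r * r;
    if (std::fabs(s) <= tol)
        std::printf("(near) double real root at %g\n", r);
    else if (s < 0.0)
        std::printf("two real roots: %g and %g\n",
                    r - std::sqrt(-s), r + std::sqrt(-s));
    else
        std::printf("complex conjugate pair: %g +/- %g i\n", r, std::sqrt(s));
}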

Is catastrophic cancellation an issue when calculating dot products of floating point vectors? And if so, how is it typically addressed?

I am writing a physics simulator in C++ and am concerned about robustness. I've read that catastrophic cancellation can occur in floating point arithmetic when the difference of two numbers of almost equal magnitude is calculated.
It occurred to me that this may happen in the simulator when the dot product of two almost orthogonal vectors is calculated.
However, the references I have looked at only discuss solving the problem by rewriting the equation concerned (eg the quadratic formula can be rewritten to eliminate the problem) - but this doesn't seem to apply when calculating a dot product?
I guess I'd be interested to know if this is typically an issue in physics engines and how it is addressed.
One common trick is to make the accumulator variable a type with higher precision than the vectors themselves.
Alternatively, one can use Kahan summation when summing the terms.
Another approach is to use various blocked dot product algorithms instead of the canonical algorithm.
One can of course combine both the above approaches.
Note that the above concerns the general error behavior of dot products, not specifically catastrophic cancellation.
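A sketch combining two of those ideas (code mine, assuming float input vectors): accumulate in double and use Kahan (compensated) summation for the running sum.

#include <vector>
#include <cstddef>

// Dot product of float vectors with a double accumulator and Kahan summation.
double dotKahan(const std::vector<float>& x, const std::vector<float>& y) {
    double sum = 0.0;
    double comp = 0.0;                  // running compensation for lost low-order bits
    for (std::size_t i = 0; i < x.size() && i < y.size(); ++i) {
        double term = static_cast<double>(x[i]) * static_cast<double>(y[i]);
        double t1 = term - comp;
        double t2 = sum + t1;
        comp = (t2 - sum) - t1;         // what got rounded away in the addition
        sum = t2;
    }
    return sum;
}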
You say in a comment that you have to calculate x1*x2 + y1*y2, where all variables are floats. So if you do the calculation in double-precision, you lose no accuracy at all, because double-precision has more than twice as many bits of precision as float (assuming your target uses IEEE-754).
Specifically: let xx, yy be the real numbers represented by the float variables x, y. Let xxyy be their product, and let xy be the result of the double-precision multiplication x * y. Then in all cases, xxyy is the real number represented by xy.
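Concretely, for the two-term case from the comment it might look like this (variable names assumed); each float-by-float product fits exactly in a double (24-bit significands, 53-bit target), and only the final addition rounds:

float dotPairs(float x1, float x2, float y1, float y2) {
    // Promote to double: the products x1*x2 and y1*y2 are then exact,
    // and the single addition is correctly rounded.
    double d = static_cast<double>(x1) * x2 + static_cast<double>(y1) * y2;
    return static_cast<float>(d);
}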

Need pow(-1,1.2) to be 1

I am using math.h with GCC and GSL. I was wondering how to get this to evaluate?
I was hoping that the pow function would recognize pow(-1,1.2) as ((-1)^6)^(1/5). But it doesn't.
Does anybody know of a C++ library that will recognize these? Perhaps somebody has a decomposition routine they could share.
Mathematically, pow(-1, 1.2) is simply not defined. There are no powers with fractional exponents of negative numbers, and I hope there is no library that will simply return some arbitrary value for such an expression. Would you also expect something like
pow(-1, 0.5) = ((-1)^2)^(1/4) = 1
which obviously isn't desirable.
Moreover, the floating point number 1.2 isn't even exactly equal to 6/5. The closest double precision number to 1.2 is
1.1999999999999999555910790149937383830547332763671875
Given this, what result would you expect now for pow(-1, 1.2)?
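You can see that representation for yourself (example mine; assumes a C library that converts doubles to decimal exactly):

#include <cstdio>

int main() {
    // The double nearest to 1.2, printed with enough significant digits.
    std::printf("%.55g\n", 1.2);
    // prints 1.1999999999999999555910790149937383830547332763671875
    return 0;
}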
If you want to raise negative numbers to powers -- especially fractional powers -- work in the complex domain: use std::pow on std::complex (or cpow() in C). You'll need to include <complex> to use it.
It seems like you're looking for pow(abs(x), y).
Explanation: you seem to be thinking in terms of
x^y = (x^N)^(y/N)
If we choose N = 2, then you have
(x^2)^(y/2) = ((x^2)^(1/2))^y
But
(x^2)^(1/2) = |x|
Substituting gives
|x|^y
This is a stretch, because the above manipulations only work for non-negative x, but you're the one who chose to use that assumption.
Sounds like you want to perform a complex power (cpow()) and then take the magnitude (abs()) of that after.
>>> abs(cmath.exp(1.2*cmath.log(-1)))
1.0
>>> abs(cmath.exp(1.2*cmath.log(-293.2834)))
913.57662451612202
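The same computation in C++ with std::complex (sketch matching the Python/cmath session above):

#include <complex>
#include <cstdio>

int main() {
    // |exp(b * log(a))| for a < 0, i.e. the magnitude of the complex power.
    std::complex<double> p1 = std::exp(1.2 * std::log(std::complex<double>(-1.0)));
    std::complex<double> p2 = std::exp(1.2 * std::log(std::complex<double>(-293.2834)));
    std::printf("%.15g\n", std::abs(p1));   // ~1
    std::printf("%.15g\n", std::abs(p2));   // ~913.5766
    return 0;
}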
pow(a,b) is often thought of, defined as, and implemented as exp(log(a)*b), where log(a) is the natural logarithm of a. log(a) is not defined for a<=0 in real numbers. So you need to write a function with a special case for negative a and integer b and/or b=1/(some_integer). It's easy to special-case integer b, but b=1/(some_integer) is prone to round-off problems, as Sven Marnach pointed out.
Maybe for your domain pow(-a,b) should always be -pow(a,b)? But then you'd just implement such a function, so I assume the question warrants more explanation.
Like duskwuff suggested, a much more robust and "mathematical" solution is to use the complex functions log and exp, but it's much more "complex" (excuse my pun) than it seems on the surface (even though there's a cpow function). And it'll be much slower if you have to compute a lot of pow()s.
Now there's an important catch with complex numbers that may or may not be relevant to your problem domain: when done right, the result of pow(a,b) is not one but often several complex numbers; in the cases you care about, one of them will be a complex number with a nearly zero imaginary part (it'll be non-zero due to round-off errors) which you can simply ignore and/or not compute in your code.
To demonstrate it, consider what pow(-1,.5) is. It's a number X such that X^2==-1. Guess what? There are 2 such numbers: i and -i. Generally, pow(-1, 1/N) has exactly N solutions, although you're interested in only one of them.
If the imaginary part of all results of pow(a,b) is significant, it means you are passing wrong values. For single-precision floating point values in the range you describe, 1e-6*max(abs(a),abs(b)) would be a good starting point for defining the "significant enough" threshold. The extreme "wrong values" would be pow(-1,0.5) which would return 0 + 1i (0 in real part, 1 in imaginary part). Here the imaginary part is huge relative to the input and real part, so you know you screwed up your input values.
In any reasonable single-return-result implementation of cpow() , cpow(-1,0.3333) will probably return something like -1+0.000001i and ignore two other values with significant imaginary parts. So you can just take that real value and that's your answer.
Use std::complex. Without that, the roots of unity don't make much sense. With it they make a whole lot of sense.