Solving Systems of Linear Equations using Eigen (C++)

I'm currently working on a fluid simulation in C++, and part of the algorithm is to solve a sparse system of linear equations. People recommended using the library Eigen for this. I decided to test it out using this short program that I wrote:
#include <Eigen/SparseCholesky>
#include <cstdlib>   // for system()
#include <iostream>
#include <vector>

int main() {
    std::vector<Eigen::Triplet<double>> triplets;
    triplets.push_back(Eigen::Triplet<double>(0, 0, 1));
    triplets.push_back(Eigen::Triplet<double>(0, 1, -2));
    triplets.push_back(Eigen::Triplet<double>(1, 0, 3));
    triplets.push_back(Eigen::Triplet<double>(1, 1, -2));

    Eigen::SparseMatrix<double> A(2, 2);
    A.setFromTriplets(triplets.begin(), triplets.end());

    Eigen::VectorXd b(2);
    b[0] = -2;
    b[1] = 2;

    Eigen::SimplicialCholesky<Eigen::SparseMatrix<double>> chol(A);
    Eigen::VectorXd x = chol.solve(b);
    std::cout << x[0] << ' ' << x[1] << std::endl;

    system("pause");
}
This sets up these two equations:
x - 2y = -2
3x - 2y = 2
The correct solution is:
x = 2
y = 2
But the problem is that when the program runs, it outputs:
0.181818 -0.727273
Which is totally wrong! I have been debugging this for hours, but it's a very short program and I'm following the tutorial on the Eigen website exactly. Does anybody know what is causing this issue?
P.S. I know that the classes I'm using are for sparse matrices, but the only difference between those and the normal Matrix classes is the way the elements are stored.

SimplicialCholesky is for symmetric positive definite (SPD) matrices, and the matrix you assembled is not even symmetric. By default it reads only the entries in the lower triangular part, ignoring the others, so it actually solved:
x + 3y = -2
3x -2y = 2
As you can see, for non-symmetric square problems you need a direct solver based on LU, or BiCGSTAB in the world of iterative solvers. This is all summarized in the doc.
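For example, here is a minimal sketch of the question's program with SimplicialCholesky swapped for SparseLU:

#include <Eigen/SparseLU>
#include <iostream>
#include <vector>

int main() {
    std::vector<Eigen::Triplet<double>> triplets;
    triplets.push_back(Eigen::Triplet<double>(0, 0, 1));
    triplets.push_back(Eigen::Triplet<double>(0, 1, -2));
    triplets.push_back(Eigen::Triplet<double>(1, 0, 3));
    triplets.push_back(Eigen::Triplet<double>(1, 1, -2));

    Eigen::SparseMatrix<double> A(2, 2);
    A.setFromTriplets(triplets.begin(), triplets.end());

    Eigen::VectorXd b(2);
    b << -2, 2;

    // SparseLU handles general (non-symmetric) square sparse matrices.
    Eigen::SparseLU<Eigen::SparseMatrix<double>> solver;
    solver.compute(A);
    Eigen::VectorXd x = solver.solve(b);
    std::cout << x[0] << ' ' << x[1] << std::endl;  // prints: 2 2
}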

You should use a solver capable of processing non-symmetric sparse matrices. Another possible approach is to seek a solution not of the original system [A]*x = b, but of [A]^T*[A]*x = [A]^T*b, where [A]^T stands for the transpose of [A]. The latter system's matrix is symmetric and positive definite (as long as [A] is non-singular). The only shortcoming is that [A]^T*[A] may be rather ill-conditioned if the original [A] is not "good" in that sense: its condition number is the square of that of [A]. Just an example of software designed to solve such problems:
http://members.ozemail.com.au/~comecau/CMA_LS_Sparse.htm
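For illustration, the normal-equations route might look like this in Eigen (a hypothetical sketch; SimplicialLDLT is appropriate because [A]^T*[A] is SPD):

#include <Eigen/SparseCholesky>

// Solve A*x = b via the normal equations A^T*A*x = A^T*b.
// Note: cond(A^T*A) = cond(A)^2, so accuracy may suffer for ill-conditioned A.
Eigen::VectorXd solveNormalEquations(const Eigen::SparseMatrix<double>& A,
                                     const Eigen::VectorXd& b) {
    Eigen::SparseMatrix<double> AtA = A.transpose() * A;  // SPD when A is non-singular
    Eigen::VectorXd Atb = A.transpose() * b;
    Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> ldlt(AtA);
    return ldlt.solve(Atb);
}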

Related

Solving systems of linear equations for small matrices via Cramer's rule has large numerical error

I have observed that when I solve a system of linear equations via Cramer's rule (a quotient of two determinants) for matrices of order N < 10, I get quite a large residual error compared to the LAPACK solution.
Here is an example:
float B00[36] __attribute__((aligned(16))) = {127.3611, -46.75962, 62.8739, -9.175959, 27.23792, 1.395347,
-46.75962, 841.5496, 406.2475, -119.3715, -33.60108, 6.269638,
62.8739, 406.2475, 1302.981, -542.8405, 95.03378, 42.77704,
-9.175959, -119.3715, -542.8405, 434.3342, 34.96918, -33.74546,
27.23792, -33.60108, 95.03378, 34.96918, 59.10199, -1.880791,
1.395347, 6.269638, 42.77704, -33.74546, -1.880791, 2.650853};
float c00[6] __attribute__((aligned(16))) = {-0.102149, -5.76615, -17.02828, 12.47396, 1.158018, -0.9571021};
Solving this with LAPACK (from Intel MKL) yields:
x = [-0.000314947
-0.000589154
-0.00587876
0.0184799
0.01738
-0.0170484]
and my own implementation of Cramer's rule yields:
x = [-0.000314933
-0.000798058
-0.00587888
0.0184808
0.017381
-0.0170508]
Note the difference in x[1].
I can guarantee that my determinant calculation is correct. Has anyone made a similar observation, or can anyone explain this?
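For reference, the quotient-of-determinants rule being compared here looks roughly like this in Eigen, in double precision (a hypothetical sketch, not the asker's implementation):

#include <Eigen/Dense>

// Cramer's rule: x[i] = det(A_i) / det(A), where A_i is A with
// column i replaced by b.
Eigen::VectorXd cramer(const Eigen::MatrixXd& A, const Eigen::VectorXd& b) {
    const double detA = A.determinant();
    Eigen::VectorXd x(A.cols());
    for (int i = 0; i < A.cols(); ++i) {
        Eigen::MatrixXd Ai = A;
        Ai.col(i) = b;
        x(i) = Ai.determinant() / detA;
    }
    return x;
}

In single precision the two determinants each lose accuracy independently, which is consistent with the larger residual observed.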

Total Least Squares algorithm in C/C++

Given a set of points P, I need to find a line L that best approximates these points. I have tried to use the function gsl_fit_linear from the GNU Scientific Library. However, my data set often contains points whose line of best fit has undefined slope (x=c), so gsl_fit_linear returns NaN. It is my understanding that it is best to use total least squares for this sort of thing because it is fast, robust, and it gives the equation in terms of r and theta (so x=c can still be represented). I can't seem to find any C/C++ code out there for this problem. Does anyone know of a library or something that I can use? I've read a few research papers on this, but the topic is still a little fuzzy, so I don't feel confident implementing my own.
Update:
I made a first attempt at programming my own with armadillo using the given code on this wikipedia page. Alas I have so far been unsuccessful.
This is what I have so far:
void pointsToLine(vector<Point> P)
{
    Row<double> x(P.size());
    Row<double> y(P.size());
    for (int i = 0; i < P.size(); i++)
    {
        x << P[i].x;
        y << P[i].y;
    }

    int m = P.size();
    int n = x.n_cols;
    mat Z = join_rows(x, y);

    mat U;
    vec s;
    mat V;
    svd(U, s, V, Z);

    mat VXY = V(span(0, (n-1)), span(n, (V.n_cols-1)));
    mat VYY = V(span(n, (V.n_rows-1)), span(n, (V.n_cols-1)));
    mat B = (-1*VXY) / VYY;
    cout << B << endl;
}
The output from B is always 0.5504, even when my data set changes. Also, I thought the output should be two values, so I'm definitely doing something very wrong.
Thanks!
To find the line that minimises the sum of the squares of the (orthogonal) distances from the line, you can proceed as follows:
The line is the set of points p+r*t where p and t are vectors to be found, and r varies along the line. We restrict t to be unit length. While there is another, simpler, description in two dimensions, this one works with any dimension.
The steps are
1/ compute the mean p of the points
2/ accumulate the covariance matrix C
C = Sum{ i | (q[i]-p)*(q[i]-p)' } / N
(where you have N points and ' denotes transpose)
3/ diagonalise C and take as t the eigenvector corresponding to the largest eigenvalue.
All this can be justified, starting from the (orthogonal) squared distance of a point q from a line represented as above, which is
d2(q) = (q-p)'*(q-p) - ((q-p)'*t)^2
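A minimal sketch of these steps in Eigen (the Point type and function name mirror the question and are assumptions):

#include <Eigen/Dense>
#include <iostream>
#include <vector>

struct Point { double x, y; };  // assumed layout of the asker's Point

// Fit the line p + r*t minimising orthogonal distances (total least squares).
void pointsToLine(const std::vector<Point>& P) {
    const int N = static_cast<int>(P.size());
    Eigen::MatrixXd Q(N, 2);
    for (int i = 0; i < N; ++i)
        Q.row(i) << P[i].x, P[i].y;

    // 1/ mean of the points
    Eigen::RowVector2d p = Q.colwise().mean();

    // 2/ covariance matrix C
    Eigen::MatrixXd centred = Q.rowwise() - p;
    Eigen::Matrix2d C = centred.transpose() * centred / N;

    // 3/ t = eigenvector of the largest eigenvalue (eigenvalues sorted ascending)
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix2d> es(C);
    Eigen::Vector2d t = es.eigenvectors().col(1);

    std::cout << "p = " << p << ", t = " << t.transpose() << '\n';
}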

Avoid numerical underflow when obtaining determinant of large matrix in Eigen

I have implemented an MCMC algorithm in C++ using the Eigen library. The main part of the algorithm is a loop in which first some matrix calculations are performed, after which the determinant of the resulting matrix is obtained and added to the output. E.g.:
MatrixXd delta0;
NumericVector out(3);
out[0] = 0;
out[1] = 0;
for (int i = 0; i < s; i++) {
    ...
    delta0 = V*(A.cast<double>() - (A+B).cast<double>()*theta.asDiagonal());
    ...
    I = delta0.determinant();
    out[1] += I;
    out[2] += std::sqrt(I);
}
return out;
Now on certain matrices I unfortunately observe a numerical underflow so that the determinant is outputted as zero (which it actually isn't).
How can I avoid this underflow?
One solution would be to obtain, instead of the determinant, the log of the determinant. However,
I do not know how to do this;
how could I then add up these logs?
Any help is greatly appreciated.
There are 2 main options that come to my mind:
The product of the eigenvalues of a square matrix is the determinant of that matrix; therefore, the sum of the logarithms of the eigenvalues is the logarithm of the determinant. Assume det(A) = a and det(B) = b for compact notation. After applying the aforementioned to two matrices A and B, we end up with log(a) and log(b), and the following holds:
log(a + b) = log(a) + log(1 + e ^ (log(b) - log(a)))
Yes, we get a logarithm of the sum. What would you do with it next? I don't know; it depends on what you need. If you have to remove the logarithm via e ^ log(a + b) = a + b, then you might be lucky that the value of a + b does not underflow now, but in some cases it can still underflow as well.
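As a minimal sketch of that identity in code (the helper name log_add is hypothetical):

#include <cmath>
#include <utility>

// Returns log(a + b) given log(a) and log(b), without ever forming a or b.
double log_add(double log_a, double log_b) {
    if (log_a < log_b) std::swap(log_a, log_b);  // keep the larger term first
    return log_a + std::log1p(std::exp(log_b - log_a));
}

A running sum like out[1] would then be kept in log space: log_total = log_add(log_total, log_det_i).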
Perform clever preconditioning; there might be tons of options here, and you'd better read about them from trusted sources, as this is a serious topic. The simplest (and probably the cheapest) example of preconditioning for this particular problem is to recall that det(c * A) = (c ^ n) * det(A), where A is an n-by-n matrix, premultiply your matrix by some c, compute the determinant, and then divide it by c ^ n to get the actual one.
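A minimal sketch of that scaling trick with Eigen (the helper name is hypothetical; the final division can of course reintroduce the tiny value, as noted above):

#include <Eigen/Dense>
#include <cmath>

// Uses det(c * A) = (c ^ n) * det(A) for an n-by-n matrix A:
// compute the determinant away from the underflow region, then undo the scaling.
double scaledDet(const Eigen::MatrixXd& A, double c) {
    const double d = (c * A).determinant();
    return d / std::pow(c, static_cast<double>(A.rows()));
}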
Update
I thought about one more option. If on the last stages of #1 or #2 you still experience underflow too frequently, then it might be a good idea to increase precision specifically for these last operations, for example, by utilizing GNU MPFR.
You can use Householder elimination to get the QR decomposition of delta0. Then the determinant of the Q part is +/-1 (depending on whether you did an even or odd number of reflections) and the determinant of the R part is the product of the diagonal elements. Both of these are easy to compute without running into underflow hell---and you might not even care about the first.
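For what it's worth, Eigen can compute this directly from its QR decomposition; a minimal sketch (logAbsDeterminant is available on Eigen's Householder QR classes):

#include <Eigen/QR>

// log|det(M)| via QR: |det(Q)| = 1, so only the diagonal of R contributes.
double logAbsDet(const Eigen::MatrixXd& M) {
    return M.householderQr().logAbsDeterminant();
    // Equivalently, summed by hand from the packed factorization:
    //   Eigen::HouseholderQR<Eigen::MatrixXd> qr(M);
    //   return qr.matrixQR().diagonal().array().abs().log().sum();
}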

How to implement a left matrix division in C++ using GSL

I am trying to port a MATLAB program to C++.
And I want to implement a left matrix division between a matrix A and a column vector B.
A is an m-by-n matrix with m is not equal to n and B is a column vector with m components.
And I want the result X = A\B to be the solution in the least-squares sense to the under- or overdetermined system of equations AX = B. In other words, X minimizes norm(A*X - B), the length of the vector AX - B.
That means I want it has the same result as the A\B in MATLAB.
I want to implement this feature with GSL (the GNU Scientific Library), but I don't know much about the math of least-squares fitting or matrix operations. Can somebody tell me how to do this in GSL? Or, if implementing it in GSL is too complicated, can someone suggest a good open-source C/C++ library that provides the above matrix operation?
Okay, I finally figured it out by myself after spending another 5 hours on it. But still, thanks for the suggestions to my question.
Assuming we have a 5x2 matrix
A = [1 0
1 0
0 1
1 1
1 1]
and a vector b = [1.8388,2.5595,0.0462,2.1410,0.6750]
The solution to A \ b would be:
#include <stdio.h>
#include <gsl/gsl_linalg.h>

int main(void)
{
    double a_data[] = {1.0, 0.0,
                       1.0, 0.0,
                       0.0, 1.0,
                       1.0, 1.0,
                       1.0, 1.0};
    double b_data[] = {1.8388, 2.5595, 0.0462, 2.1410, 0.6750};

    gsl_matrix_view m = gsl_matrix_view_array(a_data, 5, 2);
    gsl_vector_view b = gsl_vector_view_array(b_data, 5);

    gsl_vector *x = gsl_vector_alloc(2);        // size equal to n
    gsl_vector *residual = gsl_vector_alloc(5); // size equal to m
    gsl_vector *tau = gsl_vector_alloc(2);      // size equal to min(m, n)

    gsl_linalg_QR_decomp(&m.matrix, tau);
    gsl_linalg_QR_lssolve(&m.matrix, tau, &b.vector, x, residual);

    printf("x = \n");
    gsl_vector_fprintf(stdout, x, "%g");

    gsl_vector_free(x);
    gsl_vector_free(tau);
    gsl_vector_free(residual);
    return 0;
}
In addition to the one you gave, a quick search revealed other GSL examples, one using QR decomposition, the other LU decomposition.
There exist other numeric libraries capable of solving linear systems (a basic functionality in every linear algebra library). For one, Armadillo offers a nice and readable interface:
#include <iostream>
#include <armadillo>

using namespace std;
using namespace arma;

int main()
{
    mat A = randu<mat>(5, 2);
    vec b = randu<vec>(5);
    vec x = solve(A, b);
    cout << x << endl;
    return 0;
}
Another good one is the Eigen library:
#include <iostream>
#include <Eigen/Dense>

using namespace std;
using namespace Eigen;

int main()
{
    Matrix3f A;
    Vector3f b;
    A << 1,2,3, 4,5,6, 7,8,10;
    b << 3, 3, 4;
    Vector3f x = A.colPivHouseholderQr().solve(b);
    cout << "The solution is:\n" << x << endl;
    return 0;
}
Now, one thing to remember is that MLDIVIDE is a super-charged function with multiple execution paths. If the coefficient matrix A has some special structure, it is exploited to obtain a faster or more accurate result (choosing among substitution algorithms, LU and QR factorizations, and more).
MATLAB also has PINV, which returns the minimal-norm least-squares solution, in addition to a number of other iterative methods for solving systems of linear equations.
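For reference, the analogous minimum-norm least-squares solve via SVD in Eigen might look like this (a sketch, assuming dynamic-size double matrices; the helper name is hypothetical):

#include <Eigen/SVD>

// Minimum-norm least-squares solution of A*x = b, analogous to PINV(A)*b.
Eigen::VectorXd pinvSolve(const Eigen::MatrixXd& A, const Eigen::VectorXd& b) {
    return A.jacobiSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(b);
}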
I'm not sure I understand your question, but if you've already found your solution using MATLAB, you may want to consider using MATLAB Coder, which automatically translates your MATLAB code into C++.

Thin QR decomposition in C++

Is there an easy to use c++ library for "thin" QR decomposition of a rectangular matrix?
Eigen seems to only support full Q matrices. I can take a full Q and discard some columns, but would it be more efficient to not compute them to begin with?
Newmat does exactly what you want.
To decompose A into QR, you can do:
Matrix Q = A;
UpperTriangularMatrix R;
QRZ(Q, R);
If A is a 3x5 matrix, R will be 3x3 and Q will be 3x5 as well.
Even though this question is a bit old, for the record: Eigen does not explicitly compute the Q matrix, but rather a sequence of Householder vectors, which can be multiplied directly with any matrix that has the correct number of rows.
If you actually want the thin Q matrix explicitly, just multiply by an identity matrix of the desired size:
#include <Eigen/QR>
#include <iostream>

int main()
{
    using namespace Eigen;
    MatrixXf A(MatrixXf::Random(5, 3));
    HouseholderQR<MatrixXf> qr(A);
    MatrixXf thinQ = qr.householderQ() * MatrixXf::Identity(5, 3);
    std::cout << thinQ << '\n';
}