I have a sparse matrix A (120,000 x 120,000) and a vector b (120,000), and I would like to solve the linear system Ax = b using the Eigen library. I tried to follow the documentation but I always get an error. I also tried converting the matrix to dense and solving the system:
Eigen::MatrixXd H(N, N);
H = Eigen::MatrixXd(A);
Eigen::ColPivHouseholderQR<Eigen::MatrixXd> dec(H);
Eigen::VectorXd RR = dec.solve(b);
but I got a memory error.
Please help me find a way to solve this problem.
For sparse systems, we generally use iterative methods. One reason is that direct solvers such as LU and QR factorizations fill in your initial matrix (fill in the sense that initially zero components are replaced with nonzero ones). In most cases the resulting dense matrix can no longer fit into memory (hence your memory error). In short, iterative solvers do not change your sparsity pattern, as they only involve matrix-vector products (no fill-in).
That said, you must first know whether your system is symmetric positive definite (aka SPD); in that case you can use the conjugate gradient method. Otherwise you must use a method for unsymmetric systems such as BiCGSTAB or GMRES.
You should also know that most of the time a preconditioner is used, especially if your system is ill-conditioned.
Looking at the Eigen docs, I found this (an example without a preconditioner, AFAIK):
int n = 10000;
VectorXd x(n), b(n);
SparseMatrix<double> A(n,n);
/* ... fill A and b ... */
BiCGSTAB<SparseMatrix<double> > solver;
solver.compute(A);
x = solver.solve(b);
std::cout << "#iterations: " << solver.iterations() << std::endl;
std::cout << "estimated error: " << solver.error() << std::endl;
which is maybe a good start (use the conjugate gradient method if your matrix is SPD). Note that there is no preconditioner here, so convergence will certainly be rather slow.
Recap: big matrix -> iterative method + preconditioner.
This is really just a first, basic explanation; you can find further information about the theory in Saad's book Iterative Methods for Sparse Linear Systems. This topic is huge and you can find plenty of other books on the subject.
I am not an Eigen user; however, there are preconditioners here.
-> Incomplete LU (unsymmetric systems) and Incomplete Cholesky (SPD systems) are generally good all-purpose preconditioners; they are the first ones to test.
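For instance, in Eigen the preconditioner is passed as a template argument. A minimal sketch, assuming Eigen's built-in IncompleteLUT (the drop tolerance value is illustrative):
#include <Eigen/Sparse>
#include <Eigen/IterativeLinearSolvers>

// BiCGSTAB preconditioned with an incomplete LU factorization
Eigen::BiCGSTAB<Eigen::SparseMatrix<double>, Eigen::IncompleteLUT<double>> solver;
solver.preconditioner().setDroptol(1e-4); // illustrative value, tune for your system
solver.compute(A);
Eigen::VectorXd x = solver.solve(b);
// For an SPD system, the analogue would be:
// Eigen::ConjugateGradient<Eigen::SparseMatrix<double>,
//                          Eigen::Lower|Eigen::Upper,
//                          Eigen::IncompleteCholesky<double>> cg;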
Related
I am trying to solve a sparse linear system as quickly as possible using Eigen.
The docs give you 4 sparse solvers to choose from (but really it's more like these three):
SimplicialLLT
#include <Eigen/SparseCholesky> | direct LLt factorization | requires SPD | fill-in reducing ordering | LGPL
Note: SimplicialLDLT is often preferable.
SimplicialLDLT
#include <Eigen/SparseCholesky> | direct LDLt factorization | requires SPD | fill-in reducing ordering | LGPL
Recommended for very sparse and not too large problems (e.g., 2D Poisson eq.).
SparseLU
#include <Eigen/SparseLU> | LU factorization | square matrices | fill-in reducing ordering, leverages fast dense algebra | MPL2
Optimized for small and large problems with irregular patterns.
When I use the last solver, i.e. I do:
Eigen::SparseLU<Eigen::SparseMatrix<Scalar>> solver(bijection);
Assert(solver.info() == Eigen::Success, "Matrix is degenerate.");
solver.compute(bijection);
Assert(solver.info() == Eigen::Success, "Matrix is degenerate.");
Eigen::VectorXf vertices_u = solver.solve(u);
Assert(solver.info() == Eigen::Success, "Matrix is degenerate.");
Eigen::VectorXf vertices_v = solver.solve(v);
Assert(solver.info() == Eigen::Success, "Matrix is degenerate.");
I get the correct result, which graphically looks like this:
If I use SimplicialLDLT, i.e. if I change the solver line and nothing else to:
Eigen::SimplicialLDLT<Eigen::SparseMatrix<Scalar>> solver(bijection);
I get this degenerate monstrosity:
Basically the two solvers are returning wildly different results for the exact same sparse system. How is this possible?
None of the error checks fire, so in both versions the matrices are considered to be fine.
https://eigen.tuxfamily.org/dox/group__TopicSparseSystems.html => SimplicialLDLT is only for SPD matrices, while SparseLU works for any square matrix; if your system is not SPD, the LDLt factorization is not applicable and can silently produce garbage. You might try a least-squares approach: https://snaildove.github.io/2017/08/01/positive_definite_and_least_square/
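A minimal sketch of that idea, assuming the question's matrix (bijection) has full column rank: solve the SPD normal equations A'A x = A'b with SimplicialLDLT instead of factorizing A directly (note that this squares the condition number):
// The normal equations are SPD when `bijection` has full column rank,
// so SimplicialLDLT becomes applicable.
Eigen::SparseMatrix<Scalar> AtA = bijection.transpose() * bijection;
Eigen::SimplicialLDLT<Eigen::SparseMatrix<Scalar>> solver(AtA);
Eigen::VectorXf vertices_u = solver.solve(bijection.transpose() * u);
Eigen::VectorXf vertices_v = solver.solve(bijection.transpose() * v);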
I am currently working on a project where we need to minimize
|Ax - b|^2.
In this case, A is a very sparse matrix and A'A has at most 5 nonzero elements in each row.
We are working with images, and the dimension of A'A is NxN, where N is the number of pixels. In this case N = 76800. We plan to move to RGB, and then the dimension will be 3Nx3N.
In Matlab, solving (A'A)\(A'b) takes about 0.15 s using doubles.
I have now done some experimenting with Eigen's sparse solvers. I have tried:
SimplicialLLT
SimplicialLDLT
SparseQR
ConjugateGradient
and some different orderings. The best by far so far is
SimplicialLDLT
which takes about 0.35-0.5 s using AMDOrdering.
When I use ConjugateGradient, for example, it takes roughly 6 s, using 0 as the initialization.
The code for solving the problem is:
A_tot.makeCompressed();
// Create solver
Eigen::SimplicialLDLT<Eigen::SparseMatrix<float>, Eigen::Lower, Eigen::AMDOrdering<int> > solver;
// Eigen::ConjugateGradient<Eigen::SparseMatrix<float>, Eigen::Lower> cg;
solver.analyzePattern(A_tot);
double t1 = omp_get_wtime();
solver.compute(A_tot);
if (solver.info() != Eigen::Success)
{
    std::cerr << "Decomposition Failed" << std::endl;
    getchar();
}
Eigen::VectorXf opt = solver.solve(b_tot);
double t2 = omp_get_wtime();
std::cout << "Time for normal equations: " << t2 - t1 << std::endl;
This is the first time I have used sparse matrices and their solvers in C++. For this project speed is crucial; below 0.1 s is a minimum requirement.
I would like some feedback on which strategy would be best here. For example, one is supposed to be able to use SuiteSparse and OpenMP with Eigen. What are your experiences with these types of problems? Is there a way of exploiting the structure, for example? And should ConjugateGradient really be that slow?
Edit:
Thanks for some valuable comments! Tonight I have been reading a bit about cuSPARSE on Nvidia. It seems to be able to do factorization and even solve systems. In particular, it seems one can reuse the pattern and so forth. The question is how fast this could be and what the possible overhead is.
Given that the amount of data in my matrix A is the same as in an image, the memory copying should not be such an issue. Some years ago I wrote software for real-time 3D reconstruction; there you copy data for each frame, and a slow version still ran at over 50 Hz. So if the factorization is much faster, this could be a possible speed-up? In particular, the rest of the project will run on the GPU, so if one can solve the system there directly and keep the solution, I guess there is no drawback.
A lot has happened in the CUDA field and I am not really up to date.
Here are two links I found: Benchmark?, MatlabGPU
Your matrix is extremely sparse and corresponds to a discretization on a 2D domain, so it is expected that SimplicialLDLT is the fastest here. Since the sparsity pattern is fixed, call analyzePattern once, and then factorize instead of compute; this should save some milliseconds. Moreover, since you're working on a regular grid, you might also try to bypass the re-ordering step using NaturalOrdering (not 100% sure, you have to benchmark). If that's still not fast enough, you might search for a Cholesky solver tailored for skyline matrices (the Cholesky factorization is much simpler and thus faster in this case).
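A minimal sketch of that pattern reuse, assuming the nonzero values of A_tot change between solves while the pattern stays fixed (nFrames is illustrative):
// Symbolic analysis once, numeric factorization per update
Eigen::SimplicialLDLT<Eigen::SparseMatrix<float>, Eigen::Lower,
                      Eigen::AMDOrdering<int> > solver;
solver.analyzePattern(A_tot);            // once, up front
for (int frame = 0; frame < nFrames; ++frame)
{
    // ... update the nonzero values of A_tot, keeping the same pattern ...
    solver.factorize(A_tot);             // numeric factorization only
    Eigen::VectorXf opt = solver.solve(b_tot);
}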
I need to compute the eigenvalues and eigenvectors of a big matrix (about 1000x1000 or even bigger). Matlab works very fast but it does not guarantee accuracy. I need this to be pretty accurate (an error of about 1e-06 is OK) and within a reasonable time (an hour or two is OK).
My matrix is symmetric and pretty sparse. The exact values are: ones on the main diagonal, and on the diagonals directly below and above it. Example (4x4 case):
1 1 0 0
1 1 1 0
0 1 1 1
0 0 1 1
How can I do this? C++ is the most convenient for me.
MATLAB does not guarantee accuracy
I find this claim unreasonable. On what grounds do you say that you can find a (significantly) more accurate implementation than MATLAB's highly refined computational algorithms?
AND... using MATLAB's eig, the following is computed in less than half a second:
%// Generate the input matrix
X = ones(1000);
A = triu(X, -1) + tril(X, 1) - X;
%// Compute eigenvalues
v = eig(A);
It's fast alright!
I need this to be pretty accurate (about 1e-06 error is OK)
Remember that computing eigenvalues accurately is related to finding the roots of the characteristic polynomial. This specific 1000x1000 matrix is very ill-conditioned:
>> cond(A)
ans =
1.6551e+003
A general rule of thumb is that for a condition number of 10^k, you may lose up to k digits of accuracy (on top of what is lost by the numerical method itself due to loss of precision in the arithmetic).
So in your case, I'd expect the results to be accurate up to an approximate error of 10^-3.
If you're not opposed to using a third-party library, I've had great success using the Armadillo linear algebra library.
For the example below, arma is the namespace they like to use, vec is a vector, mat is a matrix.
arma::vec getEigenValues(arma::mat M) {
return arma::eig_sym(M);
}
You can also serialize the data directly into MATLAB and vice versa.
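A possible usage of that helper, assuming Armadillo is installed and linked (-larmadillo); the matrix construction mirrors the tridiagonal example from the question:
#include <armadillo>

int main()
{
    const int n = 1000;
    // Tridiagonal matrix: ones on the main, sub- and super-diagonals
    arma::mat M = arma::eye(n, n)
                + arma::diagmat(arma::ones(n - 1),  1)
                + arma::diagmat(arma::ones(n - 1), -1);
    arma::vec ev = getEigenValues(M);
    ev.print("eigenvalues:");
    return 0;
}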
Your matrix is tridiagonal and a (symmetric) Toeplitz matrix. I'd guess that Eigen and Matlab's eig have special cases to handle such matrices. There is a closed-form solution for the eigenvalues in this case (reference (PDF)). In Matlab, for your matrix, this is simply:
n = size(A,1);
k = (1:n).';
v = 1-2*cos(pi*k./(n+1));
This can be further optimized by noting that the eigenvalues are centered about 1 and thus only half of them need to be computed:
n = size(A,1);
if mod(n,2) == 0
    k = (1:n/2).';
    u = 2*cos(pi*k./(n+1));
    v = 1+[u;-u];
else
    k = (1:(n-1)/2).';
    u = 2*cos(pi*k./(n+1));
    v = 1+[u;0;-u];
end
I'm not sure how you're going to get anything faster and more accurate than that (other than performing a refinement step using the eigenvectors and optimization) with simple code. The above should be easy to translate to C++ (or use Matlab's codegen to generate C/C++ code that uses this or eig). However, your matrix is still ill-conditioned. Just remember that estimates of accuracy are worst case.
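For reference, a direct C++ translation of the closed-form formula might look like this (a sketch; the function name is mine):
#include <Eigen/Dense>
#include <cmath>

// Eigenvalues of the n-by-n tridiagonal Toeplitz matrix from the question
// (ones on the main diagonal and on the first sub-/super-diagonals).
Eigen::VectorXd tridiagToeplitzEigenvalues(int n)
{
    Eigen::VectorXd v(n);
    const double pi = 3.14159265358979323846;
    for (int k = 1; k <= n; ++k)
        v(k - 1) = 1.0 - 2.0 * std::cos(pi * k / (n + 1.0));
    return v;
}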
I'm currently trying to develop a small matrix-oriented math library (I'm using Eigen 3 for matrix data structures and operations) and I want to implement some handy Matlab functions, such as the widely used backslash operator (equivalent to mldivide) for computing the solution of linear systems (expressed in matrix form).
Is there any good detailed explanation of how this could be achieved? (I've already implemented the Moore-Penrose pseudoinverse pinv function with a classical SVD decomposition, but I've read somewhere that A\b isn't always pinv(A)*b; at least Matlab doesn't simply do that.)
Thanks
For x = A\b, the backslash operator encompasses a number of algorithms to handle different kinds of input matrices. The matrix A is diagnosed and an execution path is selected according to its characteristics.
The following pseudo-code, from the documentation page, describes what happens when A is a full (dense) matrix:
if size(A,1) == size(A,2)         % A is square
    if isequal(A,tril(A))         % A is lower triangular
        x = A \ b;                % This is a simple forward substitution on b
    elseif isequal(A,triu(A))     % A is upper triangular
        x = A \ b;                % This is a simple backward substitution on b
    else
        if isequal(A,A')          % A is symmetric
            [R,p] = chol(A);
            if (p == 0)           % A is symmetric positive definite
                x = R \ (R' \ b); % a forward and a backward substitution
                return
            end
        end
        [L,U,P] = lu(A);          % general, square A
        x = U \ (L \ (P*b));      % a forward and a backward substitution
    end
else                              % A is rectangular
    [Q,R] = qr(A);
    x = R \ (Q' * b);
end
For non-square matrices, QR decomposition is used. For square triangular matrices, it performs a simple forward/backward substitution. For square symmetric positive-definite matrices, Cholesky decomposition is used. Otherwise LU decomposition is used for general square matrices.
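Since you're building on Eigen 3, a rough sketch of the same dense-matrix dispatch could look like this (mldivide is a hypothetical helper name; the triangular special cases are omitted for brevity):
#include <Eigen/Dense>

Eigen::VectorXd mldivide(const Eigen::MatrixXd& A, const Eigen::VectorXd& b)
{
    if (A.rows() != A.cols())                  // rectangular: QR least squares
        return A.colPivHouseholderQr().solve(b);
    if (A.isApprox(A.transpose()))             // symmetric: try Cholesky
    {
        Eigen::LLT<Eigen::MatrixXd> llt(A);
        if (llt.info() == Eigen::Success)      // SPD: forward/backward substitution
            return llt.solve(b);
    }
    return A.partialPivLu().solve(b);          // general square: LU
}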
Update: MathWorks has updated the algorithm section in the doc page of mldivide with some nice flow charts. See here and here (full and sparse cases).
All of these algorithms have corresponding methods in LAPACK, and in fact it's probably what MATLAB is doing (note that recent versions of MATLAB ship with the optimized Intel MKL implementation).
The reason for having different methods is that it tries to use the most specific algorithm to solve the system of equations, taking advantage of all the characteristics of the coefficient matrix (either because it is faster or more numerically stable). You could certainly use a general solver, but it won't be the most efficient.
In fact, if you know what A is like beforehand, you could skip the extra testing by calling linsolve and specifying the options directly.
If A is rectangular or singular, you could also use pinv to find a minimal-norm least-squares solution (implemented using SVD decomposition):
x = pinv(A)*b
All of the above applies to dense matrices; sparse matrices are a whole different story. Usually iterative solvers are used in such cases. I believe MATLAB uses UMFPACK and other related libraries from the SuiteSparse package for its sparse direct solvers.
When working with sparse matrices, you can turn on diagnostic information and see the tests performed and algorithms chosen using spparms:
spparms('spumoni',2)
x = A\b;
What's more, the backslash operator also works on gpuArrays, in which case it relies on cuBLAS and MAGMA to execute on the GPU.
It is also implemented for distributed arrays, which work in a distributed computing environment (the work is divided among a cluster of computers, where each worker has only part of the array, possibly when the entire matrix cannot be stored in memory all at once). The underlying implementation uses ScaLAPACK.
That's a pretty tall order if you want to implement all of that yourself :)
I need to calculate the rank of a 4096x4096 sparse matrix, and I'm using C/C++ code.
I found some libraries (like Armadillo) that do it, but they're too slow (almost 5 minutes).
I've also tried two open-source alternatives to Matlab (FreeMat and Octave), but both crashed when I ran a test script.
5 minutes isn't that much, but I must get the rank of something like a million matrices, so the faster the better.
Does anyone know a fast library for rank computation?
The Eigen library supports sparse matrices, try it out.
Computing the algebraic rank is O(n^3), where n is the matrix size, so it's inherently slow. You need, e.g., to perform pivoting, which is slow and inaccurate if your matrix is not well conditioned (for n = 4096, a typical matrix is very ill-conditioned).
Now, what is the rank? It is the dimension of the image. It is very difficult to compute when n is large, and it will be spoiled by any small numerical inaccuracy in the input. For n = 4096, unless you happen to have particularly well-conditioned matrices, this will prevent you from doing anything useful with a pivoting algorithm.
The best way is in fact to fix a cutoff epsilon, compute the singular values s_1 > ... > s_n, and take as the rank the lowest integer r such that sum(s_i^2, i > r) < epsilon^2 * sum(s_i^2).
You thus need a sparse SVD routine, e.g. from there.
This may not be faster, but at the very least it will be correct.
You can ask for fewer singular values than you need to speed things up. This is a tough problem, and without info on the background and how you got these matrices, there is nothing more we can do.
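As a dense illustration of that epsilon-cutoff rule (a sketch only; numericalRank is my own helper name, and for 4096x4096 sparse matrices you would substitute a sparse or truncated SVD routine):
#include <Eigen/SVD>

int numericalRank(const Eigen::MatrixXd& A, double eps)
{
    Eigen::BDCSVD<Eigen::MatrixXd> svd(A);             // singular values only
    const Eigen::VectorXd& s = svd.singularValues();   // sorted, decreasing
    const double total = s.squaredNorm();
    double tail = 0.0;
    int r = s.size();
    // Shrink r while the discarded tail energy stays below eps^2 * total
    while (r > 0 && tail + s(r - 1) * s(r - 1) < eps * eps * total)
    {
        tail += s(r - 1) * s(r - 1);
        --r;
    }
    return r;
}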
Try the following code (the documentation is here).
It is an example of calculating the rank of the matrix A with the Eigen library:
MatrixXd A(2,2);
A << 1, 0, 1, 0;
FullPivLU<MatrixXd> luA(A);
int rank = luA.rank();
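Note that rank() decides which pivots count as nonzero using a threshold relative to the largest pivot; if your matrices are nearly singular you may want to set it explicitly (the 1e-6 below is illustrative):
luA.setThreshold(1e-6);            // pivots below this (relative) level count as zero
int thresholdedRank = luA.rank();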