Eigen SparseQR matrix inverse is not as precise as in Matlab - C++

In my algorithm I need the effect of a sparse matrix inverse, which I obtain by solving A*x = b with a QR decomposition. In Matlab the QR-based solve works fine.
However, when I converted the code to C++ using the Eigen library, I did not get the same answer.
In some cases, every element of the vector x is shifted by some amount compared to the Matlab result; the offset is constant across all elements of the vector.
A glimpse of what I do:
Eigen::SparseMatrix<float> A(m, n);   // filled elsewhere
Eigen::VectorXf b;                    // filled elsewhere
Eigen::SparseQR<Eigen::SparseMatrix<float>, Eigen::COLAMDOrdering<int>> solver;
solver.compute(A);
Eigen::VectorXf x = solver.solve(b);
x is my final vector, which contains the result of A.inverse()*b, doesn't it?
Additionally, I tried solving it as a full (dense) matrix, but C++ still produced different answers than Matlab.
Did anyone here face a similar problem? If yes, any help or pointers are welcome.
On the other hand, if there is something wrong with my understanding, any correction is also appreciated.
Thanks.
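For reference, here is a minimal self-contained version of the same solve in double precision (the dimensions and matrix contents below are placeholder values); since Matlab computes in double by default, this is one thing worth comparing against:
#include <Eigen/Sparse>
#include <Eigen/SparseQR>
#include <vector>
int main() {
    const int m = 4, n = 4;                   // placeholder sizes
    std::vector<Eigen::Triplet<double>> entries = {
        {0, 0, 2.0}, {1, 1, 3.0}, {2, 2, 4.0}, {3, 3, 5.0}, {0, 3, 1.0}};
    Eigen::SparseMatrix<double> A(m, n);
    A.setFromTriplets(entries.begin(), entries.end());
    A.makeCompressed();                       // SparseQR expects compressed storage
    Eigen::VectorXd b = Eigen::VectorXd::Ones(m);
    Eigen::SparseQR<Eigen::SparseMatrix<double>, Eigen::COLAMDOrdering<int>> solver;
    solver.compute(A);
    Eigen::VectorXd x = solver.solve(b);      // (least-squares) solution of A*x = b
    return 0;
}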

Related

Right function for computing a limited number of eigenvectors of a complex symmetric matrix in Armadillo

I am using the Armadillo library to manually port a piece of Matlab code. The Matlab code uses the eigs() function to find a small number (~3) of eigenvectors of a relatively large (200x200) covariance matrix R. The code looks like this:
[E,D] = eigs(R,3,"lm");
In Armadillo there are two functions, eigs_sym() and eigs_gen(); however, the former only supports real symmetric matrices and the latter requires ARPACK (I'm building the code for Android). Is there a reason eigs_sym() doesn't support complex matrices? Is there any other way to find the eigenvectors of a complex symmetric matrix?
The eigs_sym() and eigs_gen() functions (where the s in eigs stands for sparse) in Armadillo are for large sparse matrices. A "large" size in this context is roughly 5000x5000 or larger.
Your R matrix has a size of 200x200. This is very small by current standards. It would be much faster to simply use the dense eigendecomposition functions eig_sym() or eig_gen() to get all the eigenvalues/eigenvectors, and then extract a subset of them using submatrix operations like .tail_cols().
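For example, something along these lines (the matrix contents here are random placeholders, and I sort by magnitude instead of using .tail_cols() so that it also works with eig_gen()):
#include <armadillo>
int main() {
    const arma::uword n = 200, k = 3;
    arma::cx_mat R(n, n, arma::fill::randu);    // placeholder for the real covariance data
    arma::cx_vec eigval;
    arma::cx_mat eigvec;
    arma::eig_gen(eigval, eigvec, R);           // full dense eigendecomposition
    arma::uvec idx = arma::sort_index(arma::abs(eigval), "descend");
    arma::uvec top = idx.head(k);
    arma::cx_vec D = eigval(top);               // 3 largest-magnitude eigenvalues
    arma::cx_mat E = eigvec.cols(top);          // and their eigenvectors
    return 0;
}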
Have you tested constructing a 400x400 real symmetric matrix by replacing each complex value, a+bi, by a 2x2 matrix [a,b;-b,a] (alternatively using a block variant of this)?
This should construct a real symmetric matrix that in some way corresponds to the complex one.
There will be a slow-down due to the larger size, and all eigenvalues will be duplicated (which may slow down the algorithm), but it seems straightforward to test.
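A sketch of that construction (untested, and the helper name is made up):
#include <armadillo>
// Embed an n x n complex matrix into a 2n x 2n real one by replacing each
// entry a+bi with the 2x2 block [a, b; -b, a].
arma::mat complex_to_real_block(const arma::cx_mat& C) {
    const arma::uword n = C.n_rows;
    arma::mat R(2 * n, 2 * C.n_cols);
    for (arma::uword i = 0; i < n; ++i) {
        for (arma::uword j = 0; j < C.n_cols; ++j) {
            const double a = C(i, j).real();
            const double b = C(i, j).imag();
            R(2 * i,     2 * j)     =  a;
            R(2 * i,     2 * j + 1) =  b;
            R(2 * i + 1, 2 * j)     = -b;
            R(2 * i + 1, 2 * j + 1) =  a;
        }
    }
    return R;
}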

Eigen sparse matrix multiplications seem to compute full matrix

I am trying to multiply two Eigen sparse matrices. The code is as follows:
typedef Eigen::SparseMatrix<float> SpMat;
SpMat mat_1;
mat_1.resize(n_e, n_e);
// ... fill the matrix; it is sparse
SpMat mat_2;
mat_2.resize(n_e, n_e);
// ... fill the matrix; it is sparse
SpMat mat_3 = (mat_1 * mat_2).pruned();
This works fine for small matrices, but for larger matrices it just runs and runs and then crashes with a segfault. The same thing in Matlab takes a couple of seconds, so I wonder whether Eigen is keeping a full (dense) matrix somewhere along the way. If it is, that is really bad! I looked at the documentation, and calling .pruned() on the product is what it suggests for pruning the matrix on the fly.
Basically, the documentation is slightly confusing, for me at least.
The way to do it is simply:
SpMat mat_3 = mat_1 * mat_2;
No dense matrix is created along the way.
Eigen rocks!
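For completeness, a minimal self-contained sketch (n_e and the fill pattern are placeholders):
#include <Eigen/Sparse>
#include <vector>
typedef Eigen::SparseMatrix<float> SpMat;
int main() {
    const int n_e = 1000;
    std::vector<Eigen::Triplet<float>> t1, t2;
    for (int i = 0; i < n_e; ++i) {
        t1.emplace_back(i, i, 1.0f);               // e.g. a diagonal
        t2.emplace_back(i, (i + 1) % n_e, 2.0f);   // e.g. a shifted diagonal
    }
    SpMat mat_1(n_e, n_e), mat_2(n_e, n_e);
    mat_1.setFromTriplets(t1.begin(), t1.end());
    mat_2.setFromTriplets(t2.begin(), t2.end());
    SpMat mat_3 = mat_1 * mat_2;                   // the result stays sparse
    return 0;
}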

Convert Matrices in Armadillo from Sparse to Dense (sp_mat to mat)

I'm using the Armadillo C++ linear algebra library, and I'm trying to figure out how to convert an sp_mat sparse matrix object to a standard mat dense matrix.
Looking at the internal code doc, sp_mat and mat don't share a common parent class, which leads me to believe there isn't a way to cast an sp_mat as a mat. By the way, conv_to<mat>::from(sp_mat x) doesn't work.
Perhaps there's a tricky way to do this using one of the advanced mat constructors? For example, somehow create a mat of zeros and pass the locations and values of non-zero elements in the sp_mat.
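To illustrate the kind of manual route I have in mind (a sketch only):
#include <armadillo>
// Start from a dense matrix of zeros and copy over only the stored
// non-zero entries of the sparse matrix.
arma::mat sparse_to_dense(const arma::sp_mat& X) {
    arma::mat Y(X.n_rows, X.n_cols, arma::fill::zeros);
    for (arma::sp_mat::const_iterator it = X.begin(); it != X.end(); ++it) {
        Y(it.row(), it.col()) = *it;   // the iterator visits non-zeros only
    }
    return Y;
}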
Does anyone know of an efficient method to do this? Thanks in advance.
Casting works perfectly fine:
sp_mat X(2,2);
mat Y(X);
Y.print("Y:");

Largest eigenvalues (and corresponding eigenvectors) in C++

What is the easiest and fastest way (with some library, of course) to compute the k largest eigenvalues and eigenvectors of a large dense matrix in C++? I'm looking for an equivalent of MATLAB's eigs function; I've looked through Armadillo and Eigen but couldn't find one, and computing all eigenvalues takes forever in my case (I need the top 10 eigenvectors for an approx. 30000x30000 dense non-symmetric real matrix).
Desperate, I've even tried to implement power iterations by myself with Armadillo's QR decomposition but ran into complex pairs of eigenvalues and gave up. :)
Have you tried https://github.com/yixuan/spectra ?
It is similar to ARPACK but with a nice Eigen-like interface (it is compatible with Eigen!).
I used it for 30k x 30k matrices (PCA) and it was quite OK.
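For example, something along these lines (assuming the current Spectra 1.x API; the matrix here is a random placeholder):
#include <Eigen/Dense>
#include <Spectra/GenEigsSolver.h>
#include <Spectra/MatOp/DenseGenMatProd.h>
int main() {
    const int n = 1000, k = 10;
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n);   // dense, non-symmetric
    Spectra::DenseGenMatProd<double> op(A);
    Spectra::GenEigsSolver<Spectra::DenseGenMatProd<double>> eigs(op, k, 2 * k + 1);
    eigs.init();
    eigs.compute(Spectra::SortRule::LargestMagn);
    if (eigs.info() == Spectra::CompInfo::Successful) {
        Eigen::VectorXcd evalues = eigs.eigenvalues();    // may come in complex pairs
        Eigen::MatrixXcd evecs   = eigs.eigenvectors();
    }
    return 0;
}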
AFAIK the problem of finding the first k eigenvalues of a generic matrix has no easy solution. The Matlab function eigs you mentioned is supposed to work with sparse matrices.
Matlab probably uses Arnoldi/Lanczos; you might try whether it works decently in your case even if your matrix is not sparse. The reference package for Arnoldi is ARPACK, which has a C++ interface.
Here is how I get the k largest eigenvectors of a NxN real-valued (float), dense, symmetric matrix A in C++ Eigen:
#include <Eigen/Dense>
...
Eigen::MatrixXf A(N,N);
...
Eigen::SelfAdjointEigenSolver<Eigen::MatrixXf> solver(N);
solver.compute(A);
Eigen::VectorXf lambda = solver.eigenvalues().reverse();
Eigen::MatrixXf X = solver.eigenvectors().block(0,N-k,N,k).rowwise().reverse();
Note that the eigenvalues and associated eigenvectors are returned in ascending order so I reverse them to get the largest values first.
If you want eigenvalues and eigenvectors for other (non-symmetric) matrices they will, in general, be complex and you will need to use the Eigen::EigenSolver class instead.
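For example, a minimal sketch of the non-symmetric case (the matrix contents are random placeholders):
#include <Eigen/Dense>
int main() {
    const int N = 5;
    Eigen::MatrixXf A = Eigen::MatrixXf::Random(N, N);
    Eigen::EigenSolver<Eigen::MatrixXf> solver(A);
    Eigen::VectorXcf lambda = solver.eigenvalues();    // complex in general, not sorted
    Eigen::MatrixXcf V = solver.eigenvectors();        // columns are the eigenvectors
    return 0;
}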
Eigen has an Eigenvalues module that works pretty well, but I've never used it on anything quite that large.

Need example code for getting the inverse of a square matrix using GSL LU decomposition

Could someone kindly show me example C++ code for calling the GSL function gsl_linalg_LU_decomp() and related functions to obtain the inverse of a matrix? I'd very much appreciate it!
I am assuming you do not need the actual inverse, but rather need to solve a problem of the type Ax=b. If so, then there is a pretty good example here. If you are using STL containers for your data, e.g. std::vector, then you need to pass a pointer to the first data entry, like this:
std::vector<double> vec(length, val);
gsl_needs_ptr_to_double(&vec[0]);   // placeholder name for any GSL call expecting a double*
If you do need the actual inverse of A, then follow the example I linked to obtain the LU decomposition and then call the function gsl_linalg_LU_invert(). The GSL library is a GNU project and is generally well documented online, so I suggest you take some time to read through the docs a bit.
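For example, a minimal sketch of the inverse route (the 2x2 values are placeholders):
#include <gsl/gsl_linalg.h>
#include <gsl/gsl_matrix.h>
int main() {
    const size_t n = 2;
    double data[] = {4.0, 3.0,
                     6.0, 3.0};
    gsl_matrix_view A = gsl_matrix_view_array(data, n, n);
    gsl_permutation* p = gsl_permutation_alloc(n);
    int signum;
    gsl_linalg_LU_decomp(&A.matrix, p, &signum);   // A is overwritten with its LU factors
    gsl_matrix* Ainv = gsl_matrix_alloc(n, n);
    gsl_linalg_LU_invert(&A.matrix, p, Ainv);      // Ainv now holds the inverse
    gsl_matrix_free(Ainv);
    gsl_permutation_free(p);
    return 0;
}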