I am trying to multiply two Eigen sparse matrices. The code is as follows:
typedef Eigen::SparseMatrix<float> SpMat;
SpMat mat_1;
mat_1.resize(n_e, n_e);
// ... fill mat_1; it is sparse
SpMat mat_2;
mat_2.resize(n_e, n_e);
// ... fill mat_2; it is sparse
SpMat mat_3 = (mat_1 * mat_2).pruned();
This works fine for small matrices, but for larger matrices it just runs and runs and then crashes with a seg fault. The same operation in Matlab takes a couple of seconds, so I wonder if Eigen is trying to keep a full (dense) matrix somewhere. If it is, that is really bad! I looked at the documentation, and calling .pruned() like this is what it suggests for pruning the product on the fly.
Basically, the documentation is slightly confusing, for me at least.
The way to do it is simply:
SpMat mat_3 = mat_1 * mat_2;
No dense matrix is created along the way.
Eigen rocks!
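For reference, a self-contained sketch of the sparse-sparse product; the size n_e and the fill pattern below are made up purely for illustration:

#include <Eigen/Sparse>
#include <iostream>
#include <vector>

typedef Eigen::SparseMatrix<float> SpMat;
typedef Eigen::Triplet<float> T;

int main() {
    const int n_e = 1000;

    // Build two sparse matrices from triplet lists (one entry per row here,
    // just to have something sparse to multiply).
    std::vector<T> triplets;
    for (int i = 0; i < n_e; ++i)
        triplets.push_back(T(i, (i * 7) % n_e, 1.0f));

    SpMat mat_1(n_e, n_e), mat_2(n_e, n_e);
    mat_1.setFromTriplets(triplets.begin(), triplets.end());
    mat_2.setFromTriplets(triplets.begin(), triplets.end());

    // Sparse * sparse stays sparse; no dense intermediate is formed.
    SpMat mat_3 = mat_1 * mat_2;

    // Optionally drop numerically tiny entries from the result.
    SpMat mat_4 = (mat_1 * mat_2).pruned();

    std::cout << "nonzeros: " << mat_3.nonZeros() << "\n";
    return 0;
}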
I have implemented the Strassen algorithm, following this code:
http://www.sanfoundry.com/java-program-strassen-algorithm/
It works for most matrices, including 2x2 and 4x4 matrices, but it fails for 3x3 matrices. Any ideas why, and how can I fix it?
Look at the way Strassen works: it is a divide-and-conquer algorithm. You didn't post your code, but the failure probably comes from trying to split a 3x3 matrix into four equal submatrices, which can't be done. You can pad the 3x3 matrix with zeros to get dimensions that can be split (see the sketch below), or just fall back to basic matrix multiplication for sizes like that.
Also, Strassen and other recursive matrix-multiplication algorithms need a base case that drops down to regular matrix multiplication, because Strassen is only practical for larger matrices. The threshold depends on your system, but on my laptop the matrices need to be larger than 256x256 before Strassen shows an improvement.
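Since the linked code is Java, here is only a rough C++/Eigen sketch of the two ideas; the 64x64 cutoff and the function names are arbitrary choices, not taken from that code:

#include <Eigen/Dense>
#include <iostream>

using Mat = Eigen::MatrixXf;

// Strassen for n x n matrices where n is a power of two; falls back to the
// ordinary product below a cutoff, since Strassen only pays off for large n.
Mat strassen(const Mat& A, const Mat& B) {
    const int n = static_cast<int>(A.rows());
    if (n <= 64) return A * B;                       // base case: regular multiplication

    const int h = n / 2;
    Mat A11 = A.topLeftCorner(h, h),    A12 = A.topRightCorner(h, h);
    Mat A21 = A.bottomLeftCorner(h, h), A22 = A.bottomRightCorner(h, h);
    Mat B11 = B.topLeftCorner(h, h),    B12 = B.topRightCorner(h, h);
    Mat B21 = B.bottomLeftCorner(h, h), B22 = B.bottomRightCorner(h, h);

    Mat M1 = strassen(A11 + A22, B11 + B22);
    Mat M2 = strassen(A21 + A22, B11);
    Mat M3 = strassen(A11, B12 - B22);
    Mat M4 = strassen(A22, B21 - B11);
    Mat M5 = strassen(A11 + A12, B22);
    Mat M6 = strassen(A21 - A11, B11 + B12);
    Mat M7 = strassen(A12 - A22, B21 + B22);

    Mat C(n, n);
    C.topLeftCorner(h, h)     = M1 + M4 - M5 + M7;
    C.topRightCorner(h, h)    = M3 + M5;
    C.bottomLeftCorner(h, h)  = M2 + M4;
    C.bottomRightCorner(h, h) = M1 - M2 + M3 + M6;
    return C;
}

// Pad to the next power of two so odd sizes such as 3x3 can be split.
Mat strassenPadded(const Mat& A, const Mat& B) {
    const int n = static_cast<int>(A.rows());
    int p = 1;
    while (p < n) p *= 2;
    Mat Ap = Mat::Zero(p, p), Bp = Mat::Zero(p, p);
    Ap.topLeftCorner(n, n) = A;
    Bp.topLeftCorner(n, n) = B;
    return strassen(Ap, Bp).topLeftCorner(n, n);     // padding rows/cols are all zero
}

int main() {
    Mat A = Mat::Random(3, 3), B = Mat::Random(3, 3);
    std::cout << (strassenPadded(A, B) - A * B).norm() << "\n";  // should be ~0
    return 0;
}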
In my algorithm I need the effect of a sparse matrix inverse, and I get it by solving the linear system A*x = b with a QR decomposition. In Matlab the QR-based solve runs fine.
However, when I converted the code to C++ using the Eigen library, I did not get the same answer.
In some cases, every element of the vector x is shifted by some value compared to the Matlab result; the value causing the shift is constant across all elements of the vector.
A glimpse of what I do:
Eigen::SparseMatrix<float> A(m, n);
Eigen::VectorXf b(m);
// ... fill A and b ...
A.makeCompressed();  // SparseQR expects the matrix in compressed mode
Eigen::SparseQR<Eigen::SparseMatrix<float>, Eigen::COLAMDOrdering<int>> solver;
solver.compute(A);
Eigen::VectorXf x = solver.solve(b);
x is my final vector, which contains the result of A.inverse()*b, doesn't it?
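For reference, this is roughly how I check the result (it reuses A, b, x and solver from the snippet above; the idea is just to look at the factorization status and the residual):

if (solver.info() != Eigen::Success) {
    std::cerr << "SparseQR factorization/solve failed\n";
}
// For a square, full-rank A the residual should be near zero;
// if A has more rows than columns this is the least-squares residual.
float residual = (A * x - b).norm();
std::cout << "||A*x - b|| = " << residual << std::endl;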
Additionally, I tried solving it as a full (dense) matrix, but the C++ results still differed from Matlab.
Did anyone here face a similar problem? If yes, any help or pointers are welcome.
On the other hand, if there is something wrong with my understanding, any correction is also appreciated.
Thanks.
I'm writing a program with Armadillo C++ (4.400.1)
I have a matrix that has to be sparse and complex, and I want to calculate the inverse of that matrix. Since it is sparse it could be the pseudoinverse, but I can guarantee that the matrix has a fully populated diagonal.
The Armadillo API documentation mentions the method .i() for calculating the inverse of any matrix, but sp_cx_mat does not provide such a method, and the inv() and pinv() functions apparently cannot handle the sp_cx_mat type.
sp_cx_mat Y;
/*Fill Y ensuring that the diagonal is full*/
sp_cx_mat Z = Y.i();
or
sp_cx_mat Z = inv(Y);
None of them work.
I would like to know how to compute the inverse of matrices of sp_cx_mat type.
Sparse matrix support in Armadillo is not complete, and many of the factorizations/complex operations that are available for dense matrices are not available for sparse matrices. There are a number of reasons for this, the largest being that efficient complex operations such as factorizations for sparse matrices are still very much an open research field. So there is no .i() function available for sp_cx_mat or other sp_mat types. Another reason is lack of time on the part of the sparse matrix developers (...which includes me).
Given that the inverse of a sparse matrix is generally going to be dense, you may simply be better off turning your sp_cx_mat into a cx_mat and then using the same inversion techniques that you normally would for dense matrices. Since you will end up with a dense matrix anyway, it's a fair assumption that you have enough RAM to do that.
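A minimal sketch of that route (the size n and the particular entries below are placeholders, not from your problem):

#include <armadillo>

using namespace arma;

int main() {
    const uword n = 1000;

    // Hypothetical sparse complex matrix; in practice Y comes from your problem.
    sp_cx_mat Y(n, n);
    for (uword i = 0; i < n; ++i)
        Y(i, i) = cx_double(1.0, 0.5);        // guarantee a fully populated diagonal
    Y(0, 5) = cx_double(0.3, -0.2);           // ... plus whatever off-diagonal entries you have

    cx_mat Yd(Y);                             // sparse -> dense conversion
    cx_mat Z = inv(Yd);                       // ordinary dense inverse

    // If what you ultimately need is the solution of Y * x = b,
    // solve() avoids forming the inverse explicitly:
    cx_vec b(n, fill::randu);
    cx_vec x = solve(Yd, b);

    return 0;
}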
What is the easiest and fastest way (with some library, of course) to compute k largest eigenvalues and eigenvectors for a large dense matrix in C++? I'm looking for an equivalent of MATLAB's eigs function; I've looked through Armadillo and Eigen but couldn't find one, and computing all eigenvalues takes forever in my case (I need top 10 eigenvectors for an approx. 30000x30000 dense non-symmetric real matrix).
Desperate, I've even tried to implement power iterations by myself with Armadillo's QR decomposition but ran into complex pairs of eigenvalues and gave up. :)
Did you try https://github.com/yixuan/spectra ?
It is similar to ARPACK but with a nice Eigen-like interface (it is compatible with Eigen!).
I used it for 30k x 30k matrices (PCA) and it was quite OK.
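For what it's worth, the call looks roughly like this; the sketch assumes the Spectra 1.x API (older releases spell the constructor and the SortRule/CompInfo enums differently, so check the headers you have), and the matrix size is just a stand-in:

#include <Eigen/Dense>
#include <Spectra/GenEigsSolver.h>
#include <Spectra/MatOp/DenseGenMatProd.h>
#include <iostream>

int main() {
    const int n = 1000;   // stand-in size; the same pattern works for ~30000
    const int k = 10;     // number of eigenvalues wanted

    Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n);   // dense, non-symmetric

    // Wrap the matrix in an operator that Spectra can apply repeatedly.
    Spectra::DenseGenMatProd<double> op(A);

    // ncv controls the Krylov subspace size; it must satisfy k + 2 <= ncv <= n.
    Spectra::GenEigsSolver<Spectra::DenseGenMatProd<double>> eigs(op, k, 2 * k + 1);

    eigs.init();
    eigs.compute(Spectra::SortRule::LargestMagn);

    if (eigs.info() == Spectra::CompInfo::Successful) {
        // Eigenvalues/vectors of a general real matrix may be complex.
        Eigen::VectorXcd evalues = eigs.eigenvalues();
        Eigen::MatrixXcd evecs   = eigs.eigenvectors();
        std::cout << evalues.transpose() << "\n";
    }
    return 0;
}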
AFAIK the problem of finding the first k eigenvalues of a generic matrix has no easy solution. The Matlab function eigs you mentioned is meant to work with sparse matrices.
Matlab probably uses Arnoldi/Lanczos; you might try whether that works decently in your case even though your matrix is not sparse. The reference package for Arnoldi is ARPACK, which has a C++ interface.
Here is how I get the k largest eigenvalues and eigenvectors of an NxN real-valued (float), dense, symmetric matrix A in C++ with Eigen:
#include <Eigen/Dense>
...
Eigen::MatrixXf A(N,N);
...
Eigen::SelfAdjointEigenSolver<Eigen::MatrixXf> solver(N);
solver.compute(A);
Eigen::VectorXf lambda = solver.eigenvalues().reverse();
Eigen::MatrixXf X = solver.eigenvectors().block(0,N-k,N,k).rowwise().reverse();
Note that the eigenvalues and associated eigenvectors are returned in ascending order so I reverse them to get the largest values first.
If you want eigenvalues and eigenvectors for other (non-symmetric) matrices they will, in general, be complex and you will need to use the Eigen::EigenSolver class instead.
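A minimal sketch of that non-symmetric case (the matrix here is just random for illustration):

#include <Eigen/Dense>
#include <iostream>

int main() {
    const int N = 5;
    Eigen::MatrixXf A = Eigen::MatrixXf::Random(N, N);   // non-symmetric in general

    Eigen::EigenSolver<Eigen::MatrixXf> solver(A);
    if (solver.info() != Eigen::Success) return 1;

    // Eigenvalues/vectors of a general real matrix are complex in general,
    // and they are NOT returned in any particular order, so sort/select yourself.
    Eigen::VectorXcf lambda = solver.eigenvalues();
    Eigen::MatrixXcf V      = solver.eigenvectors();

    std::cout << lambda.transpose() << "\n";
    return 0;
}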
Eigen has an Eigenvalues module that works pretty well, but I've never used it on anything quite that large.
Is there some easy and fast way to convert a sparse matrix to a dense matrix of doubles?
I ask because my SparseMatrix is not sparse any more; it became dense after some matrix products.
Another question I have: the Eigen library has excellent performance; how is that possible? I don't understand why, because there are only header files, no compiled source.
Let's declare two matrices:
SparseMatrix<double> spMat;
MatrixXd dMat;
Sparse to dense:
dMat = MatrixXd(spMat);
Dense to sparse:
spMat = dMat.sparseView();
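Putting the two together, a small self-contained round trip (the matrix contents are arbitrary):

#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <iostream>

int main() {
    Eigen::SparseMatrix<double> spMat(4, 4);
    spMat.insert(0, 0) = 1.0;
    spMat.insert(2, 3) = 2.5;

    // Sparse -> dense: missing entries become explicit zeros.
    Eigen::MatrixXd dMat = Eigen::MatrixXd(spMat);

    // Dense -> sparse: zero (or numerically negligible) entries are not stored.
    Eigen::SparseMatrix<double> back = dMat.sparseView();

    std::cout << dMat << "\n\nnonzeros after round trip: " << back.nonZeros() << "\n";
    return 0;
}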