I am trying to convert some methods implemented with Eigen's C++ dense matrix class (MatrixXd from <Eigen/Dense>) to methods using Eigen's sparse matrix class (like SparseMatrix<double> from <Eigen/Sparse>).
Many methods can be converted directly by simply changing MatrixXd to SparseMatrix<double>. However, some cannot.
One problem I ran into is converting the following element-wise division to its sparse-matrix equivalent:
(beta.array() / beta.cwiseAbs().array()).sum()
Originally, beta is declared as MatrixXd beta. If I instead declare it as SparseMatrix<double> beta, there is no corresponding array() method that would let me do the above.
How can I still perform element-wise operations on a sparse matrix?
Is there an efficient way to convert a dense matrix to a sparse matrix and vice versa?
This is not supported, because rigorously you would compute 0/0 for every explicit zero. You can work around it if the matrix is in compressed mode; to be sure, call:
beta.makeCompressed();
then map the nonzeros as a dense array:
Map<ArrayXd> a(beta.valuePtr(), beta.nonZeros());
(a / a.abs()).sum();
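Putting the pieces together, a minimal self-contained sketch (the matrix contents are illustrative):

#include <Eigen/Sparse>
#include <iostream>

int main() {
  Eigen::SparseMatrix<double> beta(3, 3);
  beta.insert(0, 0) =  2.0;   // illustrative nonzeros
  beta.insert(1, 2) = -5.0;
  beta.insert(2, 1) =  3.0;

  beta.makeCompressed();  // ensure the values are stored contiguously

  // View the stored nonzeros as a dense array and reduce over them.
  Eigen::Map<Eigen::ArrayXd> a(beta.valuePtr(), beta.nonZeros());
  std::cout << (a / a.abs()).sum() << "\n";  // sum of signs: prints 1
  return 0;
}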
Related
I am using the Armadillo library to manually port a piece of Matlab code. The Matlab code uses the eigs() function to find a small number (~3) of eigenvectors of a relatively large (200x200) covariance matrix R. The code looks like this:
[E,D] = eigs(R,3,"lm");
In Armadillo there are two functions, eigs_sym() and eigs_gen(); however, the former only supports real symmetric matrices and the latter requires ARPACK (I'm building the code for Android). Is there a reason eigs_sym() doesn't support complex matrices? Is there any other way to find the eigenvectors of a complex symmetric matrix?
The eigs_sym() and eigs_gen() functions (where the s in eigs stands for sparse) in Armadillo are for large sparse matrices. A "large" size in this context is roughly 5000x5000 or larger.
Your R matrix has a size of 200x200. This is very small by current standards. It would be much faster to simply use the dense eigendecomposition functions eig_sym() or eig_gen() to get all the eigenvalues/eigenvectors, and then extract a subset of them using submatrix operations like .tail_cols().
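For instance, a minimal sketch, assuming R is Hermitian (as a covariance matrix is) so that eig_sym() applies; eig_sym() returns eigenvalues in ascending order, so the last three columns belong to the largest eigenvalues:

#include <armadillo>
using namespace arma;

int main() {
  // Illustrative stand-in for the 200x200 covariance matrix.
  cx_mat R(200, 200, fill::randu);
  R = R + R.t();  // .t() conjugate-transposes, so R is now Hermitian

  vec eigval;     // eigenvalues of a Hermitian matrix are real
  cx_mat eigvec;
  eig_sym(eigval, eigvec, R);  // full decomposition, ascending order

  vec    D = eigval.tail(3);       // three largest eigenvalues
  cx_mat E = eigvec.tail_cols(3);  // corresponding eigenvectors
  return 0;
}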
Have you tested constructing a 400x400 real symmetric matrix by replacing each complex value, a+bi, by a 2x2 matrix [a,b;-b,a] (alternatively using a block variant of this)?
This should construct a real symmetric matrix that in some way corresponds to the complex one.
There will be a slow-down due to the larger size, and all eigenvalues will be duplicated (which may slow down the algorithm), but it seems straightforward to test.
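For reference, a sketch of the block variant, assuming the complex matrix C = A + iB is Hermitian (A symmetric, B antisymmetric), which makes the embedded real matrix symmetric:

#include <armadillo>
using namespace arma;

// Embed C = A + iB into the real matrix M = [A, B; -B, A].
// If C is Hermitian, M is symmetric and every eigenvalue of C
// appears twice among the eigenvalues of M.
mat real_embedding(const cx_mat& C) {
  mat A = real(C);
  mat B = imag(C);
  return join_cols(join_rows(A, B),
                   join_rows(-B, A));
}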
I have a question regarding Array operations in Eigen (i.e., matrix element-wise operations).
Are such operations (+, -, *, /) parallelized in Eigen (when using OpenMP)? The documentation does not say (cf. here), but I would expect them to be parallelized, since doing so seems fairly straightforward.
Example:
MatrixXd A = MatrixXd::Zero(100,100);
MatrixXd B = MatrixXd::Ones(100,100);
MatrixXd C = A.array() + B.array(); // element-wise addition
MatrixXd D = A.array() / B.array(); // element-wise division
It would be great if they were parallelized. I have a lot of these element-wise operations in my code, and it would be cumbersome to rewrite them all with OpenMP.
Thanks in advance
The Eigen web site lists the few cases that take advantage of multithreading.
Currently, the following algorithms can make use of multi-threading:
general dense matrix - matrix products
PartialPivLU
row-major-sparse * dense vector/matrix products
ConjugateGradient with Lower|Upper as the UpLo template parameter.
BiCGSTAB with a row-major sparse matrix format.
LeastSquaresConjugateGradient
This does not exclude SIMD operations, so those will still be used.
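So if element-wise operations dominate your runtime, you would have to parallelize them yourself. A hypothetical sketch, splitting the work over columns with one OpenMP loop (compile with -fopenmp):

#include <Eigen/Dense>

// Element-wise quotient, one column per OpenMP iteration.
// This is a manual workaround, not something Eigen does internally.
Eigen::MatrixXd parallel_quotient(const Eigen::MatrixXd& A,
                                  const Eigen::MatrixXd& B) {
  Eigen::MatrixXd C(A.rows(), A.cols());
  #pragma omp parallel for
  for (int j = 0; j < A.cols(); ++j)
    C.col(j) = A.col(j).array() / B.col(j).array();  // disjoint columns, no races
  return C;
}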
In the Eigen library, I know there are visitors and reductions for the dense Eigen::Matrix class, which I can use to efficiently compute its 1-norm, inf-norm, and so on, something like this:
Eigen::MatrixXd A;
...
A.colwise().lpNorm<1>().maxCoeff();
A.rowwise().lpNorm<1>().maxCoeff();
// etc.
Now I have the sparse Eigen::SparseMatrix class instead. How can I efficiently compute these norms in this case?
You can compute the colwise/rowwise 1-norm using a product with a vector of ones:
(Eigen::RowVectorXd::Ones(A.rows()) * A.cwiseAbs()).maxCoeff();
(A.cwiseAbs() * Eigen::VectorXd::Ones(A.cols())).maxCoeff();
Check the generated assembly to see if this gets sufficiently optimized for your purpose. If not, or if you need other lpNorms, you may need to write two nested loops with sparse iterators.
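As an illustration of the iterator approach, a sketch of the matrix 1-norm (the maximum column sum of absolute values) for the default column-major storage:

#include <Eigen/Sparse>
#include <algorithm>
#include <cmath>

// Matrix 1-norm of a column-major SparseMatrix: for this storage
// order, each outer index k corresponds to one column.
double sparse_one_norm(const Eigen::SparseMatrix<double>& A) {
  double maxColSum = 0.0;
  for (int k = 0; k < A.outerSize(); ++k) {
    double colSum = 0.0;
    for (Eigen::SparseMatrix<double>::InnerIterator it(A, k); it; ++it)
      colSum += std::abs(it.value());  // visit only stored nonzeros
    maxColSum = std::max(maxColSum, colSum);
  }
  return maxColSum;
}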
I'm writing a program with Armadillo C++ (4.400.1)
I have a matrix that has to be sparse and complex, and I want to calculate its inverse. Since it is sparse, the pseudoinverse could also do, but I can guarantee that the matrix has a full diagonal.
The API documentation of Armadillo mentions the method .i() to calculate the inverse of any matrix, but sp_cx_mat has no such method, and the inv() and pinv() functions apparently cannot handle the sp_cx_mat type.
sp_cx_mat Y;
/*Fill Y ensuring that the diagonal is full*/
sp_cx_mat Z = Y.i();
or
sp_cx_mat Z = inv(Y);
Neither of them works.
I would like to know how to compute the inverse of matrices of sp_cx_mat type.
Sparse matrix support in Armadillo is not complete, and many of the factorizations/complex operations that are available for dense matrices are not available for sparse matrices. There are a number of reasons for this, the largest being that efficient complex operations such as factorizations for sparse matrices are still very much an open research field. So there is no .i() function available for sp_cx_mat or other sparse matrix types. Another reason is lack of time on the part of the sparse matrix developers (...which includes me).
Given that the inverse of a sparse matrix is generally going to be dense, you may simply be better off turning your sp_cx_mat into a cx_mat and then using the same inversion techniques you normally would for dense matrices. Since you are planning to represent the result as a dense matrix anyway, it's a fair assumption that you have enough RAM to do that.
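A minimal sketch of that round trip (the fill loop is only illustrative):

#include <armadillo>
using namespace arma;

int main() {
  sp_cx_mat Y(100, 100);
  for (uword i = 0; i < Y.n_rows; ++i)
    Y(i, i) = cx_double(1.0, 0.0);  // illustrative: ensure a full diagonal

  cx_mat Yd(Y);        // sparse to dense conversion
  cx_mat Z = inv(Yd);  // ordinary dense inverse
  return 0;
}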
Is there some easy and fast way to convert a sparse matrix to a dense matrix of doubles?
I ask because my SparseMatrix is not sparse any more; it became dense after some matrix products.
Another question I have: the Eigen library has excellent performance, yet there are only header files and no compiled source. How is this possible?
Let's declare two matrices:
SparseMatrix<double> spMat;
MatrixXd dMat;
Sparse to dense:
dMat = MatrixXd(spMat);
Dense to sparse:
spMat = dMat.sparseView();
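Putting both directions together in a minimal runnable sketch:

#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <iostream>

int main() {
  Eigen::MatrixXd dMat = Eigen::MatrixXd::Identity(3, 3);

  Eigen::SparseMatrix<double> spMat = dMat.sparseView();  // dense -> sparse
  Eigen::MatrixXd back = Eigen::MatrixXd(spMat);          // sparse -> dense

  std::cout << spMat.nonZeros() << "\n";  // prints 3: only nonzeros are stored
  return 0;
}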