Is there a distinct and effective way of finding the eigenvalues and eigenvectors of a real, symmetric, very large (say 10000x10000) sparse matrix in Eigen3? There is an eigenvalue solver for dense matrices, but that doesn't exploit the properties of the matrix, e.g. its symmetry. Furthermore, I don't want to store the matrix in dense form.
Or, alternatively, is there a better (and better documented) library to do that?
For Eigen, there is a library named Spectra. As described on its web page, Spectra is a redesign of the ARPACK library in C++.
Unlike Armadillo, suggested in another answer, Spectra does support long double and any other real floating-point type (e.g. boost::multiprecision::float128).
Here's an example of usage (the same as the version in the documentation, but adapted for experimenting with different floating-point types):
#include <Eigen/Core>
#include <SymEigsSolver.h> // Also includes <MatOp/DenseSymMatProd.h>
#include <iostream>
#include <limits>

int main()
{
    using Real = long double;
    using Matrix = Eigen::Matrix<Real, Eigen::Dynamic, Eigen::Dynamic>;

    // We are going to calculate the eigenvalues of M.
    // Evaluate into a concrete Matrix: `auto` would keep the lazy
    // random-generating expression, and M would then not be symmetric.
    const Matrix A = Matrix::Random(10, 10);
    const Matrix M = A + A.transpose();

    // Construct matrix operation object using the wrapper class DenseSymMatProd
    Spectra::DenseSymMatProd<Real> op(M);

    // Construct eigen solver object, requesting the largest three eigenvalues
    Spectra::SymEigsSolver<Real,
                           Spectra::LARGEST_ALGE,
                           Spectra::DenseSymMatProd<Real>> eigs(&op, 3, 6);

    // Initialize and compute
    eigs.init();
    const auto nconv = eigs.compute();
    std::cout << nconv << " eigenvalues converged.\n";

    // Retrieve results
    if(eigs.info() == Spectra::SUCCESSFUL)
    {
        const auto evalues = eigs.eigenvalues();
        std::cout.precision(std::numeric_limits<Real>::digits10);
        std::cout << "Eigenvalues found:\n" << evalues << '\n';
    }
}
Armadillo will do this using eigs_sym.
Note that computing all the eigenvalues is a very expensive operation whatever you do; usually what is done is to find only the k largest or smallest eigenvalues, which is what eigs_sym does.
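A minimal sketch of how that might look (assuming Armadillo is built with sparse-matrix support; the random matrix here is just a stand-in):

#include <armadillo>

int main()
{
    // Random sparse matrix, symmetrized so that eigs_sym applies
    arma::sp_mat A = arma::sprandu<arma::sp_mat>(10000, 10000, 0.001);
    arma::sp_mat S = A + A.t();

    arma::vec eigval;
    arma::mat eigvec;
    // Compute only the 5 largest-magnitude eigenvalues and eigenvectors
    arma::eigs_sym(eigval, eigvec, S, 5);

    eigval.print("eigenvalues:");
    return 0;
}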
Related
I am trying to solve large sparse linear systems using the C++ library Eigen.
The sparse matrices are taken from this page. Each system has the structure Ax = b, where A is the sparse matrix (n x n) and b is computed as A*xe, with xe a vector of dimension n containing only ones. After computing x I need to compute the relative error between xe and x. I have written some code, but I can't understand why the relative error is so high (1.49853e+08) at the end of the computation.
#include <iostream>
#include <Eigen/Dense>
#include <unsupported/Eigen/SparseExtra>
#include <Eigen/SparseCholesky>
#include <sys/time.h>
#include <sys/resource.h>

using namespace std;
using namespace Eigen;

int main()
{
    SparseMatrix<double> mat;
    loadMarket(mat, "/Users/anto/Downloads/ex15/ex15.mtx");

    // xe is the exact solution: a vector of ones
    VectorXd xe = VectorXd::Constant(mat.rows(), 1);
    // right-hand side built from the exact solution
    VectorXd b = mat * xe;

    SimplicialCholesky<Eigen::SparseMatrix<double> > chol(mat);
    VectorXd x = chol.solve(b);

    double relative_error = (x - xe).norm() / xe.norm();
    cout << relative_error << endl;
}
The matrix ex15 can be downloaded from this page. It is a symmetric, positive definite matrix. Can anyone help me to solve the problem? Thank you in advance for your help.
According to this page, ex15 is not full rank. You should check that each step went well:
SimplicialLDLT<Eigen::SparseMatrix<double> > chol(mat);
if(chol.info() != Eigen::Success)
    return 1;
VectorXd x = chol.solve(b);
if(chol.info() != Eigen::Success)
    return 1;
and then check that you got one solution (if the matrix is not full rank but at least one solution exists, then there exists a whole subspace of solutions):
cout << (mat*x-b).norm()/b.norm() << "\n";
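Not part of the original answer, but if the matrix really is rank-deficient, a rank-revealing least-squares solver may be worth trying; here is a minimal sketch assuming Eigen's SparseQR:

#include <Eigen/Sparse>
#include <Eigen/SparseQR>

// Least-squares solve for a possibly rank-deficient sparse A
Eigen::VectorXd leastSquaresSolve(Eigen::SparseMatrix<double>& A,
                                  const Eigen::VectorXd& b)
{
    A.makeCompressed(); // SparseQR requires compressed storage
    Eigen::SparseQR<Eigen::SparseMatrix<double>,
                    Eigen::COLAMDOrdering<int>> qr(A);
    return qr.solve(b);
}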
In finite element analyses it is quite common to apply prescribed conditions to a big sparse matrix and get a reduced one. This can be achieved easily in MATLAB, SciPy, and Julia; for instance, in MATLAB:
a = sprand(10000,10000,0.2); % create a random sparse matrix; 20% fill
tic; c = a(1:2:4000,2:3:5000); toc % slice the matrix to get a reduced one
Assuming that one has a similar sparse matrix in Eigen, what is the most efficient way to slice an Eigen sparse matrix? I don't care whether I get a copy or a view, but the methodology needs to be able to cope with non-contiguous slicing. The latter requirement makes Eigen's block operations useless in this regard.
I can think of two methodologies that I have tested:
Iterate over the columns and rows using for loops and assign the values to a second sparse matrix (I know this is a truly bad idea).
Create a dummy sparse matrix D of zeros and ones and pre- and post-multiply it with the actual matrix, D*A*D.transpose()
I always use setFromTriplets to create sparse matrices in Eigen, and I have been happy with the solvers and with assembling sparse matrices. However, it seems that this slicing is the bottleneck in my code at the moment.
The timing of MATLAB vs Eigen (using -O3 -DNDEBUG -march=native) is
MATLAB: 0.016 secs
EIGEN LOOP INDEXING: 193 secs
EIGEN PRE-POST MUL: 13.7 secs
The other methodology, which I do not know how to go about, is to manipulate the [I,J,V] triplets directly via outerIndexPtr, innerIndexPtr, and valuePtr.
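For illustration, here is a minimal sketch of how such a direct extraction might look, walking only the stored non-zeros with Eigen's InnerIterator rather than touching the raw index arrays (the slice helper and its index lists are hypothetical):

#include <Eigen/Sparse>
#include <vector>

// Extract the rows/cols given by (possibly non-contiguous) index lists,
// visiting each stored non-zero exactly once.
Eigen::SparseMatrix<double, Eigen::RowMajor>
slice(const Eigen::SparseMatrix<double, Eigen::RowMajor>& A,
      const std::vector<int>& rows, const std::vector<int>& cols)
{
    // Old-index -> new-index maps; -1 means "dropped"
    std::vector<int> rowMap(A.rows(), -1), colMap(A.cols(), -1);
    for (std::size_t k = 0; k < rows.size(); ++k) rowMap[rows[k]] = int(k);
    for (std::size_t k = 0; k < cols.size(); ++k) colMap[cols[k]] = int(k);

    std::vector<Eigen::Triplet<double>> triplets;
    for (int i = 0; i < A.outerSize(); ++i) {
        if (rowMap[i] < 0) continue; // row not selected
        for (Eigen::SparseMatrix<double, Eigen::RowMajor>::InnerIterator it(A, i); it; ++it)
            if (colMap[it.col()] >= 0) // column selected
                triplets.emplace_back(rowMap[i], colMap[it.col()], it.value());
    }
    Eigen::SparseMatrix<double, Eigen::RowMajor> B(rows.size(), cols.size());
    B.setFromTriplets(triplets.begin(), triplets.end());
    return B;
}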
Here is a proof-of-concept snippet for the timings above:
#include <Eigen/Core>
#include <Eigen/Sparse>
#include <random>
#include <vector>

template<typename T>
using spmatrix = Eigen::SparseMatrix<T, Eigen::RowMajor>;

spmatrix<double> sprand(int rows, int cols, double sparsity) {
    std::default_random_engine gen;
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    int sparsity_ = sparsity * 100;
    typedef Eigen::Triplet<double> T;
    std::vector<T> tripletList;
    tripletList.reserve(rows * cols);
    int counter = 0;
    // fill every sparsity_-th entry with a random value
    for (int i = 0; i < rows; ++i) {
        for (int j = 0; j < cols; ++j) {
            if (counter % sparsity_ == 0) {
                auto v_ij = dist(gen);
                tripletList.push_back(T(i, j, v_ij));
            }
            counter++;
        }
    }
    spmatrix<double> mat(rows, cols);
    mat.setFromTriplets(tripletList.begin(), tripletList.end());
    return mat;
}

int main() {
    int m = 1000, n = 10000;
    auto a = sprand(n, n, 0.05);
    auto b = sprand(m, n, 0.1);
    spmatrix<double> c;
    // this is efficient but definitely not the right way to do this
    // c = b*a*b.transpose(); // uncomment to check, much slower than block operation
    c = a.block(0, 0, 1000, 1000); // very fast, faster than MATLAB (I believe this is just a view)
    return 0;
}
Any pointers in this direction would be useful.
I am working with the Eigen 3.2 C++ matrix library. I have a problem that requires extracting the phase (or angle) information from a matrix of type Eigen::MatrixXcd. The problem involves a matrix of complex numbers that is the result of calculations in my code. I have the result M of dimension nsamp by nsamp, where nsamp is an integer of size 256 (for example).
Hence, MatrixXcd M(nsamp, nsamp);
Now I want to extract the phase (or angle information) from M. I know that the complex-analysis method of doing this is:
MatrixXcd ratio = M.imag().array().sin()/M.real().array().cos();
MatrixXd phase = M.real().array().atan();
However, there is no atan method in the Eigen library. So, as a workaround I am using:
MatrixXcd cosPhase = M.real().array().cos()/M.array().abs();
MatrixXd phase = M.real().array().acos();
The math is solid, but I am getting incorrect answers. I have also looked at the imaginary component, i.e.
MatrixXd phase = M.imag().array().acos();
and get answers that are "more correct," which does not make sense.
Has anyone in the community dealt with this before and what is your solution?
Well, for anyone seeing this: I figured out the answer to my own question. To calculate the phase we need to use 2*atan(imag/(sqrt(real^2+imag^2)+real)), i.e. the tangent half-angle form of atan2(imag, real).
Here is some simple test code using the Armadillo library:
#include <iostream>
#include <armadillo>

using namespace std;
using namespace arma;

int main(int argc, const char * argv[]) {
    // calculate the phase content resulting from a complex field
    // matrix (analogous to an Eigen::MatrixXcd)
    double pi = acos(-1);
    mat phase(2,2);
    phase << pi/2 << pi/2 << endr
          << pi/2 << pi/2 << endr;

    // form the complex field
    cx_mat complexField = cx_mat(cos(phase), sin(phase));

    // extract the real and imaginary components
    mat REAL = real(complexField);
    mat IMAG = imag(complexField);

    // calculate the phase using the half-angle identity
    mat phaseResult = 2*atan(IMAG/(sqrt(REAL%REAL+IMAG%IMAG)+REAL));
    cout << phaseResult << "\n" << endl;
    return 0;
}
Very likely the function did not exist at the time the question was asked, but the simplest solution is to call the arg() function.
Eigen::MatrixXcd mat = ...;
Eigen::MatrixXd phase = mat.array().arg(); // needs .array() since this works per element
If you ever need to calculate this manually, it is better to use atan2(imag, real) instead of that complicated 2*atan(...) formula.
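A minimal sketch of that manual route (assuming plain Eigen; binaryExpr applies std::atan2 element-wise, and the result should match arg()):

#include <Eigen/Core>
#include <cmath>
#include <iostream>

int main()
{
    Eigen::MatrixXcd mat = Eigen::MatrixXcd::Random(4, 4);
    // element-wise atan2(imag, real)
    Eigen::ArrayXXd phase = mat.imag().array().binaryExpr(
        mat.real().array(),
        [](double y, double x) { return std::atan2(y, x); });
    std::cout << phase << std::endl;
    return 0;
}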
I have a question about the Eigen library in C++. I want to calculate the inverse of a sparse matrix.
With a dense matrix in Eigen, I can use the .inverse() operation.
But for sparse matrices I cannot find an inverse operation anywhere. Does anyone know how to calculate the inverse of a sparse matrix? Please help me.
You cannot do it directly, but you can always calculate it using one of the sparse solvers. The idea is to solve A*X = I, where I is the identity matrix. If there is a solution, X will be your inverse matrix.
The Eigen documentation has a page about sparse solvers and how to use them, but the basic steps are as follows:
SolverClassName<SparseMatrix<double> > solver;
solver.compute(A);
SparseMatrix<double> I(n,n);
I.setIdentity();
// evaluate into a concrete matrix; `auto` here would only keep a lazy solve expression
SparseMatrix<double> A_inv = solver.solve(I);
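As a concrete instance of the placeholder above, here is a minimal sketch assuming SparseLU and a small hypothetical matrix:

#include <Eigen/Sparse>
#include <Eigen/SparseLU>
#include <iostream>

int main()
{
    const int n = 3;
    Eigen::SparseMatrix<double> A(n, n);
    A.insert(0, 0) = 4; A.insert(0, 1) = 1;
    A.insert(1, 0) = 1; A.insert(1, 1) = 2;
    A.insert(2, 2) = 5;
    A.makeCompressed();

    Eigen::SparseLU<Eigen::SparseMatrix<double>> solver;
    solver.compute(A);
    if (solver.info() != Eigen::Success) return 1;

    Eigen::SparseMatrix<double> I(n, n);
    I.setIdentity();
    Eigen::SparseMatrix<double> A_inv = solver.solve(I);

    std::cout << A_inv << std::endl;
    return 0;
}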
A sparse matrix does not necessarily have a sparse inverse, so a built-in sparse .inverse() would not be meaningful.
That's why the method is not available.
A small extension on #Soheib's and #MatthiasB's answers: if you're using Eigen::SparseMatrix<float>, it's better to use SparseLU rather than SimplicialLLT or SimplicialLDLT; the latter produced wrong answers for me on float matrices.
Be warned that the inverse of a sparse matrix is not necessarily sparse, so if you're working with large matrices (which is likely, if you're using sparse representations) then this is going to be expensive. Think carefully about whether you really need the actual matrix inverse. If you're going to use the matrix inverse to solve a system of equations, then you don't need to actually compute the inverse and multiply it out; use the method typically named solve and supply the right-hand side of the equation. If you need the inverse of the Fisher matrix for covariances, try to approximate instead.
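For instance, a minimal sketch of that advice (assuming a symmetric positive-definite A; SimplicialLDLT is just one of Eigen's sparse solvers):

#include <Eigen/Sparse>
#include <Eigen/SparseCholesky>

// Solve A x = b directly instead of forming A^{-1} and multiplying
Eigen::VectorXd solveSystem(const Eigen::SparseMatrix<double>& A,
                            const Eigen::VectorXd& b)
{
    Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> solver(A);
    return solver.solve(b);
}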
Here is an example of computing the inverse of a sparse complex matrix.
I used the SimplicialLLT class; you can find the other solver classes here:
http://eigen.tuxfamily.org/dox-devel/group__TopicSparseSystems.html
That page can help you pick the proper class for your work (speed, accuracy, and the dimensions of your matrix).
#include <iostream>
#include <complex>
#include <Eigen/Dense>
#include <Eigen/Sparse>

using namespace std;
using namespace Eigen;

int main()
{
    SparseMatrix< complex<float> > A(4,4);
    // fill the diagonal (the inner loop overwrites A(i,i) repeatedly; its final value is i+3)
    for (int i=0; i<4; i++) {
        for (int j=0; j<4; j++) {
            A.coeffRef(i, i) = i+j;
        }
    }
    A.insert(2,1) = {2,1};
    A.insert(3,0) = {0,0};
    A.insert(3,1) = {2.5,1};
    A.insert(1,3) = {2.5,1};

    SimplicialLLT<SparseMatrix<complex<float> > > solverA;
    A.makeCompressed();
    solverA.compute(A);
    if(solverA.info() != Success) {
        cout << "Oh: Very bad" << endl;
    }

    // the identity right-hand side must have the same scalar type as A
    SparseMatrix< complex<float> > eye(4,4);
    eye.setIdentity();

    SparseMatrix< complex<float> > inv_A = solverA.solve(eye);
    cout << "A:\n" << A << endl;
    cout << "inv_A\n" << inv_A << endl;
}
I have a ~3000x3000 covariance-like matrix on which I compute the eigenvalue-eigenvector decomposition (it's an OpenCV matrix, and I use cv::eigen() to get the job done).
However, I actually only need the, say, first 30 eigenvalues/vectors; I don't care about the rest. Theoretically, this should speed up the computation significantly, right? I mean, that's 2970 fewer eigenvectors to compute.
Which C++ library will allow me to do that? Please note that OpenCV's eigen() method does have parameters for that, but the documentation says they are ignored, and I tested it myself: they are indeed ignored :D
UPDATE:
I managed to do it with ARPACK. I got it to compile for Windows and use it. The results look promising; an illustration can be seen in this toy example:
#include "ardsmat.h"
#include "ardssym.h"
int n = 3; // Dimension of the problem.
double* EigVal = NULL; // Eigenvalues.
double* EigVec = NULL; // Eigenvectors stored sequentially.
int lowerHalfElementCount = (n*n+n) / 2;
//whole matrix:
/*
2 3 8
3 9 -7
8 -7 19
*/
double* lower = new double[lowerHalfElementCount]; //lower half of the matrix
//to be filled with COLUMN major (i.e. one column after the other, always starting from the diagonal element)
lower[0] = 2; lower[1] = 3; lower[2] = 8; lower[3] = 9; lower[4] = -7; lower[5] = 19;
//params: dimensions (i.e. width/height), array with values of the lower or upper half (sequentially, row major), 'L' or 'U' for upper or lower
ARdsSymMatrix<double> mat(n, lower, 'L');
// Defining the eigenvalue problem.
int noOfEigVecValues = 2;
//int maxIterations = 50000000;
//ARluSymStdEig<double> dprob(noOfEigVecValues, mat, "LM", 0, 0.5, maxIterations);
ARluSymStdEig<double> dprob(noOfEigVecValues, mat);
// Finding eigenvalues and eigenvectors.
int converged = dprob.EigenValVectors(EigVec, EigVal);
for (int eigValIdx = 0; eigValIdx < noOfEigVecValues; eigValIdx++) {
std::cout << "Eigenvalue: " << EigVal[eigValIdx] << "\nEigenvector: ";
for (int i = 0; i < n; i++) {
int idx = n*eigValIdx+i;
std::cout << EigVec[idx] << " ";
}
std::cout << std::endl;
}
The results are:
9.4298, 24.24059
for the eigenvalues, and
-0.523207, -0.83446237, -0.17299346
0.273269, -0.356554, 0.893416
for the 2 eigenvectors respectively (one eigenvector per row).
The code fails when asked for 3 eigenvectors (it can only find 1-2 in this case; an assert() makes sure of that), but that's not a problem.
In this article, Simon Funk shows a simple, effective way to estimate a singular value decomposition (SVD) of a very large matrix. In his case, the matrix is sparse, with dimensions: 17,000 x 500,000.
Now, this page describes how the eigenvalue decomposition is closely related to the SVD. Thus, you might benefit from considering a modified version of Simon Funk's approach, especially if your matrix is sparse. Furthermore, your matrix is not only square but also symmetric (if that is what you mean by covariance-like), which likely leads to additional simplification: for a symmetric positive semi-definite matrix, the SVD coincides with the eigendecomposition.
... Just an idea :)
It seems that Spectra will do the job with good performance.
Here is an example from their documentation that computes the largest three eigenvalues of a dense symmetric matrix M (like your covariance matrix):
#include <Eigen/Core>
#include <Spectra/SymEigsSolver.h>
// <Spectra/MatOp/DenseSymMatProd.h> is implicitly included
#include <iostream>

using namespace Spectra;

int main()
{
    // We are going to calculate the eigenvalues of M
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(10, 10);
    Eigen::MatrixXd M = A + A.transpose();

    // Construct matrix operation object using the wrapper class DenseSymMatProd
    DenseSymMatProd<double> op(M);

    // Construct eigen solver object, requesting the largest three eigenvalues
    SymEigsSolver< double, LARGEST_ALGE, DenseSymMatProd<double> > eigs(&op, 3, 6);

    // Initialize and compute
    eigs.init();
    int nconv = eigs.compute();

    // Retrieve results
    Eigen::VectorXd evalues;
    if(eigs.info() == SUCCESSFUL)
        evalues = eigs.eigenvalues();

    std::cout << "Eigenvalues found:\n" << evalues << std::endl;

    return 0;
}