Large sparse linear system solving with SimplicialCholesky (Eigen, C++)

I am trying to solve large sparse linear systems using the C++ library Eigen.
The sparse matrices are taken from this page. Each system has the structure Ax = b, where A is the sparse matrix (n x n) and b is computed as A*xe, with xe a vector of dimension n containing only ones. After computing x I need to compute the relative error between xe and x. I have written some code, but I can't understand why the relative error is so high (1.49853e+08) at the end of the computation.
#include <iostream>
#include <Eigen/Dense>
#include <unsupported/Eigen/SparseExtra>
#include <Eigen/SparseCholesky>
#include <sys/time.h>
#include <sys/resource.h>
using namespace std;
using namespace Eigen;

int main()
{
    SparseMatrix<double> mat;
    loadMarket(mat, "/Users/anto/Downloads/ex15/ex15.mtx");
    VectorXd xe = VectorXd::Constant(mat.rows(), 1);
    VectorXd b = mat*xe;
    SimplicialCholesky<Eigen::SparseMatrix<double> > chol(mat);
    VectorXd x = chol.solve(b);
    double relative_error = (x-xe).norm()/(xe).norm();
    cout << relative_error << endl;
}
The matrix ex15 can be downloaded from this page. It is a symmetric, positive definite matrix. Can anyone help me to solve the problem? Thank you in advance for your help.

According to this page, ex15 is not full rank. You should check that each step went well:
SimplicialLDLT<Eigen::SparseMatrix<double> > chol(mat);
if(chol.info()!=Eigen::Success)
    return;
VectorXd x = chol.solve(b);
if(chol.info()!=Eigen::Success)
    return;
and then check that you actually got a solution (if the matrix is not full rank and at least one solution exists, then there is a whole subspace of solutions):
cout << (mat*x-b).norm()/b.norm() << "\n";
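Putting those checks together, a minimal sketch of the whole workflow could look like this (same ex15.mtx path as in the question; the exact messages and return codes are just placeholders):

#include <iostream>
#include <Eigen/SparseCholesky>
#include <unsupported/Eigen/SparseExtra>

int main()
{
    Eigen::SparseMatrix<double> mat;
    if (!Eigen::loadMarket(mat, "/Users/anto/Downloads/ex15/ex15.mtx"))
    {
        std::cerr << "could not load the matrix\n";
        return 1;
    }

    Eigen::VectorXd xe = Eigen::VectorXd::Constant(mat.rows(), 1);
    Eigen::VectorXd b = mat * xe;

    Eigen::SimplicialLDLT<Eigen::SparseMatrix<double> > chol(mat);
    if (chol.info() != Eigen::Success)
    {
        std::cerr << "factorization failed\n";
        return 1;
    }

    Eigen::VectorXd x = chol.solve(b);
    if (chol.info() != Eigen::Success)
    {
        std::cerr << "solve failed\n";
        return 1;
    }

    // The residual can be small even though x differs from xe,
    // because a rank-deficient system has many solutions.
    std::cout << "relative residual: " << (mat * x - b).norm() / b.norm() << "\n";
    std::cout << "error vs xe:       " << (x - xe).norm() / xe.norm() << "\n";
}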

Related

C++ Spectra with RowMajor sparse matrix

I'm trying to use the Spectra 3.5 Library on my Linux machine, and the SparseGenMatProd wrapper for the Matrix-Vector multiplication only seems to work when the sparse matrix is in ColMajor format. Is this normal behavior and if so, how can I fix it to take RowMajor format? I've included a basic example where the output is "Segmentation fault (core dumped)". I've gone through several other posts and the documentation, but can't seem to find an answer.
#include <Eigen/Core>
#include <Eigen/SparseCore>
#include <GenEigsSolver.h>
#include <MatOp/SparseGenMatProd.h>
#include <iostream>
using namespace Spectra;
int main()
{
    // A band matrix with 1 on the main diagonal, 2 on the below-main subdiagonal,
    // and 3 on the above-main subdiagonal
    const int n = 10;
    Eigen::SparseMatrix<double, Eigen::RowMajor> M(n, n);
    M.reserve(Eigen::VectorXi::Constant(n, 3));
    for(int i = 0; i < n; i++)
    {
        M.insert(i, i) = 1.0;
        if(i > 0)
            M.insert(i - 1, i) = 3.0;
        if(i < n - 1)
            M.insert(i + 1, i) = 2.0;
    }
    // Construct matrix operation object using the wrapper class SparseGenMatProd
    SparseGenMatProd<double> op(M);
    // Construct eigen solver object, requesting the largest three eigenvalues
    GenEigsSolver< double, LARGEST_MAGN, SparseGenMatProd<double> > eigs(&op, 3, 6);
    // Initialize and compute
    eigs.init();
    int nconv = eigs.compute();
    // Retrieve results
    Eigen::VectorXcd evalues;
    if(eigs.info() == SUCCESSFUL)
        evalues = eigs.eigenvalues();
    std::cout << "Eigenvalues found:\n" << evalues << std::endl;
    return 0;
}
If you change the declaration of M to:
Eigen::SparseMatrix<double, Eigen::ColMajor> M(n, n);
it will work as expected.
Currently I'm working around this and converting my matrices to ColMajor, but I'd like to understand what's going on. Any help is much appreciated.
The API of SparseGenMatProd seems to be quite misleading. It looks like you have to specify that you are dealing with row-major matrices through the second template parameter:
SparseGenMatProd<double, RowMajor> op(M);
otherwise M is implicitly converted to a temporary column-major matrix, which is then stored by const reference inside op; that temporary dies right after that line of code, leaving op with a dangling reference.
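For completeness, here is a sketch of the question's example with just the wrapper and solver types adjusted, assuming the same Spectra 0.x-era API that the question uses (the GenEigsSolver template argument has to match the new wrapper type as well):

#include <Eigen/Core>
#include <Eigen/SparseCore>
#include <GenEigsSolver.h>
#include <MatOp/SparseGenMatProd.h>
#include <iostream>
using namespace Spectra;

int main()
{
    const int n = 10;
    Eigen::SparseMatrix<double, Eigen::RowMajor> M(n, n);
    M.reserve(Eigen::VectorXi::Constant(n, 3));
    for(int i = 0; i < n; i++)
    {
        M.insert(i, i) = 1.0;
        if(i > 0)
            M.insert(i - 1, i) = 3.0;
        if(i < n - 1)
            M.insert(i + 1, i) = 2.0;
    }

    // Tell the wrapper that M is row-major via the second template parameter,
    // so it keeps a reference to M instead of a dangling column-major temporary.
    SparseGenMatProd<double, Eigen::RowMajor> op(M);
    GenEigsSolver<double, LARGEST_MAGN, SparseGenMatProd<double, Eigen::RowMajor> > eigs(&op, 3, 6);

    eigs.init();
    eigs.compute();
    if(eigs.info() == SUCCESSFUL)
        std::cout << "Eigenvalues found:\n" << eigs.eigenvalues() << std::endl;
    return 0;
}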

Matrix division using C++ Boost

Right now I want to use C++ Boost to solve a matrix equation: A*P = X, i.e. P = A\X. I have matrix A and matrix X, so I need to do P = A\X to get matrix P. That's a matrix division problem, right?
My C++ code is
#include "stdafx.h"
#include <boost\mat2cpp-20130725/mat2cpp.hpp>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/matrix_proxy.hpp>
#include <boost/numeric/ublas/io.hpp>
using namespace boost::numeric::ublas;
using namespace std;
int main() {
    using namespace mat2cpp;
    matrix<double> x(2, 2); // initialize a matrix
    x(0, 0) = 1; // assign value
    x(1, 1) = 1;
    matrix<double> y(2, 1);
    y(0, 0) = 1;
    y(1, 0) = 1;
    size_t rank;
    matrix<double> z = matrix_div(x, y, rank);
}
But it produces errors; please help me! Thanks!
First of all, there is no such thing as matrix division. If you have the equation A*P = X and you want to find P, then the solution is inv(A)*A*P = inv(A)*X, where inv(A) is the inverse of the matrix A. Because inv(A)*A is the identity matrix, we can conclude that P = inv(A)*X.
Now your problem is to calculate the inverse of the matrix A. There are several ways to do this; my suggestion would be to use LU factorization.
Honestly, I don't know whether the Boost library has such a thing as mat2cpp. If you want to use Boost, I would recommend using boost/numeric/ublas/matrix.hpp.
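If you want to stay within plain Boost uBLAS (without mat2cpp), a minimal sketch using LU factorization from <boost/numeric/ublas/lu.hpp> could look like this; it solves A*P = X via lu_factorize/lu_substitute rather than forming inv(A) explicitly:

#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/lu.hpp>
#include <boost/numeric/ublas/io.hpp>
#include <iostream>

namespace ublas = boost::numeric::ublas;

int main()
{
    // Solve A * P = X for P, with A 2x2 and X 2x1.
    ublas::matrix<double> A(2, 2), X(2, 1);
    A(0, 0) = 1; A(0, 1) = 0;
    A(1, 0) = 0; A(1, 1) = 1;
    X(0, 0) = 1;
    X(1, 0) = 1;

    // LU factorization with partial pivoting (A is overwritten by its factors).
    ublas::permutation_matrix<std::size_t> pm(A.size1());
    if (ublas::lu_factorize(A, pm) != 0)
    {
        std::cerr << "A is singular\n";
        return 1;
    }

    // Back substitution: X is overwritten with the solution P.
    ublas::lu_substitute(A, pm, X);
    std::cout << X << std::endl;   // P
}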

Sparse eigenvalues using eigen3/sparse

Is there a dedicated and efficient way of finding eigenvalues and eigenvectors of a real, symmetric, very large (say 10000x10000) sparse matrix in Eigen3? There is an eigenvalue solver for dense matrices, but that doesn't make use of the properties of the matrix, e.g. its symmetry. Furthermore, I don't want to store the matrix as a dense matrix.
Or (alternatively) is there a better (and better documented) library to do that?
For Eigen, there's a library named Spectra. As described on its web page, Spectra is a redesign of the ARPACK library in C++.
Unlike Armadillo, suggested in another answer, Spectra does support long double and any other real floating-point type (e.g. boost::multiprecision::float128).
Here's an example of usage (same as the version in documentation, but adapted for experiments with different floating-point types):
#include <Eigen/Core>
#include <SymEigsSolver.h> // Also includes <MatOp/DenseSymMatProd.h>
#include <iostream>
#include <limits>
int main()
{
    using Real = long double;
    using Matrix = Eigen::Matrix<Real, Eigen::Dynamic, Eigen::Dynamic>;

    // We are going to calculate the eigenvalues of M.
    // A must be evaluated into a concrete Matrix here: keeping the Random()
    // expression behind `auto` would re-generate random numbers on every access,
    // and M would not be symmetric.
    const Matrix A = Matrix::Random(10, 10);
    const Matrix M = A + A.transpose();

    // Construct matrix operation object using the wrapper class DenseSymMatProd
    Spectra::DenseSymMatProd<Real> op(M);

    // Construct eigen solver object, requesting the largest three eigenvalues
    Spectra::SymEigsSolver<Real,
                           Spectra::LARGEST_ALGE,
                           Spectra::DenseSymMatProd<Real>> eigs(&op, 3, 6);

    // Initialize and compute
    eigs.init();
    const auto nconv = eigs.compute();
    std::cout << nconv << " eigenvalues converged.\n";

    // Retrieve results
    if(eigs.info() == Spectra::SUCCESSFUL)
    {
        const auto evalues = eigs.eigenvalues();
        std::cout.precision(std::numeric_limits<Real>::digits10);
        std::cout << "Eigenvalues found:\n" << evalues << '\n';
    }
}
Armadillo will do this using eigs_sym.
Note that computing all the eigenvalues is a very expensive operation whatever you do; usually what is done is to find only the k largest or smallest eigenvalues (which is what this does).
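A minimal sketch of that Armadillo route (the matrix here is just a random symmetric example; eigs_sym works directly on sp_mat, so nothing is densified):

#include <armadillo>
#include <iostream>

int main()
{
    // Random sparse matrix, symmetrized.
    arma::sp_mat A = arma::sprandu<arma::sp_mat>(10000, 10000, 0.001);
    arma::sp_mat S = A + A.t();

    // The 5 eigenvalues of largest magnitude, and their eigenvectors.
    arma::vec eigval;
    arma::mat eigvec;
    if (arma::eigs_sym(eigval, eigvec, S, 5))
        eigval.print("eigenvalues:");
    else
        std::cerr << "eigs_sym did not converge\n";
}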

angle data from eigen MatrixXcd

I am working with the Eigen 3.2 C++ matrix library. I have a problem that requires extracting the phase (or angle) information from a matrix of type Eigen::MatrixXcd. The problem involves a matrix of complex numbers that is the result of calculations in my code. I have the result M of dimension nsamp by nsamp, where nsamp is an integer, e.g. 256.
Hence, MatrixXcd M(nsamp, nsamp);
Now I want to extract the phase (or angle information) from M. I know that the complex analysis method of doing this is,
MatrixXcd ratio = M.imag().array().sin()/M.real().array().cos();
MatrixXd phase = M.real().array().atan();
However, there is no atan method in the Eigen library. So, as a workaround I am using
MatrixXcd cosPhase = M.real().array().cos()/M.array().abs();
MatrixXd phase = M.real().array().acos();
The math is solid, but I am getting incorrect answers. I have looked at the imaginary component i.e.
MatrixXd phase = M.imag().array().acos();
and get answers that are "more correct," which does not make sense.
Has anyone in the community dealt with this before and what is your solution?
Many Thanks,
Robert
Well, for anyone seeing this: I figured out the answer to my own question. To calculate the phase we need to use the identity phase = 2*atan(imag/(sqrt(real^2+imag^2)+real)).
Here is some simple test code using the Armadillo library:
#include <iostream>
#include <armadillo>
using namespace std;
using namespace arma;
int main(int argc, const char * argv[]) {
    // calculate the phase content resulting from a complex field
    // matrix of type Eigen::MatrixXcd
    double pi = acos(-1);
    mat phase(2, 2);
    phase << pi/2 << pi/2 << endr
          << pi/2 << pi/2 << endr;
    // form the complex field
    cx_mat complexField = cx_mat(cos(phase), sin(phase));
    // calculate the phase using the atan2-style identity
    mat REAL = real(complexField);
    mat IMAG = imag(complexField);
    // calculate the phase using real component of complexField
    mat phaseResult = 2*atan(IMAG/(sqrt(REAL%REAL+IMAG%IMAG)+REAL));
    cout << phaseResult << "\n" << endl;
    return 0;
}
Very likely the function did not exist at the time the question was asked, but the simplest solution is to call the arg() function.
Eigen::MatrixXcd mat = ...;
Eigen::MatrixXd phase = mat.array().arg(); // needs .array() since this works per element
If you ever need to calculate this manually, better use atan2(imag, real) instead of that complicated 2*atan(...) formula.
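For instance (a sketch assuming Eigen 3.3 or newer, where binaryExpr accepts a lambda), the manual element-wise atan2 version could be written as:

#include <Eigen/Dense>
#include <cmath>
#include <iostream>

int main()
{
    Eigen::MatrixXcd M = Eigen::MatrixXcd::Random(3, 3);

    // Element-wise atan2(imag, real); gives the same result as M.array().arg().
    Eigen::MatrixXd phase = M.imag().binaryExpr(
        M.real(), [](double im, double re) { return std::atan2(im, re); });

    std::cout << phase << std::endl;
}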

How can I calculate inverse of sparse matrix in Eigen library

I have a question about the Eigen library in C++. I want to calculate the inverse of a sparse matrix.
With dense matrices in Eigen, I can use the .inverse() operation to compute the inverse.
But for sparse matrices, I cannot find an inverse operation anywhere. Does anyone know how to calculate the inverse of a sparse matrix? Please help me.
You cannot do it directly, but you can always calculate it, using one of the sparse solvers. The idea is to solve A*X=I, where I is the identity matrix. If there is a solution, X will be your inverse matrix.
The Eigen documentation has a page about sparse solvers and how to use them, but the basic steps are as follows:
SolverClassName<SparseMatrix<double> > solver;
solver.compute(A);
SparseMatrix<double> I(n,n);
I.setIdentity();
SparseMatrix<double> A_inv = solver.solve(I);  // evaluate into a concrete matrix (avoid auto here)
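For concreteness, a sketch with one specific solver plugged in (SparseLU here, chosen arbitrarily; any solver from Eigen's sparse-solver documentation page that matches your matrix type works the same way):

#include <Eigen/Sparse>
#include <stdexcept>

// Computes inv(A) by solving A * X = I.
// A is assumed to be in compressed mode (call makeCompressed() first if needed).
// Note: the result is generally not sparse, even if A is.
Eigen::SparseMatrix<double> sparseInverse(const Eigen::SparseMatrix<double>& A)
{
    Eigen::SparseLU<Eigen::SparseMatrix<double> > solver;
    solver.compute(A);
    if (solver.info() != Eigen::Success)
        throw std::runtime_error("decomposition failed");

    Eigen::SparseMatrix<double> I(A.rows(), A.cols());
    I.setIdentity();

    Eigen::SparseMatrix<double> A_inv = solver.solve(I);
    if (solver.info() != Eigen::Success)
        throw std::runtime_error("solve failed");
    return A_inv;
}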
A dedicated sparse inverse() would not be very meaningful:
a sparse matrix does not necessarily have a sparse inverse.
That's why the method is not available.
A small extension on #Soheib's and #MatthiasB's answers: if you're using Eigen::SparseMatrix<float>, it's better to use SparseLU rather than SimplicialLLT or SimplicialLDLT; they produced wrong answers for me on float matrices.
Be warned that the inverse of a sparse matrix is not necessarily sparse, so if you're working with large matrices (which is likely, if you're using sparse representations) then this is going to be expensive. Think carefully about whether you really need the actual matrix inverse. If you're going to use the matrix inverse to solve a system of equations, then you don't need to actually compute the matrix inverse and multiply it out (use the method typically named solve and supply the right-hand-side of the equation). If you need the inverse of the Fisher matrix for covariances, try to approximate.
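For example, if the inverse is only ever used to solve systems, factor once and call solve directly (a sketch; SparseLU is again just one arbitrary choice of solver):

#include <Eigen/Sparse>
#include <iostream>

int main()
{
    // Small example system A * x = b.
    Eigen::SparseMatrix<double> A(3, 3);
    A.insert(0, 0) = 4; A.insert(0, 1) = 1;
    A.insert(1, 0) = 1; A.insert(1, 1) = 5;
    A.insert(2, 2) = 6;
    A.makeCompressed();
    Eigen::VectorXd b = Eigen::VectorXd::Ones(3);

    // Factor once, then solve for as many right-hand sides as needed;
    // no explicit inverse is ever formed.
    Eigen::SparseLU<Eigen::SparseMatrix<double> > solver;
    solver.compute(A);
    Eigen::VectorXd x = solver.solve(b);
    std::cout << x << std::endl;
}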
You can find an example of computing the inverse of a sparse complex matrix below.
I used the SimplicialLLT class; you can find other solver classes at
http://eigen.tuxfamily.org/dox-devel/group__TopicSparseSystems.html
That page can help you pick the proper class for your work (speed, accuracy and the dimensions of your matrix).
////////////////////// In His Name //////////////////////
#include <iostream>
#include <vector>
#include <Eigen/Dense>
#include <Eigen/Sparse>
using namespace std;
using namespace Eigen;

int main()
{
    SparseMatrix< complex<float> > A(4, 4);
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++) {
            A.coeffRef(i, i) = i + j;
        }
    }
    A.insert(2, 1) = {2, 1};
    A.insert(3, 0) = {0, 0};
    A.insert(3, 1) = {2.5, 1};
    A.insert(1, 3) = {2.5, 1};

    SimplicialLLT<SparseMatrix<complex<float> > > solverA;
    A.makeCompressed();
    solverA.compute(A);
    if (solverA.info() != Success) {
        cout << "Oh: Very bad" << endl;
    }

    // identity with the same scalar type as A
    SparseMatrix< complex<float> > eye(4, 4);
    eye.setIdentity();

    SparseMatrix< complex<float> > inv_A = solverA.solve(eye);
    cout << "A:\n" << A << endl;
    cout << "inv_A\n" << inv_A << endl;
}