Eigen - Check if matrix is Positive (Semi-)Definite - c++

I'm implementing a spectral clustering algorithm and I have to ensure that a matrix (a Laplacian) is positive semi-definite.
A check for positive definiteness (PD) is enough, since the "semi-" part can be seen in the eigenvalues. The matrix is pretty big (n x n, where n is on the order of some thousands), so a full eigenanalysis is expensive.
Is there any check in Eigen that gives a bool result at runtime?
Matlab can give a result with the chol() method by throwing an exception if a matrix is not PD. Following this idea, Eigen returns a result without complaining for LLL.llt().matrixL(), although I was expecting some warning/error.
Eigen also has the method isPositive, but due to a bug it is unusable with older Eigen versions.

You can use a Cholesky decomposition (LLT), whose info() method reports Eigen::NumericalIssue if the matrix is not positive definite; see the documentation.
Example below:
#include <Eigen/Dense>
#include <iostream>
#include <stdexcept>

int main()
{
    Eigen::MatrixXd A(2, 2);
    A << 1, 0, 0, -1; // not a positive semi-definite matrix
    std::cout << "The matrix A is" << std::endl << A << std::endl;

    Eigen::LLT<Eigen::MatrixXd> lltOfA(A); // compute the Cholesky decomposition of A
    if (lltOfA.info() == Eigen::NumericalIssue)
    {
        throw std::runtime_error("Possibly non positive semi-definite matrix!");
    }
}

In addition to @vsoftco's answer, we should also check for matrix symmetry, since the definition of PD/PSD requires a symmetric matrix.
Eigen::LLT<Eigen::MatrixXd> A_llt(A);
if (!A.isApprox(A.transpose()) || A_llt.info() == Eigen::NumericalIssue) {
    throw std::runtime_error("Possibly non positive semi-definite matrix!");
}
This check is important, e.g. some Eigen solvers (like LDLT) require a PSD (or NSD) matrix as input. In fact, there exist non-symmetric, and hence non-PSD, matrices A that pass the A_llt.info() != Eigen::NumericalIssue test. Consider the following example (numbers taken from Jiuzhang Suanshu, Chapter 8, Problem 1):
Eigen::Matrix3d A;
Eigen::Vector3d b;
Eigen::Vector3d x;

// A is full rank and all its eigenvalues are >= 0.
// However, A is not symmetric, thus not PSD.
A << 3, 2, 1,
     2, 3, 1,
     1, 2, 3;
b << 39, 34, 26;

// This alone doesn't check matrix symmetry, so it can't guarantee PSD.
Eigen::LLT<Eigen::Matrix3d> A_llt(A);
std::cout << (A_llt.info() == Eigen::NumericalIssue)
          << std::endl; // false, no issue detected

// The ldlt solver requires PSD, so it gives a wrong answer.
x = A.ldlt().solve(b);
std::cout << x << std::endl;                 // Wrong solution [10.625, 1.5, 4.125]
std::cout << b.isApprox(A * x) << std::endl; // false

// ColPivHouseholderQR doesn't assume PSD, so it gives the right answer.
x = A.colPivHouseholderQr().solve(b);
std::cout << x << std::endl;                 // Correct solution [9.25, 4.25, 2.75]
std::cout << b.isApprox(A * x) << std::endl; // true
Note: to be more exact, one could apply the definition of PSD directly, by checking that A is symmetric and that all of A's eigenvalues are >= 0. But as mentioned in the question, this can be computationally expensive.
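For comparison, here is a minimal sketch of that definition-based check (the helper name and tolerance are mine, not from the answers above), using Eigen's SelfAdjointEigenSolver in eigenvalues-only mode:

#include <Eigen/Dense>

// Sketch: check PSD directly from the definition. Far more expensive than
// the LLT/LDLT-based checks above for large matrices; shown for comparison.
bool isPsdByEigenvalues(const Eigen::MatrixXd& A, double tol = 1e-12) {
    if (!A.isApprox(A.transpose())) return false;
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(A, Eigen::EigenvaluesOnly);
    return es.info() == Eigen::Success && es.eigenvalues().minCoeff() >= -tol;
}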

You have to test that the matrix is symmetric (A.isApprox(A.transpose())), then create the LDLT (and not LLT, because LDLT takes care of the case where one of the eigenvalues is 0, i.e. not strictly positive), then test for numerical issues and positiveness:
template <class MatrixT>
bool isPsd(const MatrixT& A) {
    if (!A.isApprox(A.transpose())) {
        return false;
    }
    const auto ldlt = A.template selfadjointView<Eigen::Upper>().ldlt();
    if (ldlt.info() == Eigen::NumericalIssue || !ldlt.isPositive()) {
        return false;
    }
    return true;
}
I tested this on
1 2
2 3
which has a negative eigenvalue (hence it is not PSD); without the isPositive() test, isPsd() incorrectly returns true here. I also tested it on
1 2
2 4
which has a null eigenvalue (hence it is PSD but not PD).
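For illustration, a quick usage sketch of isPsd on those two matrices (my example; it assumes the template above is in scope):

#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::Matrix2d notPsd, psdNotPd;
    notPsd   << 1, 2,
                2, 3;  // eigenvalues 2 +/- sqrt(5), one negative: not PSD
    psdNotPd << 1, 2,
                2, 4;  // eigenvalues 0 and 5: PSD but not PD
    std::cout << isPsd(notPsd) << '\n';    // 0
    std::cout << isPsd(psdNotPd) << '\n';  // 1
}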

Related

C++ Spectra with RowMajor sparse matrix

I'm trying to use the Spectra 3.5 library on my Linux machine, and the SparseGenMatProd wrapper for the matrix-vector multiplication only seems to work when the sparse matrix is in ColMajor format. Is this normal behavior, and if so, how can I fix it to take RowMajor format? I've included a basic example where the output is "Segmentation fault (core dumped)". I've gone through several other posts and the documentation, but can't seem to find an answer.
#include <Eigen/Core>
#include <Eigen/SparseCore>
#include <GenEigsSolver.h>
#include <MatOp/SparseGenMatProd.h>
#include <iostream>

using namespace Spectra;

int main()
{
    // A band matrix with 1 on the main diagonal, 2 on the below-main subdiagonal,
    // and 3 on the above-main subdiagonal
    const int n = 10;
    Eigen::SparseMatrix<double, Eigen::RowMajor> M(n, n);
    M.reserve(Eigen::VectorXi::Constant(n, 3));
    for(int i = 0; i < n; i++)
    {
        M.insert(i, i) = 1.0;
        if(i > 0)
            M.insert(i - 1, i) = 3.0;
        if(i < n - 1)
            M.insert(i + 1, i) = 2.0;
    }

    // Construct matrix operation object using the wrapper class SparseGenMatProd
    SparseGenMatProd<double> op(M);

    // Construct eigen solver object, requesting the largest three eigenvalues
    GenEigsSolver< double, LARGEST_MAGN, SparseGenMatProd<double> > eigs(&op, 3, 6);

    // Initialize and compute
    eigs.init();
    int nconv = eigs.compute();

    // Retrieve results
    Eigen::VectorXcd evalues;
    if(eigs.info() == SUCCESSFUL)
        evalues = eigs.eigenvalues();
    std::cout << "Eigenvalues found:\n" << evalues << std::endl;
    return 0;
}
If you change the declaration of M to:
Eigen::SparseMatrix<double, Eigen::ColMajor> M(n, n);
it will work as expected.
Currently I'm working around this and converting my matrices to ColMajor, but I'd like to understand what's going on. Any help is much appreciated.
The API of SparseGenMatProd seems to be quite misleading. It looks like you have to specify that you are dealing with row-major matrices through the second template parameter:
SparseGenMatProd<double, Eigen::RowMajor> op(M);
Otherwise M is implicitly converted to a temporary column-major matrix, which is then stored by const reference by op, but this temporary dies right after that line of code.
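For concreteness, a minimal sketch of the two possible workarounds (using the pre-1.0 Spectra API from the question):

// Option 1: tell the wrapper the matrix is row-major, so no temporary is made.
SparseGenMatProd<double, Eigen::RowMajor> op_rm(M);

// Option 2: make the column-major copy yourself, so it outlives the wrapper.
Eigen::SparseMatrix<double, Eigen::ColMajor> M_cm(M);
SparseGenMatProd<double> op_cm(M_cm);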

Quadratic roots shows NaN

I referred to this post for the complex roots of a quadratic equation: Solve Quadratic Equation in C++.
So I wrote something similar in C++ using the OpenCV and std libraries, but I always get NaN and don't know why.
cv::Vec3f coefficients(1, -1, 1);
cv::Vec<std::complex<float>, 2> result_manual = {{0,0}, {0,0}};
float c = coefficients(0);
float b = coefficients(1);
float a = coefficients(2);

std::cout << "---------manual method solving quadratic equation\n";
double delta;
delta = std::pow(b, 2) - 4*a*c;
if (delta < 0) {
    result_manual[0].real(-b/(2*a));
    result_manual[1].real(-b/(2*a));
    result_manual[0].imag((float)std::sqrt(delta)/(2*a));
    result_manual[1].imag((float)-std::sqrt(delta)/(2*a));
}
else {
    result_manual[0].real((float)(-b + std::sqrt(delta))/2*a);
    result_manual[1].real((float)(-b - std::sqrt(delta))/2*a);
}

std::cout << result_manual[0] << std::endl;
std::cout << result_manual[1] << std::endl;
Result
---------manual method solving quadratic equation
(0.5,-nan)
(0.5,nan)
Answering myself just for completeness, after many useful comments.
The implementation linked in the question is wrong, because the square root of a negative number is undefined (it yields NaN). The correct implementation takes the square root of the absolute value of the discriminant:
result_manual[0].imag((float)-std::sqrt(std::abs(delta))/(2*a));
result_manual[1].imag((float)std::sqrt(std::abs(delta))/(2*a));
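Alternatively, here is a sketch that sidesteps the sign handling entirely by doing the arithmetic in std::complex from the start (my variant, not the OpenCV-based code from the question):

#include <complex>
#include <iostream>

int main() {
    // Coefficients of a*x^2 + b*x + c = 0, same values as in the question.
    const float a = 1.0f, b = -1.0f, c = 1.0f;
    const std::complex<float> delta(b*b - 4.0f*a*c, 0.0f);
    // std::sqrt on std::complex handles a negative discriminant correctly.
    const std::complex<float> r0 = (-b + std::sqrt(delta)) / (2.0f*a);
    const std::complex<float> r1 = (-b - std::sqrt(delta)) / (2.0f*a);
    std::cout << r0 << '\n' << r1 << '\n';  // (0.5,0.866025) and (0.5,-0.866025)
}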

Increase precision in SelfAdjointEigenSolver in Eigen

I am trying to determine the eigenvalues and eigenvectors of a sparse array in Eigen. Since I need to compute all the eigenvectors and eigenvalues, and I could not get the unsupported ArpackSupport module working, I chose to convert the system to a dense matrix and compute the eigensystem using SelfAdjointEigenSolver (I know my matrix is real and has real eigenvalues). This works well until I have matrices of size 1024x1024, but then I start getting deviations from the expected results.
In the documentation of this module (https://eigen.tuxfamily.org/dox/classEigen_1_1SelfAdjointEigenSolver.html), from what I understood, it is possible to change the maximum number of iterations:
static const int m_maxIterations
Maximum number of iterations.
The algorithm terminates if it does not converge within m_maxIterations * n iterations, where n denotes the size of the matrix. This value is currently set to 30 (copied from LAPACK).
However, I do not understand how to change this, using their examples:
SelfAdjointEigenSolver<Matrix4f> es;
Matrix4f X = Matrix4f::Random(4,4);
Matrix4f A = X + X.transpose();
es.compute(A);
cout << "The eigenvalues of A are: " << es.eigenvalues().transpose() << endl;
es.compute(A + Matrix4f::Identity(4,4)); // re-use es to compute eigenvalues of A+I
cout << "The eigenvalues of A+I are: " << es.eigenvalues().transpose() << endl;
How would you modify it in order to change the maximum number of iterations?
Additionally, will this solve my problem or should I try to find an alternative function or algorithm to solve the eigensystem?
My thanks in advance.
Increasing the number of iterations is unlikely to help. On the other hand, moving from float to double will help a lot!
If that does not help, please be more specific about the "deviations from the expected results".
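For the documentation snippet above, that change is just the scalar type; a minimal sketch:

Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es;        // double instead of float
Eigen::MatrixXd X = Eigen::MatrixXd::Random(1024, 1024);
Eigen::MatrixXd A = X + X.transpose();
es.compute(A);                                            // same API as before
std::cout << es.eigenvalues().head(5).transpose() << std::endl;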
m_maxIterations is a static const int variable, and as such it can be considered an intrinsic property of the type. Changing such a type property would usually be done via a specific template parameter. In this case, however, it is hard-coded to the constant 30, so changing it is not possible.
Therefore, your only choice is to change the value in the header file and recompile your program.
However, before doing that, I would try the singular value decomposition. According to the homepage, its accuracy is "Excellent-Proven". Moreover, it can overcome problems caused by matrices that are not completely symmetric numerically.
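If you want to try that, here is a minimal sketch using Eigen's BDCSVD (my example; note that for a symmetric matrix the singular values are the absolute values of the eigenvalues, so the signs are lost):

#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::MatrixXd X = Eigen::MatrixXd::Random(100, 100);
    Eigen::MatrixXd A = X + X.transpose();  // symmetric test matrix
    Eigen::BDCSVD<Eigen::MatrixXd> svd(A);  // singular values only by default
    std::cout << svd.singularValues().head(5).transpose() << std::endl;
}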
I solved the problem by writing the Jacobi algorithm adapted from the book Numerical Recipes:
#include <Eigen/Dense>
#include <cmath>
using Eigen::MatrixXd;
using Eigen::VectorXd;
using std::fabs;
using std::sqrt;

void ROTATy(MatrixXd &a, int i, int j, int k, int l, double s, double tau)
{
    double g, h;
    g = a(i,j);
    h = a(k,l);
    a(i,j) = g - s*(h + g*tau);
    a(k,l) = h + s*(g - h*tau);
}

void jacoby(int n, MatrixXd &a, MatrixXd &v, VectorXd &d)
{
    int j, iq, ip, i;
    double tresh, theta, tau, t, sm, s, h, g, c;
    VectorXd b(n);
    VectorXd z(n);

    v.setIdentity();
    z.setZero();
    for (ip = 0; ip < n; ip++)
    {
        d(ip) = a(ip,ip);
        b(ip) = d(ip);
    }
    for (i = 0; i < 50; i++)
    {
        // Sum of the off-diagonal magnitudes; convergence when it vanishes.
        sm = 0.0;
        for (ip = 0; ip < n-1; ip++)
        {
            for (iq = ip+1; iq < n; iq++)
                sm += fabs(a(ip,iq));
        }
        if (sm == 0.0) {
            break;
        }
        if (i < 3)
            tresh = 0.2*sm/(n*n);
        else
            tresh = 0.0;
        for (ip = 0; ip < n-1; ip++)
        {
            for (iq = ip+1; iq < n; iq++)
            {
                g = 100.0*fabs(a(ip,iq));
                if (i > 3 && (fabs(d(ip))+g) == fabs(d(ip)) && (fabs(d(iq))+g) == fabs(d(iq)))
                    a(ip,iq) = 0.0;
                else if (fabs(a(ip,iq)) > tresh)
                {
                    h = d(iq) - d(ip);
                    if ((fabs(h)+g) == fabs(h))
                    {
                        t = (a(ip,iq))/h;
                    }
                    else
                    {
                        theta = 0.5*h/(a(ip,iq));
                        t = 1.0/(fabs(theta) + sqrt(1.0 + theta*theta));
                        if (theta < 0.0)
                        {
                            t = -t;
                        }
                    }
                    // Apply the rotation for both branches above, as in
                    // Numerical Recipes.
                    c = 1.0/sqrt(1 + t*t);
                    s = t*c;
                    tau = s/(1.0 + c);
                    h = t*a(ip,iq);
                    z(ip) = z(ip) - h;
                    z(iq) = z(iq) + h;
                    d(ip) = d(ip) - h;
                    d(iq) = d(iq) + h;
                    a(ip,iq) = 0.0;
                    for (j = 0; j < ip; j++)
                        ROTATy(a, j, ip, j, iq, s, tau);
                    for (j = ip+1; j < iq; j++)
                        ROTATy(a, ip, j, j, iq, s, tau);
                    for (j = iq+1; j < n; j++)
                        ROTATy(a, ip, j, iq, j, s, tau);
                    for (j = 0; j < n; j++)
                        ROTATy(v, j, ip, j, iq, s, tau);
                }
            }
        }
        // End-of-sweep update from Numerical Recipes: fold the accumulated
        // corrections in z into d via b, then reset z.
        for (ip = 0; ip < n; ip++)
        {
            b(ip) += z(ip);
            d(ip) = b(ip);
            z(ip) = 0.0;
        }
    }
}
The function jacoby receives the size n of the square matrix, the matrix a we want to solve, a matrix v that will receive the eigenvectors in its columns, and a vector d that will receive the eigenvalues. It is a bit slower, so I tried to parallelize it with OpenMP (see: Parallelization of Jacobi algorithm using eigen c++ using openmp), but for 4096x4096 matrices this did not improve the computation time, unfortunately.
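For reference, a minimal usage sketch of jacoby (my example; it assumes the two functions above are in scope):

#include <iostream>

int main() {
    const int n = 4;
    MatrixXd x = MatrixXd::Random(n, n);
    MatrixXd a = x + x.transpose();  // symmetric input; jacoby destroys its strict upper part
    MatrixXd v(n, n);                // receives the eigenvectors in its columns
    VectorXd d(n);                   // receives the eigenvalues
    jacoby(n, a, v, d);
    std::cout << "eigenvalues: " << d.transpose() << std::endl;
}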

Sparse eigenvalues using eigen3/sparse

Is there a distinct and efficient way of finding the eigenvalues and eigenvectors of a real, symmetric, very large (let's say 10000x10000) sparse matrix in Eigen3? There is an eigenvalue solver for dense matrices, but that doesn't make use of the properties of the matrix, e.g. its symmetry. Furthermore, I don't want to store the matrix in dense form.
Or (alternatively) is there a better (and better-documented) library for doing this?
For Eigen, there's a library named Spectra. As described on its web page, Spectra is a redesign of the ARPACK library using the C++ language.
Unlike Armadillo, suggested in another answer, Spectra supports long double and any other real floating-point type (e.g. boost::multiprecision::float128).
Here's an example of usage (same as the version in the documentation, but adapted for experiments with different floating-point types):
#include <Eigen/Core>
#include <SymEigsSolver.h> // Also includes <MatOp/DenseSymMatProd.h>
#include <iostream>
#include <limits>

int main()
{
    using Real = long double;
    using Matrix = Eigen::Matrix<Real, Eigen::Dynamic, Eigen::Dynamic>;

    // We are going to calculate the eigenvalues of M.
    // Don't use auto here: Random() is a lazy expression that would be
    // re-evaluated (with new random numbers) on each use.
    const Matrix A = Matrix::Random(10, 10);
    const Matrix M = A + A.transpose();

    // Construct matrix operation object using the wrapper class DenseSymMatProd
    Spectra::DenseSymMatProd<Real> op(M);

    // Construct eigen solver object, requesting the largest three eigenvalues
    Spectra::SymEigsSolver<Real,
                           Spectra::LARGEST_ALGE,
                           Spectra::DenseSymMatProd<Real>> eigs(&op, 3, 6);

    // Initialize and compute
    eigs.init();
    const auto nconv = eigs.compute();
    std::cout << nconv << " eigenvalues converged.\n";

    // Retrieve results
    if(eigs.info() == Spectra::SUCCESSFUL)
    {
        const auto evalues = eigs.eigenvalues();
        std::cout.precision(std::numeric_limits<Real>::digits10);
        std::cout << "Eigenvalues found:\n" << evalues << '\n';
    }
}
Armadillo will do this using eigs_sym.
Note that computing all the eigenvalues is a very expensive operation whatever you do; usually one finds only the k largest (or smallest) eigenvalues, which is what eigs_sym does.
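A minimal sketch of the Armadillo route (my example, assuming a sparse symmetric arma::sp_mat input):

#include <armadillo>
#include <iostream>

int main() {
    // Build a random sparse symmetric matrix, then ask for the k eigenvalues
    // of largest magnitude; eigs_sym never forms a dense copy.
    arma::sp_mat B = arma::sprandu<arma::sp_mat>(10000, 10000, 0.001);
    arma::sp_mat A = 0.5 * (B + B.t());
    arma::vec eigval;
    arma::mat eigvec;
    arma::eigs_sym(eigval, eigvec, A, 5);  // k = 5
    std::cout << eigval << std::endl;
    return 0;
}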

C++ eigenvalue/vector decomposition, only need first n vectors fast

I have a ~3000x3000 covariance-like matrix on which I compute the eigenvalue-eigenvector decomposition (it's an OpenCV matrix, and I use cv::eigen() to get the job done).
However, I actually only need, say, the first 30 eigenvalues/vectors; I don't care about the rest. Theoretically, this should speed up the computation significantly, right? I mean, that means there are 2970 fewer eigenvectors to compute.
Which C++ library will allow me to do that? Please note that OpenCV's eigen() method does have parameters for that, but the documentation says they are ignored, and I tested it myself, they are indeed ignored :D
UPDATE:
I managed to do it with ARPACK: I compiled it for Windows and got it working. The results look promising; an illustration can be seen in this toy example:
#include "ardsmat.h"
#include "ardssym.h"
int n = 3; // Dimension of the problem.
double* EigVal = NULL; // Eigenvalues.
double* EigVec = NULL; // Eigenvectors stored sequentially.
int lowerHalfElementCount = (n*n+n) / 2;
//whole matrix:
/*
2 3 8
3 9 -7
8 -7 19
*/
double* lower = new double[lowerHalfElementCount]; //lower half of the matrix
//to be filled with COLUMN major (i.e. one column after the other, always starting from the diagonal element)
lower[0] = 2; lower[1] = 3; lower[2] = 8; lower[3] = 9; lower[4] = -7; lower[5] = 19;
//params: dimensions (i.e. width/height), array with values of the lower or upper half (sequentially, row major), 'L' or 'U' for upper or lower
ARdsSymMatrix<double> mat(n, lower, 'L');
// Defining the eigenvalue problem.
int noOfEigVecValues = 2;
//int maxIterations = 50000000;
//ARluSymStdEig<double> dprob(noOfEigVecValues, mat, "LM", 0, 0.5, maxIterations);
ARluSymStdEig<double> dprob(noOfEigVecValues, mat);
// Finding eigenvalues and eigenvectors.
int converged = dprob.EigenValVectors(EigVec, EigVal);
for (int eigValIdx = 0; eigValIdx < noOfEigVecValues; eigValIdx++) {
std::cout << "Eigenvalue: " << EigVal[eigValIdx] << "\nEigenvector: ";
for (int i = 0; i < n; i++) {
int idx = n*eigValIdx+i;
std::cout << EigVec[idx] << " ";
}
std::cout << std::endl;
}
The results are:
9.4298, 24.24059
for the eigenvalues, and
-0.523207, -0.83446237, -0.17299346
0.273269, -0.356554, 0.893416
for the 2 eigenvectors, respectively (one eigenvector per row).
The code fails when asked for 3 eigenvectors (it can only find 1-2 in this case; an assert() makes sure of that, but well, that's not a problem).
In this article, Simon Funk shows a simple, effective way to estimate a singular value decomposition (SVD) of a very large matrix. In his case, the matrix is sparse, with dimensions 17,000 x 500,000.
Now, the page here describes how eigenvalue decomposition is closely related to SVD. Thus, you might benefit from considering a modified version of Simon Funk's approach, especially if your matrix is sparse. Furthermore, your matrix is not only square but also symmetric (if that is what you mean by covariance-like), which likely leads to additional simplifications.
... Just an idea :)
It seems that Spectra will do the job with good performance.
Here is an example from their documentation that computes the three largest eigenvalues of a dense symmetric matrix M (like your covariance matrix):
#include <Eigen/Core>
#include <Spectra/SymEigsSolver.h>
// <Spectra/MatOp/DenseSymMatProd.h> is implicitly included
#include <iostream>

using namespace Spectra;

int main()
{
    // We are going to calculate the eigenvalues of M
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(10, 10);
    Eigen::MatrixXd M = A + A.transpose();

    // Construct matrix operation object using the wrapper class DenseSymMatProd
    DenseSymMatProd<double> op(M);

    // Construct eigen solver object, requesting the largest three eigenvalues
    SymEigsSolver< double, LARGEST_ALGE, DenseSymMatProd<double> > eigs(&op, 3, 6);

    // Initialize and compute
    eigs.init();
    int nconv = eigs.compute();

    // Retrieve results
    Eigen::VectorXd evalues;
    if(eigs.info() == SUCCESSFUL)
        evalues = eigs.eigenvalues();
    std::cout << "Eigenvalues found:\n" << evalues << std::endl;
    return 0;
}
}