Largest eigenvalues (and corresponding eigenvectors) in C++ - c++

What is the easiest and fastest way (with some library, of course) to compute k largest eigenvalues and eigenvectors for a large dense matrix in C++? I'm looking for an equivalent of MATLAB's eigs function; I've looked through Armadillo and Eigen but couldn't find one, and computing all eigenvalues takes forever in my case (I need top 10 eigenvectors for an approx. 30000x30000 dense non-symmetric real matrix).
Desperate, I've even tried to implement power iterations by myself with Armadillo's QR decomposition but ran into complex pairs of eigenvalues and gave up. :)

Did you try https://github.com/yixuan/spectra ?
It's similar to ARPACK but has a nice Eigen-like interface (it's compatible with Eigen!).
I used it for 30k x 30k matrices (PCA) and it worked quite well.
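For example, getting the 10 largest-magnitude eigenpairs of a dense non-symmetric matrix could look roughly like this (a minimal sketch, assuming the Spectra 1.x headers and API; the nev and ncv values are just illustrative):
#include <Eigen/Dense>
#include <Spectra/GenEigsSolver.h>
#include <Spectra/MatOp/DenseGenMatProd.h>

// Compute the 10 eigenvalues of largest magnitude (and their eigenvectors)
// of a dense non-symmetric matrix A, without computing the full spectrum.
void top10_eigenpairs(const Eigen::MatrixXd& A, Eigen::VectorXcd& evals, Eigen::MatrixXcd& evecs)
{
    Spectra::DenseGenMatProd<double> op(A);                 // wraps y = A * x
    // nev = 10 requested eigenpairs, ncv = 30 Arnoldi basis vectors (ncv > 2*nev is typical)
    Spectra::GenEigsSolver<Spectra::DenseGenMatProd<double>> eigs(op, 10, 30);
    eigs.init();
    eigs.compute(Spectra::SortRule::LargestMagn);
    if (eigs.info() == Spectra::CompInfo::Successful)
    {
        evals = eigs.eigenvalues();    // complex in general for a non-symmetric matrix
        evecs = eigs.eigenvectors();   // one column per eigenvalue
    }
}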

AFAIK the problem of finding the first k eigenvalues of a generic matrix has no easy solution. The Matlab function eigs you mentioned is supposed to work with sparse matrices.
Matlab probably uses Arnoldi/Lanczos; you might try it and see whether it works decently in your case even though your matrix is not sparse. The reference package for Arnoldi is ARPACK, which has a C++ interface.

Here is how I get the k largest eigenvectors of an NxN real-valued (float), dense, symmetric matrix A in C++ Eigen:
#include <Eigen/Dense>
...
Eigen::MatrixXf A(N,N);
...
Eigen::SelfAdjointEigenSolver<Eigen::MatrixXf> solver(N);
solver.compute(A);
Eigen::VectorXf lambda = solver.eigenvalues().reverse();
Eigen::MatrixXf X = solver.eigenvectors().block(0,N-k,N,k).rowwise().reverse();
Note that the eigenvalues and associated eigenvectors are returned in ascending order, so I reverse them to get the largest values first.
If you want eigenvalues and eigenvectors for other (non-symmetric) matrices, they will in general be complex and you will need to use the Eigen::EigenSolver class instead; a short sketch follows below.
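A minimal sketch of that non-symmetric case, continuing from the snippet above (note the complex return types, and that EigenSolver does not sort the eigenvalues for you):
Eigen::EigenSolver<Eigen::MatrixXf> es(A);   // A as above, but no symmetry assumed
Eigen::VectorXcf mu = es.eigenvalues();      // complex eigenvalues, in no particular order
Eigen::MatrixXcf V = es.eigenvectors();      // complex eigenvectors as columns
// to keep only the k largest you have to sort them yourself, e.g. by std::abs(mu(i))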

Eigen has an Eigenvalues module that works pretty well, but I've never used it on anything quite that large.

Related

Raise a matrix to a power

Suppose we have a Hermitian matrix A that is known to have an inverse.
I know that the ZGETRF and ZGETRI subroutines in the LAPACK library can compute the inverse matrix.
Is there any subroutine in LAPACK or BLAS that can calculate A^{-1/2} directly, or any other way to compute A^{-1/2}?
You can raise a matrix to a power following a similar procedure to taking the exponential of a matrix:
Diagonalise the matrix, to give the eigenvectors v_i and corresponding eigenvalues e_i.
Raise the eigenvalues to the power: {e_i}^{-1/2}.
Construct the matrix whose eigenvalues are {e_i}^{-1/2} and whose eigenvectors are v_i (a sketch in Eigen is given below).
It's worth noting that, as described here, this problem does not have a unique solution. In step 2 above, both {e_i}^{-1/2} and -{e_i}^{-1/2} will lead to valid solutions, so an N*N matrix A will have at least 2^N matrices B such that B^{-2}=A. If any of the eigenvalues are degenerate then there will be a continuous space of valid solutions.
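As an illustration only (not part of the original answer), here is how the three steps might look with Eigen's SelfAdjointEigenSolver, picking the positive root {e_i}^{-1/2} and assuming A is Hermitian and positive definite:
#include <Eigen/Dense>
#include <complex>

// A^{-1/2} for a Hermitian positive-definite matrix A.
Eigen::MatrixXcd inverse_sqrt(const Eigen::MatrixXcd& A)
{
    // Step 1: diagonalise; the eigenvalues of a Hermitian matrix are real.
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXcd> solver(A);
    Eigen::VectorXd e = solver.eigenvalues();
    // Step 2: raise the eigenvalues to the power -1/2 (positive root).
    Eigen::VectorXcd eInvSqrt = e.cwiseSqrt().cwiseInverse().cast<std::complex<double>>();
    // Step 3: rebuild the matrix with the same eigenvectors: V * diag({e_i}^{-1/2}) * V^H.
    const Eigen::MatrixXcd& V = solver.eigenvectors();
    return V * eInvSqrt.asDiagonal() * V.adjoint();
}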

What could be the best way to find Inverse Matrix in SIMD?

I've been digging into this for a while.
To find the inverse matrix, I need to find the determinant of the matrix (correct?).
But the only way I found is to go through the whole matrix computing terms like a11*(a22*a33 - a23*a32) and so on.
Please enlighten me: what could be the best way to find the determinant, so that I can get the inverse matrix?
Or is there a more efficient way to get the inverse matrix without finding the determinant?
Instead of finding the determinant of a general matrix, you can use an LU decomposition and then do what the Intel Math Kernel Library does:
computes inv(A) by solving the system inv(A)*L = inv(U) for inv(A).
inv(U) (where U is an upper triangular matrix) is easier and more efficient to compute, for example with the procedures shown here. As for the determinant, it boils down to the fact that the determinant of a triangular matrix is just the product of its diagonal.
And an obligatory reminder: use an existing math library if possible; numerical computations like this are very easy to get wrong.
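To make that concrete, here is a hedged sketch of the LU route using Eigen (rather than MKL directly; the function name is just for illustration):
#include <Eigen/Dense>

// Invert a square matrix via an LU factorization (partial pivoting),
// avoiding any cofactor/determinant expansion.
Eigen::MatrixXd invert_via_lu(const Eigen::MatrixXd& A)
{
    Eigen::PartialPivLU<Eigen::MatrixXd> lu(A);   // A = P^{-1} * L * U
    // Equivalent to solving A * X = I for X; Eigen exposes this as inverse()
    // on the decomposition. If you still need the determinant, lu.determinant()
    // is just the product of U's diagonal, up to the sign of the permutation.
    return lu.inverse();
}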

Right function for computing a limited number of eigenvectors of a complex symmetric matrix in Armadillo

I am using the Armadillo library to manually port a piece of Matlab code. The Matlab code uses the eigs() function to find a small number (~3) of eigenvectors of a relatively large (200x200) covariance matrix R. The code looks like this:
[E,D] = eigs(R,3,"lm");
In Armadillo there are two functions, eigs_sym() and eigs_gen(); however, the former only supports real symmetric matrices and the latter requires ARPACK (I'm building the code for Android). Is there a reason eigs_sym() doesn't support complex matrices? Is there any other way to find the eigenvectors of a complex symmetric matrix?
The eigs_sym() and eigs_gen() functions (where the s in eigs stands for sparse) in Armadillo are for large sparse matrices. A "large" size in this context is roughly 5000x5000 or larger.
Your R matrix has a size of 200x200. This is very small by current standards. It would be much faster to simply use the dense eig_sym() or eig_gen() functions to get all the eigenvalues / eigenvectors, followed by extracting a subset of them using submatrix operations like .tail_cols(); a sketch is given below.
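For the complex, non-Hermitian case this could look roughly like the following sketch (eig_gen() does not order its output, so the 3 largest-magnitude pairs are picked by sorting, mimicking eigs(R,3,"lm"); the function name is illustrative):
#include <armadillo>

// Dense replacement for eigs(R, 3, "lm") on a small complex matrix R:
// compute everything, then keep the 3 eigenpairs of largest magnitude.
void top3_eigenpairs(const arma::cx_mat& R, arma::cx_mat& E, arma::cx_vec& D)
{
    arma::cx_vec eigval;
    arma::cx_mat eigvec;
    arma::eig_gen(eigval, eigvec, R);                                   // full dense decomposition
    arma::uvec order = arma::sort_index(arma::abs(eigval), "descend");  // by magnitude, largest first
    arma::uvec top = order.head(3);
    D = eigval.elem(top);    // 3 largest-magnitude eigenvalues
    E = eigvec.cols(top);    // corresponding eigenvectors, one per column
}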
Have you tested constructing a 400x400 real symmetric matrix by replacing each complex value, a+bi, by a 2x2 matrix [a,b;-b,a] (alternatively using a block variant of this)?
This should construct a real symmetric matrix that in some way corresponds to the complex one.
There will be a slow-down due to the larger size, and all eigenvalues will be duplicated (which may slow down the algorithm), but it seems straightforward to test.

library for full SVD of sparse matrices

I want to do a singular value decomposition for large matrices containing a lot of zeros. In particular I need U and S, obtained from the diagonalization of a symmetric matrix A. This means that A = U * S * transpose(U^*), where S is a diagonal matrix and U contains all eigenvectors as columns.
I searched the web for C++ libraries that combine SVD and sparse matrices, but could only find libraries that compute a few, but not all, eigenvectors. Does anyone know if there is such a library?
Also, after obtaining U and S, I need to multiply them by some dense vector.
For this problem, I am using a combination of different techniques:
ARPACK can compute a set of eigenvalues and associated eigenvectors; unfortunately it is fast only for the high frequencies and slow for the low frequencies,
but since the eigenvectors of the inverse of a matrix are the same as the eigenvectors of the matrix itself, one can factor the matrix (using a sparse matrix factorization routine, such as SuperLU, or CHOLMOD if the matrix is symmetric). The "communication protocol" with ARPACK only expects you to compute a matrix-vector product, so if you do a linear system solve with the factored matrix instead, this makes ARPACK fast for the low frequencies of the spectrum (do not forget to then take the reciprocal of the computed eigenvalues, since they are the 1/lambda of the original matrix!).
This trick can be used to explore the entire spectrum, with a generalized transform (the transform in the previous point is referred to as the "invert" transform). There is also a "shift-invert" transform that lets one explore an arbitrary portion of the spectrum with fast convergence of ARPACK. The computed eigenvalue nu then corresponds to lambda = sigma + 1/nu, where sigma is a "shift" (the transform is slightly more complicated than the "invert" transform; see the references below for a full explanation).
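As a hedged sketch of the same invert / shift-invert idea, but using Spectra on top of Eigen's sparse module instead of raw ARPACK + SuperLU (Spectra 1.x API assumed; the nev, ncv and sigma values are illustrative):
#include <Eigen/SparseCore>
#include <Eigen/Dense>
#include <Spectra/SymEigsShiftSolver.h>
#include <Spectra/MatOp/SparseSymShiftSolve.h>

// Low end of the spectrum of a sparse symmetric matrix via shift-invert:
// Spectra factors (A - sigma*I) once and then only solves linear systems
// with it, which is the "solve instead of multiply" trick described above.
void smallest_eigenpairs(const Eigen::SparseMatrix<double>& A,
                         Eigen::VectorXd& evals, Eigen::MatrixXd& evecs)
{
    const double sigma = 0.0;   // shift; sigma = 0 targets the smallest eigenvalues
    Spectra::SparseSymShiftSolve<double> op(A);
    // nev = 10 eigenpairs, ncv = 30 Lanczos vectors
    Spectra::SymEigsShiftSolver<Spectra::SparseSymShiftSolve<double>> eigs(op, 10, 30, sigma);
    eigs.init();
    // "largest magnitude" of 1/(lambda - sigma) selects the lambda closest to sigma
    eigs.compute(Spectra::SortRule::LargestMagn);
    if (eigs.info() == Spectra::CompInfo::Successful)
    {
        evals = eigs.eigenvalues();    // already mapped back to the eigenvalues of A
        evecs = eigs.eigenvectors();
    }
}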
ARPACK: http://www.caam.rice.edu/software/ARPACK/
SUPERLU: http://crd-legacy.lbl.gov/~xiaoye/SuperLU/
The numerical algorithm is explained in my article that can be downloaded here:
http://alice.loria.fr/index.php/publications.html?redirect=0&Paper=ManifoldHarmonics#2008
Sourcecode is available there:
https://gforge.inria.fr/frs/download.php/file/27277/manifold_harmonics-a4-src.tar.gz
See also my answer to this question:
https://scicomp.stackexchange.com/questions/20243/sparse-generalized-eigensolver-using-opencl/20255#20255

Getting eigenvectors for the largest n eigenvalues in OpenCV

I am applying Kernel PCA for a feature extraction task in a computer vision problem, which involves solving the eigen value problem for a very large symmetrix matrix, like 6400x6400 in size. I am using OpenCV in my implementation and I use the cv::eigen method for the purpose of EigenDecomposition. This method calculates all the eigenvalues and eigenvectors of the given matrix, which becomes easily intractable in case of very large and dense matrices, like in my case, since the problem has O(N^3) complexity as far as I know. But in fact, I only need a small subset of the eigenvectors, which correspond to n largest eigenvalues of the matrix, which is n < N. Is there any method available in OpenCV for this purpose, which only calculates some of the largest eigenvalues and their corresponding eigenvectors? I failed to locate such a method in OpenCV documentation. Any method from any other library is welcome, as well.