What could be the best way to find Inverse Matrix in SIMD? - c++

I've been digging into this for a while.
To find the inverse matrix, I need to find the determinant of the matrix (correct?).
But the only way I found is to expand the cofactors, computing a11*(a22*a33 - a23*a32) and so on.
Please enlighten me: what is the best way to find the determinant, so that I can get the inverse matrix?
Or is there a more efficient way to get the inverse matrix without finding the determinant?

Instead of finding the determinant of a general matrix, you can use an LU decomposition and then do what the Intel Math Kernel Library does:
computes inv(A) by solving the system inv(A)*L = inv(U) for inv(A).
inv(U) (U being an upper triangular matrix) is easier and more efficient to compute, for example with the procedures shown here; it ultimately relies on the fact that the determinant of an upper triangular matrix is just the product of its diagonal.
And an obligatory reminder: use an existing math library if possible; numerical computations like this are very easy to get wrong.
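To see why the triangular part is cheap, here is a plain scalar C++ sketch (no SIMD, no library; the 3x3 size is hard-coded purely for illustration) that inverts an upper triangular matrix by back substitution and reads the determinant straight off the diagonal:

```cpp
#include <array>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Invert an upper triangular matrix by back substitution, solving
// U * X = I one column at a time. The inverse of an upper triangular
// matrix is itself upper triangular.
Mat3 invert_upper(const Mat3& U) {
    Mat3 X{};  // value-initialized to all zeros
    for (int j = 0; j < 3; ++j) {        // column j of the identity
        for (int i = 2; i >= 0; --i) {   // back substitution, bottom up
            double s = (i == j) ? 1.0 : 0.0;
            for (int k = i + 1; k < 3; ++k) s -= U[i][k] * X[k][j];
            X[i][j] = s / U[i][i];
        }
    }
    return X;
}

// For a triangular matrix the determinant is just the product of the
// diagonal entries -- no cofactor expansion needed.
double det_upper(const Mat3& U) {
    return U[0][0] * U[1][1] * U[2][2];
}
```

A SIMD version would vectorize the inner loops, but the algorithmic point is the same: the triangular systems need no determinant machinery at all.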

Related

Calculating determinant of Matrix in C++ without creating submatrices

Is there a way to compute the determinant of a given matrix in C++ using only one variable (the first loaded matrix), with the recursive calls using only a reference to that matrix?
How can the coordinates of elements in the matrix be used to compute the determinants of submatrices of the given matrix without creating them as separate matrices, just using the elements of the first matrix and their coordinates? Can that be done with recursion, or should recursion be avoided?
If you're trying to calculate a determinant for any matrix of size larger than 3x3 using Cramer's Rule, you're certainly doing something wrong. Performance will be terrible.
Probably the easiest approach to think your way through is to use row reduction to turn it into an upper triangular matrix. Finding the determinant of an upper triangular matrix is easy: just multiply down the diagonal. As for the rest, multiply by the constant factors you applied along the way, and remember that every row swap contributes a factor of -1.
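That recipe can be sketched in plain C++ (the 1e-12 zero-pivot tolerance is illustrative; a production implementation would scale it to the matrix entries):

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Determinant via row reduction (Gaussian elimination with partial
// pivoting): reduce to upper triangular form, multiply down the
// diagonal, and flip the sign once per row swap.
double determinant(std::vector<std::vector<double>> a) {
    const std::size_t n = a.size();
    double det = 1.0;
    for (std::size_t c = 0; c < n; ++c) {
        std::size_t p = c;  // pick the largest pivot in this column
        for (std::size_t r = c + 1; r < n; ++r)
            if (std::fabs(a[r][c]) > std::fabs(a[p][c])) p = r;
        if (std::fabs(a[p][c]) < 1e-12) return 0.0;  // singular to tolerance
        if (p != c) {
            std::swap(a[p], a[c]);  // every swap is a -1
            det = -det;
        }
        det *= a[c][c];
        for (std::size_t r = c + 1; r < n; ++r) {  // eliminate below the pivot
            const double f = a[r][c] / a[c][c];
            for (std::size_t k = c; k < n; ++k) a[r][k] -= f * a[c][k];
        }
    }
    return det;
}
```

This is O(n^3) instead of the O(n!) cost of recursive cofactor expansion, which is why it stays usable well past 3x3.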

library for full SVD of sparse matrices

I want to do a singular value decomposition of large matrices containing a lot of zeros. In particular, I need U and S, obtained from the diagonalization of a symmetric matrix A. This means that A = U * S * transpose(U^*), where S is a diagonal matrix and U contains all the eigenvectors as columns.
I searched the web for C++ libraries that combine SVD and sparse matrices, but could only find libraries that compute a few, but not all, of the eigenvectors. Does anyone know of such a library?
Also, after obtaining U and S, I need to multiply them by some dense vector.
For this problem, I am using a combination of different techniques:
Arpack can compute a set of eigenvalues and associated eigenvectors. Unfortunately, it is fast only for the high frequencies and slow for the low frequencies,
but since the eigenvectors of the inverse of a matrix are the same as the eigenvectors of the matrix itself, one can factor the matrix (using a sparse matrix factorization routine such as SuperLU, or CHOLMOD if the matrix is symmetric). The "communication protocol" with Arpack only expects you to compute a matrix-vector product, so if you do a linear-system solve using the factored matrix instead, this makes Arpack fast for the low frequencies of the spectrum (do not forget then to replace each computed eigenvalue lambda by 1/lambda!).
This trick can be generalized to explore the entire spectrum (the transform in the previous point is referred to as the "invert" transform). There is also a "shift-invert" transform that lets you explore an arbitrary portion of the spectrum with fast Arpack convergence: you solve with (A - sigma*I), where sigma is a "shift", and recover each eigenvalue as sigma + 1/nu from the computed value nu (the transform is slightly more complicated than the "invert" transform; see the references below for a full explanation).
ARPACK: http://www.caam.rice.edu/software/ARPACK/
SUPERLU: http://crd-legacy.lbl.gov/~xiaoye/SuperLU/
The numerical algorithm is explained in my article, which can be downloaded here:
http://alice.loria.fr/index.php/publications.html?redirect=0&Paper=ManifoldHarmonics#2008
Source code is available here:
https://gforge.inria.fr/frs/download.php/file/27277/manifold_harmonics-a4-src.tar.gz
See also my answer to this question:
https://scicomp.stackexchange.com/questions/20243/sparse-generalized-eigensolver-using-opencl/20255#20255
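At toy scale, the "invert" trick above looks like the sketch below (plain C++; a dense Gaussian-elimination solve stands in for the SuperLU/CHOLMOD factorization, which in practice you would compute once and reuse across iterations):

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// Dense Gaussian-elimination solve of A*y = b. This stands in for the
// sparse factorization (SuperLU / CHOLMOD) that you would factor once
// and reuse for every "matrix-vector product" Arpack requests.
Vec solve(Mat a, Vec b) {
    const std::size_t n = a.size();
    for (std::size_t c = 0; c < n; ++c) {
        std::size_t p = c;  // partial pivoting
        for (std::size_t r = c + 1; r < n; ++r)
            if (std::fabs(a[r][c]) > std::fabs(a[p][c])) p = r;
        std::swap(a[p], a[c]);
        std::swap(b[p], b[c]);
        for (std::size_t r = c + 1; r < n; ++r) {
            const double f = a[r][c] / a[c][c];
            for (std::size_t k = c; k < n; ++k) a[r][k] -= f * a[c][k];
            b[r] -= f * b[c];
        }
    }
    Vec y(n);
    for (std::size_t i = n; i-- > 0;) {  // back substitution
        double s = b[i];
        for (std::size_t k = i + 1; k < n; ++k) s -= a[i][k] * y[k];
        y[i] = s / a[i][i];
    }
    return y;
}

// Inverse power iteration: iterating y = inv(A)*x converges to the
// eigenvector whose eigenvalue mu of inv(A) is largest in magnitude,
// i.e. the smallest-magnitude eigenvalue lambda = 1/mu of A.
double smallest_eigenvalue(const Mat& A, int iters = 200) {
    Vec x(A.size(), 0.0);
    x[0] = 1.0;  // any start not orthogonal to the target eigenvector
    double mu = 0.0;
    for (int it = 0; it < iters; ++it) {
        Vec y = solve(A, x);  // one "invert" step: a solve, not a product
        double nrm = 0.0;
        for (double v : y) nrm += v * v;
        nrm = std::sqrt(nrm);
        for (double& v : y) v /= nrm;
        Vec z = solve(A, y);  // Rayleigh quotient mu = y . inv(A)*y
        mu = 0.0;
        for (std::size_t i = 0; i < y.size(); ++i) mu += y[i] * z[i];
        x = y;
    }
    return 1.0 / mu;  // map the eigenvalue of inv(A) back to A
}
```

Arpack does the same thing far more robustly (Arnoldi with restarts); the sketch only shows why doing a solve per iteration targets the low end of the spectrum.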

How to efficiently use inverse and determinant in Eigen?

In Eigen there are recommendations that warn against the explicit calculation of determinants and inverse matrices.
I'm implementing the posterior predictive for the multivariate normal with a normal-inverse-Wishart prior distribution. This can be expressed as a multivariate t-distribution.
In the multivariate t-distribution you will find a term |Sigma|^{-1/2} as well as (x-mu)^T Sigma^{-1} (x-mu).
I'm quite ignorant with respect to Eigen. I can imagine that for a positive semidefinite matrix (it is a covariance matrix) I can use the LLT solver.
There are, however, no .determinant() and .inverse() methods defined on the solver itself. Do I have to use the .matrixL() function and invert the elements on the diagonal myself for the inverse, as well as compute the product to get the determinant? I think I'm missing something.
If you have the Cholesky factorization of Sigma=LL^T and want (x-mu)^T*Sigma^{-1}*(x-mu), you can compute: (llt.matrixL().solve(x-mu)).squaredNorm() (assuming x and mu are vectors).
For the square root of the determinant, just calculate llt.matrixL().determinant() (the determinant of a triangular matrix is just the product of its diagonal elements).
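For reference, the math behind those two one-liners can be sketched without Eigen (plain C++, dense, assuming Sigma is positive definite; the function names here are made up for the example):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// Cholesky factorization Sigma = L * L^T (Sigma must be positive definite).
Mat cholesky(const Mat& S) {
    const std::size_t n = S.size();
    Mat L(n, Vec(n, 0.0));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j <= i; ++j) {
            double s = S[i][j];
            for (std::size_t k = 0; k < j; ++k) s -= L[i][k] * L[j][k];
            L[i][j] = (i == j) ? std::sqrt(s) : s / L[j][j];
        }
    return L;
}

// (x-mu)^T Sigma^{-1} (x-mu) = |y|^2, where y solves L*y = x-mu by
// forward substitution. This is exactly what
// llt.matrixL().solve(x - mu).squaredNorm() computes in Eigen.
double quadratic_form(const Mat& L, const Vec& d) {
    const std::size_t n = L.size();
    Vec y(n);
    double q = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        double s = d[i];
        for (std::size_t k = 0; k < i; ++k) s -= L[i][k] * y[k];
        y[i] = s / L[i][i];
        q += y[i] * y[i];
    }
    return q;
}

// |Sigma|^{1/2} = det(L): the product of the diagonal of the triangular
// factor, matching llt.matrixL().determinant().
double sqrt_det(const Mat& L) {
    double p = 1.0;
    for (std::size_t i = 0; i < L.size(); ++i) p *= L[i][i];
    return p;
}
```

Note that neither quantity ever forms Sigma^{-1} explicitly, which is exactly why Eigen's documentation recommends solves over .inverse().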

Largest eigenvalues (and corresponding eigenvectors) in C++

What is the easiest and fastest way (with some library, of course) to compute k largest eigenvalues and eigenvectors for a large dense matrix in C++? I'm looking for an equivalent of MATLAB's eigs function; I've looked through Armadillo and Eigen but couldn't find one, and computing all eigenvalues takes forever in my case (I need top 10 eigenvectors for an approx. 30000x30000 dense non-symmetric real matrix).
Desperate, I've even tried to implement power iterations by myself with Armadillo's QR decomposition but ran into complex pairs of eigenvalues and gave up. :)
Did you try https://github.com/yixuan/spectra ?
It's similar to ARPACK but with a nice Eigen-like interface (it's compatible with Eigen!).
I used it for 30000x30000 matrices (PCA) and it worked quite well.
AFAIK the problem of finding the first k eigenvalues of a generic matrix has no easy solution. The Matlab function eigs you mentioned is meant to work with sparse matrices.
Matlab probably uses Arnoldi/Lanczos; you might check whether it works decently in your case even though your matrix is not sparse. The reference package for Arnoldi is ARPACK, which has a C++ interface.
Here is how I get the k largest eigenvectors of a NxN real-valued (float), dense, symmetric matrix A in C++ Eigen:
#include <Eigen/Dense>
...
Eigen::MatrixXf A(N,N);
...
Eigen::SelfAdjointEigenSolver<Eigen::MatrixXf> solver(N);
solver.compute(A);
Eigen::VectorXf lambda = solver.eigenvalues().reverse();
Eigen::MatrixXf X = solver.eigenvectors().block(0,N-k,N,k).rowwise().reverse();
Note that the eigenvalues and associated eigenvectors are returned in ascending order so I reverse them to get the largest values first.
If you want eigenvalues and eigenvectors for other (non-symmetric) matrices they will, in general, be complex and you will need to use the Eigen::EigenSolver class instead.
Eigen has an Eigenvalues module that works pretty well, but I've never used it on anything quite that large.

Determinant Value For Very Large Matrix

I have a very large square matrix, of order around 100000, and I want to know whether its determinant is zero or not.
What is the fastest way to find that out?
I have to implement it in C++.
Assuming you are trying to determine if the matrix is non-singular you may want to look here:
https://math.stackexchange.com/questions/595/what-is-the-most-efficient-way-to-determine-if-a-matrix-is-invertible
As mentioned in the comments, it's best to use some sort of BLAS/LAPACK library that will do this for you, such as Boost::uBLAS.
Usually, matrices of that size are extremely sparse. Use row and column reordering algorithms to concentrate the entries near the diagonal, then use a QR or LU decomposition. The product of the diagonal entries of the second factor is, up to a sign in the QR case, the determinant. This may still be too ill-conditioned; the most reliable rank estimate is obtained from a singular value decomposition. However, the SVD is more expensive.
There is a property that if any two rows are equal, or one row is a constant multiple of another row, then the determinant of the matrix is zero. The same applies to columns.
As far as I know, your application doesn't need the determinant itself; the rank of the matrix is sufficient to check whether the system of equations has a non-trivial solution:
Rank of Matrix
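That rank-based check can be sketched in plain C++ (dense elimination with an illustrative tolerance; at order 100000 you would instead use a sparse factorization or an iterative method, as discussed above):

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Numerical rank via Gaussian elimination with partial pivoting:
// count the pivots whose magnitude exceeds a tolerance. A square
// n x n matrix is singular (zero determinant) exactly when rank < n.
std::size_t matrix_rank(std::vector<std::vector<double>> a,
                        double tol = 1e-9) {
    const std::size_t rows = a.size(), cols = a[0].size();
    std::size_t r = 0;  // number of pivots found so far
    for (std::size_t c = 0; c < cols && r < rows; ++c) {
        std::size_t p = r;
        for (std::size_t i = r + 1; i < rows; ++i)
            if (std::fabs(a[i][c]) > std::fabs(a[p][c])) p = i;
        if (std::fabs(a[p][c]) < tol) continue;  // no usable pivot here
        std::swap(a[p], a[r]);
        for (std::size_t i = r + 1; i < rows; ++i) {
            const double f = a[i][c] / a[r][c];
            for (std::size_t k = c; k < cols; ++k) a[i][k] -= f * a[r][k];
        }
        ++r;
    }
    return r;
}
```

This avoids computing the determinant's value (which over- or underflows badly at that scale) and answers only the singular/non-singular question.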