How to efficiently use inverse and determinant in Eigen?

In Eigen there are recommendations that warn against the explicit calculation of determinants and inverse matrices.
I'm implementing the posterior predictive for the multivariate normal with a normal-inverse-wishart prior distribution. This can be expressed as a multivariate t-distribution.
In the multivariate t-distribution you will find a term |Sigma|^{-1/2} as well as (x-mu)^T Sigma^{-1} (x-mu).
I'm quite ignorant with respect to Eigen. I can imagine that for a positive semidefinite matrix (it is a covariance matrix) I can use the LLT solver.
There are, however, no .determinant() and .inverse() methods defined on the solver itself. Do I have to use the .matrixL() function and invert the elements on the diagonal myself for the inverse, as well as calculate the product of the diagonal to get the determinant? I think I'm missing something.

If you have the Cholesky factorization of Sigma=LL^T and want (x-mu)^T*Sigma^{-1}*(x-mu), you can compute: (llt.matrixL().solve(x-mu)).squaredNorm() (assuming x and mu are vectors).
For the square root of the determinant, just calculate llt.matrixL().determinant() (calculating the determinant of a triangular matrix is just the product of its diagonal elements).
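Putting both together, a minimal sketch (assuming Sigma is positive definite and x, mu are Eigen::VectorXd; the function name is just for illustration):

#include <Eigen/Dense>

// Evaluate both density terms from a single Cholesky factorization.
double quadraticForm(const Eigen::MatrixXd& Sigma,
                     const Eigen::VectorXd& x,
                     const Eigen::VectorXd& mu,
                     double& sqrtDet)
{
    Eigen::LLT<Eigen::MatrixXd> llt(Sigma);        // Sigma = L * L^T
    // (x-mu)^T Sigma^{-1} (x-mu) = ||L^{-1}(x-mu)||^2
    double q = llt.matrixL().solve(x - mu).squaredNorm();
    // |Sigma|^{1/2} = det(L) = product of L's diagonal entries
    sqrtDet = llt.matrixL().determinant();
    return q;
}

For the log-density it is numerically safer to accumulate log|Sigma| as twice the sum of the logs of L's diagonal entries rather than forming the product directly.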

Related

Raise a matrix to a power

Suppose we have a Hermitian matrix A that is known to have an inverse.
I know that the ZGETRF and ZGETRI subroutines in the LAPACK library can compute the inverse matrix.
Is there any subroutine in LAPACK or BLAS that can compute A^{-1/2} directly, or any other way to obtain A^{-1/2}?
You can raise a matrix to a power following a similar procedure to taking the exponential of a matrix:
1. Diagonalise the matrix, giving the eigenvectors v_i and corresponding eigenvalues e_i.
2. Raise the eigenvalues to the power: {e_i}^{-1/2}.
3. Construct the matrix whose eigenvalues are {e_i}^{-1/2} and whose eigenvectors are v_i.
It's worth noting that, as described here, this problem does not have a unique solution. In step 2 above, both {e_i}^{-1/2} and -{e_i}^{-1/2} will lead to valid solutions, so an N*N matrix A will have at least 2^N matrices B such that B^{-2}=A. If any of the eigenvalues are degenerate then there will be a continuous space of valid solutions.
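A hedged sketch of these three steps, using Eigen's SelfAdjointEigenSolver instead of raw LAPACK calls (ZHEEV would be the LAPACK analogue for Hermitian matrices); shown for the real symmetric positive definite case, where the principal A^{-1/2} is well defined:

#include <Eigen/Dense>

Eigen::MatrixXd invSqrt(const Eigen::MatrixXd& A)
{
    // Step 1: diagonalise, A = V * diag(e_i) * V^T
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(A);
    // Step 2: raise the eigenvalues to the power -1/2
    Eigen::VectorXd d = es.eigenvalues().cwiseSqrt().cwiseInverse();
    // Step 3: reassemble with the same eigenvectors
    return es.eigenvectors() * d.asDiagonal() * es.eigenvectors().transpose();
}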

What could be the best way to find Inverse Matrix in SIMD?

I've been digging into this for a while.
To find the inverse matrix, I need to find the determinant of the matrix (correct?).
But the only way I have found is cofactor expansion, computing a11*(a22*a33 - a23*a32) and so on.
Please enlighten me: what is the best way to find the determinant, so that I can get the inverse matrix? Or is there a more efficient way to get the inverse matrix without finding the determinant?
Instead of finding the determinant of a general matrix, you can use an LU decomposition and then do what the Intel Math Kernel Library does:
computes inv(A) by solving the system inv(A)*L = inv(U) for inv(A).
inv(U) (U being an upper triangular matrix) is easier and more efficient to compute, for example with the procedures shown here; and if you also need the determinant, for an upper triangular matrix it is just the product of the diagonal.
And an obligatory reminder: use an existing math library if possible; numerical computations like this are very easy to get wrong.
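As an illustration of that advice, a sketch using Eigen, whose fixed-size kernels are SIMD-vectorized internally, rather than hand-rolled cofactor expansion:

#include <Eigen/Dense>

Eigen::Matrix4f invert4x4(const Eigen::Matrix4f& A)
{
    Eigen::PartialPivLU<Eigen::Matrix4f> lu(A);   // A = P^{-1} * L * U
    // The determinant falls out of the factorization for free
    // (product of U's diagonal times the permutation sign):
    float det = lu.determinant();
    (void)det;  // e.g. reject near-singular input if |det| is tiny
    // Solve A * X = I instead of assembling cofactors explicitly.
    return lu.solve(Eigen::Matrix4f::Identity());
}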

Eigen sparse matrix determinant is zero

I am trying to determine whether a sparse matrix I am operating on is positive definite. For this I am trying to use Sylvester's criterion, which requires all leading principal minors to be positive.
To calculate the determinants, I construct a SparseLU solver for each leading block of the matrix, which can then give me that block's determinant. But starting from a certain dimension (around 130*130) I get the result that all determinants are 0. This is not a special dimension in my problem (the matrix has blocks of 32*32), so I believe this issue is related to some truncation mechanism applied by Eigen, with the determinants simply falling below some threshold.
My search for such a mechanism has resulted in no decent results.
My matrix has dimensions of around 16k*16k, and all non-zero elements lie within a band of 96 elements around the diagonal.
Is there a truncation mechanism implemented in Eigen, and can I control its thresholds somehow?
This is very likely due to underflow, i.e., the determinant is calculated as a product of lots of numbers smaller than 1.0. If you calculate the product of 130 values around 0.5 you are near the border of what can be represented with single precision floats.
You can use the methods logAbsDeterminant() and signDeterminant() to get meaningful results.
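A minimal sketch of that approach with Eigen's SparseLU (the function name and the per-block setup are assumptions for illustration):

#include <Eigen/Sparse>
#include <Eigen/SparseLU>

// Test the sign of a leading minor without forming the raw determinant,
// so a product of many small pivots cannot underflow to zero.
bool minorIsPositive(const Eigen::SparseMatrix<double>& block)
{
    Eigen::SparseLU<Eigen::SparseMatrix<double>> lu;
    lu.compute(block);
    if (lu.info() != Eigen::Success)
        return false;  // factorization failed (e.g. structurally singular)
    // det = signDeterminant() * exp(logAbsDeterminant());
    // positivity depends only on the sign factor.
    return lu.signDeterminant() > 0;
}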

library for full SVD of sparse matrices

I want to do a singular value decomposition for large matrices containing a lot of zeros. In particular I need U and S, obtained from the diagonalization of a symmetric matrix A. This means that A = U * S * transpose(U^*), where S is a diagonal matrix and U contains all eigenvectors as columns.
I searched the web for C++ libraries that combine SVD and sparse matrices, but could only find libraries that compute a few, but not all, eigenvectors. Does anyone know whether such a library exists?
Also, after obtaining U and S I need to multiply them with some dense vector.
For this problem, I am using a combination of different techniques:
Arpack can compute a set of eigenvalues and associated eigenvectors; unfortunately it is fast only for the high frequencies and slow for the low frequencies.
But since the eigenvectors of the inverse of a matrix are the same as the eigenvectors of the matrix itself, one can factor the matrix (using a sparse matrix factorization routine such as SuperLU, or CHOLMOD if the matrix is symmetric). The "communication protocol" with Arpack only expects you to compute a matrix-vector product, so if you do a linear system solve using the factored matrix instead, Arpack becomes fast for the low frequencies of the spectrum (do not forget to replace each returned eigenvalue lambda by 1/lambda!).
This trick can be generalized to explore the entire spectrum: the transform in the previous point is referred to as the "invert" transform, and there is also a "shift-invert" transform that lets you explore an arbitrary portion of the spectrum with fast Arpack convergence. With a "shift" sigma you factor (A - sigma*I) instead; Arpack then returns nu = 1/(lambda - sigma), from which you recover lambda = sigma + 1/nu (the transform is slightly more involved than the "invert" transform; see the references below for a full explanation).
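A conceptual sketch of the "invert" transform (Eigen's SparseLU stands in for SuperLU/CHOLMOD here, and the Arpack reverse-communication loop itself is omitted):

#include <Eigen/Sparse>
#include <Eigen/SparseLU>

// Arpack only ever asks for y = Op * x. Supplying y = A^{-1} * x via a
// pre-computed factorization makes the smallest (low-frequency)
// eigenvalues of A converge quickly; report 1/lambda afterwards.
struct InvertOp {
    Eigen::SparseLU<Eigen::SparseMatrix<double>> lu;
    explicit InvertOp(const Eigen::SparseMatrix<double>& A) { lu.compute(A); }
    Eigen::VectorXd apply(const Eigen::VectorXd& x) const { return lu.solve(x); }
};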
ARPACK: http://www.caam.rice.edu/software/ARPACK/
SUPERLU: http://crd-legacy.lbl.gov/~xiaoye/SuperLU/
The numerical algorithm is explained in my article that can be downloaded here:
http://alice.loria.fr/index.php/publications.html?redirect=0&Paper=ManifoldHarmonics#2008
Source code is available there:
https://gforge.inria.fr/frs/download.php/file/27277/manifold_harmonics-a4-src.tar.gz
See also my answer to this question:
https://scicomp.stackexchange.com/questions/20243/sparse-generalized-eigensolver-using-opencl/20255#20255

Armadillo complex sparse matrix inverse

I'm writing a program with Armadillo C++ (4.400.1).
I have a matrix that has to be sparse and complex, and I want to calculate its inverse. Since it is sparse it could be the pseudoinverse, but I can guarantee that the matrix has a full diagonal.
The Armadillo API documentation mentions the method .i() for calculating the inverse of any matrix, but sp_cx_mat does not provide that method, and the inv() and pinv() functions apparently cannot handle the sp_cx_mat type.
sp_cx_mat Y;
/*Fill Y ensuring that the diagonal is full*/
sp_cx_mat Z = Y.i();
or
sp_cx_mat Z = inv(Y);
Neither of them works.
I would like to know how to compute the inverse of matrices of sp_cx_mat type.
Sparse matrix support in Armadillo is not complete, and many of the factorizations/complex operations that are available for dense matrices are not available for sparse matrices. There are a number of reasons for this, the largest being that efficient implementations of operations such as factorizations for sparse matrices are still very much an open research field. So, there is no .i() function available for sp_cx_mat or other sp_mat types. Another reason is lack of time on the part of the sparse matrix developers (...which includes me).
Given that the inverse of a sparse matrix is generally going to be dense, you may simply be better off converting your sp_cx_mat into a cx_mat and then using the same inversion techniques you normally would for dense matrices. Since the result would have to be stored as a dense matrix anyway, it's a fair assumption that you have enough RAM to do that.
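A minimal sketch of that workaround (densify, then invert; variable names follow the question):

#include <armadillo>

arma::sp_cx_mat Y;
/* fill Y, ensuring that the diagonal is full */
arma::cx_mat Yd(Y);               // sparse -> dense conversion
arma::cx_mat Z = arma::inv(Yd);   // or arma::pinv(Yd) for the pseudoinverse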