I am trying to determine whether a sparse matrix I am operating on is positive definite. For this I am using Sylvester's criterion, i.e., checking that all leading principal minors are positive.
To calculate these determinants I construct a SparseLU solver for each leading block of the matrix, which can then give me the determinant of that block. But starting from a certain dimension (around 130*130) I get the result that all determinants are 0. This is not a special dimension in my problem (the matrix has blocks of 32*32), so I believe the issue is related to some truncation algorithm applied by Eigen, with the determinants simply falling below some threshold.
My search for such a mechanism has turned up nothing.
My matrix has dimensions of around 16k*16k, and all non-zero elements lie within a band of 96 elements around the diagonal.
Is any truncation mechanism implemented in Eigen and can I control its thresholds somehow?
This is very likely due to underflow, i.e., the determinant is calculated as a product of many numbers smaller than 1.0. If you compute the product of 130 values around 0.5, you are near the limit of what can be represented with single-precision floats.
You can use the methods logAbsDeterminant() and signDeterminant() to get meaningful results.
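For illustration, here is a minimal sketch of the difference; the banded test matrix is made up for the example. With diagonal entries around 0.5 the plain determinant underflows once the dimension passes roughly a thousand even in double precision, while the log/sign pair stays meaningful:

#include <Eigen/Sparse>
#include <Eigen/SparseLU>
#include <iostream>
#include <vector>

int main() {
    // Made-up banded matrix standing in for one leading block; its pivots
    // are all well below 1, so their product underflows for large n.
    const int n = 1200;
    Eigen::SparseMatrix<double> A(n, n);
    std::vector<Eigen::Triplet<double>> triplets;
    for (int i = 0; i < n; ++i) {
        triplets.emplace_back(i, i, 0.5);
        if (i + 1 < n) {
            triplets.emplace_back(i, i + 1, 0.1);
            triplets.emplace_back(i + 1, i, 0.1);
        }
    }
    A.setFromTriplets(triplets.begin(), triplets.end());

    Eigen::SparseLU<Eigen::SparseMatrix<double>> lu(A);

    std::cout << "determinant:       " << lu.determinant()       << "\n";  // 0 (underflow)
    std::cout << "logAbsDeterminant: " << lu.logAbsDeterminant() << "\n";  // finite, usable
    std::cout << "signDeterminant:   " << lu.signDeterminant()   << "\n";  // +1 or -1

    // For Sylvester's criterion only the sign matters: the leading minor is
    // positive iff signDeterminant() returns +1.
    return 0;
}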
Is there a way to compute the determinant of a given matrix in C++ using only one variable (the matrix as first loaded), with the subsequent recursive calls taking only a reference to that matrix?
How can I use the coordinates of elements in the matrix to compute the determinants of its submatrices without creating them as separate matrices, using only the elements of the first matrix and their coordinates? Can that be done with recursion, or should recursion be avoided?
If you're trying to calculate a determinant for any matrix larger than 3x3 by recursive cofactor expansion (the approach behind Cramer's rule), you're certainly doing something wrong: the performance will be terrible, since the cost grows factorially with the matrix size.
Probably the easiest approach to think your way through is to use row reduction to turn the matrix into an upper triangular one. Finding the determinant of an upper triangular matrix is easy: just multiply down the diagonal. As for the rest, account for any constant factors you scaled rows by, and remember that every row swap contributes a factor of -1.
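As an illustration, here is a small self-contained sketch of that approach in plain C++ (partial pivoting added for numerical stability; the 3x3 example matrix is made up):

#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

// Determinant via row reduction to upper triangular form. Row swaps flip the
// sign; adding multiples of one row to another leaves the determinant
// unchanged, so the result is the signed product of the diagonal.
double determinant(std::vector<std::vector<double>> a) {
    const std::size_t n = a.size();
    double det = 1.0;
    for (std::size_t col = 0; col < n; ++col) {
        // Choose the largest pivot in this column (partial pivoting).
        std::size_t pivot = col;
        for (std::size_t r = col + 1; r < n; ++r)
            if (std::fabs(a[r][col]) > std::fabs(a[pivot][col])) pivot = r;
        if (a[pivot][col] == 0.0) return 0.0;            // singular matrix
        if (pivot != col) { std::swap(a[pivot], a[col]); det = -det; }
        det *= a[col][col];
        // Eliminate the entries below the pivot.
        for (std::size_t r = col + 1; r < n; ++r) {
            const double f = a[r][col] / a[col][col];
            for (std::size_t c = col; c < n; ++c) a[r][c] -= f * a[col][c];
        }
    }
    return det;
}

int main() {
    std::vector<std::vector<double>> m = {{2, 1, 0}, {1, 3, 1}, {0, 1, 4}};
    std::printf("det = %g\n", determinant(m));           // prints 18
}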
I have a diagonal matrix with eigenvalues e.g. 1, 2 and 3. I perturb its values with some noise that is small enough not to change their order. When I obtain the eigenvalues of this matrix, they come back as 1,2,3 in 50% of cases and 1,3,2 in the other 50%.
When I do the same thing without the noise, the order is always 1,2,3.
I obtain the eigenvalues using:
matrix.eigenvalues().real();
or using:
Eigen::EigenSolver<Eigen::Matrix3d> es(matrix, false);
es.eigenvalues().real();
The result is the same. Any ideas how to fix it?
There is no "natural" order for the eigenvalues of a non-selfadjoint matrix, since they are in general complex (even for real-valued matrices). One could sort them lexicographically (first by real part, then by imaginary part) or by magnitude, but Eigen does neither. If you have a look at the documentation, you'll find:
The eigenvalues are repeated according to their algebraic multiplicity, so there are as many eigenvalues as rows in the matrix. The eigenvalues are not sorted in any particular order.
If your matrix happens to be self-adjoint you should use the SelfAdjointEigenSolver, of course (which does sort the eigenvalues, since they are all real and therefore sortable). Otherwise, you need to sort the eigenvalues manually by whatever criterion you prefer.
N.B.: The results of matrix.eigenvalues() and es.eigenvalues() should indeed be the same, since exactly the same algorithm is applied; the first variant is essentially just a short-hand for when you are only interested in the eigenvalues.
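For example, a minimal sketch of both routes (the 1e-9 noise scale is made up; note that SelfAdjointEigenSolver reads only the lower triangular part, so the sketch symmetrizes the perturbed matrix first):

#include <Eigen/Dense>
#include <algorithm>
#include <iostream>

int main() {
    Eigen::Matrix3d m = Eigen::Vector3d(1.0, 3.0, 2.0).asDiagonal();
    m += 1e-9 * Eigen::Matrix3d::Random();   // small perturbation ("noise")

    // General case: sort the (real parts of the) eigenvalues manually.
    Eigen::EigenSolver<Eigen::Matrix3d> es(m, false);
    Eigen::Vector3d ev = es.eigenvalues().real();
    std::sort(ev.data(), ev.data() + ev.size());
    std::cout << "sorted manually:       " << ev.transpose() << "\n";

    // Self-adjoint case: eigenvalues are returned sorted in increasing order.
    Eigen::Matrix3d sym = 0.5 * (m + m.transpose());
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> saes(sym);
    std::cout << "sorted (self-adjoint): " << saes.eigenvalues().transpose() << "\n";
}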
I want to do a singular value decomposition for large matrices containing a lot of zeros. In particular I need U and S, obtained from the diagonalization of a symmetric matrix A. This means that A = U * S * transpose(U^*), where S is a diagonal matrix and U contains all eigenvectors as columns.
I searched the web for C++ libraries that combine SVD and sparse matrices, but could only find libraries that compute a few, but not all, eigenvectors. Does anyone know of such a library?
Also, after obtaining U and S, I need to multiply them by some dense vector.
For this problem, I am using a combination of different techniques:
Arpack can compute a set of eigenvalues and associated eigenvectors; unfortunately, it is fast only for high frequencies and slow for low frequencies,
but since the eigenvectors of the inverse of a matrix are the same as the eigenvectors of the matrix itself, one can factor the matrix (using a sparse matrix factorization routine such as SuperLU, or CHOLMOD if the matrix is symmetric). The "communication protocol" with Arpack only expects you to compute a matrix-vector product, so if you do a linear system solve with the factored matrix instead, this makes Arpack fast for the low frequencies of the spectrum (do not forget then to replace each computed eigenvalue lambda by 1/lambda!).
This trick can be used to explore the entire spectrum with a generalized transform (the transform in the previous point is referred to as the "invert" transform). There is also a "shift-invert" transform that allows one to explore an arbitrary portion of the spectrum with fast Arpack convergence: you factor the shifted matrix for a "shift" sigma and recover each eigenvalue as 1/lambda + sigma from the computed lambda (the transform is slightly more involved than the "invert" transform; see the references below for a full explanation, and the sketch after the links).
ARPACK: http://www.caam.rice.edu/software/ARPACK/
SUPERLU: http://crd-legacy.lbl.gov/~xiaoye/SuperLU/
The numerical algorithm is explained in my article that can be downloaded here:
http://alice.loria.fr/index.php/publications.html?redirect=0&Paper=ManifoldHarmonics#2008
Source code is available here:
https://gforge.inria.fr/frs/download.php/file/27277/manifold_harmonics-a4-src.tar.gz
See also my answer to this question:
https://scicomp.stackexchange.com/questions/20243/sparse-generalized-eigensolver-using-opencl/20255#20255
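Below is a minimal sketch of the "invert" trick, under the assumption that an Arpack-style driver is available. The driver call is hypothetical and omitted, since only the matrix-vector callback matters here; Eigen's SparseLU stands in for SuperLU/CHOLMOD as the factorization:

#include <Eigen/Sparse>
#include <Eigen/SparseLU>

// Arpack only ever asks for y = Op * x. Factoring A once and answering with
// Op x = A^{-1} x (a linear solve) makes the iteration converge quickly on
// the low end of A's spectrum.
struct InvertOperator {
    Eigen::SparseLU<Eigen::SparseMatrix<double>> lu;

    explicit InvertOperator(const Eigen::SparseMatrix<double>& A) { lu.compute(A); }

    // The matrix-vector product handed to Arpack's reverse-communication loop.
    Eigen::VectorXd apply(const Eigen::VectorXd& x) const { return lu.solve(x); }
};

int main() {
    // ... assemble the sparse matrix A ...
    // InvertOperator op(A);
    //
    // A hypothetical Arpack driver would then return eigenvalues lambda of
    // A^{-1}; map each back to an eigenvalue of A as 1/lambda (the
    // eigenvectors are shared). For shift-invert, factor (A - sigma*I)
    // instead and map back as 1/lambda + sigma.
}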
In Eigen there are recommendations warning against the explicit calculation of determinants and inverse matrices.
I'm implementing the posterior predictive for the multivariate normal with a normal-inverse-Wishart prior distribution. This can be expressed as a multivariate t-distribution.
In the multivariate t-distribution you will find a term |Sigma|^{-1/2} as well as (x-mu)^T Sigma^{-1} (x-mu).
I'm quite ignorant with respect to Eigen. I can imagine that for a positive semidefinite matrix (it is a covariance matrix) I can use the LLT solver.
There are, however, no .determinant() and .inverse() methods defined on the solver itself. Do I have to use the .matrixL() function, invert the diagonal elements myself for the inverse, and calculate the product of the diagonal to get the determinant? I think I'm missing something.
If you have the Cholesky factorization of Sigma=LL^T and want (x-mu)^T*Sigma^{-1}*(x-mu), you can compute: (llt.matrixL().solve(x-mu)).squaredNorm() (assuming x and mu are vectors).
For the square root of the determinant, just calculate llt.matrixL().determinant() (the determinant of a triangular matrix is simply the product of its diagonal elements).
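Putting both together, a minimal sketch (the 2x2 Sigma, x and mu are made up; the log-determinant is accumulated from the diagonal of the stored factor so large dimensions do not underflow):

#include <Eigen/Dense>
#include <cmath>
#include <iostream>

int main() {
    Eigen::Matrix2d Sigma;
    Sigma << 2.0, 0.5,
             0.5, 1.0;
    Eigen::Vector2d x(1.0, 2.0), mu(0.0, 1.0);

    Eigen::LLT<Eigen::Matrix2d> llt(Sigma);   // Sigma = L * L^T

    // (x-mu)^T Sigma^{-1} (x-mu) == ||L^{-1}(x-mu)||^2: one triangular solve.
    double quad = llt.matrixL().solve(x - mu).squaredNorm();

    // log|Sigma|^{1/2} = log det(L) = sum of logs of L's diagonal entries.
    double halfLogDet = llt.matrixLLT().diagonal().array().log().sum();

    std::cout << "quadratic form: " << quad << "\n";
    std::cout << "|Sigma|^{-1/2}: " << std::exp(-halfLogDet) << "\n";
}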
I have a very large square matrix, of order around 100000, and I want to know whether its determinant is zero or not.
What is the fastest way to find that out?
I have to implement it in C++.
Assuming you are trying to determine whether the matrix is non-singular, you may want to look here:
https://math.stackexchange.com/questions/595/what-is-the-most-efficient-way-to-determine-if-a-matrix-is-invertible
As mentioned in the comments, it's best to use some sort of BLAS library that will do this for you, such as Boost::uBLAS.
Usually, matrices of that size are extremely sparse. Use row and column reordering algorithms to concentrate the entries near the diagonal, then use a QR or LU decomposition. The product of the diagonal entries of the triangular factor is the determinant, up to a sign (coming from the orthogonal factor in the QR case, or from row permutations in the LU case). This may still be too ill-conditioned; the most reliable rank information is obtained from a singular value decomposition, but the SVD is more expensive.
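For instance, with Eigen one can read the numerical rank off a sparse QR factorization and decide singularity without ever forming the determinant value, which would over- or underflow anyway at that order. A sketch (the tiny test matrix is made up):

#include <Eigen/OrderingMethods>
#include <Eigen/Sparse>
#include <Eigen/SparseQR>
#include <iostream>

// Singular <=> numerical rank < n; the rank falls out of the factorization.
bool isNumericallyNonSingular(const Eigen::SparseMatrix<double>& A) {
    Eigen::SparseQR<Eigen::SparseMatrix<double>, Eigen::COLAMDOrdering<int>> qr;
    qr.compute(A);   // SparseQR expects the matrix in compressed form
    return qr.info() == Eigen::Success && qr.rank() == A.rows();
}

int main() {
    Eigen::SparseMatrix<double> A(3, 3);
    A.insert(0, 0) = 1.0;
    A.insert(1, 1) = 2.0;  A.insert(1, 2) = 2.0;
    A.insert(2, 1) = 1.0;  A.insert(2, 2) = 1.0;   // row 2 = half of row 1
    A.makeCompressed();
    std::cout << (isNumericallyNonSingular(A) ? "non-singular" : "singular") << "\n";
}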
There is a property that if any two rows are equal, or one row is a constant multiple of another, the determinant of the matrix is zero. The same applies to columns.
As far as I know, your application doesn't need to calculate the determinant; the rank of the matrix is sufficient to check whether the system of equations has a non-trivial solution:
Rank of Matrix