Pyomo: sparsity of the constraint Jacobian and Lagrangian Hessian

Does Pyomo feed sparse matrices for the constraint Jacobian and the Lagrangian Hessian to the solver?

Related

Raise a matrix to a power

Suppose we have a Hermitian matrix A that is known to have an inverse.
I know that the ZGETRF and ZGETRI subroutines in the LAPACK library can compute the inverse matrix.
Is there any subroutine in the LAPACK or BLAS libraries that can calculate A^{-1/2} directly, or any other way to compute A^{-1/2}?
You can raise a matrix to a power following a similar procedure to taking the exponential of a matrix:
1. Diagonalise the matrix, to give the eigenvectors v_i and corresponding eigenvalues e_i.
2. Raise the eigenvalues to the power: {e_i}^{-1/2}.
3. Construct the matrix whose eigenvalues are {e_i}^{-1/2} and whose eigenvectors are v_i.
It's worth noting that, as described here, this problem does not have a unique solution. In step 2 above, both {e_i}^{-1/2} and -{e_i}^{-1/2} will lead to valid solutions, so an N*N matrix A will have at least 2^N matrices B such that B^{-2}=A. If any of the eigenvalues are degenerate then there will be a continuous space of valid solutions.
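For concreteness, here is a minimal sketch of this procedure in C++ with Eigen rather than raw LAPACK calls (the test matrix below is made up for illustration); SelfAdjointEigenSolver handles the Hermitian diagonalisation, and the principal (positive) branch of the square root is taken:

    #include <Eigen/Dense>
    #include <complex>
    #include <iostream>

    // A^{-1/2} for a Hermitian positive-definite A via diagonalisation:
    // A = V diag(e_i) V^*  =>  A^{-1/2} = V diag(e_i^{-1/2}) V^*.
    // This takes the principal branch; as noted above, flipping the sign of
    // any e_i^{-1/2} gives another valid B with B^{-2} = A.
    Eigen::MatrixXcd invSqrt(const Eigen::MatrixXcd& A) {
        Eigen::SelfAdjointEigenSolver<Eigen::MatrixXcd> es(A);
        // Eigenvalues of a Hermitian matrix are real.
        Eigen::VectorXcd d = es.eigenvalues()
                                 .cwiseSqrt()
                                 .cwiseInverse()
                                 .cast<std::complex<double>>();
        return es.eigenvectors() * d.asDiagonal() * es.eigenvectors().adjoint();
    }

    int main() {
        // Made-up Hermitian positive-definite test matrix.
        Eigen::MatrixXcd M = Eigen::MatrixXcd::Random(4, 4);
        Eigen::MatrixXcd A = M * M.adjoint() + 4.0 * Eigen::MatrixXcd::Identity(4, 4);
        Eigen::MatrixXcd B = invSqrt(A);
        // B*B = A^{-1}, so B*B*A should be the identity.
        std::cout << "||B*B*A - I|| = "
                  << (B * B * A - Eigen::MatrixXcd::Identity(4, 4)).norm() << "\n";
    }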

Eigen sparse matrix determinant is zero

I am trying to determine whether a sparse matrix I am operating on is positive definite. For this I am trying to use Sylvester's criterion, which requires all the leading principal minors to be positive.
To calculate the determinants I construct a SparseLU solver for each leading block of the matrix, which can then give me the determinant of that block. But starting from a certain dimension (around 130*130) I get the result that all determinants are 0. This is not a special dimension in my problem (the matrix has blocks of 32*32), so I believe the issue is related to some truncation mechanism applied by Eigen, with the determinants simply falling below some threshold.
My search for such a mechanism has turned up nothing.
My matrix has dimensions of around 16k*16k and all non-zero elements lie within a band of 96 elements around the diagonal.
Is any truncation mechanism implemented in Eigen and can I control its thresholds somehow?
This is very likely due to underflow, i.e., the determinant is calculated as a product of lots of numbers smaller than 1.0. If you calculate the product of 130 values around 0.5, you are near the border of what can be represented with single-precision floats.
You can use the methods logAbsDeterminant and signDeterminant to get meaningful results.
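A minimal sketch of that suggestion, with a made-up diagonal matrix chosen so the plain determinant underflows even in double precision:

    #include <Eigen/Sparse>
    #include <Eigen/SparseLU>
    #include <iostream>

    int main() {
        // 0.5^1200 ~ 1e-362 underflows double precision, so determinant()
        // returns exactly 0 even though the matrix is perfectly regular.
        const int n = 1200;
        Eigen::SparseMatrix<double> A(n, n);
        for (int i = 0; i < n; ++i) A.insert(i, i) = 0.5;
        A.makeCompressed();

        Eigen::SparseLU<Eigen::SparseMatrix<double>> lu(A);
        if (lu.info() != Eigen::Success) return 1;

        // log|det(A)| and sign(det(A)) stay well scaled:
        // det(A) = signDeterminant() * exp(logAbsDeterminant()).
        std::cout << "determinant       = " << lu.determinant() << "\n"
                  << "logAbsDeterminant = " << lu.logAbsDeterminant() << "\n"
                  << "signDeterminant   = " << lu.signDeterminant() << "\n";
    }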

Why does an essential matrix have 2 equal singular values and 1 zero singular value?

I was watching a lecture about the essential matrix, in which the professor was teaching the eight-point linear algorithm. I understood that we need 8 points to estimate the essential matrix. But in this slide he said that the estimated matrix doesn't correspond to an essential matrix, and that we should project that matrix onto the essential space. He didn't prove this theorem and just skipped it.
So I have some questions
Why does an essential matrix have two equal singular values and a zero singular value?
Why should we average the two largest singular values obtained from the eight-point algorithm in order to make an essential matrix?
Answering your first question.
The singular values of the essential matrix E are the square roots of the eigenvalues of EE^t. We know that E = TR, where R is a rotation matrix and T = [0,-z,y; z,0,-x; -y,x,0], so EE^t = TRR^tT^t = TT^t.
Also, T is a singular skew-symmetric matrix, so we can decompose it as T = P^t[0,k,0; -k,0,0; 0,0,0]P, where P is an orthogonal matrix.
Substituting this into the previous formula gives EE^t = P^t[k^2,0,0; 0,k^2,0; 0,0,0]P = P^tUP, so EE^t is orthogonally similar to U and its eigenvalues are the diagonal elements of U: k^2, k^2 and 0. The singular values of E are therefore k, k and 0.
I have used Matlab notation to denote matrices and A^t denotes the transpose of the matrix A.
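A quick numerical sanity check of this result, sketched in C++ with Eigen (the particular translation, rotation angle and axis below are made up): building E = TR from a translation t and a rotation R, the singular values come out as (||t||, ||t||, 0).

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
        // Made-up translation t and rotation R.
        Eigen::Vector3d t(0.3, -1.2, 0.7);
        Eigen::Matrix3d R =
            Eigen::AngleAxisd(0.8, Eigen::Vector3d(1, 2, 3).normalized())
                .toRotationMatrix();

        // T = [t]_x, the skew-symmetric cross-product matrix used above.
        Eigen::Matrix3d T;
        T <<      0, -t.z(),  t.y(),
             t.z(),       0, -t.x(),
            -t.y(),  t.x(),       0;

        Eigen::Matrix3d E = T * R;
        Eigen::JacobiSVD<Eigen::Matrix3d> svd(E);
        // Expect (k, k, 0) with k = ||t||.
        std::cout << "singular values: " << svd.singularValues().transpose()
                  << "   ||t|| = " << t.norm() << "\n";
    }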

Armadillo: compute the eigenvectors related to non-null eigenvalues

I need to compute the eigenvectors corresponding to the N smallest non-null eigenvalues of a symmetric, positive semi-definite sparse matrix.
I use the function eigs_sym from Armadillo, but it takes far longer to compute the eigenvectors corresponding to the smallest eigenvalues than to compute the eigenvectors corresponding to the biggest eigenvalues.
Is it possible that the eigenvectors corresponding to (almost) null eigenvalues are more difficult to compute?
Since I'm not interested in the eigenvectors corresponding to the (almost) null eigenvalues, is there a way to tell Armadillo not to compute those?
Thank you in advance.
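One standard approach for this situation is shift-invert: ask eigs_sym for the eigenvalues closest to a shift sigma placed just above zero, which typically converges much faster than the plain "sm" form. A minimal sketch, assuming an Armadillo build whose eigs_sym accepts the shift-invert overload eigs_sym(eigval, eigvec, X, k, sigma) (this may require SuperLU support), with a made-up sparse PSD matrix:

    #include <armadillo>

    int main() {
        // Made-up symmetric positive semi-definite sparse matrix
        // (a tridiagonal Laplacian-like band).
        const arma::uword n = 2000;
        arma::sp_mat A(n, n);
        for (arma::uword i = 0; i < n; ++i) {
            A(i, i) = 2.0;
            if (i + 1 < n) { A(i, i + 1) = -1.0; A(i + 1, i) = -1.0; }
        }

        arma::vec eigval;
        arma::mat eigvec;

        // Shift-invert converges to the eigenvalues nearest sigma; any
        // (almost) null eigenvalues that still appear can be filtered out
        // of eigval afterwards.
        const double sigma = 1e-6;
        if (!arma::eigs_sym(eigval, eigvec, A, 5, sigma)) return 1;

        eigval.print("eigenvalues nearest sigma:");
        return 0;
    }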

How to efficiently use inverse and determinant in Eigen?

In Eigen there are recommendations that warn against the explicit calculation of determinants and inverse matrices.
I'm implementing the posterior predictive for the multivariate normal with a normal-inverse-wishart prior distribution. This can be expressed as a multivariate t-distribution.
In the multivariate t-distribution you will find a term |Sigma|^{-1/2} as well as (x-mu)^T Sigma^{-1} (x-mu).
I'm quite ignorant with respect to Eigen. I can imagine that for a positive semidefinite matrix (it is a covariance matrix) I can use the LLT solver.
There are, however, no .determinant() and .inverse() methods defined on the solver itself. Do I have to use the .matrixL() function and invert the elements on the diagonal myself for the inverse, as well as calculate the product of the diagonal to get the determinant? I think I'm missing something.
If you have the Cholesky factorization of Sigma=LL^T and want (x-mu)^T*Sigma^{-1}*(x-mu), you can compute: (llt.matrixL().solve(x-mu)).squaredNorm() (assuming x and mu are vectors).
For the square root of the determinant, just calculate llt.matrixL().determinant() (calculating the determinant of a triangular matrix is just the product of its diagonal elements).
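Putting both pieces together, a minimal sketch (the Sigma, x and mu below are made up); in practice the log-determinant is usually preferred over the raw determinant to avoid the underflow discussed in the earlier question:

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
        // Made-up symmetric positive-definite covariance and data.
        Eigen::MatrixXd M = Eigen::MatrixXd::Random(5, 5);
        Eigen::MatrixXd Sigma = M * M.transpose()
                              + 5.0 * Eigen::MatrixXd::Identity(5, 5);
        Eigen::VectorXd x  = Eigen::VectorXd::Random(5);
        Eigen::VectorXd mu = Eigen::VectorXd::Zero(5);

        Eigen::LLT<Eigen::MatrixXd> llt(Sigma);      // Sigma = L * L^T

        // (x-mu)^T Sigma^{-1} (x-mu) = ||L^{-1}(x-mu)||^2: one triangular
        // solve instead of an explicit inverse.
        Eigen::VectorXd z = llt.matrixL().solve(x - mu);
        double mahalanobisSq = z.squaredNorm();

        // log|Sigma| = 2 * sum(log(diag(L))), so |Sigma|^{-1/2} = exp(-0.5 * logDet).
        Eigen::MatrixXd L = llt.matrixL();
        double logDetSigma = 2.0 * L.diagonal().array().log().sum();

        std::cout << "(x-mu)^T Sigma^{-1} (x-mu) = " << mahalanobisSq
                  << ", log|Sigma| = " << logDetSigma << "\n";
    }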