Eigen library, Jacobi SVD - C++

I'm trying to estimate a 3D rotation matrix between two sets of points, and I want to do that by computing the SVD of the covariance matrix, say C, as follows:
U,S,V = svd(C)
R = V * U^T
C in my case is 3x3. I am using Eigen's JacobiSVD module for this, and I only recently found out that it stores matrices in column-major format. So that has had me confused.
So, when using Eigen, should I do:
V*U.transpose() or V.transpose()*U ?
Additionally, the rotation is accurate up to changing the sign of the column of U corresponding to the smallest singular value, such that the determinant of R is positive. Let's say the index of the smallest singular value is minIndex.
So when the determinant is negative, because of the column-major confusion, should I do:
U.col(minIndex) *= -1 or U.row(minIndex) *= -1
Thanks!

This has nothing to do with matrices being stored row-major or column major. svd(C) gives you:
U * S.asDiagonal() * V.transpose() == C
so the closest rotation R to C is:
R = U * V.transpose();
If you want to apply R to a point p (stored as column-vector), then you do:
q = R * p;
Now whether you are interested in R or its inverse R.transpose()==V*U.transpose() is up to you.
The sign ambiguity lives in the columns of U (the singular values scale its columns), so you flip a column, i.e. U.col(minIndex) *= -1, to get det(R)=1. Again, nothing to do with storage layout.
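For completeness, a minimal sketch of the whole procedure in Eigen (function and variable names are mine, not from the question; it assumes C is the 3x3 covariance matrix):
#include <Eigen/Dense>

// Sketch: closest rotation to a 3x3 covariance matrix C via SVD,
// with the determinant sign fix.
Eigen::Matrix3d rotationFromCovariance(const Eigen::Matrix3d &C)
{
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(C, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d U = svd.matrixU();
    const Eigen::Matrix3d V = svd.matrixV();
    Eigen::Matrix3d R = U * V.transpose();
    if (R.determinant() < 0) {
        U.col(2) *= -1.0; // Eigen sorts singular values in decreasing order,
                          // so the smallest one belongs to the last column
        R = U * V.transpose();
    }
    return R;
}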

Related

Eigen C++ triangular form

I use C++14 and Eigen. For an n x n matrix A, how can I extract the Q and R matrices using QR decomposition in Eigen? I tried to read the documentation but I'm disoriented.
I've obtained only R:
HouseholderQR<MatrixXd> qr(A); // the constructor already computes the decomposition
MatrixXd R = qr.matrixQR().triangularView<Upper>();
Anyway, I just want to convert matrix A into a triangular matrix (in an efficient way, around O(n^3) I think) that has the same determinant as A, so I'd accept any other method for doing this in Eigen (or another linear algebra library, if you know a good one; I'm waiting for suggestions).
You can get Q and R as follows:
Eigen::MatrixXd Q = qr.householderQ();
Eigen::MatrixXd QR = qr.matrixQR();
The R matrix is in the upper triangular portion of matrix QR. You can compute the determinant of R as QR.diagonal().prod(), which is equal in magnitude to A.determinant() (the diagonal of QR is exactly the diagonal of R). If you want to isolate the upper triangular portion you can do this:
Eigen::MatrixXd T = QR.triangularView<Eigen::Upper>();
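Putting it together, a minimal sketch (matrix names are mine) that extracts Q and R and checks the determinant property the question asks for:
#include <Eigen/Dense>
#include <cmath>
#include <iostream>

int main()
{
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(4, 4);
    Eigen::HouseholderQR<Eigen::MatrixXd> qr(A);
    Eigen::MatrixXd Q = qr.householderQ();                            // orthogonal factor
    Eigen::MatrixXd R = qr.matrixQR().triangularView<Eigen::Upper>(); // triangular factor
    // |det(A)| == |det(R)| == |product of R's diagonal|, because |det(Q)| == 1.
    std::cout << std::abs(A.determinant()) << " == "
              << std::abs(R.diagonal().prod()) << "\n";
}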

Fast matrix multiplication of XDX^T for D diagonal

Consider fast matrix multiplication of XDX^T for X an n by m matrix, and D an m by m diagonal matrix. Here m>>n (suppose n around 1000, m around 100000). In my application, X is a fixed matrix and values of D can change at every iteration.
What would be a fast way to calculate this? At the moment I am just doing simple multiplication in C++.
EDIT: I should clarify my current procedure; it is not "simple multiplication". In particular, I am multiplying X column-wise by the square roots of the diagonal entries of D to get A := XD^{1/2}. Then I directly calculate A*A^T (the product of an n by m matrix with its transpose).
Thank you.
If you know that D is diagonal, then you can just do simple multiplication. Hopefully, you are not multiplying the zeros.
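To make the asker's own procedure concrete, here is a minimal Eigen sketch (function and variable names are mine; it assumes the diagonal of D is nonnegative so the square root is real). It scales the columns of X by sqrt(d_i), then uses a symmetric rank update so only the lower triangle of the n x n result is actually computed:
#include <Eigen/Dense>

// Sketch: C = X * D * X^T with D = diag(d), assuming d >= 0 elementwise.
Eigen::MatrixXd xdxt(const Eigen::MatrixXd &X, const Eigen::VectorXd &d)
{
    // A = X * D^{1/2}: scale column j of X by sqrt(d(j)).
    Eigen::MatrixXd A = X * d.cwiseSqrt().asDiagonal();
    Eigen::MatrixXd C = Eigen::MatrixXd::Zero(X.rows(), X.rows());
    C.selfadjointView<Eigen::Lower>().rankUpdate(A); // C += A * A^T, lower triangle only
    return C.selfadjointView<Eigen::Lower>();        // expand to the full symmetric matrix
}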

How can I multiply an NxN matrix A in Fortran x times to get its power without amplifying rounding errors?

If A can be diagonalized as
A P = P D,
where P is some NxN matrix (each column is called 'eigenvector'), and D is an NxN diagonal matrix (the diagonal elements are called 'eigenvalues'), then
A = P D P^{-1},
where P^{-1} is the inverse matrix of P. Therefore the second power of A results in
A A = P D P^{-1} P D P^{-1} = P D D P^{-1} = P D^2 P^{-1}.
Repeating multiplication of A for x times yields
A^x = P D^x P^{-1}.
Note here that D^x is still a diagonal matrix. Let the i-th diagonal element of D be D_{ii}. Then, the i-th diagonal element of D^x is
[D^x]_{ii} = (D_{ii})^x.
That is, the elements of D^x are simply the x-th powers of the elements of D and can be computed without much rounding error, I guess. Now, you multiply this D^x by P from the left and by P^{-1} from the right to obtain A^x. The error in A^x depends on the error of P and P^{-1}, which can be calculated by some subroutines in numerical packages such as LAPACK.
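The question is about Fortran, but to keep the examples here in one language, here is a C++/Eigen sketch of the same idea (with LAPACK you would obtain P and D from a routine such as DGEEV); it assumes A is diagonalizable, and uses complex arithmetic since a real matrix can have complex eigenvalues:
#include <Eigen/Dense>
#include <complex>

// Sketch: A^x = P D^x P^{-1}, assuming A is diagonalizable.
Eigen::MatrixXd matrixPower(const Eigen::MatrixXd &A, int x)
{
    Eigen::EigenSolver<Eigen::MatrixXd> es(A);
    const Eigen::MatrixXcd P = es.eigenvectors();
    const Eigen::VectorXcd d = es.eigenvalues();
    Eigen::VectorXcd dx(d.size());
    for (Eigen::Index i = 0; i < d.size(); ++i)
        dx(i) = std::pow(d(i), x);                // x-th power of each eigenvalue
    Eigen::MatrixXcd Ax = P * dx.asDiagonal() * P.inverse();
    return Ax.real();                             // imaginary parts vanish up to rounding
}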
As alluded to in the answer by norio, one can in general employ the Jordan (or alternatively Schur) decomposition and proceed in a similar fashion; for details (including a brief error analysis) see, e.g., Chapter 11 of Matrix Computations by Golub and Van Loan.

Sparse-sparse product A^T*A optimization in the Eigen lib

In the case of a product of the same matrix matA, like
matA.transpose()*matA,
you don't have to compute the whole product, because the result matrix is symmetric (in my specific case it is always symmetric and square).
So it is enough to compute only, for example, the lower triangular part and just copy the rest, because the product of the 2nd row with the 3rd column is the same as the product of the 3rd row with the 2nd column, and so on.
So my question is: does a way exist to tell Eigen to compute only the lower part, and optionally to save only the lower triangular part of the product? Currently I do:
DATA = SparseMatrix<double>((SparseMatrix<double>(matA.transpose()) * matA).pruned()).toDense();
According to the documentation, you can evaluate the lower triangle of a matrix with:
m1.triangularView<Eigen::Lower>() = m2 + m3;
or in your case:
m1.triangularView<Eigen::Lower>() = matA.transpose()*matA;
(where it says "Writing to a specific triangular part: (only the referenced triangular part is evaluated)"). Otherwise, in the line you've written
Eigen will calculate the entire sparse matrix matA.transpose()*matA.
Regarding saving the resulting m1 matrix, it is the same as saving any other matrix of its type (Eigen::MatrixXd or Eigen::SparseMatrix<double>). If m1 is sparse, it will take only about half the storage of a straightforward matA.transpose()*matA. If m1 is dense, it will be the full square matrix.
https://eigen.tuxfamily.org/dox/classEigen_1_1SparseSelfAdjointView.html
The symmetric rank update is defined as:
B = B + alpha * A * A^T
where alpha is a scalar. In your case, you are doing A^T * A, so you should pass the transposed matrix instead. The resulting matrix will only store the upper or lower portion of the matrix, whichever you prefer. For example:
SparseMatrix<double> At = A.transpose();
SparseMatrix<double> B(A.cols(), A.cols()); // must be sized first; starts out all zero
B.selfadjointView<Lower>().rankUpdate(At);  // B += At * At^T == A^T * A
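If you later need the full symmetric matrix rather than just one triangle, the self-adjoint view can be assigned back to an ordinary sparse matrix, which expands the stored triangle into both halves:
SparseMatrix<double> full = B.selfadjointView<Lower>(); // copies the lower triangle to both halves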

Image reconstruction using SVD Decomposition

I have performed block SVD decomposition over an image and I stored the results.
Now I need to reconstruct the image from these results. I found a few examples, all written in Matlab, which is a mystery to me.
I only need the formula from which I can reconstruct my picture, or an example written in the C language.
Matrix A is equal to U*S*V'. How would the formula look, e.g., for a reconstruction using the first five singular values (a product of which rows and columns)? Please provide the formula with indices in C-like style. U and V' are matrices and S is a vector (not a matrix).
Not sure if I get your question right, but if you just need to know the singular values, they are the diagonal values of the middle matrix S. S is in general a diagonal matrix, which is stored here as a vector: only the diagonal is stored, but you should imagine it as a matrix when thinking in matrix calculations.
Those diagonal values are your singular values; if you need the biggest ones, just take the five largest values of the vector S.
Quoting from Wikipedia:
The diagonal entries Σ_{i,i} of Σ are known as the singular values of M.
The m columns of U and the n columns of V are called the left-singular
vectors and right-singular vectors of M, respectively.
In the above quote, sigma is your S, and M is the original matrix.
You have asked for C code, yet my hope is that pseudocode will suffice (it's late, I'm tired). The target matrix A has m rows, n columns and rank rho. The variable p = min(m,n).
One strategy is to first form the intermediate matrix product B = U S. This is trivial due to the diagonal nature of the matrix of singular values. Assume you have rho (= 5) singular values. You must enforce rho <= p.
Replace column vector u_1 with s_1 u_1.
Replace column vector u_2 with s_2 u_2.
...
Replace column vector u_rho with s_rho u_rho.
Replace column vector u_{rho+1} with a zero vector of length m.
Replace column vector u_{rho+2} with a zero vector of length m.
...
Replace column vector u_p with a zero vector of length m.
Next form the new image matrix A = B V^T. The matrix element in row r and column c is the dot product of the r-th row vector (length rho) of B with the c-th column vector (length rho) of V^T.
Another strategy is to jump directly to the form where the matrix element of A in row r and column c is
a_{r,c} = sum( s_k * u_{r,k} * v_{c,k}, { k, 1, rho } )
The row counter r runs from 1 to m; the column counter c runs from 1 to n.
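A direct translation of that sum into C-style code; the array names and the row-major flat layout are my assumptions (U is m x p, S holds the p singular values, Vt is p x n):
/* Sketch: reconstruct A (m x n) from the first rho singular triplets. */
void reconstruct(const double *U, const double *S, const double *Vt,
                 int m, int n, int p, int rho, double *A)
{
    for (int r = 0; r < m; ++r) {
        for (int c = 0; c < n; ++c) {
            double sum = 0.0;
            for (int k = 0; k < rho && k < p; ++k)
                sum += S[k] * U[r * p + k] * Vt[k * n + c]; /* s_k * u_{r,k} * v_{c,k} */
            A[r * n + c] = sum;
        }
    }
}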