Eigen C++: triangular form via QR decomposition

I use C++14 and Eigen. For an n x n matrix A, how can I extract the Q and R matrices using QR decomposition in Eigen? I tried to read the documentation but I'm disoriented.
So far I've obtained only R:
HouseholderQR<MatrixXd> qr(A); // the constructor already calls compute(A)
MatrixXd R = qr.matrixQR().triangularView<Upper>();
In any case, I just want to convert matrix A into a triangular matrix (in an efficient way, around O(n^3) I think) whose determinant equals the determinant of A, so I would accept any other method of doing this in Eigen (or in another linear algebra library, if you know a good one; I'm open to suggestions).

You can get Q and R as follows:
Eigen::MatrixXd Q = qr.householderQ();
Eigen::MatrixXd QR = qr.matrixQR();
The R matrix is in the upper triangular portion of matrix QR. You can compute the determinant of R as QR.diagonal().prod(), which is equal in magnitude to A.determinant(). If you want to isolate the upper triangular portion you can do this:
Eigen::MatrixXd T = QR.triangularView<Eigen::Upper>();
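
Putting this together, here is a minimal self-contained sketch (the random test matrix and the printed checks are illustrative assumptions, not part of the original answer):

#include <Eigen/Dense>
#include <cmath>
#include <iostream>
using namespace Eigen;

int main() {
    MatrixXd A = MatrixXd::Random(4, 4);
    HouseholderQR<MatrixXd> qr(A);
    MatrixXd Q = qr.householderQ();                      // orthogonal factor
    MatrixXd R = qr.matrixQR().triangularView<Upper>();  // upper triangular factor
    std::cout << "reconstruction error: " << (Q * R - A).norm() << "\n";
    std::cout << "|det(A)| = " << std::abs(A.determinant())
              << ", |prod(diag(R))| = " << std::abs(R.diagonal().prod()) << "\n";
}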

Related

Fast matrix multiplication of XDX^T for D diagonal

Consider the fast matrix multiplication X D X^T for X an n by m matrix and D an m by m diagonal matrix. Here m >> n (say n around 1,000 and m around 100,000). In my application, X is a fixed matrix and the values of D can change at every iteration.
What would be a fast way to calculate this? At the moment I am just doing simple multiplication in C++.
EDIT: I should clarify my current procedure; it is not "simple multiplication". In particular, I am column-wise multiplying X by the square roots of the diagonal entries of D to get A := X D^{1/2}. Then I am directly calculating A * A^T (the product of an n by m matrix with its transpose).
Thank you.
If you know that D is diagonal, then you can just do simple multiplication. Hopefully, you are not multiplying the zeros.
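
For what it's worth, here is a sketch of the asker's own procedure in Eigen, extended with a symmetric rank update so that only half of the n x n result is computed (the helper name and the use of rankUpdate are my assumptions, not from the original answer; it also assumes the entries of D are nonnegative, as the square roots require):

#include <Eigen/Dense>
using namespace Eigen;

MatrixXd xdxt(const MatrixXd& X, const VectorXd& d) {
    MatrixXd A = X * d.cwiseSqrt().asDiagonal();  // A = X * D^{1/2}: column j of X scaled by sqrt(d_j)
    MatrixXd S = MatrixXd::Zero(X.rows(), X.rows());
    S.selfadjointView<Lower>().rankUpdate(A);     // S = A * A^T, only the lower half is computed
    MatrixXd full = S.selfadjointView<Lower>();   // mirror the lower half into a full matrix
    return full;
}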

Eigen library, Jacobi SVD

I'm trying to estimate a 3D rotation matrix between two sets of points, and I want to do that by computing the SVD of the covariance matrix, say C, as follows:
U,S,V = svd(C)
R = V * U^T
C in my case is 3x3. I am using Eigen's JacobiSVD module for this, and I only recently found out that it stores matrices in column-major format. That has had me confused.
So, when using Eigen, should I do:
V*U.transpose() or V.transpose()*U ?
Additionally, the rotation is accurate up to changing the sign of the column of U corresponding to the smallest singular value, such that the determinant of R is positive. Let's say the index of the smallest singular value is minIndex.
So when the determinant is negative, because of the column-major confusion, should I do:
U.col(minIndex) *= -1 or U.row(minIndex) *= -1
Thanks!
This has nothing to do with matrices being stored row-major or column-major. svd(C) gives you:
U * S.asDiagonal() * V.transpose() == C
so the closest rotation R to C is:
R = U * V.transpose();
If you want to apply R to a point p (stored as column-vector), then you do:
q = R * p;
Now whether you are interested in R or its inverse R.transpose() == V * U.transpose() is up to you.
The singular values scale the columns of U, so you should negate a column, U.col(minIndex) *= -1, to get det(U) = 1. Again, this has nothing to do with the storage layout.
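
A minimal sketch of the whole procedure, following the U * S * V^T == C convention above (the function name is illustrative; index 2 is the smallest singular value because Eigen sorts them in decreasing order):

#include <Eigen/Dense>
using namespace Eigen;

Matrix3d closestRotation(const Matrix3d& C) {
    JacobiSVD<Matrix3d> svd(C, ComputeFullU | ComputeFullV);
    Matrix3d U = svd.matrixU();
    Matrix3d V = svd.matrixV();
    if ((U * V.transpose()).determinant() < 0.0)
        U.col(2) *= -1.0;         // flip the column of the smallest singular value
    return U * V.transpose();     // closest rotation to C
}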

Solving for Lx=b and Px=b when A=LLt

I am decomposing a sparse SPD matrix A using Eigen. It will be either an LL^T or an LDL^T decomposition (Cholesky), so we can assume the matrix is decomposed as A = P^-1 L D L^T P, where P is a permutation matrix, L is lower triangular, and D is diagonal (possibly the identity). If I do
SolverClassName<SparseMatrix<double> > solver;
solver.compute(A);
to solve Lx=b, is it then efficient to do the following?
solver.matrixL().triangularView<Lower>().solve(b)
Similarly, to solve Px=b, is it efficient to do the following?
solver.permutationPinv()*b
I would like to do this in order to compute b^T A^-1 b efficiently and stably.
Have a look at how _solve_impl is implemented for SimplicialCholesky. Essentially, you can simply write:
Eigen::VectorXd x = solver.permutationP()*b; // P not Pinv!
solver.matrixL().solveInPlace(x); // matrixL is already a triangularView
// depending on LLt or LDLt use either:
double res_llt = x.squaredNorm();
double res_ldlt = x.dot(solver.vectorD().asDiagonal().inverse()*x);
Note that you need to multiply by P and not Pinv, since the inverse of
A = P^-1 L D L^T P
is
A^-1 = P^-1 L^-T D^-1 L^-1 P,
because the order of the factors reverses when taking the inverse of a product.
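
A self-contained sketch of the full b^T A^-1 b computation with SimplicialLDLT, following the steps above (the wrapper function is illustrative, not from the original answer):

#include <Eigen/Sparse>
#include <Eigen/SparseCholesky>
using namespace Eigen;

double quadraticForm(const SparseMatrix<double>& A, const VectorXd& b) {
    SimplicialLDLT<SparseMatrix<double>> solver;
    solver.compute(A);                        // factor A = P^-1 L D L^T P
    VectorXd x = solver.permutationP() * b;   // x = P b  (P, not Pinv!)
    solver.matrixL().solveInPlace(x);         // x = L^-1 P b
    return x.dot(solver.vectorD().asDiagonal().inverse() * x);  // x^T D^-1 x
}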

Applying QR decomposition to a large matrix

I have to convert MATLAB code to C++ using the Eigen library, but I have a problem with the QR decomposition. MATLAB has the function:
[Q,R]=qr(A,0); // A is m-by-n
It produces the economy-size decomposition: if m > n, only the first n columns of Q and the first n rows of R are computed; if m <= n, this is the same as [Q,R] = qr(A).
I have tried to compute it with the Eigen library, but A is 20000x1000, so the application always crashes at the QR decomposition, and I don't know how to produce the economy-size decomposition in Eigen or in another way.
How can I convert [Q,R] = qr(A,0) to C++/Eigen?
MatrixXd A(m, n);  // m > n
HouseholderQR<MatrixXd> qr(A);
MatrixXd Q = qr.householderQ() * MatrixXd::Identity(m, n);      // thin Q: first n columns only
MatrixXd R = qr.matrixQR().topRows(n).triangularView<Upper>();  // n x n upper triangular R
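Note that multiplying qr.householderQ() by an m x n identity applies the Householder reflectors to just n columns, so the full m x m Q is never formed; for m = 20000 a full Q of doubles would need roughly 3.2 GB, which is the likely cause of the crash.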

Sparse-sparse product A^T*A optimization in the Eigen library

In the case of a product of a matrix with itself, like
matA.transpose()*matA,
you don't have to compute the whole product, because the result matrix is symmetric and square (in my specific case it always is). So it is enough to compute only, for example, the lower triangular part and copy the rest, because the product of the 2nd row with the 3rd column is the same as the product of the 3rd row with the 2nd column, and so on.
So my question is: is there a way to tell Eigen to compute only the lower part, and optionally to store only the lower triangular part of the product? Currently I do:
DATA = SparseMatrix<double>((SparseMatrix<double>(matA.transpose()) * matA).pruned()).toDense();
According to the documentation, you can evaluate the lower triangle of a matrix with:
m1.triangularView<Eigen::Lower>() = m2 + m3;
or in your case:
m1.triangularView<Eigen::Lower>() = matA.transpose()*matA;
(where it says "Writing to a specific triangular part: (only the referenced triangular part is evaluated)"). Otherwise, with the line you've written, Eigen will calculate the entire sparse matrix matA.transpose()*matA.
Regarding saving the resulting m1 matrix: it is saved like any other matrix of its type (Eigen::MatrixXd or Eigen::SparseMatrix<double>, say). If m1 is sparse, it will take only about half the storage of a straightforward matA.transpose()*matA; if m1 is dense, it will be the full square matrix.
https://eigen.tuxfamily.org/dox/classEigen_1_1SparseSelfAdjointView.html
The symmetric rank update is defined as:
B = B + alpha * A * A^T
where alpha is a scalar. In your case, you are doing A^T * A, so you should pass the transposed matrix instead. The resulting matrix will only store the upper or lower portion of the matrix, whichever you prefer. For example:
SparseMatrix<double> B(A.cols(), A.cols());  // size the result before the update
B.selfadjointView<Lower>().rankUpdate(A.transpose());  // B = A^T * A, lower triangle only
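
A minimal usage sketch under these assumptions (matA is m x n; the helper name is illustrative):

#include <Eigen/Sparse>
using namespace Eigen;

SparseMatrix<double> lowerAtA(const SparseMatrix<double>& matA) {
    SparseMatrix<double> B(matA.cols(), matA.cols());
    B.selfadjointView<Lower>().rankUpdate(matA.transpose()); // B = matA^T * matA
    return B;  // values are stored only in the lower triangle
}

// If the full symmetric matrix is needed later:
// SparseMatrix<double> full = SparseMatrix<double>(B.selfadjointView<Lower>());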