How to find (Q, R) from a SuiteSparseQR_factorization object? - C++

In the C++ interface of SuiteSparse, I can use
SuiteSparseQR_factorization <double> *QR;
QR = SuiteSparseQR_factorize <double> (ordering, tol, A, cc) ;
to compute the QR decomposition of a matrix A, so that I can reuse the factorization in later calculations. But can I get the actual Q and R matrices directly from this QR object?

SuiteSparse is awesome, but the interface can be confusing. Unfortunately, the methods that involve the SuiteSparseQR_factorization struct, which appear to be the most convenient, haven't worked so well for me in practice. For instance, using SuiteSparseQR_factorize and then SuiteSparseQR_qmult with a sparse matrix input argument actually converts it to a dense matrix first, which seems completely unnecessary!
Instead, use
template <typename Entry> SuiteSparse_long SuiteSparseQR
(
    // inputs, not modified
    int ordering,           // all, except 3:given treated as 0:fixed
    double tol,             // only accept singletons above tol
    SuiteSparse_long econ,  // number of rows of C and R to return; a value
                            // less than the rank r of A is treated as r, and
                            // a value greater than m is treated as m.
    int getCTX,             // if 0: return Z = C  of size econ-by-bncols
                            // if 1: return Z = C' of size bncols-by-econ
                            // if 2: return Z = X  of size econ-by-bncols
    cholmod_sparse *A,      // m-by-n sparse matrix

    // B is either sparse or dense. If Bsparse is non-NULL, B is sparse and
    // Bdense is ignored. If Bsparse is NULL and Bdense is non-NULL, then B is
    // dense. B is not present if both are NULL.
    cholmod_sparse *Bsparse,
    cholmod_dense *Bdense,

    // output arrays, neither allocated nor defined on input.
    // Z is the matrix C, C', or X
    cholmod_sparse **Zsparse,
    cholmod_dense **Zdense,
    cholmod_sparse **R,       // the R factor
    SuiteSparse_long **E,     // size n; fill-reducing ordering of A
    cholmod_sparse **H,       // the Householder vectors (m-by-nh)
    SuiteSparse_long **HPinv, // size m; row permutation for H
    cholmod_dense **HTau,     // size nh, Householder coefficients

    // workspace and parameters
    cholmod_common *cc
) ;
This method performs the factorization and then, optionally, outputs (among other things) R, the matrix product Z = Q^T * B (or its transpose, B^T * Q), or the solution of a linear system. To get Q, define B as the identity matrix. Here's an example that retrieves both Q and R.
cholmod_common Common, *cc;
cc = &Common;
cholmod_l_start(cc);
cholmod_sparse *A; // assume you have already defined this
int ordering = SPQR_ORDERING_BEST;
double tol = 0;
SuiteSparse_long econ = A->nrow;
int getCTX = 1; // Z = (Q^T * B)^T = B^T * Q
cholmod_sparse *B = cholmod_l_speye(A->nrow, A->nrow, CHOLMOD_REAL, cc); // the identity matrix
cholmod_sparse *Q, *R; // output pointers to the Q and R sparse matrices
SuiteSparseQR<double>(ordering, tol, econ, getCTX, A, B, NULL, &Q, NULL, &R,
                      NULL, NULL, NULL, NULL, cc);
If you want any of the other outputs so you can perform subsequent operations without an explicitly formed Q and/or R, replace the corresponding NULLs with pointers to the additional outputs and then make calls to SuiteSparseQR_qmult, as sketched below.
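For instance, instead of the call above, here is a minimal, untested sketch (not from the original post) that keeps Q in its implicit Householder form; SuiteSparseQR_qmult, the method constant SPQR_QTX, and cholmod_l_ones come from the SPQR/CHOLMOD headers, and X is just a stand-in right-hand side:

cholmod_sparse *H;        // Householder vectors
SuiteSparse_long *HPinv;  // row permutation for H
cholmod_dense *HTau;      // Householder coefficients
// Request the Householder outputs H, HPinv, HTau along with R (no B, no Z):
SuiteSparseQR<double>(ordering, tol, econ, 0, A, NULL, NULL, NULL, NULL, &R,
                      NULL, &H, &HPinv, &HTau, cc);
// Apply Q^T to a dense X without ever building Q explicitly:
cholmod_dense *X = cholmod_l_ones(A->nrow, 1, CHOLMOD_REAL, cc);
cholmod_dense *Y = SuiteSparseQR_qmult<double>(SPQR_QTX, H, HTau, HPinv, X, cc);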


Eigen LLT Module Giving incorrect result?

First off, I assume the problem is with me and not with Eigen's LLT module. That said, here is the code (I will explain the problem briefly below); sourcing the code in RStudio should recreate the bug.
#include <RcppEigen.h>
using namespace Rcpp;
using Eigen::MatrixXd;
using Eigen::VectorXd;

// [[Rcpp::depends(RcppEigen)]]
template <typename T>
void fillUnitNormal(Eigen::PlainObjectBase<T>& Z){
  int m = Z.rows();
  int n = Z.cols();
  Rcpp::NumericVector r(m*n);
  r = Rcpp::rnorm(m*n, 0, 1); // using vectorization from Rcpp sugar
  std::copy(std::begin(r), std::end(r), Z.data());
}

// @param z object derived from class MatrixBase to overwrite with sample
// @param m MAP estimate
// @param S the hessian of the NEGATIVE log-likelihood evaluated at m
// @return int 0 success, 1 failure
template <typename T1, typename T2, typename T3>
int cholesky_lap(Eigen::MatrixBase<T1>& z, Eigen::MatrixBase<T2>& m,
                 Eigen::MatrixBase<T3>& S){
  int nc = z.cols();
  int nr = z.rows();
  Eigen::LLT<MatrixXd> hesssqrt;
  hesssqrt.compute(-S);
  if (hesssqrt.info() == Eigen::NumericalIssue){
    Rcpp::warning("Cholesky of Hessian failed with status Eigen::NumericalIssue");
    return 1;
  }
  typename T1::PlainObject samp(nr, nc);
  fillUnitNormal(samp);
  z = hesssqrt.matrixL().solve(samp);
  z.colwise() += m;
  return 0;
}

// @param n_samples number of samples to draw
// @param m MAP estimate (as a vector)
// @param S the hessian of the NEGATIVE log-likelihood evaluated at m;
//   block forms should be given as blocks row-bound together; blocks
//   must be square and of the same size!
// [[Rcpp::export]]
Eigen::MatrixXd LaplaceApproximation(int n_samples, Eigen::VectorXd m,
                                     Eigen::MatrixXd S){
  int p = m.rows();
  MatrixXd z = MatrixXd::Zero(p, n_samples);
  int status = cholesky_lap(z, m, S);
  if (status == 1) Rcpp::stop("decomposition failed");
  return z;
}

/*** R
library(testthat)
n_samples <- 1000000
m <- 1:3
S <- diag(1:3)
S[1,2] <- S[2,1] <- -1
S <- -S # Pretending this is the negative precision matrix,
        # e.g., the hessian of the negative log-likelihood
z <- LaplaceApproximation(n_samples, m, S)
expect_equal(var(t(z)), solve(-S), tolerance=0.005)
expect_equal(rowMeans(z), m, tolerance=.01)
*/
Here is the (key) output:
> expect_equal(var(t(z)), solve(-S), tolerance=0.005)
Error: var(t(z)) not equal to solve(-S).
2/9 mismatches (average diff: 1)
[1] 0.998 - 2 == -1
[5] 2.003 - 1 == 1
In words:
I am trying to write a function to perform a Laplace approximation. This essentially means sampling from a multivariate normal with mean m and covariance inverse(-S), where S is the Hessian of the negative log-likelihood.
My code works perfectly for an eigendecomposition-based version I coded, but for some reason it fails with the Cholesky version. (To keep this a minimal reproducible example, I am not showing the eigendecomposition code.)
The best thought I have right now is that some aliasing issue is happening, but I can't figure out where that would be...
Thank you in advance!
It turned out to be a simple math error, not a code error. The issue is that the Cholesky factor of a matrix inverse is the transpose of the inverse of the Cholesky factor of the original matrix: if -S = L*L^T, then (-S)^(-1) = (L^T)^(-1) * ((L^T)^(-1))^T, so the correct sampling matrix is (L^T)^(-1) = U^(-1), not L^(-1). Changing
z = hesssqrt.matrixL().solve(samp);
to
z = hesssqrt.matrixU().solve(samp);
solved the problem.
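A quick standalone check of that identity (my own sketch, not from the post; the matrix is the -S from the R test above):

#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::Matrix3d P;  // P = -S, which must be positive definite for LLT
    P <<  1, -1, 0,
         -1,  2, 0,
          0,  0, 3;
    Eigen::LLT<Eigen::Matrix3d> llt(P);
    // Uinv = (L^T)^{-1}; Uinv * Uinv^T should reproduce P^{-1},
    // the covariance we want the samples to have.
    Eigen::Matrix3d Uinv = llt.matrixU().solve(Eigen::Matrix3d::Identity());
    std::cout << (Uinv * Uinv.transpose() - P.inverse()).norm() << "\n"; // ~0
    return 0;
}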

Matrix multiplication very slow in Eigen

I have implemented a Gauss-Newton optimization process which involves calculating the increment by solving a linearized system Hx = b. The H matrix is calculated by H = J.transpose() * W * J and b is calculated from b = J.transpose() * (W * e), where e is the error vector. The Jacobian J here is an n-by-6 matrix, where n is in the thousands and stays unchanged across iterations, and W is an n-by-n diagonal weight matrix which changes across iterations (some diagonal elements will be set to zero). However, I encountered a speed issue.
When I do not apply the weight matrix W, namely H = J.transpose()*J and b = J.transpose()*e, my Gauss-Newton process runs very fast, at 0.02 sec for 30 iterations. However, when I add the W matrix, which is defined outside the iteration loop, it becomes very slow (0.3~0.7 sec for 30 iterations), and I don't understand whether it is a problem with my code or whether it normally takes this long.
Everything here is an Eigen matrix or vector.
I defined my W matrix using the .asDiagonal() function from the Eigen library on a vector of inverse variances, then just used it in the calculations for H and b. Then it got very slow. I would appreciate some hints about the potential reasons for this huge slowdown.
EDIT:
There are only two matrices. The Jacobian is definitely dense. The weight matrix is generated from a vector by the function vec.asDiagonal(), which comes from the dense library, so I assume it is also dense.
The code is really simple, and the only difference causing the time change is the addition of the weight matrix. Here is a code snippet:
for (int iter = 0; iter < max_iter; ++iter) {
    // obtain error vector
    error = ...
    // calculate H and b - the fast one
    Eigen::MatrixXf H = J.transpose() * J;
    Eigen::VectorXf b = J.transpose() * error;
    // calculate H and b - the slow one
    Eigen::MatrixXf H = J.transpose() * weight_ * J;
    Eigen::VectorXf b = J.transpose() * (weight_ * error);
    // obtain delta and update state
    del = H.ldlt().solve(b);
    T <- T(del) // this is pseudo code, meaning update T with del
}
It is in a function in a class, and the weight matrix, for debugging purposes right now, is defined as a class variable that the function can access; it is defined before the function is called.
I guess that weight_ is declared as a dense MatrixXf? If so, then replace it with w.asDiagonal() everywhere you use weight_, or make the latter an alias to the asDiagonal expression:
auto weight = w.asDiagonal();
This way Eigen will know that weight is a diagonal matrix and the computations will be optimized as expected.
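For illustration, here is a self-contained sketch of the fixed computation (a sketch under the assumption that weight_ was a dense MatrixXf; the names and sizes are made up):

#include <Eigen/Dense>

int main() {
    const int n = 10000;
    Eigen::MatrixXf J = Eigen::MatrixXf::Random(n, 6);         // Jacobian
    Eigen::VectorXf w = Eigen::VectorXf::Random(n).cwiseAbs(); // inverse variances
    Eigen::VectorXf error = Eigen::VectorXf::Random(n);
    // Keeping W as a diagonal expression avoids forming an n-by-n dense product:
    Eigen::MatrixXf H = J.transpose() * w.asDiagonal() * J;       // 6-by-6
    Eigen::VectorXf b = J.transpose() * (w.asDiagonal() * error); // 6-by-1
    return 0;
}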
Because the weight matrix is just a diagonal, you can change the matrix product to a coefficient-wise multiplication, like so:
Eigen::MatrixXd m;
Eigen::VectorXd w;
w.setLinSpaced(5, 2, 6);
m.setOnes(5, 5);
std::cout << (m.array().rowwise() * w.array().transpose()).matrix() << "\n";
Likewise, the matrix vector product can be written as:
(w.array() * error.array()).matrix()
This avoids multiplying by the zero off-diagonal elements of the matrix. Without an MCVE for me to base this on, YMMV...

How to use placeholders for unneeded parameters in C++

I am using this function from the GMP arbitrary precision arithmetic library:
Function: void mpz_gcdext (mpz_t g, mpz_t s, mpz_t t, const mpz_t a, const mpz_t b)
Set g to the greatest common divisor of a and b, and in addition set s and t to coefficients satisfying a*s + b*t = g. The value in g is always positive, even if one or both of a and b are negative (or zero if both inputs are zero). The values in s and t are chosen such that normally, abs(s) < abs(b) / (2 g) and abs(t) < abs(a) / (2 g), and these relations define s and t uniquely. There are a few exceptional cases:
If abs(a) = abs(b), then s = 0, t = sgn(b).
Otherwise, s = sgn(a) if b = 0 or abs(b) = 2 g, and t = sgn(b) if a = 0 or abs(a) = 2 g.
In all cases, s = 0 if and only if g = abs(b), i.e., if b divides a or a = b = 0.
If t is NULL then that value is not computed.
I do not need the values of 'g' or 't', and I would rather not create variables for the sole purpose of passing them to this function. What can I do to pass something like a placeholder to this specific function, and how can I do this in C++ in general?
You might overload the function.
void mpz_gcdext (mpz_t s, const mpz_t a, const mpz_t b)
{
    mpz_t g, t;
    mpz_inits(g, t, NULL);   // GMP operands must be initialized before use
    mpz_gcdext(g, s, t, a, b);
    mpz_clears(g, t, NULL);  // release the temporaries
}
Does that help?
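Note also that the documentation quoted above says t may be NULL, in which case it is not computed. So a variant (hypothetical name, my own sketch, not part of the GMP API) only needs a temporary for g:

#include <gmp.h>

void mpz_gcd_coeff_s(mpz_t s, const mpz_t a, const mpz_t b)
{
    mpz_t g;
    mpz_init(g);                   // g must still be a real, initialized operand
    mpz_gcdext(g, s, NULL, a, b);  // t = NULL: t is not computed
    mpz_clear(g);                  // release the temporary
}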

Fortran - Passing a variable into CGESV

I am trying to test the LAPACK routine CGESV, but I am encountering an issue. I want to reuse my 'A' matrix in other parts of my code, but it changes when I pass it into the routine. From the definition of 'A':
(input/output) COMPLEX array, dimension (LDA,N)
On entry, the N-by-N coefficient matrix A.
On exit, the factors L and U from the factorization
A = P*L*U; the unit diagonal elements of L are not stored.
Is there a way to keep the value of A after passing it into CGESV short of creating a temp variable to store the value?
As you already noticed, the A matrix is overwritten with its P*L*U decomposition. If the size of the matrix is not too big, you can copy the contents of the A matrix and use the copy for the decomposition.
CALL CCOPY(N*N, A, 1, A_NEW, 1)
If the matrix size is so big that you cannot keep two copies of it in memory, you can instead perform later operations with the decomposed matrix. For example, to compute y = A*x:
*     y = x
      CALL CCOPY(N, X, 1, Y, 1)
*     y = U * y
      CALL CTRMV('Upper', 'No transpose', 'Non-unit', N, A, N, Y, 1)
*     y = L * y
      CALL CTRMV('Lower', 'No transpose', 'Unit', N, A, N, Y, 1)
*     y = P * y (CLASWP since the data is complex; INCX = -1 applies the
*     row swaps in IPIV in reverse order, giving P rather than P**(-1))
      CALL CLASWP(1, Y, N, 1, N, IPIV, -1)
The only additional memory needed is the integer array IPIV of size N.
The routines do their work in-place, so the only way to keep the original array is to make a copy.

Convert cv::MatExpr to type

A number of matrix expressions I have evaluate to a 1-by-1 matrix. I would like to do something like:
cv::Mat a = cv::Mat(n, m, CV_64F), b = ..., c = ...
double d = a.t() * b * c.inv(); // result happens to be 1 x 1 matrix
The way I found to do this is to write:
double d = ((cv::Mat)(a.t() * b * c.inv())).at<double>(0);
which is a bit long and very confusing, especially if long expressions are involved.
Is there a better, clearer way to write this? Can I somehow overload operator double to apply only to 1x1 cv::MatExpr's?
Edit
A simple function to do this is of course possible, though ugly. Any more elegant solutions?
double toDouble(const cv::MatExpr& M) {
    cv::Mat A = M;
    if (A.rows != 1 || A.cols != 1) throw "Matrix is not 1 by 1!";
    return A.at<double>(0);
}
What you could do is make use of the cv::Mat::dot function (documentation link), which takes two cv::Mat of the same size and returns a double.
If the result of your operation is a 1x1 matrix, then you should be able to express it using cv::Mat::dot. For example, if a and b are n-by-1, the two following lines are equivalent:
double d = ((cv::Mat)(a.t() * b)).at<double>(0);
double d = a.dot(b);
One could also imagine more complex operations:
double d = (M.t()*U.inv()*a).dot(V.inv()*b);
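For reference, a tiny self-contained example of the equivalence (my own sketch; the numbers are made up):

#include <opencv2/core.hpp>
#include <iostream>

int main() {
    cv::Mat a = (cv::Mat_<double>(3, 1) << 1, 2, 3);
    cv::Mat b = (cv::Mat_<double>(3, 1) << 4, 5, 6);
    double d1 = ((cv::Mat)(a.t() * b)).at<double>(0); // evaluate 1x1, then extract
    double d2 = a.dot(b);                             // same value, no temporary Mat
    std::cout << d1 << " == " << d2 << "\n";          // both print 32
    return 0;
}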