Thin QR decomposition in C++

Is there an easy-to-use C++ library for "thin" QR decomposition of a rectangular matrix?
Eigen seems to only support full Q matrices. I can take a full Q and discard some columns, but would it be more efficient not to compute them in the first place?

Newmat does exactly what you want.
To decompose A into QR, you can do:
Matrix Q = A;
UpperTriangularMatrix R;
QRZ(Q, R);
If A is a 3x5 matrix, R will be 3x3 and Q will be 3x5 as well.

Even though this question is a bit old, for the record: Eigen does not explicitly compute the Q matrix, but rather a sequence of Householder vectors, which can be multiplied directly with any matrix (with the correct number of rows).
If you actually want the thin Q matrix explicitly, just multiply by an identity matrix of the desired size:
#include <Eigen/QR>
#include <iostream>
int main()
{
    using namespace Eigen;
    MatrixXf A(MatrixXf::Random(5,3));
    HouseholderQR<MatrixXf> qr(A);
    MatrixXf thinQ = qr.householderQ() * MatrixXf::Identity(5,3);
    std::cout << thinQ << '\n';
}
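If you only need products with Q (say, Q^T * b for a least-squares solve), you do not even have to form the thin Q: the Householder sequence can be applied directly. A minimal sketch of that idea (the vector b is just an example, not part of the original answer):
#include <Eigen/QR>
#include <iostream>
int main()
{
    using namespace Eigen;
    MatrixXf A(MatrixXf::Random(5,3));
    HouseholderQR<MatrixXf> qr(A);
    VectorXf b(VectorXf::Random(5));
    VectorXf Qtb = qr.householderQ().transpose() * b; // Q^T * b, no 5x5 Q is ever built
    VectorXf Qb  = qr.householderQ() * b;             // Q * b
    std::cout << Qtb.transpose() << '\n' << Qb.transpose() << '\n';
}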

Related

Eigen C++ triangular form

I use C++14 and Eigen. For an n x n matrix A, how can I extract the Q and R matrices using QR decomposition in Eigen? I tried to read the documentation but I'm disoriented.
I've obtained only R:
HouseholderQR<MatrixXd> qr(A);
qr.compute(A); // redundant: the constructor above already factorizes A
MatrixXd R = qr.matrixQR().template triangularView<Upper>();
Anyway, I just want to convert matrix A into a triangular matrix (in an efficient way, around O(n^3) I think) that has the same determinant as A, so I'm open to any other method that does this in Eigen. (Or another linear algebra library, if you know some good ones; I'm waiting for suggestions.)
You can get Q and R as follows:
Eigen::MatrixXd Q = qr.householderQ();
Eigen::MatrixXd QR = qr.matrixQR();
The R factor is stored in the upper triangular portion of QR. You can compute its determinant as QR.diagonal().prod(), which is equal in magnitude to A.determinant(). If you want to isolate the upper triangular
portion, you can do this:
Eigen::MatrixXd T = QR.triangularView<Eigen::Upper>();
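Putting the pieces together, here is a minimal self-contained sketch (a small random square A, names chosen for illustration, not from the answer) that extracts Q and R and checks the reconstruction and the determinant claim:
#include <Eigen/Dense>
#include <iostream>
#include <cmath>
int main()
{
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(4, 4);
    Eigen::HouseholderQR<Eigen::MatrixXd> qr(A);
    Eigen::MatrixXd Q  = qr.householderQ();                 // orthogonal factor
    Eigen::MatrixXd QR = qr.matrixQR();                     // packed factorization
    Eigen::MatrixXd R  = QR.triangularView<Eigen::Upper>(); // upper triangular factor
    std::cout << "reconstruction error: " << (Q * R - A).norm() << '\n';
    std::cout << "|det(A)| = " << std::abs(A.determinant())
              << ", |prod(diag(R))| = " << std::abs(QR.diagonal().prod()) << '\n';
}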

Is there something like a sparse cube in Armadillo, or some way of using sparse matrices as slices in a cube?

I am using Armadillo's sparse matrices, but now I would like something like a "sparse cube", which does not exist in Armadillo. Writing sparse matrices into a cube with cube.slice(some_sparse_matrix) converts everything back to a dense cube.
I use the sparse matrices to multiply vectors with; for larger vectors/matrices the sparse variant is much faster. Now I have to sum up the multiplications of several sparse matrices with several vectors.
Would a std::vector be a way?
In my experience it is faster to use Armadillo's functions (for example a subvector, arma::span(), or arma::sum()) than to write loops myself, so I was wondering what the fastest way of doing this would be.
It's possible to approximate a sparse cube using the field class, like so.
arma::uword number_of_matrices = 10;
arma::uword number_of_rows = 5000;
arma::uword number_of_cols = 5000;
arma::field<arma::sp_mat> F(number_of_matrices);
F.for_each( [&](arma::sp_mat& X) { X.set_size(number_of_rows, number_of_cols); } );
F(0)(1,2) = 456.7; // write to element (1,2) in matrix 0
F(1)(2,3) = 567.8; // write to element (2,3) in matrix 1
F.print("F:"); // show all matrices
Your compiler must support at least C++11 for this to work.
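To illustrate the use case from the question (summing up products of several sparse matrices with several vectors) on top of such a field, here is a rough sketch; the sizes, density and vector setup are made up for the example:
#include <armadillo>
#include <vector>
int main()
{
    arma::uword number_of_matrices = 10;
    arma::uword n = 5000;
    arma::field<arma::sp_mat> F(number_of_matrices);
    F.for_each( [&](arma::sp_mat& X) { X = arma::sprandu<arma::sp_mat>(n, n, 0.001); } );
    std::vector<arma::vec> v(number_of_matrices, arma::randu<arma::vec>(n));
    arma::vec total(n, arma::fill::zeros);
    for (arma::uword i = 0; i < number_of_matrices; ++i)
        total += F(i) * v[i];   // sparse matrix times dense vector, accumulated
    arma::vec first = total.head(5);
    first.print("first entries of the sum:");
}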

Solving Systems of Linear Equations using Eigen

I'm currently working on a fluid simulation in C++, and part of the algorithm is to solve a sparse system of linear equations. People recommended using the library Eigen for this. I decided to test it out using this short program that I wrote:
#include <Eigen/SparseCholesky>
#include <vector>
#include <iostream>
#include <cstdlib> // for system()
int main() {
    std::vector<Eigen::Triplet<double>> triplets;
    triplets.push_back(Eigen::Triplet<double>(0, 0, 1));
    triplets.push_back(Eigen::Triplet<double>(0, 1, -2));
    triplets.push_back(Eigen::Triplet<double>(1, 0, 3));
    triplets.push_back(Eigen::Triplet<double>(1, 1, -2));
    Eigen::SparseMatrix<double> A(2, 2);
    A.setFromTriplets(triplets.begin(), triplets.end());
    Eigen::VectorXd b(2);
    b[0] = -2;
    b[1] = 2;
    Eigen::SimplicialCholesky<Eigen::SparseMatrix<double>> chol(A);
    Eigen::VectorXd x = chol.solve(b);
    std::cout << x[0] << ' ' << x[1] << std::endl;
    system("pause");
}
It gives it these two equations:
x - 2y = -2
3x - 2y = 2
The correct solution is:
x = 2
y = 2
But the problem is that when the program runs, it outputs:
0.181818 -0.727273
Which is totally wrong! I have been debugging this for hours, but it's a very short program and I'm following the tutorial on the Eigen website exactly. Does anybody know what is causing this issue?
P.S. I know that the classes I'm using are for sparse matrices, but the only difference between those and the normal Matrix classes is the way the elements are stored.
SimplicialCholesky is for symmetric positive definite (SPD) matrices; the matrix you assembled is not even symmetric. By default it reads only the entries in the lower triangular part, ignoring the others, so it actually solved:
x + 3y = -2
3x - 2y = 2
As you noticed, for non-symmetric square problems you need a direct solver based on LU, or BiCGSTAB in the world of iterative solvers. This is all summarized in the doc.
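For reference, here is a sketch of the same 2x2 system solved with SparseLU (one of the solvers for general, non-symmetric sparse matrices); apart from the solver class it mirrors the program from the question:
#include <Eigen/SparseLU>
#include <vector>
#include <iostream>
int main() {
    std::vector<Eigen::Triplet<double>> triplets;
    triplets.push_back(Eigen::Triplet<double>(0, 0, 1));
    triplets.push_back(Eigen::Triplet<double>(0, 1, -2));
    triplets.push_back(Eigen::Triplet<double>(1, 0, 3));
    triplets.push_back(Eigen::Triplet<double>(1, 1, -2));
    Eigen::SparseMatrix<double> A(2, 2);
    A.setFromTriplets(triplets.begin(), triplets.end());
    Eigen::VectorXd b(2);
    b << -2, 2;
    Eigen::SparseLU<Eigen::SparseMatrix<double>> lu(A);
    Eigen::VectorXd x = lu.solve(b);
    std::cout << x.transpose() << std::endl;   // prints 2 2
}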
You should use a solver capable of processing non-symmetric sparse matrices. Another possible approach is to seek a solution not of the original system A x = b, but of A^T A x = A^T b, where A^T stands for the transpose of A. The latter system's matrix is symmetric and positive definite (as long as A is non-singular). The only shortcoming is that A^T A may be rather ill-conditioned if the original A is not "good" in that sense. Just an example of software designed to solve such problems:
http://members.ozemail.com.au/~comecau/CMA_LS_Sparse.htm
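A sketch of that normal-equations workaround in Eigen (a standalone helper, not from the answer; names are illustrative):
#include <Eigen/SparseCholesky>
// Solve (A^T A) x = A^T b with an SPD solver; A^T A is SPD as long as A has full column rank.
Eigen::VectorXd solve_normal_equations(const Eigen::SparseMatrix<double>& A,
                                       const Eigen::VectorXd& b)
{
    Eigen::SparseMatrix<double> AtA = A.transpose() * A;
    Eigen::VectorXd Atb = A.transpose() * b;
    Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> chol(AtA);
    return chol.solve(Atb);
}
Keep in mind the conditioning caveat above: cond(A^T A) is roughly the square of cond(A).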

Solving for Lx=b and Px=b when A=LLt

I am decomposing a sparse SPD matrix A using Eigen. It will either be an LLt or an LDLt decomposition (Cholesky), so we can assume the matrix will be decomposed as A = P^-1 L D L^T P, where P is a permutation matrix, L is lower triangular and D is diagonal (possibly the identity). If I do
SolverClassName<SparseMatrix<double> > solver;
solver.compute(A);
To solve Lx = b, is it efficient to do the following?
solver.matrixL().triangularView<Lower>().solve(b)
Similarly, to solve Px = b, is it efficient to do the following?
solver.permutationPinv()*b
I would like to do this in order to compute b^T A^-1 b efficiently and stably.
Have a look at how _solve_impl is implemented for SimplicialCholesky. Essentially, you can simply write:
Eigen::VectorXd x = solver.permutationP()*b; // P not Pinv!
solver.matrixL().solveInPlace(x); // matrixL is already a triangularView
// depending on LLt or LDLt use either:
double res_llt = x.squaredNorm();
double res_ldlt = x.dot(solver.vectorD().asDiagonal().inverse()*x);
Note that you need to multiply by P and not Pinv, since the inverse of
A = P^-1 L D L^T P
is
A^-1 = P^-1 L^-T D^-1 L^-1 P,
because the order of the factors reverses when taking the inverse of a product.
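A self-contained sketch of the whole b^T A^-1 b computation with SimplicialLDLT (the small SPD test matrix is made up for the example), including a cross-check against solver.solve:
#include <Eigen/Dense>
#include <Eigen/SparseCholesky>
#include <iostream>
int main()
{
    // Build a small SPD sparse matrix A = M^T M + I.
    Eigen::MatrixXd M = Eigen::MatrixXd::Random(5, 5);
    Eigen::MatrixXd dense = M.transpose() * M + Eigen::MatrixXd::Identity(5, 5);
    Eigen::SparseMatrix<double> A = dense.sparseView();
    Eigen::VectorXd b = Eigen::VectorXd::Random(5);

    Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> solver;
    solver.compute(A);

    Eigen::VectorXd x = solver.permutationP() * b;   // P, not Pinv
    solver.matrixL().solveInPlace(x);                // x = L^-1 P b
    double btAinvb = x.dot(solver.vectorD().asDiagonal().inverse() * x);

    Eigen::VectorXd y = solver.solve(b);             // reference: full solve
    std::cout << btAinvb << " vs " << b.dot(y) << '\n';
}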

How to implement a left matrix division on C++ using gsl

I am trying to port a MATLAB program to C++, and I want to implement a left matrix division between a matrix A and a column vector B.
A is an m-by-n matrix with m not equal to n, and B is a column vector with m components.
I want the result X = A\B to be the solution, in the least-squares sense, of the under- or overdetermined system of equations AX = B. In other words, X minimizes norm(A*X - B), the length of the vector AX - B.
That means I want the same result as A\B in MATLAB.
I want to implement this feature with GSL (the GNU Scientific Library), and I don't know much about math, least-squares fitting, or matrix operations. Can somebody tell me how to do this in GSL? Or, if implementing it in GSL is too complicated, can someone suggest a good open-source C/C++ library that provides the above matrix operation?
Okay, I finally figured it out by myself after spending another 5 hours on it. But still, thanks for the suggestions to my question.
Assuming we have a 5 * 2 matrix
A = [1 0
1 0
0 1
1 1
1 1]
and a vector b = [1.8388, 2.5595, 0.0462, 2.1410, 0.6750],
the solution to A \ b would be:
#include <stdio.h>
#include <gsl/gsl_linalg.h>

int
main (void)
{
  double a_data[] = {1.0, 0.0,
                     1.0, 0.0,
                     0.0, 1.0,
                     1.0, 1.0,
                     1.0, 1.0};
  double b_data[] = {1.8388, 2.5595, 0.0462, 2.1410, 0.6750};

  gsl_matrix_view m = gsl_matrix_view_array (a_data, 5, 2);
  gsl_vector_view b = gsl_vector_view_array (b_data, 5);

  gsl_vector *x = gsl_vector_alloc (2);        // size equal to n
  gsl_vector *residual = gsl_vector_alloc (5); // size equal to m
  gsl_vector *tau = gsl_vector_alloc (2);      // size equal to min(m,n)

  gsl_linalg_QR_decomp (&m.matrix, tau);
  gsl_linalg_QR_lssolve (&m.matrix, tau, &b.vector, x, residual);

  printf ("x = \n");
  gsl_vector_fprintf (stdout, x, "%g");

  gsl_vector_free (x);
  gsl_vector_free (tau);
  gsl_vector_free (residual);
  return 0;
}
In addition to the one you gave, a quick search revealed other GSL examples, one using QR decomposition, the other LU decomposition.
There exist other numeric libraries capable of solving linear systems (a basic functionality in every linear algebra library). For one, Armadillo offers a nice and readable interface:
#include <iostream>
#include <armadillo>

using namespace std;
using namespace arma;

int main()
{
    mat A = randu<mat>(5,2);
    vec b = randu<vec>(5);

    vec x = solve(A, b);
    cout << x << endl;

    return 0;
}
Another good one is the Eigen library:
#include <iostream>
#include <Eigen/Dense>

using namespace std;
using namespace Eigen;

int main()
{
    Matrix3f A;
    Vector3f b;
    A << 1,2,3,  4,5,6,  7,8,10;
    b << 3, 3, 4;

    Vector3f x = A.colPivHouseholderQr().solve(b);
    cout << "The solution is:\n" << x << endl;

    return 0;
}
Now, one thing to remember is that MLDIVIDE is a super-charged function with multiple execution paths. If the coefficient matrix A has some special structure, it is exploited to obtain a faster or more accurate result (choosing among substitution algorithms, LU and QR factorizations, ...).
MATLAB also has PINV, which returns the minimal-norm least-squares solution, in addition to a number of other iterative methods for solving systems of linear equations.
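If you end up needing that PINV-style behaviour on the C++ side, an SVD-based solve is the closest analogue; a minimal Eigen sketch (the sizes are arbitrary, not from the question):
#include <Eigen/Dense>
#include <iostream>
int main()
{
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(5, 2);   // overdetermined system
    Eigen::VectorXd b = Eigen::VectorXd::Random(5);
    // The SVD-based solve returns the least-squares solution, in the same
    // spirit as pinv(A)*b in MATLAB.
    Eigen::VectorXd x = A.bdcSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(b);
    std::cout << x.transpose() << '\n';
}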
I'm not sure I understand your question, but if you've already found your solution using MATLAB, you may want to consider using MATLAB Coder, which automatically translates your MATLAB code into C++.