Right now I want to use C++ Boost to solve a matrix equation: A*P = X, so P = A\X. I have matrix A and matrix X, so I need to compute P = A\X to get matrix P. That's a matrix division problem, right?
My C++ code is
#include "stdafx.h"
#include <boost\mat2cpp-20130725/mat2cpp.hpp>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/matrix_proxy.hpp>
#include <boost/numeric/ublas/io.hpp>
using namespace boost::numeric::ublas;
using namespace std;
int main() {
using namespace mat2cpp;
matrix<double> x(2,2); // initialize a matrix
x(0, 0) = 1; // assign value
x(1, 1) = 1;
matrix<double> y(2, 1);
y(0, 0) = 1;
y(1, 0) = 1;
size_t rank;
matrix<double> z = matrix_div(x, y, rank);
}
But it produces errors (see the attached error figure). Please help me! Thanks!
First of all, there is no such thing as matrix division. If you have the equation A*P = X and you want to find P, then the solution is inv(A)*A*P = inv(A)*X, where inv(A) is the inverse of the matrix A. Because inv(A)*A is equal to the identity matrix, we can conclude that P = inv(A)*X.
Now your problem is to calculate the inverse of the matrix A. There are several ways to do this; my suggestion would be to use LU factorization.
Honestly, I don't know if the Boost library has such a thing as mat2cpp. If you want to use Boost, I would recommend using boost/numeric/ublas/matrix.hpp.
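For what it's worth, you don't even need the inverse to get P. Here is a minimal sketch using plain uBLAS (assuming no mat2cpp at all), solving A*P = X directly via LU factorization, with the same 2x2 example as in the question:

#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/lu.hpp>
#include <boost/numeric/ublas/io.hpp>
#include <iostream>

namespace ublas = boost::numeric::ublas;

int main() {
    // Same data as the question: A is the 2x2 identity, X = [1; 1]
    ublas::matrix<double> A(2, 2, 0.0);
    A(0, 0) = 1; A(1, 1) = 1;
    ublas::matrix<double> P(2, 1);
    P(0, 0) = 1; P(1, 0) = 1;  // start from X; the solve overwrites it in place

    ublas::permutation_matrix<std::size_t> pm(A.size1());
    if (ublas::lu_factorize(A, pm) != 0)
        return 1;                    // A is singular, no unique solution
    ublas::lu_substitute(A, pm, P);  // P now holds A \ X
    std::cout << P << std::endl;
}

lu_substitute overwrites its last argument, which is why P is initialized with X and ends up holding the solution.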
I'm trying to reproduce some numpy code on Gaussian Processes (from here) using Eigen. Basically, I need to sample from a multivariate normal distribution:
samples = np.random.multivariate_normal(mu.ravel(), cov, 1)
The mean vector is currently zero, while the covariance matrix is a square matrix generated via the isotropic squared exponential kernel:
sqdist = np.sum(X1**2, 1).reshape(-1, 1) + np.sum(X2**2, 1) - 2 * np.dot(X1, X2.T)
return sigma_f**2 * np.exp(-0.5 / l**2 * sqdist)
I can generate the covariance matrix just fine for now (it can probably be cleaned but for now it's a POC):
Matrix2D kernel(const Matrix2D & x1, const Matrix2D & x2, double l = 1.0, double sigma = 1.0) {
    auto dists = ((-2.0 * (x1 * x2.transpose())).colwise()
                  + x1.rowwise().squaredNorm()).rowwise()
                 + x2.rowwise().squaredNorm().transpose();
    return std::pow(sigma, 2) * ((-0.5 / std::pow(l, 2)) * dists).array().exp();
}
However, my problems start when I need to sample the multivariate normal.
I've tried using the solution proposed in this accepted answer; however, the decomposition only works with covariance matrices of size up to 30x30; beyond that, LLT fails to decompose the matrix. The alternative version provided in that answer does not work either, and produces NaNs. I tried LDLT as well, but it also breaks (D contains negative values, so sqrt gives NaN).
Then, I got curious, and I looked into how numpy does this. Turns out the numpy implementation uses SVD decomposition (with LAPACK), rather than Cholesky. So I tried copying their implementation:
// SVD on the covariance matrix generated via kernel function
Eigen::BDCSVD<Matrix2D> solver(covs, Eigen::ComputeFullV);
normTransform = (-solver.matrixV().transpose()).array().colwise() * solver.singularValues().array().sqrt();
// Generate gaussian samples, "randN" is from the multivariate StackOverflow answer
Matrix2D gaussianSamples = Eigen::MatrixXd::NullaryExpr(1, means.size(), randN);
Eigen::MatrixXd samples = (gaussianSamples * normTransform).rowwise() + means.transpose();
The various minuses are me trying to exactly reproduce numpy's results.
In any case, this works perfectly fine, even with large dimensions. I was wondering why Eigen is not able to do LLT, but SVD works. The covariance matrix I use is the same. Is there something I can do to simply use LLT?
EDIT: Here is my full example:
#include <iostream>
#include <random>
#include <Eigen/Cholesky>
#include <Eigen/SVD>
#include <Eigen/Eigenvalues>

using Matrix2D = Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor | Eigen::AutoAlign>;
using Vector = Eigen::Matrix<double, Eigen::Dynamic, 1>;

/*
  We need a functor that can pretend it's const,
  but to be a good random number generator
  it needs mutable state.
*/
namespace Eigen {
namespace internal {

template<typename Scalar>
struct scalar_normal_dist_op
{
    static std::mt19937 rng;                        // The uniform pseudo-random algorithm
    mutable std::normal_distribution<Scalar> norm;  // The gaussian combinator

    EIGEN_EMPTY_STRUCT_CTOR(scalar_normal_dist_op)

    template<typename Index>
    inline const Scalar operator() (Index, Index = 0) const { return norm(rng); }
};

template<typename Scalar> std::mt19937 scalar_normal_dist_op<Scalar>::rng;

template<typename Scalar>
struct functor_traits<scalar_normal_dist_op<Scalar> >
{ enum { Cost = 50 * NumTraits<Scalar>::MulCost, PacketAccess = false, IsRepeatable = false }; };

} // end namespace internal
} // end namespace Eigen

Matrix2D kernel(const Matrix2D & x1, const Matrix2D & x2, double l = 1.0, double sigma = 1.0) {
    auto dists = ((-2.0 * (x1 * x2.transpose())).colwise() + x1.rowwise().squaredNorm()).rowwise() + x2.rowwise().squaredNorm().transpose();
    return std::pow(sigma, 2) * ((-0.5 / std::pow(l, 2)) * dists).array().exp();
}

int main() {
    unsigned size = 50;
    unsigned seed = 1;

    Matrix2D X = Vector::LinSpaced(size, -5.0, 4.8);

    Eigen::internal::scalar_normal_dist_op<double> randN;           // Gaussian functor
    Eigen::internal::scalar_normal_dist_op<double>::rng.seed(seed); // Seed the rng

    Vector means = Vector::Zero(X.rows());
    auto covs = kernel(X, X);

    Eigen::LLT<Matrix2D> cholSolver(covs);

    // We can only use the cholesky decomposition if
    // the covariance matrix is symmetric, pos-definite.
    // But a covariance matrix might be pos-semi-definite.
    // In that case, we'll go to an EigenSolver
    Eigen::MatrixXd normTransform;
    if (cholSolver.info() == Eigen::Success) {
        std::cout << "Used LLT\n";
        // Use cholesky solver
        normTransform = cholSolver.matrixL();
    } else {
        std::cout << "Broken\n";
        Eigen::BDCSVD<Matrix2D> solver(covs, Eigen::ComputeFullV);
        normTransform = (-solver.matrixV().transpose()).array().colwise() * solver.singularValues().array().sqrt();
    }

    Matrix2D gaussianSamples = Eigen::MatrixXd::NullaryExpr(1, means.size(), randN);
    Eigen::MatrixXd samples = (gaussianSamples * normTransform).rowwise() + means.transpose();

    return 0;
}
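For what it's worth, a workaround that is often suggested for kernel matrices like this one (sketched here as an assumption, not something from the original code) is to add a tiny jitter to the diagonal, so that a covariance that is only positive semi-definite numerically becomes strictly positive definite before the LLT:

// Hypothetical workaround (not in the original code): regularize the
// covariance with a small diagonal jitter so that LLT can succeed on a
// matrix that is only positive semi-definite up to rounding error.
double jitter = 1e-10;  // assumed magnitude; may need tuning
Matrix2D covsReg = covs + jitter * Matrix2D::Identity(covs.rows(), covs.cols());
Eigen::LLT<Matrix2D> cholSolver(covsReg);

With the jitter in place, LLT typically succeeds even at sizes where the unregularized matrix is numerically rank-deficient.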
I am trying to solve large sparse linear systems using the C++ library Eigen.
The sparse matrices are taken from this page. Each system has this structure: Ax = b, where A is the sparse matrix (n x n) and b is computed as A*xe, with xe a vector of dimension n containing only ones (see the code below). After computing x I need to compute the relative error between xe and x. I have written some code, but I can't understand why the relative error is so high (1.49853e+08) at the end of the computation.
#include <iostream>
#include <Eigen/Dense>
#include <unsupported/Eigen/SparseExtra>
#include <Eigen/SparseCholesky>
#include <sys/time.h>
#include <sys/resource.h>

using namespace std;
using namespace Eigen;

int main()
{
    SparseMatrix<double> mat;
    loadMarket(mat, "/Users/anto/Downloads/ex15/ex15.mtx");
    VectorXd xe = VectorXd::Constant(mat.rows(), 1);
    VectorXd b = mat * xe;
    SimplicialCholesky<Eigen::SparseMatrix<double> > chol(mat);
    VectorXd x = chol.solve(b);
    double relative_error = (x - xe).norm() / (xe).norm();
    cout << relative_error << endl;
}
The matrix ex15 can be downloaded from this page. It is a symmetric, positive definite matrix. Can anyone help me to solve the problem? Thank you in advance for your help.
According to this page, ex15 is not full rank. You should check that each step went well:
SimplicialLDLT<Eigen::SparseMatrix<double> > chol(mat);
if (chol.info() != Eigen::Success)
    return 1;  // decomposition failed
VectorXd x = chol.solve(b);
if (chol.info() != Eigen::Success)
    return 1;  // solving failed
and then check that you got one solution (if the matrix is not full rank but at least one solution exists, then there exists a whole subspace of solutions):
cout << (mat*x-b).norm()/b.norm() << "\n";
I'm trying to use the Spectra 3.5 library on my Linux machine, and the SparseGenMatProd wrapper for the matrix-vector multiplication only seems to work when the sparse matrix is in ColMajor format. Is this normal behavior, and if so, how can I fix it to accept RowMajor format? I've included a basic example where the output is "Segmentation fault (core dumped)". I've gone through several other posts and the documentation, but I can't seem to find an answer.
#include <Eigen/Core>
#include <Eigen/SparseCore>
#include <GenEigsSolver.h>
#include <MatOp/SparseGenMatProd.h>
#include <iostream>

using namespace Spectra;

int main()
{
    // A band matrix with 1 on the main diagonal, 2 on the below-main subdiagonal,
    // and 3 on the above-main subdiagonal
    const int n = 10;
    Eigen::SparseMatrix<double, Eigen::RowMajor> M(n, n);
    M.reserve(Eigen::VectorXi::Constant(n, 3));
    for (int i = 0; i < n; i++)
    {
        M.insert(i, i) = 1.0;
        if (i > 0)
            M.insert(i - 1, i) = 3.0;
        if (i < n - 1)
            M.insert(i + 1, i) = 2.0;
    }

    // Construct matrix operation object using the wrapper class SparseGenMatProd
    SparseGenMatProd<double> op(M);

    // Construct eigen solver object, requesting the largest three eigenvalues
    GenEigsSolver< double, LARGEST_MAGN, SparseGenMatProd<double> > eigs(&op, 3, 6);

    // Initialize and compute
    eigs.init();
    int nconv = eigs.compute();

    // Retrieve results
    Eigen::VectorXcd evalues;
    if (eigs.info() == SUCCESSFUL)
        evalues = eigs.eigenvalues();

    std::cout << "Eigenvalues found:\n" << evalues << std::endl;
    return 0;
}
If you change the declaration of M to:
Eigen::SparseMatrix<double, Eigen::ColMajor> M(n, n);
it will work as expected.
Currently I'm working around this and converting my matrices to ColMajor, but I'd like to understand what's going on. Any help is much appreciated.
The API of SparseGenMatProd seems to be quite misleading. Looks like you have to specify you are dealing with row-major matrices through the second template parameter:
SparseGenMatProd<double,RowMajor> op(M);
otherwise M is implicitly converted to a temporary column-major matrix, which is then stored by const reference by op; but that temporary dies right after that line of code, leaving op with a dangling reference.
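Applied to the example above (a sketch, assuming the same Spectra 0.x-era API as in the question), the solver's template argument has to name the row-major wrapper type as well:

// Both the wrapper and the solver must agree on the row-major operand type
Eigen::SparseMatrix<double, Eigen::RowMajor> M(n, n);
// ... fill M as in the question ...
SparseGenMatProd<double, Eigen::RowMajor> op(M);
GenEigsSolver<double, LARGEST_MAGN, SparseGenMatProd<double, Eigen::RowMajor> > eigs(&op, 3, 6);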
I have a question about the Eigen library in C++. I want to calculate the inverse of a sparse matrix.
When I used a dense matrix in Eigen, I could use the .inverse() operation to calculate its inverse.
But for a sparse matrix, I cannot find an inverse operation anywhere. Does anyone know how to calculate the inverse of a sparse matrix? Please help me.
You cannot do it directly, but you can always calculate it, using one of the sparse solvers. The idea is to solve A*X=I, where I is the identity matrix. If there is a solution, X will be your inverse matrix.
The eigen documentation has a page about sparse solvers and how to use them, but the basic steps are as follows:
// Factorize A with your sparse solver of choice
SolverClassName<SparseMatrix<double> > solver;
solver.compute(A);
// Build a sparse identity and solve A * X = I
SparseMatrix<double> I(A.rows(), A.cols());
I.setIdentity();
SparseMatrix<double> A_inv = solver.solve(I);
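For instance, with SparseLU as one concrete choice of SolverClassName (a sketch assuming A is an n x n invertible Eigen::SparseMatrix<double>):

#include <Eigen/SparseLU>

Eigen::SparseLU<Eigen::SparseMatrix<double> > solver;
solver.compute(A);                        // factorize A
if (solver.info() != Eigen::Success) {
    // decomposition failed; A may be singular
}
Eigen::SparseMatrix<double> I(A.rows(), A.cols());
I.setIdentity();
Eigen::SparseMatrix<double> A_inv = solver.solve(I);  // solves A * X = I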
It's not mathematically meaningful: a sparse matrix does not necessarily have a sparse inverse (the inverse of a banded matrix, for example, is generally dense). That's why the method is not available.
A small extension on #Soheib's and #MatthiasB's answers: if you're using Eigen::SparseMatrix<float>, it's better to use SparseLU rather than SimplicialLLT or SimplicialLDLT; the latter two produced wrong answers for me on float matrices.
Be warned that the inverse of a sparse matrix is not necessarily sparse, so if you're working with large matrices (which is likely if you're using sparse representations), then computing the inverse is going to be expensive. Think carefully about whether you really need the actual matrix inverse. If you're going to use the inverse to solve a system of equations, then you don't need to compute it and multiply it out; use the method typically named solve and supply the right-hand side of the equation. If you need the inverse of the Fisher matrix for covariances, try to approximate it instead.
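As a sketch of that solve-instead-of-invert point (assuming a SparseLU factorization and a right-hand-side vector b):

#include <Eigen/SparseLU>

// x = A^-1 * b without ever materializing A^-1
Eigen::SparseLU<Eigen::SparseMatrix<double> > solver;
solver.compute(A);
Eigen::VectorXd x = solver.solve(b);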
Here is an example of computing the inverse of a sparse complex matrix.
I used the SimplicialLLT class; you can find other classes here:
http://eigen.tuxfamily.org/dox-devel/group__TopicSparseSystems.html
That page can help you pick the proper class for your work (speed, accuracy, and the dimensions of your matrix).
////////////////////// In His Name //////////////////////
#include <iostream>
#include <vector>
#include <Eigen/Dense>
#include <Eigen/Sparse>

using namespace std;
using namespace Eigen;

int main()
{
    SparseMatrix< complex<float> > A(4, 4);
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++) {
            A.coeffRef(i, i) = i + j;
        }
    }
    A.insert(2, 1) = {2, 1};
    A.insert(3, 0) = {0, 0};
    A.insert(3, 1) = {2.5, 1};
    A.insert(1, 3) = {2.5, 1};

    SimplicialLLT<SparseMatrix<complex<float> > > solverA;
    A.makeCompressed();
    solverA.compute(A);
    if (solverA.info() != Success) {
        cout << "Oh: Very bad" << endl;
    }

    // The identity must have the same scalar type as A for solve() to compile
    SparseMatrix<complex<float> > eye(4, 4);
    eye.setIdentity();
    SparseMatrix<complex<float> > inv_A = solverA.solve(eye);

    cout << "A:\n" << A << endl;
    cout << "inv_A\n" << inv_A << endl;
}
I am trying to do a simple matrix inversion operation using Boost. Basically, what I am trying to find is inverted_matrix = inverse(trans(matrix) * matrix).
But I am getting this error:
Check failed in file boost_1_53_0/boost/numeric/ublas/lu.hpp at line 299:
detail::expression_type_check (prod (triangular_adaptor<const_matrix_type, upper> (m), e), cm2)
terminate called after throwing an instance of 'boost::numeric::ublas::internal_logic'
  what(): internal logic
Aborted (core dumped)
My attempt:
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>
#include <boost/numeric/ublas/io.hpp>
#include <boost/numeric/ublas/vector_proxy.hpp>
#include <boost/numeric/ublas/triangular.hpp>
#include <boost/numeric/ublas/lu.hpp>

namespace ublas = boost::numeric::ublas;

template<class T>
bool InvertMatrix (const ublas::matrix<T>& input, ublas::matrix<T>& inverse) {
    using namespace boost::numeric::ublas;
    typedef permutation_matrix<std::size_t> pmatrix;

    // create a working copy of the input
    matrix<T> A(input);
    // create a permutation matrix for the LU-factorization
    pmatrix pm(A.size1());
    // perform LU-factorization
    int res = lu_factorize(A, pm);
    if (res != 0)
        return false;
    // create identity matrix of "inverse"
    inverse.assign(ublas::identity_matrix<T>(A.size1()));
    // backsubstitute to get the inverse
    lu_substitute(A, pm, inverse);
    return true;
}

int main() {
    using namespace boost::numeric::ublas;
    matrix<double> m(4, 5);
    vector<double> v(4);
    vector<double> thetas;

    m(0,0) = 1; m(0,1) = 2104; m(0,2) = 5; m(0,3) = 1; m(0,4) = 45;
    m(1,0) = 1; m(1,1) = 1416; m(1,2) = 3; m(1,3) = 2; m(1,4) = 40;
    m(2,0) = 1; m(2,1) = 1534; m(2,2) = 3; m(2,3) = 2; m(2,4) = 30;
    m(3,0) = 1; m(3,1) = 852;  m(3,2) = 2; m(3,3) = 1; m(3,4) = 36;
    std::cout << m << std::endl;

    matrix<double> product = prod(trans(m), m);
    std::cout << product << std::endl;

    matrix<double> inversion(5, 5);
    bool inverted;
    inverted = InvertMatrix(product, inversion);
    std::cout << inversion << std::endl;
}
Boost uBLAS has runtime checks to ensure, among other things, numerical stability.
If you look at the source of the error, you can see that it tries to verify that the triangular solve is consistent, i.e. that U*X = B still holds to within some epsilon after computing X = U^-1 * B (or something like that). If your data deviates too much numerically, this check will not hold.
You can disable the checks via -DBOOST_UBLAS_NDEBUG, or tune BOOST_UBLAS_TYPE_CHECK_EPSILON and BOOST_UBLAS_TYPE_CHECK_MIN.
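For example, as a sketch (the macro must be defined before any uBLAS header is included):

// Disable uBLAS runtime type/stability checks for this translation unit
#define BOOST_UBLAS_NDEBUG
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/lu.hpp>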
As m has only 4 rows, prod(trans(m), m) cannot have a rank higher than 4, and since the product is a 5x5 matrix, it must be singular (i.e. its determinant is 0); calculating the inverse of a singular matrix is like dividing by 0. Add independent rows to m to resolve this singularity problem.
I think your matrix dimensions, 4 by 5, caused the error. As Maarten Hilferink mentioned, you may try with a square matrix, such as 5 by 5. Here are the requirements for a matrix to have an inverse:
The matrix must be square (same number of rows and columns).
The determinant of the matrix must not be zero. Just as a real number must be nonzero to have a reciprocal, a matrix must have a nonzero determinant to have an inverse.
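Tying this back to the code in the question: the return value of InvertMatrix already signals the singular case, so a sketch of a guard would be:

matrix<double> product = prod(trans(m), m);  // 5x5 but rank <= 4, hence singular
matrix<double> inversion(5, 5);
if (!InvertMatrix(product, inversion))
    std::cerr << "product is singular, no inverse exists\n";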