C++ invert matrix

The following dynamic array contains a non-symmetric n*n matrix (with n <=100):
int **matrix;
matrix = new int*[n];
for (int i = 0; i < n; i++)
    matrix[i] = new int[n];
Is there an extremely easy way to invert it? Ideally I'd only use something from the STL or download a single header file.

Using Eigen.
http://eigen.tuxfamily.org/index.php?title=Main_Page
You can copy (or, if the data is stored contiguously, map) your array into an Eigen matrix and then perform an efficient matrix inversion.
Eigen is header-only, so you only need to include it.
I'll add that if you are inverting the matrix in order to solve a linear system, it's usually better to use a matrix decomposition that exploits the properties of the matrix.
http://eigen.tuxfamily.org/dox/TutorialLinearAlgebra.html
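A minimal sketch of that route, assuming the per-row int** layout from the question (separate row allocations rule out a direct Eigen::Map, so the values are copied into a dense matrix first):

#include <Eigen/Dense>

// Copy the row-pointer array into a dense Eigen matrix and invert it.
// Eigen::Map would only work on contiguous storage.
Eigen::MatrixXd invert(int **matrix, int n)
{
    Eigen::MatrixXd A(n, n);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            A(i, j) = static_cast<double>(matrix[i][j]);
    return A.inverse(); // for solving A*x = b, prefer e.g. A.partialPivLu().solve(b)
}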

Not extremely easy, but it works: Numerical Recipes in C, page 48, using LU decomposition.

Related

How to optimize matrix product of sparse and dense matrices in eigen when the result is selfadjoint

I am working with square matrices of type std::complex<double>. In particular, a sparse matrix S and a self-adjoint, dense matrix H, and I would like to compute the product of the form S*H*S.adjoint() and add it to another dense, self-adjoint matrix J. So a straightforward way to do this in Eigen would be:
#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <complex>
Eigen::Matrix<std::complex<double>, Eigen::Dynamic, Eigen::Dynamic> J, H;
Eigen::SparseMatrix<std::complex<double>> S;
// ...
// Set H, and J to be some self-adjoint matrices of the same size, and S also same
// size, but not necessarily self-adjoint.
// ...
J += S*H*S.adjoint();
But because H and J are self-adjoint and by the form of the product S*H*S.adjoint(), we know that J will remain self-adjoint after the operation. So there is really no need to compute the entire dense matrix result S*H*S.adjoint() and we could probably save some computation time by only computing the lower- or upper-triangular part of the result and adding that to the corresponding part of the matrix J. Eigen provides an API for this sort of optimization, but I'm not able to use it in this case. For example if instead of the sparse matrix S we had a dense matrix D, then doing
J += D*H*D.adjoint();
should be less efficient than
J.triangularView<Eigen::Lower>() = D*H*D.adjoint();
or
J.triangularView<Eigen::Lower>() = D*H.selfadjointView<Eigen::Lower>()*D.adjoint();
but the API doesn't seem to provide this level of optimization when computing the former product with a sparse matrix S instead of the dense matrix D. That is,
J.triangularView<Eigen::Lower>() = S*H*S.adjoint();
doesn't compile. So my question is: is there a way to tell Eigen to only compute the lower- (or upper-) triangular part of the matrix S*H*S.adjoint() and add it to the lower- (or upper-) triangular part of the self-adjoint matrix J to improve performance?
Perhaps even better would be an overload of a rank 1 update that looked something like
J.selfadjointView<Eigen::Lower>().rankUpdate(S,H);
Of course the current API doesn't support this form, and getting the desired result would require taking the square root of H (call it G) and doing
J.selfadjointView<Eigen::Lower>().rankUpdate(S*G);
but although this should give the correct result, taking the square root is probably super expensive compared to the rest, so this would probably be slower.
The best performance I've found so far is
J.noalias() += S*H*S.adjoint();
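For reference, a self-contained sketch of the setup and that best-performing form; the dimensions and the nonzero values of S are arbitrary, just enough to compile and run:

#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <complex>

int main()
{
    using Cplx = std::complex<double>;
    using Dense = Eigen::Matrix<Cplx, Eigen::Dynamic, Eigen::Dynamic>;
    const int n = 4;

    Dense H = Dense::Random(n, n);
    H = (H + H.adjoint()).eval();        // make H self-adjoint
    Dense J = Dense::Zero(n, n);         // self-adjoint accumulator

    Eigen::SparseMatrix<Cplx> S(n, n);   // arbitrary sparse matrix
    S.insert(0, 1) = Cplx(1.0, 2.0);
    S.insert(2, 3) = Cplx(-0.5, 0.25);
    S.makeCompressed();

    // noalias() avoids an extra temporary for the result, but the full dense
    // product S*H*S.adjoint() is still computed.
    J.noalias() += S * H * S.adjoint();

    return 0;
}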

Any C++/C equivalent function with sparse in MATLAB

I am trying to port MATLAB (.m) code to C or C++.
In my code there is a line
A = sparse(I,J,IA,nR,nC);
which builds the sparse matrix A of size nR x nC from the row indices I, column indices J, and values IA.
Is there any equivalent code with C++ or C?
A naïve algorithm that reproduces the result in a full (dense) matrix is:
double *A = malloc(sizeof(double) * nR * nC);
memset(A, 0, sizeof(double) * nR * nC); /* zero the whole matrix, not just the first element */
for (size_t k = 0; k < size_of_IA; k++)
    A[I[k] * nC + J[k]] += IA[k];
Note that if there are repeated index pairs, the values are accumulated rather than overwritten.
Eigen is an example of a C++ math matrix library that contains sparse matrices. It overloads operators to make them feel like a built-in feature.
There are many C and C++ matrix libraries. None ship as part of the standard library, nor is there anything built into the language.
Writing a good sparse matrix library would be quite hard; your best bet is finding a pre-written one. (Library recommendation questions are off-topic here.)
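If Eigen is an option, a sketch of a direct equivalent, assuming the I, J, IA arrays hold 0-based indices in std::vectors (adjust if they are still 1-based from MATLAB):

#include <Eigen/Sparse>
#include <cstddef>
#include <vector>

// Equivalent of A = sparse(I, J, IA, nR, nC): setFromTriplets sums duplicate
// (row, col) entries, matching MATLAB's accumulation behaviour.
Eigen::SparseMatrix<double> makeSparse(const std::vector<int> &I,
                                       const std::vector<int> &J,
                                       const std::vector<double> &IA,
                                       int nR, int nC)
{
    std::vector<Eigen::Triplet<double>> triplets;
    triplets.reserve(IA.size());
    for (std::size_t k = 0; k < IA.size(); ++k)
        triplets.emplace_back(I[k], J[k], IA[k]);
    Eigen::SparseMatrix<double> A(nR, nC);
    A.setFromTriplets(triplets.begin(), triplets.end());
    return A;
}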

Submatrix view from indices in Eigen

Is it possible in Eigen to do the equivalent of the following operation in Matlab?
A=rand(10,10);
indices = [2,5,6,8,9];
B=A(indices,indices)
I want a submatrix, as a view on the original matrix, with given non-consecutive indices.
The best option would be a shared-memory view of the original matrix; is this possible?
I've figured out a method that works, but it is not very fast, since it involves non-vectorized for loops:
Eigen::MatrixXi slice(const Eigen::MatrixXi &A, const std::set<int> &indices)
{
    int n = indices.size();
    Eigen::MatrixXi B;
    B.setZero(n, n);
    std::set<int>::const_iterator iInd1 = indices.begin();
    for (int i = 0; i < n; ++i)
    {
        std::set<int>::const_iterator iInd2 = indices.begin();
        for (int j = 0; j < n; ++j)
        {
            B(i, j) = A(*iInd1, *iInd2); // plain element access works on a const matrix
            ++iInd2;
        }
        ++iInd1;
    }
    return B;
}
How can this be made faster?
Make your matrix traversal column-major, which is the default storage order in Eigen: http://eigen.tuxfamily.org/dox-devel/group__TopicStorageOrders.html
Disable debug asserts with EIGEN_NO_DEBUG (see http://eigen.tuxfamily.org/dox/TopicPreprocessorDirectives.html), as the comment by Deepfreeze suggested.
It is very non-trivial to implement a vectorized version, since the elements are not contiguous in general. If you are up to it, take a look at the AVX2 gather instructions (provided your CPU supports AVX2).
To implement a matrix view (what you called a shared-memory view), you'd need to implement an Eigen expression, which is not too hard if you are well versed in C++ and know the Eigen codebase. I can help you get started if you want.
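If you can use Eigen 3.4 or newer (likely newer than what the question used), slicing with index arrays is built in and replaces the manual loops; a sketch:

#include <Eigen/Dense>
#include <vector>

// Assumes Eigen 3.4+, where A(rowIndices, colIndices) is a built-in indexed
// view; returning MatrixXi materializes the submatrix as a copy.
Eigen::MatrixXi slice(const Eigen::MatrixXi &A, const std::vector<int> &indices)
{
    return A(indices, indices);
}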

Use Eigen library to perform sparseLU and display L & U?

I'm new to Eigen and I'm working on a sparse LU problem.
I found that if I create a vector b(n), Eigen can compute x(n) for the equation Ax = b.
Questions:
How do I display L and U, the factors of the original matrix A?
How do I insert non-zeros in Eigen? Right now I'm just testing with small sparse matrices, so I insert the non-zeros one by one, but if I have a large-scale matrix, how can I load the matrix into my program?
I realize that this question was asked a long time ago. Apparently, referring to the Eigen documentation, the L factor is
"an expression of the matrix L, internally stored as supernodes. The only operation available with this expression is the triangular solve."
So there is no way to convert it to an actual sparse matrix in order to display it. Eigen::FullPivLU performs a dense decomposition and is of no use to us here: using it on a large sparse matrix, we would quickly run out of memory while converting to dense, and the factorization would take several orders of magnitude longer.
An alternative solution is to use the CSparse library from SuiteSparse:
extern "C" { // we are in C++ now, since you are using Eigen
#include <csparse/cs.h>
}
const cs *p_matrix = ...; // perhaps possible to use Eigen::internal::viewAsCholmod()
css *p_symbolic_decomposition;
csn *p_factor;
p_symbolic_decomposition = cs_sqr(2, p_matrix, 0); // ordering: 1 = A + AT, 2 = ATA
p_factor = cs_lu(p_matrix, p_symbolic_decomposition, 1.0); // tol = 1.0 for ATA ordering, or use A + AT ordering with a small tol if the matrix has a mostly symmetric nonzero pattern and large enough entries on its diagonal
// calculate ordering, symbolic decomposition and numerical decomposition
cs *L = p_factor->L, *U = p_factor->U;
// there they are (perhaps can use Eigen::internal::viewAsEigen())
cs_sfree(p_symbolic_decomposition); cs_nfree(p_factor);
// clean up (deletes the L and U matrices)
Note that although this does not use explicit vectorization as some Eigen functions do, it is still fairly fast. CSparse is also very compact: it is just a single header and about thirty .c files with no external dependencies, so it is easy to incorporate into any C++ project. There is no need to include all of SuiteSparse.
If you apply Eigen::FullPivLU::matrixLU() to the original matrix, you'll get the combined LU decomposition matrix. To display L and U separately, you can use the triangularView<mode>() method. In the Eigen wiki you can find a good example of it. How to insert non-zeros into matrices depends on the numbers you want to put in. Eigen has convenient syntax, so you can easily insert values in a loop:
for (int i = 0; i < size; i++)
{
    for (int j = size - 1; j > someNumber; j--) // start at size-1 to stay in bounds
    {
        matrix(i, j) = yourClass.getNextNumber();
    }
}
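On the second question (filling a large matrix), a sketch of the reserve-and-insert pattern; the matrix size and values below are arbitrary, and setFromTriplets is the other common option for bulk input:

#include <Eigen/Sparse>

// Reserve an estimate of non-zeros per column so insert() does not keep
// reallocating, insert the entries, then compress the storage.
Eigen::SparseMatrix<double> buildExample(int n)
{
    Eigen::SparseMatrix<double> A(n, n);
    A.reserve(Eigen::VectorXi::Constant(n, 2)); // ~2 non-zeros per column here
    for (int i = 0; i < n; ++i)
    {
        A.insert(i, i) = 2.0;                   // arbitrary values, just to show the pattern
        if (i + 1 < n)
            A.insert(i, i + 1) = -1.0;
    }
    A.makeCompressed();
    return A;
}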

Can I solve a system of linear equations, in the form Ax = b with A being sparse, using Eigen?

I need to convert a MATLAB code into C++, and I'm stuck with this instruction:
a = K\F
, where K is a sparse matrix of size n x n, and F is a column vector of size n.
I know it's easy to solve that using the Eigen library - I have tried the fullPivLu() method, and I've been able to build a working snippet using a Matrix and a Vector.
However, my K is a SparseMatrix<double> (while F is a VectorXd). My declarations:
SparseMatrix<double> K(nec, nec);
VectorXd F(nec);
and it seems that SparseMatrix doesn't have the fullPivLu() method, nor the lu() one.
I've tried, in fact, these two different approaches, taken from the documentation:
//1.
MatrixXd x = K.fullPivLu().solve(F);
//2.
VectorXf x;
K.lu().solve(F, &x);
They don't work, because fullPivLu() and lu() are not members of 'Eigen::SparseMatrix<_Scalar>'
So, I am asking: is there a way to solve a system of linear equations (MATLAB's mldivide, or '\'), using Eigen for C++, with K being a sparse matrix?
Thank you for any help.
Would Eigen::SparseLU work for you?
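A sketch of what that might look like, reusing the question's K and F names and assuming K has already been filled and compressed:

#include <Eigen/Sparse>
#include <stdexcept>

// Sparse replacement for a = K\F: factorize K with SparseLU, then solve.
Eigen::VectorXd mldivide(const Eigen::SparseMatrix<double> &K,
                         const Eigen::VectorXd &F)
{
    Eigen::SparseLU<Eigen::SparseMatrix<double>> solver;
    solver.compute(K);                   // analyzePattern + factorize
    if (solver.info() != Eigen::Success)
        throw std::runtime_error("SparseLU factorization failed");
    return solver.solve(F);              // a such that K * a = F
}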