Convert a dolfin::Matrix into an Eigen matrix - C++

I'm coding in C++ and I'm using FEniCS 2016.1.0. Part of my code is:
Matrix A;
Vector f;
std::vector<std::shared_ptr<const DirichletBC>> dirichlet_matrici({dirichlet});
assemble_system(A,f,a,L,dirichlet_matrici);
solve(A, *(u.vector()), f);
I want to solve the system with Eigen, so I need to convert the dolfin::Matrix A and the dolfin::Vector f into Eigen objects. Is this possible?
Thank you for your help

I am not sure if it is possible to do a direct conversion. However, it is possible to create a new Eigen matrix and then feed each individual value from the first matrix into the second.

Related

How can I multiply two Eigen::DiagonalMatrix and add the result to an Eigen::SparseMatrix?

I'm trying to write a solver for a linear system, and coming from Matlab/NumPy and the like, I find Eigen's types a bit limited.
My current issue resolves around this:
D * DD + S
Where D and DD are of type Eigen::DiagonalMatrix<double, Eigen::Dynamic, Eigen::Dynamic> and S is an Eigen::SparseMatrix.
Is there an (efficient) way to do this? It seems rather basic, so I must be missing something. I'm willing to give up D and DD being DiagonalMatrix and make them SparseMatrix instead, as long as the above expression isn't complicated too much.
Assuming the sparse matrix S already has non-zero coefficients along the diagonal you can do:
S.diagonal() += D.cwiseProduct(DD);

Eigen equivalent to Octave/MATLAB mldivide for rectangular matrices

I'm using Eigen v3.2.7.
I have a medium-sized rectangular matrix X (170x17) and a column vector Y (170x1), and I'm trying to solve the system using Eigen. Octave solves this problem fine using X\Y, but Eigen is returning incorrect values for these matrices (but not for smaller ones) - however, I suspect that it's how I'm using Eigen, rather than Eigen itself.
auto X = Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic>{170, 17};
auto Y = Eigen::Matrix<T, Eigen::Dynamic, 1>{170};
// Assign their values...
const auto theta = X.colPivHouseholderQr().solve(Y).eval(); // Wrong!
According to the Eigen documentation, the ColPivHouseholderQR solver is for general matrices and pretty robust, but to make sure I've also tried the FullPivHouseholderQR. The results were identical.
Is there some special magic that Octave's mldivide does that I need to implement manually for Eigen?
Update
This spreadsheet has the two input matrices, plus Octave's and my result matrices.
Replacing auto doesn't make a difference, nor would I expect it to because construction cannot be a lazy operation, and I have to call .eval() on the solve result because the next thing I do with the result matrix is get at the raw data (using .data()) on tail and head operations. The expression template versions of the result of those block operations do not have a .data() member, so I have to force evaluation beforehand - in other words theta is the concrete type already, not an expression template.
The result for (X*theta-Y).norm()/Y.norm() is:
2.5365e-007
And the result for (X.transpose()*X*theta-X.transpose()*Y).norm() / (X.transpose()*Y).norm() is:
2.80096e-007
As I'm currently using single precision float for my basic numerical type, that's pretty much zero for both.
According to your verifications, the solution you get is perfectly fine. If you want more accuracy, then use double-precision floating point numbers. Note that MATLAB/Octave use double precision by default.
Moreover, it may well be that your problem is not full rank, in which case it admits an infinite number of solutions. ColPivHouseholderQR picks one, somewhat arbitrarily. On the other hand, mldivide will pick the minimal-norm one, which you can also obtain with Eigen::BDCSVD (Eigen 3.3) or the slower Eigen::JacobiSVD.

Element wise multiplication between matrices in BLAS?

I'm starting to use BLAS functions in C++ (specifically Intel MKL) to create faster versions of some of my old Matlab code.
It's been working out well so far, but I can't figure out how to perform elementwise multiplication on two matrices (A .* B in Matlab).
I know gemv does something similar between a matrix and a vector, so should I just break one of my matrices into vectors and call gemv repeatedly? I think this would work, but I feel like there should be something built in for this operation.
Use the Hadamard product. In MKL it's v?Mul. E.g. for doubles:
vdMul( n, a, b, y );
in Matlab notation it performs:
y[1:n] = a[1:n] .* b[1:n]
In your case you can treat matrices as vectors.

CUBLAS - matrix addition.. how?

I am trying to use CUBLAS to sum two big matrices of unknown size. I need fully optimized code (if possible), so I chose not to rewrite the matrix addition code (simple) but to use CUBLAS, in particular the cublasSgemm function, which allows summing A and C (if B is an identity matrix): C = alpha*op(A)*op(B) + beta*C
The problem is: C and C++ store matrices in row-major format, while cublasSgemm is intended (for Fortran compatibility) to work in column-major format. You can specify whether A and B are to be transposed first, but you can NOT indicate that C should be transposed. So I'm unable to complete my matrix addition.
I can't transpose the C matrix myself, because the matrix can be as large as 20000x20000.
Any idea on how to solve please?
cublas<t>geam (e.g. cublasSgeam for floats) was added in CUBLAS 5.0.
It computes the weighted sum of two optionally transposed matrices.
If you're just adding the matrices, it doesn't actually matter. You give it alpha, Aij, beta, and Cij. It thinks you're giving it alpha, Aji, beta, and Cji, and gives you what it thinks is Cji = beta Cji + alpha Aji. But that's the correct Cij as far as you're concerned. My worry is when you start going to things which do matter -- like matrix products. There, there's likely no working around it.
But more to the point, you don't want to be using GEMM to do matrix addition -- you're doing a completely pointless matrix multiplication (which takes ~20,000^3 operations and many passes through memory) for an operation which should only require ~20,000^2 operations and a single pass! Treat the matrices as 20,000^2-long vectors and use saxpy.
Matrix multiplication is memory-bandwidth intensive, so there is a huge (factor of 10x or 100x) difference in performance between coding it yourself and using a tuned version. Ideally, you'd change the structures in your code to match the library. If you can't, in this case you can manage just by using linear algebra identities. The C-vs-Fortran ordering means that when you pass in A, CUBLAS "sees" A^T (A transpose). Which is fine; we can work around it. If what you want is C = A.B, pass in the matrices in the opposite order, B then A. Then the library sees (B^T . A^T) and calculates C^T = (A.B)^T; and when it passes back C^T, you get (in your ordering) C. Test it and see.

Matrix Template Library matrix inversion

I'm trying to invert a matrix with Boost boost_1_37_0 and MTL mtl4-alpha-1-r6418. I can't seem to locate the matrix inversion code. I've googled for examples, and they reference lu.h, which seems to be missing in the above release(s). Any hints?
@Matt suggested copying lu.h, but that seems to be from MTL2 rather than MTL4. I'm having trouble compiling MTL2 with VS05 or higher.
So, any idea how to do a matrix inversion in MTL4?
Update: I think I understand Matt better and I'm heading down this ITL path.
Looks like you use lu_factor, and then lu_inverse. I don't remember what you have to do with the pivots, though. From the documentation.
And yeah, like you said, it looks like their documentation says you need lu.h, somehow:
How do I invert a matrix?
The first question you should ask yourself is whether you want to really compute the inverse of a matrix or if you really want to solve a linear system. For solving a linear system of equations, it is not necessary to explicitly compute the matrix inverse. Rather, it is more efficient to compute triangular factors of the matrix and then perform forward and backward triangular solves with the factors. More about solving linear systems is given below. If you really want to invert a matrix, there is a function lu_inverse() in mtl/lu.h.
If nothing else, you can look at lu.h on their site.
I've never used boost or MTL for matrix math but I have used JAMA/TNT.
This page http://wiki.cs.princeton.edu/index.php/TNT shows how to take a matrix inverse. The basic method is library-independent:
1. Factor matrix M into XY, where X and Y are appropriate factorizations (LU would be OK, but for numerical stability I would think you would want to use QR or maybe SVD).
2. Solve I = MN = (XY)N for N, with the prerequisite that M has been factored; the library should have a routine for this.
In MTL4 use this:
mtl::matrix::inv(Matrix const &A, MatrixOut &Inv);
Here is a link to the api.