Let's say I have two eigen matrices A and B, and I want to create a third matrix defined by
C(i,j) = 5.0 if A(i,j) > B(i,j), 0 otherwise
I guess it is possible to do it without an explicit for loop, but I am not very proficient with Eigen yet. What would be the best approach?
Assuming A, B and C are MatrixXd you can do:
C = (A.array() > B.array()).cast<double>() * 5.0;
Related
I have MatrixXf A & MatrixXf B and I want to create a new matrix MatrixXf C with max values for each index i.e.
C(i,j) = max(A(i,j), B(i,j));
Does Eigen have a function to do this?
A.cwiseMax(B) can also be used.
Check http://eigen.tuxfamily.org/dox/group__QuickRefPage.html#title6 for reference
in Matlab if I write
A = B*inv(C)
(with A, B and C being square matrices), I get a warning that matrix inversion should be replaced with a matrix "right-division" (due to being numerically more stable and accurate) like
A = B/C
In my Eigen C++ project I have the following code:
Eigen::MatrixXd A = B * C.inverse();
and I was wondering if there is an equivalent replacement in Eigen for taking the matrix inverse, analogous to the one in Matlab mentioned above?
I know that matrix "left-division" can be expressed by solving a system of equations for expressions like
A = inv(C)*B
but what about
A = C*inv(B)
in Eigen?
At the moment the most efficient way to do this is to rewrite your equation as
A^T = inv(C^T) * B^T
A = (inv(C^T) * B^T)^T
which can be implemented in Eigen as
SomeDecomposition decompC(C); // decompose C with a suiting decomposition
Eigen::MatrixXd A = decompC.transpose().solve(B.transpose()).transpose();
There were/are plans that, eventually, one will be able to write
A = B * decompC.inverse();
and Eigen will evaluate this in the most efficient way.
Say we have a matrix A of dimension MxN and a vector a of dimension Mx1. In Matlab, to multiply 'a' with all columns of 'A', we can do
bsxfun(@times, a, A)
Is there an equivalent approach in Eigen, without having to loop over the columns of the matrix?
I'm trying to do
M = bsxfun(@times, a, A) + bsxfun(@times, a2, A2)
and hoping that Eigen's lazy evaluation will make it more efficient.
Thanks!
You can do:
M = A.array().colwise()*a.array();
The .array() is needed to redefine the semantic of operator* to coefficient-wise products (not needed if A and a are Array<> objects).
In this special case, it is probably better to write it as a scaling operation:
M = a.asDiagonal() * A;
In both cases you won't get any temporary thanks to lazy evaluation.
I can't seem to make it work, should it?
e.g.:
Vector3d a;
Vector3d b;
...
double c = a.transpose() * b; // Doesn't work
double c = a.dot(b); // Seems to work
I'm coming from MATLAB where a'*b is the thing. I can deal with using dot if needed, but I'd like to know if I'm just doing something dumb.
In matlab, a'*b is syntactic sugar for dot(a, b). Note that the requirement for vectors is "they must have the same length" and not that one is a row vector, and one a column. This is the same as Eigen's a.dot(b).
In Eigen, a.transpose() * b works, it just doesn't return a double but rather a 1x1 matrix. If you wrote it as MatrixXd c = a.transpose() * b; or double c = (a.transpose() * b)[0]; it should work as expected.
The paragraph above applied to Eigen 2 (which the OP was apparently using). Since Eigen 3, @ggael is of course right: a 1x1 product result converts implicitly to a scalar. This answer addressed the general case where the dimensions of a and b are not known at compile time; when Vector3d or VectorXd are used, double c = a.transpose() * b; works as well, contrary to what the question states. With versions <= 2.0.15, the original answer is correct without any reservations.
I need to convert a MATLAB code into C++, and I'm stuck with this instruction:
a = K\F
, where K is a sparse matrix of size n x n, and F is a column vector of size n.
I know it's easy to solve that using the Eigen library - I have tried the fullPivLu() method, and I've been able to build a working snippet using a Matrix and a Vector.
However, my K is a SparseMatrix<double> (while F is a VectorXd). My declarations:
SparseMatrix<double> K(nec, nec);
VectorXd F(nec);
and it seems that SparseMatrix doesn't have the fullPivLu() method, nor the lu() one.
I've tried, in fact, these two different approaches, taken from the documentation:
//1.
MatrixXd x = K.fullPivLu().solve(F);
//2.
VectorXf x;
K.lu().solve(F, &x);
Neither works, because fullPivLu() and lu() are not members of Eigen::SparseMatrix<_Scalar>.
So, I am asking: is there a way to solve a system of linear equations (the MATLAB's mldivide, or '\'), using Eigen for C++, with K being a sparse matrix?
Thank you for any help.
Would Eigen::SparseLU work for you?