Apply function to all elements in Eigen Matrix without loop - c++

I have an Eigen::Matrix and I would like to generate a new matrix whose elements are produced by calling some member function on each element of the original matrix:
Matrix< Foo,2,2 > m = ...;
Matrix< int, 2, 2> new_m;
for each m[i][j]:
new_m[i][j] = m[i][j].member_of_foo_returns_int()
I had a look at Eigen::unaryExpr, but it seems the element type of the result has to be the same as the input. However, I have Foo objects in the first matrix and want an int in the new matrix. Is this possible without a vanilla loop?

You can pass a lambda expression to unaryExpr, like so:
Eigen::Matrix<int,2,2> new_m = m.unaryExpr(
    [](const Foo& x) {
        return x.member_of_foo_returns_int();
    });
If you can't use C++11, you need to write a small helper function:
int func_wrapper(const Foo& x) {
    return x.member_of_foo_returns_int();
}
and pass that using std::ptr_fun (note that std::ptr_fun was deprecated in C++11 and removed in C++17):
Eigen::Matrix<int,2,2> new_m = m.unaryExpr(std::ptr_fun(func_wrapper));
For calling member functions there is actually a nice helper already in the standard library named std::mem_fun_ref (it takes a member function pointer and returns a functor object which is accepted by unaryExpr; like std::ptr_fun it was deprecated in C++11 and removed in C++17):
Eigen::Matrix<int,2,2> new_m = m.unaryExpr(
    std::mem_fun_ref(&Foo::member_of_foo_returns_int));
All these variants are type safe, i.e., trying to store the result in a non-int-Matrix will not compile.
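For reference, here is a minimal self-contained sketch of the lambda approach; it uses a plain double matrix converted to int (rather than the asker's Foo type) so that it compiles as-is:
#include <Eigen/Dense>
#include <cmath>
#include <iostream>

int main() {
    Eigen::Matrix2d m;
    m << 1.2, 2.7, 3.1, 4.9;
    // unaryExpr applies the lambda element-wise; the result's scalar type
    // is whatever the lambda returns (int here), so it can differ from the input.
    Eigen::Matrix2i rounded =
        m.unaryExpr([](double x) { return static_cast<int>(std::lround(x)); });
    std::cout << rounded << "\n"; // prints: 1 3
                                  //         3 5
}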


C++: Return a lambda expression from a function that captures parameter of the function

The following function is supposed to take the coefficients of a polynomial and create a function of time from them:
std::function<double(double)> to_equation(const std::vector<double>& coefficients)
{
    return [coefficients](double t)
    {
        auto total = 0.0;
        for (int i = 0; i < coefficients.size(); i++)
        {
            total += coefficients[i] * pow(t, i);
        }
        return total;
    };
}
It should be usable as follows:
std::vector<double> coefficients = {1.0,2.0,3.0};
auto f = to_equation(coefficients);
auto value = f(t);
The code does however not work as intended: at the time f(t) is executed, it is not the coefficients passed to to_equation(coefficients) that are used, but some totally different values seemingly captured from the context. What is happening and how can I fix that?
Well, you are returning a lambda that captures coefficients by value. If you pass some vector to the to_equation function, all values are copied, and the lambda no longer refers to the original vector.
I suggest this solution:
// auto is faster than std::function
auto to_equation(const std::vector<double>& coefficients)
{
    // Here, you capture by reference.
    // The lambda will use the vector passed in coefficients.
    return [&coefficients](double t)
    {
        // ...
    };
}
However, you must sometimes deal with code like this:
std::function<double(double)> f;
{
    std::vector<double> coeff{0.2, 0.4, 9.8};
    f = to_equation(coeff);
}
auto result = f(3);
This is bad: the vector coeff doesn't live long enough, and we refer to it after it has been destroyed.
I suggest adding this overload to your function:
// when a vector is moved into coefficients, move it into the lambda
auto to_equation(std::vector<double>&& coefficients)
{
    // Here, you capture by value.
    // The lambda will use its own copy.
    return [coeff = std::move(coefficients)](double t)
    {
        // ...
    };
}
Then, calling your function is possible in both ways:
std::vector<double> coeff{0.2, 0.4, 9.8};
auto f1 = to_equation(coeff);           // uses a reference to coeff
auto f2 = to_equation({0.2, 0.4, 9.8}); // uses the value moved into the lambda
You can capture by reference, instead of by value. But, of course, if the underlying vector goes out of scope and gets destroyed before the lambda gets invoked, you'll have a big mess on your hands.
The safest course of action is to use a std::shared_ptr<std::vector<double>> instead of a plain vector, and capture that by value. Then, the lambda will always, essentially, feed on whatever were the most recent set of coefficients, and won't blow up if it gets called after all other references to the underlying vector, from whatever code computed them, go out of scope.
(Of course, you have to keep in mind what's going to happen here if the lambda gets copied around, since all copies of the original lambda will be using the same vector).
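A rough sketch of that shared_ptr approach (the signature is my own illustration, not from the original post):
#include <cmath>
#include <functional>
#include <memory>
#include <vector>

std::function<double(double)> to_equation(std::shared_ptr<std::vector<double>> coefficients)
{
    // The lambda copies the shared_ptr, so the vector stays alive for as long
    // as the returned function object (or any copy of it) does.
    return [coefficients](double t)
    {
        double total = 0.0;
        for (std::size_t i = 0; i < coefficients->size(); ++i)
            total += (*coefficients)[i] * std::pow(t, static_cast<double>(i));
        return total;
    };
}
// usage: auto f = to_equation(std::make_shared<std::vector<double>>(std::vector<double>{1.0, 2.0, 3.0}));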
For more information, open the chapter of your C++ book that explains the difference between capturing by value and by reference, when using lambdas.

How do I pass an Eigen matrix row reference, to be treated as a vector?

I have a function that operates on a Vector reference, e.g.
void auto_bias(const Eigen::VectorXf& v, Eigen::Ref<Eigen::VectorXf> out)
{
    out = ...
}
and at some point I need to have this function operate on a Matrix row. Now, because the default memory layout is column-major, I can't just Map<> the data the row points to into a vector. So, how do I go about passing the row into the above function so that I can operate on it?
The not-so-pretty solution is to have a temp vector, e.g.
VectorXf tmpVec = matrix.row(5);
auto_bias(otherVector, tmpVec);
matrix.row(5) = tmpVec;
but is there a way of doing it directly?
You can modify your function to take a reference to the row type (which is a vector expression) instead of a vector. This is really only manageable with a template to infer that type for you:
#include <iostream>
#include <Eigen/Core>

template<typename V>
void set_row(V&& v) {
    v = Eigen::Vector3f(4.0f, 5.0f, 6.0f);
}

int main() {
    Eigen::Matrix3f m = Eigen::Matrix3f::Identity();
    set_row(m.row(1));
    std::cout << m;
    return 0;
}
You can allow Ref<> to have a non-default inner stride (also called increment), as follows:
Ref<VectorXf, 0, InnerStride<>>
See the example function foo3 in the Ref documentation.
The downside is a possible loss of performance even when you are passing a true VectorXf.
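A minimal sketch of what that variant looks like in practice (the body of auto_bias and the matrix sizes here are made up for illustration):
#include <Eigen/Dense>

void auto_bias(const Eigen::VectorXf& v,
               Eigen::Ref<Eigen::VectorXf, 0, Eigen::InnerStride<>> out)
{
    out = 2.0f * v; // placeholder for the real computation
}

int main()
{
    Eigen::MatrixXf matrix = Eigen::MatrixXf::Zero(6, 3);
    Eigen::VectorXf otherVector = Eigen::VectorXf::Ones(3);

    // A row of a column-major matrix has a non-unit inner stride, but
    // InnerStride<> lets Ref bind to it directly, with no temporary copy.
    auto_bias(otherVector, matrix.row(5));
    return 0;
}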

Use eigen matrix as argument for eigen array reference

I am using a library where a function takes array references and updates them:
void foo(ArrayXXd& A)
However, in my code I want to use
Matrix<double,Dynamic,Dynamic>
How can I call the function foo with a matrix? Can I map the matrix to an array somehow?
This is the compiler error:
error: invalid initialization of reference of type ‘Mat& {aka Eigen::Array<double, -1, -1>&}’ from expression of type ‘Eigen::Matrix<double, -1, -1>’
I did the following that seems to work, but I don't know if it is a general solution (different memory layouts and so on).
//X_IN is a Matrix<double,Dynamic,Dynamic> &
//Map Matrix to pointer
X_pntr = X_IN.data();
//Map pointer to Array
ArrayXXd X_array = Map<ArrayXXd>(X_pntr,X_IN.rows(),X_IN.cols());
foo(X_array);
Most objects in Eigen are expressions (more specifically, objects derived from MatrixBase). If you want to write a function that works for any type of Matrix/Array etc. and not be restricted only to e.g. Array, you need to write it in the following form:
template<typename T>
void foo(Eigen::MatrixBase<T>& A)
{
    // do something here with A
}
Now you can invoke foo with any matrix type: MatrixXd, MatrixXf, blocks of them and other writable expressions, you get the idea. (To also accept read-only expressions such as A*A, take the parameter as const Eigen::MatrixBase<T>&; the linked page also shows the idiom for functions that need to write to their argument.) See the official documentation for more details:
http://eigen.tuxfamily.org/dox/TopicFunctionTakingEigenTypes.html
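To make the idea concrete, here is a small self-contained sketch along the lines of that documentation page; the function names are made up, and the writable variant uses the const-ref plus const_cast idiom described there:
#include <Eigen/Dense>
#include <iostream>

// Read-only version: binds to plain matrices as well as expressions.
template <typename Derived>
double sum_of_squares(const Eigen::MatrixBase<Derived>& A)
{
    return A.array().square().sum();
}

// Writable version, using the const-ref + const_cast idiom from the linked
// Eigen documentation so that blocks and similar expressions can also be passed.
template <typename Derived>
void scale_in_place(const Eigen::MatrixBase<Derived>& A_, double factor)
{
    Eigen::MatrixBase<Derived>& A = const_cast<Eigen::MatrixBase<Derived>&>(A_);
    A *= factor;
}

int main()
{
    Eigen::MatrixXd m(2, 2);
    m << 1, 2, 3, 4;

    std::cout << sum_of_squares(m * m) << "\n"; // works on expressions too
    scale_in_place(m, 2.0);
    std::cout << m << "\n";
    return 0;
}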
After reading your comment, this is the only solution I can come up with:
Eigen::Matrix<double,Eigen::Dynamic,Eigen::Dynamic> m(2,2);
m << 1, 2, 3, 4;
Eigen::ArrayXXd tmp = m;  // convert into an array (via copy)
foo(tmp);                 // modify tmp
m = tmp;                  // copy back into m
std::cout << m;           // now m is modified
Once Eigen supports move semantics, you will be able to use std::move instead of making two copies.

How to index and assign elements in a tensor using identical call signatures?

OK, I've been googling around for too long, I'm just not sure what to call this technique, so I figured it's better to just ask here on SO. Please point me in the right direction if this has an obvious name and/or solution I've overlooked.
For the laymen: a tensor is the logical extension of the matrix, in the same way a matrix is the logical extension of the vector. A vector is a rank-1 tensor (in programming terms, a 1D array of numbers), a matrix is a rank-2 tensor (a 2D array of numbers), and a rank-N tensor is then simply an N-D array of numbers.
Now, suppose I have something like this Tensor class:
template<typename T = double> // possibly also with size parameters
class Tensor
{
private:
    T *M; // Tensor data (C-array)
          // alternatively, std::vector<T> *M
          // or std::array<T> *M
          // etc., or possibly their constant-sized versions
          // using Tensor<>'s template parameters

public:
    ... // insert trivial fluffy stuff here

    // read elements
    const T & operator() (size_t a, size_t b) const {
        ... // error checks etc.
        return M[a + rows*b];
    }
    // write elements
    T & operator() (size_t a, size_t b) {
        ... // error checks etc.
        return M[a + rows*b];
    }
    ...
};
With these definitions of operator()(...), indexing and assigning individual elements has the same call signature:
Tensor<> B(5,5);
double a = B(3,4); // operator() (size_t,size_t) used to both GET elements
B(3,4) = 5.5; // and SET elements
It is fairly trivial to extend this up to arbitrary tensor rank. But what I'd like to be able to implement is a more high-level way of indexing/assigning elements:
Tensor<> B(5,5);
Tensor<> C = B( Slice(0,4,2), 2 ); // operator() (Slice(),size_t) used to GET elements
B( Slice(0,4,2), 2 ) = C; // and SET elements
// (C is another tensor of the correct dimensions)
I am aware that std::valarray (and many others for that matter) does a very similar thing already, but it's not my objective to just accomplish the behavior; my objective here is to learn how to elegantly, efficiently and safely add the following functionality to my Tensor<> class:
// Indexing/assigning with Tensor<bool>
B( B>0 ) += 1.0;
// Indexing/assigning arbitrary amount of dimensions, each dimension indexed
// with either Tensor<bool>, size_t, Tensor<size_t>, or Slice()
B( Slice(0,2,FINAL), 3, Slice(0,3,FINAL), 4 ) = C;
// double indexing/assignment operation
B(3, Slice(0,4,FINAL))(mask) = C; // [mask] == Tensor<bool>
.. etc.
Note that it's my intention to use operator[] for non-checked versions of operator(). Alternatively, I'll stick more to the std::vector<> approach of using .at() methods for checked versions of operator[]. Anyway, this is a design choice and besides the issue right now.
I've conjured up the following incomplete "solution". This method is only really manageable for vectors/matrices (rank-1 or rank-2 tensors), and has many undesirable side-effects:
// define a simple slice class
class Slice
{
private:
    size_t
        start, stride, end;
public:
    Slice(size_t s, size_t e) : start(s), stride(1), end(e) {}
    Slice(size_t s, size_t S, size_t e) : start(s), stride(S), end(e) {}
    ...
};
template<typename T = double>
class Tensor
{
    ... // same as before

public:
    // define two operators() for use with slices:

    // version for retrieving data
    const Tensor<T> & operator() (Slice r, size_t c) const {
        // use slicing logic to construct return tensor
        ...
        return M;
    }
    // version for assigning data
    Sass operator() (Slice r, size_t c) {
        // returns Sass object, defined below
        return Sass(*this, r, c);
    }

protected:
    class Sass
    {
        friend class Tensor<T>;
    private:
        Tensor<T>& M;
        const Slice &R;
        const size_t c;
    public:
        Sass(Tensor<T> &M, const Slice &R, const size_t c)
            : M(M)
            , R(R)
            , c(c)
        {}

        operator Tensor<T>() const { return M; }
        Tensor<T> & operator= (const Tensor<T> &M2) {
            // use R/c to copy contents of M2 into M using the same
            // Slice-logic as in "Tensor<T>::operator()(...) const" above
            ...
            return M;
        }
    };
};
But this just feels wrong...
For each of the indexing/assignment methods outlined above, I'd have to define a separate Tensor<T>::Sass::Sass(...) constructor, a new Tensor<T>::Sass::operator=(...), and a new Tensor<T>::operator()(...) for each and every such operation. Moreover, the Tensor<T>::Sass::operator=(...) would need to contain much of the same stuff that's already in the corresponding Tensor<T>::operator()(...), and making everything suitable for a Tensor<> of arbitrary rank makes this approach quite ugly, way too verbose and, more importantly, completely unmanageable.
So, I'm under the impression there is a much more effective approach to all this.
Any suggestions?
First of all I'd like to point out some design issues:
T & operator() (size_t a, size_t b) const;
suggests you can't alter the matrix through this method, because it's const. But you are giving back a non-const reference to a matrix element, so in fact you can alter it. This only compiles because of the raw pointer you are using. I suggest using std::vector instead, which does the memory management for you and will give you an error, because vector's const version of operator[] returns a const reference, like it should.
Regarding your actual question, I am not sure what the parameters of the Slice constructor should do, nor what a Sass object is meant to be (I am no native speaker, and "Sass" gives me only one translation in the dictionary, meaning sth. like "impudence", "impertinence").
However, I suppose with a slice you want to create an object that gives access to a subset of a matrix, defined by the slice's parameters.
I would advise against using operator() for every way of accessing the matrix. op() with two indices to access a given element seems natural. Using a similar operator to get a whole matrix seems less intuitive to me.
Here's an idea: make a Slice class that holds a reference to a Matrix and the necessary parameters that define which part of the Matrix is represented by the Slice. That way a Slice would be something like a proxy to the Matrix subset it defines, similar to a pair of iterators which can be seen as a proxy to a subrange of the container they are pointing to. Give your Matrix a pair of slice() methods (const and nonconst) that give back a Slice/ConstSlice, referencing the Matrix you call the method on. That way, you can even put checks into the method to see if the Slice's parameters make sense for the Matrix it refers to. If it makes sense and is necessary, you can also add a conversion operator, to convert a Slice into a Matrix of its own.
Overloading operator() again and again and using the parameters as a mask, as linear indices and other stuff is more confusing than helping imo. operator() is slick if it does something natural which everybody expects from it. It only obfuscates the code if it is used everywhere. Use named methods instead.
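A rough sketch of that proxy idea, restricted to one column of a rank-2 tensor to keep it short (all names and the exact interface are purely illustrative):
#include <cstddef>
#include <vector>

template<typename T>
class Matrix; // forward declaration

// Proxy referring to a strided subset of one column of a Matrix<T>.
// Bounds checking and rank-N generalization are omitted.
template<typename T>
class Slice {
public:
    Slice(Matrix<T>& m, std::size_t start, std::size_t stride,
          std::size_t end, std::size_t col)
        : m_(m), start_(start), stride_(stride), end_(end), col_(col) {}

    // Write through the proxy.
    Slice& operator=(const std::vector<T>& values) {
        std::size_t k = 0;
        for (std::size_t i = start_; i <= end_; i += stride_)
            m_(i, col_) = values[k++];
        return *this;
    }

    // Convert the proxy into an owning container.
    operator std::vector<T>() const {
        std::vector<T> out;
        for (std::size_t i = start_; i <= end_; i += stride_)
            out.push_back(m_(i, col_));
        return out;
    }

private:
    Matrix<T>& m_;
    std::size_t start_, stride_, end_, col_;
};

template<typename T>
class Matrix {
public:
    Matrix(std::size_t rows, std::size_t cols)
        : rows_(rows), data_(rows * cols) {}

    T& operator()(std::size_t r, std::size_t c)             { return data_[r + rows_ * c]; }
    const T& operator()(std::size_t r, std::size_t c) const { return data_[r + rows_ * c]; }

    // Named method that hands back the proxy, as suggested above.
    Slice<T> slice(std::size_t start, std::size_t stride,
                   std::size_t end, std::size_t col) {
        return Slice<T>(*this, start, stride, end, col);
    }

private:
    std::size_t rows_;
    std::vector<T> data_;
};

int main() {
    Matrix<double> B(5, 5);
    B.slice(0, 2, 4, 2) = std::vector<double>{1.0, 2.0, 3.0}; // write rows 0, 2, 4 of column 2
    std::vector<double> read_back = B.slice(0, 2, 4, 2);      // read them back
    (void)read_back;
    return 0;
}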
Not an answer, just a note to follow up my comment:
Tensor<bool> T(false);
// T (whatever its rank) contains all false
auto lazy = T(Slice(0,4,2));
// if I use lazy here, it will be all false
T = true;
// now T contains all true
// if I use lazy here, it will be all true
This may be what you want, or it might be unexpected.
In general, this can work cleanly with immutable tensors, but allowing mutation gives the same class of problem as COW strings.
If you allow your Tensor to be implicitly convertible to double, you can return only Tensors from your operator() overloads.
operator double() {
    return M.size() == 1 ? M[0] : std::numeric_limits<double>::quiet_NaN();
};
That should allow for
double a = B(3,4);
Tensor<> t = B(Slice(1,2,3), 4);
To get operator() to work with multiple overloads mixing Slice and integer arguments is another issue. I'd probably just use Slice and create another implicit conversion so integers can be Slices, then maybe use the variable-argument ellipsis:
const Tensor<T> & operator() (int numOfDimensions, ...)
The variable-argument route is kind of a kludge, though; it's probably best to just have 8 overloads taking 1 to 8 Slice parameters.
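For example, the integer-to-Slice conversion could be as simple as a non-explicit converting constructor (a sketch, independent of the asker's exact Slice class):
#include <cstddef>

class Slice
{
public:
    Slice(std::size_t s, std::size_t e) : start(s), stride(1), end(e) {}
    Slice(std::size_t s, std::size_t S, std::size_t e) : start(s), stride(S), end(e) {}
    // non-explicit, so a bare integer index converts to a one-element Slice
    Slice(std::size_t i) : start(i), stride(1), end(i) {}

private:
    std::size_t start, stride, end;
};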

Multiply vector elements by a scalar value using STL

Hi, I want to multiply (or add, etc.) a vector by a scalar value, for example myv1 * 3. I know I can write a function with a for loop, but is there a way of doing this using an STL function, something like <algorithm>'s transform?
Yes, using std::transform:
std::transform(myv1.begin(), myv1.end(), myv1.begin(),
               std::bind(std::multiplies<T>(), std::placeholders::_1, 3));
Before C++17 you could also use std::bind1st(), although it was deprecated in C++11 and removed in C++17:
std::transform(myv1.begin(), myv1.end(), myv1.begin(),
               std::bind1st(std::multiplies<T>(), 3));
For the placeholders, you need:
#include <functional>
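Put together, a complete version of the std::bind variant might look like this (taking T to be int):
#include <algorithm>
#include <functional>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> myv1{1, 2, 3};
    // multiply every element by 3 in place
    std::transform(myv1.begin(), myv1.end(), myv1.begin(),
                   std::bind(std::multiplies<int>(), std::placeholders::_1, 3));
    for (int x : myv1) std::cout << x << ' '; // prints: 3 6 9
    return 0;
}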
If you can use a valarray instead of a vector, it has builtin operators for doing a scalar multiplication.
v *= 3;
If you have to use a vector, you can indeed use transform to do the job:
transform(v.begin(), v.end(), v.begin(), _1 * 3);
(assuming you have something similar to Boost.Lambda that allows you to easily create anonymous function objects like _1 * 3 :-P)
Modern C++ solution for your question.
#include <algorithm>
#include <vector>

std::vector<double> myarray;
double myconstant{3.3};
std::transform(myarray.begin(), myarray.end(), myarray.begin(),
               [&myconstant](auto& c) { return c * myconstant; });
I think for_each is very apt when you want to traverse a vector and manipulate each element according to some pattern, in this case a simple lambda would suffice:
std::for_each(myv1.begin(), myv1.end(), [](int& el) { el *= 3; });
Note that any variable you want the lambda to use (say you wanted to multiply by some predetermined scalar) goes into the capture list, the square brackets, as a reference.
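For example, multiplying by a scalar captured by reference (factor is just an illustrative name):
int factor = 3;
std::for_each(myv1.begin(), myv1.end(), [&factor](int& el) { el *= factor; });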
If you had to store the results in a new vector, then you could use the std::transform() from the <algorithm> header:
#include <algorithm>
#include <vector>

int main() {
    const double scale = 2;
    std::vector<double> vec_input{1, 2, 3};
    std::vector<double> vec_output(3); // a vector of 3 elements, initialized to zero
    // ~~~
    std::transform(vec_input.begin(), vec_input.end(), vec_output.begin(),
                   [&scale](double element) { return element *= scale; });
    // ~~~
    return 0;
}
So, what we are saying here is:
- take the values (elements) of vec_input from the beginning (vec_input.begin()) to the end (vec_input.end()); essentially, with the first two arguments you specify the range of elements ([beginning, end)) to transform,
- pass each element to the last argument, the lambda expression,
- take the output of the lambda expression and put it into vec_output, starting from the beginning (vec_output.begin()); the third argument specifies the beginning of the destination vector.
The lambda expression
- captures the value of the scale factor ([&scale]) from outside, by reference,
- takes as its input a vector element of type double (passed to it by std::transform()),
- in the body of the function, returns the final result,
- which, as mentioned above, will be stored in vec_output.
Final note: Although unnecessary, you could pass lambda expression per below:
[&scale](double element) -> double { return element *= scale; }
It explicitly states that the output of the lambda expression is a double. However, we can omit that, because the compiler, in this case, can deduce the return type by itself.
I know this is not STL as you want, but it is something you can adapt as different needs arise.
Below is a template you can use for the calculation; 'func' is the operation you want to perform (multiply, add, and so on), and 'parm' is the second parameter passed to 'func'. You can easily extend this to take different funcs with more parameters of varied types.
template<typename ItStart, typename ItEnd, typename Func, typename Value>
ItStart xform(ItStart its, ItEnd ite, Func func, Value parm)
{
    while (its != ite) { *its = func(*its, parm); its++; }
    return its;
}
...
int mul(int a, int b) { return a*b; }
vector< int > v;
xform(v.begin(), v.end(), mul, 3); /* will multiply each element of v by 3 */
Also, this is not a 'safe' function; you must do type/value checking etc. before you use it.