Eigen binaryExpr with eigen type output - c++

I'm having a problem while trying to use binaryExpr. It is the first time I'm using it, so I have been following the Eigen documentation.
For my use case I need a functor with Eigen-type inputs and outputs, but this does not compile and I do not understand why. I've looked up the explanation in the Eigen source, but I didn't think it would apply here, because I use floats and a matrix of floats:
// We require Lhs and Rhs to have "compatible" scalar types.
// It is tempting to always allow mixing different types but remember that this is often impossible in the vectorized paths.
// So allowing mixing different types gives very unexpected errors when enabling vectorization, when the user tries to
// add together a float matrix and a double matrix.
Here is a short example of the use I need that produces the same compilation error:
#include <eigen3/Eigen/Dense>
using namespace std;
using namespace Eigen;
struct myBinaryFunctor {
  EIGEN_EMPTY_STRUCT_CTOR(myBinaryFunctor)
  typedef Vector2f result_type;

  Vector2f operator()(const Matrix<float,9,1> &a, const float &f) const
  {
    float x = a.head(4).sum()*f;
    float y = a.tail(5).sum()/f;
    return Vector2f(x,y);
  }
};

int main()
{
  constexpr int n = 3;

  Matrix<Matrix<float,9,1>,n,n> Ma;
  Matrix<float,n,n> F;
  Matrix<Vector2f,n,n> R;

  for(size_t i = 0, sizeMa = Ma.size(); i < sizeMa; i++)
  {
    Ma(i).setOnes();
  }
  F.setConstant(n,n,2);

  R = Ma.binaryExpr(F, myBinaryFunctor());

  return 0;
}
The compilation output is:
/usr/local/include/eigen3/Eigen/src/Core/CwiseBinaryOp.h:107: error: static assertion failed: YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY
EIGEN_CHECK_BINARY_COMPATIBILIY(BinaryOp,typename Lhs::Scalar,typename Rhs::Scalar);
^
If you have a solution that could make this work this would be a huge help for me :) If not I would still enjoy an explanation to understand what is happening. Thanks a lot.

Adding:
namespace Eigen {
template<>
struct ScalarBinaryOpTraits<Matrix<float,9,1>,float,myBinaryFunctor> {
typedef Vector2f ReturnType;
};
}
will do the job. This is because implicit scalar conversions are explicitly disallowed within Eigen, so you must explicitly say that two different scalar types are compatible. For instance, adding a VectorXd to a VectorXf is disallowed.
Nonetheless, it seems to me that you're abusing Eigen's features here.
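For reference, a minimal sketch of the question's example with that specialization added (assuming Eigen 3.3 or newer, where ScalarBinaryOpTraits is the relevant customization point; the specialization must be visible before binaryExpr is instantiated):

#include <eigen3/Eigen/Dense>

using namespace std;
using namespace Eigen;

struct myBinaryFunctor {
  EIGEN_EMPTY_STRUCT_CTOR(myBinaryFunctor)
  typedef Vector2f result_type;

  Vector2f operator()(const Matrix<float,9,1> &a, const float &f) const
  {
    float x = a.head(4).sum()*f;
    float y = a.tail(5).sum()/f;
    return Vector2f(x,y);
  }
};

// Tell Eigen that these two "scalar" types are compatible for this functor.
namespace Eigen {
template<>
struct ScalarBinaryOpTraits<Matrix<float,9,1>,float,myBinaryFunctor> {
  typedef Vector2f ReturnType;
};
}

int main()
{
  constexpr int n = 3;

  Matrix<Matrix<float,9,1>,n,n> Ma;
  Matrix<float,n,n> F;
  Matrix<Vector2f,n,n> R;

  for(size_t i = 0, sizeMa = Ma.size(); i < sizeMa; i++)
    Ma(i).setOnes();
  F.setConstant(n,n,2);

  R = Ma.binaryExpr(F, myBinaryFunctor());  // compiles with the traits specialization in place
  return 0;
}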

Related

Automatic Differentiation of functions of complex variables

I was wondering if it is possible to apply boost's automatic differentiation library:
#include <boost/math/differentiation/autodiff.hpp>
to functions which return std::complex<double> values?
For instance, consider the multivariate complex valued function:
#include <complex>
std::complex<double> complex_function(double a, double c){
    // Assuming a < 0
    return exp(sqrt(std::complex(a, 0.0))) + sin(c);
}
How can I take the derivative wrt to a or c using Boost's autodiff? Is that even possible?
is [it] possible to apply boost's automatic differentiation library to functions which return std::complex<double> values?
Not at the present time.
A version that did might look something like this:
// THIS DOES NOT COMPILE - FOR DISCUSSION ONLY
#include <boost/math/differentiation/autodiff.hpp>
#include <iostream>
#include <complex>
namespace ad = boost::math::differentiation;
namespace ad = boost::math::differentiation;

template <typename T0, typename T1>
auto complex_function(T0 a, T1 c){
    // Assuming a < 0
    return exp(sqrt(complex(a, 0.0))) + sin(c); // DOES NOT COMPILE
}

int main() {
    auto const a = ad::make_fvar<double, 2>(-3);
    auto const c = 0.0;
    auto const answer = complex_function(a, c);
    return 0;
}
This would require complex to be defined for autodiff fvar template types, similar to how other mathematical functions (exp, sqrt, etc.) have overloads in the autodiff library and are found via ADL.
As @user14717 pointed out in the comments, it is a special case of vector-valued autodiff, since the return value isn't a single truncated Taylor polynomial, but rather a tuple of them.
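A workaround that does compile today, though it is not library support for complex values, is to split the function into its real and imaginary parts by hand (for a < 0, exp(sqrt(a + 0i)) = cos(sqrt(-a)) + i*sin(sqrt(-a))) and differentiate each real-valued part separately. The helper names real_part and imag_part below are made up for this sketch:

#include <boost/math/differentiation/autodiff.hpp>
#include <cmath>
#include <iostream>

namespace ad = boost::math::differentiation;

// For a < 0, exp(sqrt(a + 0i)) + sin(c) = cos(sqrt(-a)) + sin(c) + i*sin(sqrt(-a)),
// so each part can be differentiated as an ordinary real-valued function.
template <typename T>
auto real_part(T a, double c) { return cos(sqrt(-a)) + std::sin(c); }

template <typename T>
auto imag_part(T a, double /*c*/) { return sin(sqrt(-a)); }

int main() {
    auto const a = ad::make_fvar<double, 2>(-3.0);  // derivatives w.r.t. a, up to order 2
    double const c = 0.5;

    auto const re = real_part(a, c);
    auto const im = imag_part(a, c);

    std::cout << "d(Re)/da = " << re.derivative(1) << "\n"
              << "d(Im)/da = " << im.derivative(1) << "\n";
    return 0;
}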

Eigen: function signature which accepts general matrix expression of fixed size and type

The Eigen documentation is filled with examples illustrating how one should write a general function accepting a matrix:
template <typename Derived>
void print_cond(const MatrixBase<Derived>& a)
The reason to use MatrixBase as opposed to Matrix is that all dense Eigen matrix expressions derive from MatrixBase. So, for instance, if I pass a block of a matrix
print_cond ( A.block(...));
Then using the signature const MatrixBase<Derived>& a avoids creating a temporary. Conversely, if we had declared the function with the signature
template <typename T, int rows, int cols>
void print_cond(const Matrix<T,rows,cols>& a)
then Eigen would have to convert the block type to a matrix before passing it to the function, meaning that an unnecessary temporary would have to be created.
Please correct me if this understanding is incorrect...
With that in mind, one of the benefits of the second approach is that we can get compile-time checks on the dimensions of the matrix (assuming they are fixed, not dynamic).
What I can't find in the documentation is an example with the generality of the first approach (which helps avoid creating temporaries), but which has compile-time checks on the type and dimensions of the matrix. Could somebody please tell me how to do that?
Just for completeness, Marc and ggael are suggesting something like this
#include <iostream>
#include "Eigen/Dense"
using namespace Eigen;
using T = double;
const int rows = 5;
const int cols = 3;
template<typename Derived>
void print_cond(const MatrixBase<Derived> &a) {
    /* We want to enforce the shape of the input at compile time */
    static_assert(rows == Derived::RowsAtCompileTime);
    static_assert(cols == Derived::ColsAtCompileTime);

    /* Now that we are guaranteed that we have the
     * correct dimensions, we can do something... */
    std::cout << a;
}

int main() {
    print_cond(Matrix<T, rows, cols>::Ones());

    /* These will not compile */
    // print_cond(Matrix<T, rows + 1, cols>::Ones());
    // print_cond(Matrix<T, rows, cols + 1>::Ones());
    // print_cond(Matrix<T, rows + 1, cols + 1>::Ones());
    return 0;
}

Dynamic, multidimensional, rectangular, numeric arrays in C++ (as in NumPy or Matlab)

In Matlab or NumPy it's very easy to create numerical arrays which are rectangular, multidimensional and dynamic. Those classes also have nice indexing functionality. Furthermore, they have data stored in one linear buffer.
I'm looking for something similar in C++; the syntax could for example be:
DoubleArray arr(size_x, size_y);
arr[x][y] = 5;
double * ptr = arr.getRawData() // returns the underlying linear storage
I think C++ does not offer anything built-in to do so. The only library I know is Eigen, but it has the drawback that matrices/arrays are always 2-dimensional.
Is there a good and easy way to achieve what I want? Most important is that I do not have to mess around manually with indexing, and that data is stored in one buffer (vs. vector of vectors).
I would guess that C++ does not offer a built-in multidimensional array because it is rather easy to write one, but how it should be implemented depends on your requirements. I was curious how one could get such a multidimensional array and came up with this:
template <int DIMS>
struct multidimarray {
    typedef int value_type;

    value_type* data;
    int* dimensions;

    multidimarray(int* dims, value_type* d) : data(d), dimensions(dims) {}

    multidimarray<DIMS-1> operator[](int index) {
        int s = 1;
        for (int i = 1; i < DIMS; i++) { s *= dimensions[i]; }
        return multidimarray<DIMS-1>(dimensions+1, data + s*index);
    }
};

template <>
struct multidimarray<1> {
    typedef int value_type;

    value_type* data;
    int* dimensions;

    multidimarray(int* dims, value_type* d) : data(d), dimensions(dims) {}

    value_type operator[](int index) { return *(data + index); }
};
It is not the most efficient implementation; at least the size of the subarrays should not be recomputed on every access. It would also be more convenient to use if a wrapper were added that handles creation and deletion of the data. However, it seems to work (no guarantee, it has not been tested beyond the following code):
#include <vector>
#include <iostream>

int main(){
    int imax = 4;
    int jmax = 4;
    int kmax = 3;

    std::vector<int> d;
    d.push_back(imax); d.push_back(jmax); d.push_back(kmax);

    std::vector<int> data;
    for (int i = 0; i < 100; i++){ data.push_back(i); }

    multidimarray<3> md = multidimarray<3>(&d[0], &data[0]);

    for (int i = 0; i < imax; i++){
        for (int j = 0; j < jmax; j++){
            for (int k = 0; k < kmax; k++){
                std::cout << md[i][j][k] << std::endl;
            }
        }
    }
}
Sorry for the lack of auto and brace initialization, but this is pre-C++11.
Oh well, and I just realized that the multidimarray<1>::operator[] should of course return a reference instead of a value.
As I mentioned above, requirements may wildly differ for your specific application. Nevertheless, I hope this helps ;)
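For completeness, a minimal owning wrapper matching the syntax wished for in the question (the DoubleArray name and getRawData come from the question; the row-pointer trick and the 2-D restriction are choices made for this sketch) could look like this:

#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical owning wrapper: a 2-D double array stored in one linear
// (row-major) buffer, so there is no vector of vectors.
class DoubleArray {
public:
    DoubleArray(std::size_t size_x, std::size_t size_y)
        : nx_(size_x), ny_(size_y), buf_(size_x * size_y, 0.0) {}

    // Row pointer, so that arr[x][y] works (no bounds checking in this sketch).
    double*       operator[](std::size_t x)       { return buf_.data() + x * ny_; }
    const double* operator[](std::size_t x) const { return buf_.data() + x * ny_; }

    double* getRawData() { return buf_.data(); }  // the underlying linear storage

    std::size_t rows() const { return nx_; }
    std::size_t cols() const { return ny_; }

private:
    std::size_t nx_, ny_;
    std::vector<double> buf_;
};

int main() {
    DoubleArray arr(3, 4);
    arr[1][2] = 5;

    double* ptr = arr.getRawData();
    std::cout << ptr[1 * arr.cols() + 2] << "\n";  // prints 5
}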

Eigen, how to access the underlying array of a MatrixBase<Derived>

I need to access the array that contains the data of a MatrixBase Eigen matrix.
The Eigen library has a data() method which returns a pointer to an array, however it is only accessible from a Matrix type. MatrixBase doesn't have a similar method, even though the MatrixBase class is supposed to act as a template and the actual type should be just a Matrix. If I try to call data() on a MatrixBase I get a compile-time error:
template <typename ScalarA, typename Index, typename DerivedB, typename DerivedC>
void uscgemv(float alpha,
             const USCMatrix<ScalarA,Index> &a,
             const MatrixBase<DerivedB> &b,
             const MatrixBase<DerivedC> &c_const)
{
    //...some code
    float * bMat = b.data();
    ///more code
}
This code produces the following compile time error.
error: ‘const class Eigen::MatrixBase<Eigen::CwiseNullaryOp<Eigen::internal::scalar_constant_op<float>, Eigen::Matrix<float, -1, 1> > >’ has no member named ‘data’
float * bMat = b.data();
So I have to resort to gimmicks such as...
float * bMat;
int bRows = b.rows();
int bCols = b.cols();
mallocPinnedMemory(&bMat, bRows*bCols*sizeof(float));
Eigen::Map<Matrix<float, Dynamic, Dynamic> > bmat_temp(bMat, bRows, bCols);
bmat_temp = b; //This is SLOW, we should avoid it.
Then I can access the bMat array...
Those copies back and forth are the biggest cost in the GPU matrix multiplication, as essentially I have to make an extra copy before even copying to the device...
I can't use Eigen-magma, as this is sparse matrix-in-a-weird-format to a dense matrix (or sometimes vector) multiplication so I can't use any of the automatic gpu functions there. Also I would much rather not declare the matrices as something else, because that would require changing A LOT of lines of code across the whole program (which I didn't write).
EDIT: A static cast solution was proposed:
float * bMat = (static_cast<Eigen::Matrix<float, Eigen::Dynamic, Eigen::Dynamic> >(b)).data();
However, I get a segfault the first time I try to access an element of the array bMat.
EDIT 2: I'm looking for a zero-copy way to access the underlying arrays. I need only to be able to read b, but I also need to be able to write to c. Currently c is unconst-ed with the following macro:
#define UNCONST(t,c,uc) Eigen::MatrixBase<t> &uc = const_cast<Eigen::MatrixBase<t>&>(c);
EDIT 3: After cross-posting to the Eigen forums, it would seem I can't do better than the suggested answer.
MatrixBase is the base class of any dense expression. It does not necessarily correspond to an object with storage. For instance, it can be the abstract representation of A+B, or in your case the abstract representation of a vector with constant values. You can make uscgemv accept only expressions having appropriate storage by using the Ref<> class, e.g.:
template <typename ScalarA, typename Index>
void uscgemv(float alpha,
const USCMatrix<ScalarA,Index> &a,
Ref<const VectorXf> b,
Ref<VectorXf> c);
If the third argument does not match the storage of a VectorXf then it will be evaluated for you. Then you can safely call b.data(). To keep the scalar type of b generic, you can still declare it as MatrixBase<DerivedB>& and then copy it into a Ref<const Matrix<typename DerivedB::Scalar, DerivedB::RowsAtCompileTime, DerivedB::ColsAtCompileTime> >:
typedef Ref<const Matrix<typename DerivedB::Scalar, DerivedB::RowsAtCompileTime, DerivedB::ColsAtCompileTime> > RefB;
RefB actual_b(b);
actual_b.data();
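To make this concrete, here is a small self-contained sketch of the Ref<> approach (the function sum_via_pointer is made up for illustration): the Ref<const VectorXf> parameter guarantees contiguous storage, so data() is safe to call, and an expression argument gets evaluated into a hidden temporary first.

#include <Eigen/Dense>
#include <iostream>

using Eigen::Ref;
using Eigen::VectorXf;

// Hypothetical helper illustrating the Ref<> approach: the parameter is
// guaranteed to refer to contiguous storage, so data() is safe to call.
float sum_via_pointer(Ref<const VectorXf> b)
{
    const float* p = b.data();  // zero copy if the argument already has compatible storage
    float s = 0.f;
    for (Eigen::Index i = 0; i < b.size(); ++i)
        s += p[i];
    return s;
}

int main()
{
    VectorXf v = VectorXf::LinSpaced(4, 1.f, 4.f);
    std::cout << sum_via_pointer(v) << "\n";                  // binds directly, no copy
    std::cout << sum_via_pointer(VectorXf::Ones(3)) << "\n";  // expression: evaluated into a temporary first
    return 0;
}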
I guess the issue is this: you are not allowed to get a pointer to data of a MatrixBase<Derived>, since the latter can be any kind of expression in Eigen, like a product of matrices for example. To get a pointer you probably have to first implicitly convert the MatrixBase<Derived> into a Matrix<Scalar, Dynamic, Dynamic>, then use the data() member of the latter.
So you can create a deep copy of the expression, i.e. use something like
Eigen::Matrix<typename Derived::Scalar, Eigen::Dynamic, Eigen::Dynamic> tmp = b;
then use
tmp.data()
This code works now:
#include <Eigen/Dense>
#include <iostream>

template<typename Derived>
void use_data(const Eigen::MatrixBase<Derived>& mat)
{
    // Evaluate the expression into a concrete matrix, then grab its buffer.
    Eigen::Matrix<typename Derived::Scalar, Eigen::Dynamic, Eigen::Dynamic> tmp = mat;
    typename Derived::Scalar* p = tmp.data();

    std::cout << std::endl;
    for (Eigen::Index i = 0; i < tmp.size(); i++)
        std::cout << *(p + i) << " ";
}

int main()
{
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(2, 2);
    Eigen::MatrixXd B = Eigen::MatrixXd::Random(2, 2);

    // A + B is an expression (a coefficient-wise sum), not a plain Matrix
    use_data(A + B);
}
There is an easy solution to your question: by combining Eigen::Map, &a(0, 0) and const_cast you can reuse the buffer of the MatrixBase.
Example:
template<typename Derived1, typename Derived2>
void example(Eigen::MatrixBase<Derived1> const &input,
             Eigen::MatrixBase<Derived2> const &output)
{
    static_assert(std::is_same<typename Derived1::Scalar, typename Derived2::Scalar>::value,
                  "Data types of matrix input and output should be the same");

    using Scalar  = typename Derived1::Scalar;
    using MatType = Eigen::Matrix<Scalar, Eigen::Dynamic, 1>;
    using Mapper  = Eigen::Map<const MatType, Eigen::Aligned>;

    // In the worst case you can do const_cast<Scalar *> on &input(0, 0),
    // that is, if you cannot explicitly define the Map type as const.
    Mapper map(&input(0, 0), input.size());

    // output is taken by const reference so it also binds to expressions;
    // const_cast_derived() makes it writable again (the same trick as the
    // UNCONST macro in the question).
    output.const_cast_derived().colwise() += map;
}
I tried it on Windows 8, VC2013 32-bit, Eigen 3.2.5; no segmentation fault occurs (yet), everything looks perfectly fine. I also checked the address of the Map: it is the same as the original input's. You can verify it with another example:
#include <Eigen/Dense>
#include <iostream>
template<typename Derived>
void example_2(Eigen::MatrixBase<Derived> &input)
{
    // Map the existing buffer of the input; writing through the map
    // writes into the original matrix.
    Eigen::Map<Derived> map(&input(0, 0),
                            input.rows(),
                            input.cols());
    map(0, 0) = 300;
}

int main()
{
    Eigen::MatrixXd mat(2, 2);
    mat << 0, 1, 2, 3;

    example_2(mat);
    std::cout << mat << "\n\n";

    return 0;
}
The first element of mat will now be 300.

How to return more than one value from a C++ function?

I am interested in whether I can return more than one value from a function. For example, consider the extended Euclidean algorithm. Its basic step is described by this:
Input is nonnegative integers a and b;
output is a triplet (d,i,j) such that d=gcd(a,b)=i*a+j*b.
Just to clarify my question's goal, I will write the step as short recursive pseudocode:
if (b==0) return (a,1,0)
q=a mod b;
let r be such that a=r*b+q;
(d,k,l)=extendedeuclidean(b,q);
return (d,l,k-l*r);
How does one return a triplet?
You could create a std::tuple or boost::tuple (if you don't use C++0x) from your triplet and return that.
As has been suggested by Tony The Tiger, you can use tuple. It is included in the C++11 standard and new compilers already support it. It is also implemented in Boost.
For my IBM xlC compiler, tuple is in the std::tr1 namespace (I tried it with MSVC10; there it's in the std namespace).
#include <cstdio>
#include <tuple>

// for MSVC
using namespace std;
// for xlC
//using namespace std::tr1;
// for boost
//using namespace boost;

typedef tuple<int, float, char> MyTuple;

MyTuple f() {
    return MyTuple(1, 2.0f, '3');
}

int main() {
    MyTuple t = f();
    printf("%i, %f, %c\n", get<0>(t), get<1>(t), get<2>(t));
}
xlC compilation for TR1:
xlC -D__IBMCPP_TR1__ file.cpp
xlC compilation for boost:
xlC file.cpp -I/path/to/boost/root
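Applied to the question's recursion, the tuple approach might look like this minimal sketch (std::tie keeps it usable without C++17 structured bindings; the test values 240 and 46 are arbitrary):

#include <cstdio>
#include <tuple>

// Sketch of the question's recursion returning a std::tuple (d, i, j)
// such that d = gcd(a, b) = i*a + j*b.
std::tuple<int, int, int> extendedeuclidean(int a, int b) {
    if (b == 0) return std::make_tuple(a, 1, 0);
    int q = a % b;
    int r = a / b;  // a = r*b + q
    int d, k, l;
    std::tie(d, k, l) = extendedeuclidean(b, q);
    return std::make_tuple(d, l, k - l * r);
}

int main() {
    int d, i, j;
    std::tie(d, i, j) = extendedeuclidean(240, 46);
    printf("%i = %i*240 + %i*46\n", d, i, j);  // prints: 2 = -9*240 + 47*46
}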
Just create an appropriate data structure holding the three values and return that.
struct extmod_t {
    int d;
    int i;
    int j;

    extmod_t(int d, int i, int j) : d(d), i(i), j(j) { }
};

…

extmod_t result = extendedeuclidean(b, q);
return extmod_t(result.d, l, k - l * r);
Either create a class that encapsulates the triplet and then return an instance of this class, or use three by-reference parameters.
I usually find that when I need to return two values from a function, it is useful to use the STL std::pair.
You can always nest pairs inside one another (e.g. std::pair<int, std::pair<int, int>>) and help yourself with typedefs or defines to make it more accessible, but whenever I try doing this my code ends up messy and impractical to reuse.
For more than two values, however, I recommend making your own specific data structure that holds the information you need (if you are returning multiple values, there's a high likelihood that they are strongly logically connected somehow and that you might end up using the same structure again).
E.g. I needed a function that returned the slope of a line (one value) and that was fine. Then I needed to expand it to return both parameters of the representation y = k*x + l. Two values, still fine. Then I remembered that the line can be vertical and that I should add another field to indicate that (no such representation then)... At that point it became too complicated to try and make do with existing data types, so I typed up my own Line structure and ended up using the same structure all over my project later; a sketch follows.
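A hypothetical sketch of such a Line structure (the field and helper names below are made up; the original code is not shown in this answer):

#include <iostream>

// Hypothetical Line structure: y = k*x + l for ordinary lines,
// plus a flag for vertical lines (x = x0), as described above.
struct Line {
    double k;         // slope
    double l;         // y-intercept
    bool   vertical;  // true if the line is vertical
    double x0;        // only meaningful when vertical is true
};

Line make_line(double k, double l) { return Line{k, l, false, 0.0}; }
Line make_vertical_line(double x0) { return Line{0.0, 0.0, true, x0}; }

int main() {
    Line a = make_line(2.0, 1.0);
    Line b = make_vertical_line(3.0);
    std::cout << a.k << " " << b.x0 << "\n";  // prints: 2 3
    return 0;
}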