Dynamic parameterization of Armadillo matrix dimensions in C++ - c++

The title summarizes the goal, which is, more precisely, to dynamically retrieve the dimensions of the MATLAB arrays that are passed into Armadillo matrices.
I would like to change the second and third arguments of mY() and mD() below to parametric ones.
// mat(ptr_aux_mem, n_rows, n_cols, copy_aux_mem = true, strict = false)
arma::mat mY(&dY[0], 2, 168, false);
arma::mat mD(&dD[0], 2, 168, false);
This must surely be a common use case, but I still could not find a nice way of achieving it for the general case, where the arrays coming from MATLAB could have an arbitrary number of dimensions (n > 2).
For the matrix (two-dimensional) case, I could probably hack my way around it, but that does not feel elegant (and probably is not efficient either).
IMHO, the way to go must be:
matlab::data::TypedArray<double> has a getDimensions() member function, which returns a matlab::data::ArrayDimensions, fundamentally a std::vector<size_t>.
Indexing the first and second elements of the vector returned by getDimensions(), one can obtain the number of rows and columns, for instance like below.
unsigned int mYrows = matrixY.getDimensions()[0];
unsigned int mYcols = matrixY.getDimensions()[1];
However, with my current setup, I cannot call getDimensions() through pointers/references in the foo() function of sub.cpp. If it is feasible, I would prefer neither to create additional temporary objects nor to pass extra arguments to foo(). How is that possible?
Intuition keeps telling me that there must be an elegant solution that way too. Maybe using multiple indirection?
I would highly appreciate any help, hints or constructive comments from more knowledgeable SO members. Thank you in advance.
Setup:
Two C++ source files and a header file:
main.cpp
contains the general IO interface between MATLAB and C++
feeds two double arrays and two const double scalars into C++
it does some Armadillo-based looping (this part is not that important, therefore omitted) by calling foo()
returns outp, which is just a plain scalar double
Nothing fancy or complicated.
sub.cpp
This is only for the foo() looping part.
sub.hpp
Just a simple header file.
// main.cpp
// MATLAB API Header Files
#include "mex.hpp"
#include "mexAdapter.hpp"
// Custom header
#include "sub.hpp"

using namespace matlab::data;   // ArrayType, Array, buffer_ptr_t, ...

// Overloading the function call operator, so the class acts as a functor
class MexFunction : public matlab::mex::Function {
public:
    void operator()(matlab::mex::ArgumentList outputs,
                    matlab::mex::ArgumentList inputs) {
        matlab::data::ArrayFactory factory;
        // Validate arguments
        checkArguments(outputs, inputs);
        matlab::data::TypedArray<double> matrixY = std::move(inputs[0]);
        matlab::data::TypedArray<double> matrixD = std::move(inputs[1]);
        const double csT = inputs[2][0];
        const double csKy = inputs[3][0];
        buffer_ptr_t<double> mY = matrixY.release();
        buffer_ptr_t<double> mD = matrixD.release();
        double* darrY = mY.get();
        double* darrD = mD.get();
        // The data type of outp is "just" a plain double, NOT a double array
        double outp = foo(darrY, darrD, csT, csKy);
        outputs[0] = factory.createScalar(outp);
    }

    void checkArguments(matlab::mex::ArgumentList outputs, matlab::mex::ArgumentList inputs) {
        // Create pointer to the MATLAB engine
        std::shared_ptr<matlab::engine::MATLABEngine> matlabPtr = getEngine();
        // Create array factory, allows us to create MATLAB arrays in C++
        matlab::data::ArrayFactory factory;
        // Check input size and types
        if (inputs[0].getType() != ArrayType::DOUBLE ||
            inputs[0].getType() == ArrayType::COMPLEX_DOUBLE)
        {
            // Throw an error directly in MATLAB if the type does not match
            matlabPtr->feval(u"error", 0,
                std::vector<Array>({ factory.createScalar("Input must be double array.") }));
        }
        // Check output size
        if (outputs.size() > 1) {
            matlabPtr->feval(u"error", 0,
                std::vector<Array>({ factory.createScalar("Only one output is returned.") }));
        }
    }
};
// sub.cpp
#include "sub.hpp"
#include "armadillo"
double foo(double* dY, double* dD, const double T, const double Ky) {
    double sum = 0;
    // Conversion of input parameters to Armadillo types
    // mat(ptr_aux_mem, n_rows, n_cols, copy_aux_mem = true, strict = false)
    arma::mat mY(&dY[0], 2, 168, false);
    arma::mat mD(&dD[0], 2, 168, false);
    // Armadillo calculations
    for (int t = 0; t < int(T); t++) {
        // some armadillo based calculation
        // each for cycle increments sum by its return value
    }
    return sum;
}
// sub.hpp
#ifndef SUB_H_INCLUDED
#define SUB_H_INCLUDED
double foo(double* dY, double* dD, const double T, const double Ky);
#endif // SUB_H_INCLUDED

One way is to convert it to an arma matrix using a function:
template<class T>
arma::Mat<T> getMat(matlab::data::TypedArray<T> A)
{
    matlab::data::TypedIterator<T> it = A.begin();
    matlab::data::ArrayDimensions nDim = A.getDimensions();
    return arma::Mat<T>(it.operator->(), nDim[0], nDim[1]);
}
and call it by
arma::mat Y = getMat<double>(inputs[0]);
arma::mat D = getMat<double>(inputs[1]);
...
double outp = foo(Y,D, csT, csKy);
and change foo() to
double foo( arma::mat& dY, arma::mat& dD, const double T, const double Ky)
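For completeness, here is a sketch (untested, based on the answer above) of what foo() could then look like; the dimensions are read from the Armadillo matrices themselves instead of being hard-coded, and the loop body stays omitted as in the question:
double foo(arma::mat& mY, arma::mat& mD, const double T, const double Ky) {
    double sum = 0;
    const arma::uword nRows = mY.n_rows;   // previously the hard-coded 2
    const arma::uword nCols = mY.n_cols;   // previously the hard-coded 168
    for (int t = 0; t < int(T); t++) {
        // some Armadillo based calculation, using nRows/nCols where needed
    }
    return sum;
}
Note that the arma::Mat constructor used in getMat() copies the MATLAB buffer by default; to keep the zero-copy behaviour of the original code you can pass the extra flag, i.e. arma::Mat<T>(it.operator->(), nDim[0], nDim[1], false).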

Related

Eigen: function signature which accepts general matrix expression of fixed size and type

The Eigen documentation is filled with examples illustrating how one should write a general function accepting a matrix:
template <typename Derived>
void print_cond(const MatrixBase<Derived>& a)
The reason to use MatrixBase as opposed to Matrix is that all dense Eigen matrix expressions derive from MatrixBase. So, for instance, if I pass a block of a matrix
print_cond ( A.block(...));
then using the signature const MatrixBase<Derived>& a avoids creating a temporary. Conversely, if we had declared the function with the signature
template <typename T, int rows, int cols>
void print_cond(const Matrix<T,rows,cols>& a)
then Eigen would have to convert the block type to a matrix before passing it to the function, meaning that an unnecessary temporary would have to be created.
Please correct me if this understanding is incorrect...
With that in mind, one of the benefits of the second approach is that we can get compile time checks on the dimensions of the matrix (assuming they are fixed, not dynamic).
What I can't find in the documentation is an example with the generality of the first approach (which helps avoid temporary creation), but which has compile-time checks on the type and dimensions of the matrix. Could somebody please tell me how to do that?
Just for completeness, Marc and ggael are suggesting something like this:
#include <iostream>
#include "Eigen/Dense"

using namespace Eigen;

using T = double;
const int rows = 5;
const int cols = 3;

template<typename Derived>
void print_cond(const MatrixBase<Derived> &a) {
    /* We want to enforce the shape of the input at compile-time */
    static_assert(rows == Derived::RowsAtCompileTime);
    static_assert(cols == Derived::ColsAtCompileTime);
    /* Now that we are guaranteed that we have the
     * correct dimensions, we can do something... */
    std::cout << a;
}

int main() {
    print_cond(Matrix<T, rows, cols>::Ones());
    /* These will not compile */
    // print_cond(Matrix<T, rows + 1, cols>::Ones());
    // print_cond(Matrix<T, rows, cols + 1>::Ones());
    // print_cond(Matrix<T, rows + 1, cols + 1>::Ones());
    return 0;
}

Eigen binaryExpr with eigen type output

I'm having a problem while trying to use binaryExpr. It is my first use of it, so I have been following the Eigen documentation.
For my use case I need a functor with Eigen-type inputs and outputs, but this does not compile and I do not understand why. I've looked up the explanation in the Eigen source, but I didn't think it would apply here because I use floats and an array of floats:
// We require Lhs and Rhs to have "compatible" scalar types.
// It is tempting to always allow mixing different types but remember that this is often impossible in the vectorized paths.
// So allowing mixing different types gives very unexpected errors when enabling vectorization, when the user tries to
// add together a float matrix and a double matrix.
Here is a short example of the use I would need that gets me the same compilation error:
#include <eigen3/Eigen/Dense>
using namespace std;
using namespace Eigen;
struct myBinaryFunctor {
    EIGEN_EMPTY_STRUCT_CTOR(myBinaryFunctor)
    typedef Vector2f result_type;
    Vector2f operator()(const Matrix<float,9,1>& a, const float& f) const
    {
        float x = a.head(4).sum()*f;
        float y = a.tail(5).sum()/f;
        return Vector2f(x,y);
    }
};

int main()
{
    constexpr int n = 3;
    Matrix<Matrix<float,9,1>,n,n> Ma;
    Matrix<float,n,n> F;
    Matrix<Vector2f,n,n> R;
    for(size_t i = 0, sizeMa = Ma.size(); i<sizeMa; i++)
    {
        Ma(i).setOnes();
    }
    F.setConstant(n,n,2);
    R = Ma.binaryExpr(F,myBinaryFunctor());
    return 0;
}
The compilation output is:
/usr/local/include/eigen3/Eigen/src/Core/CwiseBinaryOp.h:107: erreur : static assertion failed: YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY
EIGEN_CHECK_BINARY_COMPATIBILIY(BinaryOp,typename Lhs::Scalar,typename Rhs::Scalar);
^
If you have a solution that could make this work, it would be a huge help for me :) If not, I would still appreciate an explanation of what is happening. Thanks a lot.
Adding:
namespace Eigen {
template<>
struct ScalarBinaryOpTraits<Matrix<float,9,1>,float,myBinaryFunctor> {
    typedef Vector2f ReturnType;
};
}
will do the job. This is because implicit scalar conversions are explicitly disallowed within Eigen, so you must explicitly say that two different scalar types are compatible. For instance, adding a VectorXd to a VectorXf is disallowed.
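For reference, here is a minimal, self-contained sketch of the same mechanism with plain scalars (my own example, not the poster's code; it assumes Eigen 3.3+, where ScalarBinaryOpTraits exists): mixing double and float in a custom binary functor needs exactly this kind of specialization.
#include <Eigen/Dense>

struct MyMixedOp {
    EIGEN_EMPTY_STRUCT_CTOR(MyMixedOp)
    double operator()(const double& a, const float& b) const { return a + b; }
};

namespace Eigen {
template<>
struct ScalarBinaryOpTraits<double, float, MyMixedOp> {
    typedef double ReturnType;
};
}

int main() {
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(2, 2);
    Eigen::MatrixXf B = Eigen::MatrixXf::Random(2, 2);
    Eigen::MatrixXd C = A.binaryExpr(B, MyMixedOp());   // compiles thanks to the trait
    return 0;
}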
Nonetheless, it seems to me that you're abusing Eigen's features here.

C++ numerical integrators to solve systems of ode's

I recently started using C++ and I just created a class that allows the integration of a user-defined system of ODEs. It uses two different integrators in order to compare their performance. Here is the general layout of the code:
class integrators {
private:
    double ti;   // initial time
    double *xi;  // initial solution
    double tf;   // end time
    double dt;   // time step
    int n;       // number of ode's
public:
    // Function prototypes
    double f(double, double *, double *);  // function to integrate
    double rk4(int, double, double, double, double *, double *);
    double dp8(int, double, double, double, double *, double *);
};

// 4th Order Runge-Kutta function
double integrators::rk4(int n, double ti, double tf, double dt, double *xi, double *xf) {
    // Function statements
}

// 8th Order Dormand-Prince function
double integrators::dp8(int n, double ti, double tf, double dt, double *xi, double *xf) {
    // Function statements
}

// System of first order differential equations
double integrators::f(double t, double *x, double *dx) {
    // Function statements
}

int main() {
    // Initial conditions and time related parameters
    const int n = 4;
    double t0, tmax, dt;
    double x0[n], xf[n];
    x0[0] = 0.0;
    x0[1] = 0.0;
    x0[2] = 1.0;
    x0[3] = 2.0;
    // Calling class integrators
    integrators example01;
    integrators example02;
    // First integrator
    example02.dp8(n, t0, tmax, dt, x0, xf);
    // Second integrator
    example01.rk4(n, t0, tmax, dt, x0, xf);
}
The problem is that the array containing the initial conditions, x0, in main changes after executing the first integrator, so I cannot use the same initial conditions for the second integrator unless I define another array with the same initial conditions (x0_rk4 and x0_dp8). Is there a more professional way to keep this array constant so that it can be used with both integrators?
The easiest way is to make a local copy of the array inside the integrating functions.
Change the way you pass 'n' to the function to 'const int n', so that you can declare something like double currentSolution[n]; inside and copy the elements of the initial array into it (note that this relies on variable-length arrays, a compiler extension rather than standard C++). This approach keeps your initial array intact, unless you "accidentally" modify it somewhere.
To rule out even that possibility of accidental modification, we need to go deeper and use one of the STL containers. I think you will be fine with std::valarray<T>.
Change the parameter to const std::valarray<double>& and again make a non-const local copy inside, as sketched below.
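A hedged sketch of that second suggestion (not the original code; rk4 is shown as a free function here just to keep it short):
#include <valarray>

double rk4(int n, double ti, double tf, double dt,
           const std::valarray<double>& xi, double* xf)
{
    std::valarray<double> x = xi;     // local, mutable copy of the initial state
    // ... the integration steps update x, never xi ...
    for (int i = 0; i < n; ++i)
        xf[i] = x[i];                 // write the final state to the output array
    return 0.0;                       // whatever the original rk4 returns
}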
No, not really. But there exists a more elegant solution:
std::array<double, n> x0_rk4 = { 0.0, 0.0, 1.0, 2.0 };
auto x0_dp8 = x0_rk4; // copy!
You will have to use x0_rk4.data() to access the underlying array. Note that it would be better if you used std::array and other modern C++ features instead of raw pointers and the like.
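For instance, the two calls in the question's main() would then become (a small fragment, not a complete program):
// each integrator gets its own copy of the initial state
example02.dp8(n, t0, tmax, dt, x0_dp8.data(), xf);
example01.rk4(n, t0, tmax, dt, x0_rk4.data(), xf);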

Replace Numerical Recipe's dmatrix with a C++ class

I'm revamping an old application that use Numerical Recipes' dmatrix quite extensively. Since one of the reasons I'm working on the application is because its code is about to be opened, I want to replace all of the Numerical Recipes code with code that can be freely distributed.
dmatrix is a function that returns a matrix of doubles. The caller supplies the lower and upper bound for each index, like so:
double **mat = dmatrix(1,3,1,3);
mat now has 3 rows, from 1 to 3, and 3 columns, from 1 to 3, so that mat[1][1] is the first element and mat[3][3] is the last.
I looked at various C++ matrix implementations; none of them allowed me to specify the lower bound of each dimension. Is there something I can use, or do I have to write yet another matrix class for this?
I believe you can easily write a wrapper around some other matrix implementation to add the lower-bound feature. Example (untested):
class Matrix;   // forward declaration, MatrixCol keeps a pointer back to it

class MatrixCol {
    friend class Matrix;
    Matrix* mm;
    int x;
    MatrixCol(Matrix* m, int x) : mm(m), x(x) { }
public:
    double& operator[] (int y);   // defined below, once Matrix is complete
};

class Matrix {
    friend class MatrixCol;
    OtherMatrix m;       // any zero-based matrix implementation you wrap
    int lowerX, lowerY;
public:
    Matrix(int lx, int hx, int ly, int hy) :
        m(hx - lx + 1, hy - ly + 1),   // dmatrix bounds are inclusive
        lowerX(lx), lowerY(ly) { }
    MatrixCol operator[] (int x) {
        return MatrixCol(this, x);
    }
};

double& MatrixCol::operator[] (int y) {
    return mm->m[x - mm->lowerX][y - mm->lowerY];
}
This may require a little more robust implementation depending on your use case. But this is the basic idea, expand from it.
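Usage would then mirror the original dmatrix call (a short sketch; OtherMatrix is still whatever zero-based implementation you wrap):
Matrix mat(1, 3, 1, 3);   // 3 rows and 3 columns, indices running from 1 to 3
mat[1][1] = 1.0;          // first element
mat[3][3] = 9.0;          // last element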

Eigen, how to access the underlying array of a MatrixBase<Derived>

I need to access the array that contains the data of a MatrixBase Eigen matrix.
The Eigen library has the data() method, which returns a pointer to an array; however, it is only accessible from a Matrix type. The MatrixBase doesn't have a similar method, even though the MatrixBase class is supposed to act as a template and the actual type should be just a Matrix. If I try to access MatrixBase.data() I get a compile-time error:
template <typename ScalarA, typename Index, typename DerivedB, typename DerivedC>
void uscgemv(float alpha,
             const USCMatrix<ScalarA,Index> &a,
             const MatrixBase<DerivedB> &b,
             const MatrixBase<DerivedC> &c_const)
{
    // ...some code
    float * bMat = b.data();
    // ...more code
}
This code produces the following compile time error.
error: ‘const class Eigen::MatrixBase<Eigen::CwiseNullaryOp<Eigen::internal::scalar_constant_op<float>, Eigen::Matrix<float, -1, 1> > >’ has no member named ‘data’
float * bMat = b.data();
So I have to resort to gimmicks such as...
float * bMat;
int bRows = b.rows();
int bCols = b.cols();
mallocPinnedMemory(&bMat, bRows*bCols*sizeof(float));
Eigen::Map<Matrix<float, Dynamic, Dynamic> > bmat_temp(bMat, bRows, bCols);
bmat_temp = b; //THis is SLOW, we should avoid it.
Then I can access the bMat array...
Those copies back and forth are the biggest cost in the GPU matrix multiplication, as essentially I have to make an extra copy before even copying to the device...
I can't use Eigen-magma, as this is a multiplication of a sparse matrix in a weird format with a dense matrix (or sometimes a vector), so I can't use any of the automatic GPU functions there. Also, I would much rather not declare the matrices as something else, because that would require changing A LOT of lines of code across the whole program (which I didn't write).
EDIT: A static cast solution was proposed:
float * bMat = (static_cast<Eigen::Matrix<float, Eigen::Dynamic, Eigen::Dynamic> >(b)).data();
However I get segfault the first time I try to access an element of the array bMat.
EDIT 2: I'm looking for a zero copy way to access the underlying arrays. I need to only be able to read b, but I also need to able to write to c. Currently c is unconst-d with the following macro:
#define UNCONST(t,c,uc) Eigen::MatrixBase<t> &uc = const_cast<Eigen::MatrixBase<t>&>(c);
EDIT 3: After cross posting to Eigen Forums it would seem I can't do better than the suggested answer.
MatrixBase is the base class of any dense expression. It does not necessarily correspond to an object with storage. For instance, it can be the abstract representation of A+B, or, in your case, the abstract representation of a vector with constant values. You can make uscgemv accept only expressions having appropriate storage using the Ref<> class, e.g.:
template <typename ScalarA, typename Index>
void uscgemv(float alpha,
             const USCMatrix<ScalarA,Index> &a,
             Ref<const VectorXf> b,
             Ref<VectorXf> c);
If the third argument does not match the storage of a VectorXf then it will be evaluated for you. Then you can safely call b.data(). To keep the scalar type of b generic, you can still declare it as MatrixBase<DerivedB>& and then copy it into a Ref<const Matrix<typename DerivedB::Scalar, DerivedB::RowsAtCompileTime, DerivedB::ColsAtCompileTime> >:
typedef Ref<const Matrix<typename DerivedB::Scalar, DerivedB::RowsAtCompileTime, DerivedB::ColsAtCompileTime> > RefB;
RefB actual_b(b);
actual_b.data();
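To illustrate the Ref<> mechanism in isolation (a minimal sketch of my own, not the poster's uscgemv; USCMatrix is not needed here): a plain vector binds without a copy, while an arbitrary expression is evaluated into a temporary for you, so data() is always safe to call.
#include <Eigen/Dense>
#include <iostream>

void print_raw(Eigen::Ref<const Eigen::VectorXf> b) {
    const float* p = b.data();                // safe: Ref guarantees contiguous storage
    for (Eigen::Index i = 0; i < b.size(); ++i)
        std::cout << p[i] << ' ';
    std::cout << '\n';
}

int main() {
    Eigen::VectorXf v = Eigen::VectorXf::LinSpaced(4, 0.f, 3.f);
    print_raw(v);                             // binds directly, no copy
    print_raw(Eigen::VectorXf::Ones(4));      // expression: evaluated into a temporary
}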
I guess the issue is this: you are not allowed to get a pointer to data of a MatrixBase<Derived>, since the latter can be any kind of expression in Eigen, like a product of matrices for example. To get a pointer you probably have to first implicitly convert the MatrixBase<Derived> into a Matrix<Scalar, Dynamic, Dynamic>, then use the data() member of the latter.
So you can create a deep copy of the expression, i.e. use something like
Eigen::Matrix<typename Derived::Scalar, Eigen::Dynamic, Eigen::Dynamic> tmp = b;
then use
tmp.data()
This code works now
#include <Eigen/Dense>
#include <iostream>
template<typename Derived>
void use_data(const Eigen::MatrixBase<Derived>& mat)
{
    Eigen::Matrix<typename Derived::Scalar, Eigen::Dynamic, Eigen::Dynamic> tmp = mat;
    typename Derived::Scalar* p = tmp.data();
    std::cout << std::endl;
    for (Eigen::Index i = 0; i < tmp.size(); i++)
        std::cout << *(p + i) << " ";
}

int main()
{
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(2, 2);
    Eigen::MatrixXd B = Eigen::MatrixXd::Random(2, 2);
    // A + B is an expression (a CwiseBinaryOp deriving from MatrixBase), not a Matrix
    use_data(A + B);
}
There is an easy solution to your question: combining Eigen::Map, &a(0, 0) and const_cast, you can reuse the buffer of the MatrixBase.
Example :
template<typename Derived1,
         typename Derived2>
void example(Eigen::MatrixBase<Derived1> const &input,
             Eigen::MatrixBase<Derived2> const &output)
{
    static_assert(std::is_same<typename Derived1::Scalar, typename Derived2::Scalar>::value,
                  "Data type of matrix input and output should be the same");

    using Scalar  = typename Derived1::Scalar;
    using MatType = Eigen::Matrix<Scalar, Eigen::Dynamic, 1>;
    using Mapper  = Eigen::Map<const MatType, Eigen::Aligned>;

    // in the worst case, you can do const_cast<Scalar *> on
    // &input(0, 0); that is, if you cannot explicitly define the Map
    // type as const
    Mapper map(&input(0, 0), input.size());

    // output is taken by const reference (the usual Eigen idiom for writable
    // parameters), so cast the constness away before writing to it
    const_cast<Eigen::MatrixBase<Derived2>&>(output).colwise() += map;
}
I tried it on Windows 8, VC2013 32-bit, with Eigen version 3.2.5; no segmentation fault occurs (yet) and everything looks perfectly fine. I also checked the address of the Map; it is the same as the original input. You can verify it with another example:
#include <Eigen/Dense>
#include <iostream>
template<typename Derived>
void example_2(Eigen::MatrixBase<Derived> &input)
{
    using Scalar = typename Derived::Scalar;
    Eigen::Map<Derived> map(&input(0, 0),
                            input.rows(),
                            input.cols());
    map(0, 0) = Scalar(300);
}

int main()
{
    Eigen::MatrixXd mat(2, 2);
    mat << 0, 1, 2, 3;
    example_2(mat);
    std::cout << mat << "\n\n";
    return 0;
}
The first element of mat will be "300"