Write arbitrary Eigen object to row-major plain storage - c++

I am writing a module that writes data to a file which, by convention, uses only row-major storage. I would like my function to accept both column-major and row-major Eigen objects as input.
Currently I first use Eigen to copy a column-major object to a row-major object before writing. My code works well for most cases, but for Eigen::VectorXi compilation fails with an assertion that I don't understand. How do I solve this? Can I avoid creating many cases?
The code (writing is mimicked by outputting a std::vector):
#include <vector>
#include <iostream>
#include <Eigen/Eigen>
template <class T, int Rows, int Cols, int Options, int MaxRows, int MaxCols>
std::vector<T> write(const Eigen::Matrix<T,Rows,Cols,Options,MaxRows,MaxCols>& matrix)
{
    std::vector<T> data(static_cast<size_t>(matrix.size()));
    if (matrix.IsRowMajor) {
        std::copy(matrix.data(), matrix.data() + matrix.size(), data.begin());
        return data;
    } else {
        Eigen::Matrix<T, Rows, Cols, Eigen::RowMajor, MaxRows, MaxCols> tmp = matrix;
        return write(tmp);
    }
}
int main()
{
    Eigen::VectorXi matrix = Eigen::VectorXi::LinSpaced(10, 0, 9);
    std::vector<int> output = write(matrix);
}
The compilation error:
In file included from test.cpp:3:
In file included from /usr/local/Cellar/eigen/3.3.7/include/eigen3/Eigen/Eigen:1:
In file included from /usr/local/Cellar/eigen/3.3.7/include/eigen3/Eigen/Dense:1:
In file included from /usr/local/Cellar/eigen/3.3.7/include/eigen3/Eigen/Core:457:
/usr/local/Cellar/eigen/3.3.7/include/eigen3/Eigen/src/Core/PlainObjectBase.h:903:7: error: static_assert failed "INVALID_MATRIX_TEMPLATE_PARAMETERS"
EIGEN_STATIC_ASSERT((EIGEN_IMPLIES(MaxRowsAtCompileTime==1 && MaxColsAtCompileTime!=1, (Options&RowMajor)==RowMajor)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/Cellar/eigen/3.3.7/include/eigen3/Eigen/src/Core/util/StaticAssert.h:33:40: note: expanded from macro 'EIGEN_STATIC_ASSERT'
#define EIGEN_STATIC_ASSERT(X,MSG) static_assert(X,#MSG);
^ ~
/usr/local/Cellar/eigen/3.3.7/include/eigen3/Eigen/src/Core/PlainObjectBase.h:535:7: note: in instantiation of member function 'Eigen::PlainObjectBase<Eigen::Matrix<int, -1, 1, 1, -1, 1>
>::_check_template_params' requested here
_check_template_params();
^
/usr/local/Cellar/eigen/3.3.7/include/eigen3/Eigen/src/Core/Matrix.h:377:9: note: in instantiation of function template specialization 'Eigen::PlainObjectBase<Eigen::Matrix<int, -1, 1, 1, -1, 1>
>::PlainObjectBase<Eigen::Matrix<int, -1, 1, 0, -1, 1> >' requested here
: Base(other.derived())
^
test.cpp:14:79: note: in instantiation of function template specialization 'Eigen::Matrix<int, -1, 1, 1, -1, 1>::Matrix<Eigen::Matrix<int, -1, 1, 0, -1, 1> >' requested here
Eigen::Matrix<T, Rows, Cols, Eigen::RowMajor, MaxRows, MaxCols> tmp = matrix;
^
test.cpp:23:31: note: in instantiation of function template specialization 'write<int, -1, 1, 0, -1, 1>' requested here
std::vector<int> output = write(matrix);
^
1 error generated.

Understanding the static assertion
Unfortunately the assertion is not self-explanatory; the only thing you can get from it is the hint that something is wrong with your template parameters. If we look into Eigen's source code, we find the following, beginning on line 903:
EIGEN_STATIC_ASSERT((EIGEN_IMPLIES(MaxRowsAtCompileTime==1 && MaxColsAtCompileTime!=1, (Options&RowMajor)==RowMajor)
&& EIGEN_IMPLIES(MaxColsAtCompileTime==1 && MaxRowsAtCompileTime!=1, (Options&RowMajor)==0)
&& ((RowsAtCompileTime == Dynamic) || (RowsAtCompileTime >= 0))
&& ((ColsAtCompileTime == Dynamic) || (ColsAtCompileTime >= 0))
&& ((MaxRowsAtCompileTime == Dynamic) || (MaxRowsAtCompileTime >= 0))
&& ((MaxColsAtCompileTime == Dynamic) || (MaxColsAtCompileTime >= 0))
&& (MaxRowsAtCompileTime == RowsAtCompileTime || RowsAtCompileTime==Dynamic)
&& (MaxColsAtCompileTime == ColsAtCompileTime || ColsAtCompileTime==Dynamic)
&& (Options & (DontAlign|RowMajor)) == Options),
INVALID_MATRIX_TEMPLATE_PARAMETERS)
Even though the compiler points at
EIGEN_IMPLIES(MaxRowsAtCompileTime==1 && MaxColsAtCompileTime!=1, (Options&RowMajor)==RowMajor)
as the cause of the error, it is really the following line that fails:
EIGEN_IMPLIES(MaxColsAtCompileTime==1 && MaxRowsAtCompileTime!=1, (Options&RowMajor)==0)
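For reference, EIGEN_IMPLIES(a, b) is plain logical implication; in Eigen's macros it boils down to something like:
#define EIGEN_IMPLIES(a,b) (!(a) || (b))   // "a implies b": only violated when a holds and b does not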
Understanding what triggers the assertion
You provide Eigen::VectorXi as an input for write. Eigen::VectorXi is really just a typedef for
Eigen::Matrix<int, Eigen::Dynamic, 1, Eigen::ColMajor, Eigen::Dynamic, 1>
Therefore the line
Eigen::Matrix<T, Rows, Cols, Eigen::RowMajor, MaxRows, MaxCols> tmp = matrix;
in write expands to
Eigen::Matrix<int, Eigen::Dynamic, 1, Eigen::RowMajor, Eigen::Dynamic, 1> tmp = matrix;
which triggers the assertion, since a matrix with MaxColsAtCompileTime==1 and MaxRowsAtCompileTime!=1 must not be RowMajor.
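You can reproduce this in isolation: the following declaration of the expanded type (a minimal sketch) already trips the same static assertion, because a compile-time column vector (MaxColsAtCompileTime == 1, MaxRowsAtCompileTime != 1) is not allowed to be row-major:
// error: static_assert failed "INVALID_MATRIX_TEMPLATE_PARAMETERS"
Eigen::Matrix<int, Eigen::Dynamic, 1, Eigen::RowMajor, Eigen::Dynamic, 1> illegal;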
Solve your problem
The problem is that even though you can check whether your input matrix is a vector, row-major or column-major, you cannot declare
Eigen::Matrix<T, Rows, Cols, Eigen::RowMajor, MaxRows, MaxCols>
if it is not legal to do so at compile time (and here it isn't, due to the static assertion).
You have the following options to make your code work:
1. if constexpr (C++17)
C++17 offers a way to decide at compile time whether a conditional branch is taken, discarding the other branch entirely. The downside of this approach (besides requiring a C++17 compiler) is that you can only test constant expressions.
In the concrete example this looks like this:
template <class T, int Rows, int Cols, int Options, int MaxRows, int MaxCols>
std::vector<T> write(const Eigen::Matrix<T, Rows, Cols, Options, MaxRows, MaxCols>& matrix)
{
    typedef Eigen::Matrix<T, Rows, Cols, Options, MaxRows, MaxCols> MatrixType;
    std::vector<T> data(static_cast<size_t>(matrix.size()));
    if constexpr (MatrixType::MaxRowsAtCompileTime == 1 ||
                  MatrixType::MaxColsAtCompileTime == 1 ||
                  (MatrixType::Options & Eigen::RowMajor) == Eigen::RowMajor) {
        std::copy(matrix.data(), matrix.data() + matrix.size(), data.begin());
        return data;
    } else {
        Eigen::Matrix<T, Rows, Cols, Eigen::RowMajor, MaxRows, MaxCols> tmp = matrix;
        return write(tmp);
    }
}
2. SFINAE
You can dispatch the call to write at compile time via SFINAE using std::enable_if. The following example uses a slightly modified version of your original code, but everything should be clear from context:
// matrix is either a vector or in row-major
template <typename Derived>
std::vector<typename Derived::Scalar> write(const Eigen::MatrixBase<Derived>& matrix,
    typename std::enable_if<Derived::MaxRowsAtCompileTime == 1 ||
                            Derived::MaxColsAtCompileTime == 1 ||
                            (Derived::Options & Eigen::RowMajor) == Eigen::RowMajor,
                            Derived>::type* = 0)
{
    std::vector<typename Derived::Scalar> data(
        static_cast<size_t>(matrix.size()));
    std::copy(matrix.derived().data(), matrix.derived().data() + matrix.size(),
              data.begin());
    return data;
}

// matrix is neither a vector nor in row-major
template <typename Derived>
std::vector<typename Derived::Scalar> write(const Eigen::MatrixBase<Derived>& matrix,
    typename std::enable_if<Derived::MaxRowsAtCompileTime != 1 &&
                            Derived::MaxColsAtCompileTime != 1 &&
                            (Derived::Options & Eigen::RowMajor) == 0,
                            Derived>::type* = 0)
{
    Eigen::Matrix<typename Derived::Scalar, Derived::RowsAtCompileTime,
                  Derived::ColsAtCompileTime, Eigen::RowMajor,
                  Derived::MaxRowsAtCompileTime, Derived::MaxColsAtCompileTime> tmp = matrix;
    return write(tmp);
}
This works with a C++11 compiler.
Another option would be to specialise the template, but that gets even more lengthy than the SFINAE approach.
Some test cases:
Eigen::Matrix<int, 3, 3, Eigen::RowMajor> m;
m << 1, 2, 3,
     1, 2, 3,
     1, 2, 3;
std::vector<int> output = write(m);
for (const auto& element : output) {
    std::cout << element << " ";
}
Output: 1 2 3 1 2 3 1 2 3
Eigen::Matrix<int, 3, 3, Eigen::ColMajor> m;
m << 1, 2, 3,
     1, 2, 3,
     1, 2, 3;
std::vector<int> output = write(m);
for (const auto& element : output) {
    std::cout << element << " ";
}
Output: 1 2 3 1 2 3 1 2 3
Eigen::VectorXi m = Eigen::VectorXi::LinSpaced(10, 0, 9);
std::vector<int> output = write(m);
for (const auto& element : output) {
    std::cout << element << " ";
}
Output: 0 1 2 3 4 5 6 7 8 9
Eigen::RowVectorXi m = Eigen::RowVectorXi::LinSpaced(10, 0, 9);
std::vector<int> output = write(m);
for (const auto& element : output) {
    std::cout << element << " ";
}
Output: 0 1 2 3 4 5 6 7 8 9

A simpler solution is to let Eigen::Ref do all the work for you:
Ref<const Matrix<T,Rows,Cols,Cols==1?ColMajor:RowMajor,MaxRows,MaxCols>,0, InnerStride<1> > row_maj(matrix);
Then row_maj will be guaranteed to be sequentially stored in row-major order. If matrix is compatible, then no copy occurs. No branch, no SFINAE, etc.
Here matrix can be any expression, not only a Matrix<...> but also sub-matrices, Map, another Ref, etc.
To handle arbitrary expressions, just replace Rows and the like with XprType::RowsAtCompileTime, where XprType is the type of matrix.
template <class XprType>
std::vector<typename XprType::Scalar> write(const Eigen::MatrixBase<XprType>& matrix)
{...}
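Putting the pieces together, a complete write based on Eigen::Ref could look like the following sketch (the local aliases are only for readability and are not part of the original answer):
#include <vector>
#include <Eigen/Eigen>

template <class XprType>
std::vector<typename XprType::Scalar> write(const Eigen::MatrixBase<XprType>& matrix)
{
    typedef typename XprType::Scalar T;
    enum {
        Rows    = XprType::RowsAtCompileTime,
        Cols    = XprType::ColsAtCompileTime,
        MaxRows = XprType::MaxRowsAtCompileTime,
        MaxCols = XprType::MaxColsAtCompileTime
    };
    // Request row-major storage unless that would be illegal (a compile-time column vector).
    Eigen::Ref<const Eigen::Matrix<T, Rows, Cols,
                                   Cols == 1 ? Eigen::ColMajor : Eigen::RowMajor,
                                   MaxRows, MaxCols>,
               0, Eigen::InnerStride<1> > row_maj(matrix.derived());
    // row_maj now refers to sequential row-major data (copied only if the input was incompatible).
    return std::vector<T>(row_maj.data(), row_maj.data() + row_maj.size());
}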

Related

C++ (Eigen) - Coefficientwise boolean between two differently sized vector

Suppose I have two Eigen Arrays defined as
Array<float, 10, 1> a;
Array<float, 4, 1> b;
Now I would like to get an Array<bool, 10, 4> result; it should read true at position (i, j) whenever the i-th of the 10 entries of a is larger than the j-th of the four entries of b. One way I thought of to achieve that was:
Array<float, 10, 1> a;
Array<float, 4, 1> b;
// ... fill the arrays
Array<bool, 10, 4> result = a.replicate(1 , 4) > (b.transpose()).replicate(10, 1);
In reality, the matrix dimensions are much larger though. This approach works, but I have two questions.
First: if I replace Array<bool, 10, 4> result = ... by auto result = ..., using the result is much slower for some reason (I use it afterwards for some calculations based on the boolean outcome).
Second: while this works, I wonder whether it is the most efficient approach, as I don't know whether replicate requires copying. I could consider iterating over one of the two dimensions
Array<float, 10, 1> a;
Array<float, 4, 1> b;
// ... fill the arrays
Array<bool, 10, 4> result;
for (int i = 0; i < 4; i++) {
    result.col(i) = a > b(i, 0);
}
which would remove the need for replicate at the cost of adding an explicit iteration.
Any help is greatly appreciated!
EDIT:
Here's the chunk of code of interest. It gets called very often, so speedup is of great concern; right now it eats up 90% of the overall execution time. It is part of line-intersection detection for thousands of lines. My approach was to write all the checks as matrix expressions to avoid iterating over all line pairs. To do that, I build a matrix with n rows for all n lines and m columns for all lines the other n lines could collide with. Then the line-intersection formula can be applied coefficient-wise over the big matrices, which I hope brings a speedup.
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> x1 = rays.col(0).replicate(1, 4*obs.size());
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> x2 = (rays.col(2) + rays.col(0)).replicate(1, 4*obs.size());
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> y1 = rays.col(1).replicate(1, 4*obs.size());
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> y2 = (rays.col(3) + rays.col(1)).replicate(1, 4*obs.size());
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> x3 = obstacles.col(0).transpose().replicate(num_rays*num_ships, 1);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> x4 = obstacles.col(2).transpose().replicate(num_rays*num_ships, 1);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> y3 = obstacles.col(1).transpose().replicate(num_rays*num_ships, 1);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> y4 = obstacles.col(3).transpose().replicate(num_rays*num_ships, 1);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> t_den = (x1-x2)*(y3-y4) -(y1-y2)*(x3-x4);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> t = (x1-x3)*(y3-y4) -(y1-y3)*(x3-x4)/t_den;
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> u = ((x1-x3)*(y1-y2) - (y1-y3)*(x1-x2));
Eigen::Array<bool, Eigen::Dynamic, Eigen::Dynamic> col_r = 0 <= t && 0 <= u && u <= t_den;
t_rays = col_r.select(t, 1000).rowwise().minCoeff();
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> x1_ = ship_b_boxs.col(0).replicate(1, 4*obs.size());
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> x2_ = (ship_b_boxs.col(2)).replicate(1, 4*obs.size());
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> y1_ = ship_b_boxs.col(1).replicate(1, 4*obs.size());
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> y2_ = (ship_b_boxs.col(3)).replicate(1, 4*obs.size());
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> x3_ = x3(Eigen::seq(0, 4*num_ships-1), Eigen::all);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> x4_ = x4(Eigen::seq(0, 4*num_ships-1), Eigen::all);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> y3_ = y3(Eigen::seq(0, 4*num_ships-1), Eigen::all);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> y4_ = y4(Eigen::seq(0, 4*num_ships-1), Eigen::all);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> t_den_ = (x1_-x2_)*(y3_-y4_) -(y1_-y2_)*(x3_-x4_);
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> t_ = (x1_-x3_)*(y3_-y4_) -(y1_-y3_)*(x3_-x4_)/t_den_;
Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic> u_ = ((x1_-x3_)*(y1_-y2_) - (y1_-y3_)*(x1_-x2_));
Eigen::Array<bool, Eigen::Dynamic, Eigen::Dynamic> col_s = (0 <= t_ && t_ <= 1 && 0 <= u_ && u_ <= t_den_).rowwise().maxCoeff();
with 85% taken by the inlined 'Eigen::Array::Array'. I can see that all the temporary array constructions probably take a lot of time. However, storing these temporaries is beneficial as they are used more than once. Is there any way to speed this up?
First: if I replace Array<bool, 10, 4> result = ... by auto result =
...,using the result is much slower for some reason (I use it after to
do some calculations based on the outcome of the bool).
As mentioned by #Homer512 in a comment, do not use auto with Eigen expressions.
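To illustrate the pitfall (a minimal sketch, not from the original answers): with auto, the result is only the unevaluated comparison expression, so every later read re-runs the whole comparison; assigning to a concrete Array forces a single evaluation into real storage.
Eigen::Array<float, 10, 1> a;
Eigen::Array<float, 4, 1> b;
a.setRandom();
b.setRandom();
auto lazy = a.replicate(1, 4) > b.transpose().replicate(10, 1); // just an expression, nothing computed yet
Eigen::Array<bool, 10, 4> eager = lazy;                         // evaluated once, stored as plain bools
// every access to 'lazy' redoes the comparison; every access to 'eager' is a simple memory read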
Second: While this works, I wonder if this is the most efficient
approach as I don't know whether replicate requires copying
No. If you use replicate within an expression and do not actively store it in some intermediate variable or otherwise force the expression to be evaluated, there is no copying involved. In typical Eigen style, replicate only returns an expression of the replication (see the doc); an actual object containing a full replica is only created if absolutely necessary.
In short, this involves copies:
Array<bool, 10, 4> a4r = a.replicate(1 , 4);
Array<bool, 10, 4> b4r = b.transpose().replicate(10, 1);
Array<bool, 10, 4> result = a4r > b4r;
while your expression does not:
Array<bool, 10, 4> result = a.replicate(1 , 4) > b.transpose().replicate(10, 1);
So I really think it's the most efficient way to do it, or close to it.
How is this magic possible? It is well explained in the doc.
Side notes:
- if your matrices are big, you should consider switching to dynamic sizes; see Eigen's own recommendations in "Fixed vs. Dynamic size"
- on the other hand, if you do know the sizes at compile time, you could consider the template version of replicate: a.replicate<1,4>() > b.transpose().replicate<10,1>();
If you are still not convinced that no copying is involved, you can check the source code, as of version 3.4.0:
Eigen/src/Core/Replicate.h
template<typename MatrixType,int RowFactor,int ColFactor> class Replicate
  : public internal::dense_xpr_base< Replicate<MatrixType,RowFactor,ColFactor> >::type
{
    [...]
    template<typename OriginalMatrixType>
    EIGEN_DEVICE_FUNC
    inline Replicate(const OriginalMatrixType& matrix, Index rowFactor, Index colFactor)
      : m_matrix(matrix), m_rowFactor(rowFactor), m_colFactor(colFactor)
    {
      EIGEN_STATIC_ASSERT((internal::is_same<typename internal::remove_const<MatrixType>::type,OriginalMatrixType>::value),
                          THE_MATRIX_OR_EXPRESSION_THAT_YOU_PASSED_DOES_NOT_HAVE_THE_EXPECTED_TYPE)
    }
    EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR
    inline Index rows() const { return m_matrix.rows() * m_rowFactor.value(); }
    EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR
    inline Index cols() const { return m_matrix.cols() * m_colFactor.value(); }
    EIGEN_DEVICE_FUNC
    const _MatrixTypeNested& nestedExpression() const
    {
      return m_matrix;
    }
  protected:
    MatrixTypeNested m_matrix;
    const internal::variable_if_dynamic<Index, RowFactor> m_rowFactor;
    const internal::variable_if_dynamic<Index, ColFactor> m_colFactor;
};
Eigen/src/Core/DenseBase.h
const Replicate<Derived, Dynamic, Dynamic> replicate(Index rowFactor, Index colFactor) const
{
    return Replicate<Derived, Dynamic, Dynamic>(derived(), rowFactor, colFactor);
}
Eigen/src/Core/CoreEvaluators.h
template<typename ArgType, int RowFactor, int ColFactor>
struct unary_evaluator<Replicate<ArgType, RowFactor, ColFactor> >
  : evaluator_base<Replicate<ArgType, RowFactor, ColFactor> >
{
    [...]
    EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE
    explicit unary_evaluator(const XprType& replicate)
      : m_arg(replicate.nestedExpression()),
        m_argImpl(m_arg),
        m_rows(replicate.nestedExpression().rows()),
        m_cols(replicate.nestedExpression().cols())
    {}
    EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE
    CoeffReturnType coeff(Index row, Index col) const
    {
        // try to avoid using modulo; this is a pure optimization strategy
        const Index actual_row = internal::traits<XprType>::RowsAtCompileTime==1 ? 0
                               : RowFactor==1 ? row
                               : row % m_rows.value();
        const Index actual_col = internal::traits<XprType>::ColsAtCompileTime==1 ? 0
                               : ColFactor==1 ? col
                               : col % m_cols.value();
        return m_argImpl.coeff(actual_row, actual_col);
    }
    [...]
  protected:
    const ArgTypeNested m_arg;
    evaluator<ArgTypeNestedCleaned> m_argImpl;
    const variable_if_dynamic<Index, ArgType::RowsAtCompileTime> m_rows;
    const variable_if_dynamic<Index, ArgType::ColsAtCompileTime> m_cols;
};
I know the code is heavy and difficult to interpret if you are not familiar with these mechanisms, which is why I have removed all but the most relevant parts. What you need to retain is this: replicate only creates an expression of type Replicate, which simply stores a reference to the original object, the number of row copies, and the number of column copies. When you retrieve a coefficient from the expression, the evaluator computes the appropriate index into the original matrix (using modulo) and returns the corresponding element.
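As a small illustration of that mechanism (a sketch, not part of the original answer):
Eigen::ArrayXf a = Eigen::ArrayXf::LinSpaced(10, 0.f, 9.f);
float x = a.replicate(1, 4)(7, 3); // no 10x4 buffer is built: this reads a(7) on demand, so x == 7.0f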

convert T* array (Jet* or float*) to cv::Mat<CV_32f>

I am using ceres-solver with AutoDiffCostFunction. My cost function takes a 1x3 vector as its parameter and outputs a 1x1 residual.
How can I create an OpenCV Mat out of my T* parameter vector? It may be either Jet or float.
I tried the following code, but get the error "cannot convert from Jet to float":
struct ErrorFunc
{
    template <typename T>
    bool operator()(const T* const Kparams, T* residual) const // Kparams - [f, u, v]
    {
        cv::Mat K = cv::Mat::eye(3, 3, CV_32F);
        K.at<float>(0, 0) = float(Kparams[0]); // error
        K.at<float>(0, 2) = float(Kparams[1]); // error
        K.at<float>(1, 1) = float(Kparams[0]); // error
        K.at<float>(1, 2) = float(Kparams[2]); // error
        Mat Hdot = K.inv() * H * K;
        cv::decomposeHomographyMat(Hdot, K, rot, tr, norm); // want to call this opencv function
        residual[0] = calcResidual(norm);
        return true;
    }
    Mat H;
};
There is a way to get an Eigen matrix out of a T* array:
const Eigen::Matrix< T, 3, 3, Eigen::RowMajor> hom = Eigen::Map< const Eigen::Matrix< T, 3, 3, Eigen::RowMajor> >(Matrix)
but I want to call cv::decomposeHomographyMat. How can I do this?
You cannot use an OpenCV method in a ceres::AutoDiffCostFunction this way. The OpenCV method is not templated on the type T, which ceres requires in order to do the automatic differentiation. The cast to float cannot be done because a ceres Jet carries a whole vector of Jacobian components rather than a single scalar.
You have two options:
1) Use numerical differentiation: see http://ceres-solver.org/nnls_tutorial.html#numeric-derivatives (a sketch follows below)
2) Use a templated library (e.g. Eigen http://eigen.tuxfamily.org/index.php?title=Main_Page) to rewrite the required homography decomposition
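For the first option, the cost functor takes plain doubles instead of a template parameter T, so the OpenCV calls compile without problems. A rough sketch (mirroring the question's code; calcResidual and H are assumed to exist as in the question):
#include <vector>
#include <opencv2/opencv.hpp>
#include <ceres/ceres.h>

struct ErrorFuncNumeric
{
    bool operator()(const double* const Kparams, double* residual) const // Kparams - [f, u, v]
    {
        cv::Mat K = cv::Mat::eye(3, 3, CV_32F);
        K.at<float>(0, 0) = static_cast<float>(Kparams[0]);
        K.at<float>(0, 2) = static_cast<float>(Kparams[1]);
        K.at<float>(1, 1) = static_cast<float>(Kparams[0]);
        K.at<float>(1, 2) = static_cast<float>(Kparams[2]);
        cv::Mat Hdot = K.inv() * H * K;
        std::vector<cv::Mat> rot, tr, norm;
        cv::decomposeHomographyMat(Hdot, K, rot, tr, norm);
        residual[0] = calcResidual(norm); // calcResidual as in the question
        return true;
    }
    cv::Mat H;
};

// when building the ceres problem: 1 residual, 3 parameters, central differences
ceres::CostFunction* cost_function =
    new ceres::NumericDiffCostFunction<ErrorFuncNumeric, ceres::CENTRAL, 1, 3>(
        new ErrorFuncNumeric);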

Generic Constexpr Lookup Table C++11

I'm trying to construct a generic lookup table that takes a generator function and creates the table at compile time. Here is the code for the table and its generation:
#ifndef CONSTEXPR_LOOKUPTABLE_H
#define CONSTEXPR_LOOKUPTABLE_H

#include <cstddef>

/* Generate a range */
template <std::size_t... Is>
struct seq{};

template <std::size_t N, std::size_t... Is>
struct gen_seq : gen_seq<N - 1, N - 1, Is...>{};

template <std::size_t... Is>
struct gen_seq<0, Is...> : seq<Is...>{};

/*
    The lookup table consisting of values to be
    computed at compile-time
*/
template<std::size_t N, class T>
struct LookUpTable{
    std::size_t indexes[N];
    T values[N];
    static constexpr std::size_t length = N;
};

/*
    Generate the table from a generator function
*/
template <class Lambda, std::size_t... Is>
constexpr auto LookUpTableGenerator(seq<Is...>, Lambda f) ->
    LookUpTable<sizeof...(Is), decltype(f(Is)...)>
{
    return {{ Is... }, { f(Is)... }};
}

template <std::size_t N, class Lambda>
constexpr auto LookUpTableGenerator(Lambda f) ->
    decltype(LookUpTableGenerator(gen_seq<N>{}, f))
{
    return LookUpTableGenerator(gen_seq<N>{}, f);
}

#endif
Here is the main function:
#include <iostream>
#include <memory>
#include <vector>
#include <string>
#include "ConstExprLookupTable.h"
typedef unsigned short ushort;
typedef unsigned char byte;
/*
There are only 10 digits (0 - 9)
*/
static constexpr ushort DIGITS = 10;
/*
A table to prevent repeated division calculation:
table[i][j] = (i + j) / 10;
*/
static constexpr ushort carry_table[DIGITS][DIGITS] = \
{
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0, 0, 1},
{0, 0, 0, 0, 0, 0, 0, 0, 1, 1},
{0, 0, 0, 0, 0, 0, 0, 1, 1, 1},
{0, 0, 0, 0, 0, 0, 1, 1, 1, 1},
{0, 0, 0, 0, 0, 1, 1, 1, 1, 1},
{0, 0, 0, 0, 1, 1, 1, 1, 1, 1},
{0, 0, 0, 1, 1, 1, 1, 1, 1, 1},
{0, 0, 1, 1, 1, 1, 1, 1, 1, 1},
{0, 1, 1, 1, 1, 1, 1, 1, 1, 1}
};
static constexpr double myFunc(double x)
{
    return x / DIGITS;
}

int main()
{
    constexpr std::size_t length = 100;
    auto table = LookUpTableGenerator<length>(myFunc);
    for (auto v : table.values){
        std::cout << v << " ";
    }
    std::cout << "\n";
}
However, this generates the following compile-time errors:
ConstExprLookupTable.h:33:46: error: template argument 2 is invalid
LookUpTable<sizeof...(Is), decltype(f(Is)...)>
^
ConstExprLookupTable.h:33:46: error: template argument 2 is invalid
ConstExprLookupTable.h:33:46: error: template argument 2 is invalid
ConstExprLookupTable.h:33:46: error: template argument 2 is invalid
ConstExprLookupTable.h:33:46: error: template argument 2 is invalid
ConstExprLookupTable.h:33:46: error: template argument 2 is invalid
ConstExprLookupTable.h:33:46: error: template argument 2 is invalid
ConstExprLookupTable.h:33:46: error: template argument 2 is invalid
ConstExprLookupTable.h:33:46: error: template argument 2 is invalid
ConstExprLookupTable.h:33:1: error: invalid use of template-name ‘LookUpTable’ without an argument list
LookUpTable<sizeof...(Is), decltype(f(Is)...)>
^
ConstExprLookupTable.h:33:12: error: expected initializer before ‘<’ token
LookUpTable<sizeof...(Is), decltype(f(Is)...)>
^
virt.cpp: In function ‘int main()’:
virt.cpp:88:53: error: no matching function for call to ‘LookUpTableGenerator(double (&)(double))’
auto table = LookUpTableGenerator<length>(myFunc);
My questions are (finally!):
1) Is this possible to do? When I replace the class T parameter of the lookup table with a concrete type (like a double), the code compiles fine.
2) I think the error is here:
template <class Lambda, std::size_t... Is>
constexpr auto LookUpTableGenerator(seq<Is...>, Lambda f) ->
    LookUpTable<sizeof...(Is), decltype(f(Is)...)>
{
    return {{ Is... }, { f(Is)... }};
}
It seems to not like the decltype. What should the decltype be in this case?
There are lots of contexts where a pack can be expanded, but decltype isn't one of them. You'd have to just wrap your pack in some metafunction that pulls out the type. Something as easy as:
template <typename T, typename... >
using first = T;
And then use it:
template <class Lambda, std::size_t... Is>
constexpr auto LookUpTableGenerator(seq<Is...>, Lambda f) ->
LookUpTable<sizeof...(Is), first<decltype(f(Is))...>>
Though since all the Is are the same type anyway (size_t), you could just use that directly:
template <class Lambda, std::size_t... Is>
constexpr auto LookUpTableGenerator(seq<Is...>, Lambda f) ->
LookUpTable<sizeof...(Is), decltype(f(std::declval<size_t>()))>
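With either fix, the question's main compiles, and the value type is deduced from the generator's return type. A quick sanity check (a small sketch; it needs <type_traits>):
#include <type_traits>

auto table = LookUpTableGenerator<4>(myFunc); // myFunc as defined in the question
static_assert(std::is_same<decltype(table), LookUpTable<4, double>>::value,
              "the value type double is deduced from myFunc's return type");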

OpenCV 2.3.1. cv::Mat to std::vector cast

I have trouble converting cv::Mat to std::vector:
cv::Mat m = cv::Mat_<int>::eye(3, 3);
std::vector<int> vec = m;
gives me the following:
OpenCV Error: Assertion failed (dims == 2 && (size[0] == 1 || size[1] == 1 || size[0]*size[1] == 0)) in create, file /build/buildd-opencv_2.3.1-11-i386-tZNeKk/opencv-2.3.1/modules/core/src/matrix.cpp, line 1225
terminate called after throwing an instance of 'cv::Exception'
what(): /build/buildd-opencv_2.3.1-11-i386-tZNeKk/opencv-2.3.1/modules/core/src/matrix.cpp:1225: error: (-215) dims == 2 && (size[0] == 1 || size[1] == 1 || size[0]*size[1] == 0) in function create
from mat.hpp:
template<typename _Tp> inline Mat::operator vector<_Tp>() const
{
    vector<_Tp> v;
    copyTo(v);
    return v;
}
and later on the following code in copyTo is executed:
//mat.hpp
template<typename _Tp> inline _OutputArray::_OutputArray(vector<_Tp>& vec) : _InputArray(vec) {}
template<typename _Tp> inline _InputArray::_InputArray(const vector<_Tp>& vec)
: flags(STD_VECTOR + DataType<_Tp>::type), obj((void*)&vec) {}
// operations.hpp
template<typename _Tp> inline Size_<_Tp>::Size_()
: width(0), height(0) {}
and then I get an exception.
Any idea? Is it a bug? Probably I do not understand something...
Thank you in advance!
It seems like you are trying to convert a two-dimensional 3x3 matrix into a one-dimensional vector. Not sure what result you're expecting from that, but you probably want to convert a row of the matrix into a vector. You can do that by giving the vector constructor a pointer to the row data:
int* p = m.ptr<int>(0);              // pointer to row 0
std::vector<int> vec(p, p + m.cols); // construct a vector from that row
Very well then!
The data of a (continuous) cv::Mat is stored as one contiguous array of bytes.
So, if you want to represent your whole matrix as a vector, you may do something like this:
cv::Mat m = cv::Mat_<int>::eye(3, 3);
int* data = reinterpret_cast<int*>(m.data);
int len = m.rows * m.cols;
std::vector<int> vec(len);
std::copy(data + 0, data + len, vec.begin());
From the error message, it looks like you can only convert matrices where one dimension is 1 to std::vector, i.e. only row or column vectors (mathematically speaking):
dims == 2 && (size[0] == 1 || size[1] == 1)
Which kind of makes sense...
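If you do want the whole 3x3 matrix flattened into one std::vector, a simple trick (a sketch; it assumes the matrix is continuous, which a freshly created one is) is to reshape it to a single row first so that the assertion size[0] == 1 is satisfied:
cv::Mat m = cv::Mat_<int>::eye(3, 3);
std::vector<int> vec = m.reshape(1, 1); // 1 channel, 1 row: the resulting 1x9 matrix converts fine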

Eigen Assertion error at run time

I am compiling a program that uses several Eigen::MatrixXd methods, and while I get no errors when compiling it, running it I get the following error:
darwin-pi2: /usr/include/Eigen/src/Core/Assign.h:498: Derived& Eigen::DenseBase<Derived>::lazyAssign(const Eigen::DenseBase<OtherDerived>&) [with OtherDerived = Eigen::Matrix<double, -1, -1>; Derived = Eigen::Matrix<double, 15, 15, 0, 15, 15>]: Assertion `rows() == other.rows() && cols() == other.cols()' failed.
I guess it is something related to Eigen matrices, but I do not understand what the assertion rows() == other.rows() && cols() == other.cols() failed means.
Because Eigen::MatrixXd has dimensions determined at runtime, the compile-time size checks are all disabled and deferred until runtime.
In this case, it looks like you're assigning from a dynamic-size matrix to a 15x15 one. Try double-checking and debugging the size of that dynamic one.
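A minimal sketch of the situation described by that message (the 15x15 and 10x10 sizes are just assumptions mirroring the assertion):
Eigen::MatrixXd dyn = Eigen::MatrixXd::Zero(10, 10); // size only known at run time
Eigen::Matrix<double, 15, 15> fixed;                 // size fixed at compile time
// fixed = dyn;                    // compiles, but aborts at run time: rows()/cols() do not match
fixed = Eigen::MatrixXd::Zero(15, 15);               // fine: both sides are 15x15 at run time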
In MATLAB, the index of a matrix m starts from 1, but in Eigen it starts from 0. Here is a simple example.
#include <iostream>
#include <Eigen/Dense>
using Eigen::MatrixXd;
int main()
{
    MatrixXd m(2,2);
    m(0,0) = 3; // INDEX starts from 0, not 1
    m(1,0) = 2.5;
    m(0,1) = -1;
    m(1,1) = m(1,0) + m(0,1);
    std::cout << m << std::endl;
}
For more information, see the docs.