This is my code:
MatrixXd A(3,3);
A << 1, 2, 3, 4, 5, 6, 7, 8, 9;
MatrixXd b(3,3);
b = (A.array() == A.array()).matrix();
cout << b << endl;
It shows that something is wrong with (A.array() == A.array()).matrix().
This is the error message:
In file included from /home/biss/Desktop/self-driving-car/term2/kalman-Filter/Eigen/Core:254:0,
from /home/biss/Desktop/self-driving-car/term2/kalman-Filter/Eigen/Dense:1,
from /home/biss/Desktop/self-driving-car/term2/kalman-Filter/cpp_normal_op/clion_c/main.cpp:2:
/home/biss/Desktop/self-driving-car/term2/kalman-Filter/Eigen/src/Core/Assign.h: In instantiation of ‘Derived& Eigen::DenseBase<Derived>::lazyAssign(const Eigen::DenseBase<OtherDerived>&) [with OtherDerived = Eigen::MatrixWrapper<const Eigen::CwiseBinaryOp<Eigen::internal::scalar_cmp_op<double, (Eigen::internal::ComparisonName)0u>, const Eigen::ArrayWrapper<Eigen::Matrix<double, -1, -1> >, const Eigen::ArrayWrapper<Eigen::Matrix<double, -1, -1> > > >; Derived = Eigen::Matrix<double, -1, -1>]’:
/home/biss/Desktop/self-driving-car/term2/kalman-Filter/Eigen/src/Core/PlainObjectBase.h:414:30: required from ‘Derived& Eigen::PlainObjectBase<Derived>::lazyAssign(const Eigen::DenseBase<OtherDerived>&) [with OtherDerived = Eigen::MatrixWrapper<const Eigen::CwiseBinaryOp<Eigen::internal::scalar_cmp_op<double, (Eigen::internal::ComparisonName)0u>, const Eigen::ArrayWrapper<Eigen::Matrix<double, -1, -1> >, const Eigen::ArrayWrapper<Eigen::Matrix<double, -1, -1> > > >; Derived = Eigen::Matrix<double, -1, -1>]’
/home/biss/Desktop/self-driving-car/term2/kalman-Filter/Eigen/src/Core/Assign.h:527:123: required from ‘static Derived& Eigen::internal::assign_selector<Derived, OtherDerived, false, false>::run(Derived&, const OtherDerived&) [with Derived = Eigen::Matrix<double, -1, -1>; OtherDerived = Eigen::MatrixWrapper<const Eigen::CwiseBinaryOp<Eigen::internal::scalar_cmp_op<double, (Eigen::internal::ComparisonName)0u>, const Eigen::ArrayWrapper<Eigen::Matrix<double, -1, -1> >, const Eigen::ArrayWrapper<Eigen::Matrix<double, -1, -1> > > >]’
/home/biss/Desktop/self-driving-car/term2/kalman-Filter/Eigen/src/Core/PlainObjectBase.h:653:72: required from ‘Derived& Eigen::PlainObjectBase<Derived>::_set_noalias(const Eigen::DenseBase<OtherDerived>&) [with OtherDerived = Eigen::MatrixWrapper<const Eigen::CwiseBinaryOp<Eigen::internal::scalar_cmp_op<double, (Eigen::internal::ComparisonName)0u>, const Eigen::ArrayWrapper<Eigen::Matrix<double, -1, -1> >, const Eigen::ArrayWrapper<Eigen::Matrix<double, -1, -1> > > >; Derived = Eigen::Matrix<double, -1, -1>]’
/home/biss/Desktop/self-driving-car/term2/kalman-Filter/Eigen/src/Core/PlainObjectBase.h:638:114: required from ‘void Eigen::PlainObjectBase<Derived>::_set_selector(const OtherDerived&, const Eigen::internal::false_type&) [with OtherDerived = Eigen::MatrixWrapper<const Eigen::CwiseBinaryOp<Eigen::internal::scalar_cmp_op<double, (Eigen::internal::ComparisonName)0u>, const Eigen::ArrayWrapper<Eigen::Matrix<double, -1, -1> >, const Eigen::ArrayWrapper<Eigen::Matrix<double, -1, -1> > > >; Derived = Eigen::Matrix<double, -1, -1>]’
/home/biss/Desktop/self-driving-car/term2/kalman-Filter/Eigen/src/Core/PlainObjectBase.h:630:20: required from ‘Derived& Eigen::PlainObjectBase<Derived>::_set(const Eigen::DenseBase<OtherDerived>&) [with OtherDerived = Eigen::MatrixWrapper<const Eigen::CwiseBinaryOp<Eigen::internal::scalar_cmp_op<double, (Eigen::internal::ComparisonName)0u>, const Eigen::ArrayWrapper<Eigen::Matrix<double, -1, -1> >, const Eigen::ArrayWrapper<Eigen::Matrix<double, -1, -1> > > >; Derived = Eigen::Matrix<double, -1, -1>]’
/home/biss/Desktop/self-driving-car/term2/kalman-Filter/Eigen/src/Core/Matrix.h:172:24: required from ‘Eigen::Matrix<_Scalar, _Rows, _Cols, _Options, _MaxRows, _MaxCols>& Eigen::Matrix<_Scalar, _Rows, _Cols, _Options, _MaxRows, _MaxCols>::operator=(const Eigen::MatrixBase<OtherDerived>&) [with OtherDerived = Eigen::MatrixWrapper<const Eigen::CwiseBinaryOp<Eigen::internal::scalar_cmp_op<double, (Eigen::internal::ComparisonName)0u>, const Eigen::ArrayWrapper<Eigen::Matrix<double, -1, -1> >, const Eigen::ArrayWrapper<Eigen::Matrix<double, -1, -1> > > >; _Scalar = double; int _Rows = -1; int _Cols = -1; int _Options = 0; int _MaxRows = -1; int _MaxCols = -1]’
/home/biss/Desktop/self-driving-car/term2/kalman-Filter/cpp_normal_op/clion_c/main.cpp:23:7: required from here
/home/biss/Desktop/self-driving-car/term2/kalman-Filter/Eigen/src/Core/util/StaticAssert.h:32:40: error: static assertion failed: YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY
#define EIGEN_STATIC_ASSERT(X,MSG) static_assert(X,#MSG);
^
/home/biss/Desktop/self-driving-car/term2/kalman-Filter/Eigen/src/Core/Assign.h:500:3: note: in expansion of macro ‘EIGEN_STATIC_ASSERT’
EIGEN_STATIC_ASSERT(SameType,YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY)
^
CMakeFiles/clion_c.dir/build.make:62: recipe for target 'CMakeFiles/clion_c.dir/main.cpp.o' failed
make[3]: *** [CMakeFiles/clion_c.dir/main.cpp.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/clion_c.dir/all' failed
make[2]: *** [CMakeFiles/clion_c.dir/all] Error 2
CMakeFiles/Makefile2:79: recipe for target 'CMakeFiles/clion_c.dir/rule' failed
make[1]: *** [CMakeFiles/clion_c.dir/rule] Error 2
Makefile:118: recipe for target 'clion_c' failed
make: *** [clion_c] Error 2
However, if I change my code:
MatrixXd A(3,3);
A << 1, 2, 3, 4, 5, 6, 7, 8, 9;
MatrixXd b(3,3);
b = (A.array() * A.array()).matrix();
cout << b << endl;
it compiles and runs fine, printing:
1 4 9
16 25 36
49 64 81
If I want to do this operation: (A.array() == A.array()).matrix(), what should I do?
Let's break it down a bit. (A.array() == A.array()) represents the (2D) array of booleans indicating element-wise equality. If you were to write
std::cout << (A.array() == A.array());
you would get
1 1 1
1 1 1
1 1 1
as you're asking whether A equals itself, and it happens not to contain any NaNs. The error message you got says error: static assertion failed: YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY, which means exactly that: there is no implicit casting between numeric types. You cannot assign a MatrixXf to a MatrixXd either. So, to make it work, you want to write
b = (A.array() == A.array()).cast<double>().matrix();
which explicitly casts the booleans to doubles. I'm pretty sure that's not exactly what you want to do, but that is what is written in your question (hopefully because it's an incomplete MCVE).
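For completeness, a minimal self-contained sketch of the fix (nothing here beyond the question's code plus the cast):

#include <Eigen/Dense>
#include <iostream>
using Eigen::MatrixXd;
using std::cout;
using std::endl;

int main() {
    MatrixXd A(3, 3);
    A << 1, 2, 3, 4, 5, 6, 7, 8, 9;
    // The comparison yields a boolean array; cast it back to double
    // before wrapping it as a matrix and assigning to a MatrixXd.
    MatrixXd b = (A.array() == A.array()).cast<double>().matrix();
    cout << b << endl;  // prints a 3x3 matrix of ones
}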
Related
I would like to use the MatrixXd class for meshes with offsets (0.5, 0) and (0, 0.5). In mathematical formulas, the velocity is calculated between cells i and i+1, and this is written as vel(i+0.5, j). I would like to introduce syntax like this:
#include <Eigen/Dense>

int main() {
    Eigen::MatrixXd m = Eigen::MatrixXd::Zero(5, 5);
    // Want to use similar syntax:
    // m(0, 1.5) = 1.0;
    // and
    // m(3.5, 1) = 2.0;
    // Instead of:
    m(0, 2) = 1.0;
    m(4, 1) = 2.0;
}
I tried using an EIGEN_MATRIXBASE_PLUGIN like this one:
inline Scalar& operator()(int r, int c) {
    return Base::operator()(r, c);
}
inline Scalar& operator()(double r, int c) {
    return Base::operator()(int(r + 0.5), c);
}
inline Scalar& operator()(int r, double c) {
    return Base::operator()(r, int(c + 0.5));
}
However, this approach:
1. Works only for an X-axis offset or only a Y-axis offset, not both at the same time.
2. Works only for a specific offset hardcoded into the plugin.
3. Breaks some internal Eigen conventions, which can be demonstrated by trying to compile the BiCG example with the IncompleteLUT preconditioner:
int n = 10000;
VectorXd x(n), b(n);
SparseMatrix<double> A(n,n);
/* ... fill A and b ... */
BiCGSTAB<SparseMatrix<double>,IncompleteLUT<double>> solver;
solver.compute(A);
x = solver.solve(b);
This causes the following errors:
term does not evaluate to a function taking 1 arguments
'Eigen::SparseMatrix<double,1,int>::insertBackByOuterInnerUnordered': function does not take 1 arguments
Adding an operator()(double offset_col, double offset_row) to address the second issue, like this:
double r_offset = -0.5, c_offset = -0.5;
inline void set_r_offset(double val) { r_offset = val; }
inline void set_c_offset(double val) { c_offset = val; }
inline double get_r_offset() { return r_offset; }
inline double get_c_offset() { return c_offset; }
inline Scalar& operator()(double r, double c) {
    // double r_offset = -0.5, c_offset = -0.5;
    return Base::operator()(int(r - r_offset), int(c - c_offset));
}
This causes an invalid free:
==6035== Invalid free() / delete / delete[] / realloc()
==6035== at 0x4C30D3B: free (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==6035== by 0x4E4224A: aligned_free (Memory.h:177)
==6035== by 0x4E4224A: conditional_aligned_free<true> (Memory.h:230)
==6035== by 0x4E4224A: conditional_aligned_delete_auto<double, true> (Memory.h:416)
==6035== by 0x4E4224A: resize (DenseStorage.h:406)
==6035== by 0x4E4224A: resize (PlainObjectBase.h:293)
==6035== by 0x4E4224A: resize_if_allowed<Eigen::Matrix<double, -1, -1>, Eigen::CwiseNullaryOp<Eigen::internal::scalar_constant_op<double>, Eigen::Matrix<double, -1, -1> >, double, double> (AssignEvaluator.h:720)
==6035== by 0x4E4224A: call_dense_assignment_loop<Eigen::Matrix<double, -1, -1>, Eigen::CwiseNullaryOp<Eigen::internal::scalar_constant_op<double>, Eigen::Matrix<double, -1, -1> >, Eigen::internal::assign_op<double, double> > (AssignEvaluator.h:734)
==6035== by 0x4E4224A: run (AssignEvaluator.h:879)
==6035== by 0x4E4224A: call_assignment_no_alias<Eigen::Matrix<double, -1, -1>, Eigen::CwiseNullaryOp<Eigen::internal::scalar_constant_op<double>, Eigen::Matrix<double, -1, -1> >, Eigen::internal::assign_op<double, double> > (AssignEvaluator.h:836)
==6035== by 0x4E4224A: call_assignment<Eigen::Matrix<double, -1, -1>, Eigen::CwiseNullaryOp<Eigen::internal::scalar_constant_op<double>, Eigen::Matrix<double, -1, -1> >, Eigen::internal::assign_op<double, double> > (AssignEvaluator.h:804)
==6035== by 0x4E4224A: call_assignment<Eigen::Matrix<double, -1, -1>, Eigen::CwiseNullaryOp<Eigen::internal::scalar_constant_op<double>, Eigen::Matrix<double, -1, -1> > > (AssignEvaluator.h:782)
==6035== by 0x4E4224A: _set<Eigen::CwiseNullaryOp<Eigen::internal::scalar_constant_op<double>, Eigen::Matrix<double, -1, -1> > > (PlainObjectBase.h:710)
==6035== by 0x4E4224A: operator=<Eigen::CwiseNullaryOp<Eigen::internal::scalar_constant_op<double>, Eigen::Matrix<double, -1, -1> > > (Matrix.h:225)
==6035== by 0x11044C: main (Runner.cpp:16)
==6035== Address 0x2e642f73726573 is not stack'd, malloc'd or (recently) free'd
If the offsets are not introduced as class members but as local variables inside operator(), valgrind detects no errors.
Is it possible to implement a new MatrixXd::operator()(double, double) with settable offsets?
EDIT:
operator() is defined in the parent class DenseCoeffsBase:
EIGEN_DEVICE_FUNC
EIGEN_STRONG_INLINE CoeffReturnType operator()(Index row, Index col) const
{
    eigen_assert(row >= 0 && row < rows()
                 && col >= 0 && col < cols());
    return coeff(row, col);
}
Perhaps I see one problem with your operator: it returns a reference to a temporary Scalar object:
inline Scalar& operator()(double r, double c) {
    // double r_offset = -0.5, c_offset = -0.5;
    return Base::operator()(int(r - r_offset), int(c - c_offset));
}
So you should return Scalar by copy.
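For illustration, a minimal sketch of that suggestion as a plugin fragment (an assumption on my part: it only covers read access, since returning by value means m(r, c) can no longer be assigned to, and it does not touch the member-variable storage issue):

inline Scalar operator()(double r, double c) const {
    // return the coefficient by value instead of by reference
    return Base::operator()(Index(r - r_offset), Index(c - c_offset));
}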
Could you share the code of Base::operator()(int(r - r_offset), int(c - c_offset))?
auto linear_square = linear * linear;
auto linear_square_sum = linear_square.sum().sqrt();
If I cout the value of linear_square_sum, I can see a number. Now I want to get the scalar value of linear_square.sum().sqrt(), so I gave it the type Eigen::Tensor<T,0>, as in the following code:
auto linear_square = linear * linear;
Eigen::Tensor<T,0> linear_square_sum = linear_square.sum().sqrt();
Unfortunately, this gives an error. The actual type of linear_square.sum().sqrt() is
see here
and the error is:
external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorAssign.h:127:5:
error: static assertion failed: YOU_MADE_A_PROGRAMMING_MISTAKE
external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorAssign.h:
In instantiation of 'Eigen::TensorEvaluator,
Device>::TensorEvaluator(const XprType&, const Device&) [with
LeftArgType = Eigen::TensorFixedSize, 0, long int>; RightArgType =
const Eigen::TensorCwiseUnaryOp, const Eigen::TensorReductionOp, const
Eigen::DimensionList, const Eigen::TensorCwiseBinaryOp, const
Eigen::TensorChippingOp<0, Eigen::TensorMap, 16, Eigen::MakePointer>
, const Eigen::TensorChippingOp<0, Eigen::TensorMap, 16, Eigen::MakePointer> > >, Eigen::MakePointer> >; Device =
Eigen::DefaultDevice; Eigen::TensorEvaluator, Device>::XprType =
Eigen::TensorAssignOp, 0, long int>, const Eigen::TensorCwiseUnaryOp,
const Eigen::TensorReductionOp, const Eigen::DimensionList, const
Eigen::TensorCwiseBinaryOp, const Eigen::TensorChippingOp<0,
Eigen::TensorMap, 16, Eigen::MakePointer> >, const
Eigen::TensorChippingOp<0, Eigen::TensorMap, 16, Eigen::MakePointer> >
, Eigen::MakePointer> > >]': external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorExecutor.h:88:41:
required from 'static void Eigen::internal::TensorExecutor::run(const
Expression&, const Device&) [with Expression = const
Eigen::TensorAssignOp, 0, long int>, const Eigen::TensorCwiseUnaryOp,
const Eigen::TensorReductionOp, const Eigen::DimensionList, const
Eigen::TensorCwiseBinaryOp, const Eigen::TensorChippingOp<0,
Eigen::TensorMap, 16, Eigen::MakePointer> >, const
Eigen::TensorChippingOp<0, Eigen::TensorMap, 16, Eigen::MakePointer> >
, Eigen::MakePointer> > >; Device = Eigen::DefaultDevice; bool Vectorizable = false; bool Tileable = false]'
external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorFixedSize.h:328:65:
required from 'Eigen::TensorFixedSize::TensorFixedSize(const
Eigen::TensorBase&) [with OtherDerived = Eigen::TensorCwiseUnaryOp,
const Eigen::TensorReductionOp, const Eigen::DimensionList, const
Eigen::TensorCwiseBinaryOp, const Eigen::TensorChippingOp<0,
Eigen::TensorMap, 16, Eigen::MakePointer> >, const
Eigen::TensorChippingOp<0, Eigen::TensorMap, 16, Eigen::MakePointer> >
, Eigen::MakePointer> >; Scalar_ = Eigen::half; Dimensions_ = Eigen::Sizes<0>; int Options_ = 0; IndexType = long int]'
tensorflow/core/kernels/training_ops.cc:2871:13: required from 'void
tensorflow::SparseApplyGFtrlOp::Compute(tensorflow::OpKernelContext*)
[with Device = Eigen::ThreadPoolDevice; T = Eigen::half; Tindex = long
long int; bool has_l2_shrinkage = false]'
tensorflow/core/kernels/training_ops.cc:4684:1: required from here
external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorAssign.h:127:5:
error: static assertion failed: YOU_MADE_A_PROGRAMMING_MISTAKE
Target //tensorflow/tools/pip_package:build_pip_package failed to build
I'm trying to implement the tanh activation function in my CNN, but it doesn't work; the result is always "NaN". So I created a simple application with a randomized matrix and tried to apply the tanh(x) function to understand where the problem is.
Here's my implementation:
Eigen::MatrixXd A = Eigen::MatrixXd::Random(10,1000);
Eigen::MatrixXd result, deriv;
result = A.array().tanh();
deriv = 1.0 - result*result;
and the only result is this error:
no match for ‘operator-’ (operand types are ‘double’ and ‘const Eigen::Product<Eigen::Matrix<double, -1, -1>, Eigen::Matrix<double, -1, -1>, 0>’)
deriv = (1.0 - result*result );
~~~~^~~~~~~~~~~~~~~
Could you please help me?
The product result*result does not have the right dimensions for a matrix multiplication. We can use result*result.transpose() instead (unless a coefficient-wise multiplication is intended, in which case one could use result.array()*result.array()).
To subtract the values of the resulting matrix from a matrix full of ones, the .array() method can be used:
deriv = 1. - (result*result.transpose()).array();
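If the coefficient-wise derivative of tanh is what is actually intended (a guess based on the activation-function context), a minimal sketch following the same pattern would be:

Eigen::MatrixXd A = Eigen::MatrixXd::Random(10, 1000);
Eigen::MatrixXd result = A.array().tanh();
// 1 - tanh(x)^2, computed coefficient-wise rather than as a matrix product
Eigen::MatrixXd deriv = (1.0 - result.array().square()).matrix();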
I used OpenCV to create a matrix of ones, like this:
cv::Mat sum;
Eigen::MatrixXd SUM, Acv;
cv::eigen2cv(A, Acv);
sum = cv::Mat::ones(Acv.rows, Acv.cols, CV_32FC1);
cv::cv2eigen(sum, SUM);
so:
deriv = SUM - result*result;
and now, here's another problem :(
/usr/include/eigen3/Eigen/src/Core/CwiseBinaryOp.h:110: Eigen::CwiseBinaryOp<BinaryOp, Lhs, Rhs>::CwiseBinaryOp(const Lhs&, const Rhs&, const BinaryOp&) [with BinaryOp = Eigen::internal::scalar_difference_op<double, double>; LhsType = const Eigen::Matrix<double, -1, -1>; RhsType = const Eigen::Product<Eigen::Matrix<double, -1, -1>, Eigen::Matrix<double, -1, -1>, 0>; Eigen::CwiseBinaryOp<BinaryOp, Lhs, Rhs>::Lhs = Eigen::Matrix<double, -1, -1>; Eigen::CwiseBinaryOp<BinaryOp, Lhs, Rhs>::Rhs = Eigen::Product<Eigen::Matrix<double, -1, -1>, Eigen::Matrix<double, -1, -1>, 0>]: the assertion « aLhs.rows() == aRhs.rows() && aLhs.cols() == aRhs.cols() » failed.
I can't get this to compile:
Eigen::Map<Eigen::Matrix<const T, EA::ColsAtCompileTime, 1>> x(vec);
auto result = a_ * x - b_; // a(60r,1200c) * x(1200r,1c) - b(60r,1c)
The two errors (about 1000 lines each) eventually conclude that the * and - operators can't be "overloaded" (their term, not mine).
a_ is of this type: typedef Eigen::Map<Eigen::Matrix<double, ROWS, COLS>> EA;
b_ is of this type: typedef Eigen::Map<Eigen::Matrix<double, ROWS, 1>> EB;
T is the Ceres Solver Jet type. The errors seem to bespeak a column/row mismatch rather than a type problem. I could be wrong, though; the output is entirely too verbose. Did I misunderstand how the rows and columns work with Eigen matrix operators?
Update: I followed the "fatal-errors" suggestion:
In file included from /usr/include/eigen3/Eigen/Core:437:0,
from /usr/local/include/ceres/jet.h:165,
from /usr/local/include/ceres/internal/autodiff.h:145,
from /usr/local/include/ceres/autodiff_cost_function.h:132,
from /usr/local/include/ceres/ceres.h:37,
from /home/brannon/Workspace/Solver/music_solver.cpp:3:
/usr/include/eigen3/Eigen/src/Core/PlainObjectBase.h: In instantiation of ‘class Eigen::PlainObjectBase<Eigen::Matrix<const double, 1200, 1, 0, 1200, 1> >’:
/usr/include/eigen3/Eigen/src/Core/Matrix.h:178:7: required from ‘class Eigen::Matrix<const double, 1200, 1, 0, 1200, 1>’
/usr/include/eigen3/Eigen/src/Core/Map.h:24:32: required from ‘struct Eigen::internal::traits<Eigen::Map<Eigen::Matrix<const double, 1200, 1, 0, 1200, 1>, 0, Eigen::Stride<0, 0> > >’
/usr/include/eigen3/Eigen/src/Core/util/ForwardDeclarations.h:32:54: required from ‘struct Eigen::internal::accessors_level<Eigen::Map<Eigen::Matrix<const double, 1200, 1, 0, 1200, 1>, 0, Eigen::Stride<0, 0> > >’
/usr/include/eigen3/Eigen/src/Core/util/ForwardDeclarations.h:113:75: required from ‘class Eigen::Map<Eigen::Matrix<const double, 1200, 1, 0, 1200, 1>, 0, Eigen::Stride<0, 0> >’
/home/brannon/Workspace/Solver/music_solver.cpp:18:72: required from ‘bool MusicCostFunctor<MATRIX_A, MATRIX_B>::operator()(const T*, T*) const [with T = double; MATRIX_A = Eigen::Map<Eigen::Matrix<double, 60, 1200, 0, 60, 1200>, 0, Eigen::Stride<0, 0> >; MATRIX_B = Eigen::Map<Eigen::Matrix<double, 60, 1, 0, 60, 1>, 0, Eigen::Stride<0, 0> >]’
/usr/local/include/ceres/internal/variadic_evaluate.h:175:19: required from ‘static bool ceres::internal::VariadicEvaluate<Functor, T, N0, 0, 0, 0, 0, 0, 0, 0, 0, 0>::Call(const Functor&, const T* const*, T*) [with Functor = MusicCostFunctor<Eigen::Map<Eigen::Matrix<double, 60, 1200, 0, 60, 1200>, 0, Eigen::Stride<0, 0> >, Eigen::Map<Eigen::Matrix<double, 60, 1, 0, 60, 1>, 0, Eigen::Stride<0, 0> > >; T = double; int N0 = 1200]’
/usr/local/include/ceres/autodiff_cost_function.h:208:17: required from ‘bool ceres::AutoDiffCostFunction<CostFunctor, kNumResiduals, N0, N1, N2, N3, N4, N5, N6, N7, N8, N9>::Evaluate(const double* const*, double*, double**) const [with CostFunctor = MusicCostFunctor<Eigen::Map<Eigen::Matrix<double, 60, 1200, 0, 60, 1200>, 0, Eigen::Stride<0, 0> >, Eigen::Map<Eigen::Matrix<double, 60, 1, 0, 60, 1>, 0, Eigen::Stride<0, 0> > >; int kNumResiduals = 1; int N0 = 1200; int N1 = 0; int N2 = 0; int N3 = 0; int N4 = 0; int N5 = 0; int N6 = 0; int N7 = 0; int N8 = 0; int N9 = 0]’
/home/brannon/Workspace/Solver/music_solver.cpp:115:1: required from here
/usr/include/eigen3/Eigen/src/Core/PlainObjectBase.h:585:27: error: ‘static Eigen::PlainObjectBase<Derived>::MapType Eigen::PlainObjectBase<Derived>::Map(Eigen::PlainObjectBase<Derived>::Scalar*) [with Derived = Eigen::Matrix<const double, 1200, 1, 0, 1200, 1>; Eigen::PlainObjectBase<Derived>::MapType = Eigen::Map<Eigen::Matrix<const double, 1200, 1, 0, 1200, 1>, 0, Eigen::Stride<0, 0> >; Eigen::PlainObjectBase<Derived>::Scalar = const double]’ cannot be overloaded
static inline MapType Map(Scalar* data)
^~~
You need to tell Eigen how to mix your scalar types through Eigen::ScalarBinaryOpTraits. See similar questions with solutions here:
https://forum.kde.org/viewtopic.php?f=74&t=141467
Transform matrix of 3D positions with corresponding transformation matrix
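For illustration only, here is a rough sketch of such a specialization, using a hypothetical custom scalar type MyScalar (not from the question) to show the shape of the trait:

namespace Eigen {
// Declare that mixing double with MyScalar is allowed and that the
// result of any such binary operation is MyScalar.
template<typename BinaryOp>
struct ScalarBinaryOpTraits<double, MyScalar, BinaryOp> {
    typedef MyScalar ReturnType;
};
template<typename BinaryOp>
struct ScalarBinaryOpTraits<MyScalar, double, BinaryOp> {
    typedef MyScalar ReturnType;
};
} // namespace Eigen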
After looking again at this example:
https://groups.google.com/d/msg/ceres-solver/7ZH21XX6HWU/Z3E-k2fbAwAJ
I realized that I put the const in the wrong spot: it's supposed to be Eigen::Map<const Eigen::Matrix<T, ...>> rather than Eigen::Map<Eigen::Matrix<const T, ...>>.
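In other words, a minimal sketch based on the declaration from the question:

// const belongs to the Map, not to the Matrix scalar type:
Eigen::Map<const Eigen::Matrix<T, EA::ColsAtCompileTime, 1>> x(vec);
auto result = a_ * x - b_;  // a_(60x1200) * x(1200x1) - b_(60x1)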
I am trying to compile my code, which contains matrix multiplication, with the Intel C++ compiler. For the matrix multiplication I am using the Eigen library. This is the sample code; I am using VS2013 with the latest version of the Eigen library.
#define EIGEN_USE_MKL_ALL
#include <Eigen/Dense>
#include <iostream>  // for std::cout / std::endl
using namespace Eigen;
using namespace std;

int main()
{
    Matrix<double, 1, 200, RowMajor> y_pred;
    y_pred.setRandom(); // Eigen library function
    double learning_rate = 0.5;
    cout << learning_rate * y_pred << endl;
    return 1;
}
When I use the Intel C++ compiler, I get the following error:
1>error : more than one operator "*" matches these operands:
1> function "Eigen::operator*(const double &, const Eigen::MatrixBase<Eigen::Matrix<double, 1, 200, 1, 1, 200>> &)"
1> function "Eigen::operator*(const std::complex<double> &, const Eigen::MatrixBase<Eigen::Matrix<double, 1, 200, 1, 1, 200>> &)"
1> function "Eigen::internal::operator*(const float &, const Eigen::Matrix<std::complex<float>, -1, -1, 0, -1, -1> &)"
1> function "Eigen::internal::operator*(const float &, const Eigen::Matrix<std::complex<float>, -1, 1, 0, -1, 1> &)"
1> function "Eigen::internal::operator*(const float &, const Eigen::Matrix<std::complex<float>, 1, -1, 1, 1, -1> &)"
1> function "Eigen::internal::operator*(const float &, const Eigen::Matrix<Eigen::scomplex, -1, -1, 1, -1, -1> &)"
1> operand types are: float * Eigen::Matrix<double, 1, 200, 1, 1, 200>
1> y_pred = learning_rate * y_pred;
You can perform the scalar multiplication explicitly as an array (coefficient-wise) computation:
cout << learning_rate * y_pred.array() << endl;
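Put into a complete snippet (a sketch, assuming the rest of the program stays as in the question):

#define EIGEN_USE_MKL_ALL
#include <Eigen/Dense>
#include <iostream>
using namespace Eigen;

int main()
{
    Matrix<double, 1, 200, RowMajor> y_pred;
    y_pred.setRandom();
    double learning_rate = 0.5;
    // Multiplying the array expression sidesteps the ambiguous operator* overloads.
    std::cout << learning_rate * y_pred.array() << std::endl;
    return 0;
}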