Error while creating object from templated class - c++

I've been trying to find a way to sample random vectors from a multivariate normal distribution in C++, given both the mean vector and the covariance matrix, much like Matlab's mvnrnd function. I've found relevant code for a class that implements this on this page, but I've been having some problems compiling it. I've created a header file that is included in my main.cpp, and I'm trying to create an object of the EigenMultivariateNormal class:
MatrixXd MN(10,1);
MatrixXd CVM(10,10);
EigenMultivariateNormal <double,int> (&MN,&CVM) mvn;
The problem is I get a template-related error when compiling:
error: type/value mismatch at argument 2 in template parameter list for ‘template<class _Scalar, int _size> class EigenMultivariateNormal’
error: expected a constant of type ‘int’, got ‘int’
error: expected ‘;’ before ‘mvn’
I only have a superficial idea of how to work with templates, and I am by no means a C++ expert, so I was wondering: what exactly am I doing wrong? Apparently I should have a const somewhere in my code.

That code's a bit old. Here's a newer, possibly improved version. There are probably still some bad things. For example, I think it should be changed to use the MatrixBase instead of an actual Matrix. That might let it optimize and better decide when it needs to allocate storage space or not. This also uses the namespace internal which is probably frowned on, but it seems necessary to make use of Eigen's NullaryExpr which seems like the right thing to do. There's usage of the dreaded mutable keyword. That's necessary because of what Eigen thinks should be const when used in a NullaryExpr.
It's also a little annoying to rely on boost. It seems that in C++11 the necessary functions have become standard. Below the class code, there's a short usage sample.
The class eigenmultivariatenormal.hpp
#ifndef __EIGENMULTIVARIATENORMAL_HPP
#define __EIGENMULTIVARIATENORMAL_HPP

#include <Eigen/Dense>
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/normal_distribution.hpp>

/*
  We need a functor that can pretend it's const,
  but to be a good random number generator
  it needs mutable state. The standard Eigen function
  Random() just calls rand(), which changes a global
  variable.
*/
namespace Eigen {
namespace internal {

template<typename Scalar>
struct scalar_normal_dist_op
{
    static boost::mt19937 rng;                        // The uniform pseudo-random algorithm
    mutable boost::normal_distribution<Scalar> norm;  // The Gaussian combinator

    EIGEN_EMPTY_STRUCT_CTOR(scalar_normal_dist_op)

    template<typename Index>
    inline const Scalar operator() (Index, Index = 0) const { return norm(rng); }
};

template<typename Scalar>
boost::mt19937 scalar_normal_dist_op<Scalar>::rng;

template<typename Scalar>
struct functor_traits<scalar_normal_dist_op<Scalar> >
{ enum { Cost = 50 * NumTraits<Scalar>::MulCost, PacketAccess = false, IsRepeatable = false }; };

} // end namespace internal

/**
  Find the eigen-decomposition of the covariance matrix
  and then store it for sampling from a multivariate normal.
*/
template<typename Scalar, int Size>
class EigenMultivariateNormal
{
    Matrix<Scalar,Size,Size> _covar;
    Matrix<Scalar,Size,Size> _transform;
    Matrix<Scalar,Size,1>    _mean;
    internal::scalar_normal_dist_op<Scalar> randN; // Gaussian functor

public:
    EigenMultivariateNormal(const Matrix<Scalar,Size,1>& mean, const Matrix<Scalar,Size,Size>& covar)
    {
        setMean(mean);
        setCovar(covar);
    }

    void setMean(const Matrix<Scalar,Size,1>& mean) { _mean = mean; }

    void setCovar(const Matrix<Scalar,Size,Size>& covar)
    {
        _covar = covar;

        // Assuming that we'll be using this repeatedly,
        // compute the transformation matrix that will
        // be applied to unit-variance independent normals.
        /*
        Eigen::LDLT<Eigen::Matrix<Scalar,Size,Size> > cholSolver(_covar);
        // We can only use the Cholesky decomposition if
        // the covariance matrix is symmetric, pos-definite.
        // But a covariance matrix might be pos-semi-definite.
        // In that case, we'll go to an EigenSolver.
        if (cholSolver.info() == Eigen::Success) {
            // Use Cholesky solver
            _transform = cholSolver.matrixL();
        } else {
        */
        SelfAdjointEigenSolver<Matrix<Scalar,Size,Size> > eigenSolver(_covar);
        _transform = eigenSolver.eigenvectors()
                   * eigenSolver.eigenvalues().cwiseMax(0).cwiseSqrt().asDiagonal();
        /*}*/
    }

    /// Draw nn samples from the Gaussian and return them
    /// as columns in a Size-by-nn matrix.
    Matrix<Scalar,Size,Dynamic> samples(int nn)
    {
        return (_transform * Matrix<Scalar,Size,Dynamic>::NullaryExpr(Size, nn, randN)).colwise() + _mean;
    }
}; // end class EigenMultivariateNormal

} // end namespace Eigen
#endif
Here's a simple program that uses it:
#include <fstream>
#include "eigenmultivariatenormal.hpp"

#ifndef M_PI
#define M_PI 3.1415926535897932384626433832795029
#endif

/**
  Take a pair of un-correlated variances.
  Create a covariance matrix by correlating
  them, sandwiching them in a rotation matrix.
*/
Eigen::Matrix2d genCovar(double v0, double v1, double theta)
{
    Eigen::Matrix2d rot = Eigen::Rotation2Dd(theta).matrix();
    return rot * Eigen::DiagonalMatrix<double,2,2>(v0, v1) * rot.transpose();
}

int main()
{
    Eigen::Vector2d mean;
    Eigen::Matrix2d covar;
    mean << -1, 0.5; // Set the mean

    // Create a covariance matrix:
    // much wider than it is tall,
    // and rotated clockwise by a bit.
    covar = genCovar(3, 0.1, M_PI/5.0);

    // Create a bivariate Gaussian distribution of doubles
    // with our chosen mean and covariance.
    Eigen::EigenMultivariateNormal<double,2> normX(mean, covar);

    std::ofstream file("samples.txt");
    // Generate some samples and write them out to file
    // for plotting.
    file << normX.samples(1000).transpose() << std::endl;
    return 0;
}
The original answer included a plot of the resulting samples (not reproduced here): an elongated, rotated cloud of points matching the chosen covariance.
Using the SelfAdjointEigenSolver is probably a lot slower than a Cholesky decomposition, but it is stable even if the covariance matrix is singular. If you know that your covariance matrices will always be full rank, then you could use the Cholesky route instead. In any case, if you create the distribution rarely and sample from it often, this is probably not a big deal.
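For reference, a minimal sketch of that Cholesky route (choleskyTransform is a name made up for this example), assuming the covariance is strictly positive-definite:

#include <Eigen/Dense>
#include <cassert>

// Minimal sketch: compute the sampling transform via LLT (Cholesky).
// LLT reports failure through info() on semi-definite input, which is
// exactly why the class above falls back to SelfAdjointEigenSolver.
template<typename Scalar, int Size>
Eigen::Matrix<Scalar,Size,Size>
choleskyTransform(const Eigen::Matrix<Scalar,Size,Size>& covar)
{
    Eigen::LLT<Eigen::Matrix<Scalar,Size,Size> > cholSolver(covar);
    assert(cholSolver.info() == Eigen::Success && "covariance must be positive-definite");
    return cholSolver.matrixL();
}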

template<class _Scalar, int _size> class EigenMultivariateNormal is a class template. The first parameter, class _Scalar, asks for a type, but int _size asks for a compile-time integer constant.
You should instantiate it with an integer constant instead of the type int, as you did.
Secondly, your syntax for declaring an instance of EigenMultivariateNormal is wrong.
Try this instead:
EigenMultivariateNormal<double, 10> mvn(&MN, &CVM); // where 10 is the size
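Note that the updated class earlier in this thread takes its arguments by const reference (not pointer) and uses fixed-size matrices, so with that version the instantiation would look more like this sketch:

Eigen::Matrix<double,10,1>  MN  = Eigen::Matrix<double,10,1>::Zero();      // mean
Eigen::Matrix<double,10,10> CVM = Eigen::Matrix<double,10,10>::Identity(); // covariance
Eigen::EigenMultivariateNormal<double,10> mvn(MN, CVM);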

Related

Calculating Inverse_chi_squared_distribution using boost

I'm trying to implement a function to calculate the inverse chi-squared distribution. Boost has a class template named inverse_chi_squared_distribution; however, when I try to create an instance of the class I get this error: too few template arguments for class template 'inverse_chi_squared_distribution'.
I'm on wsl:ubuntu-18.04 and other boost functions/containers work fine.
Here's the code generating the error:
boost::math::inverse_chi_squared_distribution<double> invChi(degValue);
I'm not exactly sure how to calculate it even if this instance is created (I was going to trial-and-error until I got it), so help using this to calculate the function would be much appreciated. Thanks.
OK, do you want the inverse of a chi-squared distribution (i.e. its quantile), or do you want the "inverse chi-squared distribution", which is a distribution in its own right, also with an inverse/quantile?
If the former, then assuming v degrees of freedom and probability p, this would do it:
#include <boost/math/distributions/chi_squared.hpp>

double chi_squared_quantile(double v, double p)
{
    return quantile(boost::math::chi_squared(v), p);
}
If the latter, then example usages might be:
#include <boost/math/distributions/inverse_chi_squared.hpp>

double inverse_chi_squared_quantile(double v, double p)
{
    return quantile(boost::math::inverse_chi_squared(v), p);
}

double inverse_chi_squared_pdf(double v, double x)
{
    return pdf(boost::math::inverse_chi_squared(v), x);
}

double inverse_chi_squared_cdf(double v, double x)
{
    return cdf(boost::math::inverse_chi_squared(v), x);
}
There are other options: if you calculate with types other than double, you would use boost::math::inverse_chi_squared_distribution<MyType> in place of the convenience typedef inverse_chi_squared.
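For instance, a minimal sketch using the class template directly with float (the function name is just for illustration):

#include <boost/math/distributions/inverse_chi_squared.hpp>

// Same quantile as above, but computed in single precision via the
// class template instead of the double-based convenience typedef.
float inverse_chi_squared_quantile_f(float v, float p)
{
    boost::math::inverse_chi_squared_distribution<float> dist(v);
    return quantile(dist, p);
}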

How to use Eigen::Matrix4d as message type in C++ Actor Framework?

I'd like to use the Eigen::Matrix4d class as a message in the CAF. But I can't seem to write a good Inspector for it.
The error is as follows:
usr/local/include/caf/write_inspector.hpp:133:7: error: static assertion failed:
T is neither inspectable nor default-applicable
I've tried passing the contents of the Matrix4d element by element, and I've tried some more elaborate approaches with Boost (boost_serialization_eigen.h), but I just keep getting the same error.
#include <iostream>

#include <caf/all.hpp>
#include <Eigen/Core>
#include <Eigen/Geometry>

using namespace caf;
using namespace std;
using namespace Eigen;

CAF_BEGIN_TYPE_ID_BLOCK(custom_types, first_custom_type_id)
  CAF_ADD_TYPE_ID(custom_types, (Matrix4d))
CAF_END_TYPE_ID_BLOCK(custom_types)

template <class Inspector>
typename Inspector::result_type inspect(Inspector& f, Matrix4d& m) {
  return f(m.data());
}

void caf_main(actor_system& system) {
  Eigen::Matrix4d Trans; // Your transformation matrix
  Trans.setIdentity();   // Set to identity to make the bottom row 0,0,0,1
  Trans(0, 0) = 42;
  std::cout << Trans << endl;
  // Spawn the actor
}

// Creates a main function for us that calls our caf_main
CAF_MAIN(id_block::custom_types)
I realize this may be a broad question, but any pointers in the right direction are appreciated.
(assuming CAF 0.17)
The inspection DSL doesn't really seem to cover this particular case very well. Probably the best solution for CAF 0.17 at the moment would be specializing on data_processor and calling consume_range:
template <class Derived>
auto inspect(data_processor<Derived>& f, Matrix4d& x) {
  auto range = make_span(x.data(), x.size());
  return f.consume_range(range);
}
This won't work with CAF's automatic string conversion, but you can provide a to_string overload as needed.
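For the string side, here is a minimal sketch of such an overload, simply reusing Eigen's stream output (whether that formatting suits your logs is an open question):

#include <sstream>
#include <string>
#include <Eigen/Core>

// Sketch: a to_string overload so CAF's string conversion has something
// to call; it delegates to Eigen's operator<< formatting.
std::string to_string(const Eigen::Matrix4d& m) {
  std::ostringstream oss;
  oss << m;
  return oss.str();
}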

C++: advanced class model for matrix calculator

I'm making a matrix calculator as a project for my C++ class in college, and I'm not sure how to design the classes for it. One of the requirements is that sparse and dense matrices must be stored in different ways for memory efficiency (dense as a typical 2D array or vector, sparse in CSR format for example), but I need to handle both types through the same interface.
So far I was thinking of an abstract class 'MatrixWrapper', which would contain all the shared algorithms for adding, multiplying, GEM, and so on, and then classes 'MatrixDense' and 'MatrixSparse', which would both inherit from 'MatrixWrapper' and therefore share its interface (shown in the code below). But that's where I got stuck, because with this approach, when I tried implementing the algorithms in 'MatrixWrapper', I didn't know which of the two matrix types I'd be working with. I'm just not sure how to solve this, or even whether my approach is correct.
class MatrixWrapper {
public:
    // shared algorithms
    /* for example
    void addMatrix ( const ??? &x ) {
        ...
    }
    */
};

class MatrixDense : public MatrixWrapper {
public:
    // constructor, destructor, ...
private:
    vector< vector<double> > matrix;
};

class MatrixSparse : public MatrixWrapper {
public:
    // constructor, destructor, ...
private:
    struct CSR {
        ...
    };
    CSR matrix;
};
I was also thinking about adding a 2D array to 'MatrixWrapper' alongside an abstract method setValue(), then having 'MatrixSparse' and 'MatrixDense' fill that array through this method and letting 'MatrixWrapper' operate on the 2D array, but I'm not sure how to implement that or even whether that's the right approach.
Implement all binary operators using non-member functions. Either global functions, or functions inside an unrelated class:
// Option 1
void add(
    MatrixWrapper& result,
    const MatrixWrapper& operand1,
    const MatrixWrapper& operand2);

// Option 2
struct WrapperForMatrixOperations // I don't know why you might want this class to exist
{
    static // or maybe not static
    void add(
        MatrixWrapper& result,
        const MatrixWrapper& operand1,
        const MatrixWrapper& operand2);
};
The reason is, your algorithm will probably return a "dense" matrix when adding a dense and a sparse matrix:
dense + sparse = dense
sparse + sparse = sparse
sparse + dense = dense <- problem!
dense + dense = dense
This cannot work if it is implemented as a non-const member function, because sparse.add(dense) would have to turn the sparse object into a dense one in place.
You should also decide how you want to create your matrices - maybe each binary operation should allocate a new matrix and return it by shared_ptr?
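To make the result-type table above concrete, here is a minimal sketch of the non-member, shared_ptr-returning approach. The members rows(), cols(), get(), set() and isSparse() are hypothetical names standing in for whatever interface MatrixWrapper actually declares:

#include <memory>

std::shared_ptr<MatrixWrapper> add(const MatrixWrapper& a, const MatrixWrapper& b)
{
    // Pick the result representation: dense unless both operands are sparse.
    std::shared_ptr<MatrixWrapper> result;
    if (a.isSparse() && b.isSparse())
        result = std::make_shared<MatrixSparse>(a.rows(), a.cols());
    else
        result = std::make_shared<MatrixDense>(a.rows(), a.cols());

    // Naive element-wise loop for clarity; a real implementation would
    // iterate only the stored entries of sparse operands.
    for (int i = 0; i < a.rows(); ++i)
        for (int j = 0; j < a.cols(); ++j)
            result->set(i, j, a.get(i, j) + b.get(i, j));
    return result;
}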

How to use random number in user defined tensorflow op?

I am writing an op in C++ which needs random numbers in its Compute function, but it seems I should not use the C++ random library directly, since it cannot be controlled by tf.set_random_seed.
My current code is something like the following; what should I do in the function some_interesting_random_function?
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/framework/common_shape_fns.h"
#include <iostream>
#include <typeinfo>
#include <random>
using namespace tensorflow;
REGISTER_OP("MyRandom")
.Output("random: int32")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
c->set_output(0, c->Scalar());
return Status::OK();
});
int some_interesting_random_function(){
return 10;
}
class MyRandomOp : public OpKernel {
public:
explicit MyRandomOp(OpKernelConstruction* context) : OpKernel(context) {}
void Compute(OpKernelContext* context) override {
Tensor* res;
TensorShape shape;
int dims[] = {};
TensorShapeUtils::MakeShape(dims, 0, &shape);
OP_REQUIRES_OK(context, context->allocate_output(0, shape,
&res));
auto out1 = res->flat<int32>();
out1(0) = some_interesting_random_function();
}
};
REGISTER_KERNEL_BUILDER(Name("MyRandom").Device(DEVICE_CPU), MyRandomOp);
The core of all random number generation in TensorFlow is PhiloxRandom, generally accessed through its wrapper GuardedPhiloxRandom. As explained in tf.set_random_seed, there are graph-level and op-level seeds, both of which may or may not be set. If you want to have this in your op too, you need to do a couple of things.

First, your op should be declared with two optional attributes, seed and seed2; see the existing ops in random_ops.cc. Then, in Python, you have some user API wrapping your op that creates these two values using tensorflow.python.framework.random_seed (imported with from tensorflow.python.framework import random_seed): call seed1, seed2 = random_seed.get_seed(seed), which correctly creates the two seed values from the graph's seed and an optional seed parameter to the function (see random_ops.py). These seed1 and seed2 values are then passed as the seed and seed2 attributes to your op. If you do all that, GuardedPhiloxRandom will take care of properly initializing the random number generator using the right seeds.
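For illustration, the op declaration with the seed attributes might look roughly like this sketch (following the pattern of the ops in random_ops.cc; the op name and dtype list are placeholders):

#include "tensorflow/core/framework/common_shape_fns.h"
#include "tensorflow/core/framework/op.h"

// Sketch: the two optional seed attributes default to 0, meaning "not set".
REGISTER_OP("MyRandomUniform")
    .Input("shape: T")
    .Output("output: dtype")
    .Attr("seed: int = 0")
    .Attr("seed2: int = 0")
    .Attr("dtype: {float} = DT_FLOAT")
    .Attr("T: {int32, int64} = DT_INT32")
    .SetShapeFn(tensorflow::shape_inference::RandomShape);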
Now, to the kernel implementation. In addition to the things mentioned above, you will need to combine two things: the struct template FillPhiloxRandom, declared in core/kernels/random_op.h, which will help you fill a tensor with random data; and a Distribution, which is just an object that can be called with a random number generator to produce a value (see existing implementations in core/lib/random/random_distributions.h).

From there it is mostly a matter of looking at how it is done in core/kernels/random_op.cc and copying the bits you need. Most kernels in there are based on PhiloxRandomOp (which is not publicly declared, but you can copy or adapt). This essentially holds a random number generator, allocates space in the output tensor (it assumes the first input is the desired shape) and calls FillPhiloxRandom to do the work. If this is the kind of op you are trying to create (generate some data according to some distribution), then you are all set! Your code could look something like this:
// Required for thread pool device
#define EIGEN_USE_THREADS

#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/framework/register_types.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/framework/tensor_shape.h"
#include "tensorflow/core/kernels/random_op.h"
#include "tensorflow/core/util/guarded_philox_random.h"

// Helper function to convert a 32-bit integer to a float between [0..1).
// Copied from core/lib/random/random_distributions.h
PHILOX_DEVICE_INLINE float Uint32ToFloat(uint32 x) {
  // IEEE754 floats are formatted as follows (MSB first):
  //   sign(1) exponent(8) mantissa(23)
  // Conceptually construct the following:
  //   sign == 0
  //   exponent == 127 -- an excess-127 representation of a zero exponent
  //   mantissa == 23 random bits
  const uint32 man = x & 0x7fffffu;  // 23 bit mantissa
  const uint32 exp = static_cast<uint32>(127);
  const uint32 val = (exp << 23) | man;
  // Assumes that endian-ness is the same for float and uint32.
  float result;
  memcpy(&result, &val, sizeof(val));
  return result - 1.0f;
}

// Template class for your custom distribution
template <class Generator, typename RealType>
class MyDistribution;

// Implementation for tf.float32
template <class Generator>
class MyDistribution<Generator, float> {
 public:
  // The number of elements that will be returned (see below).
  static const int kResultElementCount = Generator::kResultElementCount;
  // Cost of generation of a single element (in cycles) (see below).
  static const int kElementCost = 3;
  // Indicate that this distribution may take a variable number of samples
  // during the runtime (see below).
  static const bool kVariableSamplesPerOutput = false;
  typedef Array<float, kResultElementCount> ResultType;
  typedef float ResultElementType;

  PHILOX_DEVICE_INLINE
  ResultType operator()(Generator* gen) {
    typename Generator::ResultType sample = (*gen)();
    ResultType result;
    for (int i = 0; i < kResultElementCount; ++i) {
      float r = Uint32ToFloat(sample[i]);
      // Example distribution logic: produce 1 or 0 with 50% probability
      result[i] = 1.0f * (r < 0.5f);
    }
    return result;
  }
};

// Could add implementations for other data types...

// Base kernel
// Copied from core/kernels/random_op.cc
static Status AllocateOutputWithShape(OpKernelContext* ctx, const Tensor& shape,
                                      int index, Tensor** output) {
  TensorShape tensor_shape;
  TF_RETURN_IF_ERROR(ctx->op_kernel().MakeShape(shape, &tensor_shape));
  return ctx->allocate_output(index, tensor_shape, output);
}

template <typename Device, class Distribution>
class PhiloxRandomOp : public OpKernel {
 public:
  typedef typename Distribution::ResultElementType T;

  explicit PhiloxRandomOp(OpKernelConstruction* ctx) : OpKernel(ctx) {
    OP_REQUIRES_OK(ctx, generator_.Init(ctx));
  }

  void Compute(OpKernelContext* ctx) override {
    const Tensor& shape = ctx->input(0);
    Tensor* output;
    OP_REQUIRES_OK(ctx, AllocateOutputWithShape(ctx, shape, 0, &output));
    auto output_flat = output->flat<T>();
    tensorflow::functor::FillPhiloxRandom<Device, Distribution>()(
        ctx, ctx->eigen_device<Device>(),
        // Multiplier 256 is the same as in FillPhiloxRandomTask; do not change
        // it just here.
        generator_.ReserveRandomOutputs(output_flat.size(), 256),
        output_flat.data(), output_flat.size(), Distribution());
  }

 private:
  GuardedPhiloxRandom generator_;
};

// Register kernel
typedef Eigen::ThreadPoolDevice CPUDevice;
template struct functor::FillPhiloxRandom<
    CPUDevice, MyDistribution<tensorflow::random::PhiloxRandom, float>>;
REGISTER_KERNEL_BUILDER(
    Name("MyDistribution")
        .Device(DEVICE_CPU)
        .HostMemory("shape")
        .TypeConstraint<float>("dtype"),
    PhiloxRandomOp<CPUDevice, MyDistribution<tensorflow::random::PhiloxRandom, float>>);
// Register kernels for more types, can use macros as in core/kernels/random_op.cc...
There are a few extra bits and pieces here. First you need to understand that PhiloxRandom generally produces four unsigned 32-bit integers on each step, and you have to make your random values from these. Uint32ToFloat is a helper to get a float between zero and one from one of these numbers.

There are a few constants in there too. kResultElementCount is the number of values your distribution produces on each step. If you produce one value per random number from the generator, you can set it to Generator::kResultElementCount, like here (which is 4). However, if for example you want to produce double values (that is, tf.float64), you may want to use two 32-bit integers per value, so perhaps you would produce Generator::kResultElementCount / 2 in that case. kElementCost is supposed to indicate how many cycles it takes your distribution to produce an element. I do not know how this is measured by the TensorFlow team, but it is just a hint to distribute the generation work among tasks (used by FillPhiloxRandom), so you can just guess something or copy it from a similarly expensive distribution. kVariableSamplesPerOutput determines whether each call to your distribution may produce a different number of outputs; again, when this is false (which should be the common case), FillPhiloxRandom will make value generation more efficient. PHILOX_DEVICE_INLINE (defined in core/lib/random/philox_random.h) is a compiler hint to inline the function.

You can then add additional implementations and kernel registrations for other data types and, if you are supporting it, for DEVICE_GPU GPUDevice (with typedef Eigen::GpuDevice GPUDevice) or even DEVICE_SYCL (with typedef Eigen::SyclDevice SYCLDevice). And about that, EIGEN_USE_THREADS is just to enable the thread-pool execution device in Eigen, making the CPU implementation multi-threaded.
If your use case is different, though (for example, you want to generate some random numbers and do some other computation in addition to that), FillPhiloxRandom may not be useful to you (or it may be, but then you also need to do something else). Having a look at core/kernels/random_op.cc and the headers of the different classes should help you figure out how to use them for your problem.

OpenCV Matrix of user-defined type

Is there a way to have a matrix of user-defined type in OpenCV 2.x? Something like :
cv::Mat_<KalmanRGBPixel> backgroundModel;
I know cv::Mat_<> is meant for images and mathematics, but I want to hold data in matrix form. I don't plan to use inverse, transpose, multiplication, etc.; it's only to store data. I want it in matrix form because pixel_ij of each frame of a video will be linked to backgroundModel_ij.
I know there is a DataType<_Tp> class in core.hpp that needs to be defined for my type but I'm not sure how to do it.
EDIT : KalmanRGBPixel is only a wrapper for cv::KalmanFilter class. As for now, it's the only member.
class KalmanRGBPixel {
    ... some functions ...
private:
    cv::KalmanFilter kalman;
};
Thanks for your help.
I have a more long-winded answer for anybody wanting to create a matrix of custom objects, of whatever size.
You will need to specialize the DataType template, but instead of having 1 channel, you make the number of channels equal to the size of your custom object. You may also need to override a few functions to get the expected functionality, but more on that later.
First, here is an example of my custom type template specialization:
typedef HOGFilter::Sample Sample;

namespace cv {
template<> class DataType<Sample>
{
public:
    typedef HOGFilter::Sample value_type;
    typedef HOGFilter::Sample channel_type;
    typedef HOGFilter::Sample work_type;
    typedef HOGFilter::Sample vec_type;

    enum {
        depth = CV_8U,
        channels = sizeof(HOGFilter::Sample),
        type = CV_MAKETYPE(depth, channels),
    };
};
}
Second, you may want to override some functions to get the expected functionality:
// Special version of Mat: a matrix of Samples. Uses the power of OpenCV's
// matrix manipulation and multi-threading capabilities.
class SampleMat : public cv::Mat_<Sample>
{
    typedef cv::Mat_<Sample> super;
public:
    SampleMat(int width = 0, int height = 0);
    SampleMat &operator=(const SampleMat &mat);
    const Sample& at(int x, int y = 0);
};
The typedef of super isn't required but helps with readability in the cpp.
Notice I have overridden the constructor with width/height parameters. This is because we have to instantiate the mat this way if we want a 2D matrix.
SampleMat::SampleMat(int width, int height)
{
    int count = width * height;
    for (int i = 0; i < count; ++i)
    {
        HOGFilter::Sample sample;
        this->push_back(sample);
    }
    // Reshape the flat Mat_ we just filled into a width-by-height 2D matrix.
    *static_cast<super*>(this) = super::reshape(channels(), height);
}
The at<_T>() override is just for cleaner code:
const Sample & SampleMat::at(int x, int y)
{
    if (y == 0)
        return super::at<Sample>(x);
    return super::at<Sample>(cv::Point(x, y));
}
In the OpenCV documentation it is explained how to add custom types to OpenCV matrices. You need to define the corresponding cv::DataType.
https://docs.opencv.org/master/d0/d3a/classcv_1_1DataType.html
The DataType class is basically used to provide a description of such primitive data types without adding any fields or methods to the corresponding classes (and it is actually impossible to add anything to primitive C/C++ data types). This technique is known in C++ as class traits. It is not DataType itself that is used but its specialized versions […] The main purpose of this class is to convert compilation-time type information to an OpenCV-compatible data type identifier […]
(Yes, finally I answer the question itself in this thread!)
If you don't want to use the OpenCV functionality, then Mat is not the right type for you.
Use std::vector<std::vector<Type> > instead. You can give the size during initialization:
std::vector<std::vector<Type> > matrix(42, std::vector<Type>(23));
Then you can access with []-operator. No need to screw around with obscure cv::Mats here.
If you would really need to go for an OpenCV-Matrix, you are right in that you have to define the DataType. It is basically a bunch of traits. You can read about C++ Traits on the web.
You can create a cv::Mat that uses your own allocated memory by passing its address to the constructor. If you also want the width and height to be correct, you will need to find an OpenCV pixel type that has the same number of bytes as your element type.
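For illustration, a minimal sketch of wrapping externally allocated memory; the buffer shape and the CV_64FC2 type choice are just assumptions for the example:

#include <opencv2/core/core.hpp>
#include <vector>

int main()
{
    // Our own storage: 42x23 elements, two doubles (16 bytes) per element.
    std::vector<double> buffer(42 * 23 * 2);

    // Wrap it without copying. cv::Mat does not take ownership here,
    // so the buffer must outlive the Mat.
    cv::Mat view(42, 23, CV_64FC2, buffer.data());
    return 0;
}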