To template or not to template? (Image class) - C++

I have the following use case.
I need to create an Image class. An image is defined by:
the number of pixels (width * height),
the pixel type (char, short, float, double)
the number of channels (single channel, 3 channels (RGB), 4 channels (RGBA))
All combinations of the above types shall be possible.
Furthermore,
I have some algorithms that operate over those images. These algorithms use templates for the pixel type.
I need to interface with totally generic file formats (e.g. TIFF). In these file formats, the pixel data is saved as a binary stream.
My question is the following: should I use a templated Image class, or a generic interface? Example:
// 'Generic' Image interface
class Image {
...
protected:
// Totally generic data container
uint8_t* data;
};
// Template Image interface
template <typename PixelType>
class Image {
...
protected:
// Template data container
PixelType* data;
};
Using Template Image Class
My problem now is that, if I use the templated Image class, my file input/output will be messy: when I open an image file, I don't know a priori what the image type will be, so I don't know what template type to return.
This would probably be the optimal solution, if I could figure out a way of creating a generic function that would read an Image from a file and return a generic object, something similar to
ImageType load(const char* filename);
but since ImageType would have to be a template, I don't know how and if I could do this.
Using Generic Image Class
However, if I use a generic Image class, all my algorithms will need a wrapper function with an if/switch statement like:
Image applyAlgorithmWrapper(const Image& source, Arguments args) {
if (source.channels() == 1) {
if (source.type() == IMAGE_TYPE_UCHAR) {
return FilterFunction<unsigned char>(source, args);
}
else if (source.type() == IMAGE_TYPE_FLOAT) {
return FilterFunction<float>(source, args);
} else if ...
} else if (source.channels() == 3) {
if (source.type() == IMAGE_TYPE_UCHAR) {
return FilterFunction<Vec3b>(source, args);
}
...
}
}
(NOTE: Vec3b is a generic 3-byte structure like
struct Vec3b {
char r, g, b;
};
)

In my opinion a templated class is the preferred solution.
It will offer you all the advantages of templates which basically mean that your codebase would be cleaner and simpler to understand and maintain.
What you say is a problem when using a templated class is not much of a problem. When users want to read an image, they should know the data type in which they would like to receive the output of the image file. Hence, a user would do it like this:
Image<float> img;
LoadFromTIFF(img, <filename>);
This is very similar to the way it is done in libraries such as ITK. In the module you will write to read from TIFF files, you will perform this type conversion to ensure that you return the type that has been declared by the user.
When manually creating an image, the user should do something like :
Image<float> img;
img.SetSize(<width>, <height>);
img.SetChannels(<enum_channel_type>);
It is all much simpler in the long run than having a non-templated class.
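A minimal sketch of what such a templated interface could look like (the names and details below are illustrative assumptions, not ITK's actual API):

#include <cstddef>
#include <vector>

template <typename PixelType>
class Image {
public:
    void SetSize(std::size_t width, std::size_t height) {
        width_ = width;
        height_ = height;
        data_.resize(width_ * height_ * channels_);
    }
    void SetChannels(std::size_t channels) {
        channels_ = channels;
        data_.resize(width_ * height_ * channels_);
    }
    PixelType* Data() { return data_.data(); }
private:
    std::size_t width_ = 0, height_ = 0, channels_ = 1;
    std::vector<PixelType> data_; // pixel type is fixed at compile time
};

// The TIFF reader converts whatever sample format the file stores into PixelType.
template <typename PixelType>
bool LoadFromTIFF(Image<PixelType>& img, const char* filename); // definition elided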
You could take a look at the source code of ITK to get an idea of how this can be implemented in the most generic sense, as ITK is a highly templated library.
EDIT (Addendum)
If you do not want the user to have a priori control over the image data type, you should consider using the SMinSampleValue and SMaxSampleValue tags in the TIFF header. These tags are present in any modern TIFF file (version 6.0). They are intended to have a TYPE that matches the sample data type in the TIFF file. That, I believe, would solve your problem.

To make the right decision (based on facts rather than opinion) about template versus non-template, my strategy is to measure and compare for both solutions (templates and non-templates). I like to measure the following indicators:
number of lines of code
performance
compilation time
as well as other more subjective measures such as:
ease of maintenance
how long it takes a newcomer to understand the code
I developed a quite large piece of software [1], and based on these measures, my image class is not a template. I know of another imaging library that offers both options [2] (but I do not know what mechanisms they use for that, or whether the code remains very legible). I also had some algorithms operating on points of various dimensions (2D, 3D, ..., nD), and for those, making the algorithm a template resulted in a performance gain that made it worth it.
In short, to make the right decision, have clear criteria, clear way of measuring them, and try both options on a toy example.
[1] http://alice.loria.fr/software/graphite/doc/html/
[2] http://opencv.org/

Templates. And a variant. And an 'interface helper', if you don't yet have C++14. Let me explain.
Whenever you have a limited set of specializations for a given operation, you can model them as classes satisfying an interface or concept. If these can be expressed as one template class, then do so. It helps your users when they only want a given specialization and all you need is a factory when you read from untyped source (e.g. file). Note that you need a factory anyway, it's just that the return type is well-defined normally. And this is where we come to...
Variants. Whenever you don't know your return type, but you know at compile time the set of possible return types, use a variant. Typedef your variant so it 'looks like' a base class (note that there is no inheritance or virtual functions involved), then use a visitor. A particularly easy way to write a visitor in C++14 is a generic lambda that captures everything by reference. In essence, from that point in your code, you have the specific type. Therefore, take the specific/templated classes as function arguments.
Now, a boost::variant<> (or std::variant<> if you have it) cannot have member functions. Either you resort to 'C-API style' generic functions (that possibly just delegate to the member functions) and symmetric operators; or you have a helper class that's created from your variant type. If your code review (CR) allows it, you might descend from the variant - note that some consider this terrible style, while others accept it as the library writer's intention (because, had the writers wanted to forbid inheritance, they would have made the class final).
Code sketch, do not try to compile:
enum PixelFormatEnum { eUChar, eVec3d, eDouble };
template<PixelFormatEnum>
struct PixelFormat;
template<>
struct PixelFormat<eUChar>
{
typedef unsigned char type;
};
// ...
template<PixelFormatEnum pf>
using PixelFormat_t = typename PixelFormat<pf>::type;
template<PixelFormatEnum pf>
struct Image
{
std::vector<std::vector<PixelFormat_t<pf> > > pixels; // or anything like that
// ...
};
typedef boost::variant< Image<eUChar>, Image<eVec3d>, Image<eDouble> > ImageVariant;
template<typename F>
struct WithImageV : boost::static_visitor<void>
{
// you could do this better, e.g. with compose(f, bsv<void>), but...
F f_;
template<PixelFormatEnum e>
void operator()(const Image<e>& img)
{
f_(img);
}
};
template<typename F>
void WithImage(const ImageVariant& imgv, F&& f)
{
WithImageV<F> v{f};
boost::apply_visitor(v, imgv);
}
std::experimental::optional<ImageVariant> ImageFactory(std::istream& is)
{
switch (read_pixel_format(is))
{
case eUChar: return Image<eUChar>(is);
// ...
default: return std::experimental::nullopt;
}
}
struct MyFavoritePixelOp : public boost::static_visitor<int>
{
template<PixelFormatEnum e>
int operator()(PixelFormat_t<e> pixel) { return pixel; }
// overload rather than an (illegal) in-class explicit specialization:
int operator()(PixelFormat_t<eVec3d> pixel) { return pixel.r + pixel.g + pixel.b; }
};
int f_for_variant(const ImageVariant& imgv)
{
// this is slooooow. Use it only if you have to, e.g., for loading.
// Move the apply_visitor out of the loop whenever you can (here you could).
int sum = 0;
for (auto&& row : imgv.pixels)
for (auto&& pixel : row)
sum += boost::apply_visitor(MyFavoritePixelOp(), pixel);
return sum;
}
template<PixelFormatEnum e>
int f_for_type(const Image<e>& img)
{
// this is faster
int sum = 0;
for (auto&& row : img.pixels)
for (auto&& pixel : row)
sum += MyFavoritePixelOp()(pixel);
return sum;
}
int main() {
// ...
if (auto imgvOpt = ImageFactory(is))
{
// 1 - variant
int res = f_for_variant(*imgvOpt);
std::cout << res;
// 2 - template
WithImage(*imgvOpt, [&](auto&& img) {
int res2 = f_for_type(img);
std::cout << res2;
});
}
}

Related

Not being able to store a variant in a pointer in c++

My groupmate and I are doing a school assignment and we are having some trouble trying to save a value.
I am 100% aware that some of the things done in the code are not the normal way of doing them, and that there are better ways, but part of the assignment is to use the concepts taught in class.
Problem:
I have a Train class which I would like to assign to a platform, such that I know if there is a train on the platform.
template<typename T>
class Train
{
public:
//Some constructors, getters and other stuff
private:
std::string reg_nr_;
};
Because Train is a template, there are different template argument classes, so it is possible to give it a type:
struct IC3{};
struct IC4{};
A Train is instantiated in main in the following way, which is how it is intended to be used:
Train<IC3> tester_train("testing_train");
Train<IC4> some_other_tester_train("some_other_tester_train");
Platform pl;
pl.train_arriving(tester_train);
pl.train_leaving(tester_train);
pl.train_arriving(some_other_tester_train);
class Platform
{
public:
using Trains = std::variant<TrainWrapper<Train<Arriva>>, TrainWrapper<Train<IC3>>, TrainWrapper<Train<IC4>>>;
template<typename U>
void train_arriving(U& t)
{
train_ = TrainWrapper<U>{ t };
}
void train_leaving()
{
train_ = ???????; //Should be set to nothing
}
private:
Trains train_;
};
template<typename T>
struct TrainWrapper
{
T& t;
};
Now for the real part of the problem.
I have a Train template class which takes a type argument and turns Train into different, incompatible types. This presents the problem of passing these different train types into the platform and saving them.
To solve the problem of passing in different types, a std::variant has been used. This presented a new problem of not being able to retrieve the data easily, which is why TrainWrapper is used, so that the Train is stored in a common type.
Tinkering around with the variant (and finally asking our teacher), my groupmate and I arrived at the Trains expression seen in Platform. We know for a fact (using a local variable) that we are able to save a TrainWrapper in a Trains object. Our problem, however, is that we would like to be able to change the Train on the Platform. We therefore thought a pointer would be the way forward, but doing this we ran into conversion errors like: Error C2440 '=': cannot convert from 'TrainWrapper<Train<Arriva>>' to 'Platform::Trains *'. We have tried a bunch of different pointer variations, but it is always the same conversion error.
So our question more or less is: How are we able to save Train on the platform so we are able to remove it again?
You might use std::monostate to signal empty:
class Platform
{
public:
using Trains = std::variant<std::monostate,
TrainWrapper<Train<Arriva>>,
TrainWrapper<Train<IC3>>,
TrainWrapper<Train<IC4>>>;
template<typename U>
void train_arriving(U& t)
{
train_ = TrainWrapper<U>{ t };
}
template<typename U>
void train_leaving(U& )
{
train_ = std::monostate{};
}
private:
Trains train_;
};
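For completeness, a small free helper (my addition, not part of the original answer) shows how the "is there a train on the platform?" question is answered with std::holds_alternative:

#include <variant>

// True when the Trains variant currently holds a TrainWrapper;
// std::monostate marks the "no train" state.
template <typename TrainsVariant>
bool platform_occupied(const TrainsVariant& v)
{
    return !std::holds_alternative<std::monostate>(v);
}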

auto type deduction coercion for templated class?

I have 2 issues in a template class I'm building. I've included example code below. The first question is whether I can coerce the auto type deduced for a templated class, i.e.:
auto p = myvar;
where myvar is T<...>, could I force auto to detect Q<...>? This is simplified; read on for a clearer explanation.
Edited for clarity: Let me explain what I'm doing. And I'd also like to indicate that this style code is working on a large-scale project perfectly well. I am trying to add some features and functions and in addition to smooth out some of the more awkward behaviors.
The code uses templates to perform work on n-dimensional arrays. The template has a top-level class, and a storage class underneath. Passing the storage class into the top level class allows for a top level class which inherits the storage class. So I start with NDimVar, and I have NDimStor. I end up with
NDimVar<NDimStor>
The class contains NO DATA except for the buffer of data:
template <size_t... dimensions>
class NDimStor {
int buffer[Size<dimensions...>()];
};
This makes the address of the class == the address of the buffer. This is key to the whole implementation. Is this an incorrect assumption? (I can see this works on my system without any issues, but perhaps this isn't always the case.)
When I create NDimVar<NDimStor<10,10>> I end up with a 10x10 array.
I have functions for getting pieces of the array, for example:
NDimVar<NDimStor<dimensions...>>::RemoveDim & get(int index);
This creates a new 1d array of 10 elements out of the 2d 10x10 array:
NDimVar<NdimStor<10>>
In order to return this as a reference, I use a reinterpret_cast at the location of the data I want. So in this example, get(3) would perform:
return reinterpret_cast<NDimVar<NDimStor<dimensions...>>::RemoveDim&>(buffer[index * DimensionSumBelow<0>()]);
DimensionSumBelow<0> returns the sum of elements at dimensions 1+, i.e. 10. So &buffer[30] is the address of the referenced 1d NDimVar.
All of this works very well.
The only issue I have is that I would like to add on overlays. For example, be able to return a reference to a new class:
NDimVar<NDimPermute<NDimStor<10,10>,1,0>>
that points to the same original location along with a permutation behavior (swapping dimensions). This also works well. But I would like for:
auto p = myvar.Permute<1,0>()
to create a new copy of myvar with permuted data. This would work if I said:
NDimVar<NDimStor<10,10>> p = myvar.Permute<1,0>().
I feel that there is some auto type deduction stuff I could do in order to coerce the auto type returned, but I'm not sure. I haven't been able to figure it out.
Thanks again,
Nachum
What I want is:
1. Create temporary overlay classes on my storage, e.g. A_top<A_storage> can return a type called A_top<A_overlay<A_storage>> without creating a new object; it just returns a reference to this type. This changes the way the storage is accessed. The problem arises upon a call to auto. I don't want this type to be instantiated directly. Can I modify the return to auto so that it is an original A_top?
#include <iostream>
using namespace std;
class A_storage {
public:
float arr[10];
A_storage () {
}
float & el (int index) {
return arr[index];
}
};
template <typename T> class A_overlay : T {
private:
A_overlay () {
cout << "A_overlay ()" << endl;
}
A_overlay (const A_overlay &) {
cout << "A_overlay (&)" << endl;
}
public:
using T::arr;
float & el (int index) {
return arr[10 - index];
}
};
template <typename T> class A_top;
template <typename T> class A_top : public T {
public:
A_top () {
}
A_top<A_overlay<A_storage>> & get () {
return reinterpret_cast<A_top<A_overlay<A_storage>>&>(*this);
}
};
using A = A_top<A_storage>;
int main (void) {
A a;
auto c = a.get(); // illegal - can i auto type deduce to A_top<A_storage>?
return 0;
}
If a function accepts (A_top<A_storage> &) as a parameter, how can I create a conversion function that can cast A_top<A_overlay<A_storage>>& to A_top<A_storage>& ?
Thanks,
Nachum
First, your design doesn't look right to me, and I'm not sure if the behaviour is actually well-defined or not. (Probably not.)
In any case, the problem is not with auto. The error is caused by the fact that the copy constructor of A_overlay is private, while you need it to copy A_top<A_overlay<A_storage>> returned by a.get() to auto c.
(Note that the auto in this case obviously gets deduced to A_top<A_overlay<A_storage>>, I assume you made a typo when said that it's A_top<A_storage>.)
Also note that A_storage in A_top::get() should be replaced with T, even if it doesn't change anything in your snippet because you only have T == A_storage.
If a function accepts (A_top<A_storage> &) as a parameter, how can I create a conversion function that can cast A_top<A_overlay<A_storage>>& to A_top<A_storage>& ?
Ehm, isn't it just this:
return reinterpret_cast<A_top<A_storage>&>(obj);
reinterpret_cast should almost never be used. It essentially removes any compiler validation that the types are related, and casting between unrelated types is essentially undefined behavior, as it assumes that derived classes are always at offset 0...
It does not make any sense to write such code. It is not maintainable, and it is hard to understand what you are trying to achieve. It looks like you want to pretend that your A_top<A_storage> object is an A_top<A_overlay<A_storage>> object instead. If this is what you want to do, then declare the A alias as that type.
In your code, it looks like you want to invert the indexing so that the item at position 10 is returned when you ask for the item at position 0, and vice versa. Do you really think that is obvious from your obfuscated code? Never write such bad code.
Something like
class A_overlay {
public:
float & el (int index) { return arr.el(10 - index); }
private:
A_storage arr;
};
would make much more sense than your current code.
No cast needed.
Easy to understand.
Well defined behavior.
You might keep your job.
And obviously, you would update the following line as appropriate:
using A = A_top<A_storage>;
Also, if A_top has no useful purpose, then why not use A_overlay directly? And why are you using a template if A_storage is not a template? Do you really want to reuse such a mess elsewhere in your code base?
Obviously, your inheritance does not respect the IS-A relationship if you write such code. So it is clearly a bad design!

What is the "correct OOP" way to deal with a storage pool of items of mixed types?

This was inspired by a comment to my other question here:
How do you "not repeat yourself" when giving a class an accessible "name" in C++?
nvoight: "RTTI is bad because it's a hint you are not doing good OOP. Doing your own homebrew RTTI does not make it better OOP, it just means you are reinventing the wheel on top of bad OOP."
So what is the "good OOP" solution here? The problem is this. The program is in C++, so there are also C++ specific details mentioned below. I have a "component" class (actually, a struct), which is subclassed into a number of different derived classes containing different kinds of component data. It's part of an "entity component system" design for a game. I'm wondering about the storage of the components. In particular, the current storage system has:
a "component manager" which stores an array, actually a hash map, of a single type of component. The hash map allows for lookup of a component by the entity ID of the entity it belongs to. This component manager is a template which inherits from a base, and the template parameter is the type of component to manage.
a full storage pack which is a collection of these component managers, implemented as an array of pointers to the component manager base class. This has methods to insert and extract an entity (on insertion, the components are taken out and put into the managers, on removal, they are extracted and collected into a new entity object), as well as ones to add new component managers, so if we want to add a new component type to the game, all we have to do is put another command to insert a component manager for it.
It's the full storage pack that prompted this. In particular, it offers no way of accessing a particular type of component. All the components are stored as base class pointers with no type information. What I thought of was using some kind of RTTI and storing the component managers in a map which maps type names and thus allows for lookup and then the proper downcasting of the base class pointer to the appropriate derived class (the user would call a template member on the entity storage pool to do this).
But if this RTTI means bad OOP, what would be the correct way to design this system so no RTTI is required?
Disclaimer/resources: my BCS thesis was about the design and implementation of a C++14 library for compile-time Entity-Component-System pattern generation. You can find the library here on GitHub.
This answer is meant to give you a broad overview of some techniques/ideas you can apply to implement the Entity-Component-System pattern depending on whether or not component/system types are known at compile-time.
If you want to see implementation details, I suggest you to check out my library (linked above) for an entirely compile-time based approach. diana is a very nice C library that can give you an idea of a run-time based approach.
You have several approaches, depending on the scope/scale of your project and on the nature of your entities/components/systems.
All component types and system types are known at compile-time.
This is the case analyzed in my BCS thesis - what you can do is use advanced metaprogramming techniques (e.g. using Boost.Hana) to put all component types and system types in compile-time lists and create data structures that link everything together at compile time. Pseudocode example:
namespace c
{
struct position { vec2f _v; };
struct velocity { vec2f _v; };
struct acceleration { vec2f _v; };
struct render { sprite _s; };
}
constexpr auto component_types = type_list
{
component_type<c::position>,
component_type<c::velocity>,
component_type<c::acceleration>,
component_type<c::render>
};
After defining your components, you can define your systems and tell them "what components to use":
namespace s
{
struct movement
{
template <typename TData>
void process(TData& data, float ft)
{
data.for_entities([&](auto eid)
{
auto& p = data.get(eid, component_type<c::position>)._v;
auto& v = data.get(eid, component_type<c::velocity>)._v;
auto& a = data.get(eid, component_type<c::acceleration>)._v;
v += a * ft;
p += v * ft;
});
}
};
struct render
{
template <typename TData>
void process(TData& data)
{
data.for_entities([&](auto eid)
{
auto& p = data.get(eid, component_type<c::position>)._v;
auto& s = data.get(eid, component_type<c::render>)._s;
s.set_position(p);
some_context::draw(s);
});
}
};
}
constexpr auto system_types = type_list
{
system_type<s::movement,
uses
(
component_type<c::position>,
component_type<c::velocity>,
component_type<c::acceleration>
)>,
system_type<s::render,
uses
(
component_type<c::render>
)>
};
All that's left is using some sort of context object and lambda overloading to visit the systems and call their processing methods:
ctx.visit_systems(
[ft](auto& data, s::movement& s)
{
s.process(data, ft);
},
[](auto& data, s::render& s)
{
s.process(data);
});
You can use all the compile-time knowledge to generate appropriate data structures for components and systems inside the context object.
This is the approach I used in my thesis and library - I talked about it at C++Now 2016: "Implementation of a multithreaded compile-time ECS in C++14".
All component types and systems types are known at run-time.
This is a completely different situation - you need to use some sort of type-erasure technique to deal with components and systems dynamically. A suitable solution is using a scripting language such as Lua for system logic and/or component structure (a simpler, more efficient component-definition language can also be handwritten, so that it maps one-to-one to C++ types or to your engine's types).
You need some sort of context object where you can register component types and system types at run-time. I suggest either using unique incrementing IDs or some sort of UUIDs to identify component/system types. After mapping system logic and component structures to IDs, you can pass those around in your ECS implementation to retrieve data and process entities. You can store component data in generic resizable buffers (or associative maps, for big containers) that can be modified at run-time thanks to component structure knowledge - here's an example of what I mean:
auto c_position_id = ctx.register_component_type("./c_position.txt");
// ...
auto context::register_component_type(const std::string& path)
{
auto& storage = this->component_storage.create_buffer();
auto file_contents = get_contents_from_path(path);
for_parsed_lines_in(file_contents, [&](auto line)
{
if(line.type == "int")
{
storage.append_data_definition(sizeof(int));
}
else if(line.type == "float")
{
storage.append_data_definition(sizeof(float));
}
});
return next_unique_component_type_id++;
}
Some component types and system types are known at compile-time, others are known at run-time.
Use approach (1), and create some sort of "bridge" component and system types that implements any type-erasure technique in order to access component structure or system logic at run-time. An std::map<runtime_system_id, std::function<...>> can work for run-time system logic processing. An std::unique_ptr<runtime_component_data> or an std::aligned_storage_t<some_reasonable_size> can work for run-time component structure.
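As a rough sketch of what that run-time bridge could look like (all names below are invented for illustration):

#include <cstdint>
#include <functional>
#include <map>

struct runtime_context { /* engine data, component buffers, ... */ };

using runtime_system_id = std::uint32_t;

// Run-time systems are type-erased into std::function and looked up by ID.
std::map<runtime_system_id, std::function<void(runtime_context&, float)>> runtime_systems;

runtime_system_id register_runtime_system(std::function<void(runtime_context&, float)> fn)
{
    static runtime_system_id next_id = 0;
    runtime_systems[next_id] = std::move(fn);
    return next_id++;
}

void process_runtime_systems(runtime_context& ctx, float ft)
{
    for (auto& kv : runtime_systems)
        kv.second(ctx, ft); // dispatch through the type-erased callable
}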
To answer your question:
But if this RTTI means bad OOP, what would be the correct way to design this system so no RTTI is required?
You need a way of mapping types to values that you can use at run-time: RTTI is an appropriate way of doing that.
If you do not want to use RTTI and you still want to use polymorphic inheritance to define your component types, you need to implement a way to retrieve some sort of run-time type ID from a derived component type. Here's a primitive way of doing that:
namespace impl
{
auto get_next_type_id()
{
static std::size_t next_type_id{0};
return next_type_id++;
}
template <typename T>
struct type_id_storage
{
static const std::size_t id;
};
template <typename T>
const std::size_t type_id_storage<T>::id{get_next_type_id()};
}
template <typename T>
auto get_type_id()
{
return impl::type_id_storage<T>::id;
}
Explanation: get_next_type_id is a non-static function (shared between translation units) that stores a static incremental counter of type IDs. To retrieve the unique type ID that matches a specific component type you can call:
auto position_id = get_type_id<position_component>();
The get_type_id "public" function will retrieve the unique ID from the corresponding instantiation of impl::type_id_storage, that calls get_next_type_id() on construction, which in turn returns its current next_type_id counter value and increments it for the next type.
Particular care for this kind of approach needs to be taken to make sure it behaves correctly over multiple translation units and to avoid race conditions (in case your ECS is multithreaded). (More info here.)
Now, to solve your issue:
It's the full storage pack that prompted this. In particular, it offers no way of accessing a particular type of component.
// Executes `f` on every component of type `T`.
template <typename T, typename TF>
void storage_pack::for_components(TF&& f)
{
auto& data = this->_component_map[get_type_id<T>()];
for(component_base* cb : data)
{
f(static_cast<T&>(*cb));
}
}
You can see this pattern in use in my old and abandoned SSVEntitySystem library. You can see an RTTI-based approach in my old and outdated “Implementation of a component-based entity system in modern C++” CppCon 2015 talk.
Despite the good and long answer by @VittorioRomeo, I'd like to show another possible approach to the problem.
Basic concepts involved here are type erasure and double dispatching.
The one below is a minimal, working example:
#include <map>
#include <vector>
#include <cstddef>
#include <iostream>
#include <memory>
struct base_component {
static std::size_t next() noexcept {
static std::size_t v = 0;
return v++;
}
};
template<typename D>
struct component: base_component {
static std::size_t type() noexcept {
static const std::size_t t = base_component::next();
return t;
}
};
struct component_x: component<component_x> { };
struct component_y: component<component_y> { };
struct systems {
void elaborate(std::size_t id, component_x &) { std::cout << id << ": x" << std::endl; }
void elaborate(std::size_t id, component_y &) { std::cout << id << ": y" << std::endl; }
};
template<typename C>
struct component_manager {
std::map<std::size_t, C> id_component;
};
struct pack {
struct base_handler {
virtual ~base_handler() = default; // handlers are deleted through base pointers
virtual void accept(systems *) = 0;
};
template<typename C>
struct handler: base_handler {
void accept(systems *s) {
for(auto &&el: manager.id_component) s->elaborate(el.first, el.second);
}
component_manager<C> manager;
};
template<typename C>
void add(std::size_t id) {
if(handlers.find(C::type()) == handlers.cend()) {
handlers[C::type()] = std::make_unique<handler<C>>();
}
handler<C> &h = static_cast<handler<C>&>(*handlers[C::type()].get());
h.manager.id_component[id] = C{};
}
template<typename C>
void walk(systems *s) {
if(handlers.find(C::type()) != handlers.cend()) {
handlers[C::type()]->accept(s);
}
}
private:
std::map<std::size_t, std::unique_ptr<base_handler>> handlers;
};
int main() {
pack coll;
coll.add<component_x>(1);
coll.add<component_y>(1);
coll.add<component_x>(2);
systems sys;
coll.walk<component_x>(&sys);
coll.walk<component_y>(&sys);
}
I tried to be true to the few points mentioned by the OP, so as to provide a solution that fits the real problem.
Let me know with a comment if the example is clear enough for itself or if a few more details are required to fully explain how and why it works actually.
If I understand correctly, you want a collection, such as a map, where the values are of different types, and you want to know what type each value is (so you can downcast it).
Now, a "good OOP" is a design which you don't need to downcast. You just call the mothods (which are common to the base class and the deriveratives) and the derived class performs a different operation than its parent for the same method.
If this is not the case - for example, where you need to use some other data from the child and thus you want to downcast - it means, in most cases, you didn't work hard enough on the design. I don't say it's always possible, but you need to design it in such a way that polymorphism is your only tool. That's "good OOP".
Anyway, if you really need to downcast, you don't have to use RTTI. You can use a common field (a string, say) in the base class that marks the class type, as in the sketch below.
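A minimal sketch of that idea (the names and the string values are purely illustrative):

#include <string>

struct component_base {
    explicit component_base(std::string k) : kind(std::move(k)) {}
    virtual ~component_base() = default;
    std::string kind; // set by each derived component, used instead of RTTI
};

struct position_component : component_base {
    position_component() : component_base("position") {}
    float x = 0, y = 0;
};

// Later, instead of typeid/dynamic_cast:
// if (c->kind == "position") { auto& p = static_cast<position_component&>(*c); /* ... */ }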

OpenCV Matrix of user-defined type

Is there a way to have a matrix of user-defined type in OpenCV 2.x? Something like :
cv::Mat_<KalmanRGBPixel> backgroundModel;
I know cv::Mat<> is meant for images and mathematics, but I want to hold data in matrix form. I don't plan to use inverse, transpose, multiplication, etc.; it's only to store data. I want it in matrix form because the pixel_ij of each frame of a video will be linked to backgroundModel_ij.
I know there is a DataType<_Tp> class in core.hpp that needs to be defined for my type but I'm not sure how to do it.
EDIT : KalmanRGBPixel is only a wrapper for cv::KalmanFilter class. As for now, it's the only member.
... some functions ...
private:
cv::KalmanFilter kalman;
Thanks for your help.
I have a more long-winded answer for anybody wanting to create a matrix of custom objects, of whatever size.
You will need to specialize the DataType template, but instead of having 1 channel, you make the number of channels equal to the size of your custom object (in bytes). You may also need to override a few functions to get the expected functionality; more on that later.
First, here is an example of my custom type template specialization:
typedef HOGFilter::Sample Sample;
namespace cv {
template<> class DataType<Sample>
{
public:
typedef HOGFilter::Sample value_type;
typedef HOGFilter::Sample channel_type;
typedef HOGFilter::Sample work_type;
typedef HOGFilter::Sample vec_type;
enum {
depth = CV_8U,
channels = sizeof(HOGFilter::Sample),
type = CV_MAKETYPE(depth, channels),
};
};
}
Second, you may want to override some functions to get the expected functionality:
// Special version of Mat, a matrix of Samples. Using the power of opencvs
// matrix manipulation and multi-threading capabilities
class SampleMat : public cv::Mat_<Sample>
{
typedef cv::Mat_<Sample> super;
public:
SampleMat(int width = 0, int height = 0);
SampleMat &operator=(const SampleMat &mat);
const Sample& at(int x, int y = 0);
};
The typedef of super isn't required but helps with readability in the cpp file.
Notice I have overridden the constructor with width/height parameters. This is because we have to instantiate the mat this way if we want a 2D matrix.
SampleMat::SampleMat(int width, int height)
{
int count = width * height;
for (int i = 0; i < count; ++i)
{
HOGFilter::Sample sample;
this->push_back(sample);
}
*static_cast<super*>(this) = super::reshape(channels(), height);
}
The at() overload is just for cleaner code:
const Sample & SampleMat::at(int x, int y)
{
if (y == 0)
return super::at<Sample>(x);
return super::at<Sample>(cv::Point(x, y));
}
In the OpenCV documentation it is explained how to add custom types to OpenCV matrices. You need to define the corresponding cv::DataType.
https://docs.opencv.org/master/d0/d3a/classcv_1_1DataType.html
The DataType class is basically used to provide a description of such primitive data types without adding any fields or methods to the corresponding classes (and it is actually impossible to add anything to primitive C/C++ data types). This technique is known in C++ as class traits. It is not DataType itself that is used but its specialized versions […] The main purpose of this class is to convert compilation-time type information to an OpenCV-compatible data type identifier […]
(Yes, finally I answer the question itself in this thread!)
If you don't want to use the OpenCV functionality, then Mat is not the right type for you.
Use std::vector<std::vector<Type> > instead. You can give the size during initialization:
std::vector<std::vector<Type> > matrix(42, std::vector<Type>(23));
Then you can access with []-operator. No need to screw around with obscure cv::Mats here.
If you would really need to go for an OpenCV-Matrix, you are right in that you have to define the DataType. It is basically a bunch of traits. You can read about C++ Traits on the web.
You can create a cv::Mat that uses your own allocated memory by passing its address to the constructor. If you also want the width and height to be correct, you will need to find an OpenCV pixel type that is the same number of bytes.
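For example, something along these lines (a sketch; the sizes and the CV_8UC3 type are arbitrary choices, and the buffer must outlive the Mat):

#include <opencv2/core.hpp>
#include <vector>

int main()
{
    // My own storage: a 480x640 3-channel image allocated outside OpenCV.
    std::vector<unsigned char> buffer(480 * 640 * 3);

    // No copy is made: the Mat header simply points at buffer's memory.
    cv::Mat view(480, 640, CV_8UC3, buffer.data());

    return 0;
}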

Where do you find templates useful?

At my workplace, we tend to use iostream, string, vector, map, and the odd algorithm or two. We haven't actually found many situations where template techniques were a best solution to a problem.
What I am looking for here are ideas, and optionally sample code that shows how you used a template technique to create a new solution to a problem that you encountered in real life.
As a bribe, expect an up vote for your answer.
General info on templates:
Templates are useful anytime you need to use the same code but operating on different data types, where the types are known at compile time. And also when you have any kind of container object.
A very common usage is for just about every type of data structure. For example: Singly linked lists, doubly linked lists, trees, tries, hashtables, ...
Another very common usage is for sorting algorithms.
One of the main advantages of using templates is that you can remove code duplication. Code duplication is one of the biggest things you should avoid when programming.
You could implement a function Max as either a macro or a template, but the template implementation would be type safe and therefore better.
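For instance (a standard illustration of that point, not from the original answer):

// Macro version: no type checking, and an argument with side effects
// (e.g. MAX_MACRO(i++, j)) may be evaluated twice.
#define MAX_MACRO(a, b) ((a) > (b) ? (a) : (b))

// Template version: type safe, each argument evaluated exactly once.
template <typename T>
const T& Max(const T& a, const T& b)
{
    return (a > b) ? a : b;
}

// Max(3, 4.5) fails to compile (conflicting deductions for T)
// instead of silently mixing types the way the macro would.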
And now onto the cool stuff:
Also see template metaprogramming, which is a way of pre-evaluating code at compile time rather than at run time. Template metaprogramming has only immutable values, so once something is computed it cannot change; because of this, template metaprogramming can be seen as a type of functional programming.
Check out this example of template metaprogramming from Wikipedia. It shows how templates can be used to execute code at compile time. Therefore at runtime you have a pre-calculated constant.
template <int N>
struct Factorial
{
enum { value = N * Factorial<N - 1>::value };
};
template <>
struct Factorial<0>
{
enum { value = 1 };
};
// Factorial<4>::value == 24
// Factorial<0>::value == 1
void foo()
{
int x = Factorial<4>::value; // == 24
int y = Factorial<0>::value; // == 1
}
I've used a lot of template code, mostly in Boost and the STL, but I've seldom had a need to write any.
One of the exceptions, a few years ago, was in a program that manipulated Windows PE-format EXE files. The company wanted to add 64-bit support, but the ExeFile class that I'd written to handle the files only worked with 32-bit ones. The code required to manipulate the 64-bit version was essentially identical, but it needed to use a different address type (64-bit instead of 32-bit), which caused two other data structures to be different as well.
Based on the STL's use of a single template to support both std::string and std::wstring, I decided to try making ExeFile a template, with the differing data structures and the address type as parameters. There were two places where I still had to use #ifdef WIN64 lines (slightly different processing requirements), but it wasn't really difficult to do. We've got full 32- and 64-bit support in that program now, and using the template means that every modification we've done since automatically applies to both versions.
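A rough sketch of that kind of parameterization (illustrative only; the structure and member names below are made up, not the original code):

#include <cstdint>

// Stand-ins for the two address-width-dependent PE structures mentioned above.
struct OptionalHeader32 { /* ... */ };
struct OptionalHeader64 { /* ... */ };
struct ImportThunk32    { /* ... */ };
struct ImportThunk64    { /* ... */ };

// The address type and the structures that depend on it become template parameters.
template <typename AddressType, typename OptionalHeader, typename ImportThunk>
class ExeFile
{
public:
    AddressType entryPoint() const { return entry_; } // identical logic for both widths
    // ...
private:
    AddressType entry_ = 0;
};

using ExeFile32 = ExeFile<std::uint32_t, OptionalHeader32, ImportThunk32>;
using ExeFile64 = ExeFile<std::uint64_t, OptionalHeader64, ImportThunk64>;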
One place that I do use templates to create my own code is to implement policy classes as described by Andrei Alexandrescu in Modern C++ Design. At present I'm working on a project that includes a set of classes that interact with BEA\h\h\h Oracle's Tuxedo TP monitor.
One facility that Tuxedo provides is transactional persistent queues, so I have a class TpQueue that interacts with the queue:
class TpQueue {
public:
void enqueue(...);
void dequeue(...);
...
};
However, as the queue is transactional, I need to decide what transaction behaviour I want; this could be done separately outside of the TpQueue class, but I think it's more explicit and less error-prone if each TpQueue instance has its own policy on transactions. So I have a set of TransactionPolicy classes such as:
class OwnTransaction {
public:
void begin(...);  // Suspend any open transaction and start a new one
void commit(...); // Commit my transaction and resume any suspended one
void abort(...);
};
class SharedTransaction {
public:
void begin(...);  // Join the currently active transaction or start a new one if there isn't one
...
};
And the TpQueue class gets re-written as
template <typename TXNPOLICY = SharedTransaction>
class TpQueue : public TXNPOLICY {
...
};
So inside TpQueue I can call begin(), abort(), commit() as needed but can change the behaviour based on the way I declare the instance:
TpQueue<SharedTransaction> queue1 ;
TpQueue<OwnTransaction> queue2 ;
I used templates (with the help of Boost.Fusion) to achieve type-safe integers for a hypergraph library that I was developing. I have a (hyper)edge ID and a vertex ID both of which are integers. With templates, vertex and hyperedge IDs became different types and using one when the other was expected generated a compile-time error. Saved me a lot of headache that I'd otherwise have with run-time debugging.
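A minimal sketch of the tagged-integer idea (using plain tag types here rather than Boost.Fusion; all names are illustrative):

#include <cstdint>

// The tag parameter makes vertex IDs and hyperedge IDs distinct,
// non-interchangeable types that share one implementation.
template <typename Tag>
struct TypedId
{
    std::uint32_t value;
    explicit TypedId(std::uint32_t v) : value(v) {}
};

struct VertexTag {};
struct HyperedgeTag {};

using VertexId    = TypedId<VertexTag>;
using HyperedgeId = TypedId<HyperedgeTag>;

// Swapping the arguments is now a compile-time error.
void connect(HyperedgeId e, VertexId v);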
Here's one example from a real project. I have getter functions like this:
bool getValue(wxString key, wxString& value);
bool getValue(wxString key, int& value);
bool getValue(wxString key, double& value);
bool getValue(wxString key, bool& value);
bool getValue(wxString key, StorageGranularity& value);
bool getValue(wxString key, std::vector<wxString>& value);
And then a variant with a 'default' value: it returns the value for key if it exists, or the default value if it doesn't. A template saved me from having to write 6 new functions myself.
template <typename T>
T get(wxString key, const T& defaultValue)
{
T temp;
if (getValue(key, temp))
return temp;
else
return defaultValue;
}
Templates I regularly consume are a multitude of container classes, Boost smart pointers, scope guards, and a few STL algorithms.
Scenarios in which I have written templates:
custom containers
memory management, implementing type safety and CTor/DTor invocation on top of void * allocators
common implementation for overloads with different types, e.g.
bool ContainsNan(float * , int)
bool ContainsNan(double *, int)
which both just call a (local, hidden) helper function
template <typename T>
bool ContainsNanT(T * values, int len) { /* ... actual code goes here ... */ }
Specific algorithms that are independent of the type, as long as the type has certain properties, e.g. binary serialization.
template <typename T>
void BinStream::Serialize(T & value) { ... }
// to make a type serializable, you need to implement
void SerializeElement(BinStream & stream, Foo & element);
void DeserializeElement(BinStream & stream, Foo & element)
Unlike virtual functions, templates allow more optimizations to take place.
Generally, templates allow you to implement one concept or algorithm for a multitude of types, and have the differences resolved at compile time.
We use COM and accept a pointer to an object that can implement another interface either directly or via IServiceProvider (http://msdn.microsoft.com/en-us/library/cc678965(VS.85).aspx). This prompted me to create this helper cast-like function.
// Get interface either via QueryInterface of via QueryService
template <class IFace>
CComPtr<IFace> GetIFace(IUnknown* unk)
{
CComQIPtr<IFace> ret = unk; // Try QueryInterface
if (ret == NULL) { // Fallback to QueryService
if(CComQIPtr<IServiceProvider> ser = unk)
ser->QueryService(__uuidof(IFace), __uuidof(IFace), (void**)&ret);
}
return ret;
}
I use templates to specify function object types. I often write code that takes a function object as an argument -- a function to integrate, a function to optimize, etc. -- and I find templates more convenient than inheritance. So my code receiving a function object -- such as an integrator or optimizer -- has a template parameter to specify the kind of function object it operates on.
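In sketch form, that usually looks something like this (a made-up numerical integrator, just for illustration):

// The function object type is a template parameter, so f can be a lambda,
// a functor, or a plain function pointer - no inheritance hierarchy needed.
template <typename F>
double integrate(F f, double a, double b, int n = 1000)
{
    // Midpoint rule over n subintervals of [a, b].
    const double h = (b - a) / n;
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += f(a + (i + 0.5) * h);
    return sum * h;
}

// Usage: double area = integrate([](double x) { return x * x; }, 0.0, 1.0);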
The obvious reasons (like preventing code-duplication by operating on different data types) aside, there is this really cool pattern that's called policy based design. I have asked a question about policies vs strategies.
Now, what's so nifty about this feature. Consider you are writing an interface for others to use. You know that your interface will be used, because it is a module in its own domain. But you don't know yet how people are going to use it. Policy-based design strengthens your code for future reuse; it makes you independent of data types a particular implementation relies on. The code is just "slurped in". :-)
Traits are per se a wonderful idea. They can attach particular behaviour, data, and type information to a model. Traits allow complete parameterization of all three of these fields. And best of all, it's a very good way to make code reusable.
I once saw the following code:
void doSomethingGeneric1(SomeClass * c, SomeClass & d)
{
// three lines of code
callFunctionGeneric1(c) ;
// three lines of code
}
repeated ten times:
void doSomethingGeneric2(SomeClass * c, SomeClass & d)
void doSomethingGeneric3(SomeClass * c, SomeClass & d)
void doSomethingGeneric4(SomeClass * c, SomeClass & d)
// Etc
Each function having the same 6 lines of code copy/pasted, and each time calling another function callFunctionGenericX with the same number suffix.
There was no way to refactor the whole thing altogether, so I kept the refactoring local.
I changed the code this way (from memory):
template<typename T>
void doSomethingGenericAnything(SomeClass * c, SomeClass & d, T t)
{
// three lines of code
t(c) ;
// three lines of code
}
And modified the existing code with:
void doSomethingGeneric1(SomeClass * c, SomeClass & d)
{
doSomethingGenericAnything(c, d, callFunctionGeneric1) ;
}
void doSomethingGeneric2(SomeClass * c, SomeClass & d)
{
doSomethingGenericAnything(c, d, callFunctionGeneric2) ;
}
Etc.
This is somewhat hijacking the template thing, but in the end, I guess it's better than playing with typedef'd function pointers or using macros.
I personally have used the Curiously Recurring Template Pattern as a means of enforcing some form of top-down design and bottom-up implementation. An example would be a specification for a generic handler where certain requirements on both form and interface are enforced on derived types at compile time. It looks something like this:
template <class Derived>
struct handler_base {
void pre_call() {
// do any universal pre_call handling here
static_cast<Derived *>(this)->pre_call();
};
void post_call(typename Derived::result_type & result) {
static_cast<Derived *>(this)->post_call(result);
// do any universal post_call handling here
};
typename Derived::result_type
operator() (typename Derived::arg_pack const & args) {
pre_call();
typename Derived::result_type temp = static_cast<Derived *>(this)->eval(args);
post_call(temp);
return temp;
};
};
Something like this can be used then to make sure your handlers derive from this template and enforce top-down design and then allow for bottom-up customization:
struct my_handler : handler_base<my_handler> {
typedef int result_type; // required to compile
typedef tuple<int, int> arg_pack; // required to compile
void pre_call(); // required to compile
void post_call(int &); // required to compile
int eval(arg_pack const &); // required to compile
};
This then allows you to have generic polymorphic functions that deal with only handler_base<> derived types:
template <class T, class Arg0, class Arg1>
typename T::result_type
invoke(handler_base<T> & handler, Arg0 const & arg0, Arg1 const & arg1) {
return handler(make_tuple(arg0, arg1));
};
It's already been mentioned that you can use templates as policy classes to do something. I use this a lot.
I also use them, with the help of property maps (see boost site for more information on this), in order to access data in a generic way. This gives the opportunity to change the way you store data, without ever having to change the way you retrieve it.