Iterating over a list of classes - c++

I want to be able to iterate over a list of classes that inherit from a common ancestor.
Minified version of what I want (Python-like syntax as that's the language I'm coming from):
const *Player *PLAYERS[3] = { *PlayerTypeOne, *PlayerTypeTwo, *PlayerTypeThree };
int outcome = 0;
for player in players {
    if (doThingWithPlayer((&player)(), some, other, variables) == true) {
        outcome++;
    }
}
If this is not the preferred way of doing this sort of operation, advice on how I should continue is very welcome.
The sort of code I want to avoid is:
int outcome = 0;

PlayerTypeOne player_one;
if (doThingWithPlayer(player_one, some, other, variables)) {
    outcome++;
}

PlayerTypeTwo player_two;
if (doThingWithPlayer(player_two, some, other, variables)) {
    outcome++;
}

PlayerTypeThree player_three;
if (doThingWithPlayer(player_three, some, other, variables)) {
    outcome++;
}

You are looking for a factory design pattern:
Player *create_by_name(const std::string &what)
{
    if (what == "PlayerTypeOne")
        return new PlayerTypeOne;
    if (what == "PlayerTypeTwo")
        return new PlayerTypeTwo;
    // ...
    return nullptr; // unknown name
}
and so on. What you also appear to want to do is to supply parameters to each subclass's constructors.
If all subclasses take the same constructor parameters, this becomes trivial: pass the parameters to the factory, and just have them forwarded to the constructors.
If you need to support different parameters to constructors, this becomes more complicated. I would suggest that you start small, and implement a simple factory for your objects, with no constructor parameters, or with just a couple of them that are the same for all subclasses. Once you have the basic principles working, then you can worry about handling the complicated corner cases.
Then, just have an array of class names, iterate over the array, and call the factory. This should have similar results as your pseudo-Python code.
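As a rough sketch (a minimal example assuming the create_by_name factory above and your doThingWithPlayer call; some, other, variables are your own placeholders), that loop could look something like this:

#include <memory>
#include <string>
#include <vector>

int count_outcomes()
{
    // The array of class names mentioned above.
    const std::vector<std::string> player_names = {
        "PlayerTypeOne", "PlayerTypeTwo", "PlayerTypeThree"
    };

    int outcome = 0;
    for (const auto &name : player_names) {
        // create_by_name returns a raw pointer; wrap it so it is released automatically.
        std::unique_ptr<Player> player(create_by_name(name));
        if (player && doThingWithPlayer(*player, some, other, variables))
            ++outcome;
    }
    return outcome;
}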

C++ provides no built-in introspection, so you can't just obtain objects that represent your classes and create instances with them.
What you can do is use metaprogramming:
// A list of types
template <class...> struct pack { };

// Calls f with one default-constructed instance of each T
template <class... Ts, class F>
void construct_each(pack<Ts...>, F &&f) {
    // Classic pre-C++17 expansion trick
    using ex = int[];
    (void)ex{(f(Ts{}), void(), 0)..., 0};

    // C++17 version
    // (void)(f(Ts{}), ...);
}

// ...

using Players = pack<PlayerTypeOne, PlayerTypeTwo, PlayerTypeThree>;

void foo() {
    int outcome = 0;

    construct_each(Players{}, [&](auto &&player) {
        if(doThingWithPlayer(player, some, other, variables))
            ++outcome;
    });
}


Understanding mixins vs mixin templates

In the process of learning the D language, I'm trying to make a generic Matrix class which supports type promotion of the contained object.
That is, when I multiply a Matrix!(int) to a Matrix!(real) I should get a Matrix!(real) as a result.
Since there are many different kinds of type promotions, reimplementing the opBinary method for every possible combination would be really tedious and a ton of boilerplate code. So mixins/mixin templates would seem to be the answer.
What I'm failing to understand is why this first code sample works
import std.stdio;
import std.string : format;

string define_opbinary(string other_type) {
    return "
    Matrix opBinary(string op)(Matrix!(%s) other) {
        if(op == \"*\") {
            Matrix result;
            if(this.columns == other.rows) {
                result = new Matrix(this.rows, other.columns);
            } else {
                result = new Matrix(0,0);
            }
            return result;
        } else assert(0, \"Operator \"~op~\" not implemented\");
    }
    ".format(other_type);
}

class Matrix(T) {
    T[][] storage;
    size_t rows;
    size_t columns;
    const string type = T.stringof;

    this(size_t rows, size_t columns) {
        this.storage = new T[][](rows, columns);
        this.rows = rows;
        this.columns = columns;
    }

    void opIndexAssign(T value, size_t row, size_t column) {
        storage[row][column] = value;
    }

    mixin(define_opbinary(int.stringof));
    mixin(define_opbinary(uint.stringof));
}

void main()
{
    Matrix!int mymat = new Matrix!(int)(2, 2);
    mymat[0,0] = 5;
    writeln(mymat.type);

    Matrix!uint mymat2 = new Matrix!(uint)(2, 2);
    writeln(mymat2.type);

    auto result = mymat * mymat2;
    writeln("result.rows=", result.rows);
    writeln("result.columns=", result.columns);

    auto result2 = mymat2 * mymat;
    writeln("result.type=", result.type);
    writeln("result2.type=", result2.type);
}
void main()
{
Matrix!int mymat = new Matrix!(int)(2, 2);
mymat[0,0] = 5;
writeln(mymat.type);
Matrix!uint mymat2 = new Matrix!(uint)(2, 2);
writeln(mymat2.type);
auto result = mymat * mymat2;
writeln("result.rows=", result.rows);
writeln("result.columns=", result.columns);
auto result2 = mymat2 * mymat;
writeln("result.type=",result.type);
writeln("result2.type=",result2.type);
}
the dub output:
Performing "debug" build using /usr/bin/dmd for x86_64.
matrix ~master: building configuration "application"...
Linking...
Running ./matrix.exe
50
00
int
uint
result.rows=2
result.columns=2
00
00
result.type=int
result2.type=uint
but the second code sample does not work
import std.stdio;
import std.string : format;

mixin template define_opbinary(alias other_type) {
    Matrix opBinary(string op)(Matrix!(other_type) other) {
        if(op == "*") {
            Matrix result;
            if(this.columns == other.rows) {
                result = new Matrix(this.rows, other.columns);
            } else {
                result = new Matrix(0,0);
            }
            return result;
        } else assert(0, "Operator "~op~" not implemented");
    }
}

class Matrix(T) {
    T[][] storage;
    size_t rows;
    size_t columns;
    const string type = T.stringof;

    this(size_t rows, size_t columns) {
        this.storage = new T[][](rows, columns);
        this.rows = rows;
        this.columns = columns;
    }

    void opIndexAssign(T value, size_t row, size_t column) {
        storage[row][column] = value;
    }

    mixin define_opbinary!(int);
    mixin define_opbinary!(uint);
}

void main()
{
    Matrix!int mymat = new Matrix!(int)(2, 2);
    mymat[0,0] = 5;
    writeln(mymat.type);

    Matrix!uint mymat2 = new Matrix!(uint)(2, 2);
    writeln(mymat2.type);

    auto result = mymat * mymat2;
    writeln("result.rows=", result.rows);
    writeln("result.columns=", result.columns);

    auto result2 = mymat2 * mymat;
    writeln("result.type=", result.type);
    writeln("result2.type=", result2.type);
}
the dub output:
source/app.d(60,19): Error: cast(Object)mymat is not of arithmetic type, it is a object.Object
source/app.d(60,27): Error: cast(Object)mymat2 is not of arithmetic type, it is a object.Object
source/app.d(64,20): Error: cast(Object)mymat2 is not of arithmetic type, it is a object.Object
source/app.d(64,29): Error: cast(Object)mymat is not of arithmetic type, it is a object.Object
/usr/bin/dmd failed with exit code 1.
What's extremely odd is that if I remove the mixin define_opbinary!(int); call, then I only get two arithmetic complaints (only the two complaints about line 60 (auto result = mymat * mymat2;) remain).
I have a feeling that somehow the compiler sees the two mixin calls as ambiguous and removes both but I'm not sure.
Any help would be greatly appreciated.
Oh I have a lot to say about this, including that I wouldn't use either type of mixin for this - I'd just use an ordinary template instead. I'll come back to that at the end.
I am going to try to be fairly comprehensive, so apologies if I describe stuff you already know, and on the other hand, I am probably going to give some irrelevant material too in the interests of providing comprehensive background material for a deeper understanding.
First, mixin vs. template mixin. mixin() takes a string and parses it into an AST node (the AST, by the way, is the compiler's internal data structure for representing code; it stands for "abstract syntax tree". foo() is an AST node like FunctionCall { args: [] }, and if(foo) {} is one like IfStatement { condition: Expression { arg: Variable { name: foo } }, body: EmptyStatement } - basically objects representing each part of the code).
Then it pastes that parsed AST node into the same slot where the mixin word appeared. You can often think of this as copy/pasting code strings, but with the restriction that the string must represent a complete element, and it must be substitutable in the same context where the mixin was without errors. So, for example, you can't do int a = bmixin(c) to make a variable with a b in front - the mixin must represent a complete node by itself.
Once it pastes in that AST node though, the compiler treats it as if the code was all written there originally. Any names referenced will be looked up in the pasted context, etc.
A template mixin, on the other hand, actually still has a container element in the AST, which is used for name lookups. It actually works similarly to a struct or class inside the compiler - they all have a list of child declarations that remain together as a unit.
The big difference is that a template mixin's contents are automatically accessible from the parent context... usually. It follows rules similar to class inheritance, where class Foo : Bar can see Bar's members as if they are its own, but they still remain separate. You can still do like super.method(); and call it independently of the child's overrides.
The "usually" comes in because of overloading and hijacking rules. Deep dive and rationale here: https://dlang.org/articles/hijack.html
But the short of it is this: in an effort to prevent third-party code from silently changing your program's behavior when it adds a new function, D requires all sets of function overloads to be merged at the usage point by the programmer, and it is particularly picky about operator overloads, since they already have a default behavior that any mixin is going to be modifying.
mixin template B(T) {
    void foo(T t) {}
}

class A {
    mixin B!int;
    mixin B!string;
}
This is similar to the code you have, but with an ordinary function. If you compile and run, it will work. Now, let's add a foo overload directly to A:
mixin template B(T) {
    void foo(T t) {}
}

class A {
    mixin B!int;
    mixin B!string;
    void foo(float t) {}
}
If you try to compile a call to foo with a string argument, it will actually fail: "Error: function poi.A.foo(float t) is not callable using argument types (string)". Why won't it use the mixin one?
This is a rule of template mixins - remember the compiler still treats them as a unit, not just a pasted set of declarations. Any name present on the outer object - here, our class A - will be used instead of looking inside the template mixin.
Hence, it sees A.foo and doesn't bother looking into B to find a foo. This is kinda useful for overriding specific things from a template mixin, but can be a hassle when trying to add overloads. The solution is to add an alias line to the top-level to tell the compiler to specifically look inside. First, we need to give the mixin a name, then forward the name explicitly:
mixin template B(T) {
    void foo(T t) {}
}

class A {
    mixin B!int bint;        // added a name here
    mixin B!string bstring;  // and here

    alias foo = bint.foo;    // forward foo to the template mixin
    alias foo = bstring.foo; // and this one too

    void foo(float t) {}
}

void main() {
    A a = new A;
    a.foo("a");
}
Now it works for float, int, and string... but it also kinda defeats the purpose of template mixins for adding overloads. One trick you can do is to put a top-level template function in A that just forwards to the mixins... they just need a different name to register.
Which brings me back to your code. Like I said, D is particularly picky about operator overloads since they always override a normal behavior (even when that normal behavior is an error, like in classes). You need to be explicit about them at the top level.
Consider the following:
import std.stdio;
import std.string : format;

mixin template define_opbinary(alias other_type) {
    // I renamed this to opBinaryHelper since it will not be used directly
    // but rather called from the top level
    Matrix opBinaryHelper(string op)(Matrix!(other_type) other) {
        if(op == "*") {
            Matrix result;
            if(this.columns == other.rows) {
                result = new Matrix(this.rows, other.columns);
            } else {
                result = new Matrix(0,0);
            }
            return result;
        } else assert(0, "Operator "~op~" not implemented");
    }
}

class Matrix(T) {
    T[][] storage;
    size_t rows;
    size_t columns;
    const string type = T.stringof;

    this(size_t rows, size_t columns) {
        this.storage = new T[][](rows, columns);
        this.rows = rows;
        this.columns = columns;
    }

    void opIndexAssign(T value, size_t row, size_t column) {
        storage[row][column] = value;
    }

    mixin define_opbinary!(int);
    mixin define_opbinary!(uint);

    // and now here, we do a top-level opBinary that calls the helper
    auto opBinary(string op, M)(M rhs) {
        return this.opBinaryHelper!(op)(rhs);
    }
}

void main()
{
    Matrix!int mymat = new Matrix!(int)(2, 2);
    mymat[0,0] = 5;
    writeln(mymat.type);

    Matrix!uint mymat2 = new Matrix!(uint)(2, 2);
    writeln(mymat2.type);

    auto result = mymat * mymat2;
    writeln("result.rows=", result.rows);
    writeln("result.columns=", result.columns);

    auto result2 = mymat2 * mymat;
    writeln("result.type=", result.type);
    writeln("result2.type=", result2.type);
}
I pasted in the complete code, but there's actually only two changes there: the mixin template now defines a helper with a different name (opBinaryHelper), and the top-level class now has an explicit opBinary defined that forwards to said helper. (If you were to add other overloads btw, the alias trick from above may be necessary, but in this case, since it is all dispatched on if from inside the one name, it lets you merge all the helpers automatically.)
Finally, the code works.
Now, why wasn't any of this necessary with the string mixin? Well, back to the original definition: a string mixin parses the string, then pastes the AST node in as if it were originally written there. That latter part lets it work (just at the cost that once you mix in a string, you are stuck with it, so if you don't like part of it, you must modify the library instead of just overriding a portion).
A template mixin maintains its own sub-namespace to allow for selective overriding, etc., and that triggers a foul with these stricter overloading rules.
And finally, here's the way I'd actually do it:
// this MatrixType : stuff magic means to accept any Matrix, and extract
// the other type out of it.
// a little docs: https://dlang.org/spec/template.html#alias_parameter_specialization
// basically, write a pattern that represents the type, then comma-separate
// a list of placeholders you declared in that pattern
auto opBinary(string op, MatrixType : Matrix!Other_Type, Other_Type)(MatrixType other) {
    // let the compiler do the promotion work for us!
    // we just fetch the type of regular multiplication between the two types.
    // the .init just uses the initial default value of the types as a placeholder;
    // all we really care about is the type, we just can't multiply types, only
    // values, hence using that.
    alias PromotedType = typeof(T.init * Other_Type.init);

    // in your version, you used `if`, but since this is a compile-time
    // parameter, we can use `static if` instead and get more flexibility
    // on stuff like actually changing the return value per operation.
    //
    // Don't need it here, but wanted to point it out anyway.
    static if(op == "*") {
        // and now use that type for the result
        Matrix!PromotedType result;
        if(this.columns == other.rows) {
            result = new Matrix!PromotedType(this.rows, other.columns);
        } else {
            result = new Matrix!PromotedType(0,0);
        }
        return result;
    // and with static if, we can static assert to turn that runtime
    // exception into a compile-time error
    } else static assert(0, "Operator "~op~" not implemented");
}
Just put that opBinary in your class and now the one function can handle all the cases - no need to list specific types, so no more need for mixin magic at all! (....well unless you need virtual overriding with child classes, but that's a whole other topic. Short tip tho, it is possible to static foreach that, which I talked about in my last SO answer here: https://stackoverflow.com/a/57599398/1457000 )
There are a few D tricks in that little function, but I tried to explain them in the comments in the code. Feel free to ask if you need more clarification though - those : patterns in templates are IMO one of the more advanced D compile-time reflection features, so they're not easy to get at first, but for simple cases like this it kinda makes sense; just think of it as a declaration with placeholders.

macro that defines entire derived class from one function and certain type specifiers?

I have a class called system. A system takes some object managers and changes all objects in them in some way.
For example, there might be a system that draws all images in an imageManager.
Every derived class works somewhat like this (pseudo code):
class someChildClass : public System{
private:
    someObjectManager &mang1;      //these are used by the update method.
    someOtherObjectManager &mang2; //the update method changes these somehow
public:
    someChildClass(someObjectManager &mang1, someOtherObjectManager &mang2)
        : mang1(mang1), mang2(mang2){
    }

    virtual void update(){
        //this is pure virtual in the System base class.
        //Do something with the managers here
    }
};
I feel like writing everything but the update method is a waste of time and a source of errors. I wanted to write a macro that basically makes a class like this like so:
QUICKSYSTEM(thisIsTheSystemName, someObjectManager, mang1, someOtherObjectManager, mang2, ... (infinite possible managers, so a variadic macro?)){
    //this is the update function
}
}//this is the end bracket for the class declaration. It's ugly but I don't know how I could do the function differently?
Well, I am having some problems making the macro. Everything works fine until I need to split the variadic arguments into the names and the types. I don't know if this is even possible, since I can't go back and forth in the arguments easily or apply an easy step to them to make sure that every 2nd one is the name of the variable. I would be OK with omitting the possibility for names and just having the types with some sort of automatic naming (manager1, manager2, manager3 or something like that).
If this isn't possible using a macro, what would be a better way to avoid mistakes and cut some time in the constructor and class declaration part?
Yeah, macros are really, really not the way to do this. C++ has templates, which follow C++ syntax and support C++ expressions. Macros instead use their own preprocessor language, which is almost entirely unaware of C++.
You'll want to read up a bit on std::tuple as well. It's going to be rather tricky to handle all those managers with those names; tuples are the Standard solution for that. Both std::get<0>(managers) and std::get<someObjectManager&>(managers) work.
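For example, a minimal sketch of those two accessors (the manager types here are made-up stand-ins):

#include <tuple>

struct ImageManager {};
struct SoundManager {};

int main()
{
    ImageManager images;
    SoundManager sounds;

    // A tuple of references keeps each manager's distinct type.
    std::tuple<ImageManager&, SoundManager&> managers(images, sounds);

    // Access by position or by (unique) element type.
    ImageManager &byIndex = std::get<0>(managers);
    ImageManager &byType  = std::get<ImageManager&>(managers);
    (void)byIndex;
    (void)byType;
}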
Variadic templates are the tool you need here:
#include <iostream>
#include <tuple>
#include <functional>

struct System { virtual void update() = 0; };

template<class... Managers>
struct ManagedSystem : System
{
    std::function<void(Managers&...)> _update;
    std::tuple<Managers&...> _managers;

    template<class F>
    ManagedSystem(F update, Managers&... managers) : _update(update), _managers(managers...) {}

    void update() override { _update(std::get<Managers&>(_managers)...); }
};

int main()
{
    int n = 0;
    double d = 3.14;

    auto reset = [](int& a, double& d) { a = 0; d = 0.0; };
    ManagedSystem<int, double> ms{reset, n, d};
    ms.update();

    std::cout << "n = " << n << ", d = " << d << "\n";
    // n = 0, d = 0
}
The idea is to define a templated class (ManagedSystem) taking multiple manager types as template parameters. This class inherits from System and provides a constructor taking:
an update functor,
and references to managers whose types are defined by the template parameters of the class.
The managers are registered internally in an std::tuple and (with a bit of parameter pack magic) fed to the update functor.
From there, you can define a class derived from System by providing an update function and a type list. This avoids the use of ugly and type-unsafe macros in favor of the not-less-ugly but type-safe templates ;)
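Applied to your pseudocode, usage might look roughly like this (someObjectManager and someOtherObjectManager are the placeholder types from your question; the lambda body is just an assumption):

someObjectManager mang1;
someOtherObjectManager mang2;

// The logic that used to live in someChildClass::update().
auto doUpdate = [](someObjectManager &m1, someOtherObjectManager &m2) {
    // change the managers somehow
};

ManagedSystem<someObjectManager, someOtherObjectManager> ms{doUpdate, mang1, mang2};
ms.update();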

What is the "correct OOP" way to deal with a storage pool of items of mixed types?

This was inspired by a comment to my other question here:
How do you "not repeat yourself" when giving a class an accessible "name" in C++?
nvoight: "RTTI is bad because it's a hint you are not doing good OOP. Doing your own homebrew RTTI does not make it better OOP, it just means you are reinventing the wheel on top of bad OOP."
So what is the "good OOP" solution here? The problem is this. The program is in C++, so there are also C++ specific details mentioned below. I have a "component" class (actually, a struct), which is subclassed into a number of different derived classes containing different kinds of component data. It's part of an "entity component system" design for a game. I'm wondering about the storage of the components. In particular, the current storage system has:
a "component manager" which stores an array, actually a hash map, of a single type of component. The hash map allows for lookup of a component by the entity ID of the entity it belongs to. This component manager is a template which inherits from a base, and the template parameter is the type of component to manage.
a full storage pack which is a collection of these component managers, implemented as an array of pointers to the component manager base class. This has methods to insert and extract an entity (on insertion, the components are taken out and put into the managers, on removal, they are extracted and collected into a new entity object), as well as ones to add new component managers, so if we want to add a new component type to the game, all we have to do is put another command to insert a component manager for it.
It's the full storage pack that prompted this. In particular, it offers no way of accessing a particular type of component. All the components are stored as base class pointers with no type information. What I thought of was using some kind of RTTI and storing the component managers in a map which maps type names and thus allows for lookup and then the proper downcasting of the base class pointer to the appropriate derived class (the user would call a template member on the entity storage pool to do this).
But if this RTTI means bad OOP, what would be the correct way to design this system so no RTTI is required?
Disclaimer/resources: my BCS thesis was about the design and implementation of a C++14 library for compile-time Entity-Component-System pattern generation. You can find the library here on GitHub.
This answer is meant to give you a broad overview of some techniques/ideas you can apply to implement the Entity-Component-System pattern depending on whether or not component/system types are known at compile-time.
If you want to see implementation details, I suggest you check out my library (linked above) for an entirely compile-time based approach. diana is a very nice C library that can give you an idea of a run-time based approach.
You have several approaches, depending on the scope/scale of your project and on the nature of your entities/components/systems.
All component types and system types are known at compile-time.
This is the case analyzed in my BCS thesis - what you can do is use advanced metaprogramming techniques (e.g. using Boost.Hana) to put all component types and system types in compile-time lists and create data structures that link everything together at compile time. Pseudocode example:
namespace c
{
    struct position     { vec2f _v; };
    struct velocity     { vec2f _v; };
    struct acceleration { vec2f _v; };
    struct render       { sprite _s; };
}

constexpr auto component_types = type_list
{
    component_type<c::position>,
    component_type<c::velocity>,
    component_type<c::acceleration>,
    component_type<c::render>
};
After defining your components, you can define your systems and tell them "what components to use":
namespace s
{
    struct movement
    {
        template <typename TData>
        void process(TData& data, float ft)
        {
            data.for_entities([&](auto eid)
            {
                auto& p = data.get(eid, component_type<c::position>)._v;
                auto& v = data.get(eid, component_type<c::velocity>)._v;
                auto& a = data.get(eid, component_type<c::acceleration>)._v;

                v += a * ft;
                p += v * ft;
            });
        }
    };

    struct render
    {
        template <typename TData>
        void process(TData& data)
        {
            data.for_entities([&](auto eid)
            {
                auto& p = data.get(eid, component_type<c::position>)._v;
                auto& s = data.get(eid, component_type<c::render>)._s;

                s.set_position(p);
                some_context::draw(s);
            });
        }
    };
}

constexpr auto system_types = type_list
{
    system_type<s::movement,
        uses
        (
            component_type<c::position>,
            component_type<c::velocity>,
            component_type<c::acceleration>
        )>,

    system_type<s::render,
        uses
        (
            component_type<c::render>
        )>
};
All that's left is using some sort of context object and lambda overloading to visit the systems and call their processing methods:
ctx.visit_systems(
    [ft](auto& data, s::movement& s)
    {
        s.process(data, ft);
    },
    [](auto& data, s::render& s)
    {
        s.process(data);
    });
You can use all the compile-time knowledge to generate appropriate data structures for components and systems inside the context object.
This is the approach I used in my thesis and library - I talked about it at C++Now 2016: "Implementation of a multithreaded compile-time ECS in C++14".
All component types and system types are known at run-time.
This is a completely different situation - you need to use some sort of type-erasure technique to dynamically deal with components and systems. A suitable solution is using a scripting language such as Lua to deal with system logic and/or component structure (a more efficient, simpler component definition language can also be handwritten, so that it maps one-to-one to C++ types or to your engine's types).
You need some sort of context object where you can register component types and system types at run-time. I suggest either using unique incrementing IDs or some sort of UUIDs to identify component/system types. After mapping system logic and component structures to IDs, you can pass those around in your ECS implementation to retrieve data and process entities. You can store component data in generic resizable buffers (or associative maps, for big containers) that can be modified at run-time thanks to component structure knowledge - here's an example of what I mean:
auto c_position_id = ctx.register_component_type("./c_position.txt");

// ...

auto context::register_component_type(const std::string& path)
{
    auto& storage = this->component_storage.create_buffer();
    auto file_contents = get_contents_from_path(path);

    for_parsed_lines_in(file_contents, [&](auto line)
    {
        if(line.type == "int")
        {
            storage.append_data_definition(sizeof(int));
        }
        else if(line.type == "float")
        {
            storage.append_data_definition(sizeof(float));
        }
    });

    return next_unique_component_type_id++;
}
Some component types and system types are known at compile-time, others are known at run-time.
Use approach (1), and create some sort of "bridge" component and system types that implement a type-erasure technique in order to access component structure or system logic at run-time. An std::map<runtime_system_id, std::function<...>> can work for run-time system logic processing. An std::unique_ptr<runtime_component_data> or an std::aligned_storage_t<some_reasonable_size> can work for run-time component structure.
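As a small sketch of that bridge idea (the identifiers runtime_system_id, runtime_data and the registration functions are made up for illustration):

#include <cstddef>
#include <functional>
#include <map>
#include <utility>

using runtime_system_id = std::size_t;
struct runtime_data {}; // stand-in for whatever your systems operate on

std::map<runtime_system_id, std::function<void(runtime_data&)>> runtime_systems;

void register_runtime_system(runtime_system_id id, std::function<void(runtime_data&)> logic)
{
    runtime_systems[id] = std::move(logic);
}

void process_runtime_systems(runtime_data& data)
{
    for (auto& pair : runtime_systems)
        pair.second(data); // type-erased call into run-time system logic
}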
To answer your question:
But if this RTTI means bad OOP, what would be the correct way to design this system so no RTTI is required?
You need a way of mapping types to values that you can use at run-time: RTTI is an appropriate way of doing that.
If you do not want to use RTTI and you still want to use polymorphic inheritance to define your component types, you need to implement a way to retrieve some sort of run-time type ID from a derived component type. Here's a primitive way of doing that:
namespace impl
{
    auto get_next_type_id()
    {
        static std::size_t next_type_id{0};
        return next_type_id++;
    }

    template <typename T>
    struct type_id_storage
    {
        static const std::size_t id;
    };

    template <typename T>
    const std::size_t type_id_storage<T>::id{get_next_type_id()};
}

template <typename T>
auto get_type_id()
{
    return impl::type_id_storage<T>::id;
}
Explanation: get_next_type_id is a non-static function (shared between translation units) that stores a static incremental counter of type IDs. To retrieve the unique type ID that matches a specific component type you can call:
auto position_id = get_type_id<position_component>();
The get_type_id "public" function will retrieve the unique ID from the corresponding instantiation of impl::type_id_storage, that calls get_next_type_id() on construction, which in turn returns its current next_type_id counter value and increments it for the next type.
Particular care for this kind of approach needs to be taken to make sure it behaves correctly over multiple translation units and to avoid race conditions (in case your ECS is multithreaded). (More info here.)
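For instance, one possible variation (just a sketch, not the only way) replaces the plain counter with an atomic one and marks the function inline, so there is a single counter shared by all translation units and concurrent calls stay well-defined:

#include <atomic>
#include <cstddef>

// Variation of get_next_type_id(): `inline` gives one shared definition across
// translation units, and the atomic counter makes concurrent calls safe.
inline std::size_t get_next_type_id_threadsafe()
{
    static std::atomic<std::size_t> next_type_id{0};
    return next_type_id.fetch_add(1, std::memory_order_relaxed);
}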
Now, to solve your issue:
It's the full storage pack that prompted this. In particular, it offers no way of accessing a particular type of component.
// Executes `f` on every component of type `T`.
template <typename T, typename TF>
void storage_pack::for_components(TF&& f)
{
    auto& data = this->_component_map[get_type_id<T>()];
    for(component_base* cb : data)
    {
        f(static_cast<T&>(*cb));
    }
}
You can see this pattern in use in my old and abandoned SSVEntitySystem library. You can see an RTTI-based approach in my old and outdated “Implementation of a component-based entity system in modern C++” CppCon 2015 talk.
Despite the good and long answer by @VittorioRomeo, I'd like to show another possible approach to the problem.
Basic concepts involved here are type erasure and double dispatching.
The one below is a minimal, working example:
#include <map>
#include <vector>
#include <cstddef>
#include <iostream>
#include <memory>

struct base_component {
    static std::size_t next() noexcept {
        static std::size_t v = 0;
        return v++;
    }
};

template<typename D>
struct component: base_component {
    static std::size_t type() noexcept {
        static const std::size_t t = base_component::next();
        return t;
    }
};

struct component_x: component<component_x> { };
struct component_y: component<component_y> { };

struct systems {
    void elaborate(std::size_t id, component_x &) { std::cout << id << ": x" << std::endl; }
    void elaborate(std::size_t id, component_y &) { std::cout << id << ": y" << std::endl; }
};

template<typename C>
struct component_manager {
    std::map<std::size_t, C> id_component;
};

struct pack {
    struct base_handler {
        virtual ~base_handler() = default; // handlers are deleted through this base
        virtual void accept(systems *) = 0;
    };

    template<typename C>
    struct handler: base_handler {
        void accept(systems *s) override {
            for(auto &&el: manager.id_component) s->elaborate(el.first, el.second);
        }

        component_manager<C> manager;
    };

    template<typename C>
    void add(std::size_t id) {
        if(handlers.find(C::type()) == handlers.cend()) {
            handlers[C::type()] = std::make_unique<handler<C>>();
        }

        handler<C> &h = static_cast<handler<C>&>(*handlers[C::type()].get());
        h.manager.id_component[id] = C{};
    }

    template<typename C>
    void walk(systems *s) {
        if(handlers.find(C::type()) != handlers.cend()) {
            handlers[C::type()]->accept(s);
        }
    }

private:
    std::map<std::size_t, std::unique_ptr<base_handler>> handlers;
};

int main() {
    pack coll;

    coll.add<component_x>(1);
    coll.add<component_y>(1);
    coll.add<component_x>(2);

    systems sys;
    coll.walk<component_x>(&sys);
    coll.walk<component_y>(&sys);
}
I tried to be true to the few points mentioned by the OP, so as to provide a solution that fits the real problem.
Let me know with a comment if the example is clear enough for itself or if a few more details are required to fully explain how and why it works actually.
If I understand correctly, you want a collection, such as a map, where the values are of different types, and you want to know what type each value is (so you can downcast it).
Now, "good OOP" is a design in which you don't need to downcast. You just call the methods (which are common to the base class and the derived classes) and each derived class performs a different operation than its parent for the same method.
If this is not the case - for example, when you need to use some other data from the child and thus want to downcast - it means, in most cases, you didn't work hard enough on the design. I don't say it's always possible, but you need to design it in such a way that polymorphism is your only tool. That's "good OOP".
Anyway, if you really need to downcast, you don't have to use RTTI. You can use a common field (a string, say) in the base class that marks the class type.
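For instance, a minimal sketch of that idea (the component names here are made up):

#include <string>
#include <utility>

struct Component {
    std::string kind; // type tag set by each derived class
    explicit Component(std::string k) : kind(std::move(k)) {}
    virtual ~Component() = default;
};

struct PositionComponent : Component {
    PositionComponent() : Component("position") {}
    float x = 0, y = 0;
};

void nudge(Component &c)
{
    if (c.kind == "position") {
        // The tag tells us the concrete type, so this downcast is known to be valid.
        auto &pos = static_cast<PositionComponent&>(c);
        pos.x += 1.0f;
    }
}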

Is this an example where the abstract factory pattern is used?

Now I am developing a class to recognize an object in a photo, and this class is composed of several components (classes). For example,
class PhotoRecognizer
{
public:
    int perform_recognition()
    {
        pPreProcessing->do_preprocessing();
        pFeatureExtractor->do_feature_extraction();
        pClassifier->do_classification();
    }

    boost::shared_ptr<PreProcessing> pPreProcessing;
    boost::shared_ptr<FeatureExtractor> pFeatureExtractor;
    boost::shared_ptr<Classifier> pClassifier;
};
In this example, when we use this class to perform recognition, we invoke other classes PreProcessing, FeatureExtractor and Classifier. As you can image, there are many different methods to implement each class. For example, for the Classifier class, we can use SVMClassfier or NeuralNetworkClassifer, which is a derived class of the basic Classifier class.
class SVMClassifier: public Classifier
{
public:
    void do_classification();
};
Therefore, by using different elements within PhotoRecognizer class, we can create different kinds of PhotoRecongnizer. Now, I am building a benchmark to know how to combine these elements together to create an optimal PhotoRecognizer. One solution I can think of is to use abstract factory:
class MethodFactory
{
public:
    MethodFactory(){};
    boost::shared_ptr<PreProcessing> pPreProcessing;
    boost::shared_ptr<FeatureExtractor> pFeatureExtractor;
    boost::shared_ptr<Classifier> pClassifier;
};

class Method1: public MethodFactory
{
public:
    Method1(): MethodFactory()
    {
        pPreProcessing.reset(new GaussianFiltering);
        pFeatureExtractor.reset(new FFTStatictis);
        pClassifier.reset(new SVMClassifier);
    }
};

class Method2: public MethodFactory
{
public:
    Method2(): MethodFactory()
    {
        pPreProcessing.reset(new MedianFiltering);
        pFeatureExtractor.reset(new WaveletStatictis);
        pClassifier.reset(new NearestNeighborClassifier);
    }
};
class PhotoRecognizer
{
public:
    PhotoRecognizer(MethodFactory *p): pFactory(p)
    {
    }

    int perform_recognition()
    {
        pFactory->pPreProcessing->do_preprocessing();
        pFactory->pFeatureExtractor->do_feature_extraction();
        pFactory->pClassifier->do_classification();
    }

    MethodFactory *pFactory;
};
So when I use Method1 to perform photo recognition, I can simply do the following:
Method1 med;
PhotoRecognizer recogMethod1(&med);
recogMethod1.perform_recognition();
Furthermore, I can even make the class PhotoRecognizer more compact:
enum RecMethod
{
    Method1, Method2
};

class PhotoRecognizer
{
public:
    PhotoRecognizer(RecMethod method)
    {
        switch(method)
        {
            case Method1:
                pFactory.reset(new Method1());
                break;
            ...
        }
    }

    boost::shared_ptr<MethodFactory> pFactory;
};
So here is my question: is the abstract factory design pattern well justified in the situation described above? Are there alternative solutions? Thanks.
As so often, there is no ultimate "right" way to do it, and the answer depends a lot on how the project will be used. If it is only for quick tests, done once and never looked at again - go on and use enums if it is your heart's desire; nobody should stop you.
However, if you plan to extend the possible methods over time, I would discourage your second approach with enums. The reason is: every time you want to add a new method you have to change the PhotoRecognizer class, so you have to read the code and remember what it is doing, and if somebody else has to do it, it takes even more time.
The design with enums violates the first two principles of SOLID (https://en.wikipedia.org/wiki/SOLID_(object-oriented_design)):
Open-Closed-Principle (OCP): PhotoRecognizer class cannot be extended (adding a new method) without modification of its code.
Single-Responsibility-Principle (SRP): PhotoRecognizer class does not only recognize the photo, but also serves as a factory for methods.
Your first approach is better, because if you defined another Method3 you could put it into your PhotoRecognizer and use it without changing the code of the class:
//define Method3 somewhere
Method3 med;
PhotoRecognizer recogMethod3(&med);
recogMethod3.perform_recognition();
What I don't like about your approach, is that for every possible combination you have to write a class (MethodX), which might result in a lot of joyless work. I would do the following:
struct Method
{
    boost::shared_ptr<PreProcessing> pPreProcessing;
    boost::shared_ptr<FeatureExtractor> pFeatureExtractor;
    boost::shared_ptr<Classifier> pClassifier;
};
See Method as a collection of slots for different algorithms; it is here because it is convenient to pass the Processing/Extractor/Classifier parts around in this way.
And one could use a factory function:
enum PreprocessingType {pType1, pType2, ...};
enum FeatureExtractorType {feType1, feType2, ..};
enum ClassifierType {cType1, cType2, ... };
Method createMethod(PreprocessingType p, FeatureExtractionType fe, ClassifierType ct){
Method result;
swith(p){
pType1: result.pPreprocessing.reset(new Type1Preprocessing());
break;
....
}
//the same for the other two: fe and ct
....
return result
}
You might ask: "But what about OCP?" - and you would be right! One has to change createMethod to add other (new) classes. And it might not be much comfort to you that you still have the possibility to create a Method object by hand, initialize the fields with the new classes and pass it to the PhotoRecognizer constructor.
But with C++, you have a mighty tool at your disposal - the templates:
template <typename P, typename FE, typename C>
Method createMethod(){
    Method result;
    result.pPreProcessing.reset(new P());
    result.pFeatureExtractor.reset(new FE());
    result.pClassifier.reset(new C());
    return result;
}
And you are free to choose any combination you want without changing the code:
//define P1, FE22, C2 somewhere
Method medX = createMethod<P1, FE22, C2>();
PhotoRecognizer recogMethodX(&medX);
recogMethodX.perform_recognition();
There is yet another issue: what if the class PreProcessingA cannot be used with the class ClassifierB? Earlier, if there was no class MethodAB, nobody could use that combination, but now this mistake is possible.
To handle this problem, traits can be used:
template <class A, class B>
struct Together{
    static const bool can_be_used = false;
};

template <>
struct Together<PreprocessingA, ClassifierA>{
    static const bool can_be_used = true;
};

template <typename P, typename FE, typename C>
Method createMethod(){
    static_assert(Together<P,C>::can_be_used, "classes cannot be used together");
    Method result;
    ....
}
Conclusion
This approach has the following advantages:
SRP, i.e. PhotoRecognizer - only recognizes, Method - only bundles the algorithm parts and createMethod - only creates a method.
OCP, i.e. we can add new algorithms without changing the code of other classes/functions
Thanks to traits, we can detect a wrong combination of part-algorithms at compile time.
No boilerplate code / no code duplication.
PS:
You could ask: why not scrap the whole Method class? One could just as well use:
template <typename P, typename FE, typename C>
class PhotoRecognizer{
public:
    P preprocessing;
    FE featureExtractor;
    C classifier;
    ...
};

PhotoRecognizer<P1, FE22, C2> recog;
recog.perform_recognition();
Yeah, it's true. This alternative has some advantages and disadvantages; one must know more about the project to make the right trade-off. But as a default I would go with the more SRP-compliant approach of encapsulating the part-algorithms into the Method class.
I've implemented an abstract factory pattern here and there. I've always regretted the decision after revisiting the code for maintenance. There is no case I can think of where one or more factory methods wouldn't have been a better idea. Therefore, I like your second approach best. Consider ditching the Method class as ead suggested. Once your testing is complete you'll have one or more factory methods that construct exactly what you want, and best of all, you and others will be able to follow the code later. For example:
std::shared_ptr<PhotoRecognizer> CreateOptimizedPhotoRecognizer()
{
    auto result = std::make_shared<PhotoRecognizer>(
        CreatePreProcessing(PreProcessingMethod::MedianFiltering),
        CreateFeatureExtractor(FeatureExtractionMethod::WaveletStatictis),
        CreateClassifier(ClassificationMethod::NearestNeighborClassifier)
        );
    return result;
}
Use your factory method in code like this:
auto pPhotoRecognizer = CreateOptimizedPhotoRecognizer();
Create the enumerations as you suggested. I know, I know, open/closed principle... If you keep these enumerations in one spot you won't have a problem keeping them in sync with your factory methods. First the enumerations:
enum class PreProcessingMethod { MedianFiltering, FilteringTypeB };
enum class FeatureExtractionMethod { WaveletStatictis, FeatureExtractionTypeB };
enum class ClassificationMethod { NearestNeighborClassifier, SVMClassfier, NeuralNetworkClassifer };
Here's an example of a component factory method:
std::shared_ptr<PreProcessing> CreatePreProcessing(PreProcessingMethod method)
{
    std::shared_ptr<PreProcessing> result;
    switch (method)
    {
    case PreProcessingMethod::MedianFiltering:
        result = std::make_shared<MedianFiltering>();
        break;
    case PreProcessingMethod::FilteringTypeB:
        result = std::make_shared<FilteringTypeB>();
        break;
    default:
        break;
    }
    return result;
}
In order to determine the best combination of algorithms you'll probably want to create some automated tests that run through all the possible permutations of components. One way to do this could be as straightforward as:
for (auto preProc = static_cast<PreProcessingMethod>(0); ;
     preProc = static_cast<PreProcessingMethod>(static_cast<int>(preProc) + 1))
{
    auto pPreProcessing = CreatePreProcessing(preProc);
    if (!pPreProcessing)
        break;

    for (auto feature = static_cast<FeatureExtractionMethod>(0); ;
         feature = static_cast<FeatureExtractionMethod>(static_cast<int>(feature) + 1))
    {
        auto pFeatureExtractor = CreateFeatureExtractor(feature);
        if (!pFeatureExtractor)
            break;

        for (auto classifier = static_cast<ClassificationMethod>(0); ;
             classifier = static_cast<ClassificationMethod>(static_cast<int>(classifier) + 1))
        {
            auto pClassifier = CreateClassifier(classifier);
            if (!pClassifier)
                break;

            {
                auto pPhotoRecognizer = std::make_shared<PhotoRecognizer>(
                    pPreProcessing,
                    pFeatureExtractor,
                    pClassifier
                    );

                auto testResults = TestRecognizer(pPhotoRecognizer);
                PrintConfigurationAndResults(pPhotoRecognizer, testResults);
            }
        }
    }
}
Unless you are reusing MethodFactory, I'd recommend the following:
struct Method1 {
    using PreProcessing_t = GaussianFiltering;
    using FeatureExtractor_t = FFTStatictis;
    using Classifier_t = SVMClassifier;
};

class PhotoRecognizer
{
public:
    template<typename Method>
    PhotoRecognizer(Method tag) {
        pPreProcessing.reset(new typename Method::PreProcessing_t());
        pFeatureExtractor.reset(new typename Method::FeatureExtractor_t());
        pClassifier.reset(new typename Method::Classifier_t());
    }
};
Usage:
PhotoRecognizer recognizer(Method1{});

Factory method anti-if implementation

I'm applying the Factory design pattern in my C++ project, and below you can see how I am doing it. I'm trying to improve my code by following the "anti-if" campaign, and thus want to remove the if statements I have. Any idea how I can do that?
typedef std::map<std::string, Chip*> ChipList;

Chip* ChipFactory::createChip(const std::string& type) {
    ChipList::iterator existing = Chips.find(type);

    if (existing != Chips.end()) {
        return (existing->second);
    }

    if (type == "R500") {
        return Chips[type] = new ChipR500();
    }
    if (type == "PIC32F42") {
        return Chips[type] = new ChipPIC32F42();
    }
    if (type == "34HC22") {
        return Chips[type] = new Chip34HC22();
    }

    return 0;
}
I would imagine creating a map with strings as keys and the constructors (or something to create the objects) as values. After that, I can just get the constructor from the map using the type (types are strings) and create my object without any if. (I know I'm being a bit paranoid, but I want to know if it can be done or not.)
You are right, you should use a map from key to creation-function.
In your case it would be
typedef Chip* tCreationFunc();

std::map<std::string, tCreationFunc*> microcontrollers;

For each new chip-derived class ChipXXX, add a static function:

static Chip* CreateInstance()
{
    return new ChipXXX();
}

and also register this function into the map.
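For example, the registration could be done once at start-up, something like this (registerChips is a made-up helper name; it assumes each chip class defines the static CreateInstance() shown above):

void ChipFactory::registerChips()
{
    microcontrollers["R500"]     = &ChipR500::CreateInstance;
    microcontrollers["PIC32F42"] = &ChipPIC32F42::CreateInstance;
    microcontrollers["34HC22"]   = &Chip34HC22::CreateInstance;
}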
Your factory function should be something like this:
Chip* ChipFactory::createChip(std::string& type)
{
    std::map<std::string, tCreationFunc*>::iterator existing = microcontrollers.find(type);

    if (existing != microcontrollers.end())
        return existing->second();

    return NULL;
}
Note that copy constructor is not needed, as in your example.
The point of the factory is not to get rid of the ifs, but to put them in a separate place, away from your real business logic, so they don't pollute it. It is just a separation of concerns.
If you're desperate, you could write a jump table/clone() combo that would do this job with no if statements.
class Factory {
    struct ChipFunctorBase {
        virtual ~ChipFunctorBase() = default;
        virtual Chip* Create() = 0;
    };

    template<typename T>
    struct CreateChipFunctor : ChipFunctorBase {
        Chip* Create() override { return new T; }
    };

    std::unordered_map<std::string, std::unique_ptr<ChipFunctorBase>> jumptable;

public:
    Factory() {
        jumptable["R500"].reset(new CreateChipFunctor<ChipR500>());
        jumptable["PIC32F42"].reset(new CreateChipFunctor<ChipPIC32F42>());
        jumptable["34HC22"].reset(new CreateChipFunctor<Chip34HC22>());
    }

    Chip* CreateNewChip(const std::string& type) {
        if(jumptable[type].get())
            return jumptable[type]->Create();
        else
            return nullptr;
    }
};
However, this kind of approach only becomes valuable when you have large numbers of different Chip types. For just a few, it's more useful just to write a couple of ifs.
Quick note: I've used std::unordered_map and std::unique_ptr, which may not be part of your STL, depending on how new your compiler is. Replace with std::map/boost::unordered_map, and std::/boost::shared_ptr.
No, you cannot get rid of the ifs. The createChip method creates a new instance depending on the constant (type name) you pass as an argument.
But you may optimize your code a little by moving these two lines out of the if statements:
microcontrollers[type] = newController;
return microcontrollers[type];
To answer your question: Yes, you should make a factory with a map to functions that construct the objects you want. The objects constructed should supply and register that function with the factory themselves.
There is some reading on the subject in several other SO questions as well, so I'll let you read that instead of explaining it all here.
Generic factory in C++
Is there a way to instantiate objects from a string holding their class name?
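As a sketch of the self-registration idea (instance() and registerType() are assumed factory APIs here, not something from your code, and it assumes ChipR500 exposes a static CreateInstance() like in the answer above):

// In ChipR500.cpp: a static object whose constructor registers the creation
// function with the factory map at program start-up.
namespace {
    struct ChipR500Registrar {
        ChipR500Registrar() {
            ChipFactory::instance().registerType("R500", &ChipR500::CreateInstance);
        }
    } chipR500Registrar;
}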
You can have ifs in a factory - just don't have them littered throughout your code.
struct Chip{
};

struct ChipR500 : Chip{};
struct PIC32F42 : Chip{};

struct ChipCreator{
    virtual Chip *make() = 0;
};

struct ChipR500Creator : ChipCreator{
    Chip *make(){ return new ChipR500(); }
};

struct PIC32F42Creator : ChipCreator{
    Chip *make(){ return new PIC32F42(); }
};

int main(){
    ChipR500Creator m; // client code knows only the factory method interface, not the actual concrete products
    Chip *p = m.make();
}
What you are asking for, essentially, is called Virtual Construction, i.e. the ability to build an object whose type is only known at runtime.
Of course C++ doesn't allow constructors to be virtual, so this requires a bit of trickery. The common OO approach is to use the Prototype pattern:
class Chip
{
public:
    virtual Chip* clone() const = 0;
};

class ChipA: public Chip
{
public:
    virtual ChipA* clone() const { return new ChipA(*this); }
};
And then instantiate a map of these prototypes and use it to build your objects (std::map<std::string,Chip*>). Typically, the map is instantiated as a singleton.
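A minimal sketch of such a prototype registry might look like this (here with std::unique_ptr to keep ownership clear, and assuming Chip also declares a virtual destructor):

#include <map>
#include <memory>
#include <string>
#include <utility>

class ChipPrototypes
{
public:
    void add(const std::string& name, std::unique_ptr<Chip> prototype)
    {
        prototypes_[name] = std::move(prototype);
    }

    std::unique_ptr<Chip> create(const std::string& name) const
    {
        auto it = prototypes_.find(name);
        if (it == prototypes_.end())
            return nullptr;
        // Clone the registered prototype to build a fresh object.
        return std::unique_ptr<Chip>(it->second->clone());
    }

private:
    std::map<std::string, std::unique_ptr<Chip>> prototypes_;
};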
The other approach, as has been illustrated so far, is similar and consists in registering methods directly rather than prototype objects. It might or might not be your personal preference, but it's generally slightly faster (not by much, you just avoid a virtual dispatch) and the memory is easier to handle (you don't have to call delete on pointers to functions).
What you should pay attention to, however, is the memory management aspect. You don't want to go leaking, so make sure to use RAII idioms.