I'm facing problems with the design of a C++ library of mine. It is a library for reading streams, supporting a feature I haven't found in other "stream" implementations. It is not really important why I've decided to start writing it. The point is that I have a stream class that provides two important behaviours through multiple inheritance: shareability and seekability.
Shareable streams are those that have a shareBlock(size_t length) method that returns a new stream sharing resources with its parent stream (e.g. using the same memory block used by the parent stream). Seekable streams are those that are... well, seekable: through a seek() method, these classes can seek to a given point in the stream. Not all streams in the library are shareable and/or seekable.
A stream class that provides implementations for both seeking and sharing resources inherits from interface classes called Seekable and Shareable. That's all good if I know the type of such a stream, but sometimes I might want a function to accept as argument any stream that simply fulfils the quality of being seekable and shareable at the same time, regardless of which stream class it actually is. I could do that by creating yet another class that inherits from both Seekable and Shareable and taking a reference to that type, but then I would have to make my classes that are both seekable and shareable inherit from that class. If more "behavioural classes" like those were added, I would need to make several modifications all over the code, soon leading to unmaintainable code. Is there a way to solve this dilemma? If not, then I'm absolutely coming to understand why people are not satisfied with multiple inheritance. It almost does the job, but, just then, it doesn't :D
Any help is appreciated.
-- 2nd edit, preferred problem resolution --
At first I thought Managu's solution would be my preferred one. However, Matthieu M. came up with another that I prefer over Managu's: using boost::enable_if<>. I would use Managu's solution if the messages BOOST_MPL_ASSERT produces weren't so cryptic. If there were a way to create instructive compile-time error messages, I would surely go that way. But, as I said, the methods available produce cryptic messages. So I prefer the (much) less instructive, yet cleaner, message produced when the boost::enable_if<> conditions are not met.
I've created some macros to ease the task of writing template functions that only accept arguments inheriting from selected classes; here they go:
// SonettoEnableIfDerivedMacros.h
#ifndef SONETTO_ENABLEIFDERIVEDMACROS_H
#define SONETTO_ENABLEIFDERIVEDMACROS_H
#include <boost/preprocessor/repetition/repeat.hpp>
#include <boost/preprocessor/array/elem.hpp>
#include <boost/preprocessor/array/size.hpp>
#include <boost/preprocessor/tuple/elem.hpp>
#include <boost/mpl/bool.hpp>
#include <boost/mpl/and.hpp>
#include <boost/type_traits/is_base_and_derived.hpp>
#include <boost/utility/enable_if.hpp>
/*
For each (TemplateArgument,DerivedClassType) preprocessor tuple,
expand: `boost::is_base_and_derived<DerivedClassType,TemplateArgument>,'
*/
#define SONETTO_ENABLE_IF_DERIVED_EXPAND_CONDITION(z,n,data) \
boost::is_base_and_derived<BOOST_PP_TUPLE_ELEM(2,1,BOOST_PP_ARRAY_ELEM(n,data)), \
BOOST_PP_TUPLE_ELEM(2,0,BOOST_PP_ARRAY_ELEM(n,data))>,
/*
ReturnType: Return type of the function
DerivationsArray: Boost.Preprocessor array containing tuples in the form
(TemplateArgument,DerivedClassType) (see
SONETTO_ENABLE_IF_DERIVED_EXPAND_CONDITION)
Expands:
typename boost::enable_if<
boost::mpl::and_<
boost::is_base_and_derived<DerivedClassType,TemplateArgument>,
...
boost::mpl::bool_<true> // Used to nullify trailing comma
>, ReturnType>::type
*/
#define SONETTO_ENABLE_IF_DERIVED(ReturnType,DerivationsArray) \
typename boost::enable_if< \
boost::mpl::and_< \
BOOST_PP_REPEAT(BOOST_PP_ARRAY_SIZE(DerivationsArray), \
SONETTO_ENABLE_IF_DERIVED_EXPAND_CONDITION,DerivationsArray) \
boost::mpl::bool_<true> \
>, ReturnType>::type
#endif
// main.cpp: Usage example
#include <iostream>
#include "SonettoEnableIfDerivedMacros.h"
class BehaviourA
{
public:
void behaveLikeA() const { std::cout << "behaveLikeA()\n"; }
};
class BehaviourB
{
public:
void behaveLikeB() const { std::cout << "behaveLikeB()\n"; }
};
class BehaviourC
{
public:
void behaveLikeC() const { std::cout << "behaveLikeC()\n"; }
};
class CompoundBehaviourAB : public BehaviourA, public BehaviourB {};
class CompoundBehaviourAC : public BehaviourA, public BehaviourC {};
class SingleBehaviourA : public BehaviourA {};
template <class MustBeAB>
SONETTO_ENABLE_IF_DERIVED(void,(2,((MustBeAB,BehaviourA),(MustBeAB,BehaviourB))))
myFunction(MustBeAB &ab)
{
ab.behaveLikeA();
ab.behaveLikeB();
}
int main()
{
CompoundBehaviourAB ab;
CompoundBehaviourAC ac;
SingleBehaviourA a;
myFunction(ab); // Ok, prints `behaveLikeA()' and `behaveLikeB()'
myFunction(ac); // Fails with `error: no matching function for
// call to `myFunction(CompoundBehaviourAC&)''
myFunction(a); // Fails with `error: no matching function for
// call to `myFunction(SingleBehaviourA&)''
}
As you can see, the error messages are exceptionally clean (at least in GCC 3.4.5). But they can be misleading: they don't tell you that you've passed the wrong argument type; they tell you that the function doesn't exist (and, in fact, it doesn't, due to SFINAE; but that may not be exactly clear to the user). Still, I prefer those clean messages over the randomStuff ... ************** garbage ************** that BOOST_MPL_ASSERT produces.
If you find any bugs in this code, please edit and correct them, or post a comment in that regard. The one major issue I find in those macros is that they're subject to some Boost.Preprocessor limits. Here, for example, I can only pass a DerivationsArray of up to 4 items to SONETTO_ENABLE_IF_DERIVED(). I think those limits are configurable, though, and maybe they will even be lifted in the upcoming C++1x standard, won't they? Please correct me if I'm wrong; I don't remember whether they have suggested changes to the preprocessor.
Thank you.
Just a few thoughts:
STL has this same sort of problem with iterators and functors. The solution there was basically to remove types from the equation altogether, document the requirements (as "concepts"), and use what amounts to duck typing. This fits well with a policy of compile-time polymorphism.
Perhaps a midground would be to create a template function which statically checks its conditions at instantiation. Here's a sketch (which I don't guarantee will compile).
class shareable {...};
class seekable {...};
template <typename StreamType>
void needs_sharable_and_seekable(const StreamType& stream)
{
BOOST_STATIC_ASSERT(boost::is_base_and_derived<shareable, StreamType>::value);
BOOST_STATIC_ASSERT(boost::is_base_and_derived<seekable, StreamType>::value);
....
}
Edit: Spent a few minutes making sure things compiled, and "cleaning up" the error messages:
#include <boost/type_traits/is_base_and_derived.hpp>
#include <boost/mpl/assert.hpp>
class shareable {};
class seekable {};
class both : public shareable, public seekable
{
};
template <typename StreamType>
void dosomething(const StreamType& dummy)
{
BOOST_MPL_ASSERT_MSG((boost::is_base_and_derived<shareable, StreamType>::value),
dosomething_requires_shareable_stream,
(StreamType));
BOOST_MPL_ASSERT_MSG((boost::is_base_and_derived<seekable, StreamType>::value),
dosomething_requires_seekable_stream,
(StreamType));
}
int main()
{
both b;
shareable s1;
seekable s2;
dosomething(b);  // compiles fine
dosomething(s1); // fails: dosomething_requires_seekable_stream
dosomething(s2); // fails: dosomething_requires_shareable_stream
}
Take a look at boost::enable_if
// Before
template <class Stream>
some_type some_function(const Stream& c);
// After
template <class Stream>
typename boost::enable_if<
boost::mpl::and_<
boost::is_base_and_derived<Shareable,Stream>,
boost::is_base_and_derived<Seekable,Stream>
>,
some_type
>::type
some_function(const Stream& c);
Thanks to SFINAE, this function will only be considered if Stream satisfies the requirements, i.e. here, deriving from both Shareable and Seekable.
How about using a function template?
template <typename STREAM>
void doSomething(STREAM &stream)
{
stream.share();
stream.seek(...);
}
You might want the Decorator pattern.
Assuming both Seekable and Shareable have a common ancestor, one way I can think of is trying to downcast (of course, with the asserts replaced by your error-checking):
void foo(Stream *s) {
assert(s != NULL);
assert(dynamic_cast<Seekable*>(s) != NULL);
assert(dynamic_cast<Shareable*>(s) != NULL);
}
Replace 'shareable' and 'seekable' with 'in' and 'out' and find your 'io' solution. In a library, similar problems should have similar solutions.
This question is specifically about C++ architecture on embedded, hard real-time systems. This implies that large parts of the data-structures as well as the exact program-flow are given at compile-time, performance is important and a lot of code can be inlined. Solutions preferably use C++03 only, but C++11 inputs are also welcome.
I am looking for established design-patterns and solutions to the architectural problem where the same code-base should be re-used for several, closely related products, while some parts (e.g. the hardware-abstraction) will necessarily be different.
I will likely end up with a hierarchical structure of modules encapsulated in classes that might then look something like this, assuming 4 layers:
Product A Product B
Toplevel_A Toplevel_B (different for A and B, but with common parts)
Middle_generic Middle_generic (same for A and B)
Sub_generic Sub_generic (same for A and B)
Hardware_A Hardware_B (different for A and B)
Here, some classes inherit from a common base class (e.g. Toplevel_A from Toplevel_base) while others do not need to be specialized at all (e.g. Middle_generic).
Currently I can think of the following approaches:
(A): If this was a regular desktop-application, I would use virtual inheritance and create the instances at run-time, using e.g. an Abstract Factory.
Drawback: However, the *_B classes will never be used in product A, and dereferencing all the virtual function calls and members that cannot be resolved to an address until run-time will lead to quite some overhead.
(B) Using template specialization as inheritance mechanism (e.g. CRTP)
template<class Derived>
class Toplevel { /* generic stuff ... */ };
class Toplevel_A : public Toplevel<Toplevel_A> { /* specific stuff ... */ };
Drawback: Hard to understand.
(C): Use different sets of matching files and let the build-scripts include the right one
// common/toplevel_base.h
class Toplevel_base { /* ... */ };
// product_A/toplevel.h
class Toplevel : Toplevel_base { /* ... */ };
// product_B/toplevel.h
class Toplevel : Toplevel_base { /* ... */ };
// build_script.A
compiler -Icommon -Iproduct_A
Drawback: Confusing, tricky to maintain and test.
(D): One big typedef (or #define) file
//typedef_A.h
typedef Toplevel_A Toplevel_to_be_used;
typedef Hardware_A Hardware_to_be_used;
// etc.
// sub_generic.h
class sub_generic {
Hardware_to_be_used the_hardware;
// etc.
};
Drawback: One file to be included everywhere, and still the need for another mechanism to actually switch between different configurations.
(E): A similar, "Policy based" configuration, e.g.
template <class Policy>
class Toplevel {
Middle_generic<Policy> the_middle;
// ...
};
// ...
template <class Policy>
class Sub_generic {
typename Policy::Hardware_to_be_used the_hardware;
// ...
};
// used as
class Policy_A {
typedef Hardware_A Hardware_to_be_used;
};
Toplevel<Policy_A> the_toplevel;
Drawback: Everything is a template now; a lot of code needs to be re-compiled every time.
(F): Compiler switch and preprocessor
// sub_generic.h
class Sub_generic {
#if PRODUCT_IS_A
Hardware_A _hardware;
#endif
#if PRODUCT_IS_B
Hardware_B _hardware;
#endif
};
Drawback: Brrr..., only if all else fails.
Is there any (other) established design-pattern or a better solution to this problem, such that the compiler can statically allocate as many objects as possible and inline large parts of the code, knowing which product is being built and which classes are going to be used?
I'd go for A. Until it's PROVEN that this is not good enough, go for the same decisions as for desktop (well, of course, it may be "obvious" that allocating several kilobytes on the stack, or using global variables that are many megabytes large, is not going to work). Yes, there is SOME overhead in calling virtual functions, but I would go for the most obvious and natural C++ solution FIRST, then redesign if it's not "good enough" (obviously, try to determine performance and such early on, and use tools like a sampling profiler to determine where you are spending time, rather than "guessing" - humans are proven pretty poor guessers).
I'd then move to option B if A is proven to not work. This is indeed not entirely obvious, but it is, roughly, how LLVM/Clang solves this problem for combinations of hardware and OS, see:
https://github.com/llvm-mirror/clang/blob/master/lib/Basic/Targets.cpp
First I would like to point out that you basically answered your own question in the question :-)
Next I would like to point out that, in C++, "the exact program-flow [is] given at compile-time, performance is important and a lot of code can be inlined" is called templates. The other approaches that leverage language features, as opposed to build-system features, will serve only as a logical way of structuring the code in your project, to the benefit of developers.
Further, as noted in other answers, C is more common for hard real-time systems than C++, and in C it is customary to rely on macros to make this kind of optimization at compile time.
Finally, you have noted under your B solution above that template specialization is hard to understand. I would argue that this depends on how you do it and also on how much experience your team has with C++/templates. I find many "template-ridden" projects to be extremely hard to read and the error messages they produce to be unholy at best, but I still manage to make effective use of templates in my own projects because I respect the KISS principle while doing it.
So my answer to you is: go with B, or ditch C++ for C.
I understand that you have two important requirements:
Data types are known at compile time
Program-flow is known at compile time
The CRTP wouldn't really address the problem you are trying to solve as it would allow the HardwareLayer to call methods on the Sub_generic, Middle_generic or TopLevel and I don't believe it is what you are looking for.
Both of your requirements can be met using the Trait pattern (another reference). Here is an example proving both requirements are met. First, we define empty shells representing two Hardwares you might want to support.
class Hardware_A {};
class Hardware_B {};
Then let's consider a class that describes a general case which corresponds to Hardware_A.
template <typename Hardware>
class HardwareLayer
{
public:
typedef long int64_t;
static int64_t getCPUSerialNumber() {return 0;}
};
Now let's see a specialization for Hardware_B:
template <>
class HardwareLayer<Hardware_B>
{
public:
typedef int int64_t;
static int64_t getCPUSerialNumber() {return 1;}
};
Now, here is a usage example within the Sub_generic layer:
template <typename Hardware>
class Sub_generic
{
public:
typedef HardwareLayer<Hardware> HwLayer;
typedef typename HwLayer::int64_t int64_t;
int64_t doSomething() {return HwLayer::getCPUSerialNumber();}
};
And finally, a short main that executes both code paths and uses both data types:
#include <iostream>
int main(int argc, const char * argv[]) {
std::cout << "Hardware_A : " << Sub_generic<Hardware_A>().doSomething() << std::endl;
std::cout << "Hardware_B : " << Sub_generic<Hardware_B>().doSomething() << std::endl;
}
Now, if your HardwareLayer needs to maintain state, here is another way to implement the HardwareLayer and Sub_generic layer classes.
template <typename Hardware>
class HardwareLayer
{
public:
typedef long hwint64_t;
hwint64_t getCPUSerialNumber() {return mySerial;}
private:
hwint64_t mySerial = 0;
};
template <>
class HardwareLayer<Hardware_B>
{
public:
typedef int hwint64_t;
hwint64_t getCPUSerialNumber() {return mySerial;}
private:
hwint64_t mySerial = 1;
};
template <typename Hardware>
class Sub_generic : public HardwareLayer<Hardware>
{
public:
typedef HardwareLayer<Hardware> HwLayer;
typedef typename HwLayer::hwint64_t hwint64_t;
hwint64_t doSomething() {return HwLayer::getCPUSerialNumber();}
};
And here is a last variant where only the Sub_generic implementation changes:
template <typename Hardware>
class Sub_generic
{
public:
typedef HardwareLayer<Hardware> HwLayer;
typedef typename HwLayer::hwint64_t hwint64_t;
hwint64_t doSomething() {return hw.getCPUSerialNumber();}
private:
HwLayer hw;
};
On a similar train of thought to F, you could just have a directory layout like this:
Hardware/
common/inc/hardware.h
hardware1/src/hardware.cpp
hardware2/src/hardware.cpp
Simplify the interface to only assume a single hardware exists:
// sub_generic.h
class Sub_generic {
Hardware _hardware;
};
And then only compile the folder that contains the .cpp files for the hardware for that platform.
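For instance, the shared header might declare a single Hardware class (a hypothetical sketch; the member functions are made up), with each hardwareN/src/hardware.cpp supplying its own definitions:
// Hardware/common/inc/hardware.h
#ifndef HARDWARE_H
#define HARDWARE_H
class Hardware {
public:
    void init();                  // defined per platform in hardware1/src/hardware.cpp,
    long readSensor(int channel); // hardware2/src/hardware.cpp, ...
};
#endif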
The benefits to this approach are:
It's simple to understand what's happening and to add a hardware3
hardware.h still serves as your API
It takes away the abstraction from the compiler (for your speed concerns)
Compiler 1 doesn't need to compile hardware2.cpp or hardware3.cpp which may contain things Compiler 1 can't do (like inline assembly, or some other specific Compiler 2 thing)
hardware3 might be much more complicated for some reason you haven't considered yet, so giving it a whole directory structure encapsulates it.
Since this is for a hard real-time embedded system, usually you would go for a C type of solution, not C++.
With modern compilers I'd say that the overhead of C++ is not that great, so it's not entirely a matter of performance, but embedded systems tend to prefer C instead of C++.
What you are trying to build would resemble a classic device drivers library (like the one for ftdi chips).
The approach there would be (since it's written in C) something similar to your F, but with no compile-time options - you would specialize the code, at runtime, based on something like PID, VID, SN, etc...
Now if you want to use C++ for this, templates should probably be your last option (code readability usually ranks higher than any advantage templates bring to the table). So you would probably go for something similar to A: a basic class inheritance scheme, but no particularly fancy design pattern is required.
Hope this helps...
I am going to assume that these classes only need to be created a single time, and that their instances persist throughout the entire program run time.
In this case I would recommend using the Object Factory pattern, since the factory only gets run once to create the classes. From that point on, the specialized classes are all of known types.
I have created a physics system that handles any collision object to any collision object like so:
namespace Collision
{
template <typename T, typename U>
inline void Check(T& t, U& u)
{
if(u.CheckCollision(t.GetCollider()))
{
u.HitBy(t);
t.Hit(u);
}
}
}
and there are several other helper objects to make it easy to use, but the gist is that there are dynamic objects that need to be tested against static objects and other dynamic objects, but static objects don't need to be checked.
What I would like is something like this:
void func()
{
PhysicsWorld world;
shared_ptr<CSphere> ballPhysics(new CSphere(0,0,ballSprite->Width()));
BallCommand ballBehavior;
CBounds bounds(0, 0, 640, 480);
CBox obstacle(200, 150, 10, 10);
Collision::Collidable<CBounds> boundC(bounds);
Collision::Collidable<std::shared_ptr<CSphere>, BallCommand&> ballC(ballPhysics, ballBehavior);
Collision::Collidable<CBox> obstC(obstacle);
world.addStatic(boundC);
world.addDynamic(ballC);
world.addStatic(obstC);
...
...
world.Update();
...
...
}
I'd love to deduce the containers through the add functions so using the system automatically updates the type lists. I think I get how to generate a typelist with a template function, but not how to then get it where I need it, or at what point in compilation it is complete.
If not that, then some system using two typelists that internally writes the update function to iterate through all the lists, pairing them up against each other.
I've read some of the boost MPL book and read Andrei's book several times. But I seem to get caught up in the how-it-works details and don't really translate that into how to use it. I wish they had one more section of real-world examples in the MPL book.
I've been able to get all of the pieces of a game engine to interact with rendering, physics, collisions (I separate detection from reaction), input, network, sound, etc. All in generic ways. Now I just need to hold all the things in a generic way. After all that generic work, it would be silly to require inheritance just so I can hold something in a container and I don't want to hand code every collection possibility as that is one of the great benefits of generic programming.
I saw Jalf had indicated that s/he used MPL to do something similar, but did not go into details enough for me to figure it out. If anyone knows a practical use example or where I can get more info on using the MPL I'd be grateful.
Thanks again!
Update
boost MPL and boost Fusion both seem to do what I want, but there appears to be very little in the way of good real-life examples for either library. The documentation for MPL is little more than "this template does this" and good luck understanding the implications of that. Fusion is a bit better, with "Here's an example but it's just the tip of the iceberg!"
A typical boost MPL example is has_xxx. They use XXX and xxx in the example, making it difficult to see the difference, where XXX (the required text) and Test or CheckType or any more distinguishable user type could have been used in place of xxx. Plus there is no mention that none of this is in a namespace. Now I know why Scott Meyers compared this to the shower scene in Psycho.
It's a shame really because what little I have gotten to compile and understand does really useful things, but is so hard to figure out I would never spend this much effort if I was on a shipping product.
If anyone knows real world examples or better references, explanations, or tutorial I would be grateful.
Update
Here's more code:
template <typename T, typename V = VictimEffect, typename M = MenaceEffect>
class Collidable
{
T m_Collider;
V m_HitBy;
M m_Hit;
public:
Collidable(T collide, V victim, M menace) : m_Collider(collide), m_HitBy(victim), m_Hit(menace) {;}
Collidable(T collide) : m_Collider(collide) {;}
Collidable(T collide, V victim) : m_Collider(collide), m_HitBy(victim) {;}
T& GetCollider()
{
return m_Collider;
}
template <typename U>
void HitBy(U& menace)
{
m_HitBy.HitBy(menace.GetCollider());
}
template <typename U>
void Hit(U& victim)
{
m_Hit.Hit(victim.GetCollider());
}
template <typename U>
bool CheckCollision(U& menace)
{
return m_Collider.CheckCollision(menace);
}
};
Then to use it I do this
Collidable<Boundary, BallCommand> boundC(boundary, ballBehavior);
Collidable<CollisionBox> ballC(circle);
Then all I need is to call collide with all my active collidable objects against all my active and passive objects.
I'm not using std::function because the addition of function names makes the code clearer to me. But maybe that's just legacy thinking.
If I understand correctly your problem is:
class manager {
public:
template<typename T>
void add(T t);
private:
/* ??? */ data;
/* other members? */
};
manager m;
some_type1 s1;
some_type2 s2;
m.add(s1);
m.add(s2);
/* m should hold its copies of s1 and s2 */
where some_type1 and some_type2 are unrelated and you're unwilling to redesign them to use dynamic polymorphism.
I don't think either MPL or Fusion will do what you want with this form. If your problem is what container to use as a member of PhysicsWorld, then no amount of compile-time computations will help: the member type is determined at instantiation time, i.e. the line manager m;.
You could rewrite the manager in a somewhat meta-programing fashion to use it this way:
typedef manager<> m0_type;
typedef typename result_of::add<m0_type, some_type1>::type m1_type;
typedef typename result_of::add<m1_type, some_type2>::type final_type;
/* compile-time computations are over: time to instantiate */
final_type m;
/* final_type::data could be a tuple<some_type1, some_type2> for instance */
m.add(s1); m.add(s2);
This is indeed the sort of thing MPL+Fusion can help with. However this still remains quite anchored in the compile-time world: can you imagine writing a template<typename Iter> void insert(Iter first, Iter last) just so you can copy the contents of a container into a manager?
Allow me to assume that your requirements are such that in fact the manager has to be used in a much more runtimey fashion, like in my original formulation of your question. (I don't think that's quite a stretch of the imagination for a PhysicsWorld.) There is an alternative, which I think is more appropriate, much less verbose and more maintainable: type-erasure. (The name of the technique may be a bit unfortunate and can be misleading the first time.)
A good example of type-erasure is std::function:
std::function<void()> func;
func = &some_func; /* here it just looks as if std::function internally stores a void(*)() */
func = some_type(); /* but here we're storing a some_type! */
Type-erasure is a technique to bridge compile-time with runtime: in both assignments above, the arguments are unrelated types (one of which is non-class so not even remotely runtime polymorphic), but std::function handles both, provided they fulfill the contract that they can be used as f() (where f is an instance of the respective type) and that the expression has type (convertible to) void. The contract here is the compile-time aspect of type-erasure.
I'm not going to demonstrate how to implement type-erasure because there is a great Boostcon 2010 presentation on the subject. (You can watch the presentation and/or get the slides through the link). Or I (or someone else) can do it in the comments.
As a final note, implementations of type-erasure (typically) use dynamic polymorphism. I mention that because I noticed you considered the use of typelists as a runtime object stored as a manager member. This smells like poor man's reflection, and really, poor man's dynamic polymorphism. So don't do that please. If you meant typelists as in the result of an MPL computation, then disregard the note.
This is not complete and I did not get everything I wanted, but it's good enough for now. I'm posting the whole solution in case it helps others.
#include <boost/mpl/vector.hpp>
#include <boost/mpl/fold.hpp>
#include <boost/mpl/for_each.hpp>
#include <boost/mpl/inherit.hpp>
#include <boost/mpl/inherit_linearly.hpp>
#include <iostream>
#include <vector>
using namespace boost::mpl::placeholders;
typedef boost::mpl::vector<short, long, char, int> member_types;
template <typename T>
struct wrap
{
std::vector<T> value;
};
typedef boost::mpl::inherit_linearly<member_types, boost::mpl::inherit<wrap<_2>, _1> >::type Generate;
class print
{
Generate generated;
public:
template <typename T>
void operator()(T)
{
std::cout << *static_cast<wrap<T>&>(generated).value.begin() << std::endl;
}
template <typename T>
void Add(T const& t)
{
static_cast<wrap<T>&>(generated).value.push_back(t);
}
};
int main()
{
print p;
short s = 5;
p.Add(s);
long l = 555;
p.Add(l);
char c = 'c';
p.Add(c);
int i = 55;
p.Add(i);
boost::mpl::for_each<member_types>(p);
}
This isn't the final object I need, but now I have all the pieces to make what I want.
Update
And finally I get this.
template <typename TL>
class print
{
template <typename T>
struct wrap
{
std::vector<T> value;
};
typedef typename boost::mpl::inherit_linearly<TL, boost::mpl::inherit<wrap<_2>, _1> >::type Generate;
Generate generated;
public:
void Print()
{
boost::mpl::for_each<TL>(*this);
}
template <typename T>
void operator()(T)
{
std::cout << *static_cast<wrap<T>&>(generated).value.begin() << std::endl;
}
template <typename T>
void Add(T const& t)
{
static_cast<wrap<T>&>(generated).value.push_back(t);
}
};
Here TL is a boost::mpl container of what types should be held.
I think that provides a good starting point for expanding, but covers much of the metaprogramming parts.
I hope this helps others.
In our library we have a number of "plugins", which are implemented in their own cpp files. Each plugin defines a template function, and should instantiate this function over a whole bunch of types. The number of types can be quite large, 30-100 of them, and can change depending on some compile-time options. Each instance really has to be compiled and optimized individually; the performance improves by 10-100 times. The question is what is the best way to instantiate all of these functions.
Each plugin is written by a scientist who does not really know C++, so the code inside each plugin must be hidden inside macros or some simple construct. I have a half-baked solution based on a "database" of instances:
template<int plugin_id, class T>
struct S
{
typedef T (*ftype)(T);
static ftype fp;
};
// By default we don't have any instances
template<int plugin_id, class T> typename S<plugin_id,T>::ftype S<plugin_id,T>::fp = 0;
Now a user that wants to use a plugin can check the value of
S<SOME_PLUGIN,double>::fp
to see if there is a version of this plugin for the double type. The template instantiation of fp will generate a weak reference, so the linker will use the "real" instance if we define it in a plugin implementation file. Inside the implementation of SOME_PLUGIN we will have an instantiation
template<> S<SOME_PLUGIN,double>::ftype S<SOME_PLUGIN,double>::fp =
some_plugin_implementation;
This seems to work. The question is whether there is some way to automatically repeat this last statement for all types of interest. The types can be stored in a template class or generated by a template loop. I would prefer something that can be hidden by a macro. Of course this can be solved by an external code generator, but it's hard to do that portably and it interferes with the build systems of the people that use the library. Putting all the plugins in header files solves the problem, but makes the compiler explode (needing many gigabytes of memory and a very long compilation time).
I've used http://www.boost.org/doc/libs/1_44_0/libs/preprocessor/doc/index.html for such magic, in particular SEQ_FOR_EACH.
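For instance, something along these lines could stamp out the specializations inside a plugin's implementation file (a rough sketch assuming the S template and SOME_PLUGIN constant from the question, and assuming some_plugin_implementation is itself a function template; adjust to your actual naming):
#include <boost/preprocessor/seq/for_each.hpp>

// The types of interest, as a Boost.Preprocessor sequence:
#define SOME_PLUGIN_TYPES (double)(float)(int)

// One explicit specialization of fp per type in the sequence:
#define SOME_PLUGIN_INSTANCE(r, plugin, T) \
template<> S<plugin, T>::ftype S<plugin, T>::fp = &some_plugin_implementation<T>;

BOOST_PP_SEQ_FOR_EACH(SOME_PLUGIN_INSTANCE, SOME_PLUGIN, SOME_PLUGIN_TYPES)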
You could use a type list from Boost.MPL and then create a class template that recursively eats that list and instantiates every type. This would however make them all nested structs of that class template.
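A rough sketch of that recursion (assuming the S template from the question; here the instances end up as members rather than nested structs, but the list-eating mechanism is the same):
#include <boost/mpl/vector.hpp>
#include <boost/mpl/empty.hpp>
#include <boost/mpl/front.hpp>
#include <boost/mpl/pop_front.hpp>

template <int plugin_id, typename Seq, bool Done = boost::mpl::empty<Seq>::value>
struct instantiate_all
{
    typedef typename boost::mpl::front<Seq>::type head;
    S<plugin_id, head> instance; // forces S<plugin_id, head> to be instantiated
    instantiate_all<plugin_id, typename boost::mpl::pop_front<Seq>::type> rest;
};

template <int plugin_id, typename Seq>
struct instantiate_all<plugin_id, Seq, true> {}; // empty list: stop

// e.g. instantiate_all<SOME_PLUGIN, boost::mpl::vector<double, float, int> > all_instances;
Note that this only instantiates the class template for each type; whether that is enough to pull in the right fp definitions for your linker trick is a separate question.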
Hmm, I don't think I understand your problem correctly, so apologies if this answer is way off the mark, but could you not have a static member function of S that holds a static instance of ftype and returns a reference to it? This way you don't need to explicitly have an instance defined in your implementation files... i.e.
template<int plugin_id, class T>
struct S
{
typedef T (*ftype)(T);
static ftype& instance()
{
static ftype _fp = T::create();
return _fp;
}
};
and instead of accessing S<SOME_PLUGIN,double>::fp, you'd do S<SOME_PLUGIN,double>::instance(). To instantiate, at some point you have to call S<>::instance(). Do you need this to happen automagically as well?
EDIT: I just noticed that you have a copy constructor for ftype, so I've changed the above code. Now you have to define a factory method in T called create() to really create the instance.
EDIT: Okay, I can't think of a clean way of doing this automatically, i.e. I don't believe there is a way to (at compile time) build a list of types and then instantiate them. However, you could do it using a mix... Hopefully the example below will give you some ideas...
#include <iostream>
#include <typeinfo>
#include <boost/fusion/include/vector.hpp>
#include <boost/fusion/algorithm.hpp>
using namespace std;
// This simply calls the static instantiate function
struct instantiate
{
template <typename T>
void operator()(T const& x) const
{
T::instance();
}
};
// Shared header, presumably all plugin developers will use this header?
template<int plugin_id, class T>
struct S
{
typedef T (*ftype)(T);
static ftype& instance()
{
cout << "S: " << typeid(S<plugin_id, T>).name() << endl;
static ftype _fp; // = T::create();
return _fp;
}
};
// This is an additional struct, each plugin developer will have to implement
// one of these...
template <int plugin_id>
struct S_Types
{
// All they have to do is add the types that they will support to this vector
static void instance()
{
boost::fusion::vector<
S<plugin_id, double>,
S<plugin_id, int>,
S<plugin_id, char>
> supported_types;
boost::fusion::for_each(supported_types, instantiate());
}
};
// This is a global register, so once a plugin has been developed,
// add it to this list.
struct S_Register
{
S_Register()
{
// Add each plugin here, you'll only have to do this when a new plugin
// is created, unfortunately you have to do it manually, can't
// think of a way of adding a type at compile time...
boost::fusion::vector<
S_Types<0>,
S_Types<1>,
S_Types<2>
> plugins;
boost::fusion::for_each(plugins, instantiate());
}
};
int main(void)
{
// single instance of the register; defining this here effectively
// triggers calls to instance() for all the plugins and supported types...
S_Register reg;
return 0;
}
Basically this uses a fusion vector to define all the possible instances that could exist. It will take a little bit of work from you and the developers, as I've outlined in the code... hopefully it'll give you an idea...
I'm trying to determine if the following scenario is appropriate for a template, and if so how it would be done.
I have a base class, event_base. It is inherited by specific types of events.
class event_base_c {
//... members common to all events ...
// serialize the class for transmision
virtual std::string serialize(void);
};
class event_motion_c : public event_base_c {
//... members for a motion event ...
// serialize the class for transmission
virtual std::string serialize(void);
};
class event_alarm_c : public event_base_c {
//... members for an alarm event ...
// serialize the class for transmission
virtual std::string serialize(void);
};
Events get serialized and sent from various processes to an event logger, which recreates the event object from the serialized data.
My question is with regards to the processes that are sending the events. We cannot include a 'send()' method in the event class. I have been told that I need to create an event_sender object that knows how to send the serialized event. So the code from one process might be:
if (motion_detected on sensor1) {
event_motion_c Event(sensor1, x, y, z);
event_sender EventSender;
EventSender.report(Event.serialize());
}
While some other process might report an alarm using similar code such as:
if (alarm) {
event_alarm_c Event(alarm_id, alarm_type);
event_sender EventSender;
EventSender.report(Event.serialize());
}
This feels like a template candidate to me, but what stops/confuses me is that the constructors for the different event classes have different numbers of parameters. I do not know whether templates support something like that, and if they do, I don't know the syntax for doing so.
I could easily define this as a macro such as:
#define SEND_EVENT(evt_class, args...) \
{ \
evt_class Event(args); \
event_sender EventSender; \
\
EventSender.report(Event.serialize()); \
}
Then the coder would simply use:
SEND_EVENT(event_motion_c, sensor1, x, y, z);
and
SEND_EVENT(event_alarm_c, alarm_type);
But I am hesitant to make a macro for this.
Do templates support variable numbers of parameters? And if so, how is that done?
C++ does not support variadic templates, but C++0x will, and some compilers already have support for this (including G++ with the --std=c++0x flag). Wikipedia has examples of how to use this feature.
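With variadic templates, the SEND_EVENT macro from the question can become a plain function template, roughly like this (a sketch assuming the event classes and event_sender from the question):
#include <utility> // std::forward

template <typename EventT, typename... Args>
void send_event(Args&&... args)
{
    EventT event(std::forward<Args>(args)...); // forward the constructor arguments
    event_sender sender;
    sender.report(event.serialize());
}

// usage:
// send_event<event_motion_c>(sensor1, x, y, z);
// send_event<event_alarm_c>(alarm_id, alarm_type);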
No.
In C++ (before C++0x), variadic templates are not supported.
But you can often work around that by giving the template default parameters:
template<class I, class J = void>
struct S;
template<class, class> struct S {}; // two parameter
template<class I> struct S<I> {}; // "single" parameter, second parameter is void
S<int, int> a; // two-parameter instance
S<int> b;      // "single"-parameter instance (the second parameter is void)
The default does not have to be void; it can be any type.
Sometimes the style may become too messy (if you have lots of defaults); then you can use Boost.Preprocessor, namely:
http://www.boost.org/doc/libs/1_43_0/libs/preprocessor/doc/ref/enum_params_with_a_default.html
http://www.boost.org/doc/libs/1_43_0/libs/preprocessor/doc/ref/enum_params.html
http://www.boost.org/doc/libs/1_43_0/libs/preprocessor/doc/ref/enum_binary_params.html
Variadic templates are an in-progress C++0x feature. You've been able to at least start using them as of GCC 4.3. I don't pay much attention to Microsoft.
At my workplace, we tend to use iostream, string, vector, map, and the odd algorithm or two. We haven't actually found many situations where template techniques were a best solution to a problem.
What I am looking for here are ideas, and optionally sample code that shows how you used a template technique to create a new solution to a problem that you encountered in real life.
As a bribe, expect an up vote for your answer.
General info on templates:
Templates are useful any time you need to use the same code operating on different data types, where the types are known at compile time, and also when you have any kind of container object.
A very common usage is for just about every type of data structure. For example: Singly linked lists, doubly linked lists, trees, tries, hashtables, ...
Another very common usage is for sorting algorithms.
One of the main advantages of using templates is that you can remove code duplication. Code duplication is one of the biggest things you should avoid when programming.
You could implement a function Max as either a macro or a template, but the template implementation would be type safe and therefore better.
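For instance (max_of is just an illustrative name, to avoid confusion with std::max):
// The macro version is not type checked and evaluates one argument twice:
#define MAX_MACRO(a, b) ((a) > (b) ? (a) : (b))

// The template version is type safe and evaluates each argument exactly once:
template <typename T>
const T& max_of(const T& a, const T& b)
{
    return (a < b) ? b : a;
}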
And now onto the cool stuff:
Also see template metaprogramming, which is a way of pre-evaluating code at compile time rather than at run time. Template metaprogramming has only immutable "variables", which therefore cannot change; because of this, template metaprogramming can be seen as a type of functional programming.
Check out this example of template metaprogramming from Wikipedia. It shows how templates can be used to execute code at compile time. Therefore at runtime you have a pre-calculated constant.
template <int N>
struct Factorial
{
enum { value = N * Factorial<N - 1>::value };
};
template <>
struct Factorial<0>
{
enum { value = 1 };
};
// Factorial<4>::value == 24
// Factorial<0>::value == 1
void foo()
{
int x = Factorial<4>::value; // == 24
int y = Factorial<0>::value; // == 1
}
I've used a lot of template code, mostly in Boost and the STL, but I've seldom had a need to write any.
One of the exceptions, a few years ago, was in a program that manipulated Windows PE-format EXE files. The company wanted to add 64-bit support, but the ExeFile class that I'd written to handle the files only worked with 32-bit ones. The code required to manipulate the 64-bit version was essentially identical, but it needed to use a different address type (64-bit instead of 32-bit), which caused two other data structures to be different as well.
Based on the STL's use of a single template to support both std::string and std::wstring, I decided to try making ExeFile a template, with the differing data structures and the address type as parameters. There were two places where I still had to use #ifdef WIN64 lines (slightly different processing requirements), but it wasn't really difficult to do. We've got full 32- and 64-bit support in that program now, and using the template means that every modification we've done since automatically applies to both versions.
One place that I do use templates to create my own code is to implement policy classes as described by Andrei Alexandrescu in Modern C++ Design. At present I'm working on a project that includes a set of classes that interact with BEA\h\h\h Oracle's Tuxedo TP monitor.
One facility that Tuxedo provides is transactional persistent queues, so I have a class TpQueue that interacts with the queue:
class TpQueue {
public:
void enqueue(...)
void dequeue(...)
...
}
However, as the queue is transactional, I need to decide what transaction behaviour I want; this could be done separately outside of the TpQueue class, but I think it's more explicit and less error prone if each TpQueue instance has its own policy on transactions. So I have a set of TransactionPolicy classes such as:
class OwnTransaction {
public:
begin(...) // Suspend any open transaction and start a new one
commit(..) // Commit my transaction and resume any suspended one
abort(...)
}
class SharedTransaction {
public:
begin(...) // Join the currently active transaction or start a new one if there isn't one
...
}
And the TpQueue class gets re-written as
template <typename TXNPOLICY = SharedTransaction>
class TpQueue : public TXNPOLICY {
...
}
So inside TpQueue I can call begin(), abort(), commit() as needed but can change the behaviour based on the way I declare the instance:
TpQueue<SharedTransaction> queue1 ;
TpQueue<OwnTransaction> queue2 ;
I used templates (with the help of Boost.Fusion) to achieve type-safe integers for a hypergraph library that I was developing. I have a (hyper)edge ID and a vertex ID both of which are integers. With templates, vertex and hyperedge IDs became different types and using one when the other was expected generated a compile-time error. Saved me a lot of headache that I'd otherwise have with run-time debugging.
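The core of that idea can be sketched without the library specifics (the names below are illustrative; the real version also leaned on Boost.Fusion for extra machinery):
// A tagged integer: VertexId and EdgeId share one implementation but are distinct types.
template <typename Tag>
class TypedId
{
public:
    explicit TypedId(int value) : value_(value) {}
    int value() const { return value_; }
private:
    int value_;
};

struct VertexTag {};
struct EdgeTag {};
typedef TypedId<VertexTag> VertexId;
typedef TypedId<EdgeTag> EdgeId;

// void connect(VertexId v, EdgeId e);
// connect(EdgeId(3), VertexId(7)); // compile-time error instead of a run-time bug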
Here's one example from a real project. I have getter functions like this:
bool getValue(wxString key, wxString& value);
bool getValue(wxString key, int& value);
bool getValue(wxString key, double& value);
bool getValue(wxString key, bool& value);
bool getValue(wxString key, StorageGranularity& value);
bool getValue(wxString key, std::vector<wxString>& value);
And then a variant with a 'default' value. It returns the value for key if it exists, or the default value if it doesn't. The template saved me from having to create 6 new functions myself.
template <typename T>
T get(wxString key, const T& defaultValue)
{
T temp;
if (getValue(key, temp))
return temp;
else
return defaultValue;
}
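Usage then looks like this (the key names are made up for illustration):
int retries = get(wxT("retries"), 3);   // falls back to 3 if the key is missing
double scale = get(wxT("scale"), 1.0);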
Templates I regularly consume are a multitude of container classes, boost smart pointers, scopeguards, and a few STL algorithms.
Scenarios in which I have written templates:
custom containers
memory management, implementing type safety and CTor/DTor invocation on top of void * allocators
common implementation for overloads with different types, e.g.
bool ContainsNan(float * , int)
bool ContainsNan(double *, int)
which both just call a (local, hidden) helper function
template <typename T>
bool ContainsNanT(T * values, int len) { /* ... actual code goes here ... */ }
Specific algorithms that are independent of the type, as long as the type has certain properties, e.g. binary serialization.
template <typename T>
void BinStream::Serialize(T & value) { ... }
// to make a type serializable, you need to implement
void SerializeElement(BinStream & stream, Foo & element);
void DeserializeElement(BinStream & stream, Foo & element);
Unlike virtual functions, templates allow more optimizations to take place.
Generally, templates allow you to implement one concept or algorithm for a multitude of types, and have the differences resolved already at compile time.
We use COM and accept a pointer to an object that can implement another interface either directly or via IServiceProvider (http://msdn.microsoft.com/en-us/library/cc678965(VS.85).aspx); this prompted me to create this helper cast-like function.
// Get interface either via QueryInterface of via QueryService
template <class IFace>
CComPtr<IFace> GetIFace(IUnknown* unk)
{
CComQIPtr<IFace> ret = unk; // Try QueryInterface
if (ret == NULL) { // Fallback to QueryService
if(CComQIPtr<IServiceProvider> ser = unk)
ser->QueryService(__uuidof(IFace), __uuidof(IFace), (void**)&ret);
}
return ret;
}
I use templates to specify function object types. I often write code that takes a function object as an argument -- a function to integrate, a function to optimize, etc. -- and I find templates more convenient than inheritance. So my code receiving a function object -- such as an integrator or optimizer -- has a template parameter to specify the kind of function object it operates on.
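A trimmed-down example of the shape such code takes (the midpoint-rule integrator here is purely illustrative):
// Integrate any callable f over [a, b] using n midpoint samples.
template <typename F>
double integrate(F f, double a, double b, int n)
{
    double h = (b - a) / n;
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += f(a + (i + 0.5) * h);
    return sum * h;
}

// double square(double x) { return x * x; }
// double y = integrate(square, 0.0, 1.0, 100); // works with functions and functors alike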
The obvious reasons (like preventing code-duplication by operating on different data types) aside, there is this really cool pattern that's called policy based design. I have asked a question about policies vs strategies.
Now, what's so nifty about this feature. Consider you are writing an interface for others to use. You know that your interface will be used, because it is a module in its own domain. But you don't know yet how people are going to use it. Policy-based design strengthens your code for future reuse; it makes you independent of data types a particular implementation relies on. The code is just "slurped in". :-)
Traits are per se a wonderful idea. They can attach particular behaviour, data and type data to a model. Traits allow complete parameterization of all of these three fields. And the best of it: it's a very good way to make code reusable.
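A tiny example of what "attaching type data and behaviour" can look like (the names are made up; this is essentially the classic accumulation-traits idea):
// Primary template: by default, accumulate T values into a T.
template <typename T>
struct accumulation_traits
{
    typedef T accumulator_type;                                   // attached type data
    static accumulator_type zero() { return accumulator_type(); } // attached behaviour
};

// Specialization: char values are better summed into an int.
template <>
struct accumulation_traits<char>
{
    typedef int accumulator_type;
    static accumulator_type zero() { return 0; }
};

// A generic algorithm parameterized entirely through the traits.
template <typename T>
typename accumulation_traits<T>::accumulator_type sum(const T* begin, const T* end)
{
    typename accumulation_traits<T>::accumulator_type total = accumulation_traits<T>::zero();
    for (; begin != end; ++begin)
        total += *begin;
    return total;
}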
I once saw the following code:
void doSomethingGeneric1(SomeClass * c, SomeClass & d)
{
// three lines of code
callFunctionGeneric1(c) ;
// three lines of code
}
repeated ten times:
void doSomethingGeneric2(SomeClass * c, SomeClass & d)
void doSomethingGeneric3(SomeClass * c, SomeClass & d)
void doSomethingGeneric4(SomeClass * c, SomeClass & d)
// Etc
Each function had the same 6 lines of code copy/pasted, each time calling another function callFunctionGenericX with the matching number suffix.
There was no way to refactor the whole thing altogether, so I kept the refactoring local.
I changed the code this way (from memory):
template<typename T>
void doSomethingGenericAnything(SomeClass * c, SomeClass & d, T t)
{
// three lines of code
t(c) ;
// three lines of code
}
And modified the existing code with:
void doSomethingGeneric1(SomeClass * c, SomeClass & d)
{
doSomethingGenericAnything(c, d, callFunctionGeneric1) ;
}
void doSomethingGeneric2(SomeClass * c, SomeClass & d)
{
doSomethingGenericAnything(c, d, callFunctionGeneric2) ;
}
Etc.
This is somewhat hijacking the template thing, but in the end, I guess it's better than playing with typedef'd function pointers or using macros.
I personally have used the Curiously Recurring Template Pattern as a means of enforcing some form of top-down design and bottom-up implementation. An example would be a specification for a generic handler where certain requirements on both form and interface are enforced on derived types at compile time. It looks something like this:
template <class Derived>
struct handler_base : Derived {
void pre_call() {
// do any universal pre_call handling here
static_cast<Derived *>(this)->pre_call();
};
void post_call(typename Derived::result_type & result) {
static_cast<Derived *>(this)->post_call(result);
// do any universal post_call handling here
};
typename Derived::result_type
operator() (typename Derived::arg_pack const & args) {
pre_call();
typename Derived::result_type temp = static_cast<Derived *>(this)->eval(args);
post_call(temp);
return temp;
};
};
Something like this can be used then to make sure your handlers derive from this template and enforce top-down design and then allow for bottom-up customization:
struct my_handler : handler_base<my_handler> {
typedef int result_type; // required to compile
typedef tuple<int, int> arg_pack; // required to compile
void pre_call(); // required to compile
void post_call(int &); // required to compile
int eval(arg_pack const &); // required to compile
};
This then allows you to have generic polymorphic functions that deal with only handler_base<> derived types:
template <class T, class Arg0, class Arg1>
typename T::result_type
invoke(handler_base<T> & handler, Arg0 const & arg0, Arg1 const & arg1) {
return handler(make_tuple(arg0, arg1));
};
It's already been mentioned that you can use templates as policy classes to do something. I use this a lot.
I also use them, with the help of property maps (see boost site for more information on this), in order to access data in a generic way. This gives the opportunity to change the way you store data, without ever having to change the way you retrieve it.
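A small sketch of the property-map idea, using boost::associative_property_map (the header location may vary between Boost versions):
#include <map>
#include <string>
#include <boost/property_map/property_map.hpp>

// The algorithm only talks to get()/put(), never to the underlying storage.
template <typename PropertyMap, typename Key>
void increment(PropertyMap pm, const Key& key)
{
    put(pm, key, get(pm, key) + 1);
}

int main()
{
    std::map<std::string, int> counts;
    boost::associative_property_map< std::map<std::string, int> > pm(counts);
    increment(pm, std::string("hits")); // swap the storage later without touching increment()
    return 0;
}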