For timeouts in event reactors and proactors I use a priority queue that also allows O(log(n)) random-access removal of events (for when the event is signalled/completes rather than timing out). I store std::pair<std::chrono::steady_clock::time_point, Timed *>, where Timed is a class that adds an index (pointing into the queue) to allow efficient removal when calling TimedQ::Remove(Timed *p). When I want an event type associated with a timeout, I derive it from Timed. The queue's Top() and Pop() return a pair.
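To make the setup concrete, here is a minimal sketch of the shape of the queue described above (illustrative only; Push and heapIndex are made-up names, and the real implementation differs):

#include <chrono>
#include <cstddef>
#include <utility>
#include <vector>

struct Timed
{
    std::size_t heapIndex;   // maintained by the queue so Remove() can locate this entry
};

class TimedQ
{
public:
    typedef std::pair<std::chrono::steady_clock::time_point, Timed *> Entry;

    void Push(Entry e);       // O(log n)
    Entry Top();              // entry with the earliest deadline
    Entry Pop();              // removes and returns the entry with the earliest deadline
    void Remove(Timed *p);    // O(log n) random-access removal, uses p->heapIndex
private:
    std::vector<Entry> heap_;
};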
I used to have a bunch of code using the queue such as
std::tie(timePt0, eventPtr0) = timeoutQ.Pop();
std::tie(timePt1, eventPtr1) = std::move(hold);
which worked fine before I started storing a base-class Timed * in the queue instead of a specific event type (i.e. Timed was originally a template type parameter), because I eventually needed to support multiple different event types associated with the timeouts. However, with eventPtr* being a derived type (which I can static_cast to from the Timed * returned by the queue), code like the above no longer works.
I'm wondering what the best way to do this is. Right now it has ended up very verbose, and I'm also concerned about inefficiencies such as temporaries being created:
auto v(timeoutQ.Pop());
timePt0 = v.first;
eventPtr0 = static_cast<TimedEvent *>(v.second);
std::tie(timePt1, eventPtr1) = std::move(std::make_pair(hold.first, static_cast<TimedEvent *>(hold.second))); // I didn't literally do it like this, but I'm just trying to illustrate my struggle
The only other idea I had was to template the functions that return a pair by the derived event class, but this seems poor from a code size perspective, as multiple instances of those functions will be created even though the machine code should be identical since in all cases it's a pointer that's stored.
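For illustration, that templated accessor (as a member of the queue) would be something like the sketch below; PopAs is a made-up name:

template <class D>
std::pair<std::chrono::steady_clock::time_point, D *> PopAs()
{
    // Pop() returns std::pair<time_point, Timed *>; D is expected to derive from Timed
    std::pair<std::chrono::steady_clock::time_point, Timed *> e = Pop();
    return std::make_pair(e.first, static_cast<D *>(e.second));
}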
Edit:
I also tried using this, which compiles, but I'm not sure it's correct or efficient:
template<class D>
std::pair<std::chrono::steady_clock::time_point, D *> &&Cnvrt(std::pair<std::chrono::steady_clock::time_point, Timed *> &&in)
{
return std::make_pair(in.first, static_cast<D *>(in.second));
}
the initial example then would become
std::tie(timePt0, eventPtr0) = Cnvrt<std::remove_pointer<decltype(eventPtr0)>::type>(timeoutQ.Pop());
std::tie(timePt1, eventPtr1) = Cnvrt<std::remove_pointer<decltype(eventPtr1)>::type>(hold);
The Cnvrt you've shown returns a dangling reference – classic UB.
Here's a corrected C++11-compliant version that also validates D at compile-time and removes the need for the manual std::remove_pointer<...>::type at the call site:
template<typename D>
constexpr
std::pair<std::chrono::steady_clock::time_point, D>
Cnvrt(std::pair<std::chrono::steady_clock::time_point, Timed*> const& in) noexcept
{
static_assert(std::is_pointer<D>{}, "D is not a pointer type");
using derived_type = typename std::remove_pointer<D>::type;
static_assert(std::is_base_of<Timed, derived_type>{}, "D does not derive from Timed");
using ptr_type = typename std::remove_cv<D>::type;
return {in.first, static_cast<ptr_type>(in.second)};
}
// ...
std::tie(timePt0, eventPtr0) = Cnvrt<decltype(eventPtr0)>(timeoutQ.Pop());
std::tie(timePt1, eventPtr1) = Cnvrt<decltype(eventPtr1)>(hold);
Here is an implementation that should work on VC++ 2012:
template<typename D>
std::pair<std::chrono::steady_clock::time_point, D>
Cnvrt(std::pair<std::chrono::steady_clock::time_point, Timed*> const& in) throw()
{
static_assert(std::is_pointer<D>::value, "D is not a pointer type");
typedef typename std::remove_pointer<D>::type derived_type;
static_assert(std::is_base_of<Timed, derived_type>::value, "D does not derive from Timed");
typedef typename std::remove_cv<D>::type ptr_type;
return std::make_pair(in.first, static_cast<ptr_type>(in.second));
}
There is no efficiency concern here whatsoever – even your worst-case scenario, if the compiler does no optimizations at all, is just a copy of one scalar and one pointer (VC++ 2012 may copy each twice, but again, only without optimizations enabled).
I have 2 issues in a template class I'm building. I've included example code below. The first question is whether I can coerce the auto type deduced for a templated class, i.e.:
auto p = myvar;
where myvar is a T<...>, could I force auto to deduce Q<...>? This is simplified; read on for a clearer explanation.
Edited for clarity: Let me explain what I'm doing. I'd also like to point out that this style of code is working perfectly well in a large-scale project. I am trying to add some features and functions, and also to smooth out some of the more awkward behaviors.
The code uses templates to perform work on n-dimensional arrays. The template has a top-level class and a storage class underneath; passing the storage class into the top-level class as a template parameter yields a top-level class that inherits from the storage class. So I start with NDimVar, and I have NDimStor. I end up with
NDimVar<NDimStor>
The class contains NO DATA except for the buffer of data:
template <size_t... dimensions>
class NDimStor {
    int buffer[Size<dimensions...>()];
};
This makes the address of the class == the address of the buffer. This is key to the whole implementation. Is this an incorrect assumption? (I can see this works on my system without any issues, but perhaps this isn't always the case.)
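To at least pin that assumption down, a compile-time check along these lines is possible (a sketch; it verifies the standard-layout property the trick depends on, and an offsetof(..., buffer) == 0 check could be added wherever buffer is accessible):

#include <type_traits>

// The "object address == buffer address" trick is only dependable when the
// class is standard-layout and buffer is the first (and only) non-static
// data member in the hierarchy.
static_assert(std::is_standard_layout< NDimVar< NDimStor<10, 10> > >::value,
              "NDimVar must stay standard-layout for the buffer-address assumption");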
When I create NDimVar<NDimStor<10,10>> I end up with a 10x10 array.
I have functions for getting pieces of the array, for example:
NDimVar<NDimStor<dimensions...>>::RemoveDim & get(int index);
This creates a new 1d array of 10 elements out of the 2d 10x10 array:
NDimVar<NDimStor<10>>
In order to return this as a reference, I use a reinterpret_cast at the location of the data I want. So in this example, get(3) would perform:
return reinterpret_cast<NDimVar<NDimStor<dimensions...>>::RemoveDim&>(buffer[index * DimensionSumBelow<0>()]);
DimensionSumBelow<0> returns the sum of elements at dimensions 1+, i.e. 10. So &buffer[30] is the address of the referenced 1d NDimVar.
All of this works very well.
The only issue I have is that I would like to add on overlays. For example, be able to return a reference to a new class:
NDimVar<NDimPermute<NDimStor<10,10>,1,0>>
that points to the same original location along with a permutation behavior (swapping dimensions). This also works well. But I would like for:
auto p = myvar.Permute<1,0>()
to create a new copy of myvar with permuted data. This would work if I said:
NDimVar<NDimStor<10,10>> p = myvar.Permute<1,0>().
I feel that there is some auto type deduction stuff I could do in order to coerce the auto type returned, but I'm not sure. I haven't been able to figure it out.
Thanks again,
Nachum
What I want is:
1. Create temporary overlay classes on my storage, e.g. A_top<A_storage> can return the type A_top<A_overlay<A_storage>> without creating a new object; it just returns a reference reinterpreted as that type, which changes the way the storage is accessed. The problem arises on a call with auto: I don't want the overlay type to be instantiated directly. Can I make the type deduced for auto be the original A_top?
#include <iostream>
using namespace std;

class A_storage {
public:
    float arr[10];
    A_storage () {
    }
    float & el (int index) {
        return arr[index];
    }
};

template <typename T> class A_overlay : T {
private:
    A_overlay () {
        cout << "A_overlay ()" << endl;
    }
    A_overlay (const A_overlay &) {
        cout << "A_overlay (&)" << endl;
    }
public:
    using T::arr;
    float & el (int index) {
        return arr[10 - index];
    }
};

template <typename T> class A_top;

template <typename T> class A_top : public T {
public:
    A_top () {
    }
    A_top<A_overlay<A_storage>> & get () {
        return reinterpret_cast<A_top<A_overlay<A_storage>>&>(*this);
    }
};

using A = A_top<A_storage>;

int main (void) {
    A a;
    auto c = a.get(); // illegal - can i auto type deduce to A_top<A_storage>?
    return 0;
}
If a function accepts (A_top<A_storage> &) as a parameter, how can I create a conversion function that can cast A_top<A_overlay<A_storage>>& to A_top<A_storage>& ?
Thanks,
Nachum
First, your design doesn't look right to me, and I'm not sure if the behaviour is actually well-defined or not. (Probably not.)
In any case, the problem is not with auto. The error is caused by the fact that the copy constructor of A_overlay is private, while you need it to copy A_top<A_overlay<A_storage>> returned by a.get() to auto c.
(Note that the auto in this case obviously gets deduced to A_top<A_overlay<A_storage>>; I assume you made a typo when you said that it's A_top<A_storage>.)
Also note that A_storage in A_top::get() should be replaced with T, even if it doesn't change anything in your snippet because you only have T == A_storage.
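If the immediate goal is only to make that line compile without a copy, one narrow workaround (not an endorsement of the overall design) is to bind a reference instead, since that never invokes the copy constructor:

A a;
auto & c = a.get();   // deduces A_top<A_overlay<A_storage>> &; no copy is made
c.el(3);              // element access now goes through the overlay's el()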
If a function accepts (A_top<A_storage> &) as a parameter, how can I create a conversion function that can cast A_top<A_overlay<A_storage>>& to A_top<A_storage>& ?
Ehm, isn't it just this:
return reinterpret_cast<A_top<A_storage>&>(obj);
reinterpret_cast should almost never be used. It removes any compiler validation that the types are related, and an unrelated cast is essentially undefined behavior, as it assumes that derived classes are always at offset 0...
It does not make any sense to write such code. It is not maintainable, and it is hard to understand what you are trying to achieve. It looks like you want to pretend that your A_top<A_storage> object is an A_top<A_overlay<A_storage>> object instead. If that is what you want, then declare the A alias as that type.
In your code, it looks like you want to invert the indexing so that the item at position 10 is returned when you ask for the item at position 0, and vice versa. Do you really think that is obvious from your obfuscated code? Never write such bad code.
Something like
class A_overlay {
public:
    float & el (int index) { return arr.el(10 - index); }
private:
    A_storage arr;
};
would make much more sense than your current code:
• No cast needed.
• Easy to understand.
• Well defined behavior.
• You might keep your job.
And obviously, you would update the following line as appropriate:
using A = A_top<A_storage>;
Also, if A_top has no useful purpose, then why not use A_overlay directly? And why are you using a template if A_storage is not a template? Do you really want to reuse such a mess elsewhere in your code base?
Obviously, your inheritance does not respect the IS-A relationship if you write such code, so it is clearly a bad design!
I have a lot of custom datatypes in one of my projects which all share a common base class.
My data (coming from a database) has a datatype which is distinguished by an enum of the base class. My architecture allows a specific datatype to be specialized with a derived class or it can be handled by the base class.
When I construct one of my specific datatypes I normally call the constructor directly:
Special_Type_X a = Special_Type_X("34.34:fdfh-78");
a.getFoo();
There is some template magic which also allows constructing it like this:
Type_Helper<Base_Type::special_type_x>::Type a = Base_Type::construct<Base_Type::special_type_x>("34.34:fdfh-78");
a.getFoo();
For some values of the type enum there might be no specialization so
Type_Helper<Base_Type::non_specialized_type_1>::Type == Base_Type
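(For context, the helper is just a compile-time mapping from enum value to type; a simplified sketch, not my actual implementation:)

// Default: the enum value is handled by the base class.
template <Base_Type::type_enum E>
struct Type_Helper
{
    typedef Base_Type Type;
};

// Specialization for an enum value that has its own derived class.
template <>
struct Type_Helper<Base_Type::special_type_x>
{
    typedef Special_Type_X Type;
};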
When I'm fetching data from the database the datatype isn't known at compile time so there's a third way to construct the datatypes (from a QVariant):
Base_Type a = Base_Type::construct(Base_Type::whatever, "12.23#34io{3,3}");
But of course I want the correct constructor to be called, so the implementation of that method used to look like:
switch(t) {
case Base_Type::special_type_x:
    return Base_Type::construct<Base_Type::special_type_x>(var);
case Base_Type::non_specialized_type_1:
    return Base_Type::construct<Base_Type::non_specialized_type_1>(var);
case Base_Type::whatever:
    return Base_Type::construct<Base_Type::whatever>(var);
//.....
}
This code is repetitive and since the base class can handle new types (added to the enum) as well, I came up with the following solution:
// Helper Template Method
template <Base_Type::type_enum bt_itr>
Base_Type construct_switch(const Base_Type::type_enum& bt, const QVariant& v)
{
if(bt_itr==bt)
return Base_Type::construct<bt_itr>(v);
return construct_switch<(Base_Type::type_enum)(bt_itr+1)>(bt,v);
}
// Specialization for the last available (dummy type): num_types
template <>
Base_Type construct_switch<Base_Type::num_types>(const Base_Type::type_enum& bt, const QVariant&)
{
qWarning() << "Type" << bt << "could not be constructed";
return Base_Type(); // Creates an invalid Custom Type
}
And my original switch statement is replaced with:
return construct_switch<(Base_Type::type_enum)0>(t,var);
This solution works as expected.
The compiled code is, however, different. While the original switch statement had O(1) complexity, the new approach results in O(n) complexity: the generated code recursively calls my helper method until it finds the correct entry.
Why can't the compiler optimize this properly? Are there any better ways to solve this?
Similar problem:
Replacing switch statements when interfacing between templated and non-templated code
I should mention that I would like to avoid C++11 and C++14 and stick to C++03.
This is what I call the magic switch problem -- how to take a (range of) run time values and turn it into a compile time constant.
Abstractly, you want to generate this switch statement:
switch(n) {
(case I from 0 to n-1: /* use I as a constant */)...
}
You can use parameter packs to generate code that is similar to this in C++.
I'll start with c++14-replacing boilerplate:
template<unsigned...> struct indexes {typedef indexes type;};
template<unsigned max, unsigned... is> struct make_indexes: make_indexes<max-1, max-1, is...> {};
template<unsigned... is> struct make_indexes<0, is...>:indexes<is...> {};
template<unsigned max> using make_indexes_t = typename make_indexes<max>::type;
Now we can create a compile-time sequence of unsigned integers from 0 to n-1 easily. make_indexes_t<50> expands to indexes<0,1,2,3, ... ,48, 49>. The c++14 version does so in O(1) steps, as most (all?) compilers implement std::make_index_sequence with an intrinsic. The above does it in linear (at compile time -- nothing is done at run time) recursive depth, and quadratic compile time memory. This sucks, and you can do better with work (logarithmic depth, linear memory), but do you have more than a few 100 types? If not, this is good enough.
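As a quick illustrative sanity check, the expansion can be verified at compile time:

#include <type_traits>

// make_indexes_t<4> should expand to indexes<0, 1, 2, 3>.
static_assert(std::is_same< make_indexes_t<4>, indexes<0, 1, 2, 3> >::value,
              "make_indexes_t does not expand as expected");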
Next, we build an array of callbacks. As I hate C legacy function pointer syntax, I'll throw in some pointless boilerplate to hide it:
template<typename T> using type = T; // pointless boilerplate that hides C style function syntax
template<unsigned... Is>
Base_Type construct_runtime_helper( indexes<Is...>, Base_Type::type_enum e, QVariant const& v ) {
    // array of pointers to functions: (note static, so created once)
    static type< Base_Type(const QVariant&) >* const constructor_array[] = {
        (&Base_Type::construct< Base_Type::type_enum(Is) >)...
    };
    // find the eth entry, and call it:
    return constructor_array[ unsigned(e) ](v);
}
Base_Type construct_runtime_helper( Base_Type::type_enum e, QVariant const& v ) {
return construct_runtime_helper( make_indexes_t< Base_Type::num_types >(), e, v );
}
and Bob's your Uncle¹. An O(1) array lookup (with an O(n) setup, which in theory could be done prior to your executable launching) for dispatch.
¹ "Bob's your Uncle" is a British Commonwealth saying meaning, roughly, "and everything is finished and working".
Are all the functions inline? I'd expect a reasonable compiler to optimize the if tree into a switch, but only if the ifs are in the same function. For portability, you might not want to rely on this.
You can get O(1) with an indirect function call by having construct_switch populate a std::vector<std::function<Base_Type(const QVariant&)>> with lambda functions that do the construction and then dispatch off that.
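A rough sketch of that suggestion (it assumes C++11, which std::function and lambdas already imply; add_ctor and construct_dispatch are made-up names):

#include <vector>
#include <functional>

typedef std::function<Base_Type(const QVariant&)> Ctor;

template <Base_Type::type_enum E>
void add_ctor(std::vector<Ctor>& table)
{
    table.push_back([](const QVariant& v) { return Base_Type::construct<E>(v); });
    add_ctor<static_cast<Base_Type::type_enum>(E + 1)>(table);
}

template <>
void add_ctor<Base_Type::num_types>(std::vector<Ctor>&) {}   // stop the recursion at the dummy entry

Base_Type construct_dispatch(Base_Type::type_enum t, const QVariant& v)
{
    static std::vector<Ctor> table;
    if (table.empty())
        add_ctor<static_cast<Base_Type::type_enum>(0)>(table);   // fill once, in enum order
    return table[static_cast<unsigned>(t)](v);                   // O(1) lookup
}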
I have an implementation of a queue, something like template <typename T> queue<T>, with a struct QueueItem { T data; }, and I have a separate library that times the passage of data across different places (including from one producer thread to a consumer thread via this queue). To do this, I inserted code from that timing library into the push and pop functions of the queue so that when they assign QueueItem.data they also assign an extra member I added, of type void*, pointing to some timing metadata from that library. I.e. what used to be something like:
void push(T t)
{
    QueueItem i;
    i.data = t;
    //insert i into queue
}

became

void push(T t)
{
    QueueItem i;
    i.data = t;
    void* fox = timinglib.getMetadata();
    i.timingInfo = fox;
    //insert i into queue
}

with QueueItem going from

struct QueueItem
{
    T data;
};

to

struct QueueItem
{
    T data;
    void* timingInfo;
};
What I would like to achieve, however, is the ability to swap out of the latter struct in favor of the lighter weight struct whenever the timing library is not activated. Something like:
if timingLib.isInactive()
;//use the smaller struct QueueItem
else
;//use the larger struct QueueItem
as cheaply as possible. What would be a good way to do this?
You can't have a struct that is big and small at the same time, obviously, so you're going to have to look at some form of inheritance or pointer/reference, or a union.
A union would be ideal for you if there's "spare" data in T that could be occupied by your timingInfo. If not, then it's going to be as 'heavy' as the original.
Using inheritance is also likely to be as big as the original, as it'll add a vtable in there which will pad it out too much.
So, the next option is to store a pointer only, and have that point to the data you want to store, either the data or the data+timing. This kind of pattern is known as 'flyweight' - where common data is stored separately to the object that is manipulated. This might be what you're looking for (depending on what the timing info metadata is).
The other, more complex, alternative is to have 2 queues that you keep in sync. You store the data in one, and the other stores the associated timing info, if enabled. If not enabled, you ignore the 2nd queue. The trouble with this is ensuring the 2 are kept in sync, but that's an organisational problem rather than a technical challenge. Maybe create a new Queue class that contains the 2 real queues internally.
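A rough sketch of that wrapper idea (hypothetical names; std::queue is just a stand-in for your own queue template):

#include <queue>

template <typename T>
class TimedQueue
{
public:
    explicit TimedQueue(bool timingEnabled) : timingEnabled_(timingEnabled) {}

    void push(const T& value, void* timingInfo = nullptr)
    {
        data_.push(value);
        if (timingEnabled_)
            timing_.push(timingInfo);   // kept in lock-step with data_
    }

    T pop(void** timingInfoOut = nullptr)
    {
        T value = data_.front();
        data_.pop();
        if (timingEnabled_) {
            if (timingInfoOut)
                *timingInfoOut = timing_.front();
            timing_.pop();
        }
        return value;
    }

private:
    bool timingEnabled_;
    std::queue<T> data_;        // always used
    std::queue<void*> timing_;  // only touched when timing is enabled
};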
I'll start by just confirming my assumption that this needs to be a runtime choice and you can't just build two different binaries with timing enabled/disabled. That approach would eliminate as much overhead as possible.
So now let's assume we want different runtime behavior. There will need to be runtime decisions, so there are a couple options. If you can get away with the (relatively small) cost of polymorphism then you could make your queue polymorphic and create the appropriate instance once at startup and then its push for example either will or won't add the extra data.
However if that's not an option I believe you can use templates to help accomplish your end, although there will likely be some up-front work and it will probably increase the size of your binary with the extra code.
You start with a template to add timing to a class:
template <typename Timee>
struct Timed : public Timee
{
void* timingInfo;
};
Then a timed QueueItem would look like:
Timed<QueueItem> timed_item;
To anything that doesn't care about the timing, this class looks exactly like a QueueItem: It will automatically upcast or slice to the parent as appropriate. And if a method needs to know the timing information you either create an overload that knows what to do for a Timed<T> or do a runtime check (for the "is timing enabled" flag) and downcast to the correct type.
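For instance, the overload approach might look like this sketch (record_timing and process_metadata are made-up stand-ins for whatever consumes the metadata):

// Overloads resolve on the static type, so only Timed items pay for the extra work.
void record_timing(const QueueItem&)
{
    // plain item: nothing to record
}

void record_timing(const Timed<QueueItem>& item)
{
    // timed item: the metadata travels with the element
    process_metadata(item.timingInfo);
}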
Next, you'll need to change your Queue instantiation to know whether it's using the base QueueItem or the Timed version. For example, a very very rough sketch of a possible mechanism:
template <typename Element>
void run()
{
    Queue<Element> queue;
    queue.setup();
    queue.process();
}

int main()
{
    if(do_timing)
    {
        run<Timed<QueueItem> >();
    }
    else
    {
        run<QueueItem>();
    }
    return 0;
}
You would "likely" need a specialization for Queue when used with Timed items unless getting the metadata is stateless in which case the Timed constructor can gather the info and self-populate itself when created. Then Queue just stays the same and relies on which instantiation you're using.
I'm trying to write an event system for my game. The callbacks that my event manager will store can be both plain functions as well as functors. I also need to be able to compare functions/functors so I know which one I need to disconnect from the event manager.
• Initially I tried using boost::function; it handles functions and functors perfectly well, except it has no operator==, so I can't remove callbacks if I want to.
class EventManager
{
typedef boost::function<void (boost::weak_ptr<Event>)> Callback;
std::map<Event::Type, std::vector<Callback>> eventHandlerMap_;
};
• I also tried using boost::signal, but that also gives me a compilation problem related to operator==:
binary '==' : no operator found which takes a left-hand operand of type 'const Functor' (or there is no acceptable conversion)
void test(int c) {
    std::cout << "test(" << c << ")";
}

struct Functor
{
    void operator()(int g) {
        std::cout << "Functor::operator(" << g << ")";
    }
};

int main()
{
    boost::signal<void (int)> sig;
    Functor f;
    sig.connect(test);
    sig.connect(f);
    sig(7);
    sig.disconnect(f); // Error
}
Any other suggestions about how I might implement this? Or maybe how I can make either boost::function or boost::signal work? (I'd rather use boost::function though, since I've heard signal is rather slow for small collections of items.)
Edit: This is the interface that I'd like EventManager to have.
class EventManager
{
public:
void addEventHandler(Event::Type evType, Callback func);
void removeEventHandler(Event::Type evType, Callback func);
void queueEvent(boost::shared_ptr<Event> ev);
void dispatchNextEvent();
};
You'll find that most generic function wrappers do not support function equality.
Why is this? Well, just look at your functor there:
struct Functor
{
void operator()(int g) {
std::cout << "Functor::operator(" << g << ")";
}
};
This Functor has no operator==, and therefore cannot be compared for equality. So when you pass it to boost::signal by value, a new instance is created; this will compare false for pointer-equality, and has no operator to test for value-equality.
Most functors don't, in fact, have value-equality predicates. It's not very useful. The usual way to deal with this is to keep a handle to the callback instead; boost::signals does this with its connection object. For example, take a look at this example from the documentation:
boost::signals::connection c = sig.connect(HelloWorld());
if (c.connected()) {
    // c is still connected to the signal
    sig(); // Prints "Hello, World!"
}
c.disconnect(); // Disconnect the HelloWorld object
assert(!c.connected()); // c isn't connected any more
sig(); // Does nothing: there are no connected slots
With this, HelloWorld doesn't need to have an operator==, as you're referring directly to the signal registration.
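Applied to the EventManager from the question, that means handing back connection handles rather than comparing callbacks. A sketch of one possible shape (it changes the question's interface, uses Boost.Signals2 as the maintained successor to boost::signal, and assumes Event/Event::Type exist as in the question):

#include <map>
#include <boost/shared_ptr.hpp>
#include <boost/weak_ptr.hpp>
#include <boost/signals2.hpp>

class EventManager
{
public:
    typedef boost::signals2::signal<void (boost::weak_ptr<Event>)> Signal;

    boost::signals2::connection addEventHandler(Event::Type evType,
                                                const Signal::slot_type& slot)
    {
        boost::shared_ptr<Signal>& sig = signals_[evType];
        if (!sig)
            sig.reset(new Signal);
        return sig->connect(slot);        // the caller keeps this handle
    }

    void dispatchEvent(Event::Type evType, boost::weak_ptr<Event> ev)
    {
        std::map<Event::Type, boost::shared_ptr<Signal> >::iterator it = signals_.find(evType);
        if (it != signals_.end())
            (*it->second)(ev);            // invoke every connected handler
    }

private:
    std::map<Event::Type, boost::shared_ptr<Signal> > signals_;
};

// Removing a handler is then just:
//   boost::signals2::connection c = mgr.addEventHandler(someType, someCallable);
//   ...
//   c.disconnect();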
Have you ever tried libsigc and libsigc++? I started using them in linux and fell in love with them. I now use them in my Windows applications as well. I believe it is more extensible and flexible than boost. It is also a breeze to implement.
I highly recommend you consider Don Clugston's "Member Function Pointers and the Fastest Possible C++ Delegates". You can find the article and download the code from here:
http://www.codeproject.com/KB/cpp/FastDelegate.aspx
Among many other benefits, his delegates provide comparison operators (==, !=, <) out of the box. I'm currently using them for a realtime system and find them excellent in every way. I do seem to recall we had to make a minor modification to fix a compiler portability issue; but, that experience will vary based on platform etc.
Also, the article is several years old so you may want to google around for updated code/discussion regarding this delegate implementation if you run into any problems.
No matter, I found the solution. A little template magic and things become simple(r):
template<typename F>
void EventManager::removeEventHandler(Event::Type evType, F func)
{
    auto compare = [func](const Callback& other) -> bool {
        F const* f = other.target<F>();
        if (f == nullptr) return false;
        return *f == func;
    };
    std::vector<Callback>& callbacks = ...;
    auto pend = std::remove_if(callbacks.begin(), callbacks.end(), compare);
    callbacks.erase(pend, callbacks.end());
}

template<typename R, typename F, typename L>
void EventManager::removeEventHandler(
    Event::Type evType, const boost::_bi::bind_t<R, F, L>& func)
{
    auto compare = [&func](const Callback& other) -> bool {
        auto const* f = other.target<boost::_bi::bind_t<R, F, L>>();
        if (f == nullptr) return false;
        return func.compare(*f);
    };
    std::vector<Callback>& callbacks = ...;
    auto pend = std::remove_if(callbacks.begin(), callbacks.end(), compare);
    callbacks.erase(pend, callbacks.end());
}
I need to handle Boost.Bind objects separately because operator== doesn't actually do comparison for Bind objects, but produces a new functor that compares the results of the other two. To compare Boost.Bind objects you have to use the member function compare().
The type boost::_bi::bind_t seems to be an internal type of Boost (I guess that's what the underscore in the namespace '_bi' means); however, it should be safe to use, as all overloads of boost::function_equal also use this type.
This code will work for all types of functors as long as there is an operator== defined that does comparison, or if you're using Boost.Bind. I had a superficial look into std::bind (C++0x), but that doesn't seem to be comparable, so it won't work with the code I posted above.
At my workplace, we tend to use iostream, string, vector, map, and the odd algorithm or two. We haven't actually found many situations where template techniques were a best solution to a problem.
What I am looking for here are ideas, and optionally sample code that shows how you used a template technique to create a new solution to a problem that you encountered in real life.
As a bribe, expect an up vote for your answer.
General info on templates:
Templates are useful any time you need to use the same code operating on different data types, where the types are known at compile time, and also when you have any kind of container object.
A very common usage is for just about every type of data structure. For example: Singly linked lists, doubly linked lists, trees, tries, hashtables, ...
Another very common usage is for sorting algorithms.
One of the main advantages of using templates is that you can remove code duplication. Code duplication is one of the biggest things you should avoid when programming.
You could implement a function Max as either a macro or a template, but the template implementation would be type-safe and therefore better.
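For example, a minimal sketch of the two side by side:

// Macro version: no type checking, and each argument may be evaluated twice.
#define MAX(a, b) ((a) > (b) ? (a) : (b))

// Template version: type-safe, and each argument is evaluated exactly once.
template <typename T>
const T& Max(const T& a, const T& b)
{
    return (a > b) ? a : b;
}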
And now onto the cool stuff:
Also see template metaprogramming, which is a way of pre-evaluating code at compile time rather than at run time. Template metaprogramming works only with immutable values, so it can be seen as a type of functional programming.
Check out this example of template metaprogramming from Wikipedia. It shows how templates can be used to execute code at compile time. Therefore at runtime you have a pre-calculated constant.
template <int N>
struct Factorial
{
    enum { value = N * Factorial<N - 1>::value };
};

template <>
struct Factorial<0>
{
    enum { value = 1 };
};

// Factorial<4>::value == 24
// Factorial<0>::value == 1
void foo()
{
    int x = Factorial<4>::value; // == 24
    int y = Factorial<0>::value; // == 1
}
I've used a lot of template code, mostly in Boost and the STL, but I've seldom had a need to write any.
One of the exceptions, a few years ago, was in a program that manipulated Windows PE-format EXE files. The company wanted to add 64-bit support, but the ExeFile class that I'd written to handle the files only worked with 32-bit ones. The code required to manipulate the 64-bit version was essentially identical, but it needed to use a different address type (64-bit instead of 32-bit), which caused two other data structures to be different as well.
Based on the STL's use of a single template to support both std::string and std::wstring, I decided to try making ExeFile a template, with the differing data structures and the address type as parameters. There were two places where I still had to use #ifdef WIN64 lines (slightly different processing requirements), but it wasn't really difficult to do. We've got full 32- and 64-bit support in that program now, and using the template means that every modification we've done since automatically applies to both versions.
One place that I do use templates to create my own code is to implement policy classes as described by Andrei Alexandrescu in Modern C++ Design. At present I'm working on a project that includes a set of classes that interact with BEA\h\h\h Oracle's Tuxedo TP monitor.
One facility that Tuxedo provides is transactional persistent queues, so I have a class TpQueue that interacts with the queue:
class TpQueue {
public:
    void enqueue(...);
    void dequeue(...);
    ...
};
However, as the queue is transactional I need to decide what transaction behaviour I want; this could be done separately, outside of the TpQueue class, but I think it's more explicit and less error-prone if each TpQueue instance has its own policy on transactions. So I have a set of TransactionPolicy classes such as:
class OwnTransaction {
public:
    void begin(...);    // Suspend any open transaction and start a new one
    void commit(...);   // Commit my transaction and resume any suspended one
    void abort(...);
};

class SharedTransaction {
public:
    void begin(...);    // Join the currently active transaction or start a new one if there isn't one
    ...
};
And the TpQueue class gets re-written as
template <typename TXNPOLICY = SharedTransaction>
class TpQueue : public TXNPOLICY {
    ...
};
So inside TpQueue I can call begin(), abort(), commit() as needed but can change the behaviour based on the way I declare the instance:
TpQueue<SharedTransaction> queue1 ;
TpQueue<OwnTransaction> queue2 ;
I used templates (with the help of Boost.Fusion) to achieve type-safe integers for a hypergraph library that I was developing. I have a (hyper)edge ID and a vertex ID both of which are integers. With templates, vertex and hyperedge IDs became different types and using one when the other was expected generated a compile-time error. Saved me a lot of headache that I'd otherwise have with run-time debugging.
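The general technique can be sketched without naming any particular library: a tag parameter turns otherwise-identical integer wrappers into distinct types (an illustration only, not the hypergraph library's actual code):

// Tagged integer wrapper: VertexId and EdgeId cannot be mixed up.
template <typename Tag>
struct Id
{
    explicit Id(int v) : value(v) {}
    int value;
};

struct VertexTag {};
struct EdgeTag {};

typedef Id<VertexTag> VertexId;
typedef Id<EdgeTag>   EdgeId;

void connect(VertexId from, VertexId to);   // passing an EdgeId here is a compile-time error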
Here's one example from a real project. I have getter functions like this:
bool getValue(wxString key, wxString& value);
bool getValue(wxString key, int& value);
bool getValue(wxString key, double& value);
bool getValue(wxString key, bool& value);
bool getValue(wxString key, StorageGranularity& value);
bool getValue(wxString key, std::vector<wxString>& value);
And then a variant with a 'default' value: it returns the value for key if it exists, or the default value if it doesn't. The template saved me from having to write 6 new functions myself.
template <typename T>
T get(wxString key, const T& defaultValue)
{
    T temp;
    if (getValue(key, temp))
        return temp;
    else
        return defaultValue;
}
Templates I regularly consume are a multitude of container classes, Boost smart pointers, scope guards, and a few STL algorithms.
Scenarios in which I have written templates:
• custom containers
• memory management, implementing type safety and CTor/DTor invocation on top of void * allocators
• common implementations for overloads with different types, e.g.
    bool ContainsNan(float *, int)
    bool ContainsNan(double *, int)
which both just call a (local, hidden) helper function
    template <typename T>
    bool ContainsNanT(T * values, int len) { ... actual code goes here }
• specific algorithms that are independent of the type, as long as the type has certain properties, e.g. binary serialization.
template <typename T>
void BinStream::Serialize(T & value) { ... }

// to make a type serializable, you need to implement
void SerializeElement(BinStream & stream, Foo & element);
void DeserializeElement(BinStream & stream, Foo & element);
Unlike virtual functions, templates allow more optimizations to take place.
Generally, templates allow to implement one concept or algorithm for a multitude of types, and have the differences resolved already at compile time.
We use COM and accept a pointer to an object that can implement another interface either directly or via [IServiceProvider](http://msdn.microsoft.com/en-us/library/cc678965(VS.85).aspx). This prompted me to create this helper cast-like function.
// Get interface either via QueryInterface or via QueryService
template <class IFace>
CComPtr<IFace> GetIFace(IUnknown* unk)
{
    CComQIPtr<IFace> ret = unk;   // Try QueryInterface
    if (ret == NULL) {            // Fall back to QueryService
        if (CComQIPtr<IServiceProvider> ser = unk)
            ser->QueryService(__uuidof(IFace), __uuidof(IFace), (void**)&ret);
    }
    return ret;
}
I use templates to specify function object types. I often write code that takes a function object as an argument -- a function to integrate, a function to optimize, etc. -- and I find templates more convenient than inheritance. So my code receiving a function object -- such as an integrator or optimizer -- has a template parameter to specify the kind of function object it operates on.
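As a small illustration of that style (a sketch; real integrators are more involved), the function object type is just a template parameter:

// Crude trapezoid-rule integrator taking any callable as a template parameter.
template <typename F>
double integrate(F f, double a, double b, int steps)
{
    double h = (b - a) / steps;
    double sum = 0.5 * (f(a) + f(b));
    for (int i = 1; i < steps; ++i)
        sum += f(a + i * h);
    return sum * h;
}

// Works with a function, a functor, or a lambda:
//   double area = integrate([](double x) { return x * x; }, 0.0, 1.0, 1000);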
The obvious reasons (like preventing code-duplication by operating on different data types) aside, there is this really cool pattern that's called policy based design. I have asked a question about policies vs strategies.
Now, what's so nifty about this feature? Consider that you are writing an interface for others to use. You know that your interface will be used, because it is a module in its own domain. But you don't know yet how people are going to use it. Policy-based design strengthens your code for future reuse; it makes you independent of the data types a particular implementation relies on. The code is just "slurped in". :-)
Traits are per se a wonderful idea. They can attach particular behaviour, data and type information to a model. Traits allow complete parameterization of all three of these fields. And best of all, they're a very good way to make code reusable.
I once saw the following code:
void doSomethingGeneric1(SomeClass * c, SomeClass & d)
{
// three lines of code
callFunctionGeneric1(c) ;
// three lines of code
}
repeated ten times:
void doSomethingGeneric2(SomeClass * c, SomeClass & d)
void doSomethingGeneric3(SomeClass * c, SomeClass & d)
void doSomethingGeneric4(SomeClass * c, SomeClass & d)
// Etc
Each function having the same 6 lines of code copy/pasted, and each time calling another function callFunctionGenericX with the same number suffix.
There was no way to refactor the whole thing altogether, so I kept the refactoring local.
I changed the code this way (from memory):
template<typename T>
void doSomethingGenericAnything(SomeClass * c, SomeClass & d, T t)
{
// three lines of code
t(c) ;
// three lines of code
}
And modified the existing code with:
void doSomethingGeneric1(SomeClass * c, SomeClass & d)
{
doSomethingGenericAnything(c, d, callFunctionGeneric1) ;
}
void doSomethingGeneric2(SomeClass * c, SomeClass & d)
{
doSomethingGenericAnything(c, d, callFunctionGeneric2) ;
}
Etc.
This is somewhat hijacking the template thing, but in the end, I guess it's better than playing with typedef'd function pointers or using macros.
I personally have used the Curiously Recurring Template Pattern as a means of enforcing some form of top-down design and bottom-up implementation. An example would be a specification for a generic handler where certain requirements on both form and interface are enforced on derived types at compile time. It looks something like this:
template <class Derived>
struct handler_base : Derived {
    void pre_call() {
        // do any universal pre_call handling here
        static_cast<Derived *>(this)->pre_call();
    };
    void post_call(typename Derived::result_type & result) {
        static_cast<Derived *>(this)->post_call(result);
        // do any universal post_call handling here
    };
    typename Derived::result_type
    operator() (typename Derived::arg_pack const & args) {
        pre_call();
        typename Derived::result_type temp = static_cast<Derived *>(this)->eval(args);
        post_call(temp);
        return temp;
    };
};
Something like this can be used then to make sure your handlers derive from this template and enforce top-down design and then allow for bottom-up customization:
struct my_handler : handler_base<my_handler> {
typedef int result_type; // required to compile
typedef tuple<int, int> arg_pack; // required to compile
void pre_call(); // required to compile
void post_call(int &); // required to compile
int eval(arg_pack const &); // required to compile
};
This then allows you to have generic polymorphic functions that deal with only handler_base<> derived types:
template <class T, class Arg0, class Arg1>
typename T::result_type
invoke(handler_base<T> & handler, Arg0 const & arg0, Arg1 const & arg1) {
return handler(make_tuple(arg0, arg1));
};
It's already been mentioned that you can use templates as policy classes to do something. I use this a lot.
I also use them, with the help of property maps (see boost site for more information on this), in order to access data in a generic way. This gives the opportunity to change the way you store data, without ever having to change the way you retrieve it.