I have a lot of custom datatypes in one of my projects which all share a common base class.
My data (coming from a database) has a datatype which is distinguished by an enum of the base class. My architecture allows a specific datatype to be specialized with a derived class or it can be handled by the base class.
When I construct one of my specific datatypes, I normally call the constructor directly:
Special_Type_X a = Special_Type_X("34.34:fdfh-78");
a.getFoo();
There is some template magic which also allows constructing it like this:
Type_Helper<Base_Type::special_type_x>::Type a = Base_Type::construct<Base_Type::special_type_x>("34.34:fdfh-78");
a.getFoo();
For some values of the type enum there might be no specialization so
Type_Helper<Base_Type::non_specialized_type_1>::Type == Base_Type
When I'm fetching data from the database the datatype isn't known at compile time so there's a third way to construct the datatypes (from a QVariant):
Base_Type a = Base_Type::construct(Base_Type::whatever, "12.23#34io{3,3}");
But of course I want the correct constructor to be called, so the implementation of that method used to look like:
switch(t) {
    case Base_Type::special_type_x:
        return Base_Type::construct<Base_Type::special_type_x>(var);
    case Base_Type::non_specialized_type_1:
        return Base_Type::construct<Base_Type::non_specialized_type_1>(var);
    case Base_Type::whatever:
        return Base_Type::construct<Base_Type::whatever>(var);
    //.....
}
This code is repetitive and since the base class can handle new types (added to the enum) as well, I came up with the following solution:
// Helper Template Method
template <Base_Type::type_enum bt_itr>
Base_Type construct_switch(const Base_Type::type_enum& bt, const QVariant& v)
{
    if (bt_itr == bt)
        return Base_Type::construct<bt_itr>(v);
    return construct_switch<(Base_Type::type_enum)(bt_itr + 1)>(bt, v);
}

// Specialization for the last available (dummy) type: num_types
template <>
Base_Type construct_switch<Base_Type::num_types>(const Base_Type::type_enum& bt, const QVariant&)
{
    qWarning() << "Type" << bt << "could not be constructed";
    return Base_Type(); // creates an invalid custom type
}
And my original switch statement is replaced with:
return construct_switch<(Base_Type::type_enum)0>(t,var);
This solution works as expected.
The compiled code is, however, different. While the original switch statement had O(1) complexity, the new approach results in O(n) complexity: the generated code recursively calls my helper method until it finds the correct entry.
Why can't the compiler optimize this properly? Are there any better ways to solve this?
Similar problem:
Replacing switch statements when interfacing between templated and non-templated code
I should mention that I would like to avoid C++11 and C++14 and stick to C++03.
This is what I call the magic switch problem -- how to take a run-time value (from a known range) and turn it into a compile-time constant.
Abstractly, you want to generate this switch statement:
switch(n) {
(case I from 0 to n-1: /* use I as a constant */)...
}
You can use parameter packs to generate code that is similar to this in C++.
I'll start with boilerplate that replaces what C++14 gives you out of the box (std::make_index_sequence):
template<unsigned...> struct indexes {typedef indexes type;};
template<unsigned max, unsigned... is> struct make_indexes: make_indexes<max-1, max-1, is...> {};
template<unsigned... is> struct make_indexes<0, is...>:indexes<is...> {};
template<unsigned max> using make_indexes_t = typename make_indexes<max>::type;
Now we can easily create a compile-time sequence of unsigned integers from 0 to n-1: make_indexes_t<50> expands to indexes<0,1,2,3, ... ,48,49>. The C++14 version does so in O(1) steps, as most (all?) compilers implement std::make_index_sequence with an intrinsic. The above does it in linear recursive depth (at compile time -- nothing is done at run time) and quadratic compile-time memory. This sucks, and you can do better with some work (logarithmic depth, linear memory), but do you have more than a few hundred types? If not, this is good enough.
Next, we build an array of callbacks. As I hate C legacy function pointer syntax, I'll throw in some pointless boilerplate to hide it:
template<typename T> using type = T; // pointless boilerplate that hides C style function syntax
template<unsigned... Is>
Base_Type construct_runtime_helper( indexes<Is...>, Base_Type::type_enum e, QVariant const& v ) {
    // array of pointers to functions (note: static, so it is created only once)
    static type< Base_Type(const QVariant&) >* const constructor_array[] = {
        (&Base_Type::construct<(Base_Type::type_enum)Is>)...
    };
    // find the e'th entry, and call it:
    return constructor_array[ unsigned(e) ](v);
}

Base_Type construct_runtime_helper( Base_Type::type_enum e, QVariant const& v ) {
    return construct_runtime_helper( make_indexes_t< Base_Type::num_types >(), e, v );
}
and Bob is your Uncle1. An O(1) array lookup (with an O(n) setup, which in theory could be done prior to your executable launching) for dispatch.
1 "Bob's your Uncle" is a British Commonwealth saying that says "and everything is finished and working" roughly.
Are all the functions inline? I'd expect a reasonable compiler to optimize the if tree into a switch, but only if the ifs are in the same function. For portability, you might not want to rely on this.
You can get O(1) with an indirect function call by having construct_switch populate a std::vector<std::function<Base_Type(const QVariant&)>> with lambda functions that do the construction and then dispatch off that.
Is there a way to get the memory/speed benefit of fixed length (known at compile-time) arrays while still having the static error checking and readability provided by enum class when indexing an array?
I want exactly the behavior of the array, except I want to disallow users from indexing by integers. I'd prefer to force writing potentially_descriptive_name[kLogicalRisingEdge] over potentially_descriptive_name[0], as I work with physicists who prioritize new results over writing a maintainable codebase, since they only need to work on it for 5-7 years before never seeing it again.
I'm hoping to have the ability to quickly iterate over subsets of data an arbitrary number of times (in my field, the filtered subset is called a "cut" of the data), but I need to perform this preprocessing first and I thought it would be great for readability to have an enum for the cut, so I can have something like what's below:
have a header defining the cuts
// data_cuts.h
namespace data_cuts {
enum TimeCut {kInTime, kOutOfTime, kOther, N_elems}; // or enum class : uint?
TimeCut GetTimeCut(const Datum& d);
enum EdgeCut {kRising, kFalling, N_elems}; // similarly
EdgeCut GetEdgeCut(const Datum& d);
}
filling the data like this in one part of the project
for (const auto& d : full_data)
    data_by_time_cut[GetTimeCut(d)].push_back(d);
// potentially a much less memory-intensive way to do this, but that should be addressed elsewhere
reading said data in another part of the project
using namespace data_cuts;
for (TimeCut cut : std::vector<TimeCut>{kInTime, kOutOfTime})
    histograms_by_time_cut[cut].FillN(data_by_time_cut[cut]);
for (const auto& d : data_by_time_cut[kInTime])
    histograms_by_time_cut[kInTime].FillSpecial(d);
but I don't know how to choose the type for data_by_time_cut and histograms_by_time_cut
Options I can think of (improving, I think, as the list goes on):
I could use std::map<CutType, T> and enum class CutType and indexing is only permissible with CutType and not numeric-int or another type of cut, but I have all of map's overhead.
I could use std::vector and convert the strongly typed enum to its underlying type every time I access it, but it feels ungainly and doesn't statically prevent indexing with a plain integer.
I could use std::array and convert the strongly typed enum to underlying again.
I could [learn how to and] write a template that wraps std::array and overloads operator[](T t) to call to_underlying, and then I'm doing the above but less verbosely. However, this seems so generic; is it already implemented in a "better way" in C++14?
This is my attempt at the last option I listed; it uses the code found here to define to_underlying:
#include <array>
#include <type_traits>
#include <vector>

template <typename E>
constexpr auto to_underlying(E e) noexcept
{
    return static_cast<std::underlying_type_t<E>>(e);
}

template <typename T, typename U>
class EnumIndexedArray
{
public:
    EnumIndexedArray() {} // should I add the other constructors?
    U& operator[](const T& t) { return m_arr[to_underlying(t)]; }
    const U& operator[](const T& t) const { return m_arr[to_underlying(t)]; }
private:
    std::array<U, to_underlying(T::N_elems)> m_arr;
};
namespace data_cuts {
enum class TimeCut : char {kInTime, kOutOfTime, kOther, N_elems};
enum class EdgeCut : char {kRising, kFalling, N_elems};
}
class Datum;
class Histogram;
using Data = std::vector<Datum>;
EnumIndexedArray<data_cuts::TimeCut, Data> data_by_time_cut;
EnumIndexedArray<data_cuts::TimeCut, Histogram> histograms_by_time_cut;
I wonder if I'm adding too much for this to be a general question, so I'll stop for now.
For timeouts in event reactors and proactors I use a priority queue that also allows O(log(n)) random-access removal of events (when the event is signalled/completes rather than a timeout occurring). I store std::pair<std::chrono::steady_clock::time_point, Timed *>, where Timed is a class that has an index (pointing into the queue) to allow efficient removal when calling TimedQ::Remove(Timed *p). When I want to have an event type associated with a timeout, I derive from Timed. The queue's Top() and Pop() return a pair.
I used to have a bunch of code using the queue, such as
std::tie(timePt0, eventPtr0) = timeoutQ.Pop();
std::tie(timePt1, eventPtr1) = std::move(hold);
which worked fine before I started using a base class Timed * in the queue instead of a specific event type (i.e. Timed was originally a templated type instead), as I eventually needed to support multiple different event types that can be associated with the timeouts. However, with eventPtr* being a derived type (that I can static_cast to from a Timed * returned by the queue), code like the above no longer works.
I'm wondering what's the best way to do this. Right now it has ended up very verbose, and I'm also concerned about inefficiencies like temporaries being created:
auto v(timeoutQ.Pop());
timePt0 = v.first;
eventPtr0 = static_cast<TimedEvent *>(v.second);
std::tie(timePt1, eventPtr1) = std::move(std::make_pair(hold.first, static_cast<TimedEvent *>(hold.second))); // I didn't literally do it like this, but I'm just trying to illustrate my struggle
The only other idea I had was to template the functions that return a pair by the derived event class, but this seems poor from a code size perspective, as multiple instances of those functions will be created even though the machine code should be identical since in all cases it's a pointer that's stored.
Edit:
I also tried using this, which compiles, but I'm not sure it's correct or efficient:
template<class D>
std::pair<std::chrono::steady_clock::time_point, D *> &&Cnvrt(std::pair<std::chrono::steady_clock::time_point, Timed *> &&in)
{
return std::make_pair(in.first, static_cast<D *>(in.second));
}
the initial example then would become
std::tie(timePt0, eventPtr0) = Cnvrt<std::remove_pointer<decltype(eventPtr0)>::type>(timeoutQ.Pop());
std::tie(timePt1, eventPtr1) = Cnvrt<std::remove_pointer<decltype(eventPtr1)>::type>(hold);
The Cnvrt you've shown returns a dangling reference – classic UB.
Here's a corrected C++11-compliant version that also validates D at compile-time and removes the need for the manual std::remove_pointer<...>::type at the call site:
template<typename D>
constexpr
std::pair<std::chrono::steady_clock::time_point, D>
Cnvrt(std::pair<std::chrono::steady_clock::time_point, Timed*> const& in) noexcept
{
    static_assert(std::is_pointer<D>{}, "D is not a pointer type");
    using derived_type = typename std::remove_pointer<D>::type;
    static_assert(std::is_base_of<Timed, derived_type>{}, "D does not derive from Timed");
    using ptr_type = typename std::remove_cv<D>::type;
    return {in.first, static_cast<ptr_type>(in.second)};
}
// ...
std::tie(timePt0, eventPtr0) = Cnvrt<decltype(eventPtr0)>(timeoutQ.Pop());
std::tie(timePt1, eventPtr1) = Cnvrt<decltype(eventPtr1)>(hold);
Here is an implementation that should work on VC++ 2012:
template<typename D>
std::pair<std::chrono::steady_clock::time_point, D>
Cnvrt(std::pair<std::chrono::steady_clock::time_point, Timed*> const& in) throw()
{
    static_assert(std::is_pointer<D>::value, "D is not a pointer type");
    typedef typename std::remove_pointer<D>::type derived_type;
    static_assert(std::is_base_of<Timed, derived_type>::value, "D does not derive from Timed");
    typedef typename std::remove_cv<D>::type ptr_type;
    return std::make_pair(in.first, static_cast<ptr_type>(in.second));
}
There is no efficiency concern here whatsoever – even your worst-case scenario, if the compiler does no optimizations at all, is just a copy of one scalar and one pointer (VC++ 2012 may copy each twice, but again, only without optimizations enabled).
I have this wrapper function that is supposed to call the appropriate function on a large dataset based on the type of data it contains, like this:
void WrapperFunc( int iRealDataType, int iUseAsDataType )
{
    // now call the right function based on both arguments
    switch ( iRealDataType )
    {
    case FancyType1:
        switch ( iUseAsDataType )
        {
        case CoolType1: DataAnalysisFunc_Fancy1_Cool1(); break;
        // etc.
        }
        break;
    // etc.
    }
}
So far, this was solved by having two nested switch statements and then calling one of the many, many specialized functions for each existing combination of Real and UseAs data type. However, as the number of defined types grows, it is a nightmare to maintain the code base. So I decided to finally use templates. I mostly avoid them if I can, but this time they suit the problem well.
So now, instead of DataAnalysisFunc_Fancy1_Cool1, I would like to call DataAnalysisFunc<FancyType1,CoolType1> and get rid of the hundreds of lines of switch statements, BUT I cannot use it like this, since FancyType1 is an enum value, not the type (which is Fancy1 for example).
Just to clarify - I know this sounds like a stupid artificial example, but I tried to simplify the problem as much as possible to get to the core of it, instead of explaining the ton of details that would go into a much more concrete example.
EDIT: my data analysis functions are in reality CUDA kernels - this will probably rule out some possible solutions. Sorry for that.
Templates sound like the wrong solution. What you want is a lookup table.
typedef void (*DataAnalysisFunc)();

const DataAnalysisFunc DataAnalysisFunctions[NumFancyTypes][NumCoolTypes] = {
    /*Fancy1*/ {
        /*Cool1*/ &DataAnalysisFunc_Fancy1_Cool1,
        /*Cool2*/ &DataAnalysisFunc_Fancy1_Cool2 },
    /*Fancy2*/ {
        /*Cool1*/ &DataAnalysisFunc_ImpossibleCombination, // can't happen, throw or something
        /*Cool2*/ &DataAnalysisFunc_Fancy2_Cool2 }
};

void WrapperFunc(int realDataType, int useAsDataType) {
    assert(realDataType >= 0 && realDataType < NumFancyTypes);
    assert(useAsDataType >= 0 && useAsDataType < NumCoolTypes);
    (*DataAnalysisFunctions[realDataType][useAsDataType])();
}
Now, if those DataAnalysisFuncs share a lot of code, templates might help you there, but not for dynamic dispatch.
BUT I cannot use it like this, since FancyType1 is an enum value, not the type (which is Fancy1 for example)
You can convert an enum value to a type; just use one of the basic metaprogramming tools:
Int2Type. It is used to replace run-time branching (if statements) with compile-time dispatch.
It looks like:
template <int Number>
struct Int2Type
{
    enum { value = Number };
};
Int2Type<N> is a distinct type for each N; using it together with function overloading, you can replace if statements.
UPDATE:
I added some example here, to make my answer more clear
1. Int2Type
// usage allows dispatch to happen at compile time instead of branching statements at run time
template <int Val>
struct Int2Type
{
    static const int val_ = Val;
};

template <typename ItBegin, typename ItEnd>
void doSort(ItBegin it0, ItEnd it1, Int2Type<1>)
{
    using namespace std;
    // sort
    cout << "Standard sorting algorithm was used. For iterators" << endl;
}

template <typename ItBegin, typename ItEnd>
void doSort(ItBegin it0, ItEnd it1, Int2Type<2>)
{
    using namespace std;
    // sort
    cout << "Fast sorting algorithm was used. For pointers" << endl;
}

// based on the third (dummy) parameter, the call is dispatched to the needed function
vector<int> v(3);
MainTools::doSort(v.begin(), v.end(), MainTools::Int2Type<1>());
int arr[3];
MainTools::doSort(arr, arr + sizeof(arr) / sizeof(arr[0]), MainTools::Int2Type<2>());
You're looking for type dispatching. I find this easiest to do with Boost.MPL.
#include <boost/mpl/for_each.hpp>
#include <boost/mpl/vector.hpp>
#include <boost/mpl/vector_c.hpp>
#include <boost/mpl/at.hpp>
struct DispatchChecker
{
    int iRealDataType, iUseAsDataType;

    template <class T>
    void operator()(T) const
    {
        static const int iCurrentReal  = boost::mpl::at_c<T, 0>::type::value;
        static const int iCurrentUseAs = boost::mpl::at_c<T, 1>::type::value;
        if(iRealDataType == iCurrentReal &&
           iUseAsDataType == iCurrentUseAs)
            DataAnalysisFunc<iCurrentReal,iCurrentUseAs>();
    }
};
typedef /*mpl sequence of all valid types*/ valid_types;
boost::mpl::for_each<valid_types>(DispatchChecker{iRealDataType,iUseAsDataType});
boost::mpl::for_each accepts a compile-time sequence, and instantiates and runs a functor on each element of the sequence. In this case the functor checks whether the compile-time parameters match the run-time parameters and calls the appropriate DataAnalysisFunc when they do.
As for how to get the valid_types, the easiest way is to just write out each valid pair in a sequence like this:
typedef boost::mpl::vector<
boost::mpl::vector_c<int, 0, 0>,
boost::mpl::vector_c<int, 2, 0>,
boost::mpl::vector_c<int, 1, 1>,
boost::mpl::vector_c<int, 0, 2>,
boost::mpl::vector_c<int, 1, 2>
> valid_types;
Note:
As Sebastian Redl points out, using templates is probably only worthwhile if the DataAnalysisFuncs share a lot of code between them; otherwise a runtime dispatch is probably a much better solution.
Suppose I have a calculator class that implements the Strategy Pattern using std::function objects as follows (see Scott Meyers, Effective C++: 55 Specific Ways to Improve Your Programs and Designs):
class Calculator
{
public:
    ...

    std::vector<double> Calculate(double x, double y)
    {
        std::vector<double> res;
        for(const Function& f : functions)
            res.push_back(f(x,y));
        return res;
    }

private:
    std::vector<Function> functions;
};
where
typedef std::function<double(double,double)> Function;
Here is the problem I am facing: suppose functions f and g, both of type Function, perform expensive and identical calculations internally to get their final results. To improve efficiency, one could wrap all the common data in a struct, compute it once, and provide it to them as an argument. However, this design has several flaws. For example, it would change the signature of Function, which can result in unnecessary arguments being passed to some function implementations. Moreover, these common and internal data are no longer hidden from other components in the code, which can harm code simplicity.
I would like to discuss the following optimization strategy: implement a class CacheFG that:
Defines an Update method that calculates its internal data from a given pair of doubles x and y; and
Defines a Check method to determine whether its current internal data was calculated from a given pair of doubles x and y.
What one could do then is to make f and g share a common instance of the class CacheFG, which could be done using std::shared_ptr. So, below would be the creation of the f and g functions using auxiliary functions f_aux and g_aux.
double f_aux(double x, double y, const std::shared_ptr<CacheFG>& cache)
{
    if(not cache->Check(x,y))
        cache->Update(x,y);
    ...
}

using namespace std::placeholders;

std::shared_ptr<CacheFG> cache = std::make_shared<CacheFG>();
Function f = std::bind(f_aux, _1, _2, cache);
Function g = std::bind(g_aux, _1, _2, cache);
My questions are: (1) is this a safe approach for optimization? (2) is there a better approach for solving this problem?
Edit: After a few answers, I found out that my intention here is to implement a memoization technique in C++. I remark that only the last calculated state is enough for my purposes.
Thanks to DeadMG, I will now write here just an improvement over his approach. His idea consists of using a memoization technique with variadic templates. I just offer a slight modification, where I use the construct std::decay<Args>::type to ensure the definition of a tuple with non-reference types only. Otherwise, functions with const-reference arguments would cause compilation errors.
template<typename Ret, typename... Args>
std::function<Ret(Args...)> MemoizeLast(std::function<Ret(Args...)> f)
{
    std::tuple<typename std::decay<Args>::type...> cache;
    Ret result = Ret();
    return [=](Args... args) mutable -> Ret
    {
        if(std::tie(args...) == cache)
            return Ret(result);
        cache = std::make_tuple(args...);
        return result = f(args...);
    };
}
To prevent result from being moved from, a copy of it is returned (return Ret(result)) when the provided args are the ones cached.
Why create your own class? There's no need for you to fail to re-create the interface of unordered_map. This functionality can be added as a re-usable algorithm based on std::function and std::unordered_map. It's been a while since I worked with variadic templates, but I hope you get the idea.
template<typename Ret, typename... Args>
std::function<Ret(Args...)> memoize(std::function<Ret(Args...)> t) {
    std::unordered_map<std::tuple<Args...>, Ret> cache;
    return [=](Args... a) mutable -> Ret {
        if (cache.find(std::make_tuple(a...)) != cache.end())
            return cache[std::make_tuple(a...)];
        else
            return cache[std::make_tuple(a...)] = t(a...);
    };
}
I don't recall, offhand, whether std::hash natively supports tuples. If not, you might need to add it, or use std::map which does natively support them.
Edit: Hmm, I didn't notice that you wanted to share the cache. Well, this shouldn't be too difficult a problem, just stick an unordered_map member in Calculator and pass it in by reference, but the semantics of doing so seem a bit... odd.
Edit again: Just the most recent value? Even simpler.
template<typename Ret, typename... Args>
std::function<Ret(Args...)> memoize_last(std::function<Ret(Args...)> t) {
    std::tuple<Args...> cache;
    Ret result;
    return [=](Args... a) mutable -> Ret {
        if (std::tie(a...) == cache)
            return result;
        cache = std::make_tuple(a...);
        return result = t(a...);
    };
}
If you want to share between several Functions, then the alteration is the same- just declare it in the class and pass in as reference.
Before optimizing, measure. Then, if you really do perform many calculations with the same values, create this cache object. I'd hide the cache checking and updating inside CacheFG::get(x, y) and use it like const auto value = cache->get(x, y).
At my workplace, we tend to use iostream, string, vector, map, and the odd algorithm or two. We haven't actually found many situations where template techniques were a best solution to a problem.
What I am looking for here are ideas, and optionally sample code that shows how you used a template technique to create a new solution to a problem that you encountered in real life.
As a bribe, expect an up vote for your answer.
General info on templates:
Templates are useful any time you need to use the same code operating on different data types, where the types are known at compile time. They are also useful when you have any kind of container object.
A very common usage is for just about every type of data structure. For example: Singly linked lists, doubly linked lists, trees, tries, hashtables, ...
Another very common usage is for sorting algorithms.
One of the main advantages of using templates is that you can remove code duplication. Code duplication is one of the biggest things you should avoid when programming.
You could implement a function Max as both a macro or a template, but the template implementation would be type safe and therefore better.
And now onto the cool stuff:
Also see template metaprogramming, which is a way of pre-evaluating code at compile time rather than at run time. Template metaprogramming has only immutable values, so nothing can change once it is computed; because of this, template metaprogramming can be seen as a type of functional programming.
Check out this example of template metaprogramming from Wikipedia. It shows how templates can be used to execute code at compile time. Therefore at runtime you have a pre-calculated constant.
template <int N>
struct Factorial
{
enum { value = N * Factorial<N - 1>::value };
};
template <>
struct Factorial<0>
{
enum { value = 1 };
};
// Factorial<4>::value == 24
// Factorial<0>::value == 1
void foo()
{
int x = Factorial<4>::value; // == 24
int y = Factorial<0>::value; // == 1
}
I've used a lot of template code, mostly in Boost and the STL, but I've seldom had a need to write any.
One of the exceptions, a few years ago, was in a program that manipulated Windows PE-format EXE files. The company wanted to add 64-bit support, but the ExeFile class that I'd written to handle the files only worked with 32-bit ones. The code required to manipulate the 64-bit version was essentially identical, but it needed to use a different address type (64-bit instead of 32-bit), which caused two other data structures to be different as well.
Based on the STL's use of a single template to support both std::string and std::wstring, I decided to try making ExeFile a template, with the differing data structures and the address type as parameters. There were two places where I still had to use #ifdef WIN64 lines (slightly different processing requirements), but it wasn't really difficult to do. We've got full 32- and 64-bit support in that program now, and using the template means that every modification we've done since automatically applies to both versions.
One place that I do use templates to create my own code is to implement policy classes as described by Andrei Alexandrescu in Modern C++ Design. At present I'm working on a project that includes a set of classes that interact with BEA^H^H^H Oracle's Tuxedo TP monitor.
One facility that Tuxedo provides is transactional persistent queues, so I have a class TpQueue that interacts with the queue:
class TpQueue {
public:
void enqueue(...)
void dequeue(...)
...
}
However, as the queue is transactional, I need to decide what transaction behaviour I want; this could be done separately outside of the TpQueue class, but I think it's more explicit and less error-prone if each TpQueue instance has its own policy on transactions. So I have a set of TransactionPolicy classes such as:
class OwnTransaction {
public:
begin(...) // Suspend any open transaction and start a new one
commit(..) // Commit my transaction and resume any suspended one
abort(...)
}
class SharedTransaction {
public:
begin(...) // Join the currently active transaction or start a new one if there isn't one
...
}
And the TpQueue class gets re-written as
template <typename TXNPOLICY = SharedTransaction>
class TpQueue : public TXNPOLICY {
...
}
So inside TpQueue I can call begin(), abort(), commit() as needed but can change the behaviour based on the way I declare the instance:
TpQueue<SharedTransaction> queue1 ;
TpQueue<OwnTransaction> queue2 ;
I used templates (with the help of Boost.Fusion) to achieve type-safe integers for a hypergraph library that I was developing. I have a (hyper)edge ID and a vertex ID both of which are integers. With templates, vertex and hyperedge IDs became different types and using one when the other was expected generated a compile-time error. Saved me a lot of headache that I'd otherwise have with run-time debugging.
Here's one example from a real project. I have getter functions like this:
bool getValue(wxString key, wxString& value);
bool getValue(wxString key, int& value);
bool getValue(wxString key, double& value);
bool getValue(wxString key, bool& value);
bool getValue(wxString key, StorageGranularity& value);
bool getValue(wxString key, std::vector<wxString>& value);
And then a variant with a 'default' value. It returns the value for key if it exists, or the default value if it doesn't. A template saved me from having to create six new functions myself.
template <typename T>
T get(wxString key, const T& defaultValue)
{
    T temp;
    if (getValue(key, temp))
        return temp;
    else
        return defaultValue;
}
Templates I regularly consume are a multitude of container classes, Boost smart pointers, scope guards, and a few STL algorithms.
Scenarios in which I have written templates:
custom containers
memory management, implementing type safety and CTor/DTor invocation on top of void * allocators
common implementation for overloads with different types, e.g.
bool ContainsNan(float * , int)
bool ContainsNan(double *, int)
which both just call a (local, hidden) helper function
template <typename T>
bool ContainsNanT(T * values, int len) { /* actual code goes here */ }
Specific algorithms that are independent of the type, as long as the type has certain properties, e.g. binary serialization.
template <typename T>
void BinStream::Serialize(T & value) { ... }
// to make a type serializable, you need to implement
void SerializeElement(BinStream & stream, Foo & element);
void DeserializeElement(BinStream & stream, Foo & element);
Unlike virtual functions, templates allow more optimizations to take place.
Generally, templates allow you to implement one concept or algorithm for a multitude of types, and have the differences resolved at compile time.
We use COM and accept a pointer to an object that can implement another interface either directly or via [IServiceProvider](http://msdn.microsoft.com/en-us/library/cc678965(VS.85).aspx); this prompted me to create this helper cast-like function.
// Get interface either via QueryInterface or via QueryService
template <class IFace>
CComPtr<IFace> GetIFace(IUnknown* unk)
{
    CComQIPtr<IFace> ret = unk;   // try QueryInterface first
    if (ret == NULL) {            // fall back to QueryService
        if (CComQIPtr<IServiceProvider> ser = unk)
            ser->QueryService(__uuidof(IFace), __uuidof(IFace), (void**)&ret);
    }
    return ret;
}
I use templates to specify function object types. I often write code that takes a function object as an argument -- a function to integrate, a function to optimize, etc. -- and I find templates more convenient than inheritance. So my code receiving a function object -- such as an integrator or optimizer -- has a template parameter to specify the kind of function object it operates on.
The obvious reasons (like preventing code-duplication by operating on different data types) aside, there is this really cool pattern that's called policy based design. I have asked a question about policies vs strategies.
Now, what's so nifty about this feature? Consider that you are writing an interface for others to use. You know that your interface will be used, because it is a module in its own domain. But you don't know yet how people are going to use it. Policy-based design strengthens your code for future reuse; it makes you independent of the data types a particular implementation relies on. The code is just "slurped in". :-)
Traits are per se a wonderful idea. They can attach particular behaviour, data, and type information to a model. Traits allow complete parameterization of all three of these fields. And best of all, it's a very good way to make code reusable.
I once saw the following code:
void doSomethingGeneric1(SomeClass * c, SomeClass & d)
{
// three lines of code
callFunctionGeneric1(c) ;
// three lines of code
}
repeated ten times:
void doSomethingGeneric2(SomeClass * c, SomeClass & d)
void doSomethingGeneric3(SomeClass * c, SomeClass & d)
void doSomethingGeneric4(SomeClass * c, SomeClass & d)
// Etc
Each function had the same six lines of code copied and pasted, each time calling a different function callFunctionGenericX with the matching number suffix.
There was no way to refactor the whole thing altogether, so I kept the refactoring local.
I changed the code this way (from memory):
template<typename T>
void doSomethingGenericAnything(SomeClass * c, SomeClass & d, T t)
{
// three lines of code
t(c) ;
// three lines of code
}
And modified the existing code with:
void doSomethingGeneric1(SomeClass * c, SomeClass & d)
{
doSomethingGenericAnything(c, d, callFunctionGeneric1) ;
}
void doSomethingGeneric2(SomeClass * c, SomeClass & d)
{
doSomethingGenericAnything(c, d, callFunctionGeneric2) ;
}
Etc.
This is somewhat hijacking the template thing, but in the end, I guess it's better than playing with typedef'd function pointers or using macros.
I personally have used the Curiously Recurring Template Pattern as a means of enforcing some form of top-down design and bottom-up implementation. An example would be a specification for a generic handler where certain requirements on both form and interface are enforced on derived types at compile time. It looks something like this:
template <class Derived>
struct handler_base {
void pre_call() {
// do any universal pre_call handling here
static_cast<Derived *>(this)->pre_call();
};
void post_call(typename Derived::result_type & result) {
static_cast<Derived *>(this)->post_call(result);
// do any universal post_call handling here
};
typename Derived::result_type
operator() (typename Derived::arg_pack const & args) {
pre_call();
typename Derived::result_type temp = static_cast<Derived *>(this)->eval(args);
post_call(temp);
return temp;
};
};
Something like this can be used then to make sure your handlers derive from this template and enforce top-down design and then allow for bottom-up customization:
struct my_handler : handler_base<my_handler> {
typedef int result_type; // required to compile
typedef tuple<int, int> arg_pack; // required to compile
void pre_call(); // required to compile
void post_call(int &); // required to compile
int eval(arg_pack const &); // required to compile
};
This then allows you to have generic polymorphic functions that deal with only handler_base<> derived types:
template <class T, class Arg0, class Arg1>
typename T::result_type
invoke(handler_base<T> & handler, Arg0 const & arg0, Arg1 const & arg1) {
return handler(make_tuple(arg0, arg1));
};
It's already been mentioned that you can use templates as policy classes to do something. I use this a lot.
I also use them, with the help of property maps (see boost site for more information on this), in order to access data in a generic way. This gives the opportunity to change the way you store data, without ever having to change the way you retrieve it.