Suppose I have a calculator class that implements the Strategy Pattern using std::function objects as follows (see Scott Meyers, Effective C++: 55 Specific Ways to Improve Your Programs and Designs):
class Calculator
{
public:
    ...

    std::vector<double> Calculate(double x, double y)
    {
        std::vector<double> res;
        for(const Function& f : functions)
            res.push_back(f(x,y));
        return res;
    }

private:
    std::vector<Function> functions;
};
where
typedef std::function<double(double,double)> Function;
Here is the problem I am facing: suppose functions f and g, both of type Function, internally perform expensive and identical calculations to get their final results. To improve efficiency, one could wrap all the common data in a struct, compute it once, and provide it to them as an argument. However, this design has several flaws. For example, it would change the signature of Function, which can result in unnecessary arguments being passed to some function implementations. Moreover, these common and internal data are no longer hidden from other components in the code, which can harm code simplicity.
I would like to discuss the following optimization strategy: implement a class CacheFG that:
Defines an Update method that calculates its internal data from a given pair of doubles x and y; and
Defines a Check method to determine whether its current internal data was calculated from a given pair of doubles x and y.
What one could do then is make f and g share a common instance of the class CacheFG, which could be done using std::shared_ptr. So, below is the creation of the f and g functions using auxiliary functions f_aux and g_aux.
double f_aux(double x, double y, const std::shared_ptr<CacheFG>& cache)
{
    if(not cache->Check(x,y))
        cache->Update(x,y);
    ...
}
using namespace std::placeholders;

std::shared_ptr<CacheFG> cache = std::make_shared<CacheFG>();
Function f = std::bind(f_aux, _1, _2, cache);
Function g = std::bind(g_aux, _1, _2, cache);
My questions are: (1) is this a safe approach for optimization? (2) is there a better approach for solving this problem?
Edit: After a few answers, I found out that my intention here is to implement a memoization technique in C++. I remark that only the last calculated state is enough for my purposes.
Thanks to DeadMG, I will now write here just an improvement over his approach. His idea consists of using a memoization technique with variadic templates. I just offer a slight modification, where I use the construct std::decay<Args>::type to ensure the definition of a tuple with non-reference types only. Otherwise, functions with const-reference arguments would cause compilation errors.
template<typename Ret, typename... Args>
std::function<Ret(Args...)> MemoizeLast(std::function<Ret(Args...)> f)
{
    std::tuple<typename std::decay<Args>::type...> cache;
    Ret result = Ret();
    return [=](Args... args) mutable -> Ret
    {
        if(std::tie(args...) == cache)
            return Ret(result);
        cache = std::make_tuple(args...);
        return result = f(args...);
    };
}
To prevent result from being moved from, a copy of it is returned (return Ret(result)) when the provided args are the ones cached.
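For illustration, a minimal usage sketch (expensive_f is a hypothetical stand-in for one of the costly implementations; Function is the typedef from above):

double expensive_f(double x, double y);   // hypothetical costly implementation

Function f = MemoizeLast(Function(expensive_f));

double a = f(1.0, 2.0);   // computed by expensive_f and cached
double b = f(1.0, 2.0);   // same arguments: served from the cache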
Why create your own class? There's no need for you to fail to re-create the interface of unordered_map. This functionality can be added as a re-usable algorithm based on std::function and std::unordered_map. It's been a while since I worked with variadic templates, but I hope you get the idea.
template<typename Ret, typename... Args>
std::function<Ret(Args...)> memoize(std::function<Ret(Args...)> t) {
    std::unordered_map<std::tuple<Args...>, Ret> cache;
    return [=](Args... a) mutable -> Ret {
        if (cache.find(std::make_tuple(a...)) != cache.end())
            return cache[std::make_tuple(a...)];
        else
            return cache[std::make_tuple(a...)] = t(a...);
    };
}
I don't recall, offhand, whether std::hash natively supports tuples. If not, you might need to add it, or use std::map which does natively support them.
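For the record, std::hash has no specialization for std::tuple, so as written the unordered_map needs a custom hasher (or switch the cache to std::map, which only needs the tuples' operator<). One possible sketch of such a hasher, combining element hashes in the style of boost::hash_combine, could look like this (not part of the answer above; it assumes C++14's std::index_sequence):

struct tuple_hash
{
    template <typename... Ts>
    std::size_t operator()(std::tuple<Ts...> const& t) const
    {
        std::size_t seed = 0;
        hash_each(t, seed, std::index_sequence_for<Ts...>{});
        return seed;
    }

private:
    template <typename Tuple, std::size_t... Is>
    static void hash_each(Tuple const& t, std::size_t& seed, std::index_sequence<Is...>)
    {
        // fold each element's std::hash into the seed
        (void)std::initializer_list<int>{
            (seed ^= std::hash<typename std::decay<typename std::tuple_element<Is, Tuple>::type>::type>{}(std::get<Is>(t))
                     + 0x9e3779b9 + (seed << 6) + (seed >> 2), 0)...
        };
    }
};

// usage: std::unordered_map<std::tuple<Args...>, Ret, tuple_hash> cache;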
Edit: Hmm, I didn't notice that you wanted to share the cache. Well, this shouldn't be too difficult a problem, just stick an unordered_map member in Calculator and pass it in by reference, but the semantics of doing so seem a bit... odd.
Edit again: Just the most recent value? Even simpler.
template<typename Ret, typename... Args>
std::function<Ret(Args...)> memoize_last(std::function<Ret(Args...)> t) {
    std::tuple<Args...> cache;
    Ret result;
    return [=](Args... a) mutable -> Ret {
        if (std::tie(a...) == cache)
            return result;
        cache = std::make_tuple(a...);
        return result = t(a...);
    };
}
If you want to share between several Functions, then the alteration is the same: just declare it in the class and pass it in by reference.
Before optimizing, measure. Then, if you really do perform many calculations with the same values, create this cache object. I'd hide the cache checking and updating inside CacheFG::get(x, y) and use it like const auto value = cache->get(x, y).
I'm trying to implement a generic ECS library in C++ for learning purposes. I have been thinking about a lot of ways to implement things, but I always run into a problem, so perhaps you could help me with this one.
Let's say I have a constexpr hana::tuple of hana::type_c Components, something like:
struct C1 {};
struct C2 {};
struct C3 {};
constexpr auto components = hana::to_tuple(hana::tuple_t<C1, C2, C3>);
And now I have a component storage type, which is not a problem here, so let's call it Storage (the type differs for each component):
struct Storage {};
I want to link each component, or each component group, with its Storage type. The easy way is to do something like this:
constexpr auto component_storage = hana::make_tuple(
    hana::make_pair(hana::to_tuple(hana::tuple_t<C1, C2>), type_c<Storage>),
    hana::make_pair(hana::to_tuple(hana::tuple_t<C3>), type_c<Storage>)
);
But the problem now is runtime. If I initialize that tuple with the real Storage instances instead of type_c<Storage>, I'll have to loop through the tuple to find the Storage that I need, and all of this happens at runtime, no?
And this is really bad; my last version had something like Component::getStorage() and it was free (but more restrictive).
So the question is: how can I manage to have some getStorage<Component>() function which costs nothing at runtime? Well, by nothing I mean it just returns a reference to the Storage.
EDIT: The only way I have thought of so far is quite simple (which might be a good sign).
Pseudo-Code
struct LinkedStorage {
    hana::tuple<...> storages;
    hana::tuple<hana::pair...> index;
};
At least something like:
constexpr auto components = hana::to_tuple(hana::tuple_t<C1, C2, C3>);
constexpr auto storage = hana::to_tuple(hana::tuple_t<Storage, Storage>);

constexpr auto index = hana::make_tuple(
    hana::make_pair(hana::to_tuple(hana::tuple_t<C1>), 0),
    hana::make_pair(hana::to_tuple(hana::tuple_t<C2, C3>), 1)
);
That way I should be able to find the index at compile time and just access the right element at runtime. But I'm new to metaprogramming, so I guess someone could come up with something far better.
First of all, no need to use to_tuple(tuple_t<...>); you can just use tuple_t<...>. Now, I think what you actually want to do (since you seem to need runtime storage, which makes sense) is:
// "map" of a set of types to a storage of some type
using StorageMap = hana::tuple<
    hana::pair<hana::tuple<hana::type<C1>, hana::type<C2>>, StorageA>,
    hana::pair<hana::tuple<hana::type<C3>>, StorageB>
>;
// Actual object that contains the runtime storage (and the free mapping between types)
StorageMap map;
Now, you can implement your getStorage<Component>() function like this:
template <typename Component>
decltype(auto) getStorage() {
    auto found = index_if(map, [](auto const& pair) {
        return hana::contains(hana::first(pair), hana::type<Component>{});
    });
    return hana::second(hana::at(map, found));
}
where index_if is a trivial variant of the function presented in this answer that would work on an arbitrary predicate instead of a specific element. This functionality will be added to Hana when I get some free time (see related ticket).
It looks like you are trying to make a map that can look up a single instance using different keys. Here is a snippet from an old implementation that I wrote. I modified it a bit, but it should convey the idea.
namespace detail {
    // extractKeys - returns pairs of each element and itself
    struct extract_keys_fn
    {
        template<typename TypesType>
        constexpr auto operator()(TypesType s) const {
            return decltype(hana::unpack(typename TypesType::type{},
                hana::make_tuple
                    ^hana::on^
                hana::reverse_partial(hana::make_pair, s)
            )){};
        }
    };

    constexpr extract_keys_fn extract_keys{};
} //detail
template<typename ...Pair>
struct multi_map
{
    // the keys must be `type<tuple<path...>>`
    using Storage = decltype(hana::make_map(std::declval<Pair>()...));

    // each key is a hana::tuple which contains the keys we
    // want to use to lookup an element
    using Lookup = decltype(hana::unpack(
        hana::flatten(hana::unpack(hana::keys(std::declval<Storage>()),
            hana::make_tuple ^hana::on^ detail::extract_keys)),
        hana::make_map
    ));

    constexpr multi_map()
        : storage()
    { }

    constexpr multi_map(Pair&&... p)
        : storage(hana::make_map(std::forward<Pair>(p)...))
    { }

    constexpr multi_map(Pair const&... p)
        : storage(hana::make_map(p...))
    { }

    constexpr multi_map(Pair&... p)
        : storage(hana::make_map(p...))
    { }

    template<typename T>
    constexpr decltype(auto) operator[](T t) const&
    {
        return hana::at_key(storage, hana::at_key(Lookup{}, t));
    }

    template<typename T>
    constexpr decltype(auto) operator[](T t) &
    {
        return hana::at_key(storage, hana::at_key(Lookup{}, t));
    }

    template<typename T>
    constexpr decltype(auto) operator[](T t) &&
    {
        return hana::at_key(storage, hana::at_key(Lookup{}, t));
    }

    Storage storage;
};
The basic idea above is that storage is a hana::map containing the instances that you need references to. Lookup is then a hana::map that points each key to the key used in storage (which is a tuple of all the keys that point to it). It's basically just a map into a map, but with it you can get a reference to a single instance using any one of the keys.
I want to be able to iterate over a list of classes that inherit from a common ancestor.
Minified version of what I want (Python-like syntax as that's the language I'm coming from):
const *Player *PLAYERS[3] = { *PlayerTypeOne, *PlayerTypeTwo, *PlayerTypeThree};
int outcome = 0;
for player in players {
if (doThingWithPlayer((&player)(), some, other, variables) == true) {
outcome++;
}
}
If this is not the preferred way of doing this sort of operation, advice on how I should continue is very welcome.
The sort of code I want to avoid is:
int outcome = 0;
PlayerTypeOne player_one;
if (doThingWithPlayer(player_one, some, other, variables)) {
    outcome++;
}

PlayerTypeTwo player_two;
if (doThingWithPlayer(player_two, some, other, variables)) {
    outcome++;
}

PlayerTypeThree player_three;
if (doThingWithPlayer(player_three, some, other, variables)) {
    outcome++;
}
You are looking for a factory design pattern:
Player *create_by_name(const std::string &what)
{
    if (what == "PlayerTypeOne")
        return new PlayerTypeOne;
    if (what == "PlayerTypeTwo")
        return new PlayerTypeTwo;
    // ...
}
and so on. What you also appear to want to do is to supply parameters to each subclass's constructors.
If all subclasses take the same constructor parameters, this becomes trivial: pass the parameters to the factory, and just have them forwarded to the constructors.
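For instance, if every subclass happened to take the same two constructor parameters (the parameter list here is made up for illustration), the factory would just forward them:

Player *create_by_name(const std::string &what, int skill, const std::string &team)
{
    if (what == "PlayerTypeOne")
        return new PlayerTypeOne(skill, team);
    if (what == "PlayerTypeTwo")
        return new PlayerTypeTwo(skill, team);
    // ...
    return NULL;
}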
If you need to support different parameters to constructors, this becomes more complicated. I would suggest that you start small, and implement a simple factory for your objects, with no constructor parameters, or with just a couple of them that are the same for all subclasses. Once you have the basic principles working, then you can worry about handling the complicated corner cases.
Then, just have an array of class names, iterate over the array, and call the factory. This should give results similar to your pseudo-Python code.
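For example, the loop could look like this (a sketch: create_by_name is the factory above, while doThingWithPlayer and its extra arguments are taken from your question):

const char *player_names[] = { "PlayerTypeOne", "PlayerTypeTwo", "PlayerTypeThree" };

int outcome = 0;
for (const char *name : player_names) {
    Player *player = create_by_name(name);
    if (doThingWithPlayer(*player, some, other, variables))
        outcome++;
    delete player;   // or keep the players in std::unique_ptr instead
}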
C++ provides no built-in introspection, so you can't just obtain objects that represent your classes and create instances with them.
What you can do is use metaprogramming:
// A list of types
template <class...> struct pack { };

// Calls f with one default-constructed instance of each T
template <class... Ts, class F>
void construct_each(pack<Ts...>, F &&f) {
    // Classic pre-C++17 expansion trick
    using ex = int[];
    (void)ex{(f(Ts{}), void(), 0)..., 0};

    // C++17 version
    // (void)(f(Ts{}), ...);
}
// ...
using Players = pack<PlayerTypeOne, PlayerTypeTwo, PlayerTypeThree>;

void foo() {
    int outcome = 0;
    construct_each(Players{}, [&](auto &&player) {
        if(doThingWithPlayer(player, some, other, variables))
            ++outcome;
    });
}
See it live on Coliru
I have a lot of custom datatypes in one of my projects which all share a common base class.
My data (coming from a database) has a datatype which is distinguished by an enum of the base class. My architecture allows a specific datatype to be specialized with a derived class or it can be handled by the base class.
When I construct one of my specific datatypes I normally call the constructor directly:
Special_Type_X a = Special_Type_X("34.34:fdfh-78");
a.getFoo();
There is some template magic which also allows constructing it like this:
Type_Helper<Base_Type::special_type_x>::Type a = Base_Type::construct<Base_Type::special_type_x>("34.34:fdfh-78");
a.getFoo();
For some values of the type enum there might be no specialization so
Type_Helper<Base_Type::non_specialized_type_1>::Type == Base_Type
When I'm fetching data from the database the datatype isn't known at compile time so there's a third way to construct the datatypes (from a QVariant):
Base_Type a = Base_Type::construct(Base_type::whatever,"12.23#34io{3,3}");
But of course I want the correct constructor to be called, so the implementation of that method used to look like:
switch(t) {
case Base_Type::special_type_x:
    return Base_Type::construct<Base_Type::special_type_x>(var);
case Base_Type::non_specialized_type_1:
    return Base_Type::construct<Base_Type::non_specialized_type_1>(var);
case Base_Type::whatever:
    return Base_Type::construct<Base_Type::whatever>(var);
//.....
}
This code is repetitive and since the base class can handle new types (added to the enum) as well, I came up with the following solution:
// Helper Template Method
template <Base_Type::type_enum bt_itr>
Base_Type construct_switch(const Base_Type::type_enum& bt, const QVariant& v)
{
    if(bt_itr == bt)
        return Base_Type::construct<bt_itr>(v);
    return construct_switch<(Base_Type::type_enum)(bt_itr+1)>(bt, v);
}

// Specialization for the last available (dummy type): num_types
template <>
Base_Type construct_switch<Base_Type::num_types>(const Base_Type::type_enum& bt, const QVariant&)
{
    qWarning() << "Type" << bt << "could not be constructed";
    return Base_Type(); // Creates an invalid Custom Type
}
And my original switch statement is replaced with:
return construct_switch<(Base_Type::type_enum)0>(t,var);
This solution works as expected.
The compiled code is, however, different. While the original switch statement had O(1) complexity, the new approach results in O(n) complexity. The generated code recursively calls my helper method until it finds the correct entry.
Why can't the compiler optimize this properly? Are there any better ways to solve this?
Similar problem:
Replacing switch statements when interfacing between templated and non-templated code
I should mention that I would like to avoid C++11 and C++14 and stick to C++03.
This is what I call the magic switch problem -- how to take a (range of) run time values and turn it into a compile time constant.
Abstractly, you want to generate this switch statement:
switch(n) {
(case I from 0 to n-1: /* use I as a constant */)...
}
You can use parameter packs to generate code that is similar to this in C++.
I'll start with C++14-replacing boilerplate:
template<unsigned...> struct indexes {typedef indexes type;};
template<unsigned max, unsigned... is> struct make_indexes: make_indexes<max-1, max-1, is...> {};
template<unsigned... is> struct make_indexes<0, is...>:indexes<is...> {};
template<unsigned max> using make_indexes_t = typename make_indexes<max>::type;
Now we can create a compile-time sequence of unsigned integers from 0 to n-1 easily. make_indexes_t<50> expands to indexes<0,1,2,3, ..., 48, 49>. The C++14 version does this in O(1) steps, as most (all?) compilers implement std::make_index_sequence with an intrinsic. The above does it with linear recursive depth (at compile time -- nothing is done at run time) and quadratic compile-time memory. This sucks, and you can do better with some work (logarithmic depth, linear memory), but do you have more than a few hundred types? If not, this is good enough.
Next, we build an array of callbacks. As I hate C legacy function pointer syntax, I'll throw in some pointless boilerplate to hide it:
template<typename T> using type = T; // pointless boilerplate that hides C style function syntax
template<unsigned... Is>
Base_Type construct_runtime_helper( indexes<Is...>, Base_Type::type_enum e, QVariant const& v ) {
    // array of pointers to functions: (note static, so created once)
    static type< Base_Type(const QVariant&) >* const constructor_array[] = {
        (&Base_Type::construct< Base_Type::type_enum(Is) >)...
    };
    // find the eth entry, and call it:
    return constructor_array[ unsigned(e) ](v);
}

Base_Type construct_runtime_helper( Base_Type::type_enum e, QVariant const& v ) {
    return construct_runtime_helper( make_indexes_t< Base_Type::num_types >(), e, v );
}
and Bob is your Uncle [1]. An O(1) array lookup (with an O(n) setup, which in theory could be done prior to your executable launching) for dispatch.
[1] "Bob's your Uncle" is a British Commonwealth saying that roughly means "and everything is finished and working".
Are all the functions inline? I'd expect a reasonable compiler to optimize the if tree into a switch, but only if the ifs are in the same function. For portability, you might not want to rely on this.
You can get O(1) with an indirect function call by having construct_switch populate a std::vector<std::function<Base_Type(const QVariant&)>> with lambda functions that do the construction and then dispatch off that.
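A sketch of that idea could look like the following (it reuses the question's Base_Type::construct and, like the suggestion itself, needs C++11 for std::function and lambdas):

typedef std::function<Base_Type(const QVariant&)> Ctor;

// Populate the table with one callable per enum value, using the same
// compile-time recursion as construct_switch.
template <Base_Type::type_enum bt_itr>
void fill_ctor_table(std::vector<Ctor>& table)
{
    table.push_back([](const QVariant& v) { return Base_Type::construct<bt_itr>(v); });
    fill_ctor_table<(Base_Type::type_enum)(bt_itr + 1)>(table);
}

template <>
void fill_ctor_table<Base_Type::num_types>(std::vector<Ctor>&)
{ }

Base_Type construct_dispatch(Base_Type::type_enum t, const QVariant& v)
{
    static std::vector<Ctor> table;   // built once
    if (table.empty())
        fill_ctor_table<(Base_Type::type_enum)0>(table);
    return table[t](v);               // O(1) dispatch
}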
This question was closed as an exact duplicate since I chose a misleading question title. The title was not wrong, but it suggested an issue that is often discussed, e.g. in this question. Since the content is about a more specific topic never covered on Stack Overflow, I would like the question to be reopened. This has now happened, so here goes the question.
I am given a function expecting three integer values as parameters, length(int x, int y, int z);. I cannot modify this function, e.g. to accept a struct or tuple or whatever as a single parameter.
Is there a way in C++ to write another function which can be used as single argument to the function above, like length(arguments());?
Anyhow, the return type of that function arguments(); seems to need to be int, int, int. But as far as I know I can't define and use functions like this in C++. I know that I could return a list, a tuple, a struct or a class from arguments(). The question was closed because some people thought I was asking about this. But the difficult part is to pass the tuple, or struct, or whatever as the three given integer parameters.
Is this possible and if yes, how is that possible in C++? A solution making use of C++11 would be fine.
I don't think there is any direct way of doing what you want, but here is a C++11 technique that I use in several places of my code. The basic idea is to use a template function which I've called call_on_tuple to take a function argument f as well as a tuple of further arguments, expand the tuple and call the function on the expanded list of arguments:
template <typename Fun, typename... Args, unsigned... Is>
typename std::result_of<Fun(Args...)>::type
call_on_tuple(Fun&& f, std::tuple<Args...>&& tup, indices<Is...>)
{ return f(std::get<Is>(tup)...); }
So the idea is that instead of calling
length(arguments());
you would call
call_on_tuple(length,arguments());
This assumes that arguments() is changed so it returns a std::tuple<int,int,int> (this is basically the idea from the question you cited).
Now the difficult part is how to get the Is... argument pack, which is a pack of integers 0,1,2,... used to number the elements of the tuple.
If you are sure you'll always have three arguments, you could use 0,1,2 literally, but if the ambition is to make this work for any n-ary function, we need another trick, which has been described by other posts, for example in several answers to this post.
It's a trick to transform the number of arguments, i.e. sizeof...(Args) into a list of integers 0,1,...,sizeof...(Args):
I'll put this trick and the implementation of call_on_tuple in a namespace detail:
namespace detail {

    template <unsigned... Is>
    struct indices
    { };

    template <unsigned N, unsigned... Is>
    struct index_maker : index_maker<N-1, N-1, Is...>
    { };

    template <unsigned... Is>
    struct index_maker<0, Is...>
    { typedef indices<Is...> type; };

    template <typename Fun, typename... Args, unsigned... Is>
    typename std::enable_if<!std::is_void<typename std::result_of<Fun(Args...)>::type>::value,
                            typename std::result_of<Fun(Args...)>::type>::type
    call_on_tuple(Fun&& f, std::tuple<Args...>&& tup, indices<Is...>)
    { return f(std::get<Is>(tup)...); }

}
Now the actual function call_on_tuple is defined in global namespace like this:
template <typename Fun, typename... Args>
typename std::enable_if<!std::is_void<typename std::result_of<Fun(Args...)>::type>::value,
                        typename std::result_of<Fun(Args...)>::type>::type
call_on_tuple(Fun&& f, std::tuple<Args...>&& tup)
{
    using std::tuple;
    using std::forward;
    using detail::index_maker;

    return detail::call_on_tuple
        (forward<Fun>(f), forward<tuple<Args...>>(tup), typename index_maker<sizeof...(Args)>::type());
}
It basically calls detail::index_maker to generate the list of increasing integers and then calls detail::call_on_tuple with that.
As a result, you can do this:
int length(int x, int y, int z)
{ return x + y + z; }

std::tuple<int,int,int> arguments()
{ return std::tuple<int,int,int> { 1, 2, 3 }; }

int main()
{
    std::cout << call_on_tuple(length, arguments()) << std::endl;
    return 0;
}
which is hopefully close enough to what you needed.
Note. I have also added an enable_if to ensure this is only used with functions f that actually return a value. You can readily make another implementation for functions that return void.
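For completeness, one possible sketch of that void overload, mirroring the code above (this is not part of the original answer):

namespace detail {

    template <typename Fun, typename... Args, unsigned... Is>
    typename std::enable_if<std::is_void<typename std::result_of<Fun(Args...)>::type>::value>::type
    call_on_tuple(Fun&& f, std::tuple<Args...>&& tup, indices<Is...>)
    { f(std::get<Is>(tup)...); }

}

template <typename Fun, typename... Args>
typename std::enable_if<std::is_void<typename std::result_of<Fun(Args...)>::type>::value>::type
call_on_tuple(Fun&& f, std::tuple<Args...>&& tup)
{
    detail::call_on_tuple
        (std::forward<Fun>(f), std::forward<std::tuple<Args...>>(tup),
         typename detail::index_maker<sizeof...(Args)>::type());
}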
Sorry again for closing your question prematurely.
PS. You'll need to add the following include statements to test this:
#include <tuple>
#include <type_traits>
#include <iostream>
It is not possible: C++ does not natively allow a function to provide three return values that can be used as three separate input arguments to another function.
But there are 'tricks' to return multiple values. None of these provides a perfect solution for your question, though, as they cannot be used as a single argument to length() without modifying length().
Use a container object, like a struct, tuple or class
typedef struct { int a, b, c; } myContainer;

myContainer arguments(int x, int y, int z) {
    myContainer result;
    result.a = 1;
    // etc
    return result;
}

myContainer c = arguments(x, y, z);
length(c.a, c.b, c.c);
The trick is to overload the length() function, so it looks like you can use it with a single argument:
void length(myContainer c) {
    length(c.a, c.b, c.c);
}

length(arguments());
Of course you could optimize it further, by using inline, macros, and what not.
I know it is still not exactly what you want, but I think this is the closest approach.
You need to declare a struct { int a, b, c; } or something similar (a class would work too). I take it you have been programming in Python or PHP or some such.
Most programming languages would do this through some form of adapter function. That is a function that will take as argument the function to call (here length) and the arguments to call it with. You can probably build something similar in C++ with templates. Look at the functional header to get inspiration.
A language that natively provides what you are looking for is Perl. You can write:
sub arguments {
    return 1, 2, 3;
}

sub length {
    my ($p1, $p2, $p3) = @_;
    # ... work with $p1, $p2 and $p3
}
length(arguments());
Pass the arguments in by reference so you can change them without returning, or return a struct. You can only return a single value from a function.
We can only return one value, but if you want to return multiple values you can use an array, or define an object or a structure:
int* arguments() {
    static int x[] = {1, 4, 6};   // static, so the array outlives the call
    return x;
}

void length(int i[]);

length(arguments());
At my workplace, we tend to use iostream, string, vector, map, and the odd algorithm or two. We haven't actually found many situations where template techniques were a best solution to a problem.
What I am looking for here are ideas, and optionally sample code that shows how you used a template technique to create a new solution to a problem that you encountered in real life.
As a bribe, expect an up vote for your answer.
General info on templates:
Templates are useful any time you need to use the same code operating on different data types, where the types are known at compile time, and also when you have any kind of container object.
A very common usage is for just about every type of data structure. For example: Singly linked lists, doubly linked lists, trees, tries, hashtables, ...
Another very common usage is for sorting algorithms.
One of the main advantages of using templates is that you can remove code duplication. Code duplication is one of the biggest things you should avoid when programming.
You could implement a function Max as either a macro or a template, but the template implementation would be type safe and therefore better.
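For instance, a sketch of that comparison:

// Macro version: no type checking, and an argument with side effects
// may be evaluated twice.
#define MAX(a, b) ((a) > (b) ? (a) : (b))

// Template version: type safe, and each argument is evaluated exactly once.
template <typename T>
const T& Max(const T& a, const T& b)
{
    return (a > b) ? a : b;
}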
And now onto the cool stuff:
Also see template metaprogramming, which is a way of pre-evaluating code at compile time rather than at run time. Template metaprogramming works only with immutable values, so its "variables" cannot change; because of this, template metaprogramming can be seen as a type of functional programming.
Check out this example of template metaprogramming from Wikipedia. It shows how templates can be used to execute code at compile time. Therefore at runtime you have a pre-calculated constant.
template <int N>
struct Factorial
{
    enum { value = N * Factorial<N - 1>::value };
};

template <>
struct Factorial<0>
{
    enum { value = 1 };
};

// Factorial<4>::value == 24
// Factorial<0>::value == 1

void foo()
{
    int x = Factorial<4>::value; // == 24
    int y = Factorial<0>::value; // == 1
}
I've used a lot of template code, mostly in Boost and the STL, but I've seldom had a need to write any.
One of the exceptions, a few years ago, was in a program that manipulated Windows PE-format EXE files. The company wanted to add 64-bit support, but the ExeFile class that I'd written to handle the files only worked with 32-bit ones. The code required to manipulate the 64-bit version was essentially identical, but it needed to use a different address type (64-bit instead of 32-bit), which caused two other data structures to be different as well.
Based on the STL's use of a single template to support both std::string and std::wstring, I decided to try making ExeFile a template, with the differing data structures and the address type as parameters. There were two places where I still had to use #ifdef WIN64 lines (slightly different processing requirements), but it wasn't really difficult to do. We've got full 32- and 64-bit support in that program now, and using the template means that every modification we've done since automatically applies to both versions.
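The shape of the change was roughly the following (a hypothetical sketch, not the real code; the structure names are placeholders for the PE32 / PE32+ specific structures):

#include <cstdint>

struct OptionalHeader32 { std::uint32_t imageBase; /* ... */ };
struct OptionalHeader64 { std::uint64_t imageBase; /* ... */ };

// The address type and the format-specific structures become template
// parameters, so one implementation serves both file formats.
template <typename AddressT, typename OptionalHeaderT>
class ExeFile
{
public:
    AddressT imageBase() const { return header_.imageBase; }
    // ... the rest of the manipulation code, identical for both variants
private:
    OptionalHeaderT header_;
};

typedef ExeFile<std::uint32_t, OptionalHeader32> ExeFile32;
typedef ExeFile<std::uint64_t, OptionalHeader64> ExeFile64;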
One place where I do use templates to create my own code is to implement policy classes as described by Andrei Alexandrescu in Modern C++ Design. At present I'm working on a project that includes a set of classes that interact with BEA's (now Oracle's) Tuxedo TP monitor.
One facility that Tuxedo provides is transactional persistent queues, so I have a class TpQueue that interacts with the queue:
class TpQueue {
public:
    void enqueue(...);
    void dequeue(...);
    ...
};
However, as the queue is transactional, I need to decide what transaction behaviour I want; this could be done separately outside the TpQueue class, but I think it's more explicit and less error prone if each TpQueue instance has its own policy on transactions. So I have a set of TransactionPolicy classes such as:
class OwnTransaction {
public:
    begin(...)  // Suspend any open transaction and start a new one
    commit(..)  // Commit my transaction and resume any suspended one
    abort(...)
};

class SharedTransaction {
public:
    begin(...)  // Join the currently active transaction or start a new one if there isn't one
    ...
};
And the TpQueue class gets re-written as
template <typename TXNPOLICY = SharedTransaction>
class TpQueue : public TXNPOLICY {
    ...
};
So inside TpQueue I can call begin(), abort(), commit() as needed but can change the behaviour based on the way I declare the instance:
TpQueue<SharedTransaction> queue1;
TpQueue<OwnTransaction> queue2;
I used templates (with the help of Boost.Fusion) to achieve type-safe integers for a hypergraph library that I was developing. I have a (hyper)edge ID and a vertex ID both of which are integers. With templates, vertex and hyperedge IDs became different types and using one when the other was expected generated a compile-time error. Saved me a lot of headache that I'd otherwise have with run-time debugging.
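The idea, roughly (a sketch with made-up names, not the library's actual code):

// A tagged integer: Id<VertexTag> and Id<EdgeTag> are distinct types,
// so mixing them up is caught at compile time.
template <typename Tag>
struct Id
{
    explicit Id(int v) : value(v) { }
    int value;
};

struct VertexTag { };
struct EdgeTag { };

typedef Id<VertexTag> VertexId;
typedef Id<EdgeTag>   HyperedgeId;

void connect(HyperedgeId e, VertexId v);   // hypothetical signature

// connect(VertexId(1), HyperedgeId(2));   // error: argument types are swapped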
Here's one example from a real project. I have getter functions like this:
bool getValue(wxString key, wxString& value);
bool getValue(wxString key, int& value);
bool getValue(wxString key, double& value);
bool getValue(wxString key, bool& value);
bool getValue(wxString key, StorageGranularity& value);
bool getValue(wxString key, std::vector<wxString>& value);
And then a variant with the 'default' value. It returns the value for key if it exists, or default value if it doesn't. Template saved me from having to create 6 new functions myself.
template <typename T>
T get(wxString key, const T& defaultValue)
{
    T temp;
    if (getValue(key, temp))
        return temp;
    else
        return defaultValue;
}
Templates I regularly consume are a multitude of container classes, Boost smart pointers, scope guards, and a few STL algorithms.
Scenarios in which I have written templates:
custom containers
memory management, implementing type safety and CTor/DTor invocation on top of void * allocators
common implementation for overloads with different types, e.g.
bool ContainsNan(float * , int)
bool ContainsNan(double *, int)
which both just call a (local, hidden) helper function
template <typename T>
bool ContainsNanT(T * values, int len) { /* ... actual code goes here */ }
Specific algorithms that are independent of the type, as long as the type has certain properties, e.g. binary serialization.
template <typename T>
void BinStream::Serialize(T & value) { ... }
// to make a type serializable, you need to implement
void SerializeElement(BinStream & stream, Foo & element);
void DeserializeElement(BinStream & stream, Foo & element);
Unlike virtual functions, templates allow more optimizations to take place.
Generally, templates allow you to implement one concept or algorithm for a multitude of types, and have the differences resolved already at compile time.
We use COM and accept a pointer to an object that can implement another interface either directly or via [IServiceProvider](http://msdn.microsoft.com/en-us/library/cc678965(VS.85).aspx). This prompted me to create this helper cast-like function.
// Get interface either via QueryInterface or via QueryService
template <class IFace>
CComPtr<IFace> GetIFace(IUnknown* unk)
{
    CComQIPtr<IFace> ret = unk;  // Try QueryInterface
    if (ret == NULL) {           // Fall back to QueryService
        if (CComQIPtr<IServiceProvider> ser = unk)
            ser->QueryService(__uuidof(IFace), __uuidof(IFace), (void**)&ret);
    }
    return ret;
}
I use templates to specify function object types. I often write code that takes a function object as an argument -- a function to integrate, a function to optimize, etc. -- and I find templates more convenient than inheritance. So my code receiving a function object -- such as an integrator or optimizer -- has a template parameter to specify the kind of function object it operates on.
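A minimal sketch of what that looks like (the midpoint-rule integrator here is just an illustration, not code from my project):

// The kind of function object is a template parameter instead of an
// abstract base class, so function pointers, functors and lambdas all work.
template <typename F>
double integrate(F f, double a, double b, int n)
{
    double h = (b - a) / n;
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += f(a + (i + 0.5) * h);   // midpoint rule
    return sum * h;
}

// double area = integrate([](double x) { return x * x; }, 0.0, 1.0, 1000);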
The obvious reasons (like preventing code-duplication by operating on different data types) aside, there is this really cool pattern that's called policy based design. I have asked a question about policies vs strategies.
Now, what's so nifty about this feature? Consider you are writing an interface for others to use. You know that your interface will be used, because it is a module in its own domain. But you don't know yet how people are going to use it. Policy-based design strengthens your code for future reuse; it makes you independent of the data types a particular implementation relies on. The code is just "slurped in". :-)
Traits are per se a wonderful idea. They can attach particular behaviour, data and type data to a model. Traits allow complete parameterization of all three of these fields. And best of all, they are a very good way to make code reusable.
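A tiny illustration of the traits idea (the names are made up; this is the classic accumulation-traits style of example, not code from any specific library):

// The primary template supplies sensible defaults...
template <typename T>
struct accumulation_traits
{
    typedef T sum_type;
    static T zero() { return T(); }
};

// ...and a specialization attaches different behaviour and type data
// to a particular model: chars are summed as ints.
template <>
struct accumulation_traits<char>
{
    typedef int sum_type;
    static int zero() { return 0; }
};

template <typename T>
typename accumulation_traits<T>::sum_type sum(const T* begin, const T* end)
{
    typename accumulation_traits<T>::sum_type total = accumulation_traits<T>::zero();
    for (; begin != end; ++begin)
        total += *begin;
    return total;
}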
I once saw the following code:
void doSomethingGeneric1(SomeClass * c, SomeClass & d)
{
    // three lines of code
    callFunctionGeneric1(c) ;
    // three lines of code
}
repeated ten times:
void doSomethingGeneric2(SomeClass * c, SomeClass & d)
void doSomethingGeneric3(SomeClass * c, SomeClass & d)
void doSomethingGeneric4(SomeClass * c, SomeClass & d)
// Etc
Each function having the same 6 lines of code copy/pasted, and each time calling another function callFunctionGenericX with the same number suffix.
There was no way to refactor the whole thing altogether, so I kept the refactoring local.
I changed the code this way (from memory):
template<typename T>
void doSomethingGenericAnything(SomeClass * c, SomeClass & d, T t)
{
    // three lines of code
    t(c) ;
    // three lines of code
}
And modified the existing code with:
void doSomethingGeneric1(SomeClass * c, SomeClass & d)
{
    doSomethingGenericAnything(c, d, callFunctionGeneric1) ;
}

void doSomethingGeneric2(SomeClass * c, SomeClass & d)
{
    doSomethingGenericAnything(c, d, callFunctionGeneric2) ;
}
Etc.
This is somewhat hijacking the template thing, but in the end, I guess it's better than playing with typedef'd function pointers or using macros.
I personally have used the Curiously Recurring Template Pattern as a means of enforcing some form of top-down design and bottom-up implementation. An example would be a specification for a generic handler where certain requirements on both form and interface are enforced on derived types at compile time. It looks something like this:
// Each handler specializes handler_traits, so that handler_base can name the
// handler's result and argument types before the handler type itself is complete.
template <class Derived>
struct handler_traits;

template <class Derived>
struct handler_base {
    typedef typename handler_traits<Derived>::result_type result_type;
    typedef typename handler_traits<Derived>::arg_pack arg_pack;

    void pre_call() {
        // do any universal pre_call handling here
        static_cast<Derived *>(this)->pre_call();
    }

    void post_call(result_type & result) {
        static_cast<Derived *>(this)->post_call(result);
        // do any universal post_call handling here
    }

    result_type operator()(arg_pack const & args) {
        pre_call();
        result_type temp = static_cast<Derived *>(this)->eval(args);
        post_call(temp);
        return temp;
    }
};
Something like this can be used then to make sure your handlers derive from this template and enforce top-down design and then allow for bottom-up customization:
struct my_handler;   // forward declaration for the traits specialization

template <>
struct handler_traits<my_handler> {
    typedef int result_type;          // required to compile
    typedef tuple<int, int> arg_pack; // required to compile
};

struct my_handler : handler_base<my_handler> {
    void pre_call();            // required to compile
    void post_call(int &);      // required to compile
    int eval(arg_pack const &); // required to compile
};
This then allows you to have generic polymorphic functions that deal with only handler_base<> derived types:
template <class T, class Arg0, class Arg1>
typename T::result_type
invoke(handler_base<T> & handler, Arg0 const & arg0, Arg1 const & arg1) {
    return handler(make_tuple(arg0, arg1));
}
It's already been mentioned that you can use templates as policy classes to do something. I use this a lot.
I also use them, with the help of property maps (see boost site for more information on this), in order to access data in a generic way. This gives the opportunity to change the way you store data, without ever having to change the way you retrieve it.