Efficient configuration of class hierarchy at compile-time - c++

This question is specifically about C++ architecture on embedded, hard real-time systems. This implies that large parts of the data-structures as well as the exact program-flow are given at compile-time, performance is important and a lot of code can be inlined. Solutions preferably use C++03 only, but C++11 inputs are also welcome.
I am looking for established design-patterns and solutions to the architectural problem where the same code-base should be re-used for several, closely related products, while some parts (e.g. the hardware-abstraction) will necessarily be different.
I will likely end up with a hierarchical structure of modules encapsulated in classes that might then look something like this, assuming 4 layers:
Product A           Product B
Toplevel_A          Toplevel_B          (different for A and B, but with common parts)
Middle_generic      Middle_generic      (same for A and B)
Sub_generic         Sub_generic         (same for A and B)
Hardware_A          Hardware_B          (different for A and B)
Here, some classes inherit from a common base class (e.g. Toplevel_A from Toplevel_base) while others do not need to be specialized at all (e.g. Middle_generic).
Currently I can think of the following approaches:
(A): If this were a regular desktop application, I would use inheritance with virtual functions and create the instances at run-time, using e.g. an Abstract Factory.
Drawback: The *_B classes will never be used in product A, yet every virtual call still goes through an indirection whose target cannot be resolved to a fixed address before run-time, which leads to quite some overhead.
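For comparison, here is a minimal, hedged sketch of what (A) could look like for the hardware layer; Hardware_base and readSensor are hypothetical names chosen only for illustration:
// Hedged sketch of option (A): the hardware layer behind an abstract interface,
// with the concrete class chosen at run-time.
class Hardware_base
{
public:
    virtual ~Hardware_base() {}
    virtual int readSensor() = 0; // hypothetical example operation
};
class Hardware_A : public Hardware_base
{
public:
    virtual int readSensor() { return 0; /* product-A specific access */ }
};
class Hardware_B : public Hardware_base
{
public:
    virtual int readSensor() { return 1; /* product-B specific access */ }
};
// Generic code only sees the base class; every call goes through the vtable.
int readTwice(Hardware_base &hw)
{
    return hw.readSensor() + hw.readSensor();
}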
(B): Using template specialization as the inheritance mechanism (e.g. CRTP)
template<class Derived>
class Toplevel { /* generic stuff ... */ };
class Toplevel_A : public Toplevel<Toplevel_A> { /* specific stuff ... */ };
Drawback: Hard to understand.
(C): Use different sets of matching files and let the build-scripts include the right one
// common/toplevel_base.h
class Toplevel_base { /* ... */ };
// product_A/toplevel.h
class Toplevel : Toplevel_base { /* ... */ };
// product_B/toplevel.h
class Toplevel : Toplevel_base { /* ... */ };
// build_script.A
compiler -Icommon -Iproduct_A
Drawback: Confusing, tricky to maintain and test.
(D): One big typedef (or #define) file
//typedef_A.h
typedef Toplevel_A Toplevel_to_be_used;
typedef Hardware_A Hardware_to_be_used;
// etc.
// sub_generic.h
class Sub_generic {
Hardware_to_be_used the_hardware;
// etc.
};
Drawback: One file has to be included everywhere, and another mechanism is still needed to actually switch between the different configurations.
(E): A similar, "Policy based" configuration, e.g.
template <class Policy>
class Toplevel {
Middle_generic<Policy> the_middle;
// ...
};
// ...
template <class Policy>
class Sub_generic {
typename Policy::Hardware_to_be_used the_hardware;
// ...
};
// used as
class Policy_A {
typedef Hardware_A Hardware_to_be_used;
};
Toplevel<Policy_A> the_toplevel;
Drawback: Everything is a template now; a lot of code needs to be re-compiled every time.
(F): Compiler switch and preprocessor
// sub_generic.h
class Sub_generic {
#if PRODUCT_IS_A
Hardware_A _hardware;
#endif
#if PRODUCT_IS_B
Hardware_B _hardware;
#endif
};
Drawback: Brrr..., only if all else fails.
Is there any (other) established design-pattern or a better solution to this problem, such that the compiler can statically allocate as many objects as possible and inline large parts of the code, knowing which product is being built and which classes are going to be used?

I'd go for A. Until it's PROVEN that this is not good enough, go for the same decisions as for desktop (well, of course, it may be "obvious" that some things won't work, such as allocating several kilobytes on the stack or using global variables that are many megabytes large). Yes, there is SOME overhead in calling virtual functions, but I would go for the most obvious and natural C++ solution FIRST, then redesign if it's not "good enough" (obviously, try to determine performance and such early on, and use tools like a sampling profiler to determine where you are spending time, rather than "guessing" - humans are proven to be pretty poor guessers).
I'd then move to option B if A is proven to not work. This is indeed not entirely obvious, but it is, roughly, how LLVM/Clang solves this problem for combinations of hardware and OS, see:
https://github.com/llvm-mirror/clang/blob/master/lib/Basic/Targets.cpp

First I would like to point out that you basically answered your own question in the question :-)
Next I would like to point out that, in C++, a design where
"the exact program flow is given at compile-time, performance is important and a lot of code can be inlined"
is called templates. The other approaches that leverage language features (as opposed to build-system features) will serve only as a logical way of structuring the code in your project, to the benefit of developers.
Further, as noted in other answers, C is more common than C++ for hard real-time systems, and in C it is customary to rely on macros to make this kind of optimization at compile time.
Finally, you have noted under your B solution above that template specialization is hard to understand. I would argue that this depends on how you do it and also on how much experience your team has with C++ and templates. I find many "template-ridden" projects extremely hard to read, and the error messages they produce to be unholy at best, but I still manage to make effective use of templates in my own projects because I respect the KISS principle while doing it.
So my answer to you is: go with B, or ditch C++ for C.

I understand that you have two important requirements:
Data types are known at compile time
Program-flow is known at compile time
The CRTP wouldn't really address the problem you are trying to solve, as it would allow the HardwareLayer to call methods on the Sub_generic, Middle_generic or TopLevel, and I don't believe that is what you are looking for.
Both of your requirements can be met using the Trait pattern (another reference). Here is an example proving both requirements are met. First, we define empty shells representing two Hardwares you might want to support.
class Hardware_A {};
class Hardware_B {};
Then let's consider a class that describes a general case which corresponds to Hardware_A.
template <typename Hardware>
class HardwareLayer
{
public:
typedef long int64_t;
static int64_t getCPUSerialNumber() {return 0;}
};
Now let's see a specialization for Hardware_B:
template <>
class HardwareLayer<Hardware_B>
{
public:
typedef int int64_t;
static int64_t getCPUSerialNumber() {return 1;}
};
Now, here is a usage example within the Sub_generic layer:
template <typename Hardware>
class Sub_generic
{
public:
typedef HardwareLayer<Hardware> HwLayer;
typedef typename HwLayer::int64_t int64_t;
int64_t doSomething() {return HwLayer::getCPUSerialNumber();}
};
And finally, a short main that executes both code paths and uses both data types:
#include <iostream>
int main(int argc, const char * argv[]) {
std::cout << "Hardware_A : " << Sub_generic<Hardware_A>().doSomething() << std::endl;
std::cout << "Hardware_B : " << Sub_generic<Hardware_B>().doSomething() << std::endl;
}
Now if your HardwareLayer needs to maintain state, here is another way to implement the HardwareLayer and Sub_generic layer classes.
template <typename Hardware>
class HardwareLayer
{
public:
typedef long hwint64_t;
hwint64_t getCPUSerialNumber() {return mySerial;}
private:
hwint64_t mySerial = 0;
};
template <>
class HardwareLayer<Hardware_B>
{
public:
typedef int hwint64_t;
hwint64_t getCPUSerialNumber() {return mySerial;}
private:
hwint64_t mySerial = 1;
};
template <typename Hardware>
class Sub_generic : public HardwareLayer<Hardware>
{
public:
typedef HardwareLayer<Hardware> HwLayer;
typedef typename HwLayer::hwint64_t hwint64_t;
hwint64_t doSomething() {return HwLayer::getCPUSerialNumber();}
};
And here is a last variant where only the Sub_generic implementation changes:
template <typename Hardware>
class Sub_generic
{
public:
typedef HardwareLayer<Hardware> HwLayer;
typedef typename HwLayer::hwint64_t hwint64_t;
hwint64_t doSomething() {return hw.getCPUSerialNumber();}
private:
HwLayer hw;
};
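A short usage sketch of this last variant, assuming the stateful HardwareLayer definitions above (note that the in-class member initializers require C++11):
#include <iostream>

int main()
{
    Sub_generic<Hardware_A> a;
    Sub_generic<Hardware_B> b;
    // Each instantiation carries its own HwLayer member and its own hwint64_t type.
    std::cout << "Hardware_A : " << a.doSomething() << std::endl; // prints 0
    std::cout << "Hardware_B : " << b.doSomething() << std::endl; // prints 1
    return 0;
}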

On a similar train of thought to F, you could just have a directory layout like this:
Hardware/
    common/inc/hardware.h
    hardware1/src/hardware.cpp
    hardware2/src/hardware.cpp
Simplify the interface to only assume a single hardware exists:
// sub_generic.h
class Sub_generic {
Hardware _hardware;
};
And then only compile the folder that contains the .cpp files for the hardware for that platform.
The benefits to this approach are:
It's simple to understand what's happening and to add a hardware3
hardware.h still serves as your API
It takes away the abstraction from the compiler (for your speed concerns)
Compiler 1 doesn't need to compile hardware2.cpp or hardware3.cpp which may contain things Compiler 1 can't do (like inline assembly, or some other specific Compiler 2 thing)
hardware3 might be much more complicated for some reason you haven't considered yet.. so giving it a whole directory structure encapsulates it.
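A minimal sketch of how the pieces could fit together; the Hardware class follows the snippet above, while init() and readTemperature() are hypothetical methods used only for illustration:
// common/inc/hardware.h - single API, no product-specific names
#ifndef HARDWARE_H
#define HARDWARE_H
class Hardware
{
public:
    void init();
    int readTemperature(); // hypothetical example method
};
#endif

// hardware1/src/hardware.cpp - only this file is compiled for product 1
#include "hardware.h"
void Hardware::init() { /* product-1 specific register setup */ }
int Hardware::readTemperature() { return 1; /* product-1 specific access */ }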

Since this is for a hard real-time embedded system, usually you would go for a C-style solution rather than C++.
With modern compilers I'd say that the overhead of C++ is not that great, so it's not entirely a matter of performance, but embedded systems tend to prefer C instead of C++.
What you are trying to build would resemble a classic device-driver library (like the one for FTDI chips).
The approach there would be (since it's written in C) something similar to your F, but with no compile-time options - you would specialize the code, at runtime, based on something like PID, VID, SN, etc...
Now if you want to use C++ for this, templates should probably be your last option (code readability usually ranks higher than any advantage templates bring to the table). So you would probably go for something similar to A: a basic class inheritance scheme, but no particularly fancy design pattern is required.
Hope this helps...

I am going to assume that these classes only need to be created a single time, and that their instances persist throughout the entire program run time.
In this case I would recommend using the Object Factory pattern since the factory will only get run one time to create the class. From that point on the specialized classes are all a known type.
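A minimal sketch of that idea, reusing the naming from the question (Toplevel_base, Toplevel_A, Toplevel_B); this only illustrates a factory that runs once at startup, not the questioner's actual code:
class Toplevel_base
{
public:
    virtual ~Toplevel_base() {}
    virtual void run() = 0;
};
class Toplevel_A : public Toplevel_base
{
public:
    virtual void run() { /* product A behaviour */ }
};
class Toplevel_B : public Toplevel_base
{
public:
    virtual void run() { /* product B behaviour */ }
};
// Called exactly once at startup; afterwards the rest of the program
// just uses the returned object.
Toplevel_base* create_toplevel(bool product_is_a)
{
    if (product_is_a)
        return new Toplevel_A;
    return new Toplevel_B;
}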

Related

how can I wrap libraries with templated code which requires runtime dispatch and/or inheritance

Motivation: In a suite of software, we make use of multiple unix-philosophied binaries that accomplish various tasks. These binaries will load configuration files of various types (yaml, json, xml, ini) and it's come to the point that we need to wrap these various features up into a tidy wrapper. All of the libraries (simdjson, yamlcpp, rapidxml) come with fairly modern C++ interfaces, but they are all slightly different from each other.
For instance, while yamlcpp checks the existence of a key simply by casting a node to bool, in simdjson you need to get the element and compare its .error() value to field not found constant.
All of the above means that subtle bugs are becoming more and more common because of slightly different handling methods, and it is a burden to ask the developer to be wary of each library's idiosyncrasies. Also, some libraries load either json or yaml files, and this means that there are two code paths to load the same functional configuration.
Of particular note is that casting is different in each library and the exceptions raised (sometimes none) are not uniform.
(Since this is a one-time startup configuration problem, speed is not a major concern.)
Solution: the solution in principle is quite simple. I'd like to create an extremely thin templated wrapper around the libraries.
The problem I'm running into, however, is that the configuration file is fed through at run-time, and so this can't simply be a templated compile-time solution.
The Question: having presented the motivation and attempt at solution, here's my problem
Consider the following minimal code:
#include <iostream>
class simdjson {};
class YAML {}; // for compilability of example
struct Node
{
operator int() const { return 123; }
};
template<typename T> struct NodeImpl : public Node
{
operator int() const;
/* ... a bunch of common code ... */
};
/* ... tiny bits of specialization ... */
template<> NodeImpl<YAML>::operator int() const {return 42; /* return node.asInt(); */ }
template<> NodeImpl<simdjson>::operator int() const {return 52; /* return node.as<int>(); */}
int main() {
Node no_worky = NodeImpl<YAML>();
NodeImpl<YAML> a = NodeImpl<YAML>();
std::cout << "expect 42, got: " << (int)a << std::endl
<< "expect 42, got: " << (int)no_worky;
return 0;
}
There are 2 problems:
the fact that the type of configuration format can be determined at runtime requires this to be something other than a strictly templated solution (hence the inheritance)
making it inheritance based means that I (think) I have no solution other than virtual pointers and a virtual base Node class. This saddens me because I can't pass around the nodes by value (which I normally would be able to do in a purely templated implementation).
Is there an elegant solution I'm missing here, or is it simply that the runtime constraint means that I am constrained to use virtual base classes.
Edit after quite a bit of mucking around, I've come to the conclusion that this should be implemented entirely statically, and that the configuration code should be an unspecialized templated call like so:
template<typename K> void configure(NodeImpl<K> root)
{
std::cout << "expect 42, got: " << (int)root << std::endl;
/* ... do configuration stuff ... */
}
int main() {
if( /* file is yaml */ )
configure(NodeImpl<YAML>());
else if (/* file is json */)
configure(NodeImpl<simdjson>());
return 0;
}
At first this felt like code duplication until I realized that one way or another, 2 branches of code will be emitted - whether I hide that behind a virtualization or not.
The above approach is entirely templated, and is entirely more sane.

C++ handling specific impl - #ifdef vs private inheritance vs tag dispatch

I have some classes implementing some computations which I have
to optimize for different SIMD implementations e.g. Altivec and
SSE. I don't want to pollute the code with #ifdef ... #endif blocks
for each method I have to optimize so I tried a couple of other
approaches, but unfortunately I'm not very satisfied with how it turned
out for reasons I'll try to clarify. So I'm looking for some advice
on how I could improve what I have already done.
1. Different implementation files with crude includes
I have the same header file describing the class interface with different
"pseudo" implementation files for plain C++, Altivec and SSE only for the
relevant methods:
// Algo.h
#ifndef ALGO_H_INCLUDED_
#define ALGO_H_INCLUDED_
class Algo
{
public:
Algo();
~Algo();
void process();
protected:
void computeSome();
void computeMore();
};
#endif
// Algo.cpp
#include "Algo.h"
Algo::Algo() { }
Algo::~Algo() { }
void Algo::process()
{
computeSome();
computeMore();
}
#if defined(ALTIVEC)
#include "Algo_Altivec.cpp"
#elif defined(SSE)
#include "Algo_SSE.cpp"
#else
#include "Algo_Scalar.cpp"
#endif
// Algo_Altivec.cpp
void Algo::computeSome()
{
}
void Algo::computeMore()
{
}
... same for the other implementation files
Pros:
the split is quite straightforward and easy to do
there is no "overhead"(don't know how to say it better) to objects of my class
by which I mean no extra inheritance, no addition of member variables etc.
much cleaner than #ifdef-ing all over the place
Cons:
I have three additional files to maintain; I could put the Scalar
implementation in the Algo.cpp file though and end up with just two, but the
inclusion part will look and feel a bit dirtier
they are not compilable units per se and have to be excluded from the
project structure
if I do not have the specific optimized implementation yet for, let's say,
SSE, I would have to duplicate some code from the plain (Scalar) C++ implementation file
I cannot fall back to the plain C++ implementation if needed; is it even possible
to do that in the described scenario?
I do not feel any structural cohesion in the approach
2. Different implementation files with private inheritance
// Algo.h
class Algo : private AlgoImpl
{
... as before
}
// AlgoImpl.h
#ifndef ALGOIMPL_H_INCLUDED_
#define ALGOIMPL_H_INCLUDED_
class AlgoImpl
{
protected:
AlgoImpl();
~AlgoImpl();
void computeSomeImpl();
void computeMoreImpl();
};
#endif
// Algo.cpp
...
void Algo::computeSome()
{
computeSomeImpl();
}
void Algo::computeMore()
{
computeMoreImpl();
}
// Algo_SSE.cpp
AlgoImpl::AlgoImpl()
{
}
AlgoImpl::~AlgoImpl()
{
}
void AlgoImpl::computeSomeImpl()
{
}
void AlgoImpl::computeMoreImpl()
{
}
Pros:
the split is quite straightforward and easy to do
much cleaner than #ifdef-ing all over the place
still there is no "overhead" to my class - EBCO should kick in
the semantics of the class are much cleaner, at least compared to the above,
that is, private inheritance == is-implemented-in-terms-of
the different files are compilable, can be included in the project
and selected via the build system
Cons:
I have three additional files for maintenance
if I do not have the specific optimized implementation yet for, let's say,
SSE, I would have to duplicate some code from the plain (Scalar) C++ implementation file
I cannot fall back to the plain C++ implementation if needed
3. Is basically method 2, but with virtual functions in the AlgoImpl class. That
would allow me to overcome the duplicate implementation of plain C++ code if needed,
by providing an empty implementation in the base class and overriding it in the derived class,
although I will have to disable that behavior when I actually implement the optimized
version. Also, the virtual functions will bring some "overhead" to objects of my class.
4. A form of tag dispatching via enable_if<>
Pros:
the split is quite straightforward and easy to do
much cleaner than #ifdef ing all over the place
still there is no "overhead" to my class
will eliminate the need for different files for different implementations
Cons:
templates will be a bit more "cryptic" and seem to bring an unnecessary
overhead (at least for some people in some contexts)
if I do not have the specific optimized implementation yet for, let's say,
SSE, I would have to duplicate some code from the plain (Scalar) C++ implementation
I cannot fall back to the plain C++ implementation if needed
What I couldn't figure out yet for any of the variants is how to properly and
cleanly fall back to the plain C++ implementation.
Also I don't want to over-engineer things and in that respect the first variant
seems the most "KISS" like even considering the disadvantages.
You could use a policy based approach with templates kind of like the way the standard library does for allocators, comparators and the like. Each implementation has a policy class which defines computeSome() and computeMore(). Your Algo class takes a policy as a parameter and defers to its implementation.
template <class policy_t>
class algo_with_policy_t {
policy_t policy_;
public:
algo_with_policy_t() { }
~algo_with_policy_t() { }
void process()
{
policy_.computeSome();
policy_.computeMore();
}
};
struct altivec_policy_t {
void computeSome();
void computeMore();
};
struct sse_policy_t {
void computeSome();
void computeMore();
};
struct scalar_policy_t {
void computeSome();
void computeMore();
};
// let user select exact implementation
typedef algo_with_policy_t<altivec_policy_t> algo_altivec_t;
typedef algo_with_policy_t<sse_policy_t> algo_sse_t;
typedef algo_with_policy_t<scalar_policy_t> algo_scalar_t;
// let user have default implementation
typedef
#if defined(ALTIVEC)
algo_altivec_t
#elif defined(SSE)
algo_sse_t
#else
algo_scalar_t
#endif
algo_default_t;
This lets you have all the different implementations defined within the same file (like solution 1) and compiled into the same program (unlike solution 1). It has no performance overheads (unlike virtual functions). You can either select the implementation at run time or get a default implementation chosen by the compile time configuration.
template <class algo_t>
void use_algo(algo_t algo)
{
algo.process();
}
void select_algo(bool use_scalar)
{
if (!use_scalar) {
use_algo(algo_default_t());
} else {
use_algo(algo_scalar_t());
}
}
As requested in the comments, here's a summary of what I did:
Set up a policy_list helper template utility
This maintains a list of policies, and gives each a "runtime check" call before invoking the first suitable implementation
#include <cassert>
template <typename P, typename N=void>
struct policy_list {
static void apply() {
if (P::runtime_check()) {
P::impl();
}
else {
N::apply();
}
}
};
template <typename P>
struct policy_list<P,void> {
static void apply() {
assert(P::runtime_check());
P::impl();
}
};
Set up specific policies
These policies implement both a runtime test and an actual implementation of the algorithm in question. For my actual problem, impl took another template parameter that specified what exactly it was they were implementing; here, though, the example assumes there is only one thing to be implemented. The runtime tests are cached in a static bool because for some (e.g. the Altivec one I used) the test was really slow. For others (e.g. the OpenCL one) the test is actually "is this function pointer NULL?" after one attempt at setting it with dlsym().
#include <iostream>
// runtime SSE detection (That's another question!)
extern bool have_sse();
struct sse_policy {
static void impl() {
std::cout << "SSE" << std::endl;
}
static bool runtime_check() {
static bool result = have_sse();
// have_sse lives in another TU and does some cpuid asm stuff
return result;
}
};
// Runtime OpenCL detection
extern bool have_opencl();
struct opencl_policy {
static void impl() {
std::cout << "OpenCL" << std::endl;
}
static bool runtime_check() {
static bool result = have_opencl();
// have_opencl lives in another TU and does some LoadLibrary or dlopen()
return result;
}
};
struct basic_policy {
static void impl() {
std::cout << "Standard C++ policy" << std::endl;
}
static bool runtime_check() { return true; } // All implementations do this
};
Set per architecture policy_list
This trivial example sets one of two possible lists based on the ARCH_HAS_SSE preprocessor macro. You might generate this from your build script, or use a series of typedefs, or hack support for "holes" in the policy_list that might be void on some architectures, skipping straight to the next one without trying to check for support. GCC sets some preprocessor macros for you that might help, e.g. __SSE2__.
#ifdef ARCH_HAS_SSE
typedef policy_list<opencl_policy,
policy_list<sse_policy,
policy_list<basic_policy
> > > active_policy;
#else
typedef policy_list<opencl_policy,
policy_list<basic_policy
> > active_policy;
#endif
You can use this to compile multiple variants on the same platform too, e.g. an SSE and a no-SSE binary on x86.
Use the policy list
Fairly straightforward, call the apply() static method on the policy_list. Trust that it will call the impl() method on the first policy that passes the runtime test.
int main() {
active_policy::apply();
}
If you take the "per operation template" approach I mentioned earlier it might be something more like:
int main() {
Matrix m1, m2;
Vector v1;
active_policy::apply<matrix_mult_t>(m1, m2);
active_policy::apply<vector_mult_t>(m1, v1);
}
In that case you end up making your Matrix and Vector types aware of the policy_list in order that they can decide how/where to store the data. You can use heuristics for this too, e.g. "small vector/matrix lives in main memory no matter what", and make runtime_check() or another function test the appropriateness of a particular approach to a given implementation for a specific instance.
I also had a custom allocator for containers, which always produced suitably aligned memory on any SSE/Altivec-enabled build, regardless of whether the specific machine had support for Altivec. It was just easier that way, although it could be a typedef in a given policy, and you always assume that the highest-priority policy has the strictest allocator needs.
Example have_altivec():
I've included a sample have_altivec() implementation for completeness, simply because it's the shortest and therefore the most appropriate for posting here. The x86/x86_64 CPUID one is messy because you have to support the compiler-specific ways of writing inline ASM. The OpenCL one is messy because we check some of the implementation limits and extensions too.
#if HAVE_SETJMP && !(defined(__APPLE__) && defined(__MACH__))
jmp_buf jmpbuf;
void illegal_instruction(int sig) {
// Bad in general - https://www.securecoding.cert.org/confluence/display/seccode/SIG32-C.+Do+not+call+longjmp%28%29+from+inside+a+signal+handler
// But actually Ok on this platform in this scenario
longjmp(jmpbuf, 1);
}
#endif
bool have_altivec()
{
volatile sig_atomic_t altivec = 0;
#ifdef __APPLE__
int selectors[2] = { CTL_HW, HW_VECTORUNIT };
int hasVectorUnit = 0;
size_t length = sizeof(hasVectorUnit);
int error = sysctl(selectors, 2, &hasVectorUnit, &length, NULL, 0);
if (0 == error)
altivec = (hasVectorUnit != 0);
#elif HAVE_SETJMP_H
void (*handler) (int sig);
handler = signal(SIGILL, illegal_instruction);
if (setjmp(jmpbuf) == 0) {
asm volatile ("mtspr 256, %0\n\t" "vand %%v0, %%v0, %%v0"::"r" (-1));
altivec = 1;
}
signal(SIGILL, handler);
#endif
return altivec;
}
Conclusion
Basically you pay no penalty for platforms that can never support an implementation (the compiler generates no code for them) and only a small penalty (potentially just a very predictable by the CPU test/jmp pair if your compiler is half-decent at optimising) for platforms that could support something but don't. You pay no extra cost for platforms that the first choice implementation runs on. The details of the runtime tests vary between the technology in question.
If the virtual function overhead is acceptable, option 3 plus a few ifdefs seems a good compromise IMO. There are two variations that you could consider: one with an abstract base class, and the other with the plain C++ implementation as the base class.
Having the plain C++ implementation as the base class lets you gradually add the vector-optimized versions, falling back on the non-vectorized versions as you please, while using an abstract interface would be a little cleaner to read.
Also, having separate plain C++ and vectorized versions of your class lets you easily write unit tests that:
Ensure that the vectorized code is giving the right result (it is easy to mess this up, and vector floating-point registers can have different precision than the FPU, causing different results)
Compare the performance of the plain C++ version vs. the vectorized one. It's often good to make sure the vectorized code is actually doing you any good. Compilers can generate very tight C++ code that sometimes does as well as or better than vectorized code.
Here's one with the plain C++ implementation as the base class. Adding an abstract interface would just add a common base class to all three of these:
// Algo.h:
class Algo_Impl // Default Plain C++ implementation
{
public:
virtual void ComputeSome();
virtual void ComputeSomeMore();
...
};
// Algo_SSE.h:
class Algo_Impl_SSE : public Algo_Impl // SSE
{
public:
virtual void ComputeSome();
virtual void ComputeSomeMore();
...
};
// Algo_Altivec.h:
class Algo_Impl_Altivec : public Algo_Impl // Altivec implementation
{
public:
virtual void ComputeSome();
virtual void ComputeSomeMore();
...
};
// Client.cpp:
Algo_Impl *myAlgo = 0;
#ifdef SSE
myAlgo = new Algo_Impl_SSE;
#elif defined(ALTIVEC)
myAlgo = new Algo_Impl_Altivec;
#else
myAlgo = new Algo_Impl; // the plain C++ default implementation
#endif
...
You may consider employing the Adapter pattern. There are a few types of adapters and it's quite an extensible concept. Here is an interesting article, Structural Patterns: Adapter and Façade,
that discusses a matter very similar to the one in your question - the Accelerate framework as an example of the Adapter pattern.
I think it is a good idea to discuss a solution on the level of design patterns without focusing on implementation details like the C++ language. Once you decide that the adapter is the right solution for you, you can look for variants specific to your implementation. For example, in the C++ world there is a known adapter variant called the generic adapter pattern.
This isn't really a whole answer: just a variant on one of your existing options. In option 1 you've assumed that you include algo_altivec.cpp &c. into algo.cpp, but you don't have to do this. You could omit algo.cpp entirely, and have your build system decide which of algo_altivec.cpp, algo_sse.cpp, &c. to build. You'd have to do something like this anyway whichever option you use, since each platform can't compile every implementation; my suggestion is only that whichever option you choose, instead of having #if ALTIVEC_ENABLED everywhere in the source, where ALTIVEC_ENABLED is set from the build system, you just have the build system decide directly whether to compile algo_altivec.cpp.
This is a bit trickier to achieve in MSVC than in make, scons, &c., but still possible. It's commonplace to switch in a whole directory rather than individual source files; that is, instead of algo_altivec.cpp and friends, you'd have platform/altivec/algo.cpp, platform/sse/algo.cpp, and so on. This way, when you have a second algorithm you need platform-specific implementations for, you can just add the extra source file to each directory.
Although my suggestion's mainly intended to be a variant of option 1, you can combine this with any of your options, to let you decide in the build system and at runtime which options to offer. In that case, though, you'll probably need implementation-specific header files too.
In order to hide the implementation details you may just use an abstract interface with a static creator and provide three implementation classes:
// --------------------- Algo.h ---------------------
#pragma once
typedef boost::shared_ptr<class Algo> AlgoPtr;
class Algo
{
public:
static AlgoPtr Create(std::string type);
virtual ~Algo();
void process();
protected:
virtual void computeSome() = 0;
virtual void computeMore() = 0;
};
// --------------------- Algo.cpp ---------------------
class PlainAlgo: public Algo { ... };
class AltivecAlgo: public Algo { ... };
class SSEAlgo: public Algo { ... };
static AlgoPtr Algo::Create(std::string type) { /* Factory implementation */ }
Please note that since the PlainAlgo, AltivecAlgo and SSEAlgo classes are defined in Algo.cpp, they are only seen from this compilation unit and therefore the implementation details are hidden from the outside world.
Here is how one can use your class then:
AlgoPtr algo = Algo::Create("SSE");
algo->process();
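One possible (hedged) way the elided factory could be implemented; the string keys are arbitrary and only for illustration:
AlgoPtr Algo::Create(std::string type)
{
    if (type == "Altivec")
        return AlgoPtr(new AltivecAlgo);
    if (type == "SSE")
        return AlgoPtr(new SSEAlgo);
    return AlgoPtr(new PlainAlgo); // fall back to the plain C++ implementation
}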
It seems to me that your first strategy, with separate C++ files and #including the specific implementation, is the simplest and cleanest. I would only add some comments to your Algo.cpp indicating which methods are in the #included files.
e.g.
// Algo.cpp
#include "Algo.h"
Algo::Algo() { }
Algo::~Algo() { }
void Algo::process()
{
computeSome();
computeMore();
}
// The following methods are implemented in separate,
// platform-specific files.
// void Algo::computeSome()
// void Algo::computeMore()
#if defined(ALTIVEC)
#include "Algo_Altivec.cpp"
#elif defined(SSE)
#include "Algo_SSE.cpp"
#else
#include "Algo_Scalar.cpp"
#endif
Policy-like templates (mixins) are fine until you hit the requirement to fall back to the default implementation. That is a runtime operation and should be handled by runtime polymorphism. The Strategy pattern can handle this fine.
There's one drawback to this approach: a Strategy-like algorithm implementation cannot be inlined. Such inlining can provide a reasonable performance improvement in rare cases. If this is an issue, you'll need to cover the higher-level logic with the Strategy.
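For illustration, a minimal sketch of the Strategy variant described above; the class and method names are made up to match the question's Algo example, and the scalar strategy acts as the runtime fallback:
class ComputeStrategy
{
public:
    virtual ~ComputeStrategy() {}
    virtual void computeSome() = 0;
    virtual void computeMore() = 0;
};
class ScalarStrategy : public ComputeStrategy
{
public:
    virtual void computeSome() { /* plain C++ */ }
    virtual void computeMore() { /* plain C++ */ }
};
class SseStrategy : public ComputeStrategy
{
public:
    virtual void computeSome() { /* SSE intrinsics */ }
    virtual void computeMore() { /* SSE intrinsics */ }
};
class Algo
{
public:
    explicit Algo(ComputeStrategy *s) : strategy_(s) {}
    void process()
    {
        strategy_->computeSome();
        strategy_->computeMore();
    }
private:
    ComputeStrategy *strategy_; // points at a ScalarStrategy when no SIMD unit is detected
};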

What am I not getting about this abstract class implementation?

PREFACE: I'm relatively inexperienced in C++ so this very well could be a Day 1 n00b question.
I'm working on something whose long term goal is to be portable across multiple operating systems. I have the following files:
Utilities.h
#include <string>
class Utilities
{
public:
Utilities() { };
virtual ~Utilities() { };
virtual std::string ParseString(std::string const& RawString) = 0;
};
UtilitiesWin.h (for the Windows class/implementation)
#include <string>
#include "Utilities.h"
class UtilitiesWin : public Utilities
{
public:
UtilitiesWin() { };
virtual ~UtilitiesWin() { };
virtual std::string ParseString(std::string const& RawString);
};
UtilitiesWin.cpp
#include <string>
#include "UtilitiesWin.h"
std::string UtilitiesWin::ParseString(std::string const& RawString)
{
// Magic happens here!
// I'll put in a line of code to make it seem valid
return "";
}
So then elsewhere in my code I have this
#include <string>
#include "Utilities.h"
void SomeProgram::SomeMethod()
{
Utilities *u = new Utilities();
StringData = u->ParseString(StringData); // StringData defined elsewhere
}
The compiler (Visual Studio 2008) is dying on the instance declaration
c:\somepath\somecode.cpp(3) : error C2259: 'Utilities' : cannot instantiate abstract class
due to following members:
'std::string Utilities::ParseString(const std::string &)' : is abstract
c:\somepath\utilities.h(9) : see declaration of 'Utilities::ParseString'
So in this case what I'm wanting to do is use the abstract class (Utilities) like an interface and have it know to go to the implemented version (UtilitiesWin).
Obviously I'm doing something wrong but I'm not sure what. It occurs to me as I'm writing this that there's probably a crucial connection between the UtilitiesWin implementation of the Utilities abstract class that I've missed, but I'm not sure where. I mean, the following works
#include <string>
#include "UtilitiesWin.h"
void SomeProgram::SomeMethod()
{
Utilities *u = new UtilitiesWin();
StringData = u->ParseString(StringData); // StringData defined elsewhere
}
but it means I'd have to conditionally go through the different versions later (i.e., UtilitiesMac(), UtilitiesLinux(), etc.)
What have I missed here?
Utilities *u = new Utilities();
tells the compiler to make a new instance of the Utilities class; the fact that UtilitiesWin extends it isn't necessarily known and doesn't affect it. There could be lots of classes extending Utilities, but you told the compiler to make a new instance of Utilities, not those subclasses.
It sounds like you want to use the factory pattern, which is to make a static method in Utilities that returns a Utilities* that points to a particular instance:
static Utilities* make(); // declared inside class Utilities
Utilities* Utilities::make() { return new UtilitiesWin(); } // defined in a platform-specific .cpp
At some point you're going to have to instantiate a non-abstract subclass; there's no way around specifying UtilitiesWin at that point
You seem a bit confused as to what you want; you have to tell the computer at some stage which implementation of Utilities it is to use, but with the shape you've set out you only need to have
#ifdef windows
Utilities* u = new UtilitiesWin();
#endif
#ifdef spaceos3
Utilities* u = new UtilitiesSpaceOS3();
#endif
once in the program, and most of the source files can just call methods of u without knowing what kind of a u it is - which is I think what you were aiming at.
In C++ you cannot instantiate abstract classes, which is precisely what you are trying to do here:
Utilities *u = new Utilities();
It's very unclear to me why you would want to instantiate such a class, and what you would do with it if you could do so (which you can't). You cannot use an instantiation as an interface - the class definition provides that.
You are "getting" it right, you have to instantiate a concrete type. There are common solutions to this.
Yes, you have to make that decision which class to instantiate somewhere.
The implementation of that depends on the criteria for this decision: is it fixed for the binary? The same choice for each process? Or does it change for every instance of SomeProgram?
For the concrete classes you mention, the decision can probably be made at compile time, similar to what Tom suggests.
Second, SomeProgram should not make this choice itself. Rather the type or the instance should be configurable from the outside. The most simple approach is to pass the concrete instance to the constructor of SomeProgram:
class SomeProgram
{
private:
Utilities * m_utilities;
public:
SomeProgram(Utilities * util) : m_utilities(util) {}
}
Note that SomeProgram only "knows" the abstract class, none of the concrete classes.
For delayed construction, use a factory. If the utilities class should be injected as above, but is expensive to create and isn't necessary most of the time, you would inject a factory instead: you pass a UtilityFactory to the class, which SomeProgram can use to create the required instance on demand. The actual factory implementation decides the concrete class to choose. See the Factory pattern for more.
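A minimal sketch of the factory-injection idea, assuming the Utilities/UtilitiesWin classes from the question; UtilityFactory and WinUtilityFactory are hypothetical names based on the description above:
#include <string>
#include "UtilitiesWin.h" // from the question; also brings in Utilities
class UtilityFactory
{
public:
    virtual ~UtilityFactory() { }
    virtual Utilities* create() = 0;
};
class WinUtilityFactory : public UtilityFactory
{
public:
    virtual Utilities* create() { return new UtilitiesWin(); }
};
class SomeProgram
{
public:
    SomeProgram(UtilityFactory * factory) : m_factory(factory), m_utilities(0) { }
    void SomeMethod()
    {
        if (m_utilities == 0)                  // create on first use only
            m_utilities = m_factory->create();
        StringData = m_utilities->ParseString(StringData);
    }
private:
    UtilityFactory * m_factory;
    Utilities * m_utilities;
    std::string StringData;                    // stands in for the data "defined elsewhere"
};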
If that's a common problem, look at Inversion of Control (IoC) - there are several library implementations out there that make it easier. It has become a buzzword in the wake of aggressive unit testing, where replacing "real" implementations with mocks has to happen permanently. (I'm still waiting for a complete MockOS, though.) I haven't worked on any application that seriously needed such a library in practice, though, and it is very likely overkill for your problem.

Multiple inheritance dilemma in C++

I'm facing problems with the design of a C++ library of mine. It is a library for reading streams that support a feature I haven't found on other "stream" implementations. It is not really important why I've decided to start writing it. The point is I have a stream class that provides two important behaviours through multiple inheritance: shareability and seekability.
Shareable streams are those that have a shareBlock(size_t length) method that returns a new stream that shares resources with its parent stream (e.g. using the same memory block used by parent stream). Seekable streams are those that are.. well, seekable. Through a method seek(), these classes can seek to a given point in the stream. Not all streams of the library are shareable and/or seekable.
A stream class that both provides implementation for seeking and sharing resources inherits interface classes called Seekable and Shareable. That's all good if I know the type of such a stream, but, sometimes, I might want a function to accept as argument a stream that simply fulfills the quality of being seekable and shareable at the same time, regardless of which stream class it actually is. I could do that creating yet another class that inherits both Seekable and Shareable and taking a reference to that type, but then I would have to make my classes that are both seekable and shareable inherit from that class. If more "behavioural classes" like those were to be added, I would need to make several modifications everywhere in the code, soon leading to unmaintainable code. Is there a way to solve this dilemma? If not, then I'm absolutely coming to understand why people are not satisfied by multiple inheritance. It almost does the job, but, just then, it doesn't :D
Any help is appreciated.
-- 2nd edit, preferred problem resolution --
At first I thought Managu's solution would be my preferred one. However, Matthieu M. came up with another that I preferred over Managu's: to use boost::enable_if<>. I would like to use Managu's solution if the messages BOOST_MPL_ASSERT produces weren't so creepy. If there were any way to create instructive compile-time error messages, I would surely do it that way. But, as I said, the methods available produce creepy messages. So I prefer the (much) less instructive, yet cleaner, message produced when boost::enable_if<> conditions are not met.
I've created some macros to ease the task of writing template functions that take arguments inheriting from selected class types; here they go:
// SonettoEnableIfDerivedMacros.h
#ifndef SONETTO_ENABLEIFDERIVEDMACROS_H
#define SONETTO_ENABLEIFDERIVEDMACROS_H
#include <boost/preprocessor/repetition/repeat.hpp>
#include <boost/preprocessor/array/elem.hpp>
#include <boost/mpl/bool.hpp>
#include <boost/mpl/and.hpp>
#include <boost/type_traits/is_base_and_derived.hpp>
#include <boost/utility/enable_if.hpp>
/*
For each (TemplateArgument,DerivedClassType) preprocessor tuple,
expand: `boost::is_base_and_derived<DerivedClassType,TemplateArgument>,'
*/
#define SONETTO_ENABLE_IF_DERIVED_EXPAND_CONDITION(z,n,data) \
boost::is_base_and_derived<BOOST_PP_TUPLE_ELEM(2,1,BOOST_PP_ARRAY_ELEM(n,data)), \
BOOST_PP_TUPLE_ELEM(2,0,BOOST_PP_ARRAY_ELEM(n,data))>,
/*
ReturnType: Return type of the function
DerivationsArray: Boost.Preprocessor array containing tuples in the form
(TemplateArgument,DerivedClassType) (see
SONETTO_ENABLE_IF_DERIVED_EXPAND_CONDITION)
Expands:
typename boost::enable_if<
boost::mpl::and_<
boost::is_base_and_derived<DerivedClassType,TemplateArgument>,
...
boost::mpl::bool_<true> // Used to nullify trailing comma
>, ReturnType>::type
*/
#define SONETTO_ENABLE_IF_DERIVED(ReturnType,DerivationsArray) \
typename boost::enable_if< \
boost::mpl::and_< \
BOOST_PP_REPEAT(BOOST_PP_ARRAY_SIZE(DerivationsArray), \
SONETTO_ENABLE_IF_DERIVED_EXPAND_CONDITION,DerivationsArray) \
boost::mpl::bool_<true> \
>, ReturnType>::type
#endif
// main.cpp: Usage example
#include <iostream>
#include "SonettoEnableIfDerivedMacros.h"
class BehaviourA
{
public:
void behaveLikeA() const { std::cout << "behaveLikeA()\n"; }
};
class BehaviourB
{
public:
void behaveLikeB() const { std::cout << "behaveLikeB()\n"; }
};
class BehaviourC
{
public:
void behaveLikeC() const { std::cout << "behaveLikeC()\n"; }
};
class CompoundBehaviourAB : public BehaviourA, public BehaviourB {};
class CompoundBehaviourAC : public BehaviourA, public BehaviourC {};
class SingleBehaviourA : public BehaviourA {};
template <class MustBeAB>
SONETTO_ENABLE_IF_DERIVED(void,(2,((MustBeAB,BehaviourA),(MustBeAB,BehaviourB))))
myFunction(MustBeAB &ab)
{
ab.behaveLikeA();
ab.behaveLikeB();
}
int main()
{
CompoundBehaviourAB ab;
CompoundBehaviourAC ac;
SingleBehaviourA a;
myFunction(ab); // Ok, prints `behaveLikeA()' and `behaveLikeB()'
myFunction(ac); // Fails with `error: no matching function for
// call to `myFunction(CompoundBehaviourAC&)''
myFunction(a); // Fails with `error: no matching function for
// call to `myFunction(SingleBehaviourA&)''
}
As you can see, the error messages are exceptionally clean (at least in GCC 3.4.5). But they can be misleading. They don't inform you that you've passed the wrong argument type; they inform you that the function doesn't exist (and, in fact, it doesn't, due to SFINAE; but that may not be exactly clear to the user). Still, I prefer those clean messages over the randomStuff ... ************** garbage ************** that BOOST_MPL_ASSERT produces.
If you find any bugs in this code, please edit and correct them, or post a comment in that regard. The one major issue I find in these macros is that they're limited by some Boost.Preprocessor limits. Here, for example, I can only pass a DerivationsArray of up to 4 items to SONETTO_ENABLE_IF_DERIVED(). I think those limits are configurable though, and maybe they will even be lifted in the upcoming C++1x standard, won't they? Please correct me if I'm wrong. I don't remember if they have suggested changes to the preprocessor.
Thank you.
Just a few thoughts:
STL has this same sort of problem with iterators and functors. The solution there was basically to remove types from the equation altogether, document the requirements (as "concepts"), and use what amounts to duck typing. This fits well with a policy of compile-time polymorphism.
Perhaps a midground would be to create a template function which statically checks its conditions at instantiation. Here's a sketch (which I don't guarantee will compile).
class shareable {...};
class seekable {...};
template <typename StreamType>
void needs_sharable_and_seekable(const StreamType& stream)
{
BOOST_STATIC_ASSERT(boost::is_base_and_derived<shareable, StreamType>::value);
BOOST_STATIC_ASSERT(boost::is_base_and_derived<seekable, StreamType>::value);
....
}
Edit: Spent a few minutes making sure things compiled, and "cleaning up" the error messages:
#include <boost/type_traits/is_base_and_derived.hpp>
#include <boost/mpl/assert.hpp>
class shareable {};
class seekable {};
class both : public shareable, public seekable
{
};
template <typename StreamType>
void dosomething(const StreamType& dummy)
{
BOOST_MPL_ASSERT_MSG((boost::is_base_and_derived<shareable, StreamType>::value),
dosomething_requires_shareable_stream,
(StreamType));
BOOST_MPL_ASSERT_MSG((boost::is_base_and_derived<seekable, StreamType>::value),
dosomething_requires_seekable_stream,
(StreamType));
}
int main()
{
both b;
shareable s1;
seekable s2;
dosomething(b);
dosomething(s1);
dosomething(s2);
}
Take a look at boost::enable_if
// Before
template <class Stream>
some_type some_function(const Stream& c);
// After
template <class Stream>
typename boost::enable_if<
boost::mpl::and_<
boost::is_base_and_derived<Shareable,Stream>,
boost::is_base_and_derived<Seekable,Stream>
>,
some_type
>::type
some_function(const Stream& c);
Thanks to SFINAE this function will only be considered if Stream satisfies the requirement, ie here derive from both Shareable and Seekable.
How about using a template method?
template <typename STREAM>
void doSomething(STREAM &stream)
{
stream.share();
stream.seek(...);
}
You might want the Decorator pattern.
Assuming both Seekable and Shareable have common ancestor, one way I can think of is trying to downcast (of course, asserts replaced with your error-checking):
void foo(Stream *s) {
assert(s != NULL);
assert(dynamic_cast<Seekable*>(s) != NULL);
assert(dynamic_cast<Shareable*>(s) != NULL);
}
Replace 'shareable' and 'seekable' with 'in' and 'out' and find your 'io' solution. In a library similar problems should have similar solutions.

Where do you find templates useful?

At my workplace, we tend to use iostream, string, vector, map, and the odd algorithm or two. We haven't actually found many situations where template techniques were the best solution to a problem.
What I am looking for here are ideas, and optionally sample code that shows how you used a template technique to create a new solution to a problem that you encountered in real life.
As a bribe, expect an up vote for your answer.
General info on templates:
Templates are useful anytime you need to use the same code operating on different data types, where the types are known at compile time. And also when you have any kind of container object.
A very common usage is for just about every type of data structure. For example: Singly linked lists, doubly linked lists, trees, tries, hashtables, ...
Another very common usage is for sorting algorithms.
One of the main advantages of using templates is that you can remove code duplication. Code duplication is one of the biggest things you should avoid when programming.
You could implement a function Max as both a macro or a template, but the template implementation would be type safe and therefore better.
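For example, a minimal sketch contrasting the two (the names are arbitrary):
#define MAX_MACRO(a, b) ((a) > (b) ? (a) : (b)) // evaluates an argument twice, no type checking
template <typename T>
const T& Max(const T& a, const T& b) // type safe: both arguments must be of the same type T
{
    return (a > b) ? a : b;
}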
And now onto the cool stuff:
Also see template metaprogramming, which is a way of pre-evaluating code at compile-time rather than at run-time. Template metaprogramming works only with immutable values, so nothing can change once it is defined. Because of this, template metaprogramming can be seen as a type of functional programming.
Check out this example of template metaprogramming from Wikipedia. It shows how templates can be used to execute code at compile time. Therefore at runtime you have a pre-calculated constant.
template <int N>
struct Factorial
{
enum { value = N * Factorial<N - 1>::value };
};
template <>
struct Factorial<0>
{
enum { value = 1 };
};
// Factorial<4>::value == 24
// Factorial<0>::value == 1
void foo()
{
int x = Factorial<4>::value; // == 24
int y = Factorial<0>::value; // == 1
}
I've used a lot of template code, mostly in Boost and the STL, but I've seldom had a need to write any.
One of the exceptions, a few years ago, was in a program that manipulated Windows PE-format EXE files. The company wanted to add 64-bit support, but the ExeFile class that I'd written to handle the files only worked with 32-bit ones. The code required to manipulate the 64-bit version was essentially identical, but it needed to use a different address type (64-bit instead of 32-bit), which caused two other data structures to be different as well.
Based on the STL's use of a single template to support both std::string and std::wstring, I decided to try making ExeFile a template, with the differing data structures and the address type as parameters. There were two places where I still had to use #ifdef WIN64 lines (slightly different processing requirements), but it wasn't really difficult to do. We've got full 32- and 64-bit support in that program now, and using the template means that every modification we've done since automatically applies to both versions.
One place that I do use templates to create my own code is to implement policy classes as described by Andrei Alexandrescu in Modern C++ Design. At present I'm working on a project that includes a set of classes that interact with BEA\h\h\h Oracle's Tuxedo TP monitor.
One facility that Tuxedo provides is transactional persistent queues, so I have a class TpQueue that interacts with the queue:
class TpQueue {
public:
void enqueue(...)
void dequeue(...)
...
}
However, as the queue is transactional, I need to decide what transaction behaviour I want; this could be done separately outside of the TpQueue class, but I think it's more explicit and less error-prone if each TpQueue instance has its own policy on transactions. So I have a set of TransactionPolicy classes such as:
class OwnTransaction {
public:
begin(...) // Suspend any open transaction and start a new one
commit(..) // Commit my transaction and resume any suspended one
abort(...)
}
class SharedTransaction {
public:
begin(...) // Join the currently active transaction or start a new one if there isn't one
...
}
And the TpQueue class gets re-written as
template <typename TXNPOLICY = SharedTransaction>
class TpQueue : public TXNPOLICY {
...
}
So inside TpQueue I can call begin(), abort(), commit() as needed but can change the behaviour based on the way I declare the instance:
TpQueue<SharedTransaction> queue1 ;
TpQueue<OwnTransaction> queue2 ;
I used templates (with the help of Boost.Fusion) to achieve type-safe integers for a hypergraph library that I was developing. I have a (hyper)edge ID and a vertex ID both of which are integers. With templates, vertex and hyperedge IDs became different types and using one when the other was expected generated a compile-time error. Saved me a lot of headache that I'd otherwise have with run-time debugging.
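The real code used Boost.Fusion, but the core idea can be sketched with a plain tag parameter; the names below are made up for illustration:
// A tag parameter turns two plain integers into distinct, incompatible types.
template <typename Tag>
class Id
{
public:
    explicit Id(int value) : value_(value) {}
    int value() const { return value_; }
private:
    int value_;
};
struct VertexTag {};
struct HyperedgeTag {};
typedef Id<VertexTag> VertexId;
typedef Id<HyperedgeTag> HyperedgeId;
// Passing the IDs in the wrong order is now a compile-time error.
void connect(VertexId v, HyperedgeId e);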
Here's one example from a real project. I have getter functions like this:
bool getValue(wxString key, wxString& value);
bool getValue(wxString key, int& value);
bool getValue(wxString key, double& value);
bool getValue(wxString key, bool& value);
bool getValue(wxString key, StorageGranularity& value);
bool getValue(wxString key, std::vector<wxString>& value);
And then a variant with the 'default' value. It returns the value for key if it exists, or default value if it doesn't. Template saved me from having to create 6 new functions myself.
template <typename T>
T get(wxString key, const T& defaultValue)
{
T temp;
if (getValue(key, temp))
return temp;
else
return defaultValue;
}
Templates I regulary consume are a multitude of container classes, boost smart pointers, scopeguards, a few STL algorithms.
Scenarios in which I have written templates:
custom containers
memory management, implementing type safety and CTor/DTor invocation on top of void * allocators
common implementation for overloads with different types, e.g.
bool ContainsNan(float * , int)
bool ContainsNan(double *, int)
which both just call a (local, hidden) helper function
template <typename T>
bool ContainsNanT(T * values, int len) { /* actual code goes here */ }
Specific algorithms that are independent of the type, as long as the type has certain properties, e.g. binary serialization.
template <typename T>
void BinStream::Serialize(T & value) { ... }
// to make a type serializable, you need to implement
void SerializeElement(BinStream & strean, Foo & element);
void DeserializeElement(BinStream & stream, Foo & element)
Unlike virtual functions, templates allow more optimizations to take place.
Generally, templates allow to implement one concept or algorithm for a multitude of types, and have the differences resolved already at compile time.
We use COM and accept a pointer to an object that can either implement another interface directly or via IServiceProvider (http://msdn.microsoft.com/en-us/library/cc678965(VS.85).aspx); this prompted me to create this helper cast-like function.
// Get interface either via QueryInterface of via QueryService
template <class IFace>
CComPtr<IFace> GetIFace(IUnknown* unk)
{
CComQIPtr<IFace> ret = unk; // Try QueryInterface
if (ret == NULL) { // Fallback to QueryService
if(CComQIPtr<IServiceProvider> ser = unk)
ser->QueryService(__uuidof(IFace), __uuidof(IFace), (void**)&ret);
}
return ret;
}
I use templates to specify function object types. I often write code that takes a function object as an argument -- a function to integrate, a function to optimize, etc. -- and I find templates more convenient than inheritance. So my code receiving a function object -- such as an integrator or optimizer -- has a template parameter to specify the kind of function object it operates on.
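A minimal sketch of that style, with a made-up midpoint-rule integrator; the function object type is a template parameter, so any callable with the right signature works:
#include <cmath>
template <typename F>
double integrate(F f, double a, double b, int steps)
{
    double h = (b - a) / steps;
    double sum = 0.0;
    for (int i = 0; i < steps; ++i)
        sum += f(a + (i + 0.5) * h) * h; // midpoint rule
    return sum;
}
struct Gaussian
{
    double operator()(double x) const { return std::exp(-x * x); }
};
// usage: double area = integrate(Gaussian(), -1.0, 1.0, 1000);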
The obvious reasons (like preventing code-duplication by operating on different data types) aside, there is this really cool pattern that's called policy based design. I have asked a question about policies vs strategies.
Now, what's so nifty about this feature. Consider you are writing an interface for others to use. You know that your interface will be used, because it is a module in its own domain. But you don't know yet how people are going to use it. Policy-based design strengthens your code for future reuse; it makes you independent of data types a particular implementation relies on. The code is just "slurped in". :-)
Traits are per se a wonderful idea. They can attach particular behaviour, data and type data to a model. Traits allow complete parameterization of all three of these fields. And best of all, they are a very good way to make code reusable.
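A minimal sketch of the traits idea, with made-up names; the traits class attaches a type and a constant to existing types without modifying them:
struct Mesh {};
struct PointCloud {};
template <typename T>
struct geometry_traits; // primary template deliberately left undefined
template <>
struct geometry_traits<Mesh>
{
    typedef double scalar_type;
    static const int dimensions = 3;
};
template <>
struct geometry_traits<PointCloud>
{
    typedef float scalar_type;
    static const int dimensions = 2;
};
// Generic code reads the attached information instead of hard-coding it.
template <typename Geometry>
int dimensions_of() { return geometry_traits<Geometry>::dimensions; }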
I once saw the following code:
void doSomethingGeneric1(SomeClass * c, SomeClass & d)
{
// three lines of code
callFunctionGeneric1(c) ;
// three lines of code
}
repeated ten times:
void doSomethingGeneric2(SomeClass * c, SomeClass & d)
void doSomethingGeneric3(SomeClass * c, SomeClass & d)
void doSomethingGeneric4(SomeClass * c, SomeClass & d)
// Etc
Each function having the same 6 lines of code copy/pasted, and each time calling another function callFunctionGenericX with the same number suffix.
There was no way to refactor the whole thing altogether, so I kept the refactoring local.
I changed the code this way (from memory):
template<typename T>
void doSomethingGenericAnything(SomeClass * c, SomeClass & d, T t)
{
// three lines of code
t(c) ;
// three lines of code
}
And modified the existing code with:
void doSomethingGeneric1(SomeClass * c, SomeClass & d)
{
doSomethingGenericAnything(c, d, callFunctionGeneric1) ;
}
void doSomethingGeneric2(SomeClass * c, SomeClass & d)
{
doSomethingGenericAnything(c, d, callFunctionGeneric2) ;
}
Etc.
This is somewhat hijacking the template mechanism, but in the end, I guess it's better than playing with typedef'd function pointers or using macros.
I personally have used the Curiously Recurring Template Pattern as a means of enforcing some form of top-down design and bottom-up implementation. An example would be a specification for a generic handler where certain requirements on both form and interface are enforced on derived types at compile time. It looks something like this:
template <class Derived>
struct handler_base : Derived {
void pre_call() {
// do any universal pre_call handling here
static_cast<Derived *>(this)->pre_call();
};
void post_call(typename Derived::result_type & result) {
static_cast<Derived *>(this)->post_call(result);
// do any universal post_call handling here
};
typename Derived::result_type
operator() (typename Derived::arg_pack const & args) {
pre_call();
typename Derived::result_type temp = static_cast<Derived *>(this)->eval(args);
post_call(temp);
return temp;
};
};
Something like this can be used then to make sure your handlers derive from this template and enforce top-down design and then allow for bottom-up customization:
struct my_handler : handler_base<my_handler> {
typedef int result_type; // required to compile
typedef tuple<int, int> arg_pack; // required to compile
void pre_call(); // required to compile
void post_call(int &); // required to compile
int eval(arg_pack const &); // required to compile
};
This then allows you to have generic polymorphic functions that deal with only handler_base<> derived types:
template <class T, class Arg0, class Arg1>
typename T::result_type
invoke(handler_base<T> & handler, Arg0 const & arg0, Arg1 const & arg1) {
return handler(make_tuple(arg0, arg1));
};
It's already been mentioned that you can use templates as policy classes to do something. I use this a lot.
I also use them, with the help of property maps (see boost site for more information on this), in order to access data in a generic way. This gives the opportunity to change the way you store data, without ever having to change the way you retrieve it.