How to reduce compile time with C++ templates

I'm in the process of changing part of my C++ app from using an older C type array to a templated C++ container class. See this question for details. While the solution is working very well, each minor change I make to the templated code causes a very large amount of recompilation to take place, and hence drastically slows build time. Is there any way of getting template code out of the header and back into a cpp file, so that minor implementation changes don't cause major rebuilds?

Several approaches:
The export keyword could theoretically help, but it was poorly supported and was officially removed in C++11.
Explicit template instantiation (see here or here) is the most straightforward approach, if you can predict ahead of time which instantiations you'll need (and if you don't mind maintaining this list).
Extern templates, which are standard since C++11 and were already supported by several compilers as an extension before that. It's my understanding that extern templates don't necessarily let you move the template definitions out of the header file, but they do make compiling and linking faster (by reducing the number of times that template code must be instantiated and linked). A sketch combining this with explicit template instantiation follows the end of this list.
Depending on your template design, you may be able to move most of its complexity into a .cpp file. The standard example is a type-safe vector template class that merely wraps a type-unsafe vector of void*; all of the complexity goes in the void* vector that resides in a .cpp file. Scott Meyers gives a more detailed example in Effective C++ (item 42, "Use private inheritance judiciously", in the 2nd edition).
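To make the explicit-instantiation and extern-template options concrete, here is a minimal sketch; MyContainer and the chosen element types are made up, and in practice you would list whichever instantiations your program actually needs:
// myContainer.hpp - class template definition, member functions only declared
template <class T>
class MyContainer
{
public:
    void push(T const& value);
    // ...
};

// Tell every includer not to instantiate these; the definitions live in one .cpp
extern template class MyContainer<int>;
extern template class MyContainer<double>;

// myContainer.cpp - member definitions plus the explicit instantiation definitions
#include "myContainer.hpp"

template <class T>
void MyContainer<T>::push(T const& value)
{
    // ... actual implementation, recompiled only when this file changes
}

template class MyContainer<int>;
template class MyContainer<double>;
With this layout, a change to push() only recompiles myContainer.cpp, at the cost that only the listed instantiations can be linked against.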

I think the general rules apply. Try to reduce coupling between parts of the code. Break up too large template headers into smaller groups of functions used together, so the whole thing won't have to be included in each and every source file.
Also, try to get the headers into a stable state fast, perhaps testing them out against a smaller test program, so they wouldn't need changing (too much) when integrated into a larger program.
(As with any optimization, it may be less worthwhile to optimize for the compiler's speed on templates than to find an "algorithmic" optimization that drastically reduces the workload in the first place.)

First of all, for completeness, I'll cover the straightforward solution: only use templated code when necessary, and base it on non-template code (with implementation in its own source file).
However, I suspect that the real issue is that you use generic programming as you would use typical OO-programming and end up with a bloated class.
Let's take an example:
// "bigArray/bigArray.hpp"
template <class T, class Allocator>
class BigArray
{
public:
size_t size() const;
T& operator[](size_t index);
T const& operator[](size_t index) const;
T& at(size_t index);
T const& at(size_t index);
private:
// impl
};
Does this shock you? Probably not. It seems pretty minimalist, after all. The thing is, it's not: the at methods can be factored out without any loss of generality:
// "bigArray/at.hpp"
template <class Container>
typename Container::reference_type at(Container& container,
typename Container::size_type index)
{
if (index >= container.size()) throw std::out_of_range();
return container[index];
}
template <class Container>
typename Container::const_reference_type at(Container const& container,
typename Container::size_type index)
{
if (index >= container.size()) throw std::out_of_range();
return container[index];
}
Okay, this changes the invocation slightly:
// From
myArray.at(i).method();
// To
at(myArray,i).method();
However, thanks to Koenig lookup (argument-dependent lookup), you can call them unqualified as long as you put them in the same namespace as the container, so it's just a matter of habit.
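For illustration, here is a minimal, self-contained sketch of how the unqualified call is found; the namespace name and the fixed-size implementation are made up for the example:
#include <cstddef>

namespace big {
    template <class T>
    class BigArray {
    public:
        typedef std::size_t size_type;
        typedef T&          reference_type;

        reference_type operator[](size_type i) { return data_[i]; }
        size_type size() const { return 16; }
    private:
        T data_[16];
    };

    // Free function living in the same namespace as the container:
    template <class Container>
    typename Container::reference_type at(Container& c, typename Container::size_type i) {
        return c[i];   // range check elided for brevity
    }
}

int main() {
    big::BigArray<int> a;
    at(a, 3) = 42;   // unqualified call, found via argument-dependent lookup
}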
The at example is contrived, but the general point stands. Note that, because of its genericity, at.hpp never had to include bigArray.hpp, and it will still produce code as tight as a member function would; it's just that we can invoke it on other containers if we wish.
And now, a user of BigArray does not need to include at.hpp if she does not use it... thus reducing her dependencies and leaving her unaffected if you change the code in that file: for example, altering the std::out_of_range call to include the file name and line number, the address of the container, its size and the index we tried to access.
The other (not so obvious) advantage is that if an integrity constraint of BigArray is ever violated, then at is obviously not a suspect, since it cannot mess with the internals of the class, thus reducing the number of suspects.
This is recommended by many authors, such as Herb Sutter in C++ Coding Standards:
Item 44: Prefer writing nonmember nonfriend functions
and has been extensively used in Boost... But you do have to change your coding habits!
Then, of course, you need to include only what you actually depend on; there ought to be static C++ code analyzers that report included but unused header files, which can help figure this out.

Using templates as a problem-solving technique can create compilation slowdowns. A classic example of this is std::sort vs. the qsort function from C. The C++ version of this function takes longer to compile because it needs to be parsed in every translation unit and because almost every use of it creates a different instantiation of the template (assuming that lambdas, each with its own unique closure type, are usually provided as the sorting predicate).
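For illustration, a small sketch of the two calls side by side; every distinct lambda passed to std::sort forces its own instantiation, while qsort is an ordinary function compiled once:
#include <algorithm>
#include <cstddef>
#include <cstdlib>

int compare_ints(const void* a, const void* b) {
    const int x = *static_cast<const int*>(a);
    const int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}

void sort_both(int* data, std::size_t n) {
    // Non-template C function: parsed and compiled once, in its own translation unit.
    std::qsort(data, n, sizeof(int), compare_ints);

    // Function template: parsed in every including translation unit and instantiated
    // per iterator/comparator combination; the lambda below has a unique closure type,
    // so this call gets its own instantiation.
    std::sort(data, data + n, [](int a, int b) { return a < b; });
}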
Although these slowdowns are to be expected, there are some rules that can help you to write efficient templates. Four of them are described below.
The Rule of Chiel
The Rule of Chiel, presented below, describes which C++ constructs are the most difficult ones for the compiler. If possible, it’s best to avoid those constructs to reduce compilation times.
The following C++ features/constructs are sorted in descending order by compile time:
SFINAE
Instantiating a function template
Instantiating a type
Calling an alias
Adding a parameter to a type
Adding a parameter to an alias call
Looking up a memoized type
Optimizations based on the above rules were used when Boost.TMP was designed and developed. As much as possible, avoid the constructs near the top of the list to keep template compilation quick.
Below are some examples illustrating how to make use of the rules listed above.
Reduce Template Instantiations
Let's have a look at std::conditional. Its declaration is:
template< bool B, typename T, typename F >
struct conditional;
Whenever we change any of the three arguments given to that template, the compiler will have to create a new instance of it. For example, imagine the following types:
struct first{};
struct second{};
Now, all the following will end up in instantiations of different types:
using type1 = conditional<true, first, second>;
using type2 = conditional<true, second, first>;
std::is_same_v<type1, type2>; // it’s false
using type3 = conditional<false, first, second>;
using type4 = conditional<false, second, first>;
std::is_same_v<type3, type4>; // it’s false
We can reduce the number of instantiations by changing the implementation of conditional to:
template <bool>
struct conditional {
    template <typename T, typename F>
    using type = T;
};

template <>
struct conditional<false> {
    template <typename T, typename F>
    using type = F;
};
In this case, the compiler will create only two instantiations of type “conditional” for all possible arguments. For more details about this example, check out Odin Holmes' talk about the Kvasir library.
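For completeness, a usage sketch with the reworked conditional; note that client code now names the member alias template, and needs the template disambiguator only in dependent contexts:
// Non-dependent context:
using r1 = conditional<true>::type<first, second>;    // first
using r2 = conditional<false>::type<first, second>;   // second

// Dependent context (e.g. a convenience alias):
template <bool B, typename T, typename F>
using conditional_t = typename conditional<B>::template type<T, F>;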
Create Explicit Template Instantiations
Whenever you suspect that an instance of a template is going to be used often, it's a good idea to explicitly instantiate it. For example, std::string is a typedef for std::basic_string<char>, and standard library implementations typically provide an explicit instantiation of it.
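As a sketch (MyMatrix is a made-up name), this is what the pattern looks like for a heavily used instantiation: the header keeps the full definition, but only one translation unit pays for instantiating it.
// myMatrix.hpp
template <class T>
class MyMatrix { /* full definition here */ };

extern template class MyMatrix<float>;   // suppresses implicit instantiation in every client

// myMatrix_float.cpp
#include "myMatrix.hpp"
template class MyMatrix<float>;          // instantiated exactly once, here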
Create Specializations for Compile-time Algorithms
Kvasir-MPL specializes algorithms for long lists of types in order to speed them up. You can see an example of this here: in that header file, the sorting algorithm is manually specialized for a list of 255 types. Manual specialization speeds up compilation for long lists.

You can use explicit instantiation; however, only the template instantiations you list will be compiled ahead of time.
You might be able to take advantage of C++20's modules.
If you can factor the templated types out of your algorithm, you can put the non-template part in its own .cc file.
I wouldn't suggest this unless it's a major problem, but: you may be able to provide a template container interface that is implemented with calls to a void* implementation that you are free to change at will.
Before C++11 you could use a compiler that supported the export keyword.

You can define a base class without templates and move most of the implementation there. The templated array would then define only proxy methods that use the base class for everything.
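A minimal sketch of that idea, with made-up names; the untyped base does the real work in a .cpp file, and the template only forwards and casts (object construction and destruction are glossed over here):
// rawArray.hpp - non-template base; its implementation lives in rawArray.cpp
#include <cstddef>

class RawArray {
public:
    RawArray(std::size_t elementSize, std::size_t count);
    void*       elementAt(std::size_t index);
    std::size_t size() const;
    // ...
};

// array.hpp - thin typed proxy; clients don't recompile when rawArray.cpp changes
template <class T>
class Array : private RawArray {
public:
    explicit Array(std::size_t count) : RawArray(sizeof(T), count) {}
    T& operator[](std::size_t index) { return *static_cast<T*>(elementAt(index)); }
    using RawArray::size;
};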

Related

Is all use of templates in C++ metaprogramming?

I'm trying to understand what metaprogramming is in general and what it is in C++ in particular. If I search for C++ metaprogramming I get tutorials on template metaprogramming (TMP), but no explanation of whether the term covers only a specific use of templates or all uses of templates.
My question is whether all usage of templates in C++ is categorized as metaprogramming. An explanation of why it is or isn't would also be helpful. Thank you.
My question is whether all usage of templates in C++ is categorized as metaprogramming.
No.
Not all usage of templates in C++ is metaprogramming.
Obviously it's a question of definitions but, in C++, "metaprogramming" is synonymous with "compile-time computation".
So with templates we can do metaprogramming (specifically template metaprogramming), but not all uses of templates are metaprogramming.
A simple counter-example
#include <iostream>

template <typename K, typename V>
void printKeyVal (K const & k, V const & v)
{ std::cout << k << ": " << v << std::endl; }
The preceding printKeyVal() is a template function that prints a pair of generic values to standard output (so at run time, not compile time).
It can't run at compile time, so it's a template but it isn't metaprogramming.
More generally: std::vector is a template class that uses memory allocation, and memory allocation (as of C++17; this may change in the future) can't be used in compile-time code.
So std::vector (contrary to std::array which, having a fixed size, doesn't use memory allocation) is a template feature that can't be used for metaprogramming, at least when the use involves instantiating a std::vector object.
What is TMP in C++?
Template metaprogramming (TMP) in C++ is a technique for expressing and executing arbitrary algorithms in compile-time using C++ templates. It is usually enabled by the use of template specialization to emulate conditional branches and recursive template definition to emulate loops. The most well-known example is a compile-time factorial computation:
template <unsigned int n>
struct factorial {
    // recursive definition to emulate a loop or a regular recursion
    enum { value = n * factorial<n - 1>::value };
};

// specialization that describes the "break" condition for the recursion
template <>
struct factorial<0> {
    enum { value = 1 };
};
which uses both of the aforementioned techniques.
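A usage sketch: both expressions below are evaluated entirely by the compiler, which is what makes this metaprogramming.
static_assert(factorial<5>::value == 120, "computed at compile time");
int lookup[factorial<4>::value];   // array bounds must be compile-time constants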
Far more common, however, is the use of TMP for type detection and transformation rather than actual numeric computation, e.g. the standard std::is_pointer utility (source):
// generic definition that emulates "false" conditional branch
template<class T>
struct is_pointer_helper : std::false_type {};
// a specialization that emulates "true" conditional branch
template<class T>
struct is_pointer_helper<T*> : std::true_type {};
template<class T>
struct is_pointer : is_pointer_helper< typename std::remove_cv<T>::type > {};
Most of the utilities provided by the standard type_traits header are implemented using TMP techniques.
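A short usage sketch of the hand-rolled is_pointer above; again, everything is resolved at compile time:
static_assert(is_pointer<int*>::value,       "int* is detected as a pointer");
static_assert(!is_pointer<int>::value,       "int is not a pointer");
static_assert(is_pointer<int* const>::value, "top-level const is stripped by remove_cv");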
Given that TMP algorithms are expressed using type definitions, it's worth mentioning that TMP is a form of declarative programming in which the logic of computation is expressed without the use of explicit control flow statements (if, else, for, etc...).
Is all usage of C++ templates metaprogramming?
The short answer is: no. If templates aren't used to express a compile-time algorithm, then it's not metaprogramming, it's generic programming.
The primary goal for introducing templates in C++ was to enable generic programming, that is to allow reusing the same algorithms (find, copy, sort, etc...) and data structures (vector, list, map, etc...) for any types, including user-defined ones, that satisfy certain requirements.
In fact TMP in C++ was discovered by accident and was not the intended use of templates.
In summary: template metaprogramming in C++ is the use of templates to express a compile-time algorithm; most (all?) other uses of C++ templates are a form of generic programming.
I'm trying to understand what metaprogramming is in general and what it is in C++ in particular
You haven't said what you understand by metaprogramming in general yet, so your answers don't have a common starting point.
I'm going to assume the wikipedia definition is good enough for this:
Metaprogramming is a programming technique in which computer programs have the ability to treat other programs as their data.
... can be used to move computations from run-time to compile-time, to generate code using compile time computations ...
C++ doesn't generally allow self-modifying code, so I'm ignoring that. I'm also choosing not to count the preprocessor, as textual substitution at (or arguably just before) compile time is not the same as operating on the semantics of the program.
My question is whether all usage of templates in C++ is categorized as metaprogramming
No, it is not.
Consider, for reference:
#define MAX(a,b) ((a) > (b) ? (a) : (b))
which is loosely the way to write a generic (type-agnostic) max function without using templates. I've already said I don't count the preprocessor as metaprogramming, but in any case it always produces identical code whenever it is used.
It simply delegates parsing that code (and worrying about types and whether a > b is defined) to the compiler, in a later translation phase. Nothing here operates at compile time to produce different resulting code depending on... anything. Nothing is computed at compile time.
Now, we can compare the template version:
template <typename T>
T max(T a, T b) { return a > b ? a : b; }
This does not simply perform a textual substitution. The process of instantiation is more complex: name lookup rules and overloads may be considered, and in some sense different instantiations may not be textually equivalent (e.g. one may use bool ::operator>(T, T) and another bool T::operator>(T const&), or whatever).
However, the meaning of each instantiation is the same (assuming compatible definitions of operator> for the different types, etc.) and nothing was computed at compile time apart from the compiler's usual (mechanical) process of resolving types and names and so on.
As an aside, it's definitely not enough that your program contains instructions for the compiler to tell it what to do, because that's what all programming is.
Now, there are marginal cases like
template <unsigned N>
struct factorial { enum { value = N * factorial<N-1>::value }; };
which do move a computation to compile time (and in this case a non-terminating one since I can't be bothered to write the terminal case), but are arguably not metaprogramming.
Even though the Wikipedia definition mentioned moving computations to compile time, this is only a value computation - it's not making a compile-time decision about the structure or semantics of your code.
When writing C++ function templates, you are writing "instructions" that tell the compiler what to do when it encounters calls to the function. In this sense, you are not directly writing code, which is why we call it meta-programming.
So yes, all C++ code involving templates is considered meta-programming.
Note that only the parts defining template functions or classes are meta-programming; regular functions and classes are categorized as regular C++!

Static polymorphism: How to define the interface?

Below is a very simple example of what I understand as static polymorphism. The reason I'm not using dynamic polymorphism is that I do not want to prevent the inlining of PROCESSOR's functions inside op.
template <class PROCESSOR>
void op(PROCESSOR* proc) {
    proc->doSomething(5);
    proc->doSomethingElse();
}

int main() {
    ProcessorY py;
    op<ProcessorY>(&py);
    return 0;
}
The problem with this example is that there exists no explicit definition of what functions a PROCESSOR has to define. If one is missing, you will just get a compile error. I think this is bad style.
It also has a very practical drawback: the on-line assistance of IDEs cannot, of course, show you the functions that are available on such an object.
What is a good/official way to define the public interface of a PROCESSOR?
There exists no explicit definition of what methods a PROCESSOR has to define. If one is missing, you will just get a compile error. I think this is bad style.
It is. It is not. It may be. It depends.
Yes, if you want to define behavior in this way, you may also want to restrict what the template parameters must provide. Unfortunately, it is not possible to do this "explicitly" right now.
What you want is the constraints and concepts feature, which was supposed to appear as part of C++11, but was delayed and is still not available as of C++14.
However, getting a compile-time error is often the best way to restrict template parameters. As an example, we can look at the standard library:
1) Iterators.
The C++ library defines a few categories of iterators: forward iterators, random-access iterators and others. For each category there is a set of properties and valid expressions that are guaranteed to be available. If you use an iterator that is not fully compatible with the random-access category in a container or algorithm that requires random access, you will get a compiler error at some point (most likely when the subscript operator [] is used, since this iterator category is required to provide it).
2) Allocators.
All containers in the standard library use an allocator to perform memory allocation/deallocation and object construction. By default, std::allocator is used. If you want to replace it with your own, you need to ensure that it has everything that std::allocator is guaranteed to have; otherwise, you will get a compile-time error.
So, until we get concepts, this is the best and most widely used solution.
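To make this concrete for the op/PROCESSOR example above, here is a hedged sketch of enforcing the requirement with a detection trait and a static_assert; is_processor is a made-up name, and std::void_t is C++17 (a hand-written void_t works the same way on older compilers):
#include <type_traits>
#include <utility>

// Hypothetical trait: does T provide doSomething(int) and doSomethingElse()?
template <class T, class = void>
struct is_processor : std::false_type {};

template <class T>
struct is_processor<T, std::void_t<
        decltype(std::declval<T&>().doSomething(5)),
        decltype(std::declval<T&>().doSomethingElse())>>
    : std::true_type {};

template <class PROCESSOR>
void op(PROCESSOR* proc) {
    static_assert(is_processor<PROCESSOR>::value,
                  "PROCESSOR must provide doSomething(int) and doSomethingElse()");
    proc->doSomething(5);
    proc->doSomethingElse();
}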
First, I think there is no problem with your static polymorphism example. Since it's static, i.e. compile-time resolved, by definition it has less strict demands regarding its interface definition.
Also, it's absolutely legitimate that incorrect code simply won't compile/link, though a clearer error message from the compiler would be nicer.
If you, however, insist on interface definition, you may rewrite your example the following way:
template <class Type>
class Processor
{
public:
    void doSomething(int);
    void doSomethingElse();
};

template <class Type>
void op(Processor<Type>* proc) {
    proc->doSomething(5);
    proc->doSomethingElse();
}

// specialization
template <>
class Processor<Type_Y>
{
    // implement the specialized methods
};

typedef Processor<Type_Y> ProcessorY;

int main() {
    ProcessorY py;
    op(&py);
    return 0;
}

C++ concepts vs static_assert

What exactly is new in C++ concepts? In my understanding they are functionally equal to using static_assert, but in a 'nice' manner, meaning that compiler errors will be more readable (as Bjarne Stroustrup said, you won't get 10 pages of errors, but just one).
Basically, is it true that everything you can do with concepts you can also achieve using static_assert?
Is there something I am missing?
tl;dr
Compared to static_asserts, concepts are more powerful because:
they give you good diagnostics that you wouldn't easily achieve with static_asserts
they let you easily overload template functions without std::enable_if (which is impossible with static_asserts alone)
they let you define static interfaces and reuse them without losing diagnostics (with static_asserts you would need to repeat multiple assertions in each function)
they let you express your intent better and improve readability (which is a big issue with templates)
This can make life easier in the worlds of:
templates
static polymorphism
overloading
and be the building block for interesting paradigms.
What are concepts?
Concepts express "classes" (not in the C++ sense; rather, "groups" or "sets") of types that satisfy certain requirements. As an example, the Swappable concept expresses the set of types that:
allow calls to std::swap
And you can easily see that, for example, std::string, std::vector, std::deque, int, etc. satisfy this requirement and can therefore be used interchangeably in a function like:
template<typename Swappable>
void func(Swappable& a, Swappable& b) {
    std::swap(a, b);
}
Concepts have always existed in C++ as an idea; the actual language feature that will be added in the (possibly near) future will just allow you to express and enforce them directly in the language.
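For illustration, here is a sketch of how such a concept could be spelled in the C++20 syntax that was eventually standardized (the real std::swappable in the standard library is defined somewhat differently; this is only an approximation):
#include <utility>

template <typename T>
concept Swappable = requires(T& a, T& b) {
    std::swap(a, b);
};

template <Swappable T>
void func(T& a, T& b) {
    std::swap(a, b);
}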
Better diagnostic
As far as better diagnostic goes, we will just have to trust the committee for now. But the output they "guarantee":
error: no matching function for call to 'sort(list<int>&)'
sort(l);
^
note: template constraints not satisfied because
note: `T' is not a/an `Sortable' type [with T = list<int>] since
note: `declval<T>()[n]' is not valid syntax
is very promising.
It's true that you can achieve a similar output using static_asserts but that would require different static_asserts per function and that could get tedious very fast.
As an example, imagine you have to enforce the set of requirements given by the Container concept in two functions taking a template parameter; you would need to replicate them in both functions:
template<typename C>
void func_a(...) {
    static_assert(...);
    static_assert(...);
    // ...
}

template<typename C>
void func_b(...) {
    static_assert(...);
    static_assert(...);
    // ...
}
Otherwise you would lose the ability to distinguish which requirement was not satisfied.
With concepts instead, you can just define the concept and enforce it by simply writing:
template<Container C>
void func_a(...);
template<Container C>
void func_b(...);
Concepts overloading
Another great feature that is introduced is the ability to overload template functions on template constraints. Yes, this is also possible with std::enable_if, but we all know how ugly that can become.
As an example you could have a function that works on Containers and overload it with a version that happens to work better with SequenceContainers:
template<Container C>
int func(C& c);
template<SequenceContainer C>
int func(C& c);
The alternative, without concepts, would be this:
template<typename T>
typename std::enable_if<
    Container<T>::value,
    int
>::type func(T& c);

template<typename T>
typename std::enable_if<
    SequenceContainer<T>::value,
    int
>::type func(T& c);
Definitely uglier and possibly more error prone.
Cleaner syntax
As you have seen in the examples above the syntax is definitely cleaner and more intuitive with concepts. This can reduce the amount of code required to express constraints and can improve readability.
As seen before, you can actually get to an acceptable level with something like:
static_assert(Concept<T>::value, "T does not satisfy Concept");
but at that point you would lose the fine-grained diagnostics that separate static_asserts give you. With concepts you don't need this tradeoff.
Static polymorphism
And finally concepts have interesting similarities to other functional paradigms like type classes in Haskell. For example they can be used to define static interfaces.
For example, let's consider the classical approach for an (infamous) game object interface:
struct Object {
    // …
    virtual void update() = 0;
    virtual void draw() = 0;
    virtual ~Object();
};
Then, assuming you can iterate over your derived objects polymorphically (say, through a container of references or dereferenced pointers), you can do:
for (auto& o : objects) {
    o.update();
    o.draw();
}
Great, but unless you want to use multiple inheritance or entity-component-based systems, you are pretty much stuck with only one possible interface per class.
But if you actually want static polymorphism (polymorphism that is not that dynamic after all) you could define an Object concept that requires update and draw member functions (and possibly others).
At that point you can just create a free function:
template<Object O>
void process(O& o) {
    o.update();
    o.draw();
}
And after that you could define other interfaces for your game objects, with other requirements. The beauty of this approach is that you can develop as many interfaces as you want without:
modifying your classes
requiring a base class
And they are all checked and enforced at compile time.
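A tiny usage sketch with two made-up, unrelated types; neither shares a base class, yet both satisfy the Object concept purely structurally:
struct Tank  { void update() {} void draw() {} };
struct Cloud { void update() {} void draw() {} };   // no relation to Tank whatsoever

void frame(Tank& t, Cloud& c) {
    process(t);
    process(c);
}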
This is just a stupid example (and a very simplistic one), but concepts really open up a whole new world for templates in C++.
If you want more information, you can read this nice article on C++ concepts vs Haskell type classes.

Is there any reason to make a template template parameter non variadic?

If I expect a template template parameter to take one argument then I could declare it as follows:
template<template<typename> class T>
struct S {
    T<int> t_;
    //other code here
};
However, if I later want to provide a template template argument which takes two parameters, where the second has a default value (like std::vector), T<int> t_; would still work, but the template would not match template<typename> class T. I could fix this by making template<typename> class T a variadic template template parameter, template<typename...> class T. Now my code is more flexible, as sketched below.
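For illustration, a minimal sketch of the variadic version just described:
#include <vector>

template <template <typename...> class T>
struct S {
    T<int> t_;
    //other code here
};

S<std::vector> s;   // now matches, despite vector's defaulted allocator parameter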
Should I make all my template template parameters variadic in future? Is there any reason I should not (assuming C++11 support is already required for my code for other reasons)?
First, documentation. If the parameter is variadic, the user now needs to check some other source to find out that this really wants a template that takes one template parameter.
Second, early checking. If you accidentally pass two arguments to T inside S, the compiler won't tell you (if the parameter is variadic) until a user actually tries to instantiate it.
Third, error messages. If the user passes a template that actually needs two parameters, with the variadic version the compiler will give him an error message on the line where S instantiates T, with all the backtrace noise in between. With the fixed version, he gets the error where he instantiates S.
Fourth, it's not necessary, because template aliases can work around the issue too.
S<vector> s; // doesn't work
// but this does:
template <typename T> using vector1 = vector<T>;
S<vector1> s;
So my conclusion is, don't make things variadic. You don't actually gain flexibility, you just reduce the amount of code the user has to write a bit, at the cost of poorer readability.
If you already know, with high probability, that you will need it, you should add it. Otherwise, You Ain't Gonna Need It (YAGNI), and it would just add more stuff to maintain. It's similar to the decision to have a template parameter in the first place. Especially in a TDD type of environment, you would refactor only when the need arises, rather than do premature abstraction.
Nor is it a good idea to apply rampant abstraction to every part of the application. Rather, it requires a dedication on the part of the developers to apply abstraction only to those parts of the program that exhibit frequent change. Resisting premature abstraction is as important as abstraction itself.
Robert C. Martin, Agile Principles, Patterns, and Practices in C#, page 132
The benefits of variadic templates can be real, as you point out yourself. But as long as the need for them is still speculative, I wouldn't add them.

typedef vs public inheritance in c++ meta-programming

Disclaimer: the question is completely different from Inheritance instead of typedef and I could not find any similar question so far
I like to play with C++ template meta-programming (mostly at home; I sometimes introduce it lightly at work, but I don't want the program to become readable only to those who bothered learning about it). However, I have been quite put off by the compiler errors whenever something goes wrong.
The problem is that, of course, C++ template meta-programming is based on templates, and therefore any time you get a compiler error within a deeply nested template structure, you have to dig your way through a ten-line error message. I have even taken the habit of copying/pasting the message into a text editor and indenting it to get some structure until I get an idea of what is actually happening, which adds some work to tracking down the error itself.
As far as I know, the problem is mostly due to the compiler and how it outputs typedefs (there are other problems, like the depth of nesting, but then it's not really the compiler's fault). Cool features like variadic templates and type deduction (auto) are announced for the upcoming C++0x, but I would really like better error messages to boot. It can prove painful to use template meta-programming, and I do wonder what this will become when more people actually get into it.
I have replaced some of the typedefs in my code, and use inheritance instead.
typedef partition<AnyType> MyArg;
struct MyArg2: partition<AnyType> {};
That's not many more characters to type, and it is no less readable in my opinion. In fact it might even be more readable, since it guarantees that the newly declared type appears close to the left margin, instead of being at an undetermined offset to the right.
This however introduces another problem. In order to make sure that I didn't do anything stupid, I often wrote my template functions/classes like so:
template <class T> T& get(partition<T>&);
This way I was sure that it can only be invoked for a suitable object.
Especially when overloading operators such as operator+, you need some way to narrow down the scope of your operators, or you run the risk of them being invoked for ints, for example.
However, while this works with a typedef'd type, since it is only an alias, it does not work with inheritance...
For functions, one can simply use the CRTP
template <class Derived, class T> struct partition;
template <class Derived, class T> T& get(partition<Derived, T>&);
This makes it possible to know the 'real' type that was used to invoke the function before the compiler applies the derived-to-base conversion. One should note that this decreases the chances of this particular overload being selected, since the compiler has to perform a conversion, but I have never noticed any problem so far.
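A short usage sketch, with a made-up derived type (get's definition is elided, as in the declarations above):
struct MyPartition : partition<MyPartition, int> { /* ... */ };

int& client(MyPartition& p) {
    return get(p);   // deduces Derived = MyPartition, T = int through the base class
}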
Another solution to this problem is adding a 'tag' property to my types, to distinguish them from one another, and then counting on SFINAE.
struct partition_tag {};
template <class T> struct partition { typedef partition_tag tag; ... };
template <class T>
typename boost::enable_if<
    boost::is_same<
        typename T::tag,
        partition_tag
    >,
    T&
>::type
get(T&)
{
    ...
}
It requires some more typing, though, especially if one declares and defines the function/method in different places (and if I don't bother, my interface pretty soon gets jumbled). However, when it comes to classes, since no transformation of types is performed, it does get more complicated:
// The unconstrained class:
template <class T>
class MyClass { /* stuff */ };

// Alternative 1: use of boost::enable_if (replaces the primary template above)
template <class T, class Enable = void>
class MyClass { /* empty */ };

template <class T>
class MyClass <
    T,
    typename boost::enable_if<
        boost::is_same<
            typename T::tag,
            partition_tag
        >
    >::type
>
{
    /* useful stuff here */
};

// Alternative 2: OR use of the static assert
template <class T>
class MyClass
{
    BOOST_STATIC_ASSERT((/*this comparison of tags...*/));
};
I tend to use the 'static assert' more than the 'enable_if'; I think it is much more readable when I come back to the code after some time.
Well, basically I have not made up my mind yet and I am still experimenting with the different techniques exposed here.
Do you use typedefs or inheritance?
How do you restrict the scope of your methods/functions, or otherwise control the type of the arguments provided to them (and likewise for classes)?
And of course, I'd like more than personal preferences if possible. If there is a sound reason to use a particular technique, I'd rather know about it!
EDIT:
I was browsing Stack Overflow and just found this pearl from Boost.MPL I had completely forgotten:
BOOST_MPL_ASSERT_MSG
The idea is that you give the macro 3 arguments:
The condition to check
a message (a C++ identifier) that will be displayed in the error message
the list of types involved (as a parenthesized tuple)
It may help considerably with both code self-documentation and better error output.
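A hedged usage sketch, reusing the partition_tag idea from above (the function name is made up, and the exact diagnostic formatting depends on the compiler):
#include <boost/mpl/assert.hpp>
#include <boost/type_traits/is_same.hpp>

template <class T>
void some_partition_algorithm(T& t)
{
    typedef boost::is_same<typename T::tag, partition_tag> is_partition;
    BOOST_MPL_ASSERT_MSG(is_partition::value,
                         ARGUMENT_MUST_BE_A_PARTITION,
                         (T));
    // ...
}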
What you are trying to do is explicitly check whether the types passed as template arguments provide the concepts necessary. Short of the concepts feature, which was thrown out of C++0x (and was thus one of the main culprits for it becoming C++1x), it's certainly hard to do proper concept checking. Since the '90s there have been several attempts to create concept-checking libraries without language support but, basically, all these have achieved is to show that, in order to do it right, concepts need to become a feature of the core language rather than a library-only feature.
I don't find your ideas of deriving instead of typedef and using enable_if very appealing. As you have said yourself, it often obscures the actual code only for the sake of better compiler error messages.
I find the static assert a lot better. It doesn't require changing the actual code, we are all used to having assertion checks in algorithms and have learned to mentally skip over them when we want to understand the actual algorithm, it might produce better error messages, and it will carry over better to C++1x, which is going to have static_assert (complete with class-designer-provided error messages) built into the language. (I suspect BOOST_STATIC_ASSERT simply uses the built-in static_assert if that's available.)