Are there any real cases proving that 'sizeof...' is necessary? - c++

I wonder about the advantages of the new sizeof... operator (not to be confused with the sizeof operator). I searched the web and found a few examples that all seem like the following one:
template<class... ArgTypes>
std::size_t GetLength()
{
    return sizeof...(ArgTypes);
}
I think the examples are not illustrative.
Are there any real examples to illustrate that sizeof... is very useful?
Updates:
I found other examples here that seem more meaningful:
template<class ...A> void func(A ...args){
    typedef typename common_type<A...>::type common;
    std::array<common, sizeof...(A)> a = {{ args... }};
}
template<typename... A> int func(const A&... args)
{
    boost::any arr[sizeof...(A)] = { args... };
    return 0;
}

Here is my example of what you can do with sizeof...:
/// Transform a single boolean value into a number
constexpr unsigned int boolCode(bool value) {
    return value;
}

/// Transform a sequence of booleans into a number
template <typename... Args>
constexpr unsigned int boolCode(bool value, Args... others) {
    return value << sizeof...(others) | boolCode(others...);
}
And this handy function could be used in a switch statement, like this:
switch (boolCode(condition1, condition2, condition3)) {
    case boolCode(false,false,false): //...
    case boolCode(false,false,true):  //...
    case boolCode(false,true,false):  //...
    case boolCode(false,true,true):   //...
    case boolCode(true,false,false):  //...
    case boolCode(true,false,true):   //...
    case boolCode(true,true,false):   //...
    case boolCode(true,true,true):    //...
}

You probably want to read the discussion between STL and CornedBee in the comments:
http://channel9.msdn.com/Series/C9-Lectures-Stephan-T-Lavavej-Core-C-/Stephan-T-Lavavej-Core-Cpp-8-of-n#comments
Important bit:
sizeof... is not just syntactic sugar, though. A manually implemented sizeof... would have linear "runtime" (number of instantiations), whereas the built-in sizeof... is O(1). (One big issue of variadics as they are is that compilation tends to be very slow, due to lack of random access into argument packs. Some guy (I think from Boost) studied this and found that Boost.Tuple (a preprocessor-powered non-variadic tuple) compiled significantly faster than a naive variadics-based version.)
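To make the quoted point concrete, here is a sketch of mine (not from the linked discussion) contrasting a hand-rolled pack counter, which needs one class template instantiation per element, with the built-in query:

#include <cstddef>

// hand-rolled: recursive partial specializations, one instantiation per pack element
template <typename... Ts> struct count;
template <> struct count<> { static constexpr std::size_t value = 0; };
template <typename T, typename... Rest>
struct count<T, Rest...> { static constexpr std::size_t value = 1 + count<Rest...>::value; };

// built-in: a single O(1) query, no extra instantiations
template <typename... Ts>
constexpr std::size_t count_builtin() { return sizeof...(Ts); }

static_assert(count<int, char, double>::value == 3, "");
static_assert(count_builtin<int, char, double>() == 3, "");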

The first and foremost reason sizeof was introduced into C++ was because it was present in C, where it is necessary in order to know how much memory to allocate, e.g. malloc( n * sizeof(struct MyClass) ). In C++, it's used in similar cases, where allocation and initialization are separate, for example in container classes, or variants, or maybe classes. It's also been known to be used in template meta-programming, in conjunction with function overload resolution. Things along the lines of: sizeof( discriminatorFunction( someArgs ) ) == sizeof( TrueType ).
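For completeness, a minimal sketch of that sizeof-based discrimination trick (the names has_member_foo, TrueType and FalseType are mine, purely illustrative):

typedef char TrueType;        // sizeof == 1
typedef char FalseType[2];    // sizeof == 2

template <typename T>
struct has_member_foo {
    // chosen when &U::foo is well-formed, removed by SFINAE otherwise
    template <typename U> static TrueType&  discriminator(decltype(&U::foo));
    template <typename U> static FalseType& discriminator(...);

    enum { value = sizeof(discriminator<T>(0)) == sizeof(TrueType) };
};

struct A { void foo(); };
struct B {};

static_assert( has_member_foo<A>::value, "A has foo");
static_assert(!has_member_foo<B>::value, "B does not");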

Related

Is there an existing name for this type and function?

There are 2 hard problems in computer science: cache invalidation, naming things and off-by-one errors.
This is about the 2nd problem: naming things.
I'm looking to find out whether this technique or type has already been used somewhere else and has a name. dichotomy is an OK name, but bools_at_compile_time is a horrible one.
using dichotomy_t = std::variant<std::false_type, std::true_type>;
// (or a struct that inherits from that, and overloads operator bool())

constexpr dichotomy_t dichotomy( bool b ) {
    if (b) return std::true_type{};
    return std::false_type{};
}

template<class F, class...Bools>
constexpr auto bools_at_compile_time( F&& f, Bools...bools ) {
    static_assert( (std::is_same<Bools, bool>{} && ...) );
    return std::visit( std::forward<F>(f), dichotomy(bools)... );
}
dichotomy_t is a variant between true and false. Its runtime representation is 0 or 1.
What this lets you do is:
auto foo( bool x, bool y ) { // <-- x and y are run-time bools here
    auto func = [&](auto x, auto y) {
        return some_template<x,y>(); // <-- x and y are compile-time bools here
    };
    return bools_at_compile_time( func, x, y ); // <-- converts runtime to compile-time bools
}
Is there a name for dichotomy_t or the more general bools_at_compile_time technique? I'm looking for a name that is well known in any community (even a non-C++ one), even a verb that describes "taking a runtime value and creating a switch and a set of compile-time values in generated code to pick between" better than a sentence does.
Live example
A good answer would include the name, citations/quotes describing what that name means, examples of that named thing in use in the other context, and evidence that this name is equivalent to or inclusive of the above type/value and function.
(It may help to find a name for the generalization of this, where the value is an enum instead of a bool: a type with a fixed number of known states, plus a switch/case map that converts the runtime value into a compile-time constant in each case clause.)
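For what it's worth, here is a sketch of that enum generalization (mine, with placeholder names): one std::integral_constant alternative per enumerator, and a switch that picks the compile-time constant matching the runtime value.

#include <type_traits>
#include <variant>
#include <utility>

enum class Mode { A, B, C };

using mode_t = std::variant<
    std::integral_constant<Mode, Mode::A>,
    std::integral_constant<Mode, Mode::B>,
    std::integral_constant<Mode, Mode::C>>;

constexpr mode_t to_mode_t( Mode m ) {
    switch (m) {
        case Mode::B: return std::integral_constant<Mode, Mode::B>{};
        case Mode::C: return std::integral_constant<Mode, Mode::C>{};
        default:      return std::integral_constant<Mode, Mode::A>{};
    }
}

template<class F>
constexpr auto mode_at_compile_time( F&& f, Mode m ) {
    return std::visit( std::forward<F>(f), to_mode_t(m) );
}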
I do not know of any existing names for this pattern, but if you take a good look at how the STL names things, you can use a name close enough to make your code explicit.
I also liked the dispatcher_t idea from @Jarod42; I think it is more generic than dichotomy_t or n_chotomy_t.
dichotomy() could be called make_variant(b), since it returns the std::variant value for the boolean given as an argument, much like std::make_tuple makes a tuple from multiple arguments.
I would suggest replacing bools_at_compile_time with static_eval, much like static_assert makes an assertion at compile time.
Note that if eval is not the right word for your use case, you can easily adapt it to static_*.
#include <type_traits>
#include <variant>
#include <utility>

using dichotomy_t = std::variant<std::false_type, std::true_type>;
// (or a struct that inherits from that, and overloads operator bool())

constexpr dichotomy_t make_variant( bool b ) {
    if (b) return std::true_type{};
    return std::false_type{};
}

template<class F, class...Bools>
constexpr auto static_eval( F&& f, Bools...bools ) {
    static_assert( (std::is_same<Bools, bool>{} && ...) );
    return std::visit( std::forward<F>(f), make_variant(bools)... );
}

template<bool x, bool y>
auto some_template() {
    return x || y;
}

auto foo( bool x, bool y ) { // <-- x and y are run-time bools here
    auto func = [&](auto x, auto y) {
        return some_template<x,y>(); // <-- x and y are compile-time bools here
    };
    return static_eval( func, x, y ); // <-- converts runtime to compile-time bools
}

#include <iostream>
int main() {
    std::cout << foo( true, true ) << "\n";
}
Generation of a specialized version of a function is called cloning (see Procedure Cloning). The term clone is used to name the specialized function generated by the optimizer during constant propagation (see the gcc docs).
The set of specialized functions generated by std::visit could be named a clone set.
This set is generated for all combinations of argument values. The term combination lets us suppose that the set of possible values of each argument is finite.
So we could have a long name for the set of clones, such as set of clones for all combinations of argument values. Another option, more obscure but shorter, could be combinatorial clone set.
As already pointed out, the action of selecting the right function to call in terms of the arguments could be called dispatch.
So I would propose combinatorial_clone_set_dispatch or dispatch_in_combinatorial_clone_set ...
As I am unaware of a similar implementation, I'll just go type by type with bikeshed colors.
using boolean_t = std::variant<std::false_type, std::true_type>;
This is pretty self-explanatory, as it's a variant that can store one or the other of the std::integral_constants for true or false. It's kind of a bool, but bool_t is likely to cause confusion. An alternative is boolean_variant, but that may be too verbose.
constexpr boolean_t to_boolean_t( bool b ) {
    if (b) return std::true_type{};
    return std::false_type{};
}
I started with convert_bool, but that's a bit too generic. to_boolean_t is more expressive. make_boolean_t is also a possibility, as it is basically a boolean_t factory function. Note: I previously chose to_constexpr_boolean, but that's unnecessarily verbose.
template<class F, class...Bools>
constexpr auto static_eval( F&& f, Bools...bools ) {
    static_assert( (std::is_same<Bools, bool>{} && ...) );
    return std::visit( std::forward<F>(f), to_boolean_t(bools)... );
}
I chose static_eval here as I like Clonk's reasoning, but "static" has contextual meaning in C++, so alternatives are (in no order of importance):
boolean_visit
static_visit
constexpr_eval
constexpr_visit
Your issue was (bold mine):
I'm looking for a name that is well known in any community (even a
non-C++ one), even a verb that describes "taking a runtime value and
creating a switch and a set of compile time value in generated code to
pick between" better than a sentence.
There is, but only if you are willing to adopt it from a related field:
The U.S. National Electrical Code (NEC) defines a switchboard as "a
large single panel, frame, or assembly of panels on which are mounted,
on the face, back, or both, switches, over-current and other
protective devices, buses, and usually instruments". The role of a
switchboard is to allow the division of the current supplied to the
switchboard into smaller currents for further distribution and to
provide switching, current protection and (possibly) metering for
those various currents. In general, switchboards may distribute power
to transformers, panelboards, control equipment, and, ultimately, to
individual system loads.
Adopting this thinking, you would simply call it switches.
I will also add that it is quite unusual to specify (i.e. repeat) the storage type or cv-qualifiers, etc. in type/variable names - even when not directly visible, you would usually leave that implicit - unless it really needs to be emphasized.
Maybe staticCastValue?
As in you are casting a dynamic(runtime) value to a static value.
Can be used with templates or overloads for different types.
Or maybe assertImmutable?
As in you are converting a mutable type into an immutable one.
Or perhaps expressConstantly?
As in you are expressing the same value but in constant form.
A form similar to constexpr.
A wild one:
staticBifurcate?
As in there are two things to choose from, thus a bifurcation is there.
bifurcate (verb, /ˈbʌɪfəkeɪt/): divide into two branches or forks. "just below Cairo the river bifurcates"
Or finally convertToConstExpr?
Explicitly saying that the value will be converted to something akin or compatible with a constexpr.

Building data structures at compile time with template-metaprogramming, constexpr or macros

I want to optimize a little program/library I'm writing, and for two weeks I've been somewhat stuck, wondering whether what I had in mind is even possible like this.
(Please be gentle, I don't have very much experience in meta-programming.)
My goal is of course to have certain computations be done by the compiler, so that the programmer - hopefully - only has to edit code at one point in the program and have the compiler "create" all the boilerplate. I do have a reasonably good idea how to do what I want with macros, but it is desired that I do it with templates if possible.
My goal is:
Let's say I have a class that a using programmer can derive from. There he can have multiple incoming and outgoing data types that I want to register somehow, so that the base class can do its operations on them.
class my_own_multiply : function_base {
    in<int> a;
    in<float> b;
    out<double> c;
    // ["..."] // other content of the class that actually does something but is irrelevant
    register_ins<a, b> ins_of_function; // example meta-function calls
    register_outs<c> outs_of_function;
};
The meta-code I have up till now is this (but it's not yet working/complete):
template <typename... Ts>
struct register_ins {
    const std::array<std::unique_ptr<in_type_erasured>, sizeof...(Ts)> ins;

    constexpr std::array<std::unique_ptr<in_type_erasured>, sizeof...(Ts)>
    build_ins_array() {
        std::array<std::unique_ptr<in_type_erasured>, sizeof...(Ts)> ins_build;
        for (unsigned int i = 0; i < sizeof...(Ts); ++i) {
            ins_build[i] = std::make_unique<in_type_erasured>();
        }
        return ins_build;
    }

    constexpr register_ins() : ins(build_ins_array()) {
    }

    template <typename T>
    T getValueOf(unsigned int in_nr) {
        return ins[in_nr]->getValue();
    }
};
As you may see, I want to call my meta-template code with a variable number of ins. (Variable in the sense that the programmer can put however many he likes in there, but they won't change at runtime, so they can be "baked in" at compile time.)
The meta-code is supposed to create an array that has the length of the number of ins and is initialized so that every field points to the original in in the my_own_multiply class. Basically it gives him an indexable data structure that will always have the correct size, and that I could access from the function_base class to use all ins for certain functions, which are also iterable, making things convenient for me.
Now I have looked into how one might do that, but I am now getting the feeling that I might not really be allowed to "create" this array at compile time in a fashion that allows me to still have the ins a and b be non-static and non-const, so that I can mutate them. From my side they wouldn't have to be const anyway, but my compiler seems to not like them being free. The only thing I need const is the array with the pointers. But using constexpr possibly "makes" me make them const?
Okay, I will clarify what I don't get:
When I'm trying to create an "instance" of my meta-stuff structure, it fails because it expects all kinds of const, constexpr and so on. But I don't want them, since I need to be able to mutate most of those variables. I only need this meta-stuff to create an array of the correct size, already at compile time. But I don't want to have to make everything static and const in order to achieve this. So is this even possible under these kinds of terms?
I do not get all the things you have in mind (also regarding that std::unique_ptr in your example), but maybe this helps:
Starting from C++14 (or C++11, but that is strictly limited) you may write constexpr functions which can be evaluated at compile-time. As a precondition (in simple words), all arguments "passed by the caller" must be constexpr. If you want to enforce that the compiler replaces that "call" by the result of a compile-time computation, you must assign the result to a constexpr.
Writing usual functions (just with constexpr added) allows you to write code which is simple to read. Moreover, you can use the same code for both compile-time and run-time computations.
C++17 example (similar things are possible in C++14, although some stuff from std is just missing the constexpr qualifier):
http://coliru.stacked-crooked.com/a/154e2dfcc41fb6c7
#include <cassert>
#include <array>
template<class T, std::size_t N>
constexpr std::array<T, N> multiply(
    const std::array<T, N>& a,
    const std::array<T, N>& b
) {
    // may be evaluated in `constexpr` or in non-`constexpr` context
    // ... in simple man's words this means:
    // inside this function, `a` and `b` are not `constexpr`
    // but the return can be used as `constexpr` if all arguments are `constexpr` for the "caller"
    std::array<T, N> ret{};
    for(size_t n=0; n<N; ++n) ret[n] = a[n] * b[n];
    return ret;
}

int main() {
    {// compile-time evaluation is possible if the input data is `constexpr`
        constexpr auto a = std::array{2, 4, 6};
        constexpr auto b = std::array{1, 2, 3};
        constexpr auto c = multiply(a, b);// assigning to a `constexpr` guarantees compile-time evaluation
        static_assert(c[0] == 2);
        static_assert(c[1] == 8);
        static_assert(c[2] == 18);
    }
    {// for run-time data, the same function can be used
        auto a = std::array{2, 4, 6};
        auto b = std::array{1, 2, 3};
        auto c = multiply(a, b);
        assert(c[0] == 2);
        assert(c[1] == 8);
        assert(c[2] == 18);
    }
    return 0;
}

Multiply variable number of arguments

Say I have a variable amount of arguments which I want to multiply together. The first way I think of is a recursive algorithm:
template<typename Head>
u64 Multiply(Head head) const
{
    return head;
}

template<typename Head, typename... Tail>
u64 Multiply(Head head, Tail... tail) const
{
    return head * Multiply(tail...);
}
But then I saw this trick:
// u32 and u64 are 32 and 64 bit unsigned integers.
template<typename... T>
u64 Multiply(T... args)
{
    u64 res = 1;
    for (const u32& arg : {args...})
        res *= arg;
    return res;
}
The second one appears way nicer to me. Easier to read. However, how does this act on performance? Is anything being copied? What does {args...} do in the first place? Is there a better method?
I have access to C++14.
Edit to be clear: It is about run time multiplication, not compile time.
More to be clear: I do not necessarily want to compute integers (although that is my current application), but the algorithm that I found was specialized for integers.
More: Arguments are of the same type. Algorithms without this restriction would be very interesting but maybe for a different question.
There are multiple questions asked here:
What's the impact on performance? Dunno. You'll need to measure. Depending on the type of the arguments I can imagine that the compiler entirely optimizes things either way, though: it does know the number of arguments and the types.
What is { args... }? Well, it creates an std::initializer_list<T> for the common type of the arguments (assuming there is one). You may want to use the value with std::common_type_t<T...> instead of a fixed type, though.
Is there a better method? There are a couple of approaches although I could imagine that the compiler actually does rather well with this expansion. The alternative which immediately comes to mind is return (args * ... * 1); which, however, requires C++17. If there is at least one argument the * 1 can be omitted: it is there to avoid a compile-time error if there is an empty list of variadic parameters.
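For reference, here is a sketch of that C++17 fold-expression version, combined with std::common_type_t so the result type follows the arguments (my adaptation, not the exact code from the answer):

#include <type_traits>

template <typename... T>
constexpr auto Multiply(T... args)
{
    using R = std::common_type_t<T..., unsigned long long>; // widen to a common type
    return (static_cast<R>(args) * ... * R{1});              // right fold over *, with 1 as the init value
}

static_assert(Multiply(2, 3, 4) == 24, "");
static_assert(Multiply() == 1, "");  // the "* 1" keeps the empty pack well-formed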
The code
template<typename... T>
u64 Multiply(T... args)
{
    u64 res = 1;
    for (const u32& size : {args...})
        res *= size;
    return res;
}
is a bit mysterious to me :-) Why do we have template parameters of type T while inside the method we use fixed-size integer types? And the variable name size looks very obscure, because this variable has nothing to do with any kind of size. Using integer types inside is also not a valid assumption if you pass floating-point data to the template.
OK, but to answer your question:
The first one can be used with all types you put into the template function. The second one uses fixed (unsigned integer) types, which is not what I expect when I see the declaration of the template itself.
Both versions can be made constexpr, as I learned now :-), and work pretty well for compile-time calculation.
To answer the question from your comment:
{args...}
expands to:
{ 1, 2, 3, 4 }
which is simply an "array" (std::initializer_list) and only works if all elements have the same type.
So having
for (const u32& size : {args...})
simply iterates over the array.
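To make the constexpr remark above concrete, a minimal sketch (mine): the recursive version marked constexpr can be evaluated entirely at compile time.

template <typename Head>
constexpr unsigned long long Multiply(Head head) { return head; }

template <typename Head, typename... Tail>
constexpr unsigned long long Multiply(Head head, Tail... tail)
{
    return head * Multiply(tail...);
}

static_assert(Multiply(2, 3, 7) == 42, "evaluated at compile time");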

Returning container from function: optimizing speed and modern style

Not entirely a question; rather, something I have been pondering: how to write such code more elegantly in style while at the same time fully making use of the new C++ standard, etc. Here is the example.
Returning the Fibonacci sequence to a container up to N values (for those not mathematically inclined, this is just adding the previous two values, with the first two values equal to 1, i.e. 1, 1, 2, 3, 5, 8, 13, ...).
example run from main:
std::vector<double> vec;
running_fibonacci_seq(vec,30000000);
1)
template <typename T, typename INT_TYPE>
void running_fibonacci_seq(T& coll, const INT_TYPE& N)
{
    coll.resize(N);
    coll[0] = 1;
    if (N>1) {
        coll[1] = 1;
        for (auto pos = coll.begin()+2;
             pos != coll.end();
             ++pos)
        {
            *pos = *(pos-1) + *(pos-2);
        }
    }
}
2) the same but using an rvalue reference && instead of &, i.e.
void running_fibonacci_seq(T&& coll, const INT_TYPE& N)
EDIT: as noticed by the users who commented below, the rvalue vs. lvalue reference plays no role in the timing - the speeds were actually the same, for reasons discussed in the comments
results for N = 30,000,000
Time taken for &: 919.053ms
Time taken for &&: 800.046ms
Firstly, I know this really isn't a question as such, but which of these is the best modern C++ code? With the rvalue reference (&&) it appears that move semantics are in place and no unnecessary copies are being made, which makes a small improvement on time (important for me due to future real-time application development). Some specific ''questions'' are:
a) passing a container (which was a vector in my example) to a function as a parameter is NOT an elegant solution for how an rvalue reference should really be used. Is this true? If so, how would an rvalue reference really show its worth in the above example?
b) the coll.resize(N); call and the N=1 case: is there a way to avoid these so the user is given a simple interface that just uses the function without sizing the vector dynamically? Can template metaprogramming be of use here so the vector is allocated with a particular size at compile time (i.e. running_fibonacci_seq<30000000>)? Since the numbers can be large, is there any need to use template metaprogramming, and if so, can we use this (link) also?
c) Is there an even more elegant method? I have a feeling the std::transform function could be used with lambdas, e.g.
void running_fibonacci_seq(T&& coll, const INT_TYPE& N)
{
    coll.resize(N);
    coll[0] = 1;
    coll[1] = 1;
    std::transform (coll.begin()+2,
                    coll.end(),    // source
                    coll.begin(),  // destination
                    [????](????) { // lambda as function object
                        return ????????;
                    });
}
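One way those blanks might be filled in (a sketch of mine; it uses std::generate with a stateful lambda rather than std::transform, because transform over a range that overlaps its own output is not guaranteed to work):

#include <algorithm>
#include <vector>

template <typename T, typename INT_TYPE>
void running_fibonacci_seq(T& coll, const INT_TYPE& N)
{
    coll.resize(N);
    typename T::value_type a = 0, b = 1;
    std::generate(coll.begin(), coll.end(), [&] {
        auto next = b;      // yields 1, 1, 2, 3, 5, 8, ...
        b = a + b;
        a = next;
        return next;
    });
}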
[1] http://cpptruths.blogspot.co.uk/2011/07/want-speed-use-constexpr-meta.html
Due to "reference collapsing" this code does NOT use an rvalue reference, or move anything:
template <typename T, typename INT_TYPE>
void running_fibonacci_seq(T&& coll, const INT_TYPE& N);
running_fibonacci_seq(vec,30000000);
All of your questions (and the existing comments) become quite meaningless when you recognize this.
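A short sketch (mine, just to illustrate the collapsing rule): when the argument is an lvalue, T is deduced as an lvalue reference, and T&& collapses back to an lvalue reference.

#include <vector>
#include <type_traits>

template <typename T, typename INT_TYPE>
void running_fibonacci_seq(T&& coll, const INT_TYPE& N)
{
    // called with the lvalue vec: T = std::vector<double>&, so T&& is also std::vector<double>&
    static_assert(std::is_lvalue_reference<T&&>::value,
                  "lvalue argument: no rvalue reference, nothing to move");
    coll.resize(N);
}

int main() {
    std::vector<double> vec;
    running_fibonacci_seq(vec, 30000000);
}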
Obvious answer:
std::vector<double> running_fibonacci_seq(uint32_t N);
Why ?
Because of const-ness:
std::vector<double> const result = running_fibonacci_seq(....);
Because of easier invariants:
void running_fibonacci_seq(std::vector<double>& t, uint32_t N) {
    // Oh, forgot to clear "t"!
    t.push_back(1);
    ...
}
But what of speed ?
There is an optimization called Return Value Optimization that allows the compiler to omit the copy (and build the result directly in the caller's variable) in a number of cases. It is specifically allowed by the C++ Standard even when the copy/move constructors have side effects.
So why pass "out" parameters?
you can only have one return value (sigh)
you may wish to reuse the allocated resources (here, the memory buffer of t)
Profile this:
#include <vector>
#include <cstddef>
#include <type_traits>
template <typename Container>
Container generate_fibbonacci_sequence(std::size_t N)
{
Container coll;
coll.resize(N);
coll[0] = 1;
if (N>1) {
coll[1] = 1;
for (auto pos = coll.begin()+2;
pos != coll.end();
++pos)
{
*pos = *(pos-1) + *(pos-2);
}
}
return coll;
}
struct fibbo_maker {
std::size_t N;
fibbo_maker(std::size_t n):N(n) {}
template<typename Container>
operator Container() const {
typedef typename std::remove_reference<Container>::type NRContainer;
typedef typename std::decay<NRContainer>::type VContainer;
return generate_fibbonacci_sequence<VContainer>(N);
}
};
fibbo_maker make_fibbonacci_sequence( std::size_t N ) {
return fibbo_maker(N);
}
int main() {
std::vector<double> tmp = make_fibbonacci_sequence(30000000);
}
The fibbo_maker stuff is just me being clever, but it lets me deduce the type of Fibonacci sequence you want without you having to repeat it.

What is metaprogramming?

With reference to this question, could anybody please explain and post example code of metaprogramming? I googled the term up, but I found no examples to convince me that it can be of any practical use.
On the same note, is Qt's Meta Object System a form of metaprogramming?
jrh
Most of the examples so far have operated on values (computing digits of pi, the factorial of N or similar), and those are pretty much textbook examples, but they're not generally very useful. It's just hard to imagine a situation where you really need the compiler to compute the 17th digit of pi. Either you hardcode it yourself, or you compute it at runtime.
An example that might be more relevant to the real world could be this:
Let's say we have an array class where the size is a template parameter (so this would declare an array of 10 integers: array<int, 10>).
Now we might want to concatenate two arrays, and we can use a bit of metaprogramming to compute the resulting array size.
template <typename T, int lhs_size, int rhs_size>
array<T, lhs_size + rhs_size> concat(const array<T, lhs_size>& lhs, const array<T, rhs_size>& rhs){
    array<T, lhs_size + rhs_size> result;
    // copy values from lhs and rhs to result
    return result;
}
A very simple example, but at least the types have some kind of real-world relevance. This function generates an array of the correct size, it does so at compile-time, and with full type safety. And it is computing something that we couldn't easily have done either by hardcoding the values (we might want to concatenate a lot of arrays with different sizes), or at runtime (because then we'd lose the type information)
More commonly, though, you tend to use metaprogramming for types, rather than values.
A good example might be found in the standard library. Each container type defines its own iterator type, but plain old pointers can also be used as iterators.
Technically an iterator is required to expose a number of typedef members, such as value_type, and pointers obviously don't do that. So we use a bit of metaprogramming to say "oh, but if the iterator type turns out to be a pointer, its value_type should use this definition instead."
There are two things to note about this. The first is that we're manipulating types, not values. We're not saying "the factorial of N is so and so", but rather, "the value_type of a type T is defined as..."
The second thing is that it is used to facilitate generic programming. (Iterators wouldn't be a very generic concept if they didn't work for the simplest of all examples, a pointer into an array. So we use a bit of metaprogramming to fill in the details required for a pointer to be considered a valid iterator.)
This is a fairly common use case for metaprogramming. Sure, you can use it for a wide range of other purposes (Expression templates are another commonly used example, intended to optimize expensive calculations, and Boost.Spirit is an example of going completely overboard and allowing you to define your own parser at compile-time), but probably the most common use is to smooth over these little bumps and corner cases that would otherwise require special handling and make generic programming impossible.
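To make that concrete, here is a simplified sketch of the mechanism being described; it has the shape of std::iterator_traits but is not its actual definition:

// primary template: forward the nested typedef the iterator already provides
template <typename Iterator>
struct iterator_traits_sketch {
    typedef typename Iterator::value_type value_type;
    // ... difference_type, reference, and so on in the real thing
};

// partial specialization: plain pointers have no nested typedefs,
// so the metaprogram supplies the definitions instead
template <typename T>
struct iterator_traits_sketch<T*> {
    typedef T value_type;
};

// generic code asks the traits class, never the iterator directly:
//     typename iterator_traits_sketch<It>::value_type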
The concept comes entirely from the name: meta- means to abstract from the thing it is prefixed to.
In more 'conversational style': to do something with the thing rather than the thing itself.
In this regard metaprogramming is essentially writing code, which writes (or causes to be written) more code.
The C++ template system is metaprogramming since it doesn't simply do textual substitution (as the C preprocessor does) but has a (complex and inefficient) means of interacting with the code structure it parses to output code that is far more complex. In this regard the template processing in C++ is Turing complete. This is not a requirement for saying that something is metaprogramming, but it is almost certainly sufficient to be counted as such.
Code generation tools which are parametrizable may be considered metaprogramming if their template logic is sufficiently complex.
The closer a system gets to working with the abstract syntax tree that represents the language (as opposed to the textual form we represent it in) the more likely it is to be considered metaprogramming.
From looking at the Qt MetaObjects code I would not (from a cursory inspection) call it metaprogramming in the sense usually reserved for things like the C++ template system or Lisp macros. It appears to simply be a form of code generation which injects some functionality into existing classes at the compile stage (it can be viewed as a precursor to the sort of Aspect Oriented Programming style currently in vogue, or the prototype-based object systems in languages like JavaScript).
As example of the sort of extreme lengths you can take this in C++ there is Boost MPL whose tutorial shows you how to get:
Dimensioned types (Units of Measure)
quantity<float,length> l( 1.0f );
quantity<float,mass> m( 2.0f );
m = l; // compile-time type error
Higher Order Metafunctions
twice(f, x) := f(f(x))
template <class F, class X>
struct twice
: apply1<F, typename apply1<F,X>::type>
{};
struct add_pointer_f
{
template <class T>
struct apply : boost::add_pointer<T> {};
};
Now we can use twice with add_pointer_f to build pointers-to-pointers:
BOOST_STATIC_ASSERT((
boost::is_same<
twice<add_pointer_f, int>::type
, int**
>::value
));
Although it's large (2000 LOC), I made a reflective class system within C++ that is compiler independent and includes object marshalling and metadata, but has no storage overhead or access-time penalties. It's hardcore metaprogramming, and it is being used in a very big online game for mapping game objects for network transmission and database mapping (ORM).
Anyway, it takes a while to compile, about 5 minutes, but has the benefit of being as fast as hand-tuned code for each object. So it saves lots of money by reducing significant CPU time on our servers (CPU usage is 5% of what it used to be).
Here's a common example:
template <int N>
struct fact {
    enum { value = N * fact<N-1>::value };
};

template <>
struct fact<1> {
    enum { value = 1 };
};

std::cout << "5! = " << fact<5>::value << std::endl;
You're basically using templates to calculate a factorial.
A more practical example I saw recently was an object model based on DB tables that used template classes to model foreign key relationships in the underlying tables.
Another example: in this case the metaprogramming technique is used to get an arbitrary-precision value of pi at compile time using the Gauss-Legendre algorithm.
Why should I use something like that in the real world? For example, to avoid repeating computations, to obtain smaller executables, to tune up code for maximizing performance on a specific architecture, ...
Personally I love metaprogramming because I hate repeating stuff and because I can tune up constants exploiting architecture limits.
I hope you like that.
Just my 2 cents.
/**
* FILE : MetaPI.cpp
* COMPILE : g++ -Wall -Winline -pedantic -O1 MetaPI.cpp -o MetaPI
* CHECK : g++ -Wall -Winline -pedantic -O1 -S -c MetaPI.cpp [read file MetaPI.s]
* PURPOSE : simple example template metaprogramming to compute the
* value of PI using [1,2].
*
* TESTED ON:
* - Windows XP, x86 32-bit, G++ 4.3.3
*
* REFERENCES:
* [1]: http://en.wikipedia.org/wiki/Gauss%E2%80%93Legendre_algorithm
* [2]: http://www.geocities.com/hjsmithh/Pi/Gauss_L.html
* [3]: http://ubiety.uwaterloo.ca/~tveldhui/papers/Template-Metaprograms/meta-art.html
*
* NOTE: to make assembly code more human-readable, we'll avoid using
* C++ standard includes/libraries. Instead we'll use C's ones.
*/
#include <cmath>
#include <cstdio>
template <int maxIterations>
inline static double compute(double &a, double &b, double &t, double &p)
{
double y = a;
a = (a + b) / 2;
b = sqrt(b * y);
t = t - p * ((y - a) * (y - a));
p = 2 * p;
return compute<maxIterations - 1>(a, b, t, p);
}
// template specialization: used to stop the template instantiation
// recursion and to return the final value (pi) computed by Gauss-Legendre algorithm
template <>
inline double compute<0>(double &a, double &b, double &t, double &p)
{
return ((a + b) * (a + b)) / (4 * t);
}
template <int maxIterations>
inline static double compute()
{
double a = 1;
double b = (double)1 / sqrt(2.0);
double t = (double)1 / 4;
double p = 1;
return compute<maxIterations>(a, b, t, p); // call the overloaded function
}
int main(int argc, char **argv)
{
printf("\nTEMPLATE METAPROGRAMMING EXAMPLE:\n");
printf("Compile-time PI computation based on\n");
printf("Gauss-Legendre algorithm (C++)\n\n");
printf("Pi=%.16f\n\n", compute<5>());
return 0;
}
The following example is lifted from the excellent book C++ Templates - The complete guide.
#include <iostream>
using namespace std;

template <int N> struct Pow3 {
    enum { pow = 3 * Pow3<N-1>::pow };
};

template <> struct Pow3<0> {
    enum { pow = 1 };
};

int main() {
    cout << "3 to the 7 is " << Pow3<7>::pow << "\n";
}
The point of this code is that the recursive calculation of the 7th power of 3 takes place at compile time rather than run time. It is thus extremely efficient in terms of runtime performance, at the expense of slower compilation.
Is this useful? In this example, probably not. But there are problems where performing calculations at compile time can be an advantage.
It's hard to say what C++ meta-programming is. More and more I feel it is much like introducing 'types' as variables, in the way functional programming has it. It renders declarative programming possible in C++.
It's way easier to show examples.
One of my favorites is a 'trick' (or pattern :) ) to flatten multiply-nested switch/case blocks:
#include <iostream>
using namespace std;

enum CCountry { Belgium, Japan };
enum CEra { ancient, medieval, future };

// nested switch
void historic( CCountry country, CEra era ) {
    switch( country ) {
    case( Belgium ):
        switch( era ) {
        case( ancient ):  cout << "Ambiorix"; break;
        case( medieval ): cout << "Keizer Karel"; break;
        }
        break;
    case( Japan ):
        switch( era ) {
        case( future ):   cout << "another Ruby?"; break;
        case( medieval ): cout << "Musashi Mijamoto"; break;
        }
        break;
    }
}

// the flattened, metaprogramming way
// define the conversion from 'runtime arguments' to compile-time arguments (if needed...)
// or use just as is.
template< CCountry country, CEra era > void thistoric();
template<> void thistoric<Belgium, ancient> () { cout << "Ambiorix"; }
template<> void thistoric<Belgium, medieval>() { cout << "Keizer Karel"; }
template<> void thistoric<Belgium, future  >() { cout << "Beer, lots of it"; }
template<> void thistoric<Japan, ancient>   () { cout << "wikipedia"; }
template<> void thistoric<Japan, medieval>  () { cout << "Musashi"; }
template<> void thistoric<Japan, future>    () { cout << "another Ruby?"; }

// optional: conversion from runtime to compile-time
//
template< CCountry country > struct SelectCountry {
    static void select( CEra era ) {
        switch (era) {
        case( medieval ): thistoric<country, medieval>(); break;
        case( ancient ):  thistoric<country, ancient >(); break;
        case( future ):   thistoric<country, future  >(); break;
        }
    }
};

void Thistoric ( CCountry country, CEra era ) {
    switch( country ) {
    case( Belgium ): SelectCountry<Belgium>::select( era ); break;
    case( Japan ):   SelectCountry<Japan  >::select( era ); break;
    }
}

int main() {
    historic( Belgium, medieval );  // plain, nested switch
    thistoric<Belgium,medieval>();  // direct compile time switch
    Thistoric( Belgium, medieval ); // flattened nested switch
    return 0;
}
The only time I needed to use Boost.MPL in my day job was when I needed to convert boost::variant to and from QVariant.
Since boost::variant has an O(1) visitation mechanism, the boost::variant to QVariant direction is near-trivial.
However, QVariant doesn't have a visitation mechanism, so in order to convert it into a boost::variant, you need to iterate over the mpl::list of types that the specific boost::variant instantiation can hold, and for each type ask the QVariant whether it contains that type, and if so, extract the value and return it in a boost::variant. It's quite fun, you should try it :)
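The Qt/Boost details aside, the shape of that conversion is roughly this (a sketch of mine, with std::any and std::variant standing in for QVariant and boost::variant): try each alternative of the target variant in turn.

#include <any>
#include <variant>

// try each alternative in order; recurse on the rest of the type list
template <typename Variant, typename T, typename... Rest>
Variant from_any_impl(const std::any& a)
{
    if (auto p = std::any_cast<T>(&a))
        return Variant(*p);
    if constexpr (sizeof...(Rest) > 0)
        return from_any_impl<Variant, Rest...>(a);
    else
        throw std::bad_any_cast();
}

template <typename... Ts>
std::variant<Ts...> from_any(const std::any& a)
{
    return from_any_impl<std::variant<Ts...>, Ts...>(a);
}

// usage: auto v = from_any<int, double, std::string>(some_any);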
Qt's MetaObject system basically implements reflection and IS one of the major forms of metaprogramming, quite powerful actually. It is similar to Java's reflection, and it's also commonly used in dynamic languages (Python, Ruby, PHP...). It's more readable than templates, but both have their pros and cons.
This is a simple "value computation" along the lines of Factorial. However, it's one you are much more likely to actually use in your code.
The macro CT_NEXTPOWEROFTWO2(VAL) uses template metaprogramming to compute the next power of two greater than or equal to a value for values known at compile time.
template<long long int POW2VAL> class NextPow2Helper
{
enum { c_ValueMinusOneBit = (POW2VAL&(POW2VAL-1)) };
public:
enum {
c_TopBit = (c_ValueMinusOneBit) ?
NextPow2Helper<c_ValueMinusOneBit>::c_TopBit : POW2VAL,
c_Pow2ThatIsGreaterOrEqual = (c_ValueMinusOneBit) ?
(c_TopBit<<1) : c_TopBit
};
};
template<> class NextPow2Helper<1>
{ public: enum { c_TopBit = 1, c_Pow2ThatIsGreaterOrEqual = 1 }; };
template<> class NextPow2Helper<0>
{ public: enum { c_TopBit = 0, c_Pow2ThatIsGreaterOrEqual = 0 }; };
// This only works for values known at Compile Time (CT)
#define CT_NEXTPOWEROFTWO2(VAL) NextPow2Helper<VAL>::c_Pow2ThatIsGreaterOrEqual
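Usage looks like this (my example, not from the original answer); since the result is a compile-time constant, it can size arrays or feed further templates:

static_assert(CT_NEXTPOWEROFTWO2(1)  == 1,  "");
static_assert(CT_NEXTPOWEROFTWO2(37) == 64, "");
static_assert(CT_NEXTPOWEROFTWO2(64) == 64, "");

char buffer[CT_NEXTPOWEROFTWO2(100)];  // a 128-byte buffer, sized at compile time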