As seen in this question, there is a difference between the results MKL gives for serial and distributed execution. For that reason, I would like to study that error. From my book I have:
|ε_(x_c)| = |x - x_c| <= 1/2 * 10^(-d) is the absolute error, where d specifies the number of decimal digits that are accurate between the actual number x and the number the computer holds, x_c.
|ρ_(x_c)| = |x - x_c| / |x| <= 5 * 10^(-s) is the absolute relative error, where s specifies the number of significant digits.
So, we can write code like this:
double calc_error(double a, double x)
{
    return std::abs(x - a) / std::abs(a);
}
in order to compute the absolute relative error, for example, as seen here.
Are there more types of errors to study, besides the absolute error and the absolute relative error?
Here are some of my data to play with:
serial gives:
-250207683.634793 -1353198687.861288 2816966067.598196 -144344843844.616425 323890119928.788757
distributed gives:
-250207683.634692 -1353198687.861386 2816966067.598891 -144344843844.617096 323890119928.788757
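For instance, a quick driver (the names and layout are just for illustration) that applies the calc_error above to those five sample pairs:
#include <cmath>
#include <cstdio>

double calc_error(double a, double x)
{
    return std::fabs(x - a) / std::fabs(a);
}

int main()
{
    const double serial[] = { -250207683.634793, -1353198687.861288, 2816966067.598196,
                              -144344843844.616425, 323890119928.788757 };
    const double distributed[] = { -250207683.634692, -1353198687.861386, 2816966067.598891,
                                   -144344843844.617096, 323890119928.788757 };

    for (int i = 0; i < 5; ++i) {
        const double abs_err = std::fabs(serial[i] - distributed[i]);  // absolute error
        const double rel_err = calc_error(serial[i], distributed[i]);  // absolute relative error
        std::printf("abs = %.6e   rel = %.6e\n", abs_err, rel_err);
    }
    return 0;
}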
and then I can expand the idea(s) to the actual data and results.
It doesn't get much more complicated than absolute and absolute relative errors. There is another method that compares the integer representations of floating-point values, the idea being that you want your "tolerance" to adapt to the magnitude of the numbers you are comparing (specifically because representable numbers get sparser as the magnitude grows).
All in all, I think your question is very similar to floating-point comparison, for which there is this excellent guide, and this more exhaustive but much longer paper.
It might also be worth throwing in these for comparing floating point values:
#include <limits>
#include <cmath>
#include <algorithm>

template <class T>
struct fp_equal_strict
{
    // Relative tolerance scaled by the smaller magnitude, with epsilon()
    // as an absolute floor so values near zero can still compare equal.
    inline bool operator() ( const T& a, const T& b ) const
    {
        return std::abs(a - b)
            <= std::max(
                std::numeric_limits<T>::epsilon() * std::min( std::abs(a), std::abs(b) ),
                std::numeric_limits<T>::epsilon()
            );
    }
};

template <class T>
struct fp_equal_loose
{
    // As above, but scaled by the larger magnitude, so it accepts more.
    inline bool operator() ( const T& a, const T& b ) const
    {
        return std::abs(a - b)
            <= std::max(
                std::numeric_limits<T>::epsilon() * std::max( std::abs(a), std::abs(b) ),
                std::numeric_limits<T>::epsilon()
            );
    }
};

template <class T>
struct fp_greater
{
    inline bool operator() ( const T& a, const T& b ) const
    {
        return (a - b) >= std::numeric_limits<T>::epsilon() * std::max( std::abs(a), std::abs(b) );
    }
};

template <class T>
struct fp_lesser
{
    inline bool operator() ( const T& a, const T& b ) const
    {
        return (b - a) >= std::numeric_limits<T>::epsilon() * std::max( std::abs(a), std::abs(b) );
    }
};
I would mention that it is also possible to perform an ULPs (Units in the Last Place) comparison, which shows how far apart two floating point numbers are in the binary representation. This is a nice indication of "closeness", since if two numbers are, for example, one ULP apart, it means that there is no floating point number between them, so they are as close as possible in the binary representation without actually being equal.
This method is described here, which is a more recent version of the article linked from the accepted answer, by the same author. Sample code is also provided.
As an aside, but related to the context of your work (comparing sequential vs parallel floating point computations), it is important to note that floating point operations are not associative, which means parallel implementations may not in general give the same result as sequential implementations. Even changing the compiler and optimisation options can lead to different results (e.g. GCC vs ICC, -O0 vs -O3).
An example algorithm for reducing the error when summing a sequence of floating point numbers can be found here, and a comprehensive document by the author of that algorithm can be found here.
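If that link is the commonly cited compensated (Kahan) summation, a minimal sketch looks like this (assuming plain doubles and a compiler that is not allowed to reassociate, e.g. no -ffast-math):
#include <cstddef>

// Kahan compensated summation: the rounding error of each addition is
// carried in 'c' and fed back into the next term.
double kahan_sum(const double* x, std::size_t n)
{
    double sum = 0.0;
    double c = 0.0;                  // running compensation for lost low-order bits
    for (std::size_t i = 0; i < n; ++i) {
        double y = x[i] - c;         // corrected next term
        double t = sum + y;          // low-order bits of y may be lost here...
        c = (t - sum) - y;           // ...and are recovered here
        sum = t;
    }
    return sum;
}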
Related
I understood that std::numeric_limits<double>::epsilon() and DBL_EPSILON should deliver the same value but are defined in different headers, <limits> and <cfloat>. That makes std::numeric_limits<double>::epsilon() the C++-style way of writing it and DBL_EPSILON the C style.
My question is whether there is any benefit in using std::numeric_limits<double>::epsilon() over DBL_EPSILON in a C++ project, aside from a clean C++ coding style. Or did I understand this completely wrong?
Here on this page https://en.cppreference.com/w/cpp/types/numeric_limits you can find tables of the C macro equivalents of the std::numeric_limits members.
They are equivalents, so for any pair of std::numeric_limits member and C macro you find in the table, they can be interchanged.
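A quick compile-time check of that equivalence (C++11 or later, since epsilon() became constexpr then):
#include <cfloat>
#include <limits>

static_assert(std::numeric_limits<double>::epsilon() == DBL_EPSILON,
              "the C++ trait and the C macro are the same value");
static_assert(std::numeric_limits<float>::epsilon() == FLT_EPSILON,
              "same for float");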
The big difference is in generic code:
template <typename T>
void foo() {
std::cout << std::numeric_limits<T>::epsilon();
}
Doing the same with C macros would require writing much more code. Also, any opportunity not to use a macro is a good one.
First of all, DBL_EPSILON is a C API, so it is good practice to use the C++-specific API in C++ code. I know it is more typing; still, it is good practice. In my code, when I need the epsilon multiple times for a single type, I just bind it to some constexpr constant.
More importantly, it is a great tool when you write a template. For example:
#include <algorithm>
#include <cmath>
#include <concepts>
#include <limits>

template<std::floating_point T>
bool fuzzyCompare(T a, T b)
{
    return std::fabs(a - b) <= 4 * std::max(std::fabs(a), std::fabs(b)) * std::numeric_limits<T>::epsilon();
}
One obvious advantage of using std::numeric_limits<T>::epsilon() is generic code. Imagine you are writing some function like an almost-equal. This function should accept floating point numbers of different precisions: double, float, long double, or maybe even integer types as well. To write this with the macro solution, you would have to write an overload for each of the floating point types:
bool almost_equal(float a, float b, int k)
{
return std::abs(a - b) <= FLT_EPSILON * std::abs(a + b) * k;
}
bool almost_equal(double a, double b, int k)
{
return std::abs(a - b) <= DBL_EPSILON * std::abs(a + b) * k;
}
...
But with the numeric_limits template, you can simply write a single function for all of them:
template<typename T>
bool almost_equal(T a, T b, int k)
{
return std::abs(a - b) <= std::numeric_limits<T>::epsilon() * std::abs(a + b) * k;
}
I need to write a float comparison function (equal/not equal) but I have to use C++98 and Boost libraries at most. I know that float comparison should include an epsilon, but I don't know how to write such code without using C++11.
One C++98 example:
#include <cmath>
#include <limits>
#include <iostream>
inline bool equal_with_tolerance(float a, float b, float tolerance = std::numeric_limits<float>::epsilon()) {
return std::abs(a - b) < tolerance;
}
int main() {
float a = 0.1f;
float b = 0.1000001f;
std::cout << (a == b) << '\n'; // Outputs 0.
std::cout << equal_with_tolerance(a, b) << '\n'; // Outputs 1.
}
tolerance depends on your problem domain; using std::numeric_limits<float>::epsilon() is rarely adequate, see this for more details.
I know that float comparison should include epsilon but I don't know how
You could use std::numeric_limits<float>::epsilon() to get the "machine" epsilon.
However, floating point equality comparison with tolerance is not quite as simple as directly comparing absolute difference to machine epsilon. Any small epsilon is going to devolve the comparison into an equality comparison for large values, which leaves you with zero error tolerance.
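A tiny demonstration of that effect (not part of the original answer; it just makes the point concrete):
#include <cfloat>
#include <cmath>
#include <cstdio>

int main()
{
    double a = 1.0e20;
    double b = std::nextafter(a, 2.0e20); // the very next representable double
    // The gap between adjacent doubles at this magnitude dwarfs DBL_EPSILON,
    // so testing |a-b| <= DBL_EPSILON behaves exactly like operator==.
    std::printf("gap = %g, DBL_EPSILON = %g\n", b - a, DBL_EPSILON);
    std::printf("within epsilon? %d\n", std::fabs(a - b) <= DBL_EPSILON); // prints 0
    return 0;
}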
Meaningful tolerant comparison requires that you know what sort of values you're expecting: their magnitude, their sign, and the expected error that you wish to tolerate.
This blog explains the problem in intricate detail. It suggests the following, which may be reasonable for a "generic" comparison:
bool AlmostEqualRelativeAndAbs(float A, float B,
float maxDiff, float maxRelDiff = FLT_EPSILON)
{
// Check if the numbers are really close -- needed
// when comparing numbers near zero.
float diff = fabs(A - B);
if (diff <= maxDiff)
return true;
A = fabs(A);
B = fabs(B);
float largest = (B > A) ? B : A;
if (diff <= largest * maxRelDiff)
return true;
return false;
}
The example is in C, but it is trivial to translate into C++ idioms. There is also an ULP-based function in the article, but its implementation relies on union type punning, which is not allowed in C++.
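For completeness, here is a C++-legal sketch of the ULP distance using memcpy instead of a union (it assumes 64-bit IEEE 754 doubles and finite inputs; NaN and infinity handling is omitted):
#include <cstdint>
#include <cstring>
#include <limits>

// Distance in ULPs between two finite doubles. Mapping the bit patterns
// onto a monotonically ordered integer range lets us simply subtract.
std::int64_t ulp_distance(double a, double b)
{
    std::int64_t ia, ib;
    std::memcpy(&ia, &a, sizeof a); // well-defined, unlike union punning
    std::memcpy(&ib, &b, sizeof b);

    // Map negative values from sign-magnitude onto the ordered range.
    if (ia < 0) ia = std::numeric_limits<std::int64_t>::min() - ia;
    if (ib < 0) ib = std::numeric_limits<std::int64_t>::min() - ib;

    // Fine for near-equal inputs; for opposite-sign inputs of huge
    // magnitude the subtraction itself could overflow, so clamp if needed.
    return ia >= ib ? ia - ib : ib - ia;
}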
I am doing a fixed point implementation in C++ and I am trying to define "not-a-number" and support a function bool isnan( ... ) which returns true if the number is not-a-number and false otherwise.
Can someone give me some ideas of how to define "not-a-number" and implement a function bool isnan( ... ) in my fixed point math implementation?
I have read about C++ NaN, but I couldn't find any source or reference on how to manually define and create a function nan() to use in a fixed point implementation.
Can someone tell me how to proceed, or give some references to proceed from?
Thank you
UPDATE: Fixed point header
#ifndef __fixed_point_header_h__
#define __fixed_point_header_h__
#include <boost/operators.hpp>
#include <boost/assert.hpp>
#include <boost/concept_check.hpp> // for BOOST_CONCEPT_ASSERT / boost::Integer
namespace fp {
template<typename FP, unsigned char I, unsigned char F>
class fixed_point: boost::ordered_field_operators<fp::fixed_point<FP, I, F> >
{
//compute the power of 2 at compile time by template recursion
template<int P,typename T = void>
struct power2
{
static const long long value = 2 * power2<P-1,T>::value;
};
template <typename P>
struct power2<0, P>
{
static const long long value = 1;
};
fixed_point(FP value,bool): fixed_(value){ } // initializer list
public:
typedef FP base_type; /// fixed point base type of this fixed_point class.
static const unsigned char integer_bit_count = I; /// integer part bit count.
static const unsigned char fractional_bit_count = F; /// fractional part bit count.
fixed_point(){ } /// Default constructor.
//Integer to Fixed point
template<typename T> fixed_point(T value) : fixed_((FP)value << F)
{
BOOST_CONCEPT_ASSERT((boost::Integer<T>));
}
//floating point to fixed point
fixed_point(float value) :fixed_((FP)(value * power2<F>::value)){ }
fixed_point(double value) : fixed_((FP)(value * power2<F>::value)) { }
fixed_point(long double value) : fixed_((FP)(value * power2<F>::value)) { }
/// Copy constructor,explicit definition
fixed_point(fixed_point<FP, I, F> const& rhs): fixed_(rhs.fixed_)
{ }
// copy-and-swap idiom.
fp::fixed_point<FP, I, F> & operator =(fp::fixed_point<FP, I, F> const& rhs)
{
fp::fixed_point<FP, I, F> temp(rhs); // First, make a copy of the right-hand side
swap(temp); //swapping the copied(old) data the new data.
return *this; //return by reference
}
/// Exchanges the elements of two fixed_point objects.
void swap(fp::fixed_point<FP, I, F> & rhs)
{
std::swap(fixed_, rhs.fixed_);
}
bool operator <(
/// Right hand side.
fp::fixed_point<FP, I, F> const& rhs) const
{
return fixed_ < rhs.fixed_; //return by value
}
bool operator ==(
/// Right hand side.
fp::fixed_point<FP, I, F> const& rhs) const
{
return fixed_ == rhs.fixed_; //return by value
}
// Addition.
fp::fixed_point<FP, I, F> & operator +=(fp::fixed_point<FP, I, F> const& summation)
{
fixed_ += summation.fixed_;
return *this; //! /return A reference to this object.
}
/// Subtraction.
fp::fixed_point<FP, I, F> & operator -=(fp::fixed_point<FP, I, F> const& subtraction)
{
fixed_ -= subtraction.fixed_;
return *this; // return A reference to this object.
}
// Multiplication.
fp::fixed_point<FP, I, F> & operator *=(fp::fixed_point<FP, I, F> const& factor)
{
fixed_ = ( fixed_ * (factor.fixed_ >> F) ) +
( ( fixed_ * (factor.fixed_ & (power2<F>::value-1) ) ) >> F );
return *this; //return A reference to this object.
}
/// Division.
fp::fixed_point<FP, I, F> & operator /=(fp::fixed_point<FP, I, F> const& divisor)
{
fp::fixed_point<FP, I, F> fp_z=1;
fp_z.fixed_ = ( (fp_z.fixed_) << (F-2) ) / ( divisor.fixed_ >> (2) );
*this *= fp_z;
return *this; //return A reference to this object
}
private:
/// The value in fixed point format.
FP fixed_;
};
} // namespace fp
#endif // __fixed_point_header_h__
Usually fixed point math is used on embedded hardware that has no FPU.
Often this hardware also lacks program or data space and/or processing power.
Are you sure that you require generic support of NaN, INF, or whatever?
It may be sufficient to implement these explicitly, as separate flags on the operations that can produce those values.
When you use fixed point arithmetic you have to know your data extremely well to avoid overflows or underflows in multiplications or divisions, so your algorithms have to be written in a way that avoids these special conditions anyway.
In addition, even when using double: once you have one of these special values in your algorithm, they spread like a virus and the result is quite useless.
In conclusion: in my opinion, explicitly implementing this in your fixed point class is a significant waste of processing power, because you have to add conditionals to every fixed point operation, and conditionals are poison to the deep CPU pipelines of DSPs or microcontrollers.
Could you give us an example of what you mean by fixed point? Is it implemented as a class? Is it a fixed number of bytes, or do you support 8-, 16-, 32-, or 64-bit numbers? How do you represent negative values?
Depending on these factors, you can implement it in a few possible different ways. The way IEEE floating point numbers get away with it is that the numbers are encoded in a special format allowing flags to be set based on the bit pattern. In a fixed point implementation that might not be possible, but if it's a class you could define the arithmetic operators for the class and then set the resultant number to be NaN.
UPDATE
Looking at the code, it seems you are just stuffing the information into the value. So the best way may be to have an isnan flag in the class, set it from the appropriate math operations, and then check for it before you perform the operations so that the NaN propagates.
Essentially, you must set aside some value or set of values to represent a NaN. In every operation on your objects (e.g., addition), you must test whether an input value is a NaN and respond accordingly.
Additionally, you must make sure no normal operation produces a NaN result inadvertently. So you have to handle overflows and such to ensure that, if a calculated result would be a bit pattern for a NaN, you produce an infinity and/or an exception indication and/or whatever result is desired.
That is basically it; there is no magic.
Generally, you would not want to use a single bit as a flag, because that wastes many bit combinations that could be used to represent values. IEEE 754 sets aside one value of the exponent field (all ones) to indicate infinity (if the significand field is all zeroes) or NaN (otherwise). That way, only a small portion of the bit combinations are used for NaNs. (For 32-bit, there are 2^24 - 2 NaNs out of 2^32 possible bit combinations, so less than 0.4% of the potential values are expended on NaNs.)
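As a concrete illustration of "setting aside a value" (a sketch only, not tied to the header posted above): a Q16.16 type could reserve the single most negative raw value as its NaN, spending one bit pattern out of 2^32 rather than a whole flag bit:
#include <climits>

class fixed16_16 {
    static const int NAN_RAW = INT_MIN; // reserved pattern: this raw value means NaN
    int raw_;                           // Q16.16: value == raw_ / 65536.0

    explicit fixed16_16(int raw) : raw_(raw) {}

public:
    fixed16_16() : raw_(0) {}

    static fixed16_16 nan() { return fixed16_16(NAN_RAW); }
    bool isnan() const { return raw_ == NAN_RAW; }

    fixed16_16 operator+(const fixed16_16& rhs) const
    {
        if (isnan() || rhs.isnan())
            return nan();                        // NaN propagates through operations
        long long sum = (long long)raw_ + rhs.raw_;
        if (sum > INT_MAX || sum <= INT_MIN)     // overflow, or result would collide
            return nan();                        // with the reserved NaN pattern
        return fixed16_16((int)sum);
    }
};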
For the issue of the floating precision, I defined my custom compare function for floating numbers:
bool cmp(double a, double b)
{
    if (std::fabs(a - b) <= eps) return false; // treat nearly-equal values as equivalent
    return a < b;
}
Then I call sort on some array of floating point numbers. I've heard that a bad compare function can cause sort to crash with a segmentation fault. I'm just wondering: will cmp work correctly for sort? On one hand, cmp satisfies transitivity of the ordering. But on the other hand, cmp(x - eps, x) == false && cmp(x, x + eps) == false doesn't imply cmp(x - eps, x + eps) == false.
I didn't use sort directly on floating numbers because what I want to sort is pair<double, double>.
For example:
(1,2), (2,1), (2.000000001, 0)
I'd like to consider 2 and 2.000000001 as the same and expect the result to be:
(1,2), (2.000000001, 0), (2,1)
std::sort requires a comparer that defines a strict weak ordering. This means, among other things, that the following condition must be met:
We define two items, a and b, to be equivalent (a === b) if !cmp(a, b) && !cmp(b, a)
Equivalence is transitive: a === b && b === c => a === c
As you already say in your question, your function cmp() does not meet these conditions, so you cannot use it with std::sort(). Not only will the result of the algorithm be unpredictable, which is bad unless you are actually looking for that unpredictability (cf. randomize): if you have a few values that are very close to each other, such that any of them compares true with some but false with others, the algorithm might even enter an infinite loop.
So the answer is no, you cannot use your function cmp() in std::sort() unless you want to risk your program freezing.
Why would you bother to make an approximate less-than comparison? That makes no sense.
Just sort your array strictly by actual values.
Then use your approximate comparison function to determine which of the elements you wish to consider to be equal.
(The equivalent in English would be the infamous "almost better". Think about it.)
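In code, that two-step approach might look like this (the eps and the data are illustrative):
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

int main()
{
    std::vector<std::pair<double, double> > v;
    v.push_back(std::make_pair(2.0, 1.0));
    v.push_back(std::make_pair(2.000000001, 0.0));
    v.push_back(std::make_pair(1.0, 2.0));

    // 1) Sort strictly by the actual values.
    std::sort(v.begin(), v.end());

    // 2) Afterwards, walk the sorted range and group neighbours whose
    //    first components are approximately equal.
    const double eps = 1e-6;
    for (std::size_t i = 0; i < v.size(); ) {
        std::size_t j = i + 1;
        while (j < v.size() && std::fabs(v[j].first - v[i].first) <= eps)
            ++j;
        std::printf("group of %u starting at (%.9f, %g)\n",
                    unsigned(j - i), v[i].first, v[i].second);
        i = j;
    }
    return 0;
}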
It's possible to define a comparison function for floating point that groups similar values. You do so by rounding:
#include <cmath>

bool cmp(double a, double b)
{
    const double eps = 0.0001;

    int a_exp;
    double a_mant = frexp(a, &a_exp); // a_mant is in [0.5, 1.0)
    a_mant -= fmod(a_mant, eps);      // round a_mant down to a multiple of 0.0001
    a = ldexp(a_mant, a_exp);         // i.e. a rounded to 0.0001 * 2^a_exp

    // and the same for b
    int b_exp;
    double b_mant = frexp(b, &b_exp);
    b_mant -= fmod(b_mant, eps);
    b = ldexp(b_mant, b_exp);

    // Compare rounded results.
    return a < b;
}
Now cmp(a,b)==true implies that a<b, and a==b and a>b both imply cmp(a,b)==false.
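A usage sketch for the pair<double,double> case from the question (compile together with the cmp above; cmp_pair is a name made up for this example):
#include <algorithm>
#include <utility>
#include <vector>

bool cmp(double a, double b); // the rounding comparer defined above

// Strict weak ordering for pairs: rounded first component, second as tie-breaker.
bool cmp_pair(const std::pair<double, double>& x, const std::pair<double, double>& y)
{
    if (cmp(x.first, y.first)) return true;
    if (cmp(y.first, x.first)) return false;
    return x.second < y.second;
}

int main()
{
    std::vector<std::pair<double, double> > v;
    v.push_back(std::make_pair(1.0, 2.0));
    v.push_back(std::make_pair(2.0, 1.0));
    v.push_back(std::make_pair(2.000000001, 0.0));

    std::sort(v.begin(), v.end(), cmp_pair);
    // resulting order: (1,2), (2.000000001,0), (2,1)
    return 0;
}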
With reference to this question, could anybody please explain and post example code of metaprogramming? I googled the term up, but I found no examples to convince me that it can be of any practical use.
On the same note, is Qt's Meta Object System a form of metaprogramming?
jrh
Most of the examples so far have operated on values (computing digits of pi, the factorial of N, or similar), and those are pretty much textbook examples, but they're not generally very useful. It's just hard to imagine a situation where you really need the compiler to compute the 17th digit of pi. Either you hardcode it yourself, or you compute it at runtime.
An example that might be more relevant to the real world could be this:
Let's say we have an array class where the size is a template parameter (so this would declare an array of 10 integers: array<int, 10>).
Now we might want to concatenate two arrays, and we can use a bit of metaprogramming to compute the resulting array size.
template <typename T, int lhs_size, int rhs_size>
array<T, lhs_size + rhs_size> concat(const array<T, lhs_size>& lhs, const array<T, rhs_size>& rhs){
array<T, lhs_size + rhs_size> result;
// copy values from lhs and rhs to result
return result;
}
A very simple example, but at least the types have some kind of real-world relevance. This function generates an array of the correct size, it does so at compile time, and with full type safety. And it is computing something that we couldn't easily have done either by hardcoding the values (we might want to concatenate a lot of arrays with different sizes) or at runtime (because then we'd lose the type information).
More commonly, though, you tend to use metaprogramming for types, rather than values.
A good example might be found in the standard library. Each container type defines its own iterator type, but plain old pointers can also be used as iterators.
Technically an iterator is required to expose a number of typedef members, such as value_type, and pointers obviously don't do that. So we use a bit of metaprogramming to say "oh, but if the iterator type turns out to be a pointer, its value_type should use this definition instead."
There are two things to note about this. The first is that we're manipulating types, not values. We're not saying "the factorial of N is so and so", but rather, "the value_type of a type T is defined as..."
The second thing is that it is used to facilitate generic programming. (Iterators wouldn't be a very generic concept if it didn't work for the simplest of all examples, a pointer into an array. So we use a bit of metaprogramming to fill in the details required for a pointer to be considered a valid iterator).
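A minimal sketch of that mechanism (the real std::iterator_traits in <iterator> has more members, plus a const T* specialization):
// Primary template: forward the nested typedefs that "real" iterators provide.
template <class Iterator>
struct iterator_traits {
    typedef typename Iterator::value_type value_type;
    // ... difference_type, pointer, reference, iterator_category
};

// Partial specialization: plain pointers have no nested typedefs,
// so the traits supply them instead.
template <class T>
struct iterator_traits<T*> {
    typedef T value_type;
    // ...
};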
This is a fairly common use case for metaprogramming. Sure, you can use it for a wide range of other purposes (Expression templates are another commonly used example, intended to optimize expensive calculations, and Boost.Spirit is an example of going completely overboard and allowing you to define your own parser at compile-time), but probably the most common use is to smooth over these little bumps and corner cases that would otherwise require special handling and make generic programming impossible.
The concept comes entirely from the name: meta- means to abstract from the thing it is prefixed to.
In more conversational style: to do something with the thing rather than the thing itself.
In this regard metaprogramming is essentially writing code, which writes (or causes to be written) more code.
The C++ template system is metaprogramming since it doesn't simply do textual substitution (as the C preprocessor does) but has a (complex and inefficient) means of interacting with the code structure it parses to output code that is far more complex. In this regard, template processing in C++ is Turing complete. Being Turing complete is not required for something to count as metaprogramming, but it is almost certainly sufficient.
Code generation tools which are parametrizable may be considered metaprogramming if their template logic is sufficiently complex.
The closer a system gets to working with the abstract syntax tree that represents the language (as opposed to the textual form we represent it in) the more likely it is to be considered metaprogramming.
From looking at the QT MetaObjects code I would not (from a cursory inspection) call it metaprogramming in the sense usually reserved for things like the C++ template system or Lisp macros. It appears to simply be a form of code generation which injects some functionality into existing classes at the compile stage (it can be viewed as a precursor to the sort of Aspect Oriented Programming style currently in vogue, or the prototype-based object systems in languages like JavaScript).
As example of the sort of extreme lengths you can take this in C++ there is Boost MPL whose tutorial shows you how to get:
Dimensioned types (Units of Measure)
quantity<float,length> l( 1.0f );
quantity<float,mass> m( 2.0f );
m = l; // compile-time type error
Higher Order Metafunctions
twice(f, x) := f(f(x))
template <class F, class X>
struct twice
: apply1<F, typename apply1<F,X>::type>
{};
struct add_pointer_f
{
template <class T>
struct apply : boost::add_pointer<T> {};
};
Now we can use twice with add_pointer_f to build pointers-to-pointers:
BOOST_STATIC_ASSERT((
boost::is_same<
twice<add_pointer_f, int>::type
, int**
>::value
));
Although it's large (2000 LOC), I made a reflective class system within C++ that is compiler-independent and includes object marshalling and metadata, but has no storage overhead or access-time penalties. It's hardcore metaprogramming, and it is being used in a very big online game for mapping game objects for network transmission and database mapping (ORM).
Anyway, it takes a while to compile, about 5 minutes, but has the benefit of being as fast as hand-tuned code for each object. So it saves lots of money by reducing significant CPU time on our servers (CPU usage is 5% of what it used to be).
Here's a common example:
template <int N>
struct fact {
    enum { value = N * fact<N-1>::value };
};

template <>
struct fact<0> { // base case: 0! = 1 (also stops the recursion)
    enum { value = 1 };
};

std::cout << "5! = " << fact<5>::value << std::endl;
You're basically using templates to calculate a factorial.
A more practical example I saw recently was an object model based on DB tables that used template classes to model foreign key relationships in the underlying tables.
Another example: in this case the metaprogramming technique is used to compute an approximation of PI at compile time using the Gauss-Legendre algorithm.
Why would you use something like that in the real world? For example: to avoid repeating computations, to obtain smaller executables, to tune code for maximizing performance on a specific architecture, ...
Personally I love metaprogramming because I hate repeating stuff and because I can tune constants exploiting architecture limits.
I hope you like that.
Just my 2 cents.
/**
* FILE : MetaPI.cpp
* COMPILE : g++ -Wall -Winline -pedantic -O1 MetaPI.cpp -o MetaPI
* CHECK : g++ -Wall -Winline -pedantic -O1 -S -c MetaPI.cpp [read file MetaPI.s]
* PURPOSE : simple example template metaprogramming to compute the
* value of PI using [1,2].
*
* TESTED ON:
* - Windows XP, x86 32-bit, G++ 4.3.3
*
* REFERENCES:
* [1]: http://en.wikipedia.org/wiki/Gauss%E2%80%93Legendre_algorithm
* [2]: http://www.geocities.com/hjsmithh/Pi/Gauss_L.html
* [3]: http://ubiety.uwaterloo.ca/~tveldhui/papers/Template-Metaprograms/meta-art.html
*
* NOTE: to make assembly code more human-readable, we'll avoid using
* C++ standard includes/libraries. Instead we'll use C's ones.
*/
#include <cmath>
#include <cstdio>
template <int maxIterations>
inline static double compute(double &a, double &b, double &t, double &p)
{
double y = a;
a = (a + b) / 2;
b = sqrt(b * y);
t = t - p * ((y - a) * (y - a));
p = 2 * p;
return compute<maxIterations - 1>(a, b, t, p);
}
// template specialization: used to stop the template instantiation
// recursion and to return the final value (pi) computed by Gauss-Legendre algorithm
template <>
inline double compute<0>(double &a, double &b, double &t, double &p)
{
return ((a + b) * (a + b)) / (4 * t);
}
template <int maxIterations>
inline static double compute()
{
double a = 1;
double b = (double)1 / sqrt(2.0);
double t = (double)1 / 4;
double p = 1;
return compute<maxIterations>(a, b, t, p); // call the overloaded function
}
int main(int argc, char **argv)
{
printf("\nTEMPLATE METAPROGRAMMING EXAMPLE:\n");
printf("Compile-time PI computation based on\n");
printf("Gauss-Legendre algorithm (C++)\n\n");
printf("Pi=%.16f\n\n", compute<5>());
return 0;
}
The following example is lifted from the excellent book C++ Templates - The complete guide.
#include <iostream>
using namespace std;

template <int N> struct Pow3 {
    enum { pow = 3 * Pow3<N-1>::pow };
};

template <> struct Pow3<0> {
    enum { pow = 1 };
};

int main() {
    cout << "3 to the 7 is " << Pow3<7>::pow << "\n";
}
The point of this code is that the recursive calculation of the 7th power of 3 takes place at compile time rather than run time. It is thus extremely efficient in terms of runtime performance, at the expense of slower compilation.
Is this useful? In this example, probably not. But there are problems where performing calculations at compile time can be an advantage.
It's hard to say what C++ metaprogramming is. More and more I feel it is much like introducing 'types' as variables, in the way functional programming has them. It makes declarative programming possible in C++.
It's way easier to show examples.
One of my favorites is a 'trick' (or pattern :) ) to flatten multiply-nested switch/case blocks:
#include <iostream>
using namespace std;
enum CCountry { Belgium, Japan };
enum CEra { ancient, medieval, future };
// nested switch
void historic( CCountry country, CEra era ) {
switch( country ) {
case( Belgium ):
switch( era ) {
case( ancient ): cout << "Ambiorix"; break;
case( medieval ): cout << "Keizer Karel"; break;
}
break;
case( Japan ):
switch( era ) {
case( future ): cout << "another Ruby?"; break;
case( medieval ): cout << "Musashi Mijamoto"; break;
}
break;
}
}
// the flattened, metaprogramming way
// define the conversion from 'runtime arguments' to compile-time arguments (if needed...)
// or use just as is.
template< CCountry country, CEra era > void thistoric();
template<> void thistoric<Belgium, ancient> () { cout << "Ambiorix"; }
template<> void thistoric<Belgium, medieval>() { cout << "Keizer Karel"; }
template<> void thistoric<Belgium, future >() { cout << "Beer, lots of it"; }
template<> void thistoric<Japan, ancient> () { cout << "wikipedia"; }
template<> void thistoric<Japan, medieval>() { cout << "Musashi"; }
template<> void thistoric<Japan, future >() { cout << "another Ruby?"; }
// optional: conversion from runtime to compile-time
//
template< CCountry country > struct SelectCountry {
static void select( CEra era ) {
switch (era) {
case( medieval ): thistoric<country, medieval>(); break;
case( ancient ): thistoric<country, ancient >(); break;
case( future ): thistoric<country, future >(); break;
}
}
};
void Thistoric ( CCountry country, CEra era ) {
switch( country ) {
case( Belgium ): SelectCountry<Belgium>::select( era ); break;
case( Japan ): SelectCountry<Japan >::select( era ); break;
}
}
int main() {
historic( Belgium, medieval ); // plain, nested switch
thistoric<Belgium,medieval>(); // direct compile time switch
Thistoric( Belgium, medieval );// flattened nested switch
return 0;
}
The only time I needed to use Boost.MPL in my day job was when I needed to convert boost::variant to and from QVariant.
Since boost::variant has an O(1) visitation mechanism, the boost::variant to QVariant direction is near-trivial.
However, QVariant doesn't have a visitation mechanism, so in order to convert it into a boost::variant, you need to iterate over the mpl::list of types that the specific boost::variant instantiation can hold, and for each type ask the QVariant whether it contains that type, and if so, extract the value and return it in a boost::variant. It's quite fun, you should try it :)
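A rough sketch of that iteration, assuming Qt and Boost are available (MyVariant and the helper names are illustrative, and error handling is omitted):
#include <boost/variant.hpp>
#include <boost/mpl/for_each.hpp>
#include <QVariant>
#include <QString>

typedef boost::variant<int, double, QString> MyVariant;

// Functor that mpl::for_each applies once per type in the variant's type list.
struct FromQVariant
{
    const QVariant& qv;
    MyVariant& out;
    bool& done;

    FromQVariant(const QVariant& q, MyVariant& o, bool& d) : qv(q), out(o), done(d) {}

    template <typename T>
    void operator()(T) // receives a value-initialized T; only its type matters
    {
        if (!done && qv.userType() == qMetaTypeId<T>()) {
            out = qvariant_cast<T>(qv);
            done = true;
        }
    }
};

MyVariant fromQVariant(const QVariant& qv)
{
    MyVariant result;
    bool done = false;
    boost::mpl::for_each<MyVariant::types>(FromQVariant(qv, result, done));
    return result;
}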
Qt's meta-object system basically implements reflection, and reflection IS one of the major forms of metaprogramming, quite powerful actually. It is similar to Java's reflection, and the approach is also commonly used in dynamic languages (Python, Ruby, PHP...). It's more readable than templates, but both have their pros and cons.
This is a simple "value computation" along the lines of Factorial. However, it's one you are much more likely to actually use in your code.
The macro CT_NEXTPOWEROFTWO2(VAL) uses template metaprogramming to compute the next power of two greater than or equal to a value for values known at compile time.
template<long long int POW2VAL> class NextPow2Helper
{
enum { c_ValueMinusOneBit = (POW2VAL&(POW2VAL-1)) };
public:
enum {
c_TopBit = (c_ValueMinusOneBit) ?
NextPow2Helper<c_ValueMinusOneBit>::c_TopBit : POW2VAL,
c_Pow2ThatIsGreaterOrEqual = (c_ValueMinusOneBit) ?
(c_TopBit<<1) : c_TopBit
};
};
template<> class NextPow2Helper<1>
{ public: enum { c_TopBit = 1, c_Pow2ThatIsGreaterOrEqual = 1 }; };
template<> class NextPow2Helper<0>
{ public: enum { c_TopBit = 0, c_Pow2ThatIsGreaterOrEqual = 0 }; };
// This only works for values known at Compile Time (CT)
#define CT_NEXTPOWEROFTWO2(VAL) NextPow2Helper<VAL>::c_Pow2ThatIsGreaterOrEqual
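A quick usage sketch (the sizes here are just for illustration; the static_asserts need C++11):
// Size a buffer up to the next power of two at compile time.
enum { BUCKETS = CT_NEXTPOWEROFTWO2(100) }; // 128

int table[BUCKETS];

static_assert(CT_NEXTPOWEROFTWO2(1)   == 1,   "already a power of two");
static_assert(CT_NEXTPOWEROFTWO2(64)  == 64,  "already a power of two");
static_assert(CT_NEXTPOWEROFTWO2(100) == 128, "rounded up");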