Dynamic creation of a function pointer in C++

I was working on my advanced calculus homework today, and we're doing iteration methods along the lines of Newton's method to find solutions to things like x^2=2. It got me thinking that I could write a function taking two function pointers, one to the function itself and one to its derivative, and automate the process. That wouldn't be too challenging. Then I started thinking: could I have the user input a function and parse that input (yes, I can do that), but can I then dynamically create a pointer to a one-variable function in C++? For instance, given x^2+x, can I create a function double function(double x){ return x*x+x; } at run time? Is this remotely feasible, or is it along the lines of self-modifying code?
Edit:
So I suppose this could be done if you stored the information in an array and had a function that evaluated the information stored in that array for a given input. Then you could create a class, initialize the array inside that class, and use the function from there. Is there a better way?

As others have said, you cannot create new C++ functions at runtime in any portable way. You can however create an expression evaluator that can evaluate things like:
(1 + 2) * 3
contained in a string, at run time. It's not difficult to expand such an evaluator to have variables and functions.
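To make that concrete, here is a minimal sketch (my own illustration, not part of the original answer) of a recursive-descent evaluator that handles +, *, parentheses, numbers and a single variable x; a real evaluator would add error handling and more operators:
#include <cctype>
#include <cstdlib>
#include <iostream>
#include <string>

// Grammar: expr := term ('+' term)*   term := factor ('*' factor)*
//          factor := number | 'x' | '(' expr ')'
struct Parser {
    const char* p;   // current position in the input
    double x;        // value substituted for the variable "x"

    void skip() { while (std::isspace((unsigned char)*p)) ++p; }

    double expr() {                    // sums
        double v = term();
        skip();
        while (*p == '+') { ++p; v += term(); skip(); }
        return v;
    }
    double term() {                    // products
        double v = factor();
        skip();
        while (*p == '*') { ++p; v *= factor(); skip(); }
        return v;
    }
    double factor() {                  // number, 'x', or parenthesised sub-expression
        skip();
        if (*p == '(') { ++p; double v = expr(); skip(); ++p; return v; } // skip ')'
        if (*p == 'x') { ++p; return x; }
        char* end;
        double v = std::strtod(p, &end);
        p = end;
        return v;
    }
};

int main() {
    std::string input = "(1 + 2) * 3 + x*x";
    Parser parser;
    parser.p = input.c_str();
    parser.x = 2.0;
    std::cout << parser.expr() << '\n';   // prints 13
}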

You can't dynamically create a function in the sense that you can generate raw machine code for it, but you can quite easily create mathematical expressions using polymorphism:
struct Expr
{
    virtual double eval(double x) = 0;
};

struct Sum : Expr
{
    Sum(Expr* a, Expr* b) : a(a), b(b) {}
    virtual double eval(double x) { return a->eval(x) + b->eval(x); }
private:
    Expr *a, *b;
};

struct Product : Expr
{
    Product(Expr* a, Expr* b) : a(a), b(b) {}
    virtual double eval(double x) { return a->eval(x) * b->eval(x); }
private:
    Expr *a, *b;
};

struct VarX : Expr
{
    virtual double eval(double x) { return x; }
};

struct Constant : Expr
{
    Constant(double c) : c(c) {}
    virtual double eval(double x) { return c; }
private:
    double c;
};
You can then parse your expression into an Expr object at runtime. For example, x^2+x would be Expr* e = new Sum(new Product(new VarX(), new VarX()), new VarX()). You can then evaluate that for a given value of x by using e->eval(x).
Note: in the above code, I have ignored const-correctness for clarity -- you should not :)

It is along the lines of self-modifying code, and it is possible—just not in "pure" C++. You would need to know some assembly and a few implementation details. Without going down this road, you could abstractly represent operations (e.g. with functors) and build an expression tree to be evaluated.
However, for the simple situation of just one variable that you've given, you'd only need to store coefficients, and you can evaluate those for a given value easily.
#include <cassert>
#include <iostream>
#include <vector>
using namespace std;

// store coefficients as a vector in "reverse" order, e.g. 1x^2 - 2x + 3
// is stored as [3, -2, 1]
typedef double Num;
typedef vector<Num> Coeffs;

Num eval(const Coeffs& c, Num x) {
    assert(c.size()); // must not be empty
    Num result = 0;
    Num factor = 1;
    for (Coeffs::const_iterator i = c.begin(); i != c.end(); ++i) {
        result += *i * factor;
        factor *= x;
    }
    return result;
}

int main() {
    Coeffs c; // x^2 + x + 0
    c.push_back(0);
    c.push_back(1);
    c.push_back(1);
    cout << eval(c, 0) << '\n'; // 0
    cout << eval(c, 1) << '\n'; // 2
    cout << eval(c, 2) << '\n'; // 6
}

You don't really need self-modifying code for that, but you will be writing what amounts to an expression parser and interpreter. You write the code to parse your function into suitable data structures (e.g. trees). For a given input you then traverse the tree and calculate the result of the function. The calculation can be done through a visitor.

You don't need to know assembly. Write C++ code for the possible expressions, and then write a compiler which examines the expression and chooses the appropriate code snippets. That could be done at runtime, the way an interpreter usually works, or it could be a compile phase which creates code to execute by copying the instructions from each expression evaluation into allocated memory and then setting it up as a function. The latter is harder to understand and code, but will perform better. For the development time plus execution time to be less than an interpreted implementation, though, the compiled code would have to be used a huge number (billions) of times.

As others have mentioned, writing self-modifying code isn't necessary at all, and it is painful in a compiled language if you want it to be portable.
The hardest part of your work is parsing the input. I recommend muParser to evaluate your expressions. It should take away a lot of pain and you would be able to focus on the important part of your project.
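For a rough idea of what that looks like in practice, here is a sketch of the usual muParser flow; the calls shown (DefineVar, SetExpr, Eval) are quoted from memory of muParser's documented interface, so check them against the version you actually install:
#include <iostream>
#include "muParser.h"

int main() {
    try {
        double x = 0.0;
        mu::Parser parser;
        parser.DefineVar("x", &x);   // bind the symbol "x" to our variable
        parser.SetExpr("x^2 + x");   // user-supplied expression as a string

        for (x = 0.0; x <= 3.0; x += 1.0)
            std::cout << parser.Eval() << '\n';   // re-evaluates with the current x
    } catch (mu::Parser::exception_type& e) {
        std::cerr << e.GetMsg() << '\n';
    }
}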

Related

How to iterate through variable members of a class C++

I'm currently trying to apply a complicated correction to a bunch of variables (based on normalizing in various phase spaces) for some data that I'm reading in. Since each correction follows the same process, I was wondering if there would be any way to do this iteratively rather than handle each variable by itself (since I need to do this for about 18-20 variables). Can C++ handle this? I was told by someone to try this in Python, but I feel like it could be done in C++ in some way... I'm just hitting a wall!
To give you an idea, given something like:
class VariableClass {
public :
    //each object of this class represents an event for this particular data set
    //containing the following variables
    double x;
    double y;
    double z;
};
I want to do something along the lines of:
for (int i=0; i < num_variables; i++)
{
    for (int j=0; j < num_events; j++)
    {
        //iterate through events
    }
    //correct variable here, then move on to next one
}
Thanks in advance for any advice!!!
I'm assuming your member variables will not all have the same type. Otherwise you can just throw them into a container. If you have C++11, one way you could solve this problem is a tuple. With some template metaprogramming you can simulate a loop over all elements of the tuple. The function std::tie will build a tuple with references to all of your members that you can "iterate" like this:
struct DoCorrection
{
    template<typename T>
    void operator()(T& t) const { /* code goes here */ }
};

for_each(std::tie(x, y, z), DoCorrection());
// see linked SO answer for the detailed code to make this special for_each work.
Then, you can specialize operator() for each member variable type. That will let you do the appropriate math automatically without manually keeping track of the types.
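For reference, the special for_each mentioned above can be sketched roughly like this (my own C++11 version, not the linked answer's exact code); it takes the tuple by value so that a tuple of references built with std::tie still updates the original members:
#include <cstddef>
#include <tuple>
#include <type_traits>

// Recursion end: index I is one past the last element.
template <std::size_t I = 0, typename Func, typename... Ts>
typename std::enable_if<I == sizeof...(Ts)>::type
for_each(std::tuple<Ts...>, Func) {}

// Visit element I, then recurse on the rest of the tuple.
template <std::size_t I = 0, typename Func, typename... Ts>
typename std::enable_if<(I < sizeof...(Ts))>::type
for_each(std::tuple<Ts...> t, Func f) {
    f(std::get<I>(t));
    for_each<I + 1>(t, f);
}

// Usage, as in the snippet above:
//   for_each(std::tie(x, y, z), DoCorrection());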
taken from glm (detail/vec3.inl)
template <typename T>
GLM_FUNC_QUALIFIER typename tvec3<T>::value_type &
tvec3<T>::operator[]
(
    size_type i
)
{
    assert(i < this->length());
    return (&x)[i];
}
this would translate to your example:
class VariableClass {
public :
    //each object of this class represents an event for this particular data
    double x;
    double y;
    double z;

    double & operator[](int i) {
        assert(i < 3);
        return (&x)[i];
    }
};

VariableClass foo;   // note: "VariableClass foo();" would declare a function instead
foo.x = 2.0;
std::cout << foo[0] << std::endl; // => 2.0
Although I would recommend glm if it is just about vector math.
Yes, just put all your variables into a container, like std::vector, for example.
http://en.cppreference.com/w/cpp/container/vector
I recommend spending some time reading about all the std classes. There are many containers and many uses.
In general you cannot iterate over members without relying on implementation-defined things like padding or reordering of sections with different access qualifiers (literally no compiler does the latter, though it is allowed).
However, you can use the generalization of a record type: a std::tuple. Iterating a tuple isn't straightforward, but you will find plenty of code that does it. The worst part here is the loss of named variables, which you can mimic with members.
If you use Boost, you can use Boost.Fusion's helper-macro BOOST_FUSION_ADAPT_STRUCT to turn a struct into a Fusion sequence and then you can use it with Fusion algorithms.
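A rough sketch of that Fusion route (the macro syntax and convenience headers are quoted from memory of Boost.Fusion, so verify them against your Boost version):
#include <boost/fusion/include/adapt_struct.hpp>
#include <boost/fusion/include/for_each.hpp>

struct VariableClass {
    double x, y, z;
};

// Make VariableClass usable as a Fusion sequence.
BOOST_FUSION_ADAPT_STRUCT(VariableClass, (double, x)(double, y)(double, z))

struct DoCorrection {
    template <typename T>
    void operator()(T& value) const { value *= 2.0; /* the real correction goes here */ }
};

int main() {
    VariableClass event = {1.0, 2.0, 3.0};
    boost::fusion::for_each(event, DoCorrection());   // visits x, y and z in turn
}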

'Freezing' an expression

I have a C++ expression that I wish to 'freeze'. By this, I mean I have syntax like the following:
take x*x with x in container ...
where the ... indicates further syntax that isn't useful to this problem. However, if I attempt to compile this, no matter what preprocessor translations I've used to make 'take' an 'operator' (in inverted commas because it's technically not an operator, but the translation phase turns it into a class with, say, operator* available to it), the compiler still attempts to work out where the x*x is coming from. Since x hasn't been declared at that point (it's only declared later, at the 'in' stage), the compiler can't find it and throws a compile error.
My current idea essentially involves attempting to place the expression inside a lambda. Since we can deduce the type of the container, we can declare x with the right type, say [](decltype(*begin(container)) x) { return x*x; }, so that when the compiler looks at the statement it's valid and no error is thrown. However, I'm running into errors actually achieving this.
Thus, my question is:
Is there a way / what's the best way to 'freeze' the x*x part of my expression?
EDIT:
In an attempt to clarify my question, take the following. Assume that the operator- is defined in a sane way so that the following attempts to achieve what the above take ... syntax does:
MyTakeClass() - x*x - MyWithClass() - x - MyInClass() - container ...
When this statement is compiled, the compiler will throw an error; x is not declared, so x*x makes no sense (nor does x - MyInClass(), etc.). What I'm trying to achieve is to find a way to make the above expression compile, using any voodoo magic available, without knowing in advance the type of x (or, in fact, that it will be named x; it could viably be named 'somestupidvariablename').
I came up with an almost-solution, inspired by expression templates (note: this is not true expression templates, it merely borrows the idea). Unfortunately, I could not come up with a way that does not require you to predeclare x, but I did come up with a way to delay the type, so you only have to declare x once globally, and can use it for different types over and over in the same program/file/scope. Here is the expression type that works the magic, which I designed to be very flexible; you should be able to easily add operations and uses at will. It is used exactly how you described, except for the predeclaration of x.
Downsides I'm aware of: it does require that T*T, T+T, and T(long) be compilable.
// (Needs <vector>, <iostream>, <memory> and <stdexcept>, plus the expression
// class and the take/with/in defines shown below.)
expression x(0, true); //x will be the 0th parameter. Sorry: required :(

int main() {
    std::vector<int> container;
    container.push_back(-3);
    container.push_back(0);
    container.push_back(7);
    take x*x with x in container; //here's the magic line
    for(unsigned i=0; i<container.size(); ++i)
        std::cout << container[i] << ' ';
    std::cout << '\n';

    std::vector<float> container2;
    container2.push_back(-2.3);
    container2.push_back(0);
    container2.push_back(7.1);
    take 1+x with x in container2; //here's the magic line
    for(unsigned i=0; i<container2.size(); ++i)
        std::cout << container2[i] << ' ';
    return 0;
}
and here are the class and defines that make it all work:
class expression {
//addition and constants are unused, and merely shown for extendibility
enum exprtype{parameter_type, constant_type, multiplication_type, addition_type} type;
long long value; //for value types, and parameter number
std::unique_ptr<expression> left; //for unary and binary functions
std::unique_ptr<expression> right; //for binary functions
public:
//constructors
expression(long long val, bool is_variable=false)
:type(is_variable?parameter_type:constant_type), value(val)
{}
expression(const expression& rhs)
: type(rhs.type)
, value(rhs.value)
, left(rhs.left.get() ? std::unique_ptr<expression>(new expression(*rhs.left)) : std::unique_ptr<expression>(NULL))
, right(rhs.right.get() ? std::unique_ptr<expression>(new expression(*rhs.right)) : std::unique_ptr<expression>(NULL))
{}
expression(expression&& rhs)
:type(rhs.type), value(rhs.value), left(std::move(rhs.left)), right(std::move(rhs.right))
{}
//assignment operator
expression& operator=(expression rhs) {
type = rhs.type;
value = rhs.value;
left = std::move(rhs.left);
right = std::move(rhs.right);
return *this;
}
//operators
friend expression operator*(expression lhs, expression rhs) {
expression ret(0);
ret.type = multiplication_type;
ret.left = std::unique_ptr<expression>(new expression(std::move(lhs)));
ret.right = std::unique_ptr<expression>(new expression(std::move(rhs)));
return ret;
}
friend expression operator+(expression lhs, expression rhs) {
expression ret(0);
ret.type = addition_type;
ret.left = std::unique_ptr<expression>(new expression(std::move(lhs)));
ret.right = std::unique_ptr<expression>(new expression(std::move(rhs)));
return ret;
}
//skip the parameter list, don't care. Ignore it entirely
expression& operator<<(const expression&) {return *this;}
expression& operator,(const expression&) {return *this;}
template<class container>
void operator>>(container& rhs) {
for(auto it=rhs.begin(); it!=rhs.end(); ++it)
*it = execute(*it);
}
private:
//execution
template<class T>
T execute(const T& p0) {
switch(type) {
case parameter_type :
switch(value) {
case 0: return p0; //only one variable
default: throw std::runtime_error("Invalid parameter ID");
}
case constant_type:
return ((T)(value));
case multiplication_type:
return left->execute(p0) * right->execute(p0);
case addition_type:
return left->execute(p0) + right->execute(p0);
default:
throw std::runtime_error("Invalid expression type");
}
}
//This is also unused, and merely shown as extrapolation
template<class T>
T execute(const T& p0, const T& p1) {
switch(type) {
case parameter_type :
switch(value) {
case 0: return p0;
case 1: return p1; //this version has two variables
default: throw std::runtime_error("Invalid parameter ID");
}
case constant_type:
return value;
case multiplication_type:
return left->execute(p0, p1) * right->execute(p0, p1);
case addition_type:
return left->execute(p0, p1) + right->execute(p0, p1);
default:
throw std::runtime_error("Invalid expression type");
}
}
};
#define take
#define with <<
#define in >>
Compiles and runs with correct output at http://ideone.com/Dnb50
You may notice that since the x must be predeclared, the with section is ignored entirely. There's almost no macro magic here: the macros effectively turn the line into "x*x << x >> container", where the << x does absolutely nothing at all. So the expression is effectively "x*x >> container".
Also note that this method is slow, because this is an interpreter, with almost all the slowdown that implies. However, it has the bonus that it is serializable, you could save the function to a file, load it later, and execute it then.
R.MartinhoFernandes has observed that the definition of x can be simplified to merely be expression x;, and it can deduce the order of parameters from the with section, but it would require a lot of rethinking of the design and would be more complicated. I might come back and add that functionality later, but in the meantime, know that it is definitely possible.
If you can modify the expression to `take(x*x with x in container)`, then that would remove the need to predeclare `x`, with something far, far simpler than expression templates.
#define with ,
#define in ,
// Extra level of indirection so that "with" and "in" have already expanded to
// commas by the time the three-parameter macro is invoked.
#define take(...) TAKE_IMPL(__VA_ARGS__)
#define TAKE_IMPL(expr, var, con) \
    std::transform(con.begin(), con.end(), con.begin(), \
        [](const decltype(con)::value_type& var) -> decltype(con)::value_type \
        {return expr;});

int main() {
    std::vector<int> container;   // needs <algorithm>, <vector>, <iostream>
    container.push_back(-3);
    container.push_back(0);
    container.push_back(7);
    take(x*x with x in container); //here's the magic line
    for(unsigned i=0; i<container.size(); ++i)
        std::cout << container[i] << ' ';
}
I made an answer very similar to my previous answer, but using actual expression templates, which should be much faster. Unfortunately, MSVC10 crashes when it attempts to compile this, but MSVC11, GCC 4.7.0 and Clang 3.2 all compile and run it just fine. (All other versions untested)
Here's the usage of the templates. Implementation code is here.
#define take
#define with ,
#define in >>=

//function call for containers
template<class lhsexpr, class container>
lhsexpr operator>>=(lhsexpr lhs, container& rhs)
{
    for(auto it=rhs.begin(); it!=rhs.end(); ++it)
        *it = lhs(*it);
    return lhs;
}

int main() {
    std::vector<int> container0;
    container0.push_back(-4);
    container0.push_back(0);
    container0.push_back(3);
    take x*x with x in container0; //here's the magic line
    for(auto it=container0.begin(); it!=container0.end(); ++it)
        std::cout << *it << ' ';
    std::cout << '\n';

    auto a = x+x*x+'a'*x;
    auto b = a; //make sure copies work
    b in container0;
    b in container1;
    std::cout << sizeof(b);
    return 0;
}
As you can see, this is used exactly like my previous code, except now all the functions are decided at compile time, which means this will have exactly the same speed as a lambda. In fact, C++11 lambdas were preceded by boost::lambda, which works on very similar concepts.
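For a flavour of that, here is the canonical boost::lambda placeholder example (a short sketch from memory of the library; _1 stands for "the current argument"):
#include <algorithm>
#include <iostream>
#include <vector>
#include <boost/lambda/lambda.hpp>

int main() {
    std::vector<int> v;
    v.push_back(-3);
    v.push_back(0);
    v.push_back(7);

    using namespace boost::lambda;
    // _1 * _1 builds an unnamed function object, much like the x*x expression above.
    std::transform(v.begin(), v.end(), v.begin(), _1 * _1);
    std::for_each(v.begin(), v.end(), std::cout << _1 << ' ');
}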
This is a separate answer, because the code is far different, and far more complicated/intimidating. That's also why the implementation is not in the answer itself.
I don't think it is possible to get this "list comprehension" (not quite, but it is doing the same thing) à la Haskell using the preprocessor. The preprocessor just does simple search and replace with the possibility of arguments, so it cannot perform arbitrary replacements. In particular, changing the order of parts of an expression is not possible.
I cannot see a way to do this without changing the order, since you always need x to appear somehow before x*x in order to define that variable. Using a lambda will not help, since you still need x in front of the x*x part, even if it is just as an argument. This makes this syntax impossible.
There are some ways around this:
Use a different preprocessor. There are preprocessors based on the ideas of Lisp-macros, which can be made syntax aware and hence can do arbitrary transformation of one syntax tree into another. One example is Camlp4/Camlp5 developed for the OCaml language. There are some very good tutorials on how to use this for arbitrary syntax transformation. I used to have an explanation on how to use Camlp4 to transform makefiles into C code, but I cannot find it anymore. There are some other tutorials on how to do such things.
Change the syntax slightly. Such list comprehension is essentially just a syntactic simplification of the usage of a Monad. With the arrival of C++11, Monads have become possible in C++; the syntactic sugar, however, may not be. If you decide to wrap the stuff you are trying to do in a Monad, many things will still be possible, you will just have to change the syntax slightly. Implementing Monads in C++ is anything but fun, though (although I first expected otherwise). Have a look here for some examples of how to get Monads in C++.
The best approach is to parse it using the preprocessor. I do believe the preprocessor can be a very powerful tool for building EDSLs (embedded domain-specific languages), but you must first understand the limitations of the preprocessor when parsing things. The preprocessor can only parse out predefined tokens. So the syntax must be changed slightly by placing parentheses around the expressions, and a FREEZE macro must surround it also (I just picked FREEZE; it could be called anything):
FREEZE(take(x*x) with(x, container))
Using this syntax you can convert it to a preprocessor sequence (using the Boost.Preprocessor library, of course). Once you have it as a preprocessor sequence you can apply lots of algorithms to it to transform it however you like. A similar approach is taken by the Linq library for C++, where you can write this:
LINQ(from(x, numbers) where(x > 2) select(x * x))
Now, to convert to a pp sequence first you need to define the keywords to be parsed, like this:
#define KEYWORD(x) BOOST_PP_CAT(KEYWORD_, x)
#define KEYWORD_take (take)
#define KEYWORD_with (with)
So the way this will work is when you call KEYWORD(take(x*x) with(x, container)) it will expand to (take)(x*x) with(x, container), which is the first step towards converting it to a pp sequence. Now to keep going we need to use a while construct from the Boost.Preprocessor library, but first we need to define some little macros to help us along the way:
// Detects if the first token is parenthesis
#define IS_PAREN(x) IS_PAREN_CHECK(IS_PAREN_PROBE x)
#define IS_PAREN_CHECK(...) IS_PAREN_CHECK_N(__VA_ARGS__,0)
#define IS_PAREN_PROBE(...) ~, 1,
#define IS_PAREN_CHECK_N(x, n, ...) n
// Detect if the parameter is empty, works even if parenthesis are given
#define IS_EMPTY(x) BOOST_PP_CAT(IS_EMPTY_, IS_PAREN(x))(x)
#define IS_EMPTY_0(x) BOOST_PP_IS_EMPTY(x)
#define IS_EMPTY_1(x) 0
// Retrieves the first element of the sequence
// Example:
// HEAD((1)(2)(3)) // Expands to (1)
#define HEAD(x) PICK_HEAD(MARK x)
#define MARK(...) (__VA_ARGS__),
#define PICK_HEAD(...) PICK_HEAD_I(__VA_ARGS__,)
#define PICK_HEAD_I(x, ...) x
// Retrieves the tail of the sequence
// Example:
// TAIL((1)(2)(3)) // Expands to (2)(3)
#define TAIL(x) EAT x
#define EAT(...)
This provides some better detection of parentheses and emptiness. And it provides HEAD and TAIL macros which work slightly differently than BOOST_PP_SEQ_HEAD (Boost.Preprocessor can't handle sequences that have variadic parameters). Now here's how we can define a TO_SEQ macro which uses the while construct:
#define TO_SEQ(x) TO_SEQ_WHILE_M \
( \
BOOST_PP_WHILE(TO_SEQ_WHILE_P, TO_SEQ_WHILE_O, (,x)) \
)
#define TO_SEQ_WHILE_P(r, state) TO_SEQ_P state
#define TO_SEQ_WHILE_O(r, state) TO_SEQ_O state
#define TO_SEQ_WHILE_M(state) TO_SEQ_M state
#define TO_SEQ_P(prev, tail) BOOST_PP_NOT(IS_EMPTY(tail))
#define TO_SEQ_O(prev, tail) \
BOOST_PP_IF(IS_PAREN(tail), \
TO_SEQ_PAREN, \
TO_SEQ_KEYWORD \
)(prev, tail)
#define TO_SEQ_PAREN(prev, tail) \
(prev (HEAD(tail)), TAIL(tail))
#define TO_SEQ_KEYWORD(prev, tail) \
TO_SEQ_REPLACE(prev, KEYWORD(tail))
#define TO_SEQ_REPLACE(prev, tail) \
(prev HEAD(tail), TAIL(tail))
#define TO_SEQ_M(prev, tail) prev
Now when you call TO_SEQ(take(x*x) with(x, container)) you should get a sequence (take)((x*x))(with)((x, container)).
Now, this sequence is much easier to work with (because of the Boost.Preprocessor library). You can now reverse it, transform it, filter it, fold over it, etc. This is extremely powerful, and is much more flexible than having them defined as macros. For example, in the Linq library the query from(x, numbers) where(x > 2) select(x * x) gets transformed into these macros:
LINQ_WHERE(x, numbers)(x > 2) LINQ_SELECT(x, numbers)(x * x)
With these macros, it will then generate the lambda for list comprehension, but there is much more to work with when it generates the lambda. The same can be done in your library too; take(x*x) with(x, container) could be transformed into something like this:
FREEZE_TAKE(x, container, x*x)
Plus, you aren't defining macros like take which invade the global space.
Note: these macros require a C99 preprocessor and thus won't work in MSVC. (There are workarounds, though.)

Lambda Expression vs Functor in C++

I wonder when we should use a lambda expression over a functor in C++. To me, these two techniques are basically the same; if anything, a functor is more elegant and cleaner than a lambda. For example, if I want to reuse my predicate, I have to copy the lambda part over and over. So when does a lambda really come into play?
A lambda expression creates a nameless functor; it's syntactic sugar.
So you mainly use it if it makes your code look better. That generally would occur if either (a) you aren't going to reuse the functor, or (b) you are going to reuse it, but from code so totally unrelated to the current code that in order to share it you'd basically end up creating my_favourite_two_line_functors.h, and have disparate files depend on it.
Pretty much the same conditions under which you would type any line(s) of code, and not abstract that code block into a function.
That said, with range-for statements in C++0x, there are some places where you would have used a functor before where it might well make your code look better now to write the code as a loop body, not a functor or a lambda.
Use a lambda when:
1) It's trivial and trying to share it is more work than benefit.
2) Defining a functor simply adds complexity (due to having to make a bunch of member variables and crap).
If neither of those things is true then maybe you should think about defining a functor.
Edit: it seems that you need an example of when it would be nice to use a lambda over a functor. Here you go:
typedef std::vector< std::pair<int,std::string> > whatsit_t;

int find_it(std::string value, whatsit_t const& stuff)
{
    auto fit = std::find_if(stuff.begin(), stuff.end(),
        [value](whatsit_t::value_type const& vt) -> bool { return vt.second == value; });
    if (fit == stuff.end()) throw std::wtf_error();
    return fit->first;
}
Without lambdas you'd have to use something that similarly constructs a functor on the spot or write an externally linkable functor object for something that's annoyingly trivial.
BTW, I think maybe wtf_error is an extension.
Lambdas are basically just syntactic sugar that implements functors (NB: closures are not simple). In C++0x, you can use the auto keyword to store lambdas locally, and std::function will enable you to store lambdas, or pass them around, in a type-safe manner.
Check out the Wikipedia article on C++0x.
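A tiny illustration of those storage options (the names here are just made up for the example):
#include <functional>
#include <iostream>

int main() {
    int threshold = 3;

    // Store the closure locally with auto...
    auto isBig = [threshold](int n) { return n > threshold; };

    // ...or type-erase it into std::function so it can be stored or passed around.
    std::function<bool(int)> pred = isBig;

    std::cout << std::boolalpha << isBig(5) << ' ' << pred(2) << '\n';   // true false
}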
Small functions that are not repeated.
The main complaint about functors is that they are not in the same place they are used. So you had to find and read the functor out of context from the place it was being used (even if it is only being used in one place).
The other problem was that a functor required some wiring to get parameters into the functor object. Not complex, but all basic boilerplate code. And boilerplate is susceptible to cut-and-paste problems.
Lambdas try to fix both of these. But I would use functors if the function is repeated in multiple places or is larger than (can't think of an appropriate term, as it will be context sensitive) small.
Lambdas and functors have context. A functor is a class and therefore can be more complex than a lambda. A plain function has no context.
#include <iostream>
#include <list>
#include <string>
#include <vector>
using namespace std;
//Functions have no context, mod is always 3
bool myFunc(int n) { return n % 3 == 0; }
//Functors have context, e.g. _v
//Functors can be more complex, e.g. additional addNum(...) method
class FunctorV
{
public:
FunctorV(int num ) : _v{num} {}
void addNum(int num) { _v.push_back(num); }
bool operator() (int num)
{
for(int i : _v) {
if( num % i == 0)
return true;
}
return false;
}
private:
vector<int> _v;
};
void print(string prefix,list<int>& l)
{
cout << prefix << "l={ ";
for(int i : l)
cout << i << " ";
cout << "}" << endl;
}
int main()
{
list<int> l={1,2,3,4,5,6,7,8,9};
print("initial for each test: ",l);
cout << endl;
//function, so no context.
l.remove_if(myFunc);
print("function mod 3: ",l);
cout << endl;
//nameless lambda, context is x
l={1,2,3,4,5,6,7,8,9};
int x = 3;
l.remove_if([x](int n){ return n % x == 0; });
print("lambda mod x=3: ",l);
x = 4;
l.remove_if([x](int n){ return n % x == 0; });
print("lambda mod x=4: ",l);
cout << endl;
//functor has context and can be more complex
l={1,2,3,4,5,6,7,8,9};
FunctorV myFunctor(3);
myFunctor.addNum(4);
l.remove_if(myFunctor);
print("functor mod v={3,4}: ",l);
return 0;
}
Output:
initial for each test: l={ 1 2 3 4 5 6 7 8 9 }
function mod 3: l={ 1 2 4 5 7 8 }
lambda mod x=3: l={ 1 2 4 5 7 8 }
lambda mod x=4: l={ 1 2 5 7 }
functor mod v={3,4}: l={ 1 2 5 7 }
First, I would like to clear up some confusion here.
There are two different things:
a lambda expression, and
the closure object (a functor) that it produces.
A lambda expression, i.e. [] () -> return-type {}, always synthesizes a closure object, which is a kind of functor. A capture-less lambda, however, can additionally be converted to a plain function pointer, and you can force that conversion by putting a + sign in front of it, as in +[] () -> return-type {}. This yields a function pointer.
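A tiny sketch of that conversion (my own example):
int main() {
    auto closure = [](int n) { return n * 2; };   // closure object with a unique unnamed type
    int (*fp)(int) = closure;                     // a capture-less lambda converts to a function pointer
    auto forced = +[](int n) { return n * 3; };   // unary + forces that conversion: forced is int(*)(int)
    return fp(2) + forced(3) - 13;                // 4 + 9 - 13 == 0
}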
Now, coming to your question. You can use lambda repeatedly as follows:
#include <iostream>
using namespace std;

int main()
{
    auto print = [i=0] () mutable { return i++; };   // init-capture requires C++14
    cout << print() << endl;
    cout << print() << endl;
    cout << print() << endl;
    // Call as many times as you want
    return 0;
}
You should use a lambda wherever it comes to mind, considering code expressiveness and easy maintainability; for example, you can use lambdas in custom deleters for smart pointers and with most of the STL algorithms.
If you combine lambdas with other features like constexpr, variadic template parameter packs or generic lambdas, you can achieve many things.
You can find more about it here.
As you pointed out, it works best when you need a one-off and the coding overhead of writing it out as a function isn't worth it.
Conceptually, the decision of which to use is driven by the same criterion as using a named variable versus an in-place expression or constant...
size_t length = strlen(x) + sizeof(y) + z++ + sizeof('\0');
...
allocate(length);
std::cout << length;
...here, creating a length variable encourages the programmer to consider its correctness and meaning in isolation from its later use. The name hopefully conveys enough that it can be understood intuitively and independently of its initial value. It then allows the value to be used several times without repeating the expression (while handling z being different). While here...
allocate(strlen(x) + sizeof(y) + z++ + sizeof('\0'));
...the total code is reduced and the value is localised at the point it's needed. The only things to "carry forward" from a reading of this line are the side effects of allocation and increment (z), but there's no extra local variable with scope or later use to consider. The programmer has to mentally juggle less state while continuing their analysis of the code.
The same distinction applies to functions versus inline statements. For the purposes of answering your question, functors versus lambdas can be seen as just a particular case of this function versus inlining decision.
I tend to prefer functors over lambdas these days. Although they require more code, functors yield cleaner algorithms. The comparison below between find_id and find_id2 showcases that result. While both yield sufficiently clean code, find_id2 is slightly easier to read, as the MatchName(name) definition is extracted from (and secondary to) the primary algorithm.
I would argue, however, that the functor code should be placed inside implementation files right above the function definition where it is used, to provide direct access to the function definition. Otherwise a lambda would be better for code locality/organization.
#include <algorithm>
#include <iostream>
#include <vector>
#include <string>
using namespace std;

struct Person {
    int id;
    string name;
};

typedef vector<Person> People;

int find_id(string const& name, People const& people) {
    auto MatchName = [name](Person const& p) -> bool
    {
        return p.name == name;
    };
    auto found = find_if(people.begin(), people.end(), MatchName);
    if (found == people.end()) return -1;
    return found->id;
}

struct MatchName {
    string const& name;
    MatchName(string const& name) : name(name) {}
    bool operator() (Person const& person)
    {
        return person.name == name;
    }
};

int find_id2(string const& name, People const& people) {
    auto found = find_if(people.begin(), people.end(), MatchName(name));
    if (found == people.end()) return -1;
    return found->id;
}

int main() {
    People people { {0, "Jim"}, {1, "Pam"}, {2, "Dwight"} };
    cout << "Pam's ID is " << find_id("Pam", people) << endl;
    cout << "Dwight's ID is " << find_id2("Dwight", people) << endl;
}
The functor is self-documenting by default; lambdas need to be stored in variables (to be self-documenting) inside more complex algorithm definitions. Hence, for code readability, it is preferable not to use lambdas inline as many people do, in order to gain the self-documenting benefit shown above in the MatchName lambda.
When a lambda is stored in a variable at the call site (or used inline), primary algorithms are slightly more difficult to read. Since lambdas are secondary in nature to the algorithms where they are used, it is preferable to clean up the primary algorithms by using self-documenting subroutines (e.g. functors). This might not matter as much in this example, but with more complex algorithms it can significantly reduce the burden of interpreting the code.
Functors can be as simple (as in the example above) or as complex as they need to be. Sometimes complexity is desirable, as in cases calling for dynamic polymorphism (e.g. for strategy/decorator design patterns, or their template-equivalent policy types). This is a use case lambdas cannot satisfy.
Functors make the captured state explicit without polluting the primary algorithm. When more and more capture variables are required by a lambda, the tendency is to use a blanket capture like [=]. But this reduces readability greatly, as one must mentally jump between the lambda definition and all surrounding local variables, possibly member variables, and more.

Template trick to optimize out allocations

I have:
#include <cstddef>
#include <vector>

struct DoubleVec {
    std::vector<double> data;
    explicit DoubleVec(std::size_t n) : data(n) {}
    std::size_t size() const { return data.size(); }
    double  operator[](std::size_t i) const { return data[i]; }
    double& operator[](std::size_t i)       { return data[i]; }
};

DoubleVec operator+(const DoubleVec& lhs, const DoubleVec& rhs) {
    DoubleVec ans(lhs.size());
    for (std::size_t i = 0; i < lhs.size(); ++i) {
        ans[i] = lhs[i] + rhs[i]; // assume lhs.size() == rhs.size()
    }
    return ans;
}

DoubleVec someFunc(DoubleVec a, DoubleVec b, DoubleVec c, DoubleVec d) {
    DoubleVec ans = a + b + c + d;
    return ans;
}
Now, in the above, the "a + b + c + d" will cause the creation of 3 temporary DoubleVec's -- is there a way to optimize this away with some type of template magic ... i.e. to optimize it down to something equivalent to:
DoubleVec ans(a.size());
for(int i = 0; i < ans.size(); i++) ans[i] = a[i] + b[i] + c[i] + d[i];
You can assume all DoubleVec's have the same # of elements.
The high-level idea is to do some type of templated magic on "+" which "delays the computation" until the =, at which point it looks into itself, goes "hmm, I'm just adding these numbers", and synthesizes a[i] + b[i] + c[i] + d[i] ... instead of all the temporaries.
Thanks!
Yep, that's exactly what expression templates (see http://www.drdobbs.com/184401627 or http://en.wikibooks.org/wiki/More_C%2B%2B_Idioms/Expression-template for example) are for.
The idea is to make operator+ return some kind of proxy object which represents the expression tree to be evaluated. Then operator= is written to take such an expression tree and evaluate it all at once, avoiding the creation of temporaries, and applying any other optimizations that may be applicable.
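As a concrete illustration, here is a minimal sketch of that proxy idea applied to the DoubleVec from the question (my own simplification, addition-only; a real expression-template library is far more careful about operand categories, sizes and lifetimes):
#include <cstddef>
#include <type_traits>
#include <vector>

struct DoubleVec {
    std::vector<double> data;

    explicit DoubleVec(std::size_t n) : data(n) {}

    // Building a DoubleVec from an expression proxy: one loop, no temporaries.
    template <class Expr>
    DoubleVec(const Expr& e) : data(e.size()) {
        for (std::size_t i = 0; i < data.size(); ++i) data[i] = e[i];
    }

    // The same idea for assignment, as described above.
    template <class Expr>
    DoubleVec& operator=(const Expr& e) {
        data.resize(e.size());
        for (std::size_t i = 0; i < data.size(); ++i) data[i] = e[i];
        return *this;
    }

    std::size_t size() const { return data.size(); }
    double  operator[](std::size_t i) const { return data[i]; }
    double& operator[](std::size_t i)       { return data[i]; }
};

// Proxy for "l + r": stores references and does no work until indexed.
template <class L, class R>
struct AddExpr {
    const L& l;
    const R& r;
    std::size_t size() const { return l.size(); }
    double operator[](std::size_t i) const { return l[i] + r[i]; }
};

// Only our own vector/expression types participate in this operator+.
template <class T> struct is_vec_expr { static const bool value = false; };
template <> struct is_vec_expr<DoubleVec> { static const bool value = true; };
template <class L, class R> struct is_vec_expr< AddExpr<L, R> > { static const bool value = true; };

template <class L, class R>
typename std::enable_if<is_vec_expr<L>::value && is_vec_expr<R>::value, AddExpr<L, R> >::type
operator+(const L& l, const R& r) { return AddExpr<L, R>{l, r}; }

// With this in place, "DoubleVec ans = a + b + c + d;" builds a nested AddExpr
// type, and the single loop in the constructor computes a[i] + b[i] + c[i] + d[i]
// directly, with no intermediate DoubleVec temporaries.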
Have a look at Boost.Proto, which is a library for writing EDSL (embedded domain specific languages) directly in C++. There is even an example showing exactly what you need.
http://codeidol.com/cpp/cpp-template-metaprogramming/Domain-Specific-Embedded-Languages/-10.5.-Blitz-and-Expression-Templates/
If we had to boil the problem solved by Blitz++ down to a single sentence, we'd say, "A naive implementation of array math is horribly inefficient for any interesting computation." To see what we mean, take the boring statement
x = a + b + c;
The problem here is that the naive operator+ signature is just too greedy: it tries to evaluate a + b just as soon as it can, rather than waiting until the whole expression, including the addition of c, is available.
In the expression's parse tree, evaluation starts at the leaves and proceeds upwards to the root. What's needed here is some way of delaying evaluation until the library has all of the expression's parts: that is, until the assignment operator is executed. The stratagem taken by Blitz++ is to build a replica of the compiler's parse tree for the whole expression, allowing it to manage evaluation from the top down.
This can't be any ordinary parse tree, though: Since array expressions may involve other operations like multiplication, which require their own evaluation strategies, and since expressions can be arbitrarily large and nested, a parse tree built with nodes and pointers would have to be traversed at runtime by the Blitz++ evaluation engine to discover its structure, thereby limiting performance. Furthermore, Blitz++ would have to use some kind of runtime dispatching to handle the different combinations of operation types, again limiting performance.
Instead, Blitz++ builds a compile-time parse tree out of expression templates. Here's how it works in a nutshell: Instead of returning a newly computed Array, operators just package up references to their arguments in an Expression instance, labeled with the operation:
// operation tags
struct plus; struct minus;

// expression tree node
template <class L, class OpTag, class R>
struct Expression
{
    Expression(L const& l, R const& r)
        : l(l), r(r) {}

    float operator[](unsigned index) const;

    L const& l;
    R const& r;
};

// addition operator
template <class L, class R>
Expression<L,plus,R> operator+(L const& l, R const& r)
{
    return Expression<L,plus,R>(l, r);
}
Notice that when we write a + b, we still have all the information needed to do the computation: it's encoded in the type Expression, and the data is accessible through the expression's stored references. When we write a + b + c, we get a result of type:
Expression<Expression<Array,plus,Array>,plus,Array>

What is metaprogramming?

With reference to this question, could anybody please explain and post example code of metaprogramming? I googled the term, but I found no examples to convince me that it can be of any practical use.
On the same note, is Qt's Meta Object System a form of metaprogramming?
jrh
Most of the examples so far have operated on values (computing digits of pi, the factorial of N, or similar), and those are pretty much textbook examples, but they're not generally very useful. It's just hard to imagine a situation where you really need the compiler to compute the 17th digit of pi. Either you hardcode it yourself, or you compute it at runtime.
An example that might be more relevant to the real world could be this:
Let's say we have an array class where the size is a template parameter (so this would declare an array of 10 integers: array<int, 10>).
Now we might want to concatenate two arrays, and we can use a bit of metaprogramming to compute the resulting array size.
template <typename T, int lhs_size, int rhs_size>
array<T, lhs_size + rhs_size> concat(const array<T, lhs_size>& lhs, const array<T, rhs_size>& rhs) {
    array<T, lhs_size + rhs_size> result;
    // copy values from lhs and rhs to result
    return result;
}
A very simple example, but at least the types have some kind of real-world relevance. This function generates an array of the correct size, it does so at compile-time, and with full type safety. And it is computing something that we couldn't easily have done either by hardcoding the values (we might want to concatenate a lot of arrays with different sizes), or at runtime (because then we'd lose the type information)
More commonly, though, you tend to use metaprogramming for types, rather than values.
A good example might be found in the standard library. Each container type defines its own iterator type, but plain old pointers can also be used as iterators.
Technically an iterator is required to expose a number of typedef members, such as value_type, and pointers obviously don't do that. So we use a bit of metaprogramming to say "oh, but if the iterator type turns out to be a pointer, its value_type should use this definition instead."
There are two things to note about this. The first is that we're manipulating types, not values. We're not saying "the factorial of N is so and so", but rather, "the value_type of a type T is defined as..."
The second thing is that it is used to facilitate generic programming. (Iterators wouldn't be a very generic concept if it didn't work for the simplest of all examples, a pointer into an array. So we use a bit of metaprogramming to fill in the details required for a pointer to be considered a valid iterator).
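Concretely, the mechanism being described is std::iterator_traits; a simplified sketch of its shape (not the exact standard definition) looks like this:
#include <cstddef>

// Primary template: forward the iterator's own nested typedefs.
template <class Iter>
struct iterator_traits {
    typedef typename Iter::value_type      value_type;
    typedef typename Iter::difference_type difference_type;
    typedef typename Iter::reference       reference;
};

// Partial specialization: "oh, but if the iterator type turns out to be a pointer..."
template <class T>
struct iterator_traits<T*> {
    typedef T              value_type;
    typedef std::ptrdiff_t difference_type;
    typedef T&             reference;
};

// Generic code then asks the traits, not the iterator, for the type:
template <class Iter>
typename iterator_traits<Iter>::value_type first_element(Iter it) { return *it; }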
This is a fairly common use case for metaprogramming. Sure, you can use it for a wide range of other purposes (Expression templates are another commonly used example, intended to optimize expensive calculations, and Boost.Spirit is an example of going completely overboard and allowing you to define your own parser at compile-time), but probably the most common use is to smooth over these little bumps and corner cases that would otherwise require special handling and make generic programming impossible.
The concept comes entirely from the name: Meta- means to abstract from the thing it is prefixed on, or, in more conversational style, to do something with the thing rather than the thing itself.
In this regard metaprogramming is essentially writing code which writes (or causes to be written) more code.
The C++ template system is metaprogramming since it doesn't simply do textual substitution (as the C preprocessor does) but has a (complex and inefficient) means of interacting with the code structure it parses to output code that is far more complex. In this regard, template instantiation in C++ is Turing complete. This is not a requirement for calling something metaprogramming, but it is almost certainly sufficient to be counted as such.
Code generation tools which are parametrizable may be considered metaprogramming if their template logic is sufficiently complex.
The closer a system gets to working with the abstract syntax tree that represents the language (as opposed to the textual form we represent it in) the more likely it is to be considered metaprogramming.
From looking at the Qt MetaObjects code, I would not (from a cursory inspection) call it metaprogramming in the sense usually reserved for things like the C++ template system or Lisp macros. It appears to simply be a form of code generation which injects some functionality into existing classes at the compile stage (it can be viewed as a precursor to the sort of Aspect-Oriented Programming style currently in vogue, or to the prototype-based object systems in languages like JavaScript).
As an example of the sort of extreme lengths to which you can take this in C++, there is Boost MPL, whose tutorial shows you how to get:
Dimensioned types (Units of Measure)
quantity<float,length> l( 1.0f );
quantity<float,mass> m( 2.0f );
m = l; // compile-time type error
Higher Order Metafunctions
twice(f, x) := f(f(x))
template <class F, class X>
struct twice
: apply1<F, typename apply1<F,X>::type>
{};
struct add_pointer_f
{
template <class T>
struct apply : boost::add_pointer<T> {};
};
Now we can use twice with add_pointer_f to build pointers-to-pointers:
BOOST_STATIC_ASSERT((
boost::is_same<
twice<add_pointer_f, int>::type
, int**
>::value
));
Although it's large (2000 LOC), I made a reflective class system within C++ that is compiler independent and includes object marshalling and metadata, but has no storage overhead or access-time penalties. It's hardcore metaprogramming, and it is being used in a very big online game for mapping game objects for network transmission and database mapping (ORM).
Anyway, it takes a while to compile, about 5 minutes, but has the benefit of being as fast as hand-tuned code for each object. So it saves lots of money by reducing significant CPU time on our servers (CPU usage is 5% of what it used to be).
Here's a common example:
template <int N>
struct fact {
    enum { value = N * fact<N-1>::value };
};

template <>
struct fact<1> {
    enum { value = 1 };
};

std::cout << "5! = " << fact<5>::value << std::endl;
You're basically using templates to calculate a factorial.
A more practical example I saw recently was an object model based on DB tables that used template classes to model foreign key relationships in the underlying tables.
Another example: in this case the metaprogramming technique is used to get an arbitrary-precision value of PI at compile time using the Gauss-Legendre algorithm.
Why should I use something like that in the real world? For example: to avoid repeating computations, to obtain smaller executables, to tune up code for maximizing performance on a specific architecture, ...
Personally I love metaprogramming because I hate repeating stuff and because I can tune up constants exploiting architecture limits.
I hope you like that.
Just my 2 cents.
/**
* FILE : MetaPI.cpp
* COMPILE : g++ -Wall -Winline -pedantic -O1 MetaPI.cpp -o MetaPI
* CHECK : g++ -Wall -Winline -pedantic -O1 -S -c MetaPI.cpp [read file MetaPI.s]
* PURPOSE : simple example template metaprogramming to compute the
* value of PI using [1,2].
*
* TESTED ON:
* - Windows XP, x86 32-bit, G++ 4.3.3
*
* REFERENCES:
* [1]: http://en.wikipedia.org/wiki/Gauss%E2%80%93Legendre_algorithm
* [2]: http://www.geocities.com/hjsmithh/Pi/Gauss_L.html
* [3]: http://ubiety.uwaterloo.ca/~tveldhui/papers/Template-Metaprograms/meta-art.html
*
* NOTE: to make assembly code more human-readable, we'll avoid using
* C++ standard includes/libraries. Instead we'll use C's ones.
*/
#include <cmath>
#include <cstdio>
template <int maxIterations>
inline static double compute(double &a, double &b, double &t, double &p)
{
double y = a;
a = (a + b) / 2;
b = sqrt(b * y);
t = t - p * ((y - a) * (y - a));
p = 2 * p;
return compute<maxIterations - 1>(a, b, t, p);
}
// template specialization: used to stop the template instantiation
// recursion and to return the final value (pi) computed by Gauss-Legendre algorithm
template <>
inline double compute<0>(double &a, double &b, double &t, double &p)
{
return ((a + b) * (a + b)) / (4 * t);
}
template <int maxIterations>
inline static double compute()
{
double a = 1;
double b = (double)1 / sqrt(2.0);
double t = (double)1 / 4;
double p = 1;
return compute<maxIterations>(a, b, t, p); // call the overloaded function
}
int main(int argc, char **argv)
{
printf("\nTEMPLATE METAPROGRAMMING EXAMPLE:\n");
printf("Compile-time PI computation based on\n");
printf("Gauss-Legendre algorithm (C++)\n\n");
printf("Pi=%.16f\n\n", compute<5>());
return 0;
}
The following example is lifted from the excellent book C++ Templates - The complete guide.
#include <iostream>
using namespace std;

template <int N> struct Pow3 {
    enum { pow = 3 * Pow3<N-1>::pow };
};

template <> struct Pow3<0> {
    enum { pow = 1 };
};

int main() {
    cout << "3 to the 7 is " << Pow3<7>::pow << "\n";
}
The point of this code is that the recursive calculation of the 7th power of 3 takes place at compile time rather than run time. It is thus extremely efficient in terms of runtime performance, at the expense of slower compilation.
Is this useful? In this example, probably not. But there are problems where performing calculations at compile time can be an advantage.
It's hard to say what C++ meta-programming is. More and more I feel it is much like introducing 'types' as variables, in the way functional programming has it. It renders declarative programming possible in C++.
It's way easier to show examples.
One of my favorites is a 'trick' (or pattern :)) to flatten multiply-nested switch/case blocks:
#include <iostream>
using namespace std;
enum CCountry { Belgium, Japan };
enum CEra { ancient, medieval, future };
// nested switch
void historic( CCountry country, CEra era ) {
switch( country ) {
case( Belgium ):
switch( era ) {
case( ancient ): cout << "Ambiorix"; break;
case( medieval ): cout << "Keizer Karel"; break;
}
break;
case( Japan ):
switch( era ) {
case( future ): cout << "another Ruby?"; break;
case( medieval ): cout << "Musashi Mijamoto"; break;
}
break;
}
}
// the flattened, metaprogramming way
// define the conversion from 'runtime arguments' to compile-time arguments (if needed...)
// or use just as is.
template< CCountry country, CEra era > void thistoric();
template<> void thistoric<Belgium, ancient> () { cout << "Ambiorix"; }
template<> void thistoric<Belgium, medieval>() { cout << "Keizer Karel"; }
template<> void thistoric<Belgium, future >() { cout << "Beer, lots of it"; }
template<> void thistoric<Japan, ancient> () { cout << "wikipedia"; }
template<> void thistoric<Japan, medieval>() { cout << "Musashi"; }
template<> void thistoric<Japan, future >() { cout << "another Ruby?"; }
// optional: conversion from runtime to compile-time
//
template< CCountry country > struct SelectCountry {
static void select( CEra era ) {
switch (era) {
case( medieval ): thistoric<country, medieval>(); break;
case( ancient ): thistoric<country, ancient >(); break;
case( future ): thistoric<country, future >(); break;
}
}
};
void Thistoric ( CCountry country, CEra era ) {
switch( country ) {
case( Belgium ): SelectCountry<Belgium>::select( era ); break;
case( Japan ): SelectCountry<Japan >::select( era ); break;
}
}
int main() {
historic( Belgium, medieval ); // plain, nested switch
thistoric<Belgium,medieval>(); // direct compile time switch
Thistoric( Belgium, medieval );// flattened nested switch
return 0;
}
The only time I needed to use Boost.MPL in my day job was when I needed to convert boost::variant to and from QVariant.
Since boost::variant has an O(1) visitation mechanism, the boost::variant to QVariant direction is near-trivial.
However, QVariant doesn't have a visitation mechanism, so in order to convert it into a boost::variant, you need to iterate over the mpl::list of types that the specific boost::variant instantiation can hold, and for each type ask the QVariant whether it contains that type, and if so, extract the value and return it in a boost::variant. It's quite fun, you should try it :)
QtMetaObject basically implements reflection, and reflection IS one of the major forms of metaprogramming, quite powerful actually. It is similar to Java's reflection, and reflection is also commonly used in dynamic languages (Python, Ruby, PHP...). It's more readable than templates, but both have their pros and cons.
This is a simple "value computation" along the lines of Factorial. However, it's one you are much more likely to actually use in your code.
The macro CT_NEXTPOWEROFTWO2(VAL) uses template metaprogramming to compute the next power of two greater than or equal to a value for values known at compile time.
template<long long int POW2VAL> class NextPow2Helper
{
enum { c_ValueMinusOneBit = (POW2VAL&(POW2VAL-1)) };
public:
enum {
c_TopBit = (c_ValueMinusOneBit) ?
NextPow2Helper<c_ValueMinusOneBit>::c_TopBit : POW2VAL,
c_Pow2ThatIsGreaterOrEqual = (c_ValueMinusOneBit) ?
(c_TopBit<<1) : c_TopBit
};
};
template<> class NextPow2Helper<1>
{ public: enum { c_TopBit = 1, c_Pow2ThatIsGreaterOrEqual = 1 }; };
template<> class NextPow2Helper<0>
{ public: enum { c_TopBit = 0, c_Pow2ThatIsGreaterOrEqual = 0 }; };
// This only works for values known at Compile Time (CT)
#define CT_NEXTPOWEROFTWO2(VAL) NextPow2Helper<VAL>::c_Pow2ThatIsGreaterOrEqual
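For instance (my own illustrative usage), the macro can size a static buffer entirely at compile time:
// Round a requested size up to the next power of two at compile time.
enum { RequestedSize = 100 };
static char buffer[CT_NEXTPOWEROFTWO2(RequestedSize)];   // buffer holds 128 bytes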