Function References - C++

So I was just working with function pointers and I remembered that you could do this:
void Foo()
{
}

int main()
{
    void (&func)() = Foo;
    func(); // calls ::Foo()
}
The obvious advantage is that a reference must refer to a valid object (unless it is misused) - or a valid function, in this case.
The obvious disadvantages are that you can't store an array of references and can't use them in place of member function pointers (at least as far as I can tell).
My question: does anyone use them (i.e., function references, not function pointers), and if so, in what scenarios have you found them useful/helpful?
The only place I can see them being useful off the bat is binding a reference to a certain function when working with conditional compilation.
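To make that concrete, here is a minimal sketch of the conditional-compilation idea (the LogDebug/LogRelease names are hypothetical): once bound, the reference is a fixed alias for whichever implementation was compiled in.
#include <cstdio>

void LogDebug(const char* msg) { std::printf("[debug] %s\n", msg); }
void LogRelease(const char* msg) { std::printf("%s\n", msg); }

// Bind one name to whichever implementation this build uses.
#ifdef NDEBUG
void (&Log)(const char*) = LogRelease;
#else
void (&Log)(const char*) = LogDebug;
#endif

int main()
{
    Log("hello"); // dispatches to the compiled-in implementation
}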

I've used them before to add customization to classes, by passing them to the constructor in a way similar to the strategy pattern.
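A minimal sketch of that idea (the class and function names are hypothetical): the strategy is a function reference stored as a member, so it must be bound at construction and can never be null. Note that, as discussed below, such objects are consequently non-assignable.
class Sorter
{
public:
    explicit Sorter(bool (&less)(int, int)) : less_(less) {}
    bool compare(int a, int b) const { return less_(a, b); }
private:
    bool (&less_)(int, int); // reference member: bound once, never null
};

bool ascending(int a, int b) { return a < b; }

// Usage: Sorter s(ascending); s.compare(1, 2) returns true.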

Function references, unlike function pointers, are harder to create from an invalid source. This is useful if you are writing a wrapper around a C library: the C++ code can take a callback function by reference and pass the resulting pointer to the C library if the library requires that the passed pointer not be NULL.
It is also a convenient way to alias a function, especially in C++11 with the new auto keyword:
#include <iostream>
#include <typeinfo>

void f(int i, char c)
{
    std::cout << i << ' ' << c << std::endl;
}

int main()
{
    std::cout << typeid(f).name() << std::endl; // FvicE
    f(0, '1');

    void (*pf)(int, char) (&f); // ugly
    std::cout << typeid(pf).name() << std::endl; // PFvicE
    (*pf)(2, '3');
    pf(4, '5'); // works, but I don't recommend it

    void (&rf)(int, char) (f); // still ugly
    std::cout << typeid(rf).name() << std::endl; // FvicE
    rf(6, '7');

    auto &af (f); // pretty, but only works in C++11
    std::cout << typeid(af).name() << std::endl; // FvicE, same as above
    af(8, '9');
}

I think your example usage is quite good: if you used an ordinary function pointer and applied the address-of operator to it, you would get the address of the pointer variable. Using a reference to a function does the expected thing, in that address-of returns a pointer to the function itself.
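A small sketch of that difference (g is a hypothetical function):
void g() {}

void (*pf)() = &g;
void (&rf)() = g;

// &pf has type void (**)() - the address of the pointer variable pf.
// &rf has type void (*)()  - the address of the function g itself.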
I also can't think of many examples. Keeping function references around, as you point out, has some ugly consequences. Another possibly unwanted consequence: if kept as a class member, a function reference makes your objects non-assignable unless you write your own operator= that refrains from trying to reassign the reference.
I think most uses of function references are implicit, much like most uses of array-references - although much more so, when you accept arguments by-reference:
template<typename T>
void do_something(T const& t) { ... }
While accepting arrays by reference has the advantage of not losing their size information, accepting functions by reference explicitly doesn't seem to have an advantage (at least as far as I can see). I suppose the existence of function references is largely justified by the idealistic view of a reference as an alias for some object or function, together with the fact that it allows passing functions to templates that accept their argument by reference.
I would probably avoid using them unless I really needed them. Constant function pointers also provide non-reassignable callables, and will probably avoid confusion when other programmers, who may not be familiar with this niche of the language, read your code. It is worth noting that Vandevoorde & Josuttis also recommend avoiding them to reduce confusion (in their book C++ Templates: The Complete Guide).

In addition to the use as a strategy (as pointed out by Robert Gould), I frequently use them at the entry point to (template) metaprogramming. A function reference can easily be picked up by a template parameter; from that point on it can be passed through several layers of (metaprogramming) templates. Of course, this holds true for a function pointer as well, but the reference is an alias and thus communicates the intention more clearly.
To give an example: when writing a generic command dispatching system for an application, a lot of different operations need to be announced as commands. We can use a simple "builder function" as a front-end for the client code. Behind the scenes, this builder function picks up the actual function signature as a template parameter, derives (by template metaprogramming) the actual parameter and return types, and possibly picks a suitable specialisation to store a "memento" and an "undo functor". These functors can then be stored either as function pointers internally, or using boost, tr1, or C++11 function objects. This way, it is possible to build a type-safe command invocation and "undo" system.
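A stripped-down sketch of that entry point (makeCommand and twice are hypothetical names; a real system would do far more with Ret and Arg than shown here):
#include <functional>
#include <iostream>

// The template picks up the signature of the function passed by reference.
template <typename Ret, typename Arg>
std::function<Ret(Arg)> makeCommand(Ret (&fn)(Arg))
{
    // Ret and Arg are known at compile time here; a real system could
    // use them to pick specialisations, build mementos, undo functors...
    return std::function<Ret(Arg)>(fn);
}

int twice(int x) { return 2 * x; }

int main()
{
    auto cmd = makeCommand(twice);     // Ret = int, Arg = int deduced
    std::cout << cmd(21) << std::endl; // prints 42
}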

I've used them in a plug-in system where plug-in DLLs could be loaded/unloaded at run-time. I would look for known symbols in each DLL and cast them to function pointers.
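On POSIX, that pattern looks roughly like this (the library path and the symbol name plugin_init are hypothetical; link with -ldl on most systems):
#include <dlfcn.h>
#include <cstdio>

int main()
{
    void* lib = dlopen("./plugin.so", RTLD_NOW);
    if (!lib) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

    // dlsym returns void*; cast it to the known function-pointer type.
    auto init = reinterpret_cast<void (*)()>(dlsym(lib, "plugin_init"));
    if (init) init();

    dlclose(lib);
}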

const and non-const versions of *static* member functions

I have two versions of the same static member function: one takes a pointer-to-const parameter and the other takes a pointer-to-non-const parameter. I want to avoid code duplication.
After reading some Stack Overflow questions (these were all about non-static member functions, though) I came up with this:
class C {
private:
    static const type* func(const type* x) {
        // long code
    }
    static type* func(type* x) {
        return const_cast<type*>(func(static_cast<const type*>(x)));
    }
public:
    // some code that uses these functions
};
(I know juggling with pointers is generally a bad idea, but I'm implementing a data structure.)
I found some code in libstdc++ that looks like this:
NOTE: these are not member functions
static type* local_func(type* x)
{
    // long code
}

type* func(type* x)
{
    return local_func(x);
}

const type* func(const type* x)
{
    return local_func(const_cast<type*>(x));
}
In the first approach the code is in a function that takes a pointer-to-const parameter.
In the second approach the code is in a function that takes a pointer-to-non-const parameter.
Which approach should generally be used? Are both correct?
The most important rule is that an interface function (a public method, a free function other than one in a detail namespace, etc.) should not cast away the constness of its input. Scott Meyers was one of the first to talk about preventing duplication using const_cast; here's a typical example (How do I remove code duplication between similar const and non-const member functions?):
struct C {
    const char & get() const {
        return c;
    }
    char & get() {
        return const_cast<char &>(static_cast<const C &>(*this).get());
    }
    char c;
};
This refers to instance methods rather than static/free functions, but the principle is the same. Notice that the non-const version adds const in order to call the other method (for an instance method, the this pointer is the input). It then casts away constness at the end; this is safe because it knows the original input was not const.
Implementing this the other way around would be extremely dangerous. If you cast away the constness of a function parameter you receive, you are risking undefined behavior (UB) if the object passed to you is actually const. Namely, if you call any methods that actually mutate the object (which is very easy to do by accident now that you've cast away constness), you can easily get UB:
C++ standard, section §5.2.11/7 [expr.const.cast]:
[ Note: Depending on the type of the object, a write operation through the pointer, lvalue or pointer to data member resulting from a const_cast that casts away a const-qualifier may produce undefined behavior. — end note ]
It's not as bad in private methods/implementation functions, because perhaps you carefully control how and when they're called, but why do it this way? It's more dangerous, with no benefit.
Conceptually, it's often the case that when you have a const and non-const version of the same function, you are just passing along internal references of the object (vector::operator[] is a canonical example), and not actually mutating anything, which means that it will be safe either way you write it. But it's still more dangerous to cast away the constness of the input; although you might be unlikely to mess it up yourself, imagine a team setting where you write it the wrong way around and it works fine, and then someone changes the implementation to mutate something, giving you UB.
In summary, in many cases it may not make a practical difference, but there is a correct way to do it that's strictly better than the alternative: add constness to the input, and remove constness from the output.
I have actually only ever seen your first version before, so from my experience it is the more common idiom.
The first version seems correct to me, while the second version can result in undefined behavior if (A) you pass an actual const object to the function and (B) the long code writes to that object. Given that in the first case the compiler will tell you if you're trying to write to the object, I would never recommend option 2 as it is. You could, however, consider a standalone function that takes and returns const.
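Applied to the question's code, a complete sketch of the safe direction might look like this (Node and find are hypothetical stand-ins for type and func):
struct Node { int value; Node* next; };

class C {
public:
    // The "long code" lives in the const version...
    static const Node* find(const Node* x, int v) {
        while (x && x->value != v) x = x->next;
        return x;
    }
    // ...and the non-const version adds const to the input and removes
    // it from the output, which is safe because the input is known to
    // be non-const here.
    static Node* find(Node* x, int v) {
        return const_cast<Node*>(find(static_cast<const Node*>(x), v));
    }
};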

When a C++ lambda expression has a lot of captures by reference, the size of the unnamed function object becomes large

The following code:
#include <iostream>
using namespace std;

int main() {
    int a, b, c, d, e, f, g;
    auto func = [&](){ cout << a << b << c << d << e << f << g << endl; };
    cout << sizeof(func) << endl;
    return 0;
}
outputs 56 when compiled with g++ 4.8.2.
Since all the local variables are stored in the same stack frame, remembering one pointer would be sufficient to locate the addresses of all of them. Why does the lambda expression construct such a big unnamed function object?
I do not understand why you seem surprised.
The C++ Standard gives a set of requirements, and every single implementation is free to pick any strategy that meets the requirements.
Why would an implementation optimize the size of the lambda object?
Specifically, do you realize how that would tie the generated code of this lambda to the generated code of the surrounding function?
It's easy to say "Hey! This could be optimized!", but it's much more difficult to actually optimize and to make sure it works in all edge cases. So, personally, I much prefer a simple and working implementation over a botched attempt at optimizing it...
... especially when the work-around is so easy:
#include <iostream>

struct S { int a, b, c, d, e, f, g; };

int main() {
    S s = {};
    auto func = [&](){
        std::cout << s.a << s.b << s.c << s.d << s.e << s.f << s.g << "\n";
    };
    std::cout << sizeof(func) << "\n";
    return 0;
}
Look Ma: a single pointer only (4 bytes on a 32-bit target, 8 on the 64-bit setup that printed 56)!
It is legal for a compiler to capture by reference via stack pointer. There is a slight downside (in that offsets have to be added to said stack pointer).
Under the current C++ standard with defects included, you also have to capture reference variables by pseudo-pointer, as the lifetime of the binding must last as long as the referred-to-data, not the reference it directly binds to.
The simpler implementation, where each captured variable corresponds to a constructor argument and a class member variable, has the serious advantage that it lines up with "more normal" C++ code. Some work is needed for the magic this capture, but other than that the lambda closure is a bog-standard object instance with an inline operator(). Optimization strategies for "more normal" C++ code will work, bugs will mostly be in common with "more normal" code, etc.
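For the question's lambda, that simpler implementation corresponds roughly to the following hand-written sketch (hypothetical, since the real closure type is unnamed and compiler-generated); seven reference members of pointer size account for the observed 56 bytes:
#include <iostream>

// Rough equivalent of the generated closure: one member per capture.
struct Closure {
    int &a, &b, &c, &d, &e, &f, &g; // seven pointer-sized members
    void operator()() const {
        std::cout << a << b << c << d << e << f << g << std::endl;
    }
};
// sizeof(Closure) == 7 * sizeof(int*) == 56 on a typical 64-bit ABI.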
Had the compiler writers gone with the stack-frame implementation, reference capture of references in that implementation would probably have failed to work the way it did in every other compiler. When the defect was resolved (in favor of it working), the code would have had to be changed again. In essence, the compilers that used the simpler implementation would almost certainly have had fewer bugs and more working code than those that used a fancy implementation.
With the stack-frame capture, all optimization for a lambda would have to be customized for that lambda. It would be equivalent to a class that captured a void*, does pointer arithmetic on it, and casts the resulting data to typed pointers. That is going to be extremely hard to optimize, as pointer arithmetic tends to block optimization, especially pointer arithmetic between stack variables (which is usually undefined). What is worse is that such pointer arithmetic means that the optimization of stack variable state (eliminating variables, overlapping lifetime, registers) now has to interact with the optimization of lambdas in entangled ways.
Working on such an optimization would be a good thing. As a bonus, because lambda types are tied to compilation units, messing with the implementation of a lambda will not break binary compatibility between compilation units. So you can do such changes relatively safely, once they are a proven stable improvement. However, if you do implement that optimization, you really really will want the ability to revert to the simpler proven one.
I encourage you to provide patches to your favorite open-source compiler to add this functionality.
Because that's how it's implemented. I don't know if the standard says anything about how it should be implemented, but I guess it's implementation-defined how big a lambda object will be in that situation.
There would be nothing wrong with a compiler storing a single pointer and using offsets, as you suggest, as an optimization. Perhaps some compilers do that, I don't know.

Top-level const doesn't influence a function signature

The C++ Primer, 5th Edition, says:
int f(int) { /* can write to parameter */ }
int f(const int) { /* cannot write to parameter */ }
The two functions are indistinguishable. But as you know, the two functions really differ in how they can update their parameters.
Can someone explain this to me?
EDIT
I think I didn't interpret my question well. What I really care about is why C++ doesn't allow these two functions simultaneously as different functions, since they really differ as to "whether the parameter can be written or not". Intuitively, it should!
EDIT
The nature of pass-by-value is actually copying argument values to parameter values, even for references and pointers, where the copied values are addresses. From the caller's viewpoint, whether const or non-const is passed to the function does not influence the values (and of course the types) copied to the parameters.
The distinction between top-level const and low-level const matters when copying objects. More specifically, top-level const (unlike low-level const) is ignored when copying, since copying won't influence the copied-from object: it is immaterial whether the object copied to or copied from is const.
So for the caller, differentiating them is not necessary. Likewise, from the function's viewpoint, a top-level const parameter doesn't influence the interface and/or the functionality of the function. The two functions actually accomplish the same thing. Why bother implementing two copies?
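A small illustration of that copying rule (variable names are arbitrary):
int main()
{
    const int ci = 42;
    int i = ci;        // OK: top-level const is ignored when copying

    int* p = &i;
    const int* cp = p; // OK: adding low-level const is fine
    // int* p2 = cp;   // error: would discard low-level const
    (void)cp;
}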
What I really care about is why C++ doesn't allow these two functions simultaneously as different functions, since they really differ as to "whether the parameter can be written or not". Intuitively, it should!
Overloading of functions is based on the parameters the caller provides. Here, it's true that the caller may provide a const or non-const value but logically it should make no difference to the functionality that the called function provides. Consider:
f(3);
int x = 1 + 2;
f(x);
If f() did something different in each of these situations, it would be very confusing! The programmer calling f() can have a reasonable expectation of identical behaviour, freely adding or removing the variables that pass parameters without invalidating the program. This safe, sane behaviour is the point of departure that you'd want to justify exceptions to, and indeed there is one: behaviour can be varied when the function is overloaded on references, à la:
void f(const int&) { ... }
void f(int&) { ... }
So, I guess this is what you find non-intuitive: that C++ provides more "safety" (enforced consistent behaviour through supporting only a single implementation) for non-references than references.
The reasons I can think of are:
When a programmer knows a non-const& parameter will have a longer lifetime, they can select an optimal implementation. For example, in the code below it may be faster to return a reference to a T member within F, but if the F argument is a temporary (which it might be if the compiler matches const F&) then a by-value return is needed. This is still pretty dangerous, as the caller has to be aware that the returned reference is only valid as long as the parameter's around.
T f(const F&);
T& f(F&); // return type could be by const& if more appropriate
propagation of qualifiers like const-ness through function calls as in:
const T& f(const F&);
T& f(F&);
Here, some (presumably F member-) variable of type T is being exposed as const or non-const based on the const-ness of the parameter when f() is called. This type of interface might be chosen when wishing to extend a class with non-member functions (to keep the class minimalist, or when writing templates/algos usable on many classes), but the idea is similar to const member functions like vector::operator[](), where you want v[0] = 3 allowed on a non-const vector but not a const one.
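A concrete sketch of that propagation, with a hypothetical F whose T member is an int:
struct F { int value; };

const int& f(const F& x) { return x.value; } // const in, const out
int&       f(F& x)       { return x.value; } // non-const in and out

int main()
{
    F obj = {1};
    f(obj) = 2;     // OK: the non-const overload is selected

    const F cobj = {3};
    // f(cobj) = 4; // error: the const overload returns const int&
}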
When values are accepted by value they go out of scope as the function returns, so there's no valid scenario involving returning a reference to part of the parameter and wanting to propagate its qualifiers.
Hacking the behaviour you want
Given the rules for references, you can use them to get the kind of behaviour you want - you just need to be careful not to modify the by-non-const-reference parameter accidentally, so you might want to adopt a practice like the following for the non-const parameters:
T f(F& x_ref)
{
    F x = x_ref; // or const F if you won't modify it
    ...use x for safety...
}
Recompilation implications
Quite apart from the question of why the language forbids overloading based on the const-ness of a by-value parameter, there's the question of why it doesn't insist on consistency of const-ness in the declaration and definition.
For f(const int) / f(int): if you are declaring a function in a header file, it's best NOT to include the const qualifier, even if the later definition in an implementation file will have it. This is because during maintenance the programmer may wish to remove the qualifier; removing it from the header may trigger a pointless recompilation of client code, so it's better not to insist they be kept in sync - and indeed that's why the compiler doesn't produce an error if they differ. If you just add or remove const in the function definition, then it's close to the implementation, where a reader of the code might care about the constness when analysing the function's behaviour. If you have it const in both header and implementation file, and the programmer then wishes to make it non-const but forgets or decides not to update the header in order to avoid client recompilation, that is more dangerous than the other way around, as it's possible the programmer will have the const version from the header in mind when trying to analyse the current implementation code, leading to wrong reasoning about the function's behaviour. This is all a very subtle maintenance issue - only really relevant to commercial programming - but that's the basis of the guideline not to use const in the interface. Further, it's more concise to omit it from the interface, which is nicer for client programmers reading over your API.
Since there is no difference to the caller, and no clear way to distinguish between a call to a function with a top level const parameter and one without, the language rules ignore top level consts. This means that these two
void foo(const int);
void foo(int);
are treated as the same declaration. If you were to provide two implementations, you would get a multiple definition error.
There is a difference in a function definition with top level const. In one, you can modify your copy of the parameter. In the other, you can't. You can see it as an implementation detail. To the caller, there is no difference.
// declarations
void foo(int);
void bar(int);

// definitions
void foo(int n)
{
    n++;
    std::cout << n << std::endl;
}

void bar(const int n)
{
    n++; // ERROR!
    std::cout << n << std::endl;
}
This is analogous to the following:
void foo()
{
    int n = 42;
    n++;
    std::cout << n << std::endl;
}

void bar()
{
    const int n = 42;
    n++; // ERROR!
    std::cout << n << std::endl;
}
In "The C++ Programming Language", fourth edition, Bjarne Stroustrup writes (§12.1.3):
Unfortunately, to preserve C compatibility, a const is ignored at the highest level of an argument type. For example, this is two declarations of the same function:
void f(int);
void f(const int);
So it seems that, contrary to some of the other answers, this rule of C++ was not chosen because of the indistinguishability of the two functions, or other similar rationales, but rather as a less-than-optimal solution, for the sake of compatibility.
Indeed, in the D programming language, it is possible to have those two overloads. Yet, contrary to what other answers to this question might suggest, the non-const overload is preferred if the function is called with a literal:
void f(int);
void f(const int);
f(42); // calls void f(int);
Of course, you should provide equivalent semantics for your overloads, but that is not specific to this overloading scenario with nearly indistinguishable overloaded functions.
As the comments say, inside the first function the parameter could be changed, if it had been named. It is a copy of the caller's int. Inside the second function, any changes to the parameter, which is still a copy of the caller's int, will result in a compile error. const is a promise that you won't change the variable.
A function is useful only from the caller's perspective.
Since there is no difference to the caller, there is no difference between these two functions.
I think "indistinguishable" is meant in terms of overloading and the compiler, not in terms of whether the caller can distinguish the functions.
The compiler does not distinguish between those two functions; their names are mangled in the same way. That leads to a situation where the compiler treats the two declarations as a redefinition.
Answering this part of your question:
What I really care about is why C++ doesn't allow these two functions simultaneously as different functions, since they really differ as to "whether the parameter can be written or not". Intuitively, it should!
If you think about it a little more, it isn't at all intuitive - in fact, it doesn't make much sense. As everybody else has said, a caller is in no way influenced when a function takes its parameter by value, and it doesn't care, either.
Now, let's suppose for a moment that overload resolution worked on top level const, too. Two declarations like this
int foo(const int);
int foo(int);
would declare two different functions. One of the problems would be which function this expression calls: foo(42). The language rules could say that literals are const and that the const "overload" would be called in this case. But that's the least of the problems.
A programmer feeling sufficiently evil could write this:
int foo(const int i) { return i*i; }
int foo(int i) { return i*2; }
Now you'd have two overloads that appear semantically equivalent to the caller but do completely different things. That would be bad. We'd be able to write interfaces that limit the user by the way they do things, not by what they offer.

C++ Function passed as Template Argument vs Parameter

In C++, there are two ways of passing a function into another function that seem equivalent.
#include <iostream>

int add1(int i){ return i+1; }
int add2(int i){ return i+2; }

template <int (*T)(int)>
void doTemplate(int i){
    std::cout << "Do Template: " << T(i) << "\n";
}

void doParam(int i, int (*f)(int)){
    std::cout << "Do Param: " << f(i) << "\n";
}

int main(){
    doTemplate<add1>(0);
    doTemplate<add2>(0);
    doParam(0, add1);
    doParam(0, add2);
}
doTemplate accepts a function as a template argument, whereas doParam accepts it as a function pointer, and they both seem to give the same result.
What are the trade-offs between using each method?
The template-based version allows the compiler to inline the call, because the address of the function is known at compile-time. Obviously, the disadvantage is that the address of the function has to be known at compile-time (since you are using it as a template argument), and sometimes this may not be possible.
That brings us to the second case, where the function pointer may be determined only at run-time, thus making it impossible for the compiler to perform the inlining, but giving you the flexibility of determining at run-time the function to be called:
bool runtimeBooleanExpr = /* ... */;
doParam(0, runtimeBooleanExpr ? add1 : add2);
Notice, however, that there is a third way:
template<typename F>
void doParam(int i, F f){
    std::cout << "Do Param: " << f(i) << "\n";
}
Which gives you more flexibility and still has the advantage of knowing at compile-time what function is going to be called:
doParam(0, add1);
doParam(0, add2);
And it also allows passing any callable object instead of a function pointer:
doParam(0, my_functor());
int fortyTwo = 42;
doParam(0, [=] (int i) { return i + fortyTwo; /* or whatever... */ });
For completeness, there is also a fourth way, using std::function:
void doParam(int x, std::function<int(int)> f);
Which has the same level of generality (in that you can pass any callable object), but also allows you to determine the callable object at run-time - most likely with a performance penalty, since (once again) inlining becomes impossible for the compiler.
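A standalone sketch of that fourth way, reusing add1 from above (note the <functional> header):
#include <functional>
#include <iostream>

int add1(int i){ return i+1; }

void doParam(int x, std::function<int(int)> f){
    std::cout << "Do Param: " << f(x) << "\n";
}

int main(){
    std::function<int(int)> f = add1; // from a function pointer
    doParam(0, f);
    f = [](int i){ return i * 3; };   // reassigned at run time
    doParam(0, f);
}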
For a further discussion of the last two options, also see this Q&A on StackOverflow.
Template parameters
have to be known at compile time.
lead to one function instantiation for every distinct value of the parameter (so-called template bloat).
the called function is transparent to the compiler (enabling inlining, though that can lead to even more bloat - a double-edged sword).
the calling function can be overloaded for particular values of the parameter, without modifying existing code.
Function pointers
are passed at run time.
lead to only one calling function (smaller object code).
the called function is typically opaque to the compiler (no inlining).
the calling function needs a runtime if/switch to do special things for special values of the parameter, which is brittle.
When to use which version: if you need speed and a lot of customization, use templates. If you need flexibility at runtime, but not in the implementation, use function pointers.
As #AndyProwl points out: if you have a C++11 compiler, function pointers generalize to callable objects such as std::function objects and lambda expressions. This opens up a whole new can of worms (in a good sense).