Is function overloading by reference allowed when there is no ambiguity? - c++

Consider the following code:
#include <iostream>
void foo(int m);
void foo(int &k);
int main()
{
    foo(5);     // ok, because there is no ambiguity
    int m = 5;
    //foo(m);   // compile-time error, because of ambiguity
    foo(m + 0); // ok, because it's an expression of type int and not an object's lvalue
}
void foo(int m)
{
    std::cout << "by value\n";
}
void foo(int &k)
{
    std::cout << "by reference\n";
}
I understand that this introduces ambiguity for foo(m), but is it allowed when the expression is of type int (or another type that can be converted to int)?
I have tried to find a standard reference on this, but with no luck.
Disclaimer: Note that this is not a duplicate of Function Overloading Based on Value vs. Const Reference. Const references are different, as they can bind to rvalues, as opposed to "ordinary", non-const references.

13.1 [over.load] is pretty clear (apart from a multi-page note) about which functions cannot be overloaded in the same scope.
Your case is not listed there, so you can declare those overloads; you just can't necessarily call them easily. You could call the lvalue overload like so:
void (*f)(int&) = foo;
f(m);
This avoids the ambiguity that happens when you call foo(m).
Aside: another way to write foo(m + 0) is simply foo(+m); the unary + operator converts the lvalue to an rvalue, so the foo(int) overload is called.
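Putting the two workarounds together, a minimal runnable sketch (same foo overloads as in the question) might look like this:
#include <iostream>
void foo(int)   { std::cout << "by value\n"; }
void foo(int &) { std::cout << "by reference\n"; }
int main()
{
    int m = 5;
    foo(5);                  // ok: 5 is an rvalue, only foo(int) is viable
    foo(+m);                 // ok: unary + yields an rvalue, so foo(int) is called
    void (*f)(int &) = foo;  // explicitly pick the lvalue overload
    f(m);                    // calls foo(int&)
    // foo(m);               // error: ambiguous between foo(int) and foo(int&)
}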

Yes, it is allowed.
There is no rule to prevent this overload.
[C++14: 13.1/1]: Not all function declarations can be overloaded. Those that cannot be overloaded are specified here. [..]
[C++14: 13.1/2]: (blah blah lots of exceptions not including any for this case)
It would be extremely limiting for the language to prohibit function overloads that may be ambiguous in certain scenarios with certain calls, and for no good reason I might add!

Related

Why does the compiler complain for ambiguity in overloaded function? [duplicate]


What type is int(int)& or int(int) const &?

std::is_function is specialized for types which have a signature similar to:
int(int) &
see here: std::is_function
But this is neither a pointer to a member function, whose signature could be:
int(T::*)(int) &
Nor can it be a reference to a function:
int (&)(int)
So what is this strange signature?
It's a function type which only exists in the type system. It cannot ever be created.
But this is neither a pointer to a member method, which signature could be:
int(T::*)(int) &
It's this, without the pointer. The type system allows you to describe that as a type.
#include <type_traits>
struct T { };
using A = int(int) &;
using B = A T::*;
using C = int(T::*)(int) &;
static_assert(std::is_same_v<B, C>);
T.C. mentions P0172R0, which discusses how the presence of these types causes issues for library writers and suggests several options that might reduce those issues. One of the options is getting rid of them entirely; others reduce their impact. Depending on how this goes, this answer may or may not be correct for future versions of C++.
On the documentation page you link to, you'll see this comment:
// specialization for function types that have ref-qualifiers
above the list that the examples you reference come from.
Those are functions with ref-qualifiers, which you can read more about here.
In short, they are similar to const-qualified member functions. Here's an example:
#include <iostream>
#include <utility>

struct foo
{
    void bar() &  { std::cout << "this is an lvalue instance of foo" << "\n"; }
    void bar() && { std::cout << "this is an rvalue instance of foo" << "\n"; }
};

int main(int argc, char* argv[])
{
    foo f{};
    f.bar();            // prints "this is an lvalue instance of foo"
    std::move(f).bar(); // prints "this is an rvalue instance of foo"
    return 0;
}
I can't think of a great use case for this feature, but it is possible to use.
Since the beginning of time (referring to the first C++ standard) you could declare such "strange" function types as, for example
typedef int F() const;
Despite the fact that the above declaration does not immediately involve any classes, the trailing const in this case can only serve as const-qualification of a non-static class member function. This restricts the usage of the above typedef-name to class member declarations. For example, one could use it as follows
struct S {
    F foo; // Declares an `int S::foo() const` member function
};

int S::foo() const { // Defines it
    return 42;
}

F S::*p = &S::foo; // Declares 'p' as `int (S::*)() const` pointer
Note that, however obscure, this is a "classic" C++ feature that's been in the language for a long time.
What you have in your example is effectively the same thing, but with C++11 ref-qualifier in place of const qualifier.
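To illustrate the analogy, here is a small sketch (the names S2, G and bar are made up for this example) using a ref-qualified function type the same way:
struct S2 {
    using G = int() &;  // function type with an lvalue ref-qualifier
    G bar;              // declares `int S2::bar() &`
};

int S2::bar() & { // Defines it; the ref-qualifier is repeated
    return 42;
}

int main()
{
    S2 s;
    int n = s.bar();  // ok: s is an lvalue
    // S2{}.bar();    // error: bar() requires an lvalue object
    return n;
}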

Function Matching for parameters of type const T& and T

I have a question regarding the C++ function matching for parameters of types T and const T&.
Let's say I have the following two functions:
void f(int i) {}
void f(const int &ri) {}
If I call f with an argument of type const int then this call is of course ambiguous. But why is a call of f with an argument of type int also ambiguous? Wouldn't the first version of f be an exact match and the second one a worse match, because the int argument must be converted to a const int?
const int ci = 0;
int i = 0;
f(ci); // of course ambiguous
f(i); // why also ambiguous?
I know that this kind of overloading doesn't make much sense, because calls to f are almost always ambiguous unless the parameter type T has no accessible copy constructor. But I'm just studying the rules of function matching.
Regards,
Kevin
EDIT: To make my question more clear. If I have the two functions:
void f(int *pi) {}
void f(const int *pi) {}
Then the following call is not ambiguous:
int i = 0;
f(&i); // not ambiguous, first version f(int*) chosen
Although both versions of f could be called with &i, the first version is chosen, because the second version of f would include a conversion to const. That is, the first version is a "better match".
void f(int i) {} and
void f(const int &ri) {}
This additional conversion to const seems to be ignored for some reason. Again both versions of f could be called with an int. But again, the second version of f would require a conversion to const which would make it a worse match than the first version f(int).
int i = 1;
// f(int) requires no conversion
// f(const int &) does require a const conversion
// so why are both versions treated as "equally good" matches?
// isn't this analogous to the f(int*) and f(const int*) example?
f(i); // why ambiguous this time?
One call involves an "lvalue-to-rvalue conversion", the other requires an identity conversion (for references) or a "qualification adjustment" (for pointers), and according to the Standard these are treated equally when it comes to overload resolution.
So, neither is better on the basis of differing conversions.
There is, however, a special rule in the Standard, section 13.3.3.2, that applies only if both candidates being compared take the parameter by reference.
Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if ... S1 and S2 are reference bindings (8.5.3), and the types to which the references refer are the same type except for top-level cv-qualifiers, and the type to which the reference initialized by S2 refers is more cv-qualified than the type to which the reference initialized by S1 refers.
There's an identical rule for pointers.
Therefore the compiler will prefer
f(int*);
f(int&);
over
f(const int*);
f(const int&);
respectively, but there's no preference for f(int) vs f(const int) vs f(const int&), because lvalue-to-rvalue transformation and qualification adjustment are both considered "Exact Match".
Also relevant, from section 13.3.3.1.4:
When a parameter of reference type binds directly to an argument expression, the implicit conversion sequence is the identity conversion, unless the argument expression has a type that is a derived class of the parameter type, in which case the implicit conversion sequence is a derived-to-base Conversion.
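A short sketch of those rules in action (the overload sets g and h here are illustrative, not from the question):
#include <iostream>
void g(int &)       { std::cout << "g(int&)\n"; }
void g(const int &) { std::cout << "g(const int&)\n"; }
void h(int)         { std::cout << "h(int)\n"; }
void h(const int &) { std::cout << "h(const int&)\n"; }
int main()
{
    int i = 0;
    const int ci = 0;
    g(i);      // both overloads are viable, but the reference-binding rule prefers
               // the less cv-qualified reference: prints "g(int&)"
    g(ci);     // only g(const int&) can bind a const lvalue: prints "g(const int&)"
    // h(i);   // error: ambiguous, both conversion sequences rank as Exact Match
    // h(42);  // error: ambiguous for the same reason
}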
The second call f(i) is also ambiguous because void f(const int &ri) indicates that ri is a reference to i that is treated as constant, meaning the function promises not to modify the original i that is passed to it.
The choice of whether or not to modify the passed argument is in the hands of the implementer of the function, not the client programmer who merely uses that function.
The reason the second call f(i) is ambiguous is that, to the compiler, both functions are acceptable. The const-ness of the parameter can't be used to select between these overloads, because either version could serve the same call. So in your example:
int i = 0;
f(i);
How would the compiler know which function you intended to invoke? The const qualifier is only relevant to the function definition.
See const function overloading for a more detailed explanation.

How does a surrogate call function work?

Here is source code similar to the surrogate call functions that I read about in the post "Hidden features in C++".
The only part that confuses me is those overloaded operator functions.
What kind of operators are they? They certainly don't seem like ordinary operator()'s, and why do they return a function pointer even though there is no return type specified?
Thanks!
#include <iostream>

template <typename Fcn1, typename Fcn2>
class Surrogate {
public:
    Surrogate(Fcn1 *f1, Fcn2 *f2) : f1_(f1), f2_(f2) {}

    // Overloaded operators.
    // But what does this do? What kind of operators are they?
    operator Fcn1*() { return f1_; }
    operator Fcn2*() { return f2_; }

private:
    Fcn1 *f1_;
    Fcn2 *f2_;
};

void foo (int i)
{
    std::cout << "foo: " << i << std::endl;
}

void bar (double i)
{
    std::cout << "bar: " << i << std::endl;
}

int main ()
{
    Surrogate<void(int), void(double)> callable(foo, bar);
    callable(10);   // calls foo
    callable(10.1); // calls bar
    return 0;
}
They are implicit type conversion operators to Fcn1* and Fcn2*.
In the expression "callable(10)" callable is converted by the compiler to pointer to function with int parameter, using the first one of the type conversion operators defined in Surrogate. That function is then invoked.
The call callable(10); is effectively (callable.operator Fcn1*())(10); with Fcn1 = void(int): the conversion operator is invoked and the resulting function pointer is then called.
The compiler has figured out that callable is used in a function call expression. Now, for a function call expression the compiler would like a function, function pointer, or object with operator() - as you already know.
In this case, callable is none of these. But callable can be converted to one of these, namely to a function pointer. Given the call expression, in particular the int argument, overload resolution selects void(*)(int).
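As a rough standalone sketch (the names S and F below are made up), here is the implicit surrogate call next to its explicit spelling:
#include <iostream>
void foo(int i) { std::cout << "foo: " << i << std::endl; }
using F = void(int);  // a function type
struct S {
    operator F*() { return foo; }  // conversion to pointer-to-function
};
int main()
{
    S s;
    s(10);                  // surrogate call: s converts to void(*)(int), the pointer is then called
    (s.operator F*())(10);  // the same call with the conversion spelled out explicitly
    return 0;
}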
These are just user-defined conversion operators. User-defined conversion operators are a basic feature of the C++ language, meaning that you can read about them in any C++ book or tutorial.
Section 12.3.2 of the language specification describes the syntax, but the rules that govern their usage by the compiler are scattered across the entire document and are relatively extensive. I.e. it is not something that can or should be explained in an SO post.
Find a book. Come back here if something in the book is not clear to you.

Function Overloading Based on Value vs. Const Reference

Does declaring something like the following
void foo(int x) { std::cout << "foo(int)" << std::endl; }
void foo(const int &x) { std::cout << "foo(const int &)" << std::endl; }
ever make sense? How would the caller be able to differentiate between them? I've tried
foo(9); // Compiler complains ambiguous call.
int x = 9;
foo(x); // Also ambiguous.
const int &y = x;
foo(y); // Also ambiguous.
The intent seems to be to differenciate between invocations with temporaries (i.e. 9) and 'regular' argument passing. The first case may allow the function implementation to employ optimizations since it is clear that the arguments will be disposed afterwards (which is absolutely senseless for integer literals, but may make sense for user-defined objects).
However, the current C++ language standard does not offer a way to overload specifically for the 'l/r-valueness' of arguments - any l-value being passed as argument to a function can be implicitly converted to a reference, so the ambiguity is unavoidable.
C++11 introduces a new tool for a similar purpose: using r-value references, you can overload as follows
void foo(const int &x) { ... }
void foo(int &&x) { ... }
... and foo(4) (a temporary, an r-value passed as argument) would cause the compiler to pick the second overload, while int i = 2; foo(i) would pick the first.
(note: even with these C++11 overloads, it is not possible to differentiate between the cases 2 and 3 in your sample!)
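A minimal runnable sketch of that C++11 pattern (reusing the foo name for illustration):
#include <iostream>
void foo(const int &x) { std::cout << "lvalue (or const): " << x << '\n'; }
void foo(int &&x)      { std::cout << "rvalue: " << x << '\n'; }
int main()
{
    int i = 2;
    const int c = 3;
    foo(4);  // temporary: picks foo(int&&)
    foo(i);  // lvalue: picks foo(const int&)
    foo(c);  // const lvalue: also foo(const int&), so cases 2 and 3 still collapse
}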
You could do this with a template:
template<typename T> void foo(T x) { ... }
Then you can call this template by value or by reference:
int x = 123;
foo<int>(x); // by value
foo<int const&>(x); // by reference
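A self-contained sketch of that technique (the std::is_reference check is only there to make the difference visible):
#include <iostream>
#include <type_traits>
template<typename T> void foo(T x)
{
    // T is exactly what the caller specified, so we can inspect it
    std::cout << x << (std::is_reference_v<T> ? " by reference\n" : " by value\n");
}
int main()
{
    int x = 123;
    foo<int>(x);         // T = int: the parameter is a copy
    foo<int const&>(x);  // T = int const&: the parameter refers to x
}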
How would the caller be able to differentiate between them?
It cannot be differentiated in this case. Both overloads take the same primitive type as the argument, and taking it by const reference does not make the call a better or worse match than taking it by value.
You can use static_cast to explicitly select the overload to be called:
#include <iostream>
void foo(int x) { std::cout << "foo(int)" << std::endl; }
void foo(const int &x) { std::cout << "foo(const int &)" << std::endl; }
int main()
{
    int x = 0;
    auto f1 = static_cast< void(*)(int) >(foo);
    f1(x);
    auto f2 = static_cast< void(*)(const int&) >(foo);
    f2(x);
}
However, you should ask yourself why you provided those two overloads in the first place. Either you are fine with making a copy or you are not. Both at the same time? Why? Also, making it necessary for the caller to explicitly select the overload defeats the purpose of function overloading. If you really want that, consider supplying two differently named functions instead:
void foo_copying(int x) { std::cout << "foo(int)" << std::endl; }
void foo_non_copying(const int &x) { std::cout << "foo(const int &)" << std::endl; }
Not in C++. Functional languages such as Erlang and Haskell get closer by allowing you to specify function overloads based on parameter value, but most imperative languages, including C++, require overloading based on the function signature; that is, the number and type of each parameter (in C++ the return type is not part of the signature for overloading purposes).
The const keyword in the signature defines not the type of the parameter, but its mutability within the function; a "const" parameter will generate a compiler error if modified by the function or passed by reference to any function that doesn't also use const.
The compiler can't.
Both definitions of foo can be used for all 'variants' of int.
In the first foo, a copy of the int is made. Copying an int is always possible.
In the second foo, a reference to a const int is passed. Since any int can be used as a const int, a reference to const can bind to it as well.
Since both variants are valid in all cases, the compiler can't choose.
Things become different if you e.g. use the following definition:
void foo (int &x);
Now calling it with foo(9) will take the first alternative, since you can't pass 9 as a non-const int reference.
Another example: if you replace int with a class whose copy constructor is private, then the caller can't make a copy of the value, and the first foo variant will not be used.
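A short sketch of that last point (same names as above, but with a non-const reference in place of the const reference):
#include <iostream>
void foo(int)   { std::cout << "foo(int)\n"; }
void foo(int &) { std::cout << "foo(int&)\n"; }
int main()
{
    foo(9);     // ok: 9 cannot bind to int&, so foo(int) is chosen
    int i = 9;
    // foo(i);  // still ambiguous for an lvalue, just as in the question at the top
}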