When I compile this program with either gcc-4.6.3 or gcc-4.7.2 the compiler gives me an error about the overloaded call being ambiguous:
#include <iostream>
#include <functional>
class Scott
{
public:
void func(const bool b = true)
{
std::cout << "Called func() with a boolean arg" << std::endl;
}
void func(std::function<void(void)> f)
#ifdef WITH_CONST
const
#endif
{
std::cout << "Called func() with a std::function arg" << std::endl;
}
};
int main (int argc, char *argv[])
{
Scott s;
s.func([] (void) { });
}
However, if I make the overloaded function const, it compiles fine & calls the method I did not expect!
devaus120>> g++ -Wall -std=c++11 -DWITH_CONST wtf.cxx
devaus120>> ./a.out
Called func() with a boolean arg
So, I have 2 questions:
Is it a compiler bug that this compiles when the overloaded method is made const?
How can I ensure the correct overloaded function gets invoked? (Need to cast the argument somehow?)
TIA.
Scott. :)
Actually gcc is correct! That's because a lambda is not a function but a closure object of class type! Really! You can even inherit from it :) ... even multiple times, from different lambdas...
So, according to 8.5/16:
[...]
— If the destination type is a (possibly cv-qualified) class type:
[...]
— Otherwise, if the source type is a (possibly cv-qualified) class type, conversion functions are considered. The applicable conversion functions are enumerated (13.3.1.5), and the best one is chosen through overload resolution (13.3). The user-defined conversion so selected is called to convert the initializer expression into the object being initialized. If the conversion cannot be done or is ambiguous, the initialization is ill-formed.
and 13.3.1.5:
Under the conditions specified in 8.5, as part of an initialization of an object of nonclass type, a conversion function can be invoked to convert an initializer expression of class type to the type of the object being initialized. Overload resolution is used to select the conversion function to be invoked. Assuming that “cv1 T” is the type of the object being initialized, and “cv S” is the type of the initializer expression, with S a class type, the candidate functions are selected as follows:
-- The conversion functions of S and its base classes are considered. Those non-explicit conversion functions that are not hidden within S and yield type T or a type that can be converted to type T via a standard conversion sequence (13.3.3.1.1) are candidate functions. For direct-initialization, those explicit conversion functions that are not hidden within S and yield type T or a type that can be converted to type T with a qualification conversion (4.4) are also candidate functions. Conversion functions that return a cv-qualified type are considered to yield the cv-unqualified version of that type for this process of selecting candidate functions. Conversion functions that return “reference to cv2 X” return lvalues or xvalues, depending on the type of reference, of type “cv2 X” and are therefore considered to yield X for this process of selecting candidate functions.
so finally, the result of the conversion function is a function pointer, which is then implicitly converted to bool...
You can check this series of conversions with the following simple code:
#include <iostream>
#include <iomanip>
int main()
{
std::cout << std::boolalpha << []{ return 0; } << '\n';
}
The output will be true...
Here are a few ways to work around this... you definitely need something, because both functions are viable after overload resolution. Btw, adding const to the signature of the second one just excludes it, because you've got a mutable instance of Scott; and again, you'll get a compile error if you declare the instance with the const modifier.
So, you can do:
an explicit cast (as mentioned in the comments)... yeah, long to type...
declare the second func with a template parameter Func. Depending on what you're going to do next, there are a few options: it can be converted to std::function on assignment (if you want to store it in a member), or, in the case of an immediate call, you'll even get some optimization (by eliminating the conversion to std::function)
a more complex way is to declare both functions with a template parameter and use std::enable_if to turn one of them OFF, depending on std::is_same<bool, T> for example (or on a check for a callable/function type)
use tag dispatching (yeah, again with templated functions)
... I guess it's enough :)
Related
The piece of code below dereferences a nullptr.
struct Foo {
int *bar;
operator int&() {
return *bar;
}
explicit operator bool() const {
return bar;
}
};
int main() {
Foo f {nullptr};
auto _ = bool(f);
}
Why does bool(f) call bool(f.operator int&()) and not f.operator bool() as was intended?
Is there a way to make bool(f) call f.operator bool() as intended without marking operator int& as explicit?
Argument conversion ranking takes precedence over return type conversion ranking in overload resolution for user-defined conversions
All standard references below refers to N4659: March 2017 post-Kona working draft/C++17 DIS.
Preparations
To avoid having to deal with segfaults, auto type deduction, and consideration of explicit-marked conversion functions, consider the following simplified variation of your example, which shows the same behaviour:
#include <iostream>
struct Foo {
int bar{42};
operator int&() {
std::cout << __PRETTY_FUNCTION__;
return bar;
}
operator bool() const {
std::cout << __PRETTY_FUNCTION__;
return true;
}
};
int main() {
Foo f {};
// (A):
bool a = f; // Foo::operator int&()
}
TLDR
Why does bool(f) call bool(f.operator int&()) and not f.operator bool() as was intended?
The viable candidate functions to the initialization conversion sequence are
operator int&(Foo&)
operator bool(const Foo&)
and [over.match.best]/1.3, ranking on function arguments, takes precedence over [over.match.best]/1.4, ranking on conversion from return type, meaning operator int&(Foo&) will be chosen as it is a non-ambiguous perfect match, for an argument of type Foo&, by the [over.match.best]/1.3 rule, whereas operator bool(const Foo&) is not.
In this regard, as we are relying on [over.match.best]/1.3, it is no different than simply overloading solely on const-qualification:
#include <iostream>
struct Foo {};
void f(Foo&) { std::cout << __PRETTY_FUNCTION__ << "\n"; }
void f(const Foo&) { std::cout << __PRETTY_FUNCTION__ << "\n"; }
int main() {
Foo f1 {};
const Foo f2{};
f(f1); // void f(Foo&)
f(f2); // void f(const Foo&)
}
Is there a way to make bool(f) call f.operator bool() as intended without marking operator int& as explicit?
As per above, if you provide matching cv-qualifiers for the member function overloads, there can no longer be any differentiation on the implicit object argument as per [over.match.best]/1.3, and the bool conversion function will be chosen as most viable as per [over.match.best]/1.4. Note that by marking the int& conversion functions as explicit, it will no longer be a viable candidate, and the choice for the bool conversion function will not be due to it being the most viable overload, but by it being the only viable overload.
Details
The expression at (A) is initialization, with semantics governed specifically by [dcl.init]/17.7 [extract, emphasis mine]:
[dcl.init]/17 The semantics of initializers are as follows. The destination
type is the type of the object or reference being initialized and the
source type is the type of the initializer expression.
[...]
/17.7 Otherwise, if the source type is a (possibly cv-qualified) class type, conversion functions are considered. The applicable
conversion functions are enumerated ([over.match.conv]), and the best
one is chosen through overload resolution. The user-defined conversion
so selected is called to convert the initializer expression into the
object being initialized. [...]
where [over.match.conv]/1 describes which conversion functions that are considered candidate functions in overload resolution:
[over.match.conv]/1 Under the conditions specified in [dcl.init], as part of an initialization of an object of non-class type, a conversion function can be invoked to convert an initializer expression of class type to the type of the object being initialized. Overload resolution is used to select the conversion function to be invoked. Assuming that “cv1 T” is the type of the object being initialized, and “cv S” is the type of the initializer expression, with S a class type, the candidate functions are selected as follows:
/1.1 The conversion functions of S and its base classes are considered. Those non-explicit conversion functions that are not
hidden within S and yield type T or a type that can be converted
to type T via a standard conversion sequence are candidate
functions. [...] Conversion functions that return “reference to cv2
X” return lvalues or xvalues, depending on the type of reference, of
type “cv2 X” and are therefore considered to yield X for this
process of selecting candidate functions.
In this example, cv T, the type of the object being initialized, is bool, and thus both user-defined conversion functions are viable candidates, as one directly yields type bool and the other yields a type that can be converted to type bool via a standard conversion sequence (int to bool); as per [conv.bool]:
A prvalue of arithmetic, unscoped enumeration, pointer, or pointer to member type can be converted to a prvalue of type bool. [...]
Moreover, the type of the initializer expression is Foo, and [over.match.funcs]/4 governs that the cv-qualification of the type of the implicit object parameter for user-defined conversion functions is that of the cv-qualification of the respective function:
For non-static member functions, the type of the implicit object
parameter is
“lvalue reference to cv X” for functions declared without a ref-qualifier or with the & ref-qualifier
[...]
where X is the class of which the function is a member and cv is
the cv-qualification on the member function declaration. [...] For
conversion functions, the function is considered to be a member of the
class of the implied object argument for the purpose of defining the
type of the implicit object parameter. [...]
Thus, w.r.t. overload resolution we may summarize as follows:
// Target type: bool
// Source type: Foo
// Viable candidate functions:
operator int&(Foo&)
operator bool(const Foo&)
where we have added the implied object parameter as explicit function parameter, without loss of generality (as per [over.match.funcs]/5), when continuing for how overload resolution picks the best viable candidate.
Now, [over.ics.user], particularly [over.ics.user]/2 summarizes this:
[over.ics.user]/2 [...] Since an implicit conversion sequence is an initialization, the special rules for initialization by user-defined conversion apply when selecting the best user-defined conversion for a user-defined conversion sequence (see [over.match.best] and [over.best.ics]).
particularly that the rules for selecting the best viable candidate is governed by [over.match.best], particularly [over.match.best]/1:
[over.match.best]/1 [...] Given these definitions, a viable function F1 is defined to be a better function than another viable function F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then
/1.3 for some argument j, ICSj(F1) is a better conversion sequence than ICSj(F2), or, if not that,
/1.4 the context is an initialization by user-defined conversion (see [dcl.init], [over.match.conv], and [over.match.ref])
and the standard conversion sequence from the return type of F1 to
the destination type (i.e., the type of the entity being
initialized) is a better conversion sequence than the standard
conversion sequence from the return type of F2 to the destination
type
The key here is that [over.match.best]/1.4, regarding conversion on the return type of the candidate (to the target type) only applies if the overloads cannot be disambiguated by means of [over.match.best]/1.3. In our example, however, recall that the viable candidate functions are:
operator int&(Foo&)
operator bool(const Foo&)
As per [over.ics.rank]/3.2, particularly [over.ics.rank]/3.2.6:
[over.ics.rank]/3 Two implicit conversion sequences of the same form are indistinguishable conversion sequences unless one of the
following rules applies:
[...]
/3.2 Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if
[...]
/3.2.6 S1 and S2 are reference bindings ([dcl.init.ref]), and the types to which the references refer are the same type
except for top-level cv-qualifiers, and the type to which the
reference initialized by S2 refers is more cv-qualified than the
type to which the reference initialized by S1 refers.
meaning that for an argument of type Foo&, operator int&(Foo&) will be a better (/1.3: exact) match, whereas for e.g. an argument of type const Foo&, operator bool(const Foo&) will be the only match (Foo& will not be viable).
This question already has answers here:
Overload resolution: assignment of empty braces
(2 answers)
Closed 5 years ago.
I ran into a real-life WTF moment when I discovered that the code below outputs "pointer".
#include <iostream>
#include <utility>
template<typename T>
struct bla
{
static void f(const T*) { std::cout << "pointer\n"; }
static void f(const T&) { std::cout << "reference\n"; }
};
int main()
{
bla<std::pair<int,int>>::f({});
}
Changing the std::pair<int,int> template argument to an int or any other primitive type gives the (for me at least) expected "ambiguous overload" error. It seems that builtin types are special here, because user-defined types (aggregate, non-trivial, with a defaulted constructor, etc...) all lead to the pointer overload being called. I believe the template is not necessary to reproduce it; it just makes it simple to try out different types.
Personally, I don't think that is logical and I would expect the ambiguous overload error in all cases, regardless of the template argument. GCC and Clang (and I believe MSVC) all disagree with me, across C++11/14/1z. Note I am fully aware of the bad API these two overloads present, and I would never write something like this, I promise.
So the question becomes: what is going on?
Oh, this is nasty.
Per [over.ics.list]p4 and p7:
4 Otherwise, if the parameter is a non-aggregate class X and overload resolution per 13.3.1.7 chooses a single best constructor of X to perform the initialization of an object of type X from the argument initializer list, the implicit conversion sequence is a user-defined conversion sequence with the second standard conversion sequence an identity conversion. [...]
[...]
6 Otherwise, if the parameter is a reference, see 13.3.3.1.4. [Note: The rules in this section will apply for initializing the underlying temporary for the reference. -- end note] [...]
[...]
7 Otherwise, if the parameter type is not a class:
[...]
(7.2) -- if the initializer list has no elements, the implicit conversion sequence is the identity conversion. [...]
The construction of a const std::pair<int,int> temporary from {} is considered a user-defined conversion. The construction of a const std::pair<int,int> * prvalue, or a const int * prvalue, or a const int temporary object are all considered standard conversions.
Standard conversions are preferred over user-defined conversions.
Your own find of CWG issue 1536 is relevant, but mostly for language lawyers. It's a gap in the wording, where the standard doesn't really say what happens for initialisation of a reference parameter from {}, since {} is not an expression. It's not what makes the one call ambiguous and the other not though, and implementations are managing to apply common sense here.
Apparently, overloading on ref-qualifiers is not allowed – this code won't compile if you remove either & or && (just the tokens, not their functions):
#include <iostream>
struct S {
void f() & { std::cout << "Lvalue" << std::endl; }
void f() && { std::cout << "Rvalue" << std::endl; }
};
int main()
{
S s;
s.f(); // prints "Lvalue"
S().f(); // prints "Rvalue"
}
In other words, if you have two functions of the same name and type, you have to define both if you define either. I assume this is deliberate, but what's the reason? Why not allow, say, calling the && version for rvalues if it's defined, and the "primary" f() on everything else in the following variation (and vice-versa – although that would be confusing):
struct S {
void f() { std::cout << "Lvalue" << std::endl; }
void f() && { std::cout << "Rvalue" << std::endl; }
};
In other words, let them act similar to template specializations with respect to the primary template.
It's no different from the following situation:
struct S {};
void g(S s);
void g(S& s);
int main()
{
S s;
g(s); // ambiguous
}
Overload resolution has always worked this way; passing by reference is not preferred to passing by value (or vice versa).
(Overload resolution for ref-qualified functions works as if it were a normal function with an implicit first parameter whose argument is *this; lvalue-ref qualified is like a first parameter S &, const & is like S const & etc.)
I guess you are saying that g(s) should call g(S&) instead of being ambiguous.
I don't know the exact rationale, but overload resolution is complicated enough as it is without adding more special cases (especially ones that may silently compile to not what the coder intended).
As you note in your question, the problem can be easily avoided by using the two versions S & and S &&.
You can have both, or either. There is no specific requirement to include both if you implement the one.
The catch is that a member method (a non-static member function) that is not marked with a ref-qualifier is suitable for use with both lvalues and rvalues. Once you overload a method with a ref-qualifier, unless you mark the others as well, you run into ambiguity issues.
During overload resolution, non-static cv-qualified member function of class X is treated as a function that takes an implicit parameter of type lvalue reference to cv-qualified X if it has no ref-qualifiers or if it has the lvalue ref-qualifier. Otherwise (if it has rvalue ref-qualifier), it is treated as a function taking an implicit parameter of type rvalue reference to cv-qualified X.
So basically, if you have one method that is qualified (e.g. for an lvalue &) and one that is not qualified, the rules are such that they are effectively both qualified and hence ambiguous.
Similar rationale is applied to the const qualifier. You can implement a method and have one "version" for a const object, and one for a non-const object. The standard library containers are good examples of this, in particular the begin(), end() and other iterator related methods.
One particular use case is when the logic applied to the method is different between when the object is a temporary (or expiring) object and when it is not. You may wish to optimise away certain calls and data processing internally if you know the lifetime is about to end.
Another is to limit the use of a method to lvalues. A certain piece of application or object logic may not make sense or be useful if the entity is about to expire or is a temporary.
The wording in the standard (taken from the draft N4567) from §13.4.1/4 is:
For non-static member functions, the type of the implicit object parameter is
“lvalue reference to cv X” for functions declared without a ref-qualifier or with the & ref-qualifier
“rvalue reference to cv X” for functions declared with the && ref-qualifier
Let's start with what it means to define a basic non-static member function without any ref qualifiers.
§13.3.1 [4]
For non-static member functions, the type of the implicit object parameter is
— "lvalue reference to cv X” for functions declared without a ref-qualifier or with the & ref-qualifier
— "rvalue reference to cv X” for functions declared with the && ref-qualifier
But wait, there's more.
[5] For non-static member functions declared without a ref-qualifier, an additional rule applies:
even if the implicit object parameter is not const-qualified, an rvalue can be bound to the parameter as
long as in all other respects the argument can be converted to the type of the implicit object parameter.
Therefore
You can overload for just one ref type: lvalue, rvalue
You can't overload one or the other and then also add in another that's not ref qualified, because the one that's not ref qualified is defined to bind both types, and hence the ambiguity.
I was trying to post this code as an answer to this question, by making this pointer wrapper (replacing raw pointer). The idea is to delegate const to its pointee, so that the filter function can't modify the values.
#include <iostream>
#include <vector>
template <typename T>
class my_pointer
{
T *ptr_;
public:
my_pointer(T *ptr = nullptr) : ptr_(ptr) {}
operator T* &() { return ptr_; }
operator T const*() const { return ptr_; }
};
std::vector<my_pointer<int>> filter(std::vector<my_pointer<int>> const& vec)
{
//*vec.front() = 5; // this is supposed to be an error by requirement
return {};
}
int main()
{
std::vector<my_pointer<int>> vec = {new int(0)};
filter(vec);
delete vec.front(); // ambiguity with g++ and clang++
}
Visual C++ 12 and 14 compile this without an error, but GCC and Clang on Coliru claim that there's an ambiguity. I was expecting them to choose non-const std::vector::front overload and then my_pointer::operator T* &, but no. Why's that?
[expr.delete]/1:
The operand shall be of pointer to object type or of class type. If of
class type, the operand is contextually implicitly converted (Clause
[conv]) to a pointer to object type.
[conv]/5, emphasis mine:
Certain language constructs require conversion to a value having one
of a specified set of types appropriate to the construct. An
expression e of class type E appearing in such a context is said
to be contextually implicitly converted to a specified type T and
is well-formed if and only if e can be implicitly converted to a type
T that is determined as follows: E is searched for non-explicit
conversion functions whose return type is cv T or reference to cv T
such that T is allowed by the context. There shall be exactly
one such T.
In your code, there are two such Ts (int * and const int *). It is therefore ill-formed, before you even get to overload resolution.
Note that there's a change in this area between C++11 and C++14. C++11 [expr.delete]/1-2 says
The operand shall have a pointer to object type, or a class type
having a single non-explicit conversion function (12.3.2) to a pointer
to object type. [...]
If the operand has a class type, the operand is converted to a pointer type by calling the above-mentioned conversion function, [...]
Which would, if read literally, permit your code and always call operator const int*() const, because int* & is a reference type, not a pointer to object type. In practice, implementations consider conversion functions to "reference to pointer to object" like operator int*&() as well, and then reject the code because it has more than one qualifying non-explicit conversion function.
The delete expression takes a cast expression as argument, which can be const or not.
vec.front() is not const, but it must first be converted to a pointer for delete. So both const int* and int* are possible candidates; the compiler cannot choose which one you want.
The easiest fix is to use a cast to resolve the choice. For example:
delete (int*)vec.front();
Remark: it works when you use a get() function instead of a conversion, because the rules are different: the choice of overloaded function is based on the types of the parameters and of the object, not on the return type. Here the non-const overload is the best viable function, as vec.front() is not const.
In the following code, the first call to foo is ambiguous, and therefore fails to compile.
The second, with the added + before the lambda, resolves to the function pointer overload.
#include <functional>
void foo(std::function<void()> f) { f(); }
void foo(void (*f)()) { f(); }
int main ()
{
foo( [](){} ); // ambiguous
foo( +[](){} ); // not ambiguous (calls the function pointer overload)
}
What is the + notation doing here?
The + in the expression +[](){} is the unary + operator. It is defined as follows in
[expr.unary.op]/7:
The operand of the unary + operator shall have arithmetic, unscoped enumeration, or pointer type and the result is the value of the argument.
The lambda is not of arithmetic type etc., but it can be converted:
[expr.prim.lambda]/3
The type of the lambda-expression [...] is a unique, unnamed non-union class type — called the closure type — whose properties are described below.
[expr.prim.lambda]/6
The closure type for a lambda-expression with no lambda-capture has a public non-virtual non-explicit const conversion function to pointer to function having the same parameter and return types as the closure type's function call operator. The value returned by this conversion function shall be the address of a function that, when invoked, has the same effect as invoking the closure type’s function call operator.
Therefore, the unary + forces the conversion to the function pointer type, which is for this lambda void (*)(). Therefore, the type of the expression +[](){} is this function pointer type void (*)().
The second overload void foo(void (*f)()) becomes an Exact Match in the ranking for overload resolution and is therefore chosen unambiguously (as the first overload is NOT an Exact Match).
The lambda [](){} can be converted to std::function<void()> via the non-explicit template ctor of std::function, which takes any type that fulfils the Callable and CopyConstructible requirements.
The lambda can also be converted to void (*)() via the conversion function of the closure type (see above).
Both are user-defined conversion sequences, and of the same rank. That's why overload resolution fails in the first example due to ambiguity.
According to Cassio Neri, backed up by an argument by Daniel Krügler, this unary + trick should be specified behaviour, i.e. you can rely on it (see discussion in the comments).
Still, I'd recommend using an explicit cast to the function pointer type if you want to avoid the ambiguity: then you don't need to ask on SO what it does and why it works ;)