Consider this example program:
#include <iostream>
typedef enum { A, B, C } MyEnum;
struct S
{
    S(int) { std::cout << "int" << std::endl; }
    S(MyEnum) { std::cout << "MyEnum" << std::endl; }
};
S f()
{
    return A;
}
int main()
{
    S const s = f();
}
Compiled with both clang and gcc, this produces an executable that prints "MyEnum" when run. Is this behavior guaranteed by the C++ standard?
Yes. S::S(MyEnum) wins overload resolution because its conversion sequence is an exact match, while S::S(int) requires an additional implicit conversion (an integral promotion from MyEnum to int), which ranks worse.
Each type of standard conversion sequence is assigned one of three ranks:
Exact match: no conversion required, lvalue-to-rvalue conversion, qualification conversion, function pointer conversion, (since C++17) user-defined conversion of class type to the same class
Promotion: integral promotion, floating-point promotion
Conversion: integral conversion, floating-point conversion, floating-integral conversion, pointer conversion, pointer-to-member conversion, boolean conversion, user-defined conversion of a derived class to its base
A standard conversion sequence S1 is better than a standard conversion sequence S2 if
a) S1 is a subsequence of S2, excluding lvalue transformations. The identity conversion sequence is considered a subsequence of any other conversion
b) Or, if not that, the rank of S1 is better than the rank of S2
F1 is determined to be a better function than F2 if implicit conversions for all arguments of F1 are not worse than the implicit conversions for all arguments of F2, and
there is at least one argument of F1 whose implicit conversion is better than the corresponding implicit conversion for that argument of F2
These pair-wise comparisons are applied to all viable functions. If exactly one viable function is better than all others, overload resolution succeeds and this function is called. Otherwise, compilation fails.
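For instance (a separate minimal sketch, not part of the question's code), the difference between the Promotion and Conversion ranks alone decides this call:
#include <iostream>
void g(int)  { std::cout << "int" << std::endl; }   // short -> int is an integral promotion
void g(long) { std::cout << "long" << std::endl; }  // short -> long is an integral conversion
int main()
{
    short x = 1;
    g(x);  // prints "int": the Promotion-rank sequence beats the Conversion-rank one
}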
Yes, of course. A return statement copy-initializes the return value, so implicit construction is allowed, and S(MyEnum) is the exact match.
The same would work with return {A};
But if you were to make S(MyEnum) explicit, then:
return A; will call S(int) as a fallback, because MyEnum is implicitly convertible to int. This is a worse overload candidate than S(MyEnum) (it needs the extra promotion), chosen only out of necessity, since an explicit constructor cannot be used in copy-initialization.
return {A}; is copy-list-initialization. It will fail to compile, because overload resolution still selects the explicit S(MyEnum), and an explicit constructor may not be used in copy-list-initialization.
return S{A}; is direct-list-initialization; it will call S(MyEnum). List-initialization forbids narrowing conversions, but that does not affect this example, and S(int) would be called if S(MyEnum) were removed.
return S(A); is direct-initialization of a temporary S, so the explicit S(MyEnum) participates and is called; the result then initializes the return value.
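Putting the explicit case together in one sketch (same names as the question; the behaviour of each return form is noted in comments):
#include <iostream>
typedef enum { A, B, C } MyEnum;
struct S
{
    S(int) { std::cout << "int" << std::endl; }
    explicit S(MyEnum) { std::cout << "MyEnum" << std::endl; }
};
S f()
{
    return A;        // copy-initialization: the explicit constructor is not a candidate, prints "int"
    // return {A};   // copy-list-initialization: would select explicit S(MyEnum), ill-formed
    // return S{A};  // direct-list-initialization: explicit allowed, prints "MyEnum"
    // return S(A);  // direct-initialization: explicit allowed, prints "MyEnum"
}
int main()
{
    S const s = f();
}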
Related
In the following program, struct S provides two conversion operators: to double and to long long int. An object of type S is then passed to a function f, overloaded for float and double:
struct S {
    operator double() { return 3; }
    operator long long int() { return 4; }
};
void f( double ) {}
void f( float ) {}
int main() {
    S s;
    f( s );
}
The MSVC compiler accepts the program, selecting the f( double ) overload. However, both GCC and Clang see an ambiguity in the call to f, demo: https://gcc.godbolt.org/z/5csd5dfYz
It seems that MSVC is right here, because the conversion
operator long long int() -> f( float )
is not a promotion. Or is that reasoning wrong?
There is a similar question Overload resolution with multiple functions and multiple conversion operators, but there is a promotion case in it and all compilers agree now, unlike the case in this question.
GCC and Clang are correct. The implicit conversion sequences (user-defined conversion sequences) are indistinguishable.
[over.ics.rank]/3:
(emphasis mine)
Two implicit conversion sequences of the same form are
indistinguishable conversion sequences unless one of the following
rules applies:
...
(3.3) User-defined conversion sequence U1 is a better conversion sequence
than another user-defined conversion sequence U2 if they contain the
same user-defined conversion function or constructor or they
initialize the same class in an aggregate initialization and in either
case the second standard conversion sequence of U1 is better than the
second standard conversion sequence of U2.
The user-defined conversion sequences involve two different user-defined conversion functions (operator double() and operator long long int()), so rule (3.3) does not apply and the compiler can't prefer either; the second standard conversion sequences are never compared.
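To see when (3.3) does apply, here is a small sketch of my own (not from the question): if both overloads have to go through the same conversion function, the second standard conversion sequences are compared and the ambiguity disappears:
struct S {
    operator double() { return 3; }   // only one conversion function now
};
void f( double ) {}
void f( float ) {}
int main() {
    S s;
    f( s );   // OK, calls f( double ): both sequences use operator double(), and
              // double -> double (Exact Match) beats double -> float (Conversion)
}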
I'm wondering why there's no ambiguity in this function call:
#include <iostream>
#include <vector>
template <class T>
class C
{
public:
    typedef char c;
    typedef double d;
    int fun() { return 0; }
    static c testFun( decltype(&C::fun) ) { return c(); }
    static d testFun(...) { return d(); }
};
int main() {
    C<int>::testFun(0); // Why no ambiguity?
}
http://coliru.stacked-crooked.com/a/241ce5ab82b4a018
There's a ranking of implicit conversion sequences, as defined in [over.ics.rank], emphasis mine:
When comparing the basic forms of implicit conversion sequences...
- a standard conversion sequence (13.3.3.1.1) is a better conversion sequence than a user-defined conversion sequence or an ellipsis conversion sequence, and
- a user-defined conversion sequence (13.3.3.1.2) is a better conversion sequence than an ellipsis conversion sequence (13.3.3.1.3).
So we have two functions:
static char testFun( int (C::*)() ) { return char(); }
static double testFun( ... ) { return double(); }
Both functions are viable for testFun(0). The first would involve a "null member pointer conversion" as per [conv.mem], and is a standard conversion sequence. The second would match the ellipsis and be an ellipsis conversion sequence. By [over.ics.rank], the former is preferred. There's no ambiguity; one candidate is strictly better than the other.
An ambiguous overload would arise if we had two equivalent conversion sequences that the compiler could not decide between. Consider if we had something like:
static char testFun(int* ) { return 0; }
static int testFun(char* ) { return 0; }
testFun(0);
Now both overloads would be equivalent as far as the conversion sequences go, so we'd have two viable candidates with indistinguishable conversion sequences, and the call would be ambiguous.
You have a standard conversion vs an ellipsis conversion. The standard says that a standard conversion is a better conversion sequence than the latter. [over.ics.rank]/p2:
a standard conversion sequence (13.3.3.1.1) is a better conversion sequence than a user-defined conversion
sequence or an ellipsis conversion sequence
A pointer-to-member conversion is a standard conversion sequence. 0 is a null pointer constant and can be converted to a pointer-to-member. [conv.mem]/p1:
A null pointer constant (4.10) can be converted to a pointer to member type; the result is the null member
pointer value of that type and is distinguishable from any pointer to member not created from a null pointer
constant. Such a conversion is called a null member pointer conversion.
Therefore the first overload is preferred.
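For a minimal standalone illustration of that preference (my own sketch, separate from the template in the question):
struct X { int fun(); };
void g( int (X::*)() ) {}   // parameter is a pointer to member function
void g( ... ) {}            // ellipsis fallback
int main() {
    g(0);   // calls the first g: 0 is a null pointer constant, and the null member
            // pointer conversion is a standard conversion sequence, which ranks
            // better than the ellipsis conversion sequence
}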
13.3.2 Viable functions
A candidate function having fewer than m parameters is viable only if it has an ellipsis in its parameter list (8.3.5). For the purposes of overload resolution, any argument for which there is no corresponding parameter is considered to “match the ellipsis” (13.3.3.1.3).
And also
Viable functions
Given the set of candidate functions, constructed as described above, the next step of overload resolution is examining arguments and parameters to reduce the set to the set of viable functions
To be included in the set of viable functions, the candidate function must satisfy the following:
1) If there are M arguments, the candidate function that has exactly M parameters is viable
2) If the candidate function has less than M parameters, but has an ellipsis parameter, it is viable.
[...]
Overload resolution
A null pointer constant 0 undergoes a standard conversion to the pointer-to-member-function parameter, and a standard conversion sequence is always a better match than the ellipsis conversion sequence (...).
The following C++ program compiles without warnings in all compilers I have tried (gcc 4.6.3, llvm 3.0, icc 13.1.1, SolarisStudio 12.1/12.3):
struct CClass
{
    template<class T>
    operator T() const { return 1; }
    operator int() const { return 2; }
};
int main(void)
{
    CClass x;
    return static_cast<char>(x);
}
However, all but the SolarisStudio compilers return 2, while SolarisStudio (either version) returns 1, which I would consider the most logical result.
Using return x.operator char(); results in all compilers returning 1.
Obviously, since figuring this out, I have been using the latter notation. However, I would like to know which of the compilers is correct and why. (One would think that majority rules, but this still doesn't explain the why.)
This question seems to be related to the SO questions here, here, and here, but these "only" give solutions to problems, no explanations (that I was able to apply to my particular problem anyway).
Note that adding an additional overloaded casting operator, say operator float() const { return 3; } results in all compilers except SolarisStudio complaining about ambiguity.
The first (template) overload should be picked.
Paragraph 13.3.3/1 of the C++11 Standard specifies:
[...] a viable function F1 is defined to be a better function than another viable function
F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then
— for some argument j, ICSj(F1) is a better conversion sequence than ICSj(F2), or, if not that,
— the context is an initialization by user-defined conversion (see 8.5, 13.3.1.5, and 13.3.1.6) and the
standard conversion sequence from the return type of F1 to the destination type (i.e., the type of the
entity being initialized) is a better conversion sequence than the standard conversion sequence from
the return type of F2 to the destination type. [ Example:
struct A {
    A();
    operator int();
    operator double();
} a;
int i = a; // a.operator int() followed by no conversion
// is better than a.operator double() followed by
// a conversion to int
float x = a; // ambiguous: both possibilities require conversions,
// and neither is better than the other
—end example ] or, if not that,
— F1 is a non-template function and F2 is a function template specialization, or, if not that,
[...]
As you can see, the fact that the first conversion operator is a template only becomes relevant when the standard conversion sequence from its return type (char, in this case) to the destination type (char, in this case) is not better than the standard conversion sequence from the return type of the non-template overload (int, in this case) to the destination type (char, in this case).
However, a standard conversion from char to char is an Exact Match, while a standard conversion from int to char is not. Therefore, the third item of § 13.3.3/1 does not apply, and the second item does.
This means that the first (template) overload should be picked.
The first is an exact match, the second requires a conversion. Exact matches have priority over conversions.
Those other questions you linked are mostly unrelated to yours.
Some advice: don't use template conversion operators. Use a named member function, e.g. convert_to, instead.
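A rough sketch of that advice (convert_to is just an illustrative name here, not an established API):
struct CClass
{
    template<class T>
    T convert_to() const { return T(1); }   // named conversion: the caller spells out the target type
    operator int() const { return 2; }
};
int main()
{
    CClass x;
    return x.convert_to<char>();   // unambiguously returns 1; no overload-resolution surprise
}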
Given the following code (in GCC 4.3), why is the conversion to reference called in both cases?
class A { };
class B {
public:
    operator A() { return A(); }              // conversion to A (by value)
    operator A&() { static A a; return a; }   // conversion to reference to A
};
int main() {
    B b;
    (A) b;
    (A&) b;
}
http://ideone.com/d6iF8
Your code is ambiguous and should not compile (it is ill-formed per 13.3.3:2).
lvalue-to-rvalue conversion has the same rank as identity conversion, so (per 13.3.3:1) there is no way to choose between them.
Comeau C++ (probably the most standards-compliant compiler) gives the following error:
"ComeauTest.c", line 11: error: more than one user-defined conversion from "B" to
"A" applies:
function "B::operator A()"
function "B::operator A &()"
(A) b;
^
Here's the relevant text from the standard:
c++11
13.3.3 Best viable function [over.match.best]
[...] Given these definitions, a viable function F1 is defined to be a better function than another viable function F2 [...]
2 -
If there is exactly one viable function that is a better function than all other viable functions, then it is the
one selected by overload resolution; otherwise the call is ill-formed.
The definitions themselves are complicated, but there are two things to note with user-defined conversions:
First, the application of user-defined conversion as a conversion sequence is specified to decompose into a sequence S_a - U - S_b of a standard conversion sequence followed by a user-defined conversion followed by another standard conversion sequence. This covers all the cases; you can't have more than one user-defined conversion in a conversion sequence, and a standard conversion sequence can be the "identity conversion" i.e. no conversion required.
Second, when comparing user-defined conversion sequences the only part that matters is the second standard conversion sequence. This is in 13.3.3:
c++11
13.3.3 Best viable function [over.match.best]
[...] a viable function F1 is defined to be a better function than another viable function
F2 if [...]
the context is an initialization by user-defined conversion (see 8.5, 13.3.1.5, and 13.3.1.6) and the
standard conversion sequence from the return type of F1 to the destination type (i.e., the type of the
entity being initialized) is a better conversion sequence than the standard conversion sequence from
the return type of F2 to the destination type.
and in 13.3.3.2:
c++11
13.3.3.2 Ranking implicit conversion sequences [over.ics.rank]
3 - Two implicit conversion sequences of the same form are indistinguishable conversion sequences unless one of
the following rules applies: [...]
User-defined conversion sequence U1 is a better conversion sequence than another user-defined conversion sequence U2 if they contain the same user-defined conversion function or constructor or aggregate
initialization and the second standard conversion sequence of U1 is better than the second standard
conversion sequence of U2.
So when comparing conversion sequences U1 = (S1_a - U'1 - S1_b) and U2 = (S2_a - U'2 - S2_b) the only thing that matters is the relative rank of S1_b and S2_b; the standard conversion sequences required to arrive at the parameter of the user-defined conversions do not matter.
So the possible conversion sequences for (A) b, requiring a conversion sequence yielding B -> A, are:
U1: B -> B [identity], B::operator A() [user-defined], A -> A [identity]
U2: B -> B [identity], B::operator A &() [user-defined], A & -> A [lvalue-to-rvalue]
Now, how do we rank standard conversion sequences? The place to look is table 12 in 13.3.3.1.1, which specifies that lvalue-to-rvalue conversion has the same rank ("Exact Match") as identity conversion. So the two user-defined conversion sequences cannot be distinguished, and the program is ill-formed.
Sidebar
What's the difference between 13.3.3 and 13.3.3.2 as regards ranking user-defined conversion sequences?
13.3.3 allows the compiler to distinguish between different user-defined conversion operators; 13.3.3.2 allows the compiler to distinguish between different functions that each require a user-defined conversion in their arguments.
So, in the code
struct A {
    operator int();
    operator float();
} a;
void f(int);
int main() { f(a); }
13.3.3 applies and A::operator int() is selected over A::operator float(); in the code
struct A {
    operator int();
} a;
void f(int);
void f(double);
int main() { f(a); }
13.3.3.2 applies and void f(int) is selected over void f(double). However in the code
struct A {
    operator int();
    operator float();
} a;
void f(int);
void f(double);
int main() { f(a); }
even though 13.3.3 prefers A::operator int() -> void f(int) over A::operator float() -> void f(int) and float -> double over int -> double, and 13.3.3.2 prefers int -> int over int -> double and float -> double over float -> int, there is no way to distinguish between the int -> int and float -> double conversion sequences (because they contain neither the same user-defined conversion operator nor the same overload of f), and so the code is ill-formed.
The standard appears to provide two rules for distinguishing between implicit conversion sequences that involve user-defined conversion operators:
c++11
13.3.3 Best viable function [over.match.best]
[...] a viable function F1 is defined to be a better function than another viable function
F2 if [...]
the context is an initialization by user-defined conversion (see 8.5, 13.3.1.5, and 13.3.1.6) and the
standard conversion sequence from the return type of F1 to the destination type (i.e., the type of the
entity being initialized) is a better conversion sequence than the standard conversion sequence from
the return type of F2 to the destination type.
13.3.3.2 Ranking implicit conversion sequences [over.ics.rank]
3 - Two implicit conversion sequences of the same form are indistinguishable conversion sequences unless one of
the following rules applies: [...]
User-defined conversion sequence U1 is a better conversion sequence than another user-defined conversion sequence U2 if they contain the same user-defined conversion function or constructor or aggregate
initialization and the second standard conversion sequence of U1 is better than the second standard
conversion sequence of U2.
As I understand it, 13.3.3 allows the compiler to distinguish between different user-defined conversion operators, while 13.3.3.2 allows the compiler to distinguish between different functions (overloads of some function f) that each require a user-defined conversion in their arguments (see my sidebar to Given the following code (in GCC 4.3) , why is the conversion to reference called twice?).
Are there any other rules that can distinguish between user-defined conversion sequences? The answer at https://stackoverflow.com/a/1384044/567292 indicates that 13.3.3.2:3 can distinguish between user-defined conversion sequences based on the cv-qualification of the implicit object parameter (to a conversion operator) or of the single non-default parameter to a constructor or aggregate initialisation, but I don't see how that can be relevant given that that would require comparison between the first standard conversion sequences of the respective user-defined conversion sequences, which the standard doesn't appear to mention.
Supposing that S1 is better than S2, where S1 is the first standard conversion sequence of U1 and S2 is the first standard conversion sequence of U2, does it follow that U1 is better than U2? In other words, is this code well-formed?
struct A {
    operator int();
    operator char() const;
} a;
void foo(double);
int main() {
    foo(a);
}
g++ (4.5.1), Clang (3.0) and Comeau (4.3.10.1) accept it, preferring the non-const-qualified A::operator int(), but I'd expect it to be rejected as ambiguous and thus ill-formed. Is this a deficiency in the standard or in my understanding of it?
The trick here is that converting from a class type to a non-class type doesn't actually rank any user-defined conversions as implicit conversion sequences.
struct A {
    operator int();
    operator char() const;
} a;
void foo(double);
int main() {
    foo(a);
}
In the expression foo(a), foo obviously names a non-overloaded non-member function. The call requires copy-initializing (8.5p14) the function parameter, of type double, using the single expression a, which is an lvalue of class type A.
Since the destination type double is not a cv-qualified class type but the source type A is, the candidate functions are defined by section 13.3.1.5, with S=A and T=double. Only conversion functions in class A and any base classes of A are considered. A conversion function is in the set of candidates if:
It is not hidden in class A, and
It is not explicit (since the context is copy-initialization), and
A standard conversion sequence can convert the function's return type (not including any reference qualifiers) to the destination type double.
Okay, both conversion functions qualify, so the candidate functions are
A::operator int(); // F1
A::operator char() const; // F2
Using the rules from 13.3.1p4, each function has the implicit object parameter as the only thing in its parameter list. F1's parameter list is "(lvalue reference to A)" and F2's parameter list is "(lvalue reference to const A)".
Next we check that the functions are viable (13.3.2). Each function has one type in its parameter list, and there is one argument. Is there an implicit conversion sequence for each argument/parameter pair? Sure:
ICS1(F1): Bind the implicit object parameter (type lvalue reference to A) to expression a (lvalue of type A).
ICS1(F2): Bind the implicit object parameter (type lvalue reference to const A) to expression a (lvalue of type A).
Since there's no derived-to-base conversion going on, these reference bindings are considered special cases of the identity conversion (13.3.3.1.4p1). Yup, both functions are viable.
Now we have to determine if one implicit conversion sequence is better than the other. This falls under the fifth sub-item in the big list in 13.3.3.2p3: both are reference bindings to the same type except for top-level cv-qualifiers. Since the referenced type for ICS1(F2) is more cv-qualified than the referenced type for ICS1(F1), ICS1(F1) is better than ICS1(F2).
Therefore F1, that is A::operator int(), is the best viable function. And no user-defined conversion sequences (in the strict sense of an ICS composed of SCS + (converting constructor or conversion function) + SCS) even needed to be compared.
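As a standalone illustration of the cv-qualification rule that decided this (my own sketch, with a hypothetical function h):
struct A {} a;
void h(A&) {}         // binds "lvalue reference to A"
void h(const A&) {}   // binds "lvalue reference to const A"
int main() {
    h(a);   // calls h(A&): both bindings are identity conversions, but the reference
            // to the less cv-qualified type is the better conversion sequence
}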
Now if foo were overloaded, user-defined conversions on the argument a would need to be compared. So then the user-defined conversion (identity + A::operator int() + int to double) would be compared to other implicit conversion sequences.
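And as a sketch of that last point (my own variation on the code above, not from the original answer): once foo is overloaded, the user-defined conversion sequences themselves get ranked by their second standard conversion sequences:
#include <iostream>
struct A {
    operator int() { return 1; }
    operator char() const { return 2; }
} a;
void foo(double) { std::cout << "double" << std::endl; }
void foo(int)    { std::cout << "int" << std::endl; }
int main() {
    foo(a);   // prints "int": both overloads end up using A::operator int() (its non-const
              // implicit object parameter binds better), so the second standard conversion
              // sequences are compared, and int -> int (Exact Match) beats int -> double.
}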