Overloaded function and multiple conversion operators ambiguity in C++, compilers disagree

In the following program, struct S provides two conversion operators: to double and to long long int. An object of type S is then passed to a function f that is overloaded for float and double:
struct S {
    operator double() { return 3; }
    operator long long int() { return 4; }
};
void f( double ) {}
void f( float ) {}
int main() {
    S s;
    f( s );
}
The MSVC compiler accepts the program, selecting the f( double ) overload. However, both GCC and Clang consider the call to f ambiguous; demo: https://gcc.godbolt.org/z/5csd5dfYz
It seems that MSVC is right here, because the second conversion in operator long long int() -> f( float ) is not a promotion. Is that reasoning wrong?
There is a similar question, Overload resolution with multiple functions and multiple conversion operators, but it involves a promotion, and all compilers now agree on it, unlike the case in this question.

GCC and Clang are correct. The implicit conversion sequences (user-defined conversion sequences) are indistinguishable.
[over.ics.rank]/3:
(emphasis mine)
Two implicit conversion sequences of the same form are
indistinguishable conversion sequences unless one of the following
rules applies:
...
(3.3) User-defined conversion sequence U1 is a better conversion sequence
than another user-defined conversion sequence U2 if they contain the
same user-defined conversion function or constructor or they
initialize the same class in an aggregate initialization and in either
case the second standard conversion sequence of U1 is better than the
second standard conversion sequence of U2.
The user-defined conversion sequences involve two different user-defined conversion functions (operator double() and operator long long int()), so the compilers can't prefer one; the second standard conversion sequence is never even compared.
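If MSVC's result is the one you want, the portable fix is to name the conversion yourself, so that only one user-defined conversion function participates; a minimal sketch using the question's types:
#include <iostream>
struct S {
    operator double() { return 3; }
    operator long long int() { return 4; }
};
void f( double ) { std::cout << "double\n"; }
void f( float )  { std::cout << "float\n"; }
int main() {
    S s;
    f( static_cast<double>( s ) ); // forces operator double(); plain overload
                                   // resolution on a double then picks f( double )
    f( s.operator double() );      // equivalent: names the conversion directly
}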

Related

Does C++ guarantee this enum vs int constructor overload resolution?

Consider this example program:
#include <iostream>
typedef enum { A, B, C } MyEnum;
struct S
{
    S(int) { std::cout << "int" << std::endl; }
    S(MyEnum) { std::cout << "MyEnum" << std::endl; }
};
S f()
{
    return A;
}
int main()
{
    S const s = f();
}
Compiled with both clang and gcc this produces an executable that prints "MyEnum" when run. Is this behavior guaranteed by the C++ standard?
Yes, S::S(MyEnum) wins overload resolution because it is an exact match, while S::S(int) requires one more implicit conversion (an integral promotion from the enumeration to int).
Each type of standard conversion sequence is assigned one of three ranks:
Exact match: no conversion required, lvalue-to-rvalue conversion, qualification conversion, function pointer conversion (since C++17), user-defined conversion of class type to the same class
Promotion: integral promotion, floating-point promotion
Conversion: integral conversion, floating-point conversion, floating-integral conversion, pointer conversion, pointer-to-member conversion, boolean conversion, user-defined conversion of a derived class to its base
A standard conversion sequence S1 is better than a standard conversion sequence S2 if
a) S1 is a subsequence of S2, excluding lvalue transformations. The identity conversion sequence is considered a subsequence of any other conversion
b) Or, if not that, the rank of S1 is better than the rank of S2
F1 is determined to be a better function than F2 if implicit conversions for all arguments of F1 are not worse than the implicit conversions for all arguments of F2, and
there is at least one argument of F1 whose implicit conversion is better than the corresponding implicit conversion for that argument of F2
These pair-wise comparisons are applied to all viable functions. If exactly one viable function is better than all others, overload resolution succeeds and this function is called. Otherwise, compilation fails.
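The rank ordering quoted above is easy to observe in isolation. A minimal sketch (the function g and these types are mine, not from the question):
#include <iostream>
void g( int )  { std::cout << "int\n"; }  // short -> int is an integral promotion
void g( long ) { std::cout << "long\n"; } // short -> long is an integral conversion
int main() {
    short v = 1;
    g( v ); // prints "int": Promotion outranks Conversion
}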
Yes, of course. A return statement allows implicit construction, and S(MyEnum) is the exact match.
The same would work with return {A};
But if you were to make S(MyEnum) explicit, then (see the sketch after this list):
return A; will call S(int) as a fallback, because MyEnum is implicitly convertible to int. This is a worse candidate than S(MyEnum) due to the extra conversion, chosen only out of necessity.
return {A}; is copy-list-initialization. It will fail, because the best match, S(MyEnum), is explicit, and copy-list-initialization may not call an explicit constructor.
return S{A}; is direct-list-initialization; it will call S(MyEnum). Although list-initialization forbids some implicit conversions (narrowing), that does not affect this example, and S(int) would be called had S(MyEnum) been removed.
return S(A); is direct-initialization, so explicit constructors remain candidates and S(MyEnum) is called; in that respect it is not the same as return A;.
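A sketch of those four cases, with S(MyEnum) made explicit (the comments restate the behaviour described above; f2 is left commented out because it does not compile):
typedef enum { A, B, C } MyEnum;
struct S {
    S(int) {}
    explicit S(MyEnum) {}
};
S f1() { return A; }      // OK: copy-initialization skips the explicit
                          // constructor and falls back to S(int)
// S f2() { return {A}; } // error: copy-list-initialization selects the
                          // explicit S(MyEnum) and is therefore ill-formed
S f3() { return S{A}; }   // OK: direct-list-initialization calls S(MyEnum)
S f4() { return S(A); }   // OK: direct-initialization calls S(MyEnum)
int main() {
    f1(); f3(); f4();
}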

Worst conversion sequence in list-initialization

I don't understand how worst conversion sequences for initializer_list are ranked during overload resolution. For example:
#include <initializer_list>
void fnc (std::initializer_list<int>) {}
void fnc (std::initializer_list<long>) {}
struct CL1 {
    operator short() { return 1; }
};
struct CL2 {
    operator short() { return 1; }
};
int main() {
    CL1 cl1;
    CL2 cl2;
    fnc ({cl1, cl2});
}
Here we have a call to the overloaded fnc function, and overload resolution must find the best viable function. The candidates are two functions, which are ranked by comparing the conversion sequences they require.
The Standard (n4296) 13.3.3.1.5/4 [over.ics.list] says:
Otherwise, if the parameter type is std::initializer_list<X> and all the
elements of the initializer list can be implicitly converted to X, the
implicit conversion sequence is the worst conversion necessary to
convert an element of the list to X
For std::initializer_list<int> both conversions (CL1 -> int, CL2 -> int) are indistinguishable, and either one can be the worst conversion (user-defined conversion + promotion). Similarly for std::initializer_list<long>, but there the worst conversion sequence is user-defined conversion + standard conversion. And then the main question: which conversion (for cl1 or cl2) is selected as the worst? Assume that in both initializer lists the worst conversion selected is the first one (for cl1). Then, according to 13.3.3.2/3.3 [over.ics.rank]:
User-defined conversion sequence U1 is a better conversion sequence
than another user-defined conversion sequence U2 if they contain the
same user-defined conversion function or constructor or they
initialize the same class in an aggregate initialization and in either
case the second standard conversion sequence of U1 is better than the
second standard conversion sequence of U2.
the case with int is better, because its second standard conversion sequence is better (a promotion versus a conversion) and both sequences contain the same user-defined conversion function, CL1's operator short(). But now assume that for std::initializer_list<long> the worst conversion is instead the one for the second element (cl2): then there is an ambiguity, because the conversion functions are different.
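Whatever the intended reading, the ambiguity can be side-stepped by converting the elements up front, so that the braces contain only standard conversions; a sketch reusing the question's declarations:
#include <initializer_list>
void fnc (std::initializer_list<int>)  {}
void fnc (std::initializer_list<long>) {}
struct CL1 { operator short() { return 1; } };
struct CL2 { operator short() { return 1; } };
int main() {
    CL1 cl1;
    CL2 cl2;
    // Both elements are converted to int before the list is formed, so for
    // initializer_list<int> the worst element conversion is the identity
    // (Exact Match), while for initializer_list<long> it is int -> long.
    // The int overload wins without comparing conversion functions at all.
    fnc ({ static_cast<int>(cl1), static_cast<int>(cl2) });
}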

Different casting operators used by different compilers

The following C++ program compiles without warnings in all compilers I have tried (gcc 4.6.3, llvm 3.0, icc 13.1.1, SolarisStudio 12.1/12.3):
struct CClass
{
    template<class T>
    operator T() const { return 1; }
    operator int() const { return 2; }
};
int main(void)
{
    CClass x;
    return static_cast<char>(x);
}
However, all but the SolarisStudio compilers return 2; SolarisStudio (either version) returns 1, which I would consider the most logical result.
Using return x.operator char(); results in all compilers returning 1.
Obviously, since figuring this out, I have been using the latter notation. However, I would like to know which of the compilers is correct and why. (One would think that majority rules, but this still doesn't explain the why.)
This question seems to be related to the SO questions here, here, and here, but these "only" give solutions to problems, no explanations (that I was able to apply to my particular problem anyway).
Note that adding an additional overloaded casting operator, say operator float() const { return 3; }, results in all compilers except SolarisStudio complaining about ambiguity.
The first (template) overload should be picked.
Paragraph 13.3.3/1 of the C++11 Standard specifies:
[...] a viable function F1 is defined to be a better function than another viable function
F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then
— for some argument j, ICSj(F1) is a better conversion sequence than ICSj(F2), or, if not that,
— the context is an initialization by user-defined conversion (see 8.5, 13.3.1.5, and 13.3.1.6) and the
standard conversion sequence from the return type of F1 to the destination type (i.e., the type of the
entity being initialized) is a better conversion sequence than the standard conversion sequence from
the return type of F2 to the destination type. [ Example:
struct A {
    A();
    operator int();
    operator double();
} a;
int i = a;   // a.operator int() followed by no conversion
             // is better than a.operator double() followed by
             // a conversion to int
float x = a; // ambiguous: both possibilities require conversions,
             // and neither is better than the other
—end example ] or, if not that,
— F1 is a non-template function and F2 is a function template specialization, or, if not that,
[...]
As you can see, the fact that the first conversion operator is a template only becomes relevant when the standard conversion sequence from its return type (char, in this case) to the destination type (char) is not better than the standard conversion sequence from the return type of the non-template overload (int) to the destination type (char).
However, a standard conversion from char to char is an Exact Match, while a standard conversion from int to char is not. Therefore, the third item of § 13.3.3/1 does not apply, and the second item does.
This means that the first (template) overload should be picked.
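Conversely, if the two second standard conversion sequences tie, the next item of § 13.3.3/1 does kick in and the non-template overload wins. A sketch (CClass2 is my variation on the question's class, not from the original post):
struct CClass2 {
    template<class T>
    operator T() const { return 1; }
    operator char() const { return 2; } // same return type as the deduced T
};
int main(void) {
    CClass2 x;
    // Both candidates now yield a char -> char Exact Match, so the tie is
    // broken in favour of the non-template operator char(): this returns 2.
    return static_cast<char>(x);
}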
The first is an exact match, the second requires a conversion. Exact matches have priority over conversions.
Those other questions you linked are mostly unrelated to yours.
Some advice: don't use template conversion operators; give the conversion a name such as convert_to instead, as sketched below.
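A sketch of that advice (convert_to is only a suggested name, not a standard facility):
struct CClass {
    template<class T>
    T convert_to() const { return 1; }
};
int main(void) {
    CClass x;
    // The target type must be spelled out at every call site, so overload
    // resolution never has to pick a conversion operator behind your back.
    return x.convert_to<char>();
}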

Precedence of overloaded cast operators

In the code below, I would expect to get a compiler error if more than one cast operator is defined because of the ambiguity.
#include <iostream>
#include <sstream>
struct A
{
    operator const char*() { return "hello world\n"; }
    operator float() { return 123.0F; }
    //operator int() { return 49; }
};
int main()
{
    A a;
    std::stringstream ss;
    ss << a;
    std::cout << ss.str();
    return 0;
}
Instead, as long as only one numeric cast operator is defined, it compiles with no errors or warnings, and the numeric cast is used in preference to operator const char*(). The order in which the operators are declared makes no difference.
However if operator int() and operator float() are both defined then I get what I expected from the start:
'operator <<' is ambiguous
Are there precedence rules for casts, or why does the compiler choose the numeric cast by default? I understand that I should explicitly state which cast I mean, but my question is about the default choice the compiler makes.
Edit: Using compiler MSVC 2010
Conversions are ranked according to § 13.3.3.1 of the C++ Standard. In particular, user-defined conversion sequences pertinent to your example are regulated by § 13.3.3.1.2/1:
"A user-defined conversion sequence consists of an initial standard conversion sequence followed by a user-defined conversion (12.3) followed by a second standard conversion sequence. [...] If the user-defined conversion is specified by a conversion function (12.3.2), the initial standard conversion sequence converts the source type to the implicit object parameter of the conversion function."
All conversion sequences here involve:
a fictitious conversion to the source type of the implicit object parameter of the conversion function;
a user-defined conversion;
an identity conversion to the input type of operator <<.
These conversion sequences all have the same rank. Thus, the call should be ambiguous; if it is not, that is a compiler bug.
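If that analysis is right, the portable course is the one the poster already suspected: state the conversion explicitly. A sketch based on the question's code:
#include <iostream>
#include <sstream>
struct A
{
    operator const char*() { return "hello world\n"; }
    operator float() { return 123.0F; }
};
int main()
{
    A a;
    std::stringstream ss;
    ss << static_cast<const char*>(a); // only operator const char*() is viable
    ss << static_cast<float>(a);       // only operator float() is viable
    std::cout << ss.str();
    return 0;
}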

Given the following code (in GCC 4.3), why is the conversion to reference called twice?

Given the following code (in GCC 4.3), why is the conversion to reference called in both cases?
class A { };
class B {
public:
    operator A() { return A(); }
    operator A&() { static A a; return a; }
};
int main() {
    B b;
    (A) b;
    (A&) b;
}
http://ideone.com/d6iF8
Your code is ambiguous and should not compile (it is ill-formed per 13.3.3:2).
lvalue-to-rvalue conversion has the same rank as identity conversion, so (per 13.3.3:1) there is no way to choose between them.
Comeau C++ (probably the most standards-compliant compiler) gives the following error:
"ComeauTest.c", line 11: error: more than one user-defined conversion from "B" to
"A" applies:
function "B::operator A()"
function "B::operator A &()"
(A) b;
^
Here's the relevant text from the standard:
C++11, 13.3.3 Best viable function [over.match.best]:
[...] Given these definitions, a viable function F1 is defined to be a better function than another viable function F2 [...]
2 - If there is exactly one viable function that is a better function than all other viable functions, then it is the one selected by overload resolution; otherwise the call is ill-formed.
The definitions themselves are complicated, but there are two things to note with user-defined conversions:
First, the application of user-defined conversion as a conversion sequence is specified to decompose into a sequence S_a - U - S_b of a standard conversion sequence followed by a user-defined conversion followed by another standard conversion sequence. This covers all the cases; you can't have more than one user-defined conversion in a conversion sequence, and a standard conversion sequence can be the "identity conversion" i.e. no conversion required.
Second, when comparing user-defined conversion sequences the only part that matters is the second standard conversion sequence. This is in 13.3.3:
C++11, 13.3.3 Best viable function [over.match.best]:
[...] a viable function F1 is defined to be a better function than another viable function
F2 if [...]
the context is an initialization by user-defined conversion (see 8.5, 13.3.1.5, and 13.3.1.6) and the
standard conversion sequence from the return type of F1 to the destination type (i.e., the type of the
entity being initialized) is a better conversion sequence than the standard conversion sequence from
the return type of F2 to the destination type.
and in 13.3.3.2:
C++11, 13.3.3.2 Ranking implicit conversion sequences [over.ics.rank]:
3 - Two implicit conversion sequences of the same form are indistinguishable conversion sequences unless one of
the following rules applies: [...]
User-defined conversion sequence U1 is a better conversion sequence than another user-defined conversion sequence U2 if they contain the same user-defined conversion function or constructor or aggregate
initialization and the second standard conversion sequence of U1 is better than the second standard
conversion sequence of U2.
So when comparing conversion sequences U1 = (S1_a - U'1 - S1_b) and U2 = (S2_a - U'2 - S2_b) the only thing that matters is the relative rank of S1_b and S2_b; the standard conversion sequences required to arrive at the parameter of the user-defined conversions do not matter.
So the possible conversion sequences for (A) b, requiring a conversion sequence yielding B -> A, are:
U1: B -> B [identity], B::operator A() [user-defined], A -> A [identity]
U2: B -> B [identity], B::operator A &() [user-defined], A & -> A [lvalue-to-rvalue]
Now, how do we rank standard conversion sequences? The place to look is table 12 in 13.3.3.1.1, which specifies that lvalue-to-rvalue conversion has the same rank ("Exact Match") as identity conversion. So the two user-defined conversion sequences cannot be distinguished, and the program is ill-formed.
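Given that, the only way to get a specific conversion out of this class is to name it. A sketch (the data member a is my addition, so that both operators have something to return):
class A { };
class B {
    A a;
public:
    operator A() { return a; }
    operator A&() { return a; }
};
int main() {
    B b;
    A copy = b.operator A();     // explicitly selects the by-value conversion
    A& ref = static_cast<A&>(b); // only operator A&() yields an lvalue that can
                                 // bind to A&, so this is unambiguous
    (void)copy; (void)ref;
}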
Sidebar
What's the difference between 13.3.3 and 13.3.3.2 as regards ranking user-defined conversion sequences?
13.3.3 allows the compiler to distinguish between different user-defined conversion operators; 13.3.3.2 allows the compiler to distinguish between different functions that each require a user-defined conversion in their arguments.
So, in the code
struct A {
    operator int();
    operator float();
} a;
void f(int);
int main() { f(a); }
13.3.3 applies and A::operator int() is selected over A::operator float(); in the code
struct A {
    operator int();
} a;
void f(int);
void f(double);
int main() { f(a); }
13.3.3.2 applies and void f(int) is selected over void f(double). However, in the code
struct A {
    operator int();
    operator float();
} a;
void f(int);
void f(double);
int main() { f(a); }
even though 13.3.3 prefers A::operator int() -> void f(int) over A::operator float() -> void f(int), and float -> double over int -> double; and even though 13.3.3.2 prefers int -> int over int -> double, and float -> double over float -> int; there is no way to distinguish between the int -> int and float -> double conversion sequences (they contain neither the same user-defined conversion operator nor the same overload of f), and so the code is ill-formed.