The following code
#include <string>

struct Foo {
    operator double() {
        return 1;
    }
    int operator[](std::string x) {
        return 1;
    }
};

int main() {
    Foo()["abcd"];
}
compiles fine with g++ but fails with clang and the Intel compiler because of an ambiguity between the declared member operator and the built-in operator [].
It would be clear to me if Foo had an implicit conversion to int, but here the conversion is to double. Doesn't that resolve the ambiguity?
§13.3.3.1.2 [over.ics.user]/p1-2:
A user-defined conversion sequence consists of an initial standard
conversion sequence followed by a user-defined conversion (12.3)
followed by a second standard conversion sequence. If the user-defined
conversion is specified by a constructor (12.3.1), the initial
standard conversion sequence converts the source type to the type
required by the argument of the constructor. If the user-defined
conversion is specified by a conversion function (12.3.2), the initial
standard conversion sequence converts the source type to the implicit
object parameter of the conversion function.
The second standard conversion sequence converts the result of the
user-defined conversion to the target type for the sequence.
In particular, there's an implicit conversion from floating point to integral type (§4.9 [conv.fpint]/p1):
A prvalue of a floating point type can be converted to a prvalue of an
integer type. The conversion truncates; that is, the fractional part
is discarded. The behavior is undefined if the truncated value cannot
be represented in the destination type.
For overload resolution purposes, the applicable candidates are:
Foo::operator[](std::string x) // overload
operator[](std::ptrdiff_t, const char *); // built-in
Given an argument list of types (Foo, const char [5]).
To match the first operator function, the first argument is an exact match; the second requires a user-defined conversion.
To match the second built-in function, the first argument requires a user-defined conversion sequence (the user-defined conversion to double followed by a standard conversion to std::ptrdiff_t, a floating-integral conversion). The second argument requires a standard array-to-pointer conversion (still exact match rank), which is better than a user-defined conversion.
Thus for the first argument the first function is better, while for the second argument the second function is better. We have a criss-cross situation, so overload resolution fails and the program is ill-formed.
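One way to see this in practice (a small sketch, not from the original post): if the second argument is made an exact match for the member operator, the criss-cross disappears, because the built-in candidate is no longer viable for a std::string argument:

#include <string>

struct Foo {
    operator double() { return 1; }
    int operator[](std::string x) { return 1; }
};

int main() {
    Foo()[std::string("abcd")]; // OK: only the member operator[] is viable
}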
Note that, while for the purposes of operator overload resolution a user-defined conversion sequence can have two standard conversion sequences (one before and one after the user-defined conversion), and operands of non-class type can be converted to match the candidates, if a built-in operator is selected, the second standard conversion sequence is not applied to operands of class type, and no conversion at all is applied to operands of non-class type before the operator is interpreted as the built-in operator (§13.3.1.2 [over.match.oper]/p7):
If a built-in candidate is selected by overload resolution, the
operands of class type are converted to the types of the corresponding
parameters of the selected operation function, except that the second
standard conversion sequence of a user-defined conversion sequence
(13.3.3.1.2) is not applied. Then the operator is treated as the
corresponding built-in operator and interpreted according to Clause 5.
Thus if Foo::operator[](std::string x) is removed, the compiler should report an error, though clang doesn't. This is an obvious clang bug, as it fails to reject the example given in the standard.
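The case described above boils down to the following sketch (Bar is a hypothetical name); per the quoted [over.match.oper]/p7, a conforming compiler should reject it:

struct Bar {
    operator double() { return 1; }
};

int main() {
    Bar()["abcd"]; // should be ill-formed: the second standard conversion
                   // sequence (double -> std::ptrdiff_t) is not applied
}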
Why does this work:
struct A {};
struct B {
    B(A) {}
};

void operator+(const B&, const B&) {}

int main()
{
    A a1, a2;
    a1 + a2;
}
and this does not?
struct B {
    B(const char*) {}
};

void operator+(const B&, const B&) {} // error: invalid operands of types 'const char [6]' and 'const char [6]' to binary 'operator+'

int main()
{
    "Hello" + "world";
}
Essentially, in the first example a1 and a2 both convert to B objects through the implicit conversion and use the operator+(const B&, const B&) to add.
Following from this example, I would have expected "Hello" and "world" to convert to B objects, again through the implicit constructor, and use operator+(const B&, const B&) to add to each other. Instead there is an error, which indicates the C-style strings do not attempt a user-defined conversion to B in order to add. Why is this? Is there a fundamental property that prevents this?
In your first example, overload resolution is allowed to find your operator+:
[C++14: 13.3.1.2/2]: If either operand has a type that is a class or an enumeration, a user-defined operator function might be declared that implements this operator or a user-defined conversion can be necessary to convert the operand to a type that is appropriate for a built-in operator. In this case, overload resolution is used to determine which operator function or built-in operator is to be invoked to implement the operator. [..]
[C++14: 13.3.2/1]: From the set of candidate functions constructed for a given context (13.3.1), a set of viable functions is chosen, from which the best function will be selected by comparing argument conversion sequences for the best fit (13.3.3). The selection of viable functions considers relationships between arguments and function parameters other than the ranking of conversion sequences.
[C++14: 13.3.2/2]: First, to be a viable function, a candidate function shall have enough parameters to agree in number with the arguments in the list.
If there are m arguments in the list, all candidate functions having exactly m parameters are viable.
[..]
[C++14: 13.3.2/3]: Second, for F to be a viable function, there shall exist for each argument an implicit conversion sequence (13.3.3.1) that converts that argument to the corresponding parameter of F. [..]
(You may examine the wording for "implicit conversion sequence" yourself to see that the operator+ call is permissible; the rules are too verbose to warrant verbatim reproduction here.)
However, in your second example, overload resolution is constrained to a basic arithmetic addition mechanism (one which is not defined for const char[N] or const char*), effectively prohibiting any operator+ function from being considered:
[C++14: 13.3.1.2/1]: If no operand of an operator in an expression has a type that is a class or an enumeration, the operator is assumed to be a built-in operator and interpreted according to Clause 5.
[C++14: 5.7/1]: [..] For addition, either both operands shall have arithmetic or unscoped enumeration type, or one operand shall be a pointer to a completely-defined object type and the other shall have integral or unscoped enumeration type. [..]
[C++14: 5.7/3]: The result of the binary + operator is the sum of the operands.
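A small sketch of the contrast (not part of the original question): as soon as one operand actually has class type, [C++14: 13.3.1.2/2] applies and the user-defined operator+ is found:

struct B {
    B(const char*) {}
};

void operator+(const B&, const B&) {}

int main() {
    B("Hello") + "world"; // OK: the left operand has class type, so the user-defined
                          // operator+ participates; the right operand converts via B(const char*)
}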
1. Explaining your compiler error:
The reason you can't concatenate two string literals with the '+' operator is that string literals are simply arrays of characters, and you can't concatenate two arrays.
Arrays are implicitly converted to a pointer to their first element.
Or as the standard describes it:
[conv.array]
An lvalue or rvalue of type “array of N T” or “array of unknown bound
of T” can be converted to a prvalue of type “pointer to T”. The result
is a pointer to the first element of the array.
What you are really doing in the example above is trying to add two const char pointers together, and that is not possible.
2. Why the string literals aren't implicitly converted:
Since arrays and pointers are built-in (non-class) types, you can't provide an implicit conversion operator for them the way you did in your class example.
The main thing to keep in mind is that std::string knows how to be constructed from a char[], but a char[] does not know how to become a std::string. In your example, B plays the role of std::string: you've given it the ability to be constructed from an A (or from a const char*), but the arrays themselves remain non-class types that know nothing about B.
3. Alternatives:
You can concatenate string literals by leaving out the plus operator.
"stack" "overflow"; //this will work as you indented
Alternatively, you could make "stack" a std::string and then use std::string's overloaded '+' operator:
std::string("stack") + "overflow"; //this will work
I'm wondering why there's no ambiguity in this function call:
#include <iostream>
#include <vector>

template <class T>
class C
{
public:
    typedef char c;
    typedef double d;

    int fun() { return 0; }

    static c testFun( decltype(&C::fun) ) { return c(); }
    static d testFun(...) { return d(); }
};

int main() {
    C<int>::testFun(0); // Why no ambiguity?
}
http://coliru.stacked-crooked.com/a/241ce5ab82b4a018
There's a ranking of implicit conversion sequences, as defined in [over.ics.rank], emphasis mine:
When comparing the basic forms of implicit conversion sequences...
- a standard conversion sequence (13.3.3.1.1) is a better conversion sequence than a user-defined conversion
sequence or an ellipsis conversion sequence, and
- a user-defined conversion sequence (13.3.3.1.2) is a better conversion sequence than an ellipsis conversion
sequence (13.3.3.1.3).
So we have two functions:
static char testFun( int (C::*)() ) { return char(); }
static double testFun( ... ) { return double(); }
Both functions are viable for testFun(0). The first would involve a "null member pointer conversion" as per [conv.mem], and is a standard conversion sequence. The second would match the ellipsis and be an ellipsis conversion sequence. By [over.ics.rank], the former is preferred. There's no ambiguity; one candidate is strictly better than the other.
An ambiguous overload would arise if we had two equivalent conversion sequences that the compiler could not decide between. Consider if we had something like:
static char testFun(int* ) { return 0; }
static int testFun(char* ) { return 0; }
testFun(0);
Now both conversion sequences would be equally ranked (each is a null pointer conversion to an unrelated pointer type), so neither candidate is better and the call would be ambiguous.
You have a standard conversion sequence versus an ellipsis conversion sequence. The standard says that the former is a better conversion sequence than the latter. [over.ics.rank]/p2:
a standard conversion sequence (13.3.3.1.1) is a better conversion sequence than a user-defined conversion
sequence or an ellipsis conversion sequence
A pointer-to-member conversion is a standard conversion sequence. 0 is a null pointer constant and can be converted to a pointer-to-member. [conv.mem]/p1:
A null pointer constant (4.10) can be converted to a pointer to member type; the result is the null member
pointer value of that type and is distinguishable from any pointer to member not created from a null pointer
constant. Such a conversion is called a null member pointer conversion.
Therefore the first overload is preferred.
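Incidentally, this ranking (a standard conversion beats the ellipsis) is exactly what the classic sizeof-based member-detection idiom relies on. A minimal sketch, using the hypothetical names has_fun and test:

template <class T>
struct has_fun {
    typedef char   yes; // sizeof == 1
    typedef double no;  // sizeof != 1 on common platforms

    template <class U> static yes test(decltype(&U::fun)); // usable only if U::fun exists
    template <class U> static no  test(...);               // fallback via the ellipsis

    static const bool value = sizeof(test<T>(0)) == sizeof(yes);
};

struct X { int fun(); };
struct Y { };

static_assert( has_fun<X>::value, "X has fun");
static_assert(!has_fun<Y>::value, "Y has no fun");

If U::fun exists, both overloads are viable and the null member pointer conversion (a standard conversion) wins; if it doesn't, substitution failure removes the first overload and the ellipsis fallback is chosen.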
13.3.2 Viable functions
A candidate function having fewer than m parameters is viable only if it has an ellipsis in its parameter list (8.3.5). For the purposes of overload resolution, any argument for which there is no corresponding parameter is considered to “match the ellipsis” (13.3.3.1.3).
And also
Viable functions
Given the set of candidate functions, constructed as described above, the next step of overload resolution is examining arguments and parameters to reduce the set to the set of viable functions
To be included in the set of viable functions, the candidate function must satisfy the following:
1) If there are M arguments, the candidate function that has exactly M parameters is viable
2) If the candidate function has less than M parameters, but has an ellipsis parameter, it is viable.
[...]
Overload resolution
Matching the null pointer constant 0 to the pointer-to-member parameter is a standard conversion, and a standard conversion sequence is ranked better than matching the ellipsis (...).
I have a misunderstanding about standard-conversion sequence terms. I have come across the following quote N3797 §8.5.3/5 [dcl.init.ref]:
— If the initializer expression
— is an xvalue (but not a bit-field), class prvalue, array prvalue or
function lvalue and “cv1 T1” is reference-compatible with “cv2
T2”, or
— has a class type (i.e., T2 is a class type), where T1 is not
reference-related to T2, and can be converted to an xvalue, class
prvalue, or function lvalue of type “cv3 T3”, where “cv1 T1” is
reference-compatible with “cv3 T3” (see 13.3.1.6),
[..]
In the second case, if the reference is an rvalue reference and the second standard conversion sequence of the user-defined conversion
sequence includes an lvalue-to-rvalue conversion, the program is
ill-formed.
What is the second standard conversion sequence here? I thought there was just one conversion sequence that included all the necessary conversions (user-defined as well as standard).
[over.ics.user]:
A user-defined conversion sequence consists of an initial standard conversion sequence followed by a user-defined conversion (12.3) followed by a second standard conversion sequence. […]
The second standard conversion sequence converts the result of the
user-defined conversion to the target type for the sequence.
E.g. for
struct A
{
    operator int();
};

void foo(short);

foo(A());
The second standard conversion sequence converts the int prvalue to the parameter of foo, that is, short. The first standard conversion sequence converts the A object to A (the implicit object parameter of operator int), which constitutes an identity conversion. For references, the rule quoted by you provides an example:
struct X {
    operator B();
    operator int&();
} x;

int&& rri2 = X(); // error: lvalue-to-rvalue conversion applied to
                  // the result of operator int&
Here, operator int& is selected by overload resolution. The return value is an lvalue reference to int. For that to initialize the reference, an lvalue-to-rvalue conversion would be necessary, and that's precisely what the quote prevents: Lvalues shall not be bound to rvalue references.
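Contrast this with a conversion function that returns a prvalue (a small sketch, not the standard's example): no lvalue is involved, so the binding is fine:

struct A {
    operator int(); // returns a prvalue int, not an lvalue reference
};

int&& r = A(); // OK: the int prvalue binds to the rvalue reference;
               // no lvalue-to-rvalue conversion is needed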
What the standard conversion term means is covered in the following clause of the C++ Standard:
4 Standard conversions [conv]
Standard conversions are implicit conversions with built-in meaning. Clause 4 enumerates the full set of such conversions. A standard conversion sequence is a sequence of standard conversions in the following order:
— Zero or one conversion from the following set: lvalue-to-rvalue conversion, array-to-pointer conversion,
and function-to-pointer conversion.
— Zero or one conversion from the following set: integral promotions, floating point promotion, integral conversions, floating point conversions, floating-integral conversions, pointer conversions, pointer to member conversions, and boolean conversions.
— Zero or one qualification conversion.
[ Note: A standard conversion sequence can be empty, i.e., it can consist of no conversions. — end note ]
A standard conversion sequence will be applied to an expression if necessary to convert it to a required
destination type.
In other words, the standard conversion is a set of built-in rules the compiler can apply when converting one type to another. Those built-in conversions include:
No conversions
Lvalue-to-rvalue conversion
Array-to-pointer conversion
Function-to-pointer conversion
Qualification conversions
Integral promotions
Floating point promotion
Integral conversions
Floating point conversions
Floating-integral conversions
Pointer conversions
Pointer to member conversions
Boolean conversions
A standard conversion sequence can appear twice within a user-defined conversion sequence - before and/or after the user-defined conversion:
§ 13.3.3.1.2 User-defined conversion sequences [over.ics.user]
A user-defined conversion sequence consists of an initial standard conversion sequence followed by a user-defined conversion (12.3) followed by a second standard conversion sequence. If the user-defined conversion is specified by a constructor (12.3.1), the initial standard conversion sequence converts the source type to the type required by the argument of the constructor. If the user-defined conversion is specified by a conversion function (12.3.2), the initial standard conversion sequence converts the source type to the implicit object parameter of the conversion function.
The second standard conversion sequence converts the result of the user-defined conversion to the target type for the sequence. Since an implicit conversion sequence is an initialization, the special rules for initialization by user-defined conversion apply when selecting the best user-defined conversion for a user-defined conversion sequence (see 13.3.3 and 13.3.3.1).
That said, for the following conversion:
A a;
B b = a;
the compiler will search for a converting constructor in B that can take an instance of A (the source type), possibly after some initial standard conversion sequence; it then performs the user-defined conversion through the selected constructor and applies another standard conversion - the second standard conversion sequence - to convert the result of the user-defined conversion to the target type;
or:
the compiler will search for a conversion function in A that is callable after some initial standard conversion sequence (applied to the implicit object argument) and whose result can then be converted by another standard conversion - the second standard conversion sequence - to the target type B.
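For comparison, here is a quick sketch of the constructor path (with a hypothetical B); the tangible example below then illustrates the conversion-function path:

struct B {
    B(int); // converting constructor
};

B b = 3.14; // initial standard conversion: double -> int (floating-integral conversion)
            // user-defined conversion: B(int)
            // second standard conversion: identity (B -> B)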
As a tangible example let's consider the below conversion:
struct A
{
    operator int() const;
};

A a;
bool b = a;
The compiler considers the following user-defined conversion sequence:
Initial standard conversion: Qualification conversion of A* to const A* to call const-qualified operator int() const.
User-defined conversion: conversion of A to int, through user-defined conversion function.
Second standard conversion: Boolean conversion of int to bool.
The case you are asking about can be split as follows:
struct A
{
    operator int&();
};

int&& b = A();
The source type is A.
The target type is int&&.
The user-defined conversion sequence is the conversion of A to int&&.
The initial standard conversion sequence is No conversion at all.
The user-defined conversion is the conversion of A to int&.
The second standard conversion sequence (converting the result of the user-defined conversion to the target type), which is part of the overall user-defined conversion sequence, would here have to bind the int lvalue returned by operator int&() to the rvalue reference - and that would require an lvalue-to-rvalue conversion. The binding is otherwise possible, since int is reference-compatible with int. However, according to the statement below, §8.5.3 [dcl.init.ref]/p5:
[...] if the reference is an rvalue reference and the second standard conversion sequence of the user-defined conversion sequence includes an lvalue-to-rvalue conversion, the program is ill-formed.
that lvalue-to-rvalue conversion is not allowed in the overall user-defined conversion sequence, so the initialization is ill-formed.
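To round this off (a sketch building on the same A): lvalue references are not affected by that rule, because binding to the int lvalue returned by operator int&() needs no lvalue-to-rvalue conversion:

struct A {
    operator int&();
};

int&       r1 = A(); // OK: binds directly to the lvalue returned by operator int&()
const int& r2 = A(); // OK as well
int&&      r3 = A(); // ill-formed: would require an lvalue-to-rvalue conversion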
The question arose while I was researching the answer to this SO question. Consider the following code:
struct A {
    operator char() const { return 'a'; }
    operator int() const { return 10; }
};

struct B {
    void operator<< (int) { }
};

int main()
{
    A a;
    B b;
    b << a;
}
The conversion of a to int can be either via a.operator char() followed by an integral promotion, or a.operator int() followed by an identity conversion (i.e., no conversion at all). The standard says that (§13.3.3.1 [over.best.ics]/p10, footnote omitted, bolding mine; all quotes are from N3936):
If several different sequences of conversions exist that each convert
the argument to the parameter type, the implicit conversion sequence
associated with the parameter is defined to be the unique conversion
sequence designated the ambiguous conversion sequence. For the
purpose of ranking implicit conversion sequences as described in
13.3.3.2, the ambiguous conversion sequence is treated as a user-defined sequence that is indistinguishable from any other
user-defined conversion sequence. If a function that uses the
ambiguous conversion sequence is selected as the best viable function,
the call will be ill-formed because the conversion of one of the
arguments in the call is ambiguous.
Here, B::operator<<(int) is the only viable candidate - and hence is the best viable candidate, even though the conversion sequence for the parameter is the ambiguous conversion sequence. According to the bolded sentence, then, the call should be ill-formed because "the conversion of one of the arguments in the call is ambiguous".
Yet no compiler that I tested (g++, clang, and MSVC) actually reports an error, which makes sense because after the function to call is selected through overload resolution, the function's "parameter (8.3.5) shall be initialized (8.5, 12.8, 12.1) with its corresponding
argument" (§5.2.2 [expr.call]/p4). This initialization is copy-initialization (§8.5 [dcl.init]/p15), and according to §8.5 [dcl.init]/p17, results in a new round of overload resolution to determine the conversion function to use:
The semantics of initializers are as follows. The destination type
is the type of the object or reference being initialized and the
source type is the type of the initializer expression. If the initializer is not a single (possibly parenthesized) expression, the
source type is not defined.
[...]
If the destination type is a (possibly cv-qualified) class type: [...]
Otherwise, if the source type is a (possibly cv-qualified) class type, conversion functions are considered. The applicable conversion
functions are enumerated (13.3.1.5), and the best one is chosen
through overload resolution (13.3). The user-defined conversion so
selected is called to convert the initializer expression into the
object being initialized. If the conversion cannot be done or is
ambiguous, the initialization is ill-formed.
[...]
And in this round of overload resolution, there is a tiebreaker in §13.3.3 [over.match.best]/p1:
a viable function F1 is defined to be a better function than another
viable function F2 if for all arguments i, ICSi(F1) is not a worse
conversion sequence than ICSi(F2), and then
for some argument j, ICSj(F1) is a better conversion sequence than ICSj(F2), or, if not that,
the context is an initialization by user-defined conversion (see 8.5, 13.3.1.5, and 13.3.1.6) and the standard conversion sequence from the return type of F1 to the destination type (i.e., the type of the
entity being initialized) is a better conversion sequence than the
standard conversion sequence from the return type of F2 to the
destination type.
(Example and remainder of the list omitted)
Since the standard conversion sequence from int to int (Exact Match rank) is better than the standard conversion sequence from char to int (Promotion rank), the first beats the second, and there should be no ambiguity - the conversion defined by operator int() will be used for the initialization, which then contradicts the sentence in §13.3.3.1 [over.best.ics]/p10 that says the function call will be ill-formed because of ambiguity.
Is there anything wrong in the above analysis, or is that sentence a bug in the standard?
When selecting the best user-defined conversion for a user-defined conversion sequence, we have an overload candidate set.
§13.3.3/p1 says:
Define ICSi(F) as follows:
[...]
let ICSi(F) denote the implicit conversion sequence that converts the i-th argument in the list to the
type of the i-th parameter of viable function F. 13.3.3.1 defines the implicit conversion sequences and
13.3.3.2 defines what it means for one implicit conversion sequence to be a better conversion sequence
or worse conversion sequence than another.
Given these definitions, a viable function F1 is defined to be a better function than another viable function
F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then
— [...]
— the context is an initialization by user-defined conversion (see 8.5, 13.3.1.5, and 13.3.1.6) and the
standard conversion sequence from the return type of F1 to the destination type (i.e., the type of the
entity being initialized) is a better conversion sequence than the standard conversion sequence from
the return type of F2 to the destination type.
This applies since
§13.3.3.1.2/p2
2 The second standard conversion sequence converts the result of the user-defined conversion to the target
type for the sequence. Since an implicit conversion sequence is an initialization, the special rules for
initialization by user-defined conversion apply when selecting the best user-defined conversion for a user-defined
conversion sequence (see 13.3.3 and 13.3.3.1).
and therefore the conversion sequence involving operator int is selected as best match.
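The same tiebreaker makes the corresponding plain initialization unambiguous as well; a quick sketch reusing the A from the question:

A a;
int i = a; // operator int() is chosen: the second standard conversion int -> int
           // (Exact Match) beats char -> int (Promotion)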
Finally I would rephrase §13.3.3.1/p10 as
If several different sequences of conversions exist that each convert the argument to the parameter type and it was not possible to determine a best candidate,
the implicit conversion sequence associated with the parameter is defined to be the unique conversion sequence
designated the ambiguous conversion sequence. For the purpose of ranking implicit conversion sequences
as described in 13.3.3.2, the ambiguous conversion sequence is treated as a user-defined sequence that is
indistinguishable from any other user-defined conversion sequence134. If a function that uses the ambiguous
conversion sequence is selected as the best viable function, the call will be ill-formed because the conversion
of one of the arguments in the call is ambiguous.
The standard appears to provide two rules for distinguishing between implicit conversion sequences that involve user-defined conversion operators:
(C++11) 13.3.3 Best viable function [over.match.best]
[...] a viable function F1 is defined to be a better function than another viable function
F2 if [...]
the context is an initialization by user-defined conversion (see 8.5, 13.3.1.5, and 13.3.1.6) and the
standard conversion sequence from the return type of F1 to the destination type (i.e., the type of the
entity being initialized) is a better conversion sequence than the standard conversion sequence from
the return type of F2 to the destination type.
13.3.3.2 Ranking implicit conversion sequences [over.ics.rank]
3 - Two implicit conversion sequences of the same form are indistinguishable conversion sequences unless one of
the following rules applies: [...]
User-defined conversion sequence U1 is a better conversion sequence than another user-defined conversion sequence U2 if they contain the same user-defined conversion function or constructor or aggregate
initialization and the second standard conversion sequence of U1 is better than the second standard
conversion sequence of U2.
As I understand it, 13.3.3 allows the compiler to distinguish between different user-defined conversion operators, while 13.3.3.2 allows the compiler to distinguish between different functions (overloads of some function f) that each require a user-defined conversion in their arguments (see my sidebar to "Given the following code (in GCC 4.3), why is the conversion to reference called twice?").
Are there any other rules that can distinguish between user-defined conversion sequences? The answer at https://stackoverflow.com/a/1384044/567292 indicates that 13.3.3.2:3 can distinguish between user-defined conversion sequences based on the cv-qualification of the implicit object parameter (to a conversion operator) or of the single non-default parameter to a constructor or aggregate initialisation, but I don't see how that can be relevant given that that would require comparison between the first standard conversion sequences of the respective user-defined conversion sequences, which the standard doesn't appear to mention.
Supposing that S1 is better than S2, where S1 is the first standard conversion sequence of U1 and S2 is the first standard conversion sequence of U2, does it follow that U1 is better than U2? In other words, is this code well-formed?
struct A {
    operator int();
    operator char() const;
} a;

void foo(double);

int main() {
    foo(a);
}
g++ (4.5.1), Clang (3.0) and Comeau (4.3.10.1) accept it, preferring the non-const-qualified A::operator int(), but I'd expect it to be rejected as ambiguous and thus ill-formed. Is this a deficiency in the standard or in my understanding of it?
The trick here is that converting from a class type to a non-class type doesn't actually rank any user-defined conversions as implicit conversion sequences.
struct A {
    operator int();
    operator char() const;
} a;

void foo(double);

int main() {
    foo(a);
}
In the expression foo(a), foo obviously names a non-overloaded non-member function. The call requires copy-initializing (8.5p14) the function parameter, of type double, using the single expression a, which is an lvalue of class type A.
Since the destination type double is not a cv-qualified class type but the source type A is, the candidate functions are defined by section 13.3.1.5, with S=A and T=double. Only conversion functions in class A and any base classes of A are considered. A conversion function is in the set of candidates if:
It is not hidden in class A, and
It is not explicit (since the context is copy-initialization), and
A standard conversion sequence can convert the function's return type (not including any reference qualifiers) to the destination type double.
Okay, both conversion functions qualify, so the candidate functions are
A::operator int(); // F1
A::operator char() const; // F2
Using the rules from 13.3.1p4, each function has the implicit object parameter as the only thing in its parameter list. F1's parameter list is "(lvalue reference to A)" and F2's parameter list is "(lvalue reference to const A)".
Next we check that the functions are viable (13.3.2). Each function has one type in its parameter list, and there is one argument. Is there an implicit conversion sequence for each argument/parameter pair? Sure:
ICS1(F1): Bind the implicit object parameter (type lvalue reference to A) to expression a (lvalue of type A).
ICS1(F2): Bind the implicit object parameter (type lvalue reference to const A) to expression a (lvalue of type A).
Since there's no derived-to-base conversion going on, these reference bindings are considered special cases of the identity conversion (13.3.3.1.4p1). Yup, both functions are viable.
Now we have to determine if one implicit conversion sequence is better than the other. This falls under the fifth sub-item in the big list in 13.3.3.2p3: both are reference bindings to the same type except for top-level cv-qualifiers. Since the referenced type for ICS1(F2) is more cv-qualified than the referenced type for ICS1(F1), ICS1(F1) is better than ICS1(F2).
Therefore F1, or A::operator int(), is the best viable function. And no user-defined conversion sequences (in the strict sense of an ICS composed of SCS + (converting constructor or conversion function) + SCS) ever needed to be compared.
Now if foo were overloaded, user-defined conversions on the argument a would need to be compared. So then the user-defined conversion (identity + A::operator int() + int to double) would be compared to other implicit conversion sequences.
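As a contrast (a sketch, not part of the original question): if both conversion functions were const-qualified, the implicit object parameter could no longer break the tie, and the second standard conversion sequences (int -> double and char -> double, both floating-integral conversions) rank equally, so the call becomes ambiguous:

struct A2 {
    operator int() const;
    operator char() const;
} a2;

void foo(double);

int main() {
    foo(a2); // ambiguous: neither conversion function is better
}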