Consider the following code (https://godbolt.org/z/s17aoczj6):
template<class T>
class Wrapper {
public:
    explicit Wrapper(T t): _value(t) {}

    template<class S = T>
    operator T() { return _value; }

private:
    T _value;
};

auto main() -> int
{
    auto i = int{0};
    auto x = Wrapper<int>(i);
    return x + i;
}
It compiles with clang, but not with gcc (all versions).
It works in gcc when removing the template<class S = T>.
Is this code ill-formed or is one compiler wrong?
The gcc error is error: no match for 'operator+' (operand types are 'Wrapper<int>' and 'int') on the line return x + i;.
I want a conversion to T. The template is not necessary in this example, but in a non-minimal example I would like to use SFINAE and therefore need a template here.
When you have x + i, since x is of class type, overload resolution kicks in:
The specific details from the standard ([over.match.oper]p2)
If either operand has a type that is a class or an enumeration, a user-defined operator function might be declared that implements this operator or a user-defined conversion can be necessary to convert the operand to a type that is appropriate for a built-in operator.
In this case, overload resolution is used to determine which operator function or built-in operator is to be invoked to implement the operator.
The built-in candidates are defined in paragraph 3.3:
For the operator ,, the unary operator &, or the operator ->, the built-in candidates set is empty. For all other operators, the built-in candidates include all of the candidate operator functions defined in [over.built] that, compared to the given operator,
have the same operator name, and
accept the same number of operands, and
accept operand types to which the given operand or operands can be converted according to [over.best.ics], and
do not have the same parameter-type-list as any non-member candidate that is not a function template specialization.
According to [over.built]p13, the built-in candidate functions include:
For every pair of types L and R, where each of L and R is a floating-point or promoted integral type, there exist candidate operator functions of the form
LR operator*(L, R);
...
LR operator+(L, R);
...
bool operator>=(L, R);
where LR is the result of the usual arithmetic conversions ([expr.arith.conv]) between types L and R.
So there is a built-in function int operator+(int, int).
As to what possible implicit conversion sequences there are:
[over.best.ics]p3:
A well-formed implicit conversion sequence is one of the following forms:
a standard conversion sequence,
a user-defined conversion sequence, or
an ellipsis conversion sequence.
And here a user-defined conversion sequence is used, defined by [over.ics.user]:
A user-defined conversion sequence consists of an initial standard conversion sequence followed by a user-defined conversion ([class.conv]) followed by a second standard conversion sequence.
(Here, both standard conversion sequences are empty, and your user-defined conversion to int can be used.)
So when checking if int operator+(int, int) is a built-in candidate, it is since there exists a conversion between your class type and int.
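That conversion is easy to confirm in isolation. A minimal sketch using the Wrapper from the question (which both compilers accept, showing that GCC's problem is specific to the built-in candidate set, not the conversion itself):

auto main() -> int
{
    auto w = Wrapper<int>(1);
    int y = w; // copy-initialization uses operator T() (with S defaulted to T)
    return y;
}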
As for the actual overload resolution, from [over.match.oper]:
The set of candidate functions for overload resolution for some operator # is the union of the member candidates, the non-member candidates, the built-in candidates, and the rewritten candidates for that operator #.
The argument list contains all of the operands of the operator.
The best function from the set of candidate functions is selected according to [over.match.viable] and [over.match.best].
And int operator+(int, int) is obviously the best match, since it requires no conversion for the second argument and only a user-defined conversion for the first, so it beats other candidates like long operator+(long, int) and long operator+(int, long).
You can see that the problem is an empty built-in candidate set, since the GCC error reports that there are no viable candidates at all. If you instead had:
auto add(int a, int b) -> int
{
    return a + b;
}

auto main() -> int
{
    auto i = int{0};
    auto x = Wrapper<int>(i);
    return add(x, i);
}
it now compiles fine with GCC since ::add(int, int) is considered a candidate, even though it should be no different from the built-in operator int operator+(int, int).
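This also suggests a workaround if you need GCC to accept the original code: perform the user-defined conversion explicitly, so that no operator overload resolution on a class type is involved. A sketch:

auto main() -> int
{
    auto i = int{0};
    auto x = Wrapper<int>(i);
    return static_cast<int>(x) + i; // conversion made explicit; the + is plain int + int
}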
If you instead had:
template<class S = T>
operator S() { return _value; } // Can convert to any type
Clang now has the error:
<source>:16:14: error: use of overloaded operator '+' is ambiguous (with operand types 'Wrapper<int>' and 'int')
return x + i;
~ ^ ~
<source>:16:14: note: built-in candidate operator+(float, int)
<source>:16:14: note: built-in candidate operator+(double, int)
<source>:16:14: note: built-in candidate operator+(long double, int)
<source>:16:14: note: built-in candidate operator+(__float128, int)
<source>:16:14: note: built-in candidate operator+(int, int)
<source>:16:14: note: built-in candidate operator+(long, int)
<source>:16:14: note: built-in candidate operator+(long long, int)
<source>:16:14: note: built-in candidate operator+(__int128, int)
<source>:16:14: note: built-in candidate operator+(unsigned int, int)
<source>:16:14: note: built-in candidate operator+(unsigned long, int)
<source>:16:14: note: built-in candidate operator+(unsigned long long, int)
<source>:16:14: note: built-in candidate operator+(unsigned __int128, int)
(Note this error message excludes conversions of the second argument, but since these will never be chosen, they are probably not considered as an optimisation)
And GCC still says there are no candidates at all, even though all of these built-in candidates exist.
template<typename Integral>
struct IntegralWrapper {
    Integral _value;

    IntegralWrapper() = default;
    IntegralWrapper(Integral value)
        : _value(value) {}

    operator Integral() const {
        return _value;
    }
    operator bool() const = delete;
};

int main() {
    IntegralWrapper<int> i1, i2;
    i1 * i2;
}
It compiles successfully with gcc, but fails with MSVC and clang, with the error overloaded operator '*' is ambiguous. The problem comes from the explicitly deleted operator bool.
https://godbolt.org/z/nh6M11d98
Which side (gcc or clang/MSVC) is right? And why?
First of all: Deleting a function does not prevent it from being considered in overload resolution (with some minor exceptions not relevant here). The only effect of = delete is that the program will be ill-formed if the conversion function is chosen by overload resolution.
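A minimal sketch of that principle (hypothetical type, unrelated to the wrapper above):

struct Y {
    void g(int) = delete;
    void g(long) {}
};

int main() {
    Y{}.g(0); // error: overload resolution still selects the deleted g(int),
              // even though g(long) would be callable
}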
For the overload resolution:
There are candidate built-in overloads for the * operator for all pairs of promoted arithmetic types.
So, instead of using * we could also consider
auto mul(int a, int b) { return a*b; } // (1)
auto mul(long a, long b) { return a*b; } // (2)
// further overloads, also with non-matching parameter types
mul(i1, i2);
Notably there are no overloads including bool, since bool is promoted to int.
For (1) the chosen conversion function for both arguments is operator int() const instantiated from operator Integral() const since conversion from int to int is better than bool to int. (Or at least that seems to be the intent, see e.g. https://github.com/cplusplus/draft/issues/2288 and In overload resolution, does selection of a function that uses the ambiguous conversion sequence necessarily result in the call being ill-formed?).
For (2) however, neither the conversion from int to long nor the one from bool to long is better than the other. As a result, the implicit conversion sequences will, for the purpose of overload resolution, be the ambiguous conversion sequence. This conversion sequence is considered distinct from all other user-defined conversion sequences.
When then comparing which of the overloads is the better one, neither can be considered better than the other, because both use user-defined conversion sequences for both parameters, but the used conversion sequences are not comparable.
As a result overload resolution should fail. If I completed the list of built-in operator overloads I started above, nothing would change. The same logic applies to all of them.
So MSVC and Clang are correct to reject and GCC is wrong to accept. Interestingly, with the explicit example of functions I gave above, GCC does reject as expected.
To disallow implicit conversions to bool you could use a constrained conversion function template, which will not allow for another standard conversion sequence after the user-defined conversion:
template<std::same_as<Integral> T>
operator T() const { return _value; }

This will allow conversions only to Integral (int in the example). If you can't use C++20, you will need to replace the concept with SFINAE via std::enable_if.
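For reference, a pre-C++20 sketch of that replacement, assuming <type_traits> is included (C++11 or later; the default template argument does the constraining):

template <typename T,
          typename std::enable_if<std::is_same<T, Integral>::value, int>::type = 0>
operator T() const { return _value; }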
Consider the struct S with two operator== overloads that share the && ref-qualifier but differ in const qualification:
struct S {
    bool operator==(const S&) && {
        return true;
    }
    bool operator==(const S&) const && {
        return true;
    }
};
If I compare the two S with operator==:
S{} == S{};
gcc and msvc accept this code, clang rejects it with:
<source>:14:7: error: use of overloaded operator '==' is ambiguous (with operand types 'S' and 'S')
S{} == S{};
~~~ ^ ~~~
Why does clang think there is an ambiguous overload resolution here? Shouldn't the non-const one be the best candidate in this case?
Similarly, if I compare two S with the synthesized operator!=:
S{} != S{};
gcc still accepts this code, but msvc and clang don't:
<source>:14:7: error: use of overloaded operator '!=' is ambiguous (with operand types 'S' and 'S')
S{} != S{};
~~~ ^ ~~~
It seems weird that the synthesized operator!= suddenly causes the ambiguity for msvc. Which compiler is right?
The example would be unambiguous in C++17. C++20 brings a change:
[over.match.oper]
For a unary operator # with an operand of type cv1 T1, and for a binary operator # with a left operand of type cv1 T1 and a right operand of type cv2 T2, four sets of candidate functions, designated member candidates, non-member candidates, built-in candidates, and rewritten candidates, are constructed as follows:
...
For the operator ,, the unary operator &, or the operator ->, the built-in candidates set is empty. For all other operators, the built-in candidates include all of the candidate operator functions defined in [over.built] that, compared to the given operator,
have the same operator name, and
accept the same number of operands, and
accept operand types to which the given operand or operands can be converted according to [over.best.ics], and
do not have the same parameter-type-list as any non-member candidate that is not a function template specialization.
The rewritten candidate set is determined as follows:
...
For the equality operators, the rewritten candidates also include a synthesized candidate, with the order of the two parameters reversed, for each non-rewritten candidate for the expression y == x.
Thus, the rewritten candidate set includes these (the first parameter of each is the implicit object parameter):

(S&&, const S&);       // 1
(const S&&, const S&); // 2
// candidates that match with reversed arguments
(const S&, S&&);       // 1 reversed
(const S&, const S&&); // 2 reversed
Overload 1 is a better match than 2, but the synthesised reversed overload of 1 is ambiguous with the original non-reversed overload: each of the two binds one argument to the better reference (S&&) and the other to the worse one (const S&). Note that this is ambiguous even if overload 2 doesn't exist.
Thus, Clang is correct.
This is also covered by the informative compatibility annex:
Affected subclause: [over.match.oper]
Change: Equality and inequality expressions can now find reversed and rewritten candidates.
Rationale: Improve consistency of equality with three-way comparison and make it easier to write the full complement of equality operations.
Effect on original feature: Equality and inequality expressions between two objects of different types, where one is convertible to the other, could invoke a different operator. Equality and inequality expressions between two objects of the same type could become ambiguous.
struct A {
    operator int() const;
};
bool operator==(A, int);    // #1
// #2 is built-in candidate: bool operator==(int, int);
// #3 is built-in candidate: bool operator!=(int, int);

int check(A x, A y) {
    return (x == y) +  // ill-formed; previously well-formed
           (10 == x) + // calls #1, previously selected #2
           (10 != x);  // calls #1, previously selected #3
}
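For what it's worth, the usual way to avoid the ambiguity in C++20 is to provide a single const-qualified operator==: the reversed candidate then has exactly the same parameter types as the original, and the tie-break in [over.match.best] prefers the non-rewritten candidate. A sketch:

struct S {
    bool operator==(const S&) const {
        return true;
    }
};

int main() {
    S{} == S{}; // OK: the reversed candidate ties, and non-rewritten candidates win the tie-break
    S{} != S{}; // OK: rewritten in terms of ==
}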
I am trying to use custom operators in C++ for a project I am working on. This project uses the ROCm/HIP stack (so, under the hood, the clang compiler).
Here's the error message:
src/zlatrd.cpp:359:32: error: use of overloaded operator '*' is ambiguous (with operand types 'magmaDoubleComplex' (aka 'hip_complex_number<double>') and 'float')
alpha = tau[i] * -0.5f * value;
~~~~~~ ^ ~~~~~
./include/magma_operators.h:190:1: note: candidate function
operator * (const magmaDoubleComplex a, const double s)
^
./include/magma_operators.h:183:1: note: candidate function
operator * (const magmaDoubleComplex a, const magmaDoubleComplex b)
^
./include/magma_operators.h:437:1: note: candidate function
operator * (const magmaFloatComplex a, const float s)
^
./include/magma_operators.h:430:1: note: candidate function
operator * (const magmaFloatComplex a, const magmaFloatComplex b)
^
It seems to me that it is not ambiguous; it should select the third candidate function, as the argument is a float.
Here is the type definition for the hip_complex_number template:
template <typename T>
struct hip_complex_number
{
    T x, y;

    template <typename U>
    hip_complex_number(U a, U b)
        : x(a)
        , y(b)
    {
    }

    template <typename U>
    hip_complex_number(U a)
        : x(a)
        , y(0)
    {
    }

    hip_complex_number()
        : x(0)
        , y(0)
    {
    }
};
I notice it has an implicit constructor that will convert a float, but I assumed that, given a candidate function that matches the type exactly (not including the const modifier), the compiler would obviously select that overload over those which require an implicit conversion.
EDIT: Also, I know that by default C/C++ convert from float/double to each other if the function is defined in that way, so 'matches the type exactly' was definitely not the right wording.
Can someone explain why C++ thinks this is ambiguous?
EDIT: People have asked for the definition of magmaFloatComplex, which is hip_complex_number<float>
Please note that I don't know anything about the library that these types are from. I will explain the ambiguity purely based on the information in the question.
The first and third overload are ambiguous.
In the overload operator * (const magmaDoubleComplex a, const double s) a floating-point promotion from float to double is required in the second argument.
In the overload operator * (const magmaFloatComplex a, const float s) a user-defined conversion to an unrelated type from magmaDoubleComplex to magmaFloatComplex is required. This conversion is possible, because of the non-explicit converting constructor
template <typename U>
hip_complex_number(U a)
The corresponding other parameter in each case needs no conversion aside from, potentially, an lvalue-to-rvalue conversion or copy-initialization from the same type, both of which count as an exact match.
An exact match is better than either a user-defined conversion to an unrelated type or a floating-point promotion, meaning that each overload has one parameter with a better conversion than the other overload's, and one with a worse conversion.
Overload resolution succeeds only if one overload is at least as good as every other viable overload in all parameters, and better in at least one. Therefore the two mentioned overloads are ambiguous here.
The second overload has an exact match in the first argument and requires a user-defined conversion to an unrelated type in the second argument, which is again possible because of the converting constructor mentioned above. However, the floating-point promotion of the first overload is considered better than a user-defined conversion to an unrelated type, so the second overload loses against the first one in overload resolution, but would be ambiguous with the third one as well.
The fourth overload is worse than all the others, because it requires user-defined conversions to unrelated types in both parameters.
Note that if overload 3 were selected, as you expected in your question, it would result in an error, because the converting constructor chosen for magmaFloatComplex would try to initialize the x member, which is of type float, from a magmaDoubleComplex, which (at least based on your shown code) doesn't have a conversion operator to float.
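If it helps to see the criss-cross without the library, here is a hypothetical, stripped-down sketch with the same shape (all names invented):

struct FloatC;

struct DoubleC {
    template <typename U> DoubleC(U) {} // implicit converting constructor
};

struct FloatC {
    template <typename U> FloatC(U) {}  // implicit converting constructor
};

int operator*(DoubleC, double) { return 1; } // like overload 1
int operator*(FloatC, float)   { return 3; } // like overload 3

int main() {
    DoubleC d(1.0);
    // d * 0.5f; // uncommenting gives "ambiguous":
    //           // (exact, promotion) vs (user-defined, exact)
    return 0;
}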
operator bool breaks the use of operator< in the following example. Can anyone explain why bool is just as relevant in the if (a < 0) expression as the specific operator<, and whether there is a workaround?
struct Foo {
    Foo() {}
    Foo(int x) {}
    operator bool() const { return false; }
    friend bool operator<(const Foo& a, const Foo& b) {
        return true;
    }
};

int main() {
    Foo a, b;
    if (a < 0) {
        a = 0;
    }
    return 1;
}
When I compile, I get:
g++ foo.cpp
foo.cpp: In function 'int main()':
foo.cpp:18:11: error: ambiguous overload for 'operator<' (operand types are 'Foo' and 'int')
if (a < 0) {
^
foo.cpp:18:11: note: candidate: operator<(int, int) <built-in>
foo.cpp:8:17: note: candidate: bool operator<(const Foo&, const Foo&)
friend bool operator<(const Foo& a, const Foo& b)
The problem here is that C++ has two options to deal with a < 0 expression:
Convert a to bool, and compare the result to 0 with built-in operator < (one conversion)
Convert 0 to Foo, and compare the results with < that you defined (one conversion)
Both approaches are equivalent to the compiler, so it issues an error.
You can make this explicit by removing the conversion in the second case:
if (a < Foo(0)) {
...
}
The important points are:
First, there are two relevant overloads of operator <.
operator <(const Foo&, const Foo&). Using this overload requires a user-defined conversion of the literal 0 to Foo using Foo(int).
operator <(int, int). Using this overload requires converting Foo to bool with the user-defined operator bool(), followed by a promotion to int (this is, in standardese, different from a conversion, as has been pointed out by Bo Persson).
The question here is: From whence does the ambiguity arise? Certainly, the first call, which requires only a user-defined conversion, is more sensible than the second, which requires a user-defined conversion followed by a promotion?
But that is not the case. The standard assigns a rank to each candidate. However, there is no rank for "user-defined conversion followed by a promotion". This has the same rank as only using a user-defined conversion. Simply (but informally) put, the ranking sequence looks a bit like this:
exact match
(only) promotion required
(only) implicit conversion required (including "unsafe" ones inherited from C such as float to int)
user-defined conversion required
(Disclaimer: As mentioned, this is informal. It gets significantly more complex when multiple arguments are involved, and I also didn't mention references or cv-qualification. This is just intended as a rough overview.)
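A tiny illustration of how the middle ranks interact (hypothetical overloads):

void r(int) {}
void r(long) {}

int main() {
    short s = 1;
    r(s); // calls r(int): short -> int is a promotion,
          // short -> long is a conversion, and promotions rank higher
}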
So this, hopefully, explains why the call is ambiguous. Now for the practical part of how to fix this. Almost never does someone who provides operator bool() want it to be implicitly used in expressions involving integer arithmetic or comparisons. In C++98, there were obscure workarounds, ranging from std::basic_ios<CharT, Traits>::operator void * to "improved" safer versions involving pointers to members or incomplete private types. Fortunately, C++11 introduced a more readable and consistent way of preventing integer promotion after implicit uses of operator bool(), which is to mark the operator as explicit. This will remove the operator <(int, int) overload entirely, rather than just "demoting" it.
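Applied to the example, only the conversion operator changes (a sketch):

struct Foo {
    Foo() {}
    Foo(int x) {}
    explicit operator bool() const { return false; }
    friend bool operator<(const Foo& a, const Foo& b) {
        return true;
    }
};

int main() {
    Foo a;
    if (a < 0) { // OK now: the built-in candidates are gone, so only
                 // operator<(const Foo&, const Foo&) is viable
    }
    if (a) {}    // still works: contextual conversion may use the explicit operator
    return 1;
}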
As others have mentioned, you can also mark the Foo(int) constructor as explicit. This will have the converse effect of removing the operator <(const Foo&, const Foo&) overload.
A third solution would be to provide additional overloads, e.g.:
operator <(int, const Foo&)
operator <(const Foo&, int)
The latter, in this example, will then be preferred over the above-mentioned overloads as an exact match, even if you did not introduce explicit. The same goes e.g. for
operator <(const Foo&, long long)
which would be preferred over operator <(const Foo&, const Foo&) in a < 0 because its use requires only a standard integral conversion (int to long long), which still ranks above a user-defined conversion.
Because the compiler cannot choose between bool operator<(const Foo&, const Foo&) and the built-in operator<(int, int), both of which fit in this situation.
In order to fix the issue, make the second constructor explicit:
struct Foo
{
    Foo() {}
    explicit Foo(int x) {}
    operator bool() const { return false; }
    friend bool operator<(const Foo& a, const Foo& b)
    {
        return true;
    }
};
Edit:
Ok, at last I got the real point of the question :) The OP asks why the compiler offers operator<(int, int) as a candidate, though "multi-step conversions are not allowed".
Answer:
Yes, in order to call operator<(int, int), object a needs to be converted Foo -> bool -> int. But the C++ Standard does not actually say that "multi-step conversions are illegal".
§ 12.3.4 [class.conv]
At most one user-defined conversion (constructor or conversion function) is implicitly applied to a single value.
bool to int is not a user-defined conversion, hence it is legal, and the compiler has every right to choose operator<(int, int) as a candidate.
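A one-line illustration of that rule (hypothetical type):

struct B {
    operator bool() const { return true; }
};

int main() {
    int i = B{}; // OK: one user-defined conversion (B -> bool),
                 // followed by a standard bool -> int promotion
    return i;
}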
This is exactly what the compiler tells you.
One approach for solving if (a < 0) for the compiler is to use the Foo(int x) constructor you've provided to create an object from 0.
The second one is to use the operator bool conversion and compare the result against the int (a promotion). You can read more about it in the Numeric promotions section.
Hence, it is ambiguous for the compiler and it cannot decide which way you want it to go.
Consider I have the following minimal code:
#include <boost/type_traits.hpp>

template<typename ptr_t>
struct TData
{
    typedef typename boost::remove_extent<ptr_t>::type value_type;

    ptr_t data;

    value_type & operator [] ( size_t id ) { return data[id]; }

    operator ptr_t & () { return data; }
};

int main( int argc, char ** argv )
{
    TData<float[100][100]> t;
    t[1][1] = 5;
    return 0;
}
GNU C++ gives me the error:
test.cpp: In function 'int main(int, char**)':
test.cpp:16: error: ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for second:
test.cpp:9: note: candidate 1: typename boost::remove_extent<ptr_t>::type& TData<ptr_t>::operator[](size_t) [with ptr_t = float [100][100]]
test.cpp:16: note: candidate 2: operator[](float (*)[100], int) <built-in>
My questions are:
Why does GNU C++ give the error, while the Intel C++ compiler does not?
Why does changing operator[] to the following lead to compilation without errors?
value_type & operator [] ( int id ) { return data[id]; }
Links to the C++ Standard are appreciated.
As far as I can see, there are two conversion paths:
(1) int to size_t, then (2) operator[](size_t).
(1) operator ptr_t&(), then (2) int to size_t, then (3) the built-in operator[](size_t).
It's actually quite straightforward. For t[1], overload resolution has these candidates:
Candidate 1 (builtin: 13.6/13) (T being some arbitrary object type):
Parameter list: (T*, ptrdiff_t)
Candidate 2 (your operator)
Parameter list: (TData<float[100][100]>&, something unsigned)
The argument list is given by 13.3.1.2/6:
The set of candidate functions for overload resolution is the union of the member candidates, the non-member candidates, and the built-in candidates. The argument list contains all of the operands of the operator.
Argument list: (TData<float[100][100]>, int)
You see that the first argument matches the first parameter of Candidate 2 exactly. But it needs a user defined conversion for the first parameter of Candidate 1. So for the first parameter, the second candidate wins.
You also see that the outcome of the second position depends. Let's make some assumptions and see what we get:
ptrdiff_t is int: The first candidate wins, because it has an exact match, while the second candidate requires an integral conversion.
ptrdiff_t is long: Neither candidate wins, because both require an integral conversion.
Now, 13.3.3/1 says
Let ICSi(F) denote the implicit conversion sequence that converts the i-th argument in the list to the type of the i-th parameter of viable function F.
A viable function F1 is defined to be a better function than another viable function F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then ... for some argument j, ICSj(F1) is a better conversion sequence than ICSj(F2), or, if not that ...
For our first assumption, we don't get an overall winner, because Candidate 2 wins for the first parameter, and Candidate 1 wins for the second parameter. I call it the criss-cross. For our second assumption, the Candidate 2 wins overall, because neither parameter had a worse conversion, but the first parameter had a better conversion.
For the first assumption, it does not matter that the integral conversion (int to unsigned) in the second parameter is less of an evil than the user defined conversion of the other candidate in the first parameter. In the criss-cross, rules are crude.
That last point might still confuse you, because of all the fuss around it, so let's make an example:
void f(int, int) { }
void f(long, char) { }
int main() { f(0, 'a'); }
This gives you the same confusing GCC warning (which, I remember, was actually confusing the hell out of me when I first received it some years ago), because 0 converts to long worse than 'a' to int - yet you get an ambiguity, because you are in a criss-cross situation.
With the expression:
t[1][1] = 5;
The compiler must focus on the left-hand side to determine what goes there, so the = 5; is ignored until the lhs is resolved. That leaves us with the expression t[1][1], which represents two operations, the second operating on the result of the first, so the compiler must only take into account the first part of the expression: t[1]. The actual call shape is (TData&)[(int)].
The call does not exactly match any function, as operator[] for TData is defined as taking a size_t argument, so to be able to use it the compiler would have to convert 1 from int to size_t with an implicit conversion. That is the first choice. The other possible path is applying a user-defined conversion to convert TData<float[100][100]> into float[100][100].
The int to size_t conversion is an integral conversion, ranked as Conversion in Table 9 of the standard, as is the user-defined conversion from TData<float[100][100]> to float[100][100] according to §13.3.3.1.2/4. The conversion from float[100][100]& to float (*)[100] is ranked as Exact Match in Table 9. The compiler is not allowed to choose between those two conversion sequences.
Q1: Not all compilers adhere to the standard in the same way. It is quite common to find out that in some specific cases a compiler will perform differently than the others. In this case, the g++ implementors decided to whine about the standard not allowing the compiler to choose, while the Intel implementors probably just silently applied their preferred conversion.
Q2: When you change the signature of the user defined operator[], the argument matches exactly the passed in type. t[1] is a perfect match for t.operator[](1) with no conversions whatsoever, so the compiler must follow that path.
I don't know the exact answer, but...
Because of this operator:
operator ptr_t & () { return data; }
there already exists a built-in [] operator (array subscripting) which accepts size_t as the index. So we have two [] operators, the built-in one and the one you defined. Both accept size_t, so this is probably considered an ambiguous overload.
//EDIT
This should work as you intended:
template<typename ptr_t>
struct TData
{
    ptr_t data;
    operator ptr_t & () { return data; }
};
It seems to me that with
t[1][1] = 5;
the compiler has to choose between:
value_type & operator [] ( size_t id ) { return data[id]; }
which would match if the int literal were to be converted to size_t, or
operator ptr_t & () { return data; }
followed by normal array indexing, in which case the type of the index matches exactly.
As to the error, it seems GCC as a compiler extension would like to choose the first overload for you, and you are compiling with the -pedantic and/or -Werror flag which forces it to stick to the word of the standard.
(I'm not in a -pedantic mood, so no quotes from the standard, especially on this topic.)
I have tried to show the two candidates for the expression t[1][1] below. These are both of equal rank (Conversion); hence the ambiguity.
I think the catch here is that the built-in [] operator as per 13.6/13 is defined as
T& operator[](T*, ptrdiff_t);
On my system ptrdiff_t is defined as 'int' (does that explain x64 behavior?)
template<typename ptr_t>
struct TData
{
    typedef typename boost::remove_extent<ptr_t>::type value_type;
    ptr_t data;
    value_type & operator [] ( size_t id ) { return data[id]; }
    operator ptr_t & () { return data; }
};

typedef float (&ATYPE) [100][100];

int main( int argc, char ** argv )
{
    TData<float[100][100]> t;

    t[size_t(1)][size_t(1)] = 5; // note the cast. This works now. No ambiguity,
                                 // as operator[] is preferred over the built-in operator

    t[1][1] = 5; // error, as per the logic given below for Candidate 1 and Candidate 2

    // Candidate 1 (CONVERSION rank)
    // User-defined conversion from 'TData' to float array
    (t.operator[](1))[1] = 5;

    // Candidate 2 (CONVERSION rank)
    // User-defined conversion from 'TData' to ATYPE
    (t.operator ATYPE())[1][1] = 6;

    return 0;
}
EDIT:
Here is what I think:
For candidate 1 (operator[]) the conversion sequence S1 is:
user-defined conversion -> standard conversion (int to size_t)
For candidate 2, the conversion sequence S2 is:
user-defined conversion -> int to ptrdiff_t (for the first argument) -> int to ptrdiff_t (for the second argument)
The conversion sequence S1 is a subset of S2 and is supposed to be better. But here is the catch...
Here the quote below from the Standard should help.
§13.3.3.2/3 states: Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if S1 is a proper subsequence of S2 (comparing the conversion sequences in the canonical form defined by 13.3.3.1.1, excluding any Lvalue Transformation; the identity conversion sequence is considered to be a subsequence of any non-identity conversion sequence) or, if not that...
§13.3.3.2 states: "User-defined conversion sequence U1 is a better conversion sequence than another user-defined conversion sequence U2 if they contain the same user-defined conversion function or constructor and if the second standard conversion sequence of U1 is better than the second standard conversion sequence of U2."
Here the first part of the condition, "if they contain the same user-defined conversion function or constructor", does not hold. So even though the second part, "if the second standard conversion sequence of U1 is better than the second standard conversion sequence of U2", may hold, neither S1 nor S2 is preferred over the other.
That's why gcc emits the odd-sounding error message "ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for the second".
This explains the ambiguity quite well, IMHO.
Overload resolution is a headache. But since you stumbled on a fix (eliminating the conversion of the index operand to operator[]) which is too specific to the example (literals are of type int, but most variables you'll be using aren't), maybe you can generalize it:
template< typename IT>
typename boost::enable_if< typename boost::is_integral< IT >::type, value_type & >::type
operator [] ( IT id ) { return data[id]; }
Unfortunately I can't test this because GCC 4.2.1 and 4.5 accept your example without complaint under --pedantic. Which really raises the question whether it's a compiler bug or not.
Also, once I eliminated the Boost dependency, it passed Comeau.
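For reference, the same constraint can be written without the Boost dependency; a C++11 sketch with std::enable_if (this member would replace the original operator[] inside TData, and needs <type_traits>):

template< typename IT >
typename std::enable_if< std::is_integral<IT>::value, value_type & >::type
operator [] ( IT id ) { return data[id]; }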