C++ overload resolution and constness

(all tests are performed on Microsoft (R) C/C++ Optimizing Compiler Version 19.00.24215.1 for x86)
Consider this minimal example:
struct myString
{
    operator const char *( ) const { return &dummy; }
    char& operator[]( unsigned int ) { return dummy; }
    const char& operator[]( unsigned int ) const { return dummy; }
    char dummy;
};

int main()
{
    myString str;
    const char myChar = 'a';
    if( str[(int) 0] == myChar ) return 0; // error, multiple valid overloads
}
According to the overload resolution rules (from cppreference):
F1 is determined to be a better function than F2 if implicit conversions for all arguments of F1 are not worse than the implicit conversions for all arguments of F2, and
1) there is at least one argument of F1 whose implicit conversion is better than the corresponding implicit conversion for that argument of F2, or
2) if not that (only in the context of non-class initialization by conversion), the standard conversion sequence from the return type of F1 to the type being initialized is better than the standard conversion sequence from the return type of F2
char& operator[]( unsigned int ) should be better, according to 1): of its two arguments, the first (this, a myString) does not need to be converted at all, whereas operator const char *( ) const converts it to const char* and const char& operator[]( unsigned int ) const converts it to const myString. So there is one argument with no implicit conversion at all, which is the best possible conversion.
However my compiler yells the following error:
1> [///]\sandbox\sandbox\sandbox.cpp(29): error C2666: 'myString::operator []': 3 overloads have similar conversions
1> [///]\sandbox\sandbox\sandbox.cpp(19): note: could be 'const char &myString::operator [](unsigned int) const'
1> [///]\sandbox\sandbox\sandbox.cpp(18): note: or 'char &myString::operator [](unsigned int)'
1> [///]\sandbox\sandbox\sandbox.cpp(29): note: while trying to match the argument list '(myString, int)'
Also note that using if( str[0u] == myChar ) return 0; or removing operator const char *( ) const resolves the error.
Why is there an error here, and what am I getting wrong about the overload resolution rules?
Edit: it might be a Visual C++ bug in this version; is there any definitive confirmation of this?

Here's a minified version of the problem, which reproduces on all compilers I threw at it.
#include <stddef.h>

struct myString
{
    operator char *( );
    char& operator[]( unsigned ptrdiff_t );
};

int main()
{
    myString str;
    if( str[(ptrdiff_t) 0] == 'a' ) return 0; // error, multiple valid overloads
}
Basically, you have two candidate functions to get the char for bool operator==(char,char): [over.match.oper]/3
char& myString::operator[]( unsigned ptrdiff_t ) ([over.match.oper]/3.1 => [over.sub])
char& operator[]( char*, ptrdiff_t) ([over.match.oper]/3.3 => [over.built]/14)
Note that if myString::operator[] took a ptrdiff_t instead of an unsigned ptrdiff_t, then it would have hidden the built-in operator per [over.built]/1. So if all you want to do is avoid issues like this, simply ensure that any operator[] overload taking an integral value takes a ptrdiff_t.
I'll skip the viability check [over.match.viable], and go straight to conversion ranking.
char& myString::operator[]( unsigned ptrdiff_t )
For overloading, this is considered to have a leading implicit object parameter, so the signature to be matched is
(myString&, unsigned ptrdiff_t)
myString& => myString&
Standard conversion sequence: Identity (Rank: Exact match) - directly bound reference
ptrdiff_t => unsigned ptrdiff_t
Standard conversion sequence: Lvalue Transformation -> Integral conversion (Rank: Conversion) - signed lvalue to unsigned prvalue
char& operator[]( char*, ptrdiff_t)
myString& => char*
User-defined conversion sequence: Identity + operator char*(myString&)
Note that per [over.match.oper]/7 we don't get a second standard conversion sequence.
ptrdiff_t => ptrdiff_t
Standard conversion sequence: Identity (Rank: Exact match)
Best viable function
First argument
Standard Conversion Sequence is better than User-defined conversion sequence ([over.ics.rank]/2.1)
Second argument
Rank Conversion is worse than Rank Exact Match ([over.ics.rank]/3.2.2)
Result
We cannot satisfy the requirement
if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2)
so neither function is a better function.
Hence, per [over.match.best]/2 it's ambiguous.
How to fix this?
Well, the easiest solution is to never let the parameter to an operator[] overload be something that could be converted to from ptrdiff_t by something other than an Exact Match-ranked conversion. Looking at the conversions table that appears to mean that you should always declare your operator[] member function as X& T::operator[]( ptrdiff_t ). That covers the usual use-case of "Act like an array". As noted above, using precisely ptrdiff_t will suppress even searching for an operator T* candidate by taking the built-in subscript operator off the table.
The other option is to not have both T1 operator[] and operator T2* defined for the class, where T1 and T2 may both fulfill the same parameter of a (possibly implicit) function call. That covers cases where you are using operator[] for clever syntactic things, and end up with things like T T::operator[](X). If X::operator ptrdiff_t() exists for example, and so does T::operator T*(), then you're ambiguous again.
The only use-case for T::operator T*() I can imagine is if you want your type to implicitly convert into a pointer to itself, like a function. Don't do that...
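To make that concrete, here is a minimal sketch of the suggested fix applied to the question's class (the function bodies are the same placeholders as in the original):
#include <cstddef> // std::ptrdiff_t

struct myString
{
    operator const char *( ) const { return &dummy; }

    // The index parameter is now std::ptrdiff_t, so for str[(int) 0] the member
    // overloads need at worst an integral conversion on the index, while the
    // built-in candidate still needs a user-defined conversion for its first
    // operand. The non-const member overload therefore wins outright and the
    // ambiguity disappears.
    char& operator[]( std::ptrdiff_t ) { return dummy; }
    const char& operator[]( std::ptrdiff_t ) const { return dummy; }

    char dummy;
};

int main()
{
    myString str;
    const char myChar = 'a';
    if( str[(int) 0] == myChar ) return 0; // OK: no longer ambiguous
}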

Related

Implicit const/nonconst operator bool

Consider the following class:
class foo
{
public:
    constexpr operator bool() const noexcept;
    constexpr operator void * &() noexcept;
    constexpr operator void * const &() const noexcept;
    constexpr operator void const * &() noexcept;
    constexpr operator void const * const &() const noexcept;
};
A foo object would be used like this:
void bar(bool);
// ...
foo f;
bar(f); // error C2664: cannot convert argument 1 from 'foo' to 'bool'
// message : Ambiguous user-defined-conversion
// message : see declaration of 'bar'
The issue is that the operator bool is not considered because of its constness. If I make another function without the const qualifier, the problem is solved. If I make f const, the problem is solved as well. If I explicitly cast f to bool, the problem is solved.
What are my other options and what is causing this ambiguity?
First you should have a look at this question regarding conversion precedence in C++.
Then, as pointed out in some comments, operator bool() was not selected by the compiler for this conversion, and the ambiguous-resolution message is not about it.
The ambiguity comes from constexpr operator void * &() noexcept versus constexpr operator void const * &() noexcept. The idea being: let's try to convert a non const object to something and then see if this something can be converted to bool.
Also, operator void const*() and operator void*() are redundant: there is no situation where you can call one and not the other, and you can always get a const void* from a void*.
Likewise, returning void const * const & has no advantage over returning void const*, since you won't be able to modify the referenced pointer anyway. Returning a pointer instead of a reference to a pointer at worst changes nothing and at best saves you a double indirection.
My advice would be to remove the non-const operators and replace them with an explicit setter for when you want to change the underlying pointer stored in foo, as sketched below. But that might not be the solution you are looking for, depending on your actual problem and your design choices.
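A rough sketch of that suggestion; the setter name and the stored pointer member are assumptions for illustration, not part of the original code:
class foo
{
public:
    constexpr operator bool() const noexcept { return ptr != nullptr; }
    constexpr operator void const *() const noexcept { return ptr; }

    // hypothetical setter replacing the non-const conversion operators
    constexpr void set(void *p) noexcept { ptr = p; }

private:
    void *ptr = nullptr; // assumed underlying pointer
};

void bar(bool) {}

int main()
{
    foo f;
    f.set(nullptr);
    bar(f); // unambiguous: operator bool() const wins the return-type tiebreak (bool -> bool is identity)
}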
Take a look at the following, simplified, code:
struct foo {
    constexpr operator bool() const noexcept;      // (1)
    constexpr operator void * &() noexcept;        // (2)
    constexpr operator void const * &() noexcept;  // (3)
};
void bar(bool);
// ...
foo f;
bar(f);
The implicit conversion rules state the following:
Order of the conversions
Implicit conversion sequence consists of the following, in this order:
zero or one standard conversion sequence;
zero or one user-defined conversion;
zero or one standard conversion sequence.
The conversion of the argument in bar is one user-defined conversion, followed by one standard conversion sequence.
Continued:
... When converting from one built-in type to another built-in type, only one standard conversion sequence is allowed.
A standard conversion sequence consists of the following, in this order:
zero or one conversion from the following set: lvalue-to-rvalue conversion, array-to-pointer conversion, and function-to-pointer conversion;
zero or one numeric promotion or numeric conversion;
zero or one function pointer conversion; (since C++17)
zero or one qualification adjustment.
There are plenty of types that can be used where a bool is required, and pointers are amongst them, so for overloads #2 and #3 all that is required is one "lvalue-to-rvalue conversion". To use overload #1 the compiler will have to perform a "qualification adjustment" (const bool to bool).
How to fix this:
Remove the ambiguity by adding constexpr operator bool() noexcept;, which fits bar exactly:
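A sketch on the simplified three-operator class; the data members and function bodies are assumptions added just so the snippet compiles:
struct foo {
    constexpr operator bool() const noexcept { return ptr != nullptr; } // (1)
    operator bool() noexcept { return ptr != nullptr; }  // added: exact fit for a non-const foo
    operator void * &() noexcept { return ptr; }         // (2)
    operator void const * &() noexcept { return cptr; }  // (3)

    void *ptr = nullptr;        // assumed storage, only so the operators have something to return
    void const *cptr = nullptr;
};

void bar(bool) {}

int main()
{
    foo f;
    bar(f); // now unambiguous: the non-const operator bool() beats (2) and (3) on the return-type comparison
}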

Error when compiling for x86 but not for x64

I have written the following code:
struct Element
{
    int value;
};

struct Array
{
    operator Element*();
    operator const Element*() const;
    Element& operator[](const size_t nIndex);
    const Element& operator[](const size_t nIndex) const;
};

int main()
{
    Array values;
    if (values[0].value == 10)
    {
    }
    return 0;
}
This works fine in x64. But in x86 I get a compiler error:
error C2666: 'Array::operator []': 4 overloads have similar conversions
note: could be 'const Element &Array::operator [](const std::size_t) const'
note: or 'Element &Array::operator [](const std::size_t)'
note: while trying to match the argument list '(Array, int)'
error C2228: left of '.value' must have class/struct/union
If I comment out the implicit conversion functions, or add the explicit prefix, the code compiles for x86.
But I can't understand why this code causes trouble.
Why can't the compiler decide whether to use the implicit conversion first or the array accessor first? I thought operator[] was higher in precedence.
The type of 0, which is an int, doesn't directly match the type of the parameter of your [] operator, and therefore a conversion is needed.
However, it does match the built-in [] operator for the Element pointer type. Clang gives a more descriptive error message in this case: https://godbolt.org/z/FB3DzG (note I've changed the parameter to int64_t to make this fail on x64).
The compiler has to do one conversion to use your class operator (the index from int to size_t) and one conversion to use the built-in operator (from Array to Element*), so it is ambiguous which one it should use.
It works on x64 because your class operator still only requires a single conversion (for the index), whereas the built-in operator needs two: one from Array to Element* and one for the index from int to int64_t. This makes your class operator a better match, so the call is not ambiguous.
The solution is to either make the conversion operator explicit (which is a good idea anyway) or to ensure that the type you are passing as your index matches the type your operator is expecting. In your example case you can simply pass 0U instead of 0.
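A sketch of the explicit-conversion variant; the elems storage member is an assumption added only so the snippet is self-contained:
#include <cstddef>

struct Element
{
    int value;
};

struct Array
{
    Element elems[16] = {};

    // explicit keeps Array -> Element* out of implicit conversion sequences,
    // so the built-in subscript operator is no longer a viable candidate for
    // values[0] and only the member operator[] overloads compete.
    explicit operator Element*() { return elems; }
    explicit operator const Element*() const { return elems; }

    Element& operator[](const std::size_t nIndex) { return elems[nIndex]; }
    const Element& operator[](const std::size_t nIndex) const { return elems[nIndex]; }
};

int main()
{
    Array values;
    if (values[0].value == 10) // OK on both x86 and x64
    {
    }
    return 0;
}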

Change member function to const silently breaks code [duplicate]

I'm writing an interface to a 3rd-party library. It manipulates objects through a C interface, which is essentially a void*. Here is the code, simplified:
struct LibIntf
{
    LibIntf() : opaquePtr{nullptr} {}
    operator void *() /* const */ { return opaquePtr; }
    operator void **() { return &opaquePtr; }
    void *opaquePtr;
};

int UseMe(void *ptr)
{
    if (ptr == (void *)0x100)
        return 1;
    return 0;
}

void CreateMe(void **ptr)
{
    *ptr = (void *)0x100;
}

int main()
{
    LibIntf lib;
    CreateMe(lib);
    return UseMe(lib);
}
Everything works great until I add the const on the operator void *() line. The code then silently defaults to using operator void **(), breaking the code.
My question is why?
I'm returning a pointer from a function that doesn't modify the object, so I should be able to mark it const. If that changes it to a const pointer, the compiler should error out, because operator void **() shouldn't be a good match for a function like UseMe() that just wants a void *.
This is what the standard says should happen, but this is far from obvious. For quick readers, jump to the "How to fix it?" at the end.
Understanding why the const matters
Once you add the const qualifier, when you call UseMe with an instance of LibIntf, the compiler has the following two possibilities:
LibIntf →(1) LibIntf →(2) void** →(3) void* (through operator void **())
LibIntf →(3) const LibIntf →(2) void* →(1) void* (through operator void *() const)
(1) No conversion needed.
(2) User-defined conversion operator.
(3) Legal standard conversion.
Those two conversion paths are legal, so which one to choose?
The standard defining C++ answers:
[over.match.best]/1
Define ICSi(F) as follows:
[...]
let ICSi(F) denote the implicit conversion sequence that converts the ith argument in the list to the type of the ith parameter of viable function F. [over.best.ics] defines the implicit conversion sequences and [over.ics.rank] defines what it means for one implicit conversion sequence to be a better conversion sequence or worse conversion sequence than another.
Given these definitions, a viable function F1 is defined to be a
better function than another viable function F2 if for all arguments
i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then
for some argument j, ICSj(F1) is a better conversion sequence than ICSj(F2), or, if not that,
the context is an initialization by user-defined conversion (see [dcl.init], [over.match.conv], and [over.match.ref]) and the standard conversion sequence from the return type of F1 to the destination type (i.e., the type of the entity being initialized) is a better conversion sequence than the standard conversion sequence from the return type of F2 to the destination type.
(I had to read it a couple times before getting it.)
This all means, in your specific case, that option #1 is better than option #2, because for user-defined conversion operators the conversion of the return type (void** to void* in option #1) is only considered after the conversion of the parameter type (LibIntf to const LibIntf in option #2).
In other words, in option #1 there is nothing to convert at this stage (later in the conversion chain there will be, but that is not yet considered), while in option #2 a conversion from non-const to const is needed. Option #1 is thus deemed better.
How to fix it?
Simply remove the need to consider the non-const to const conversion by explicitly casting the variable to const (casts are always explicit; implicit ones are just called conversions):
struct LibIntf
{
    LibIntf() : opaquePtr{nullptr} {}
    operator void *() const { return opaquePtr; }
    operator void **() { return &opaquePtr; }
    void *opaquePtr;
};

int UseMe(void *ptr)
{
    if (ptr == (void *)0x100)
        return 1;
    return 0;
}

void CreateMe(void **ptr)
{
    *ptr = (void *)0x100;
}

int main()
{
    LibIntf lib;
    CreateMe(lib);
    // unfortunately, you cannot const_cast an instance, only refs & ptrs
    return UseMe(static_cast<const LibIntf>(lib));
}
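Not from the answer above, but another sketch worth considering: overloading operator void*() on constness avoids the cast at every call site, because for a non-const LibIntf the non-const overload is an exact match on the implicit object argument and then wins the return-type tiebreak against operator void**():
struct LibIntf
{
    LibIntf() : opaquePtr{nullptr} {}

    operator void *() const { return opaquePtr; }
    // non-const overload: for a non-const LibIntf this is an exact match on the
    // implicit object argument, and void* -> void* (identity) then beats
    // void** -> void* (pointer conversion), so UseMe(lib) is unambiguous.
    operator void *() { return opaquePtr; }
    operator void **() { return &opaquePtr; }

    void *opaquePtr;
};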

Why is this C++ expression involving overloaded operators and implicit conversions ambiguous?

operator bool breaks the use of operator< in the following example. Can anyone explain why bool is just as relevant in the if (a < 0) expression as the more specific operator, and whether there is a workaround?
struct Foo {
    Foo() {}
    Foo(int x) {}
    operator bool() const { return false; }
    friend bool operator<(const Foo& a, const Foo& b) {
        return true;
    }
};

int main() {
    Foo a, b;
    if (a < 0) {
        a = 0;
    }
    return 1;
}
When I compile, I get:
g++ foo.cpp
foo.cpp: In function 'int main()':
foo.cpp:18:11: error: ambiguous overload for 'operator<' (operand types are 'Foo' and 'int')
if (a < 0) {
^
foo.cpp:18:11: note: candidate: operator<(int, int) <built-in>
foo.cpp:8:17: note: candidate: bool operator<(const Foo&, const Foo&)
friend bool operator<(const Foo& a, const Foo& b)
The problem here is that C++ has two options to deal with a < 0 expression:
Convert a to bool, and compare the result to 0 with built-in operator < (one conversion)
Convert 0 to Foo, and compare the results with < that you defined (one conversion)
Both approaches are equivalent to the compiler, so it issues an error.
You can make this explicit by removing the conversion in the second case:
if (a < Foo(0)) {
...
}
The important points are:
First, there are two relevant overloads of operator <.
operator <(const Foo&, const Foo&). Using this overload requires a user-defined conversion of the literal 0 to Foo using Foo(int).
operator <(int, int). Using this overload requires converting Foo to bool with the user-defined operator bool(), followed by a promotion to int (this is, in standardese, different from a conversion, as has been pointed out by Bo Persson).
The question here is: From whence does the ambiguity arise? Certainly, the first call, which requires only a user-defined conversion, is more sensible than the second, which requires a user-defined conversion followed by a promotion?
But that is not the case. The standard assigns a rank to each candidate. However, there is no rank for "user-defined conversion followed by a promotion". This has the same rank as only using a user-defined conversion. Simply (but informally) put, the ranking sequence looks a bit like this:
exact match
(only) promotion required
(only) implicit conversion required (including "unsafe" ones inherited from C such as float to int)
user-defined conversion required
(Disclaimer: As mentioned, this is informal. It gets significantly more complex when multiple arguments are involved, and I also didn't mention references or cv-qualification. This is just intended as a rough overview.)
So this, hopefully, explains why the call is ambiguous. Now for the practical part of how to fix this. Almost never does someone who provides operator bool() want it to be implicitly used in expressions involving integer arithmetic or comparisons. In C++98, there were obscure workarounds, ranging from std::basic_ios<CharT, Traits>::operator void * to "improved" safer versions involving pointers to members or incomplete private types. Fortunately, C++11 introduced a more readable and consistent way of preventing integer promotion after implicit uses of operator bool(), which is to mark the operator as explicit. This will remove the operator <(int, int) overload entirely, rather than just "demoting" it.
As others have mentioned, you can also mark the Foo(int) constructor as explicit. This will have the converse effect of removing the operator <(const Foo&, const Foo&) overload.
A third solution would be to provide additional overloads, e.g.:
operator <(int, const Foo&)
operator <(const Foo&, int)
The latter, in this example, will then be preferred over the above-mentioned overloads as an exact match, even if you did not introduce explicit. The same goes e.g. for
operator <(const Foo&, long long)
which would be preferred over operator <(const Foo&, const Foo&) in a < 0 because its use requires only a promotion.
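Coming back to the explicit operator bool() suggestion, a minimal sketch applied to the original example:
struct Foo {
    Foo() {}
    Foo(int x) {}
    explicit operator bool() const { return false; } // no implicit Foo -> bool -> int path any more
    friend bool operator<(const Foo& a, const Foo& b) {
        return true;
    }
};

int main() {
    Foo a, b;
    if (a < 0) {   // OK: only operator<(const Foo&, const Foo&) is viable; 0 converts via Foo(int)
        a = 0;
    }
    if (a) {}      // contextual conversion to bool still uses the explicit operator
    return 1;
}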
Because the compiler cannot choose between bool operator<(const Foo&, const Foo&) and the built-in operator<(int, int), which both fit in this situation.
To fix the issue, make the second constructor explicit:
struct Foo
{
    Foo() {}
    explicit Foo(int x) {}
    operator bool() const { return false; }
    friend bool operator<(const Foo& a, const Foo& b)
    {
        return true;
    }
};
Edit:
Ok, at last I got the real point of the question :) The OP asks why his compiler offers operator<(int, int) as a candidate, even though "multi-step conversions are not allowed".
Answer:
Yes, in order to call operator<(int, int), object a needs to be converted Foo -> bool -> int. But the C++ Standard does not actually say that "multi-step conversions are illegal".
§ 12.3.4 [class.conv]
At most one user-defined conversion (constructor or conversion function) is implicitly applied to a single value.
bool to int is not a user-defined conversion, hence the sequence is legal and the compiler has every right to choose operator<(int, int) as a candidate.
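A stripped-down sketch just to illustrate that point (take_int is a hypothetical helper): one user-defined conversion followed by one standard conversion forms a single, legal implicit conversion sequence.
struct Foo {
    operator bool() const { return false; }
};

int take_int(int x) { return x; }

int main() {
    Foo f;
    int i = f;               // Foo -> bool (user-defined) then bool -> int (integral promotion)
    return take_int(f) + i;  // same two-step sequence when matching the int parameter
}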
This is exactly what the compiler tells you.
One approach the compiler can take to resolve if (a < 0) is to use the Foo(int x) constructor you've provided to create a Foo object from 0.
The second is to use the operator bool conversion and compare the result against the int (after a promotion). You can read more about this in the Numeric promotions section.
Hence, it is ambiguous for the compiler and it cannot decide which way you want it to go.

Why is this ambiguity here?

Consider the following minimal code:
#include <boost/type_traits.hpp>

template<typename ptr_t>
struct TData
{
    typedef typename boost::remove_extent<ptr_t>::type value_type;

    ptr_t data;

    value_type & operator [] ( size_t id ) { return data[id]; }
    operator ptr_t & () { return data; }
};

int main( int argc, char ** argv )
{
    TData<float[100][100]> t;
    t[1][1] = 5;
    return 0;
}
GNU C++ gives me the error:
test.cpp: In function 'int main(int, char**)':
test.cpp:16: error: ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for second:
test.cpp:9: note: candidate 1: typename boost::remove_extent<ptr_t>::type& TData<ptr_t>::operator[](size_t) [with ptr_t = float [100][100]]
test.cpp:16: note: candidate 2: operator[](float (*)[100], int) <built-in>
My questions are:
Why does GNU C++ give the error, while the Intel C++ compiler does not?
Why does changing operator[] to the following lead to compiling without errors?
value_type & operator [] ( int id ) { return data[id]; }
Links to the C++ Standard are appreciated.
As far as I can see, there are two conversion paths:
(1) int to size_t, then (2) operator[](size_t).
(1) operator ptr_t&(), (2) int to size_t, then (3) the built-in operator[](size_t).
It's actually quite straightforward. For t[1], overload resolution has these candidates:
Candidate 1 (builtin: 13.6/13) (T being some arbitrary object type):
Parameter list: (T*, ptrdiff_t)
Candidate 2 (your operator)
Parameter list: (TData<float[100][100]>&, something unsigned)
The argument list is given by 13.3.1.2/6:
The set of candidate functions for overload resolution is the union of the member candidates, the non-member candidates, and the built-in candidates. The argument list contains all of the operands of the operator.
Argument list: (TData<float[100][100]>, int)
You see that the first argument matches the first parameter of Candidate 2 exactly. But it needs a user defined conversion for the first parameter of Candidate 1. So for the first parameter, the second candidate wins.
You also see that the outcome of the second position depends. Let's make some assumptions and see what we get:
ptrdiff_t is int: The first candidate wins, because it has an exact match, while the second candidate requires an integral conversion.
ptrdiff_t is long: Neither candidate wins, because both require an integral conversion.
Now, 13.3.3/1 says
Let ICSi(F) denote the implicit conversion sequence that converts the i-th argument in the list to the type of the i-th parameter of viable function F.
A viable function F1 is defined to be a better function than another viable function F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then ... for some argument j, ICSj(F1) is a better conversion sequence than ICSj(F2), or, if not that ...
For our first assumption, we don't get an overall winner, because Candidate 2 wins for the first parameter, and Candidate 1 wins for the second parameter. I call it the criss-cross. For our second assumption, the Candidate 2 wins overall, because neither parameter had a worse conversion, but the first parameter had a better conversion.
For the first assumption, it does not matter that the integral conversion (int to unsigned) in the second parameter is less of an evil than the user defined conversion of the other candidate in the first parameter. In the criss-cross, rules are crude.
That last point might still confuse you, because of all the fuss around it, so let's make an example:
void f(int, int) { }
void f(long, char) { }
int main() { f(0, 'a'); }
This gives you the same confusing GCC warning (which, I remember, was actually confusing the hell out of me when I first received it some years ago), because 0 converts to long worse than 'a' to int - yet you get an ambiguity, because you are in a criss-cross situation.
With the expression:
t[1][1] = 5;
The compiler must focus on the left-hand side to determine what goes there, so the = 5; is ignored until the lhs is resolved. That leaves us with the expression t[1][1], which represents two operations, the second one operating on the result of the first, so the compiler only has to consider the first part of the expression: t[1]. The actual argument types are (TData&)[(int)].
The call does not exactly match any function: operator[] for TData is defined as taking a size_t argument, so to be able to use it the compiler would have to convert 1 from int to size_t with an implicit conversion. That is the first choice. The other possible path is to apply the user-defined conversion to convert TData<float[100][100]> into float[100][100].
The int to size_t conversion is an integral conversion and is ranked as Conversion in Table 9 of the standard, as is the user defined conversion from TData<float[100][100]> to float[100][100] conversion according to §13.3.3.1.2/4. The conversion from float [100][100]& to float (*)[100] is ranked as Exact Match in Table 9. The compiler is not allowed to choose from those two conversion sequences.
Q1: Not all compilers adhere to the standard in the same way. It is quite common to find out that in some specific cases a compiler will perform differently than the others. In this case, the g++ implementors decided to whine about the standard not allowing the compiler to choose, while the Intel implementors probably just silently applied their preferred conversion.
Q2: When you change the signature of the user defined operator[], the argument matches exactly the passed in type. t[1] is a perfect match for t.operator[](1) with no conversions whatsoever, so the compiler must follow that path.
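For reference, a sketch of that fix in isolation, using std::remove_extent from <type_traits> in place of the boost trait and ptrdiff_t for the index (as the answer to the earlier question suggests):
#include <cstddef>      // std::ptrdiff_t
#include <type_traits>  // std::remove_extent, stand-in for boost::remove_extent

template<typename ptr_t>
struct TData
{
    typedef typename std::remove_extent<ptr_t>::type value_type;

    ptr_t data;

    // t[1] is now at least as good a match for the member operator[] as for the
    // built-in candidate reached through operator ptr_t&, and the member wins on
    // the object argument, so the call is no longer ambiguous.
    value_type & operator [] ( std::ptrdiff_t id ) { return data[id]; }
    operator ptr_t & () { return data; }
};

int main()
{
    TData<float[100][100]> t;
    t[1][1] = 5; // OK
    return 0;
}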
I don't know the exact answer, but...
Because of this operator:
operator ptr_t & () { return data; }
there already exists a built-in [] operator (array subscripting) which accepts size_t as an index. So we have two [] operators, the built-in one and the one you defined. Both accept size_t, so this is probably considered an illegal (ambiguous) overload.
EDIT:
This should work as you intended:
template<typename ptr_t>
struct TData
{
    ptr_t data;
    operator ptr_t & () { return data; }
};
It seems to me that with
t[1][1] = 5;
the compiler has to choose between:
value_type & operator [] ( size_t id ) { return data[id]; }
which would match if the int literal were to be converted to size_t, or
operator ptr_t & () { return data; }
followed by normal array indexing, in which case the type of the index matches exactly.
As to the error, it seems GCC as a compiler extension would like to choose the first overload for you, and you are compiling with the -pedantic and/or -Werror flag which forces it to stick to the word of the standard.
(I'm not in a -pedantic mood, so no quotes from the standard, especially on this topic.)
I have tried to show the two candidates for the expression t[1][1]. These are both of equal rank (CONVERSION), hence the ambiguity.
I think the catch here is that the built-in [] operator as per 13.6/13 is defined as
T& operator[](T*, ptrdiff_t);
On my system ptrdiff_t is defined as 'int' (does that explain x64 behavior?)
#include <boost/type_traits.hpp>

template<typename ptr_t>
struct TData
{
    typedef typename boost::remove_extent<ptr_t>::type value_type;

    ptr_t data;

    value_type & operator [] ( size_t id ) { return data[id]; }
    operator ptr_t & () { return data; }
};

typedef float (&ATYPE) [100][100];

int main( int argc, char ** argv )
{
    TData<float[100][100]> t;

    t[size_t(1)][size_t(1)] = 5; // note the cast. This works now. No ambiguity as operator[] is preferred over the built-in operator

    t[1][1] = 5; // error, as per the logic given below for Candidate 1 and Candidate 2

    // Candidate 1 (CONVERSION rank)
    // User-defined conversion from 'TData' to float array
    (t.operator[](1))[1] = 5;

    // Candidate 2 (CONVERSION rank)
    // User-defined conversion from 'TData' to ATYPE
    (t.operator ATYPE())[1][1] = 6;

    return 0;
}
EDIT:
Here is what I think:
For candidate 1 (operator[]), the conversion sequence S1 is:
user-defined conversion -> standard conversion (int to size_t)
For candidate 2, the conversion sequence S2 is:
user-defined conversion -> int to ptrdiff_t (for the first index) -> int to ptrdiff_t (for the second index)
The conversion sequence S1 is a subset of S2 and is supposed to be better. But here is the catch...
The quote below from the Standard should help.
§13.3.3.2/3 states: "Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if S1 is a proper subsequence of S2 (comparing the conversion sequences in the canonical form defined by 13.3.3.1.1, excluding any Lvalue Transformation; the identity conversion sequence is considered to be a subsequence of any non-identity conversion sequence) or, if not that..."
§13.3.3.2 states: "User-defined conversion sequence U1 is a better conversion sequence than another user-defined conversion sequence U2 if they contain the same user-defined conversion function or constructor and if the second standard conversion sequence of U1 is better than the second standard conversion sequence of U2."
Here the first part of the "and" condition, "if they contain the same user-defined conversion function or constructor", does not hold. So even if the second part, "if the second standard conversion sequence of U1 is better than the second standard conversion sequence of U2", holds, neither S1 nor S2 is preferred over the other.
That's why gcc emits its phantom error message: "ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for the second".
This explains the ambiguity quite well, IMHO.
Overload resolution is a headache. But since you stumbled on a fix (eliminating the conversion of the index operand to operator[]) which is too specific to the example (literals are of type int, but most variables you'll be using aren't), maybe you can generalize it:
template< typename IT>
typename boost::enable_if< typename boost::is_integral< IT >::type, value_type & >::type
operator [] ( IT id ) { return data[id]; }
Unfortunately I can't test this because GCC 4.2.1 and 4.5 accept your example without complaint under --pedantic. Which really raises the question whether it's a compiler bug or not.
Also, once I eliminated the Boost dependency, it passed Comeau.