I have written the following code:
struct Element
{
    int value;
};

struct Array
{
    operator Element*();
    operator const Element*() const;
    Element& operator[](const size_t nIndex);
    const Element& operator[](const size_t nIndex) const;
};

int main()
{
    Array values;
    if (values[0].value == 10)
    {
    }
    return 0;
}
This works fine in x64. But in x86 I get a compiler error:
error C2666: 'Array::operator []': 4 overloads have similar conversions
note: could be 'const Element &Array::operator [](const std::size_t) const'
note: or 'Element &Array::operator [](const std::size_t)'
note: while trying to match the argument list '(Array, int)'
error C2228: left of '.value' must have class/struct/union
If I comment out the implicit conversion functions, or add the explicit specifier to them, the code compiles in x86.
But I can't understand why this code makes trouble.
Why can't the compiler decide to use the implicit conversion first, or the array accessor first? I thought operator[] was higher in precedence.
The type of 0, which is int, doesn't directly match the parameter type of your operator[], so a conversion is needed.
However, it does match the built-in [] operator for the Element pointer type. Clang gives a more descriptive error message in this case: https://godbolt.org/z/FB3DzG (note I've changed the parameter to int64_t to make this fail on x64 too).
The compiler has to do one conversion to use your class operator (the index from int to size_t) and one conversion to use the built-in operator (from Array to Element*), so it is ambiguous which one it should use.
It works on x64 because your class operator still only requires a single conversion (for the index), while the built-in operator needs two: one from Array to Element* and one for the index from int to int64_t. That makes your class operator a better match, so the call is not ambiguous.
The solution is either to make the conversion operator explicit (which is a good idea anyway) or to ensure that the type you pass as the index matches the type your operator expects. In your example you can simply pass 0U instead of 0.
Related
I have a class for a settings store that has both an implicit conversion operator to intrinsic types and access by string index via operator[]. It compiles and works very well in unit tests on gcc 6.3 and MSVC; however, the class causes ambiguity warnings under IntelliSense and clang, which is not acceptable.
Super slimmed down version:
https://onlinegdb.com/rJ-q7svG8
#include <memory>
#include <unordered_map>
#include <string>
struct Setting
{
    int data; // this in reality is a Variant of intrinsic types + std::string
    std::unordered_map<std::string, std::shared_ptr<Setting>> children;

    template<typename T>
    operator T()
    {
        return data;
    }

    template<typename T>
    Setting & operator=(T val)
    {
        data = val;
        return *this;
    }

    Setting & operator[](const std::string key)
    {
        if(children.count(key))
            return *(children[key]);
        else
        {
            children[key] = std::shared_ptr<Setting>(new Setting());
            return *(children[key]);
        }
    }
};
Usage:
Setting data;
data["TestNode"] = 4;
data["TestNode"]["SubValue"] = 55;
int x = data["TestNode"];
int y = data["TestNode"]["SubValue"];
std::cout << x <<std::endl;
std::cout << y;
output:
4
55
Error message is as follows:
more than one operator "[]" matches these operands:
built-in operator "integer[pointer-to-object]" function
"Setting::operator[](std::string key)"
operand types are: Setting [ const char [15] ]
I understand why the error/warning exists: it comes from the ability to swap the array and its index in a subscript expression (which by itself is extremely bizarre syntax, but makes logical sense with pointer arithmetic).
const char* a = "asdf";
char b = a[2];
char c = 2[a];
// b == c
I am not sure how to avoid the error message while keeping what I want to accomplish (implicit assignment & index by string).
Is that possible?
Note: I cannot use C++ features above 11.
The issue is the user-defined implicit conversion function template.
template<typename T>
operator T()
{
return data;
}
When the compiler considers the expression data["TestNode"], some implicit conversions need to take place. The compiler has two options:
Convert the const char [9] to a const std::string and call Setting &Setting::operator[](const std::string)
Convert the Setting to an int and call const char *operator[](int, const char *)
Both options involve an implicit conversion so the compiler can't decide which one is better. The compiler says that the call is ambiguous.
There are a few ways to get around this.
Option 1
Eliminate the implicit conversion from const char [9] to std::string. You can do this by making Setting::operator[] a template that accepts a reference to an array of characters (a reference to a string literal).
template <size_t Size>
Setting &operator[](const char (&key)[Size]);
Option 2
Eliminate the implicit conversion from Setting to int. You can do this by marking the user-defined conversion as explicit.
template <typename T>
explicit operator T() const;
This will require you to update the calling code to use direct initialization instead of copy initialization.
int x{data["TestNode"]};
Option 3
Eliminate the implicit conversion from Setting to int. Another way to do this is by removing the user-defined conversion entirely and using a function.
template <typename T>
T get() const;
Obviously, this will also require you to update the calling code.
int x = data["TestNode"].get<int>();
Some other notes
One thing I noticed about the code is that you didn't mark the user-defined conversion as const. If a member function does not modify the object, you should mark it const so that it can be used on a constant object. So put const after the parameter list:
template<typename T>
operator T() const {
return data;
}
Another thing I noticed was this:
std::shared_ptr<Setting>(new Setting())
Here you're mentioning Setting twice and doing two memory allocations when you could be doing one. It is preferable for code cleanliness and performance to do this instead:
std::make_shared<Setting>()
One more thing: I don't know enough about your design to decide this myself, but do you really need std::shared_ptr? std::unique_ptr is more efficient and is enough in most situations. And do you need a pointer at all? Is there any reason to prefer std::shared_ptr<Setting> or std::unique_ptr<Setting> over plain Setting? Just something to think about.
Consider the following code:
#include <stdio.h>
#include <stdint.h>
class test_class
{
public:
    test_class() {}
    ~test_class() {}
    const int32_t operator[](uint32_t index) const
    {
        return (int32_t)index;
    }
    operator const char *() const
    {
        return "Hello World";
    }
};

int main(void)
{
    test_class tmp;
    printf("%d\n", tmp[3]);
    return 0;
}
When I use the command clang++ -arch i386 test.cc to build this code, clang++ (Apple LLVM version 9.1.0 (clang-902.0.39.1)) yields the following:
test.cc:24:21: error: use of overloaded operator '[]' is ambiguous (with operand types 'test_class' and 'int')
printf("%d\n", tmp[3]);
~~~^~
test.cc:10:17: note: candidate function
const int32_t operator[](uint32_t index) const
^
test.cc:24:21: note: built-in candidate operator[](const char *, int)
printf("%d\n", tmp[3]);
^
test.cc:24:21: note: built-in candidate operator[](const volatile char *, int)
But there is no error if I just use the command clang++ test.cc.
It seems that overloading operator '[]' on i386 behaves differently than on x86_64, and I want to know what exactly the distinction is.
There are two possible interpretations of tmp[3]: the "obvious" one, calling test_class::operator[](uint32_t), and the less obvious one, calling test_class::operator const char*() to convert the object to a const char*, and applying the index to that pointer.
To decide which of the overloads to use, the compiler looks at the conversions involved. There are two arguments for each candidate: tmp and 3. For the first candidate, tmp needs no conversion, but 3 has to be converted from int to uint32_t. For the second candidate, tmp needs to be converted to const char*, and 3 does not have to be converted.
To choose the proper overload, the compiler compares the conversion sequences argument by argument. For the first argument, tmp, the first candidate requires no conversion while the second requires a user-defined conversion, so the first candidate wins there. For the second argument, the first candidate requires an integral conversion while the second requires none, so the second candidate wins there.
In short: the first overload wins on the first argument, and the second overload wins on the second argument. So the call is ambiguous.
You could add an overload operator[](int), which would resolve this particular complaint, because tmp[3] would then match it exactly on both arguments.
Your best bet is probably to get rid of operator[](uint32_t) and replace it with operator[](int).
This is why you have to think carefully about fixed-size types: you can get conversions that you aren't expecting.
(all tests are performed on Microsoft (R) C/C++ Optimizing Compiler Version 19.00.24215.1 for x86)
consider this minimal example:
struct myString
{
    operator const char *( ) const { return &dummy; }
    char& operator[]( unsigned int ) { return dummy; }
    const char& operator[]( unsigned int ) const { return dummy; }
    char dummy;
};

int main()
{
    myString str;
    const char myChar = 'a';
    if( str[(int) 0] == myChar ) return 0; //error, multiple valid overloads
}
according to the overload resolution rules (from cppreference):
F1 is determined to be a better function than F2 if implicit conversions for all arguments of F1 are not worse than the implicit conversions for all arguments of F2, and
1) there is at least one argument of F1 whose implicit conversion is better than the corresponding implicit conversion for that argument of F2,
2) or, if not that (only in the context of non-class initialization by conversion), the standard conversion sequence from the return type of F1 to the type being initialized is better than the standard conversion sequence from the return type of F2.
char& operator[]( unsigned int ) should be better, according to 1):
for the implicit object argument (this, of type myString), it needs no conversion at all, while operator const char *( ) const converts it to const char* and const char& operator[]( unsigned int ) const binds it as const myString. So there is at least one argument with no implicit conversion at all, which happens to be the best possible conversion.
However my compiler yells the following error:
1> [///]\sandbox\sandbox\sandbox.cpp(29): error C2666: 'myString::operator []': 3 overloads have similar conversions
1> [///]\sandbox\sandbox\sandbox.cpp(19): note: could be 'const char &myString::operator [](unsigned int) const'
1> [///]\sandbox\sandbox\sandbox.cpp(18): note: or 'char &myString::operator [](unsigned int)'
1> [///]\sandbox\sandbox\sandbox.cpp(29): note: while trying to match the argument list '(myString, int)'
also note that using if( str[0u] == myChar ) return 0; or removing operator const char *( ) const resolves the error
why is there an error here and what am I getting wrong about overload resolution rules?
edit: it might be a visual C++ bug in this version, any definitive confirmation on this?
Here's a minified version of the problem that reproduces on all compilers I threw at it.
#include <stddef.h>

struct myString
{
    operator char *( );
    char& operator[]( size_t );   // size_t: the unsigned counterpart of ptrdiff_t
};

int main()
{
    myString str;
    if( str[(ptrdiff_t) 0] == 'a' ) return 0; //error, multiple valid overloads
}
Basically, you have two candidate functions to get the char operands for bool operator==(char, char): [over.match.oper]/3
char& myString::operator[]( size_t ) ([over.match.oper]/3.1 => [over.sub])
char& operator[]( char*, ptrdiff_t) ([over.match.oper]/3.3 => [over.built]/14)
Note that if myString::operator[] took a ptrdiff_t instead of a size_t, then it would have hidden the built-in operator per [over.built]/1. So if all you want to do is avoid issues like this, simply ensure any operator[] overload that takes an integral value takes a ptrdiff_t.
I'll skip the viability check [over.match.viable], and go straight to conversion ranking.
char& myString::operator[]( size_t )
For overloading, this is considered to have a leading implicit object parameter, so the signature to be matched is
(myString&, size_t)
myString& => myString&
Standard conversion sequence: Identity (Rank: Exact match) - directly bound reference
ptrdiff_t => size_t
Standard conversion sequence: Lvalue Transformation -> Integral conversion (Rank: Conversion) - signed lvalue to unsigned prvalue
char& operator[]( char*, ptrdiff_t)
myString& => char*
User-defined conversion sequence: Identity + operator char*(myString&)
Note that per [over.match.oper]/7 we don't get a second standard conversion sequence.
ptrdiff_t => ptrdiff_t
Standard conversion sequence: Identity (Rank: Exact match)
Best viable function
First argument
Standard Conversion Sequence is better than User-defined conversion sequence ([over.ics.rank]/2.1)
Second argument
Rank Conversion is worse than Rank Exact Match ([over.ics.rank]/3.2.2)
Result
We cannot satisfy the requirement
if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2)
so neither function is a better function.
Hence, per [over.match.best]/2 it's ambiguous.
How to fix this?
Well, the easiest solution is to never let the parameter to an operator[] overload be something that could be converted to from ptrdiff_t by something other than an Exact Match-ranked conversion. Looking at the conversions table that appears to mean that you should always declare your operator[] member function as X& T::operator[]( ptrdiff_t ). That covers the usual use-case of "Act like an array". As noted above, using precisely ptrdiff_t will suppress even searching for an operator T* candidate by taking the built-in subscript operator off the table.
The other option is to not have both T1 operator[] and operator T2* defined for the class, where T1 and T2 may both fulfill the same parameter of a (possibly implicit) function call. That covers cases where you are using operator[] for clever syntactic things, and end up with things like T T::operator[](X). If X::operator ptrdiff_t() exists for example, and so does T::operator T*(), then you're ambiguous again.
The only use-case for T::operator T*() I can imagine is if you want your type to implicitly convert into a pointer to itself, like a function. Don't do that...
I am trying to implement custom reference in C++. What I want to achieve is to have reference, which does not need to be set at creation. It looks like this
template<typename T>
class myreference
{
public:
    myreference() : data(), isset(false) { }
    explicit myreference(T& _data) : data(&_data), isset(true) { }
    myreference<T>& operator=(T& t)
    {
        if (!isset)
        {
            isset = true;
            data = &t;
        }
        else
            *data = t;
        return *this;
    }
    operator T() const { return *data; }
    operator T&() { return *data; }
private:
    T* data;
    bool isset;
};
It works fine; I can do all of the following, except the last statement.
myreference<int> myref;
int data = 7, test = 3;
myref = data; // reference is set
myref = test; // data is now 3
int& i = myref;
i = 4; // data is now 4
cout << myref; // implicit int conversion
myref = 42; // error C2679: binary '=' : no operator found which takes a right-hand operand of type 'int'
Full error
error C2679: binary '=' : no operator found which takes a right-hand operand of type 'int' (or there is no acceptable conversion)
1> d:\...\main.cpp(33): could be 'myreference<int> &myreference<int>::operator =(const myreference<int> &)'
1> d:\...\main.cpp(16): or 'myreference<int> &myreference<int>::operator =(T &)'
1> with
1> [
1> T=int
1> ]
1> while trying to match the argument list 'myreference<int>, int'
I was searching the web and found similar errors (with different operators) where the solution was to define the operator outside of its class, but that is not possible for operator= for some (I believe good) reason. My question is: what is the compiler complaining about? The argument list is myreference<int>, int, and I have the operator defined for T which, in this case, is int.
You instantiated an object of type myreference<int>:
myreference<int> myref;
So the assignment operator is specialized for parameter type int &. However, you are trying to assign from a temporary object, which can only be bound to a constant reference, so the assignment operator would have to be specialized for parameter type const int &.
Thus you would need to define another object, of type myreference<const int>, for the code to at least compile. For example:
myreference<const int> myref1;
myref1 = 42;
However, in any case the code will have undefined behaviour, because the temporary object will be destroyed after this assignment statement executes, and the data member data will hold an invalid pointer.
Usually such classes prohibit binding to temporary objects precisely to avoid this undefined behaviour.
Your problem is bigger than the compiler error. The compiler error is telling you that you have a fundamental design error.
Your reference's = both acts as a reference rebinder (changing what an empty reference is attached to) and as an assignment (changing the value of the thing it is attached to). These are fundamentally different operations.
One requires a long-term value you can modify later (rebind); the other requires a value you can read from (assignment).
With one function, however, the parameter has to serve both purposes, and those types do not match. A readable-from value is a T const&. A bindable value is a T&.
If you change the parameter type to T const&, the rebind data = &t fails, as it should. If you pass a temporary, like 42, to your =, a rebind attaches to it and becomes invalid at the end of the calling line. The same happens if you assign a constant like int const foo = 3; to it.
You can 'fix' this with a const_cast, but that just throws the type checking out the window: it hides the undefined behaviour instead of giving you a diagnostic.
The T& parameter, on the other hand, cannot bind to a 42 temporary, nor to a constant like const int foo = 3. That is great in the rebind case, but makes assignment rather useless.
If you have two operations, =(T const&) and .rebind(T&), your problem goes away. As a side effect, = when not yet bound is undefined behaviour. But that was basically true before.
Your reference is probably better called a 'pointer' at this point.
myreference<T>& operator=(T& t)
myref = 42;
A non-const reference can't be bound to a temporary. You can change the operator to take its parameter by const reference:
myreference<T>& operator=(const T& t)
You also need to change the rebind line to make this compile:
data = &t; // replace this
data = const_cast<T*>(&t); // with this
Since the assignment operator expects a variable (in this case an int), you are not allowed to pass it a plain value (42).
So use an integer variable:
int x = 42;
myref = x;
Happy coding.
Consider I have the following minimal code:
#include <boost/type_traits.hpp>
template<typename ptr_t>
struct TData
{
    typedef typename boost::remove_extent<ptr_t>::type value_type;
    ptr_t data;
    value_type & operator [] ( size_t id ) { return data[id]; }
    operator ptr_t & () { return data; }
};

int main( int argc, char ** argv )
{
    TData<float[100][100]> t;
    t[1][1] = 5;
    return 0;
}
GNU C++ gives me the error:
test.cpp: In function 'int main(int, char**)':
test.cpp:16: error: ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for second:
test.cpp:9: note: candidate 1: typename boost::remove_extent<ptr_t>::type& TData<ptr_t>::operator[](size_t) [with ptr_t = float [100][100]]
test.cpp:16: note: candidate 2: operator[](float (*)[100], int) <built-in>
My questions are:
Why does GNU C++ give the error, while the Intel C++ compiler does not?
Why does changing operator[] to the following lead to code that compiles without errors?
value_type & operator [] ( int id ) { return data[id]; }
Links to the C++ Standard are appreciated.
As far as I can see there are two conversion paths:
(1) int to size_t, then (2) operator[](size_t).
(1) operator ptr_t&(), then (2) int to size_t, then (3) the built-in operator[](size_t).
It's actually quite straightforward. For t[1], overload resolution has these candidates:
Candidate 1 (builtin: 13.6/13) (T being some arbitrary object type):
Parameter list: (T*, ptrdiff_t)
Candidate 2 (your operator)
Parameter list: (TData<float[100][100]>&, something unsigned)
The argument list is given by 13.3.1.2/6:
The set of candidate functions for overload resolution is the union of the member candidates, the non-member candidates, and the built-in candidates. The argument list contains all of the operands of the operator.
Argument list: (TData<float[100][100]>, int)
You see that the first argument matches the first parameter of Candidate 2 exactly. But it needs a user defined conversion for the first parameter of Candidate 1. So for the first parameter, the second candidate wins.
You also see that the outcome of the second position depends. Let's make some assumptions and see what we get:
ptrdiff_t is int: The first candidate wins, because it has an exact match, while the second candidate requires an integral conversion.
ptrdiff_t is long: Neither candidate wins, because both require an integral conversion.
Now, 13.3.3/1 says
Let ICSi(F) denote the implicit conversion sequence that converts the i-th argument in the list to the type of the i-th parameter of viable function F.
A viable function F1 is defined to be a better function than another viable function F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then ... for some argument j, ICSj(F1) is a better conversion sequence than ICSj(F2), or, if not that ...
For our first assumption, we don't get an overall winner, because Candidate 2 wins for the first parameter, and Candidate 1 wins for the second parameter. I call it the criss-cross. For our second assumption, the Candidate 2 wins overall, because neither parameter had a worse conversion, but the first parameter had a better conversion.
For the first assumption, it does not matter that the integral conversion (int to unsigned) in the second parameter is less of an evil than the user defined conversion of the other candidate in the first parameter. In the criss-cross, rules are crude.
That last point might still confuse you, because of all the fuss around, so let's make an example
void f(int, int) { }
void f(long, char) { }
int main() { f(0, 'a'); }
This gives you the same confusing GCC warning (which, I remember, was actually confusing the hell out of me when I first received it some years ago), because 0 converts to long worse than 'a' to int - yet you get an ambiguity, because you are in a criss-cross situation.
With the expression:
t[1][1] = 5;
The compiler must focus on the left hand side to determine what goes there, so the = 5; is ignored until the lhs is resolved. Leaving us with the expression: t[1][1], which represents two operations, with the second one operating on the result from the first one, so the compiler must only take into account the first part of the expression: t[1].The actual type is (TData&)[(int)]
The call does not exactly match any function, as operator[] for TData is defined as taking a size_t argument, so to be able to use it the compiler would have to convert 1 from int to size_t with an implicit conversion. That is the first choice. The other possible path is applying the user-defined conversion to turn TData<float[100][100]> into float[100][100].
The int to size_t conversion is an integral conversion and is ranked as Conversion in Table 9 of the standard, as is the user-defined conversion from TData<float[100][100]> to float[100][100] according to §13.3.3.1.2/4. The conversion from float [100][100]& to float (*)[100] is ranked as Exact Match in Table 9. The compiler is not allowed to choose between those two conversion sequences.
Q1: Not all compilers adhere to the standard in the same way. It is quite common to find out that in some specific cases a compiler will perform differently than the others. In this case, the g++ implementors decided to whine about the standard not allowing the compiler to choose, while the Intel implementors probably just silently applied their preferred conversion.
Q2: When you change the signature of the user defined operator[], the argument matches exactly the passed in type. t[1] is a perfect match for t.operator[](1) with no conversions whatsoever, so the compiler must follow that path.
I don't know the exact answer, but...
Because of this operator:
operator ptr_t & () { return data; }
there already exists a built-in [] operator (array subscripting) which accepts an integer index. So we have two [] operators, the built-in one and the one defined by you. Both accept an integer index, so this is probably considered an ambiguous overload.
EDIT: this should work as you intended:
template<typename ptr_t>
struct TData
{
    ptr_t data;
    operator ptr_t & () { return data; }
};
It seems to me that with
t[1][1] = 5;
the compiler has to choose between:
value_type & operator [] ( size_t id ) { return data[id]; }
which would match if the int literal were to be converted to size_t, or
operator ptr_t & () { return data; }
followed by normal array indexing, in which case the type of the index matches exactly.
As to the error, it seems GCC as a compiler extension would like to choose the first overload for you, and you are compiling with the -pedantic and/or -Werror flag which forces it to stick to the word of the standard.
(I'm not in a -pedantic mood, so no quotes from the standard, especially on this topic.)
I have tried to show the two candidates for the expression t[1][1] below. Both are of equal rank (CONVERSION), hence the ambiguity.
I think the catch here is that the built-in [] operator as per 13.6/13 is defined as
T& operator[](T*, ptrdiff_t);
On my system ptrdiff_t is defined as 'int' (does that explain x64 behavior?)
template<typename ptr_t>
struct TData
{
    typedef typename boost::remove_extent<ptr_t>::type value_type;
    ptr_t data;
    value_type & operator [] ( size_t id ) { return data[id]; }
    operator ptr_t & () { return data; }
};

typedef float (&ATYPE) [100][100];

int main( int argc, char ** argv )
{
    TData<float[100][100]> t;

    t[size_t(1)][size_t(1)] = 5; // note the cast. This works now. No ambiguity as operator[] is preferred over built-in operator

    t[1][1] = 5; // error, as per the logic given below for Candidate 1 and Candidate 2

    // Candidate 1 (CONVERSION rank)
    // User defined conversion from 'TData' to float array
    (t.operator[](1))[1] = 5;

    // Candidate 2 (CONVERSION rank)
    // User defined conversion from 'TData' to ATYPE
    (t.operator ATYPE())[1][1] = 6;

    return 0;
}
EDIT:
Here is what I think:
For candidate 1 (operator []) the conversion sequence S1 is
User defined conversion - Standard Conversion (int to size_t)
For candidate 2, the conversion sequence S2 is
User defined conversion -> int to ptrdiff_t (for first argument) -> int to ptrdiff_t (for second argument)
The conversion sequence S1 is a subset of S2 and is supposed to be better. But here is the catch...
Here the below quote from Standard should help.
§13.3.3.2/3 states: "Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if S1 is a proper subsequence of S2 (comparing the conversion sequences in the canonical form defined by 13.3.3.1.1, excluding any Lvalue Transformation; the identity conversion sequence is considered to be a subsequence of any non-identity conversion sequence) or, if not that..."
§13.3.3.2 states: "User-defined conversion sequence U1 is a better conversion sequence than another user-defined conversion sequence U2 if they contain the same user-defined conversion function or constructor and if the second standard conversion sequence of U1 is better than the second standard conversion sequence of U2."
Here the first part of the condition, "if they contain the same user-defined conversion function or constructor", does not hold. So even if the second part, "if the second standard conversion sequence of U1 is better than the second standard conversion sequence of U2", did hold, neither S1 nor S2 is preferred over the other.
That is why gcc emits the curious error message "ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for the second".
This explains the ambiguity quite well, IMHO.
Overload resolution is a headache. But since you stumbled on a fix (eliminating the conversion of the index operand to operator[]) that is too specific to the example (literals have type int, but most variables you'll be using won't), maybe you can generalize it:
template< typename IT>
typename boost::enable_if< typename boost::is_integral< IT >::type, value_type & >::type
operator [] ( IT id ) { return data[id]; }
Unfortunately I can't test this, because GCC 4.2.1 and 4.5 accept your example without complaint under -pedantic, which really raises the question of whether it's a compiler bug or not.
Also, once I eliminated the Boost dependency, it passed Comeau.