C++ Operator overloading - casting from class - c++

While porting Windows code to Linux, I encountered the following error message with GCC 4.2.3. (Yes, I'm aware that it's a slightly old version, but I can't easily upgrade.)
main.cpp:16: error: call of overloaded ‘list(MyClass&)’ is ambiguous
/usr/include/c++/4.2/bits/stl_list.h:495: note: candidates are: std::list<_Tp, _Alloc>::list(const std::list<_Tp, _Alloc>&) [with _Tp = unsigned char, _Alloc = std::allocator<unsigned char>]
/usr/include/c++/4.2/bits/stl_list.h:484: note: std::list<_Tp, _Alloc>::list(size_t, const _Tp&, const _Alloc&) [with _Tp = unsigned char, _Alloc = std::allocator<unsigned char>]
I'm using the following code to generate this error.
#include <list>

class MyClass
{
public:
    MyClass() {}
    operator std::list<unsigned char>() const { std::list<unsigned char> a; return a; }
    operator unsigned char() const { unsigned char a; return a; }
};

int main()
{
    MyClass a;
    std::list<unsigned char> b = (std::list<unsigned char>)a;
    return 0;
}
Has anyone experienced this error? More importantly, how do I get around it? (Sure, it's possible to avoid the overloads entirely by using functions such as GetChar(), GetList() etc., but I'd like to avoid that.)
(By the way, removing "operator unsigned char()" removes the error.)

The ambiguity comes from the interpretation of the cast-expression.
When choosing the conversion, the compiler first considers a static_cast-style cast and has to resolve an initialization which looks like this:
std::list<unsigned char> tmp( a );
This construction is ambiguous: a has user-defined conversions to both std::list<unsigned char> and unsigned char, and std::list<unsigned char> has both a constructor taking a const std::list<unsigned char>& and a constructor taking a size_t (to which an unsigned char can be promoted).
When casting to a const std::list<unsigned char>&, this initialization is considered instead:
const std::list<unsigned char>& tmp( a );
In this case, when the user-defined conversion to std::list<unsigned char> is chosen, the new reference can bind directly to the result of the conversion. If the user-defined conversion to unsigned char were chosen, a temporary object of type std::list<unsigned char> would have to be created, which makes that option a worse conversion sequence than the former one.
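To see the two competing interpretations spelled out, here is a minimal, self-contained sketch that resolves each path by hand (it repeats the MyClass from the question; the variable names are only for illustration):

#include <list>

class MyClass
{
public:
    MyClass() {}
    operator std::list<unsigned char>() const { return std::list<unsigned char>(); }
    operator unsigned char() const { return 0; }
};

int main()
{
    MyClass a;
    // Path 1: conversion to std::list<unsigned char>, then the copy constructor.
    std::list<unsigned char> viaList(static_cast<const std::list<unsigned char>&>(a));
    // Path 2: conversion to unsigned char, then the (size_t count, value) constructor.
    std::list<unsigned char> viaChar(static_cast<unsigned char>(a), 'x');
    return 0;
}

Each line compiles on its own because the static_cast picks one conversion operator explicitly; the ambiguity only arises when the compiler has to choose between the two paths itself.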

I've simplified your example to the following:
typedef unsigned int size_t;

template <typename T>
class List
{
public:
    typedef size_t size_type;
    List (List const &);
    List (size_type i, T const & = T());
};

typedef List<unsigned char> UCList;

class MyClass
{
public:
    operator UCList const () const;
    operator unsigned char () const;
};

void foo ()
{
    MyClass mc;
    (UCList)mc;
}
The first point is that the standard defines the C-style cast in terms of the C++-style casts, and in this case the appropriate one is static_cast. So the above cast is equivalent to:
static_cast<UCList> (mc);
The definition of static_cast says:
An expression e can be explicitly converted to a type T using a static_cast of the form static_cast<T>(e) if the declaration "T t(e);" is well-formed, for some invented temporary variable t (8.5).
So the semantics for the cast are the same as for:
UCList tmp (mc);
From 13.3.1.3 we get the set of candidate constructors that we can use in UCList:
    UCList (UCList const &);               // #1
    UCList (size_type, T const & = T());   // #2
What happens next is two separate overload resolution steps, one for each conversion operator.
Converting to #1: with a target type of UCList const &, overload resolution selects between the following conversion operators: "operator UCList const ()" and "operator unsigned char ()". Using unsigned char would require an additional user-defined conversion, so it is not a viable function for this step. Overload resolution therefore succeeds and will use "operator UCList const ()".
Converting to #2: with a target type of size_t (the default argument does not take part in overload resolution), overload resolution again selects between the conversion operators "operator UCList const ()" and "operator unsigned char ()". This time there is no conversion from UCList to unsigned int, so that operator is not viable, while an unsigned char can be promoted to size_t. Overload resolution therefore succeeds and will use "operator unsigned char ()".
But now, back at the top level, there are two separate and independent overload resolution steps that have each successfully found a way to construct a UCList from mc. The result is therefore ambiguous.
To explain that last point, this example is different from the normal overload resolution case. Normally there is a 1:n relationship between argument and parameter types:
void foo (char);
void foo (short);
void foo (int);

void bar() {
    int i;
    foo (i);
}
Here there are the conversions i=>char, i=>short and i=>int. These are compared by overload resolution and the int overload is selected.
In the above case we have an m:n relationship. The standard outlines the rules to select for each individual argument and all of the 'n' parameters, but that's where it ends, it does not specify how we should decide between using the different 'm' arguments.
Hope this makes some sense!
UPDATE:
The two kinds of initialization syntax here are:
UCList t1 (mc);
UCList t2 = mc;
't1' is a direct-initialization (13.3.1.3) and all constructors are included in the overload set. This is almost like having more than one user-defined conversion: there is the set of constructors and the set of conversion operators (i.e. m:n).
In the case of 't2' the syntax is copy-initialization (13.3.1.4) and the rules are different:
Under the conditions specified in 8.5, as part of a copy-initialization of an object of class type, a user-defined conversion can be invoked to convert an initializer expression to the type of the object being initialized. Overload resolution is used to select the user-defined conversion to be invoked.
In this case there is just one destination type, UCList, so only the set of conversion operator overloads is considered, i.e. we do not consider the other constructors of UCList.
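Putting the two forms side by side, here is a sketch using the simplified types from above, with trivial definitions added so it is self-contained; only the initialization syntax differs:

template <typename T>
class List
{
public:
    typedef unsigned long size_type;
    List (List const &) {}
    List (size_type, T const & = T()) {}
};

typedef List<unsigned char> UCList;

class MyClass
{
public:
    operator UCList const () const { return UCList(0, 0); }
    operator unsigned char () const { return 0; }
};

int main ()
{
    MyClass mc;
    UCList t2 = mc;     // copy-initialization: only the conversion operators compete; compiles
    // UCList t1 (mc);  // direct-initialization: constructors #1 and #2 both reachable; ambiguous
    return 0;
}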

It compiles properly if you remove the cast, and I've checked that the operator std::list is being executed.
int main()
{
    MyClass a;
    std::list<unsigned char> b = a;
    return 0;
}
Or if you cast it to a const reference.
int main()
{
    MyClass a;
    std::list<unsigned char> b = (const std::list<unsigned char>&)a;
    return 0;
}

Related

Why is a double convertible to a const-reference of seemingly any primitive?

Consider the following code:
#include <iostream>

float func(char const & val1, unsigned int const & val2)
{
    return val1 + val2;
}

int main() {
    double test1 = 0.2;
    double test2 = 0.3;
    std::cout << func(test1, test2) << std::endl;
    return 0;
}
This compiles and runs despite the fact that I am passing in a double to a function that takes a const-reference to types that are smaller than a double (on my system, sizeof(double) == 8, while sizeof(unsigned int) == 4, and sizeof(char) == 1 by definition). If the reference isn't const, compilation fails (e.g., float func(char & val1, unsigned int & val2) instead of the current definition) with an error:
cannot bind non-const lvalue reference of type 'char&' to an rvalue of type 'char'
I get the exact same behavior when testing this with GCC, Clang, ICC, and MSVC on Godbolt, so it appears to be standard. What is it about const references that causes this to be accepted, whereas a non-const reference isn't? Also, I used -Wall -pedantic; why am I not getting a warning about a narrowing conversion? I do when the function takes its parameters by value instead of by reference...
It is indeed standard.
test1 and test2 are converted to anonymous temporaries of type char and unsigned int, to which the const references in the function can bind. If you set your compiler to warn you about narrowing conversions (e.g. -Wconversion), it will output a message.
These bindings are not possible if the function parameters are non-const references, and your compiler is correctly issuing a diagnostic in that case.
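A small sketch of what that binding means in practice (the helper peek below is hypothetical, purely for illustration): the const reference binds to the materialized temporary, not to the original double.

#include <iostream>

// 'ref' binds to a temporary unsigned int created from the double argument.
void peek(const unsigned int &ref, double &orig)
{
    orig = 42.0;                // modify the original double
    std::cout << ref << '\n';   // still prints 0: 'ref' refers to the temporary, not to 'orig'
}

int main() {
    double d = 0.3;
    peek(d, d);                 // first argument: double -> temporary unsigned int (value 0)
    return 0;
}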
One fix is to delete a better overload match:
float func(double, double) = delete;
As a complement to the accepted answer, particularly the approach
One fix is to delete a better overload match:
float func(double, double) = delete;
one could also approach it from the other way around, namely by deleting all overloads that do not exactly match your intended parameter types. If you want to avoid any implicit conversions (including promotions), you could define func as a deleted non-overloaded function template, and define explicit specializations of func only for the specific argument types you'd like to have overloads for. E.g.:
// Do not overload the primary function template 'func'.
// http://www.gotw.ca/publications/mill17.htm
template< typename T, typename U >
float func(const T& val1, const U& val2) = delete;

template<>
float func(char const& val1, unsigned int const& val2)
{
    return val1 + val2;
}

int main() {
    double test1 = 0.2;
    double test2 = 0.3;
    char test3 = 'a';
    unsigned int test4 = 4U;
    signed int test5 = 5;
    //(void)func(test1, test2); // error: call to deleted function 'func' (... [with T = double, U = double])
    //(void)func(test2, test3); // error: call to deleted function 'func' (... [with T = double, U = char])
    (void)func(test3, test4);   // OK
    //(void)func(test3, test5); // error: call to deleted function 'func' (... [with T = char, U = int])
    return 0;
}
To emphasize once more: take care if you intend to overload the primary function template, as overload resolution for overloaded and explicitly specialized function templates can be somewhat confusing; explicit specializations do not participate in the first step of overload resolution, as the sketch below illustrates.
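A minimal sketch of that pitfall (the name f and the types are only for illustration): an explicit specialization belongs to its primary template, so it never competes with the other overloads during overload resolution.

#include <iostream>

template <class T> void f(T)    { std::cout << "primary template\n"; }
template <>        void f(int*) { std::cout << "specialization of the primary template\n"; }
template <class T> void f(T*)   { std::cout << "overload for pointers\n"; }

int main() {
    int i = 0;
    f(&i);   // prints "overload for pointers": the T* overload is more specialized,
             // and the int* specialization of the primary template is never considered
    return 0;
}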

Use of overloaded operator '[]' is ambiguous with template cast operator

The following code compiles fine with GCC 7.3.0, but does not compile with Clang 6.0.0.
#include <string>

struct X {
    X() : x(10) {}
    int operator[](std::string str) { return x + str[0]; }
    template <typename T> operator T() { return x; } // (1) fails only in clang
    //operator int() { return x; }                   // (2) fails both in gcc and clang
private:
    int x;
};

int main() {
    X x;
    int y = 20;
    int z = int(x);
    return x["abc"];
}
I used the command clang++ 1.cpp -std=c++98, and tried the standard versions c++98, 11, 14, 17 and 2a; in all cases the error is the same. The error message from Clang is the following:
1.cpp:14:13: error: use of overloaded operator '[]' is ambiguous (with operand types 'X' and 'const char [4]')
return x["abc"];
~^~~~~~
1.cpp:5:9: note: candidate function
int operator[](std::string str) { return x + str[0]; }
^
1.cpp:14:13: note: built-in candidate operator[](long, const char *)
return x["abc"];
^
1.cpp:14:13: note: built-in candidate operator[](long, const volatile char *)
1 error generated.
Which compiler correctly follows the standard in this situation? Is this valid code?
The description of the problem can be found here, but it is about situation (2). I am interested in case (1).
GCC is wrong. The template case shouldn't make any difference.
[over.match.best]/1 says:
Define ICSi(F) as follows:
...
let ICSi(F) denote the implicit conversion sequence that converts the i-th argument in the list to the type of the i-th parameter of viable function F. [over.best.ics] defines the implicit conversion sequences and [over.ics.rank] defines what it means for one implicit conversion sequence to be a better conversion sequence or worse conversion sequence than another.
Given these definitions, a viable function F1 is defined to be a better function than another viable function F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and ...
The two viable candidates are
int operator[](X&, std::string); // F1
const char& operator[](std::ptrdiff_t, const char*); // F2
... and ICS1(F1) (X -> X&) is better than ICS1(F2) (X -> std::ptrdiff_t), no matter whether or not X -> std::ptrdiff_t is through a template conversion function, but ICS2(F1) (const char[4] -> std::string) is worse than ICS2(F2) (const char[4] -> const char*). So neither function is better than the other, resulting in ambiguity.
This has been reported as a GCC bug.
The issue is that there is one conversion on each path:
first from "abc" to std::string and then operator[] call.
second from x to std::ptrdiff_t and then the operator[] for an std::ptrdiff_t and a const char*.
So the solution is to make the conversion operator explicit:
int operator[](const std::string& str) { return x + str[0]; }

template <typename T>
explicit operator T() { return x; } // explicit: no longer usable by the built-in operator[] candidate
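Put together, a self-contained sketch of the fixed class (assuming C++11 or later, since explicit conversion operators require it):

#include <string>

struct X {
    X() : x(10) {}
    int operator[](const std::string& str) { return x + str[0]; }
    template <typename T>
    explicit operator T() { return x; } // cannot feed the built-in operator[](ptrdiff_t, const char*) implicitly
private:
    int x;
};

int main() {
    X x;
    int z = int(x);  // a function-style cast is direct-initialization, so the explicit operator still applies
    (void)z;
    return x["abc"]; // unambiguous: only X::operator[](const std::string&) is viable
}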

User-defined conversion operator template and built-in operators: no match for operator

Consider the following MCVE.
#include <type_traits>

struct A {
    template<typename T, typename std::enable_if<std::is_same<T,int>::value,int>::type = 0>
    operator T() const { return static_cast<T>(1); }
};

int main() {
    int x = 1;
    A a;
    return x + a;
}
Clang compiles it fine.
But GCC fails with:
error: no match for 'operator+' (operand types are 'int' and 'A')
return x + a;
~~^~~
Question: who is right and why?
I believe clang is right.
To perform overload resolution on +, since at least one argument has class type, we consider member, non-member, and built-in candidates. There aren't any member or non-member candidates, so that part is easy enough. There is a built-in candidate int operator+(int, int), which is the only candidate. That candidate is viable because A is convertible to int directly: there is a standard conversion binding the implicit object parameter to const A&, then the user-defined conversion from that to int, and no further conversion is necessary. As we have one viable candidate, it is trivially the best viable candidate.
Note that if A just had operator int() const { return 1; }, gcc would accept it. It's just the conversion function template that fails to be considered.
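For comparison, a minimal sketch of the non-template version, which both compilers accept (only the conversion operator differs from the MCVE):

struct A {
    operator int() const { return 1; } // plain conversion function instead of a constrained template
};

int main() {
    int x = 1;
    A a;
    return x + a; // built-in operator+(int, int); A converts to int
}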

error: ambiguous overload for ‘operator==’

I am trying to understand why my C++ compiler is confused by the following piece of code:
struct Enum
{
    enum Type
    {
        T1,
        T2
    };

    Enum( Type t ) : t_(t) {}

    operator Type () const { return t_; }

private:
    Type t_;

    // prevent automatic conversion for any other built-in types such as bool, int, etc.
    template<typename T> operator T () const;
};

enum Type2
{
    T1,
    T2
};

int main()
{
    bool b;

    Type2 e1 = T1;
    Type2 e2 = T2;
    b = e1 == e2;

    Enum t1 = Enum::T1;
    Enum t2 = Enum::T2;
    b = t1 == t2;

    return 0;
}
Compilation leads to:
$ c++ enum.cxx
enum.cxx: In function ‘int main()’:
enum.cxx:30:10: error: ambiguous overload for ‘operator==’ (operand types are ‘Enum’ and ‘Enum’)
b = t1 == t2;
^
enum.cxx:30:10: note: candidates are:
enum.cxx:30:10: note: operator==(Enum::Type, Enum::Type) <built-in>
enum.cxx:30:10: note: operator==(int, int) <built-in>
I understand that I can solve the symptoms by providing an explicit operator==:
bool operator==(Enum const &rhs) { return t_ == rhs.t_; }
But what I am really looking for is an explanation of why comparing the enum leads to an ambiguity only when it is wrapped in a class. I wrote this small enum wrapper since I am required to use only C++03 in my code.
The call is ambiguous because both the Enum::Type and the int built-in operator== are viable with a single user-defined conversion: the former uses the operator Type conversion and the latter uses the operator T conversion operator template.
It is unclear why you have a conversion to any type, but if you remove that operator, the code works.
If you are using C++11 you should use scoped enums instead.
The underlying type of an enum is an implementation-defined integral type, most often int. So whatever operator you implement for an enum is effectively an operator for that integral type, and redefining operators for built-in types such as int, double or char is not allowed, as it would change the basic meaning of the language itself.
An enum can be implicitly converted to int (as far as I understand, this is for backward compatibility with C). If you can use C++11, you can use enum class to solve this issue, because scoped enums do not allow implicit conversion to int.
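A minimal sketch of that C++11 alternative (only the enum declaration changes):

// Scoped enums do not convert implicitly to int, so the built-in
// operator==(int, int) candidate never comes into play.
enum class Type { T1, T2 };

int main()
{
    Type t1 = Type::T1;
    Type t2 = Type::T2;
    bool b = (t1 == t2);  // unambiguous: the built-in operator== for the enumeration type
    return b ? 1 : 0;
}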

"ambiguous overload for 'operator[]'" if conversion operator to int exist

I'm trying to implement both a vector-like and a map-like [] operator for a class, but I get error messages from my compilers (g++ and clang++). I found out that they only occur if the class also has conversion operators to integer types.
Now I have two problems. The first is that I don't know why the compiler can't distinguish between [](const std::string&) and [](size_t) when the class has conversion operators to ints.
The second is that I need both the conversion and the index operators. How can I fix that?
works:
#include <stdint.h>
#include <string>

struct Foo
{
    Foo& operator[](const std::string &foo) {}
    Foo& operator[](size_t index) {}
};

int main()
{
    Foo f;
    f["foo"];
    f[2];
}
does not work:
#include <stdint.h>
#include <string>

struct Foo
{
    operator uint32_t() {}
    Foo& operator[](const std::string &foo) {}
    Foo& operator[](size_t index) {}
};

int main()
{
    Foo f;
    f["foo"];
    f[2];
}
compiler error:
main.cpp: In function 'int main()':
main.cpp:14:9: error: ambiguous overload for 'operator[]' in 'f["foo"]'
main.cpp:14:9: note: candidates are:
main.cpp:14:9: note: operator[](long int, const char*) <built-in>
main.cpp:7:7: note: Foo& Foo::operator[](const string&)
main.cpp:8:7: note: Foo& Foo::operator[](size_t) <near match>
main.cpp:8:7: note: no known conversion for argument 1 from 'const char [4]' to 'size_t {aka long unsigned int}'
The problem is that your class has a conversion operator to uint32_t, so the compiler does not know whether to:
1. Construct a std::string from the string literal and invoke your overload accepting an std::string;
2. Convert your Foo object into an uint32_t and use it as an index into the string literal.
While option 2 may sound confusing, consider that the following expression is legal in C++:
1["foo"];
This is because of how the built-in subscript operator is defined. Per Paragraph 8.3.4/6 of the C++11 Standard:
Except where it has been declared for a class (13.5.5), the subscript operator [] is interpreted in such a way that E1[E2] is identical to *((E1)+(E2)). Because of the conversion rules that apply to +, if E1 is an array and E2 an integer, then E1[E2] refers to the E2-th member of E1. Therefore, despite its asymmetric appearance, subscripting is a commutative operation.
Therefore, the above expression 1["foo"] is equivalent to "foo"[1], which evaluates to 'o'. To resolve the ambiguity, you can either make the conversion operator explicit (in C++11):
struct Foo
{
    explicit operator uint32_t() { /* ... */ }
//  ^^^^^^^^
};
Or you can leave that conversion operator as it is, and construct the std::string object explicitly:
f[std::string("foo")];
// ^^^^^^^^^^^^ ^
Alternatively, you can add a further overload of the subscript operator that accepts a const char*, which would be a better match than any of the above (since it requires no user-defined conversion):
struct Foo
{
    operator uint32_t() { /* ... */ }
    Foo& operator[](const std::string &foo) { /* ... */ }
    Foo& operator[](size_t index) { /* ... */ }
    Foo& operator[](const char* foo) { /* ... */ }
//  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
};
Also notice that your functions have a non-void return type but currently lack a return statement. This injects undefined behavior into your program.
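As an aside, a tiny sketch verifying the commutativity claim from the quoted standard paragraph above (the asserts are only for illustration):

#include <cassert>

int main()
{
    assert(1["foo"] == 'o'); // same as "foo"[1]
    assert("foo"[1] == 'o');
    return 0;
}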
The problem is that f["foo"] can be resolved as:
Convert "foo" to std::string (be it s) and do f[s] calling Foo::operator[](const std::string&).
Convert f to integer calling Foo::operator int() (be it i) and do i["foo"] using the well known fact that built-in [] operator is commutative.
Both have one custom type conversion, hence the ambiguity.
The easy solution is to add yet another overload:
Foo& operator[](const char *foo) {}
Now, calling f["foo"] will call the new overload without needing any custom type conversion, so the ambiguity is broken.
NOTE: the conversion from char[4] (the type of "foo") to const char* is considered trivial and doesn't count.
As noted in other answers, your problem is that the built-in [] commutes: a[b] is the same as b[a] when one operand is a char const*, and with your class being convertible to uint32_t that interpretation is as good a match as converting the char* to std::string.
What I'm providing here is a way to make an "extremely attractive overload" for when you are having exactly this kind of problem, where an overload doesn't get called despite your belief that it should.
So here is a Foo with an "extremely attractive overload" for std::string:
struct Foo
{
operator uint32_t() {return 1;}
Foo& lookup_by_string(const std::string &foo) { return *this; }
Foo& operator[](size_t index) {return *this;}
template<
typename String,
typename=typename std::enable_if<
std::is_convertible< String, std::string >::value
>::type
> Foo& operator[]( String&& str ) {
return lookup_by_string( std::forward<String>(str) );
}
};
where we create a separate "lookup by string" member function, then write a template operator[] that captures any type that can be converted into a std::string.
Because the user-defined conversion is "hidden" inside the body of the template operator[], no user-defined conversion is needed while checking for a match, so this overload is preferred to alternatives that do require user-defined conversions (like the built-in uint32_t[const char*]). In effect, it is a "more attractive" overload than any overload that doesn't match the arguments exactly.
This can lead to problems: if you have another overload that takes a const Bar&, and Bar has a conversion to std::string, the template above may surprise you and capture the passed-in Bar, because both rvalues and non-const lvalues match the template [] signature better than [const Bar&]! A sketch of that caveat follows.
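A minimal, self-contained sketch of that caveat (the type Bar and these definitions are hypothetical, purely for illustration):

#include <string>
#include <type_traits>
#include <utility>

struct Bar {
    operator std::string() const { return "bar"; }
};

struct Foo {
    Foo& lookup_by_string(const std::string &) { return *this; }

    Foo& operator[](const Bar &) { return *this; } // the overload we intended to win

    template<
        typename String,
        typename = typename std::enable_if<
            std::is_convertible< String, std::string >::value
        >::type
    >
    Foo& operator[]( String&& str ) {              // greedy template overload
        return lookup_by_string( std::forward<String>(str) );
    }
};

int main() {
    Foo f;
    Bar b;
    f[b]; // surprise: String deduces to Bar&, which binds better than const Bar&, so the template wins
    return 0;
}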