Operators overloaded within class declaration:
class Asdf {
public:
    operator float() const;
    Asdf operator+(const Asdf&) const;
    Asdf operator+(float);
};

int main()
{
    Asdf object1, object2, object3;
    // Receiving error: "more than one operator '+' matches these operands"
    object1 = object2 + object3;
    _getch();
    return 0;
}
Errors:
error C2666: 'Asdf::operator +' : 3 overloads have similar conversions
could be 'Asdf Asdf::operator +(float)'
'Asdf Asdf::operator +(const Asdf &) const'
When I remove the overloaded float conversion operator (and the conversions that used it), the code compiles properly.
Implicit conversion operators tend to invite these kinds of ambiguities, especially when combined with implicit constructors.
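Concretely, here is a rough sketch of the viable candidates the compiler has to weigh for object2 + object3 (this is the reasoning, not the literal compiler output):

// Viable candidates for object2 + object3:
//
//   object2.operator+(object3);           // Asdf::operator+(const Asdf&) const
//       right operand: exact match; left operand binds to a *const* Asdf
//
//   object2.operator+((float)object3);    // Asdf::operator+(float)
//       left operand binds to a non-const Asdf (a closer match for object2);
//       right operand needs the user-defined operator float()
//
//   (float)object2 + (float)object3;      // built-in operator+ on floats
//       both operands need the user-defined operator float()
//
// Each member candidate is better for one operand and worse for the other,
// so overload resolution cannot pick a single winner and MSVC reports C2666.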
From C++ Coding Standards:
Consider overloading to avoid implicit type conversions.
Do not multiply objects beyond necessity (Occam's Razor): Implicit
type conversions provide syntactic convenience (but see Item 40). But
when the work of creating temporary objects is unnecessary and
optimization is appropriate (see Item 8), you can provide overloaded
functions with signatures that match common argument types exactly and
won't cause conversions.
Not all change is progress: Implicit conversions can often do more
damage than good. Think twice before providing implicit conversions to
and from the types you define, and prefer to rely on explicit
conversions (explicit constructors and named conversion functions).
Much of that advice is about efficiency and the unexpected behavior that implicit conversions can cause, but ambiguity errors during overload resolution are exactly the kind of side effect you can easily run into when you provide implicit conversion operators and/or implicit constructors.
Solution: make your operator explicit or try to avoid providing it at all. It may be convenient, but it can invite nasty surprises like this (the compiler error is actually the least nasty).
This is happening because the conversion operator provides an implicit conversion from your class to float. As a result, the other addition candidates involving floats get in the way.
To remedy this, the best solution if you really want that conversion to be possible is to mark it explicit:
explicit operator float() const;
Asdf a;
float f = static_cast<float>(a);
However, this only works in C++11. In C++03, a better option would be to use a function instead:
float toFloat() const;
Asdf a;
float f = a.toFloat();
You can find the text below in Appendix B of the book "C++ Templates: The Complete Guide" by David Vandevoorde and Nicolai Josuttis.
B.2 Simplified Overload Resolution
Given this first principle, we are left with specifying how well a
given argument matches the corresponding parameter of a viable
candidate. As a first approximation we can rank the possible matches
as follows (from best to worst):
1. Perfect match. The parameter has the type of the expression, or it has a type that is a reference to the type of the expression (possibly with added const and/or volatile qualifiers).
2. Match with minor adjustments. This includes, for example, the decay of an array variable to a pointer to its first element, or the addition of const to match an argument of type int** to a parameter of type int const* const*.
3. Match with promotion. Promotion is a kind of implicit conversion that includes the conversion of small integral types (such as bool, char, short, and sometimes enumerations) to int, unsigned int, long, or unsigned long, and the conversion of float to double.
4. Match with standard conversions only. This includes any sort of standard conversion (such as int to float) but excludes the implicit call to a conversion operator or a converting constructor.
5. Match with user-defined conversions. This allows any kind of implicit conversion.
6. Match with ellipsis. An ellipsis parameter can match almost any type (but non-POD class types result in undefined behavior).
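As a rough illustration of that ranking (this sketch is mine, not from the book), consider which overload gets picked for each call below:

void f(int) { }      // (a)
void f(long) { }     // (b)
void f(double) { }   // (c)

struct W { operator int() const { return 0; } };

int main()
{
    short s = 1;
    f(5);      // perfect match for (a)
    f(s);      // promotion short -> int: (a) beats the standard conversions needed by (b) and (c)
    f(1.0f);   // promotion float -> double: (c) wins over the standard conversions to int/long
    W w;
    f(w);      // user-defined conversion via operator int(); all three overloads need it,
               // and the identity conversion that follows makes (a) the best of the three
}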
A few pages later the book shows the following example and text (emphasis mine):
class BadString {
public:
BadString(char const*);
...
// character access through subscripting:
char& operator[] (size_t); // (1)
char const& operator[] (size_t) const;
// implicit conversion to null-terminated byte string:
operator char* (); // (2)
operator char const* ();
...
};
int main()
{
BadString str("correkt");
str[5] = 'c'; // possibly an overload resolution ambiguity!
}
At first, nothing seems ambiguous about the expression str[5]. The
subscript operator at (1) seems like a perfect match. However, it is
not quite perfect because the argument 5 has type int, and the
operator expects an unsigned integer type (size_t and std::size_t
usually have type unsigned int or unsigned long, but never type int).
Still, a simple standard integer conversion makes (1) easily viable.
However, there is another viable candidate: the built-in subscript
operator. Indeed, if we apply the implicit conversion operator to str
(which is the implicit member function argument), we obtain a pointer
type, and now the built-in subscript operator applies. This built-in
operator takes an argument of type ptrdiff_t, which on many platforms
is equivalent to int and therefore is a perfect match for the argument
5. So even though the built-in subscript operator is a poor match (by user-defined conversion) for the implied argument, it is a better
match than the operator defined at (1) for the actual subscript! Hence
the potential ambiguity.
Note that the first list quoted above is only a simplified overload resolution; that is literally the title of the book section it comes from.
To remove the "possibly" and "potential" about whether or not int is the same as ptrdiff_t, let's change one line:
str[(ptrdiff_t)5] = 'c'; // definitely an overload resolution ambiguity!
Now the message I get from g++ is:
warning: ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for the second: [enabled by default]
and --pedantic-errors promotes that warning to an error.
So without diving into the standard, this tells you that the simplified list is only part of the story. That list tells you which of several possible routes to prefer when going from A to B, but it does not tell you whether to prefer a trip from A to B over a trip from C to D.
You can see the same phenomenon (and the same g++ message) more obviously here:
struct S {
    explicit S(int) {}
    operator int() { return 0; }
};

void foo(const S&, long) { }
void foo(int, int) { }

int main() {
    S s(0);
    foo(s, 1);
}
Again the call to foo is ambiguous, because when we have a choice of which argument to implicitly convert in order to choose an overload, the rules don't say, "pick the more lightweight conversion of the two and convert the argument requiring that conversion".
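If you do want the call to compile, you have to take one of the conversions out of the compiler's hands yourself; two possible sketches, using the same S and foo as above:

foo(s, 1L);                    // 1L is already long, so foo(const S&, long) wins outright
foo(static_cast<int>(s), 1);   // spelling out the user-defined conversion leaves only foo(int, int) viable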
Now you're going to ask me for the standard citation, but I shall use the fact I need to hit the hay as an excuse for not looking it up ;-)
In summary it is not true here that "a user defined conversion is a better match than a standard integer conversion". However in this case the two possible overloads are defined by the standard to be equally good and hence the call is ambiguous.
str.operator[](size_t(5));
Write code in a definite way and you won't need to bother about all this stuff :)
There is much more to think about.
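For instance, here are two definite spellings (a sketch using the BadString class quoted above):

BadString str("correkt");
str[static_cast<size_t>(5)] = 'c';   // the index already has type size_t, so (1) is a perfect match
str.operator[](5) = 'c';             // or name the member operator explicitly, bypassing the built-in operator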
I've been playing with C++ recently, and I just stumbled upon an interesting precedence issue. I have one class with two operators: "cast to double" and "+". Like so:
class Weight {
    double value_;
public:
    explicit Weight(double value) : value_(value) {}
    operator double() const { return value_; }
    Weight operator+(const Weight& other) { return Weight(value_ + other.value_); }
};
When I try to add two instances of this class...
class Weighted {
    Weight weight_;
public:
    Weighted(const Weight& weight) : weight_(weight) {}
    virtual Weighted twice() const {
        Weight w = weight_ + weight_;
        return Weighted(w);
    }
};
...something unexpected happens: the compiler sees the "+" sign and casts the two weight_s to double. It then spits out a compilation error, because it can't implicitly cast the resulting double back to a Weight object, due to my explicit one-argument constructor.
The question: how can I tell the compiler to use my own Weight::operator+ to add the two objects, and to ignore the cast operator for this expression? Preferably without calling weight_.operator+(weight_), which defeats the purpose.
Update: Many thanks to chris for pointing out that the compiler is right not to use my class's operator+ because that operator is not const and the objects that are being +ed are const.
I now know of three ways to fix the above in VS2012. Do see the accepted answer from chris for additional information.
1. Add the explicit qualifier to Weight::operator double(). This doesn't work in VS2012 (no support), but it stands to reason that it's a good solution for compilers that do accept this approach (from the accepted answer).
2. Remove the virtual qualifier from the method Weighted::twice, but don't ask me why this works in VS.
3. Add the const qualifier to the method Weight::operator+ (from the accepted answer); see the sketch after this list.
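For reference, a minimal sketch of fix 3; the only change is the const on operator+, everything else is as in the original class, and it needs no C++11 support:

class Weight {
    double value_;
public:
    explicit Weight(double value) : value_(value) {}
    operator double() const { return value_; }
    // const added: callable on the const weight_ inside Weighted::twice() const,
    // so the member operator+ is an exact match and the double route is never needed
    Weight operator+(const Weight& other) const { return Weight(value_ + other.value_); }
};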
First of all, the virtual should have nothing to do with it. I'll bet that's a problem with MSVC, especially considering there's no difference on Clang. Anyway, your twice function is marked const. This means that members will be const Weight instead of Weight. This is a problem for operator+ because it only accepts a non-const this. Therefore, the only way the compiler can go is to convert them to double and add those.
Another problem is that adding explicit apparently makes it compile for you. In fact, explicit should remove the compiler's last resort of converting to double, so with the non-const operator+ the addition should not compile at all. This is indeed what happens on Clang:
error: invalid operands to binary expression ('const Weight' and 'const Weight')
Weight w = weight_ + weight_;
note: candidate function not viable: 'this' argument has type 'const Weight', but method is not marked const
Finally, making operator+ const (or a free function) is the correct solution. When you do this, you might think you'll add back this route and thus have another error from ambiguity between this and the double route, but Weight to const Weight & is a standard conversion, whereas Weight to double is a user-defined conversion, so the standard one is used and everything works.
As for the updated code in the question, it is fine. The reason it won't compile is the fault of MSVC. For reference, it does compile on Clang. It also compiles on MSVC12 and the 2013 CTP.
You may be storing the result in a Foo, but there is still an implicit conversion from double to Foo needed. You should return Foo(value_ + other.value_) in your addition operator so that the conversion is explicit. I recommend making the operator a free function as well because free operators are (almost) always at least as good as members. While I'm at it, a constructor initializer list would be a welcome change, too.
In addition, from C++11 onward, a generally preferred choice is to make your conversion operator explicit:
explicit operator double() const {return value_;}
Also note the const added in because no state is being changed.
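Putting those recommendations together, a sketch (assuming C++11 for the explicit conversion operator) of the class with a free friend operator+ might look like this:

class Weight {
    double value_;
public:
    explicit Weight(double value) : value_(value) {}
    explicit operator double() const { return value_; }   // no more silent conversions to double
    // free-function form, declared as a friend so it can read value_;
    // both operands are treated symmetrically and const-correctly
    friend Weight operator+(const Weight& lhs, const Weight& rhs) {
        return Weight(lhs.value_ + rhs.value_);
    }
};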
I use a simple way to hide my enums from local namespaces - enumeration inside of a struct. It goes roughly like this:
struct Color
{
    enum Type
    {
        Red, Green, Black
    };

    Type t_;

    Color(Type t) : t_(t) {}

    operator Type () const { return t_; }

private:
    template<typename T>
    operator T () const;
};
The private operator T () template is there as protection against implicit type casting. Then I tried to compile this code with gcc and with Keil:
Color n(Color::Red);
int a[9];
a[ (int)n ] = 1;
gcc compiled it with no error (which is what I expected), but Keil gave me an error:
"invalid type conversion. operator () is inaccessible".
So my question is: which compiler is right?
I know about C++11's enum class, but it isn't supported by Keil at the moment.
Should a reinterpret_cast (not a C-style cast) call the type conversion operator?
No, reinterpret_cast is only used for a few dodgy types of conversions:
converting pointers to integers and back
converting between pointers (and references) to unrelated types
You shouldn't need a cast at all to use the implicit conversion operator - you have not prevented implicit conversion at all. In C++11, if the operator were explicit, then you'd need a static_cast.
If you're stuck with C++03, and you really want to prevent implicit conversion but allow explicit conversion, then I think the only sensible thing to do is to provide a named conversion function.
Update: The question has now changed, and is asking about C-style casting rather than reinterpret_cast. That should compile since any conversion that can be done by static_cast (including implicit conversions) can also be done with a C-style cast.
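For the Color wrapper above, a sketch of that named-conversion-function approach could look like this (the name toInt is just illustrative):

struct Color
{
    enum Type { Red, Green, Black };

    Color(Type t) : t_(t) {}

    // Named conversion: callers must write c.toInt() explicitly,
    // so nothing converts to int behind your back.
    int toInt() const { return static_cast<int>(t_); }

private:
    Type t_;
};

int main()
{
    Color n(Color::Red);
    int a[9];
    a[ n.toInt() ] = 1;   // explicit and unambiguous on any compiler
}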
Consider the following example: the line int a = objT + 5; gives an ambiguous-conversion error, which I can handle in two ways: with an explicit cast (which I think shouldn't be necessary), or by replacing the conversion operators with member functions. So my question becomes: should you still use conversion operators at all?
class Testable {
public:
    Testable(int val = 0) : x(val) {}
    operator int() const { return x; }
    operator double() const { return x; }
    int toInt() { return x; }
    double toDouble() { return x; }
private:
    int x;
};

int main()
{
    Testable objT(10);

    // ambiguous conversion
    int a = objT + 5;

    // why use an explicit cast when the conversion operator
    // is supposed to handle this automatically?
    int b = static_cast<int>(objT) + 5;

    // member functions
    int c = objT.toInt() + 5;
}
Note that int d = objT; is unambiguous, as is double e = objT; Here the compiler can unambiguously choose a conversion operator because of the exact match between the type of the left hand side and the return types from those conversion operators.
The ambiguous conversion results from objT + 5 (note that long f = objT; is ambiguous too). The problem is that you've taken away the exact type match the compiler needs in order to perform the disambiguation mechanically.
The problem goes away if you get rid of either of those conversion operators. Since the underlying data member is an int, I suggest getting rid of operator double().
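For example, keeping only the int conversion, a sketch of the reduced class and the calls from the question:

class Testable {
public:
    Testable(int val = 0) : x(val) {}
    operator int() const { return x; }   // the only conversion operator left
private:
    int x;
};

int main()
{
    Testable objT(10);
    int    a = objT + 5;   // objT -> int, then built-in int + int: unambiguous
    double e = objT;       // objT -> int, then the standard conversion int -> double
}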
For your particular case, you could just provide the conversion to double, since it has a standard conversion to int. However, implicit conversions often do more harm than help. If you use them, then avoid having both the conversion from int (your non explicit constructor) and to int, which I think is the source of your current problem.
This question is similar to mine, which asked whether it is possible to change the precedence of implicit conversion operators. Unfortunately there is no satisfactory solution: there is no precedence amongst implicit conversions.
One solution is to add overload(s) for the ambiguous function or operator; in your case those would be operator+(const Testable&, int) and operator+(const Testable&, double).
Another solution is to simply leave only one implicit conversion, in your case the integer one would be best.
Only use implicit conversion when you desperately need automatic conversion to the given type. Explicit toInt and toDouble functions are MUCH better: they clarify the code and do not hide potential pitfalls. Use the explicit keyword on unary constructors; it blocks implicit conversion via that constructor.
And no, it is not possible to chain implicit conversions, whether they are operators or constructors. The C++ standard mandates that at most one user-defined conversion may appear in an implicit conversion sequence.
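A small sketch of that rule with illustrative types (these are not from the question):

struct B { };
struct A { operator B() const { return B(); } };   // user-defined conversion: A -> B
struct C { C(B) { } };                              // user-defined conversion: B -> C

void take(C) { }

int main()
{
    A a;
    // take(a);    // error: would need two user-defined conversions (A -> B -> C)
    take(B(a));    // fine: the A -> B step is explicit, only B -> C is left implicit
}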
You don't need the implicit conversion to double: operator int() already returns the value, and the compiler can follow that user-defined conversion to int with a standard conversion from int to double.
If your private member x of type int were instead a double, I would recommend not providing the implicit conversion to int, since it would cause precision loss.
Hi, I have code like this. I think the friend overloaded operator and the conversion operator serve a similar purpose. However, why is the friend overloaded operator called in this case? What are the rules?
Thanks so much!
#include <iostream>
using namespace std;

class A {
    double i;
public:
    A(int i) : i(i) {}
    // a conversion operator
    operator double() const { cout << "conversion operator" << endl; return i; }
    // a friend function overloading operator>
    friend bool operator>(int i, A a);
};

bool operator>(int i, A a) {
    cout << "Friend" << endl;
    return i > a.i;
}

int main()
{
    A aa(1);
    if (0 > aa) {
        return 1;
    }
}
No conversion is necessary for the overloaded operator> to be called. In order for the built-in operator> to be called, one conversion is necessary (the user-defined conversion operator). Overload resolution prefers options with fewer required conversions, so the overloaded operator> is used.
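Spelling the counting out (a sketch of the reasoning, using the class from the question):

// Candidates for the expression 0 > aa:
//
//   bool operator>(int, A)             // the friend
//       0  -> int : exact match (no conversion)
//       aa -> A   : exact match (pass-by-value copy, no conversion)
//
//   bool operator>(double, double)     // built-in
//       0  -> double : standard conversion
//       aa -> double : user-defined conversion via operator double()
//
// The friend needs no conversions at all, so it is the better match.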
Note that if you were to change the definition of your overloaded operator> to be, for example:
friend bool operator>(double i, A a);
you would get a compilation error because both the overloaded operator> and the built-in operator> would require one conversion, and the compiler would not be able to resolve the ambiguity.
I am not claiming that my answer is supported by the standard, but let's think about it logically.
When you hit this line:
0 > aa
You have two options. Either you call the provided operator:
friend bool operator>(int i, A a);
which is 100% compatible, or you can perform two conversions to reach your destination! Which one would you choose?
If you add a conversion operator then an object of type A may be converted to double when you least expect it.
A good program does not provide ways for its classes to be used accidentally, and a conversion operator opens up the opportunity for the class to be used in a whole host of unintended situations (typically situations where you would expect a compile-time error but don't get one, because of the automatic type conversion).
As a result, conversion operators (and single-argument constructors) should be treated with some care, because the compiler may perform a conversion when you least expect it.
It is called because it is an exact match in the context of the expression 0 > aa. In fact, it is hard to see how the "why" question came up; logically, one would expect a "why" question if the friend were not called in this case.
If you change the expression to 0.0 > aa, the call will become ambiguous, because neither path is better than the other.