Odd behavior with operator>= overloading - C++

I'm seeing strange behavior with operator overloading in C++. I have a class, and I need to check whether its contents are greater than or equal to a long double. I overloaded the >= operator to make this check; my declaration is as follows:
bool MyClass::operator>=(long double value) const;
I should mention that I also have a cast-to-long-double operator for my class, which succeeds without throwing only under certain conditions.
Now, when I use this operator, the compiler complains that there's an ambiguous use of operator>= and the alternatives are:
Mine.
The built-in operator>=(long double, int).
Now, how do I force the program to use my operator?

2015 update: Or, if you want to keep the conversion available through the (double)obj syntax instead of the obj.to_double() syntax, make the conversion function explicit by prefixing it with that keyword. An explicit cast is then required for the conversion to trigger. Personally, I prefer the .to_double syntax, unless the conversion is to bool: in that case the conversion is used by if(obj) even when it is explicit, and that is considerably more readable than if(obj.to_bool()) in my opinion.
Drop the conversion operator; it will cause trouble down the road. Instead, have a function like
to_double()
or similar that returns the double value, and call that function explicitly to get a double.
For the problem at hand, consider this expression:
obj >= 10
The built-in operator matches the first argument by a user-defined conversion sequence for your type, using the conversion operator long double(). Your function, by contrast, matches the second argument by a standard conversion sequence from int to long double (an integral-to-floating-point conversion). A call is ambiguous whenever each candidate converts a different argument better: for one candidate to win, at least one argument must convert better for it while none of the remaining arguments convert worse. In your case, the built-in operator matches the second argument better but the first worse, while your function matches the first argument better but the second worse.
It's confusing, so here are some examples. (A conversion from char to int is called a promotion, and ranks better than a conversion from char to any type other than int, which is just called a conversion.)
void f(int, int);
void f(long, long);
f('a', 'a');
This calls the first version, because both arguments convert better for it. Likewise, the following will still call the first:
void f(int, long);
void f(long, long);
f('a', 'a');
Because the first can be converted better, and the second is not converted worse. But the following is ambiguous:
void f(char, long);
void f(int, char);
f('a', 'a'); // ambiguous
This case is more interesting. The first version accepts the first argument by an exact match; the second version accepts the second argument by an exact match. But neither version accepts its other argument at least equally well: the first version requires a conversion for its second argument, while the second version requires a promotion for its first. So, even though a promotion is better than a conversion, neither candidate wins and the call is ambiguous.
It's very similar to your case above. Even though a standard conversion sequence (converting from int/float/double to long double) is better than a user-defined conversion sequence (converting from MyClass to long double), your operator version is not chosen, because your other parameter (long double) requires a conversion from the argument which is worse than what the builtin operator needs for that argument (perfect match).
Overload resolution is a complex matter in C++, and one cannot possibly remember all of its subtle rules. But getting the rough picture is quite possible. I hope this helps.

By providing an implicit conversion to double you are effectively stating "my class is equivalent to a double", and for that reason you shouldn't really mind if the built-in operator>= for doubles is used. If you do care, then your class isn't really 'equivalent' to a double, and you should consider not providing an implicit conversion to double, but instead providing an explicit GetAsDouble or ConvertToDouble member function.
The reason you have an ambiguity at the moment is that for an expression t >= d, where t is an instance of your class and d is a double, the compiler has to apply a conversion to either the left-hand side or the right-hand side, so the expression really is ambiguous: either t's operator double is called and the built-in operator>= for doubles is used, or d is converted to long double and your member operator>= is used.
Edit: you've updated your question to say that your conversion is to long double and your comparison is against an int. In that case the last paragraph should read:
The reason you have an ambiguity at the moment is that for an expression t >= d, where t is an instance of your class and d is an int, the compiler has to apply a conversion to either the left-hand side or the right-hand side, so the expression really is ambiguous: either t's operator long double is called and the built-in operator>= for long double and int is used, or d is converted to long double and your member operator>= is used.

I assume you are comparing against a literal int, and not a long double:
MyClass o;
if (o >= 42)
{
// ...
}
If that is the case, both alternatives are equally good, since each requires exactly one conversion.
Using your operator long double():
1. MyClass::operator long double()
2. built-in operator>=(long double, int)
Using your MyClass::operator>=(long double):
1. built-in conversion from int to long double
2. MyClass::operator>=(long double)

You've got long double in the declaration. Try changing it to double.

Your use of operator overloading combined with custom conversions can be very confusing to users of your class. Ask yourself: would users of this class expect it to convert itself into a double, or to be comparable with a double? Wouldn't a .greaterThan(double) function achieve the same goal without surprising the user?
I suppose you could always explicitly cast your object to double before comparing, to avoid the ambiguity. But if I were you, I'd reconsider the approach above and focus on writing code that's intuitive and behaves in an unsurprising manner, rather than on fancy type conversions and operator overloading.
(Inspired by the FQA's wonderful rant about operator overloading)

The built-in operator>=(long double, int).
Looks like you've defined:
bool MyClass::operator>=(long double value) const { return classValue >= value; }
And you are missing:
bool MyClass::operator>=(double value) const { return classValue >= value; }
bool MyClass::operator>=(int value) const { return classValue >= value; }
So the compiler can't decide which way to convert. (It's ambiguous.)
Perhaps a templated function (or method) would be helpful?
Watch out for situations where a>=b invokes different methods than b>=a.

Related

Why can't `std::min` be used between `int` and `long long int`

I just want to know: why can't I pass values of different data types to the min and max functions?
int a = 7;
long long int b = 5;
long long int c = min(a, b);
cout << c;
My confusion arises because I know for sure that the compiler can implicitly convert from a smaller data type (int) to a larger one (long long int), so why can't it convert here?
This comes down to how std::min and std::max were designed.
They could've had two different types for the two parameters, and use something like std::common_type to figure out a good result type. But instead they have the same type for both parameters, so the two arguments you pass must also have the same type.
compiler can implicitly type cast from smaller datatype(int) to larger datatype(long long int)
Yeah, but it can also implicitly convert back from long long to int. ("Cast" isn't the right word, by the way: a cast is an explicit conversion.)
The rule is that when a function parameter participates in template argument deduction, no implicit conversions are allowed for the argument passed to that parameter. Deduction rules are already complicated; allowing some implicit conversions (e.g. to wider arithmetic types) would make them even more complex.
Because a loss-of-precision error can occur when you mix the types: both parameters are deduced to a single type, so you cannot pass a long (or something like a float) together with an int. You have to cast one of the arguments explicitly.

Ambiguous call to overloaded integer constructor

I shall provide these two constructors.
BigUnsigned(int value)
:BigUnsigned(static_cast<unsigned long long>(value)){
}
BigUnsigned(unsigned long long value);
The problem is that the call is ambiguous for some argument values. According to this answer, which refers to the C++11 standard,
the type of an integer literal is the first of the corresponding list
in Table 6 in which its value can be represented.
Table 6 is here
int
long int
long long int
Therefore, in my case, the type of the constructor argument (an integer literal) depends on its range:
<0, numeric_limits<int>::max()> is int
---> calls BigUnsigned(int)
(numeric_limits<int>::max(), numeric_limits<long int>::max()> is long int
---> ambiguous
(numeric_limits<long int>::max(), too big a literal) is long long int
---> ambiguous
How can I solve the ambiguity without declaring any more constructors or explicitly typecasting the argument?
This question on integer promotion and integer conversion might be useful. I still don't know which of them applies in my case, though.
One fundamental problem here is that a decimal literal will never be deduced as an unsigned type. Therefore, a decimal literal that's too large to fit in an int ends up requiring a signed-to-unsigned conversion in one case and a long-to-int conversion in the other. Both of these are classed as "integral conversions", so neither is considered a "better" conversion, and the overload is ambiguous.
As to possible ways to deal with this without explicitly casting the argument or adding more constructors, I can see a couple.
At least for a literal, you could add a suffix that specifies that the type of the literal is unsigned:
BigUnsigned a(5000000000U); // unambiguous
Another option (which also applies only to literals) is to use a hexadecimal or octal literal, which (according to the part of Table 6 that isn't quoted in the question) can be deduced as either signed or unsigned: the list for those literals is int, unsigned int, long, unsigned long, long long, unsigned long long. This is only a partial fix though: it only works for values that will deduce as unsigned. For a typical system with 32-bit int, 32-bit long, and 64-bit long long, I believe it comes out like this: only a hexadecimal literal too large for a signed long long deduces as unsigned long long, which then matches the BigUnsigned(unsigned long long) constructor exactly.
So for a parameter large enough that it won't fit in a signed long long, this gives an unambiguous call where the decimal constant would still have been ambiguous.
For people who've worked with smaller types, it might initially seem that the conversion from unsigned long to unsigned long long should qualify as a promotion rather than a conversion, which would make it preferable. And indeed, if the types involved were (for example) unsigned short and unsigned int, that would be exactly true. But that special preference is only given to types with a conversion rank less than that of int (which basically translates to: types smaller than int).
So that fixes the problem for one range of numbers, but only if they're literals, and only if they fall into one specific (albeit, quite large) range.
For the more general case, the only real cure is to change the interface: either remove the overload for int, or add a few more constructor overloads, specifically for unsigned and for long long. These can be delegating constructors just like the existing one for int, if you decide you need them at all (though it's probably better to just have the one for unsigned long long and be done with it).
I had the same problem, also with some big-integer classes. However, the literal-only solution did not work for me, since I need to convert entirely different kinds of (big) integers between each other, and I wanted to avoid implementing an excessive number of constructors. I solved the problem like this:
template <std::same_as<MyOtherBigInt> T>
explicit constexpr MyBigInt(T pValue) noexcept;
This stops the special MyOtherBigInt constructor from being called with any other integer type.

Operator priority with operator overload?

I used operator overloading for my own bignum header, and there is some operator-priority problem: the compiler reports an error when I write bignum + int.
(My bignum class is named 'bignum'; don't mind that.)
Here is my class definition:
operator long long(void) const {
    return atoll(num.c_str());
}
bignum operator+(bignum b) {
    bignum c;
    // c = *this + b
    return c;
}
And here is the case that error happens:
bignum a(1);
int b=1;
bignum c = a+b; //this is the case
ERRORS
line 7: IntelliSense: more than one operator "+" matches these operands:
    built-in operator "arithmetic + arithmetic"
    function "minary::bignum::operator+(minary::bignum b)"
operand types are: minary::bignum + int
c:\Users\Secret\Documents\Visual Studio 2013\Projects\Calculator\Calculator\Source.cpp 11 3 Calculator
Thanks in advance.
The easiest approach is to make the conversion to integers explicit: many implicit conversions cause problems anyway. Implicit conversions are sometimes useful, but if you want to support mixed-type arithmetic they tend to be more of a problem than a help:
class bignum {
// ...
explicit operator long long() const { ... }
// ...
};
Making a conversion operator explicit is a C++11 feature. Your class also seems to have an implicit conversion from integral types to bignum, i.e., a corresponding constructor. Prior to C++11 you could make only constructors explicit. Making that constructor explicit may be another way to deal with the ambiguity, but it would have the effect that the conversion to long long and the built-in integer addition are used. My guess is that you want a bignum as the result, which requires that there be no suitable implicit conversion to an integral type.
Note that the question has nothing to do with operator priority: for any given expression the compiler determines which overloads are the best fit (roughly: require the least amount of implicit conversions). If it finds multiple equally good candidates it considers the program to be ambiguous and asks for the ambiguity to be sorted out.

Compile-time assert using templates in c++

I was reading code from Google's cpp-btree library (https://code.google.com/p/cpp-btree/) and I came across this compile-time assert mechanism.
// A compile-time assertion.
template <bool>
struct CompileAssert {
};
#define COMPILE_ASSERT(expr, msg) \
typedef CompileAssert<(bool(expr))> msg[bool(expr) ? 1 : -1]
So I understand more or less what it does: if expr is evaluated to false by the compiler, it declares a new type msg that is a CompileAssert<false> array of size -1, which triggers a compilation error.
What I don't get is the bool(expr) part. What is this, exactly? Some kind of call to a copy constructor of the type bool? (But it's a built-in type, so I'm confused.)
I thought this would be a mechanism to raise a compilation error when expr is not a boolean, but I actually managed to compile a short program with this line:
COMPILE_ASSERT("trash", error_compilation_assert);
It compiles just fine with GCC 3.4.
So can anyone explain the bool(expr) part of the mechanism?
It's a type conversion. There are 3 main types of type conversions in C++:
Cast notation (C-style cast): (bool) expr
Functional notation (constructor-style cast): bool(expr)
Cast operators (C++-style cast): static_cast<bool>(expr)
Cast notation and functional notation are semantically equivalent (they both perform the strongest possible conversion, the C-style cast), but the scope and precedence of the functional notation are clearer.
It is generally advised not to use these two in C++ code, and to use the specific cast operators (const_cast, static_cast, etc.) instead.
So in your code, it's just a way of forcing the value to type bool and enclosing it in parentheses at the same time, so that no operator priority issues arise.
bool(expr) casts expr into a bool.
The first parameter should be some kind of expression, such as a == b. Using a string literal here is useless.
bool(expr) is a function-style cast which converts the expression to a bool. Lots of things convert implicitly to bool, but I guess they wanted an explicit cast to make sure the result is a bool.
If you convert a pointer to a bool, it evaluates to false if it is a null pointer, and true otherwise. Your string literal "trash" decays into a const char* to its first character. As this is not a null pointer, the expression evaluates to true.
bool(expr) converts expr to bool, using either a standard conversion or any user-defined conversion operator.
In C++, you can construct built-in types with constructor syntax:
bool b1 = bool(true);
bool b2 = bool(b1);
The difference to:
bool b2 = b1;
is that the latter performs an implicit conversion to bool. Where such an implicit conversion isn't allowed (as in the template typedef), bool(b1) makes it explicit by creating a temporary bool from b1; the temporary doesn't need any further conversion, since it is already an actual bool.

Ternary operator

Is there any logical reason that explains why, in the ternary operator, both branches must have the same base type or be convertible to one? What is the problem with not having this rule? Why on earth can't I do something like this (it's not the best example, but it clarifies what I mean):
int var = 0;
void left();
int right();
var ? left() : right();
Expressions must have a type known at compile time. You can't have expressions of type "either X or Y", it has to be one or the other.
Consider this case:
void f(int x) {...}
void f(const char* str) {...}
f(condition ? 5 : "Hello");
Which overload is going to be called? This is a simple case, and there are more complicated ones involving e.g. templates, which have to be known at compile time. So in the above case, the compiler won't choose an overload based on the condition, it has to pick one overload to always call.
It can't do that, so the result of a ternary operator always has to be the same type (or compatible).
The ternary operator returns the value of the branch it takes. If the two branches didn't both have the same type, the expression would have an indeterminate type.
The point is that an expression should have a statically defined type.
If branches of your ternary operator are incompatible, you cannot statically deduce the ternary expression type.
I guess it's because the ternary operator must have a defined result type. That's hard to do if the types of the two branches are different, or if one of them is void.