Equality operator overloads: Is (x!=y) == (!(x==y))?

Does the C++ standard guarantee that (x!=y) always has the same truth value as !(x==y)?
I know there are many subtleties involved here: The operators == and != may be overloaded. They may be overloaded to have different return types (which only have to be implicitly convertible to bool). Even the ! operator might be overloaded on the return type. That's why I hand-wavingly referred to the "truth value" above. To pin it down, exploiting the implicit conversion to bool and eliminating possible ambiguities:
bool ne = (x!=y);
bool e = (x==y);
bool result = (ne == (!e));
Is result guaranteed to be true here?
The C++ standard specifies the equality operators in section 5.10, but mainly seems to define them syntactically (and some semantics regarding pointer comparisons). The concept of being EqualityComparable exists, but there is no dedicated statement about the relationship of its operator == to the != operator.
There exist related documents from C++ working groups, saying that...
It is vital that equal/unequal [...] behave as boolean negations of each other. After all, the world would make no sense if both operator==() and operator!=() returned false! As such, it is common to implement these operators in terms of each other
However, this only reflects the Common Sense™, and does not specify that they have to be implemented like this.
Some background: I'm just trying to write a function that checks whether two values (of unknown type) are equal, and prints an error message if they are not. I'd like to say that the required concept here is that the types are EqualityComparable. But for this, one would still have to write if (!(x==y)) {…} rather than if (x!=y) {…}, because the latter uses a different operator, one that is not covered by the EqualityComparable concept at all and that might even be overloaded differently...
I know that the programmer basically can do whatever he wants in his custom overloads. I just wondered whether he is really allowed to do everything, or whether there are rules imposed by the standard. Maybe one of these subtle statements that suggest that deviating from the usual implementation causes undefined behavior, like the one that NathanOliver mentioned in a comment, but which seemed to only refer to certain types. For example, the standard explicitly states that for container types, a!=b is equivalent to !(a==b) (section 23.2.1, table 95, "Container requirements").
But for general, user-defined types, it currently seems that there are no such requirements. The question is tagged language-lawyer because I hoped for a definite statement/reference, but I know that may be nearly impossible: while one could point to a section saying that the operators have to be negations of each other, one can hardly prove that none of the ~1500 pages of the standard says something like this...
In doubt, and unless there are further hints, I'll upvote/accept the corresponding answers later, and for now assume that testing EqualityComparable types for inequality should be done with if (!(x==y)), to be on the safe side.
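For concreteness, a minimal sketch of the checking function I have in mind (checkEqual is just a placeholder name). It relies only on operator==, per the conclusion above:
#include <iostream>

// Minimal sketch; "checkEqual" is a placeholder name. It uses only
// !(x == y), so the EqualityComparable requirement is all it needs;
// operator!= is never touched.
template <typename T>
void checkEqual(const T& x, const T& y)
{
    if (!(x == y)) {
        std::cerr << "Error: values are not equal\n";
    }
}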

Does the C++ standard guarantee that (x!=y) always has the same truth value as !(x==y)?
No it doesn't. Absolutely nothing stops me from writing:
struct Broken {
    bool operator==(const Broken&) const { return true; }
    bool operator!=(const Broken&) const { return true; }
};
Broken x, y;
That is perfectly well-formed code. Semantically, it's broken (as the name might suggest), but there's certainly nothing wrong with it from a pure C++ language perspective.
The standard also clearly indicates this is okay in [over.oper]/7:
The identities among certain predefined operators applied to basic types (for example, ++a ≡ a+=1) need not hold for operator functions. Some predefined operators, such as +=, require an operand to be an lvalue when applied to basic types; this is not required by operator functions.
In the same vein, nothing in the C++ standard guarantees that operator< actually implements a valid ordering (or that x<y <==> !(x>=y), etc.). Some standard library implementations will actually add instrumentation to attempt to debug this for you in the ordered containers, but that is just a quality-of-implementation issue, not something mandated by the standard.
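As an illustration, here is an ordering that is well-formed but semantically invalid; handing it to std::sort or an ordered container is undefined behavior, and debug modes such as libstdc++'s _GLIBCXX_DEBUG may (but need not) diagnose it:
#include <set>

struct BadOrder {
    int v;
    // Not a strict weak ordering: x < x yields true, violating
    // irreflexivity. This compiles cleanly, but using it as the
    // comparison for an ordered container or std::sort is UB.
    bool operator<(const BadOrder&) const { return true; }
};

int main() {
    std::set<BadOrder> s;   // accepted by the compiler...
    s.insert(BadOrder{1});  // ...but the container's invariants can
    s.insert(BadOrder{1});  // silently break at run time
}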
Library solutions like Boost.Operators exist to at least make this a little easier on the programmer's side:
struct Fixed : boost::equality_comparable<Fixed> {
    bool operator==(const Fixed&) const;
    // a consistent operator!= is provided for you
};
In C++14, Fixed is no longer an aggregate with the base class. However, in C++17 it's an aggregate again (by way of P0017).
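For the curious, here is a minimal hand-rolled sketch of how such a helper can work; this illustrates the technique, it is not Boost's actual implementation:
// Illustrative CRTP helper, not Boost's actual implementation. The
// base class injects an operator!= that is always the exact negation
// of the derived class's operator==.
template <typename Derived>
struct equality_comparable_sketch {
    friend bool operator!=(const Derived& a, const Derived& b) {
        return !(a == b);
    }
};

struct Point : equality_comparable_sketch<Point> {
    int x = 0, y = 0;
    bool operator==(const Point& o) const { return x == o.x && y == o.y; }
};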
With the adoption of P1185 for C++20, the library solution has effectively become a language solution; you just have to write this:
struct Fixed {
    bool operator==(Fixed const&) const;
};
bool ne(Fixed const& x, Fixed const& y) {
    return x != y;
}
The body of ne() becomes a valid expression that evaluates as !x.operator==(y), so you don't have to worry about keeping the two comparisons in line, nor rely on a library solution to help out.

In general, I don't think you can rely on it, because it doesn't always make sense for operator== and operator!= to correspond, so I don't see how the standard could ever require it.
For example, consider the built-in floating-point types, like double, for which NaNs compare unequal to everything, including themselves. (Edit: Oops, I originally claimed that operator== and operator!= can both return false for NaNs; as hvd's comment points out, that is wrong. NaN == NaN is false, but NaN != NaN is true, so the built-in operators do remain exact negations of each other.)
Even so, if I'm writing a new class with unusual comparison semantics (maybe a really_long_double, or something modelled on an external system), the standard does not force my operator== and operator!= to stay negations of each other, and there are domains where they naturally drift apart.
This might crop up in other circumstances, too. For example, if I was writing a class to represent a database nullable value, I might run into the same issue: in SQL, any comparison to NULL, whether = or <>, yields UNKNOWN rather than true, so when mapped to bool, both == and != come out false. I might choose to implement that logic in my C++ code to have the same semantics as the database.
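For instance, a hedged sketch of such a wrapper (a simplified illustration, not any particular library's API):
#include <optional>

// Simplified illustration of SQL-style NULL semantics; not any
// particular library's API. If either operand is null, both == and
// != report false, mirroring SQL's UNKNOWN.
template <typename T>
struct Nullable {
    std::optional<T> value;

    bool operator==(const Nullable& other) const {
        if (!value || !other.value) return false;  // NULL = x  -> UNKNOWN
        return *value == *other.value;
    }
    bool operator!=(const Nullable& other) const {
        if (!value || !other.value) return false;  // NULL <> x -> UNKNOWN
        return *value != *other.value;
    }
};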
In practice, though, for your use case, it might not be worth worrying about these edge cases. Just document that your function compares the objects using operator== (or operator!=) and leave it at that.

No. You can write operator overloads for == and != that do whatever you wish. It probably would be a bad idea to do so, but the definition of C++ does not constrain those operators to be each other's logical opposites.

Related


Confusion in overloading the assignment operator = [duplicate]

When defining an assignment operator, it invariably looks like this:
class X {...};
X& X::operator=(...whatever...);
That is, it has the return type "reference to X". Here, parameters (...whatever...) can be X&, const X&, just X when using the copy-and-swap idiom, or any other type.
It seems strange that everyone recommends returning a non-const reference to X, regardless of the parameters. This explicitly allows expressions like (a = b).clear(), which is supposedly a good thing.
I have a different opinion, and I want to disallow expressions like (x=y).clear(), (x=y)=z, and even x=y=z in my code. My idea is that these expressions pack too much into a single line of code. So I decided to have my assignment operators return void:
void X::operator=(X) {...}
void X::operator=(int) {...}
Which negative effects does this have? (except looking different than usual)
Can my class X be used with standard containers (e.g. std::vector<X>)?
I am using C++03 (if that matters).
Your class does not meet the CopyAssignable requirements (C++11 §17.6.3.1; the corresponding C++03 requirement is Assignable, §23.1), so the standard no longer guarantees it will work with the standard containers that require this (e.g. std::vector requires it for insert operations).
Besides that, this behavior is not idiomatic and will be perceived as surprising by programmers using your code. If you want to disallow chaining, consider adding a named function that does the assignment instead.
Just don't try to change the behavior of idiomatic operators in subtle ways like this. It will make your code harder to read and maintain.
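A minimal sketch of the named-function alternative mentioned above (assign_from is an illustrative name):
// Sketch only; "assign_from" is an illustrative name. The copy
// assignment operator keeps its idiomatic signature, and call sites
// that should not chain use the named function instead.
class X {
public:
    X& operator=(const X& other);  // idiomatic, container-friendly

    void assign_from(const X& other) {
        *this = other;             // returns void: no way to chain
    }
};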

Why was std::ranges::less introduced?

On cppreference on std::ranges::less, in notes we can see that:
Unlike std::less, std::ranges::less requires all six comparison operators <, <=, >, >=, == and != to be valid (via the totally_ordered_with constraint).
But... why? Why would we use std::ranges::less{} instead of std::less{}? What is the practical situation in which we want to use less{} only if the other comparison operators are defined, not only <?
What is the practical situation in which we want to use less{} only if the other comparison operators are defined, not only <?
Not everything about the Ranges library is based purely on what is "practical". Much of it is about making the language and library make logical sense.
Concepts as a language feature gives the standard library the opportunity to define meaningful combinations of object features. To say that a type has an operator< is useful from the purely practical perspective of telling you what operations are available to it. But it doesn't really say anything meaningful about the type.
If a type is totally ordered, then that logically means that you could use any of the comparison operators to compare two objects of that type. Under the idea of a total order, a < b and b > a are equivalent statements. So it makes sense that if code is restricted to types that provide a total order, that code should be permitted to use either statement.
ranges::less::operator() does not use any operator other than <. But this function is constrained to types modelling the totally_ordered concept. This constraint exists because that's what ranges::less is for: comparing types which are totally ordered. It could have a more narrow constraint, but that would be throwing away any meaning provided by total ordering.
It also prevents you from exposing arbitrary implementation details to users. For example, let's say that you've got a template that takes some type T and you want to use T in a ranges::less-based operation. If you constrain this template to just having an operator<, then you have effectively put your implementation into the constraint. You no longer have the freedom for the implementation to switch to ranges::greater internally. Whereas if you had put std::totally_ordered in your constraint, you would make it clear to the user what they need to do while giving yourself the freedom to use whatever functors you need.
And since operator<=> exists and makes it easy to implement the ordering operators in one function, there's no practical downside. Well, except for code that has to compile on both C++17 and C++20.
Essentially, you shouldn't be writing types that are "ordered" by just writing operator< to begin with.
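To make the constraint difference concrete, a brief C++20 sketch (OnlyLess and FullyOrdered are illustrative types):
#include <compare>
#include <functional>

struct OnlyLess {
    int v;
    bool operator<(const OnlyLess& o) const { return v < o.v; }
};

struct FullyOrdered {
    int v;
    auto operator<=>(const FullyOrdered&) const = default;  // all six operators
};

int main() {
    std::less<>{}(OnlyLess{1}, OnlyLess{2});                // OK: needs only <
    // std::ranges::less{}(OnlyLess{1}, OnlyLess{2});       // ill-formed: OnlyLess is not totally_ordered
    std::ranges::less{}(FullyOrdered{1}, FullyOrdered{2});  // OK
}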
As far as I can tell from the proposal, the idea is just to simplify the design of the function objects. std::less is a class template which requires a template parameter and represents a homogeneous comparison. This template parameter can be omitted to default to std::less<void>, which allows heterogeneous comparisons. The argument seems to be that the homogeneous case is unnecessary, as it's handled fine by the heterogeneous approach, so the design can be simplified considerably and a class template isn't needed at all.
As to why the other operators besides operator< are required I'm not completely sure. My best guess is that this is just part of what it means to have a total order defined in C++ between two, possibly different, types.

Why does std::lerp not work with any type that has implemented required operations?

After learning about std::lerp I tried to use it with strong types, but it fails miserably, since it only works for built-in types...
#include <cmath>

struct MyFloat {
    float val = 4.7f;
    MyFloat operator*(MyFloat other) const {
        return MyFloat{val * other.val};
    }
    MyFloat operator+(MyFloat other) const {
        return MyFloat{val + other.val};
    }
    MyFloat operator-(MyFloat other) const {
        return MyFloat{val - other.val};  // left minus right
    }
};

int main()
{
    MyFloat a{1}, b{10};
    //std::lerp(a, b, MyFloat{0.3}); :(
    std::lerp(a.val, b.val, 0.3f);
}
My question is:
Is there a good reason why C++20 introduced a function/algorithm that is not generic?
It would be impossible for std::lerp to provide its guarantees about numerical behavior for arbitrary types that happen to provide some arithmetic operators. (There’s no way for the library to detect that your example merely forwards them to the builtin float versions.)
While requirements could be imposed on the parameter type to allow a correct implementation, they would need to be exceedingly detailed for MyFloat to be handled with the same performance and results as float. For example, the implementation may need to compare values of the parameter type (which your type doesn’t support!) and can capitalize on the spacing between floating-point values to provide monotonicity guarantees near t=1.
Since those guarantees are the entire point of the function (the naïve formulas are trivial), it’s not provided at all in a generic form.
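To make one of those guarantees concrete: std::lerp promises that lerp(a, b, 1) == b exactly for finite a and b, while the obvious one-line formula does not. A small sketch:
#include <cassert>
#include <cmath>

// Naive one-formula lerp: fine for rough work, but a + t*(b - a)
// can round away from b even at t == 1.
float naive_lerp(float a, float b, float t) {
    return a + t * (b - a);
}

int main() {
    float a = 1e20f, b = 1.0f;
    assert(std::lerp(a, b, 1.0f) == b);      // guaranteed exact at t == 1
    assert(naive_lerp(a, b, 1.0f) == 0.0f);  // (b - a) rounds to -1e20f,
                                             // so the naive result is 0
}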
Is there a good reason why C++20 introduced a function/algorithm that is not generic?
Implementing it for a small set of types makes it easier to ensure correct results when mixing different argument types and dealing with possible implicit conversions (not to mention ambiguous overloads). As the last overload on cppreference (which is likely a template) specifies, the types are adjusted so that as little precision as possible is lost.
How can the same be achieved when the list of types is open-ended, and a client programmer injects whatever meaning they want into overloaded operators? I'd say it's pretty much impossible.
And it's nothing new, take std::pow for instance, which had a similar overload added to it in C++11. The standard library utilities that deal with numerical data are always specified only for types the implementation is aware of.
If lerp makes sense for your custom type, then you can add overloads in your own namespace. ADL will find them, and generic code built on top of
using std::lerp;
lerp(arg1, arg2, arg3);
can be made to work for your custom type too.
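For example (my_lib and midpoint_of are illustrative names; the overload shown makes no attempt to reproduce std::lerp's precision guarantees):
#include <cmath>

namespace my_lib {
    struct MyFloat { float val; };

    // Customization found via ADL; a naive sketch only.
    MyFloat lerp(MyFloat a, MyFloat b, MyFloat t) {
        return MyFloat{std::lerp(a.val, b.val, t.val)};
    }
}

template <typename T>
T midpoint_of(T a, T b) {
    using std::lerp;
    return lerp(a, b, T{0.5f});  // finds std::lerp or an ADL overload
}

int main() {
    midpoint_of(1.0f, 2.0f);                              // uses std::lerp
    midpoint_of(my_lib::MyFloat{1}, my_lib::MyFloat{2});  // uses my_lib::lerp
}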

How to dis-ambiguate operator definitions between objects/classes in a programming language?

I'm designing my own programming language (called Lima; if you care, it's on www.btetrud.com), and I'm trying to wrap my head around how to implement operator overloading. I'm deciding to bind operators to specific objects (it's a prototype-based language). (It's also a dynamic language, where 'var' is like 'var' in JavaScript: a variable that can hold any type of value.)
For example, this would be an object with a redefined + operator:
x =
{ int member
  operator +
    self int[b]:
      ret b+self
    int[a] self:
      ret member+a
}
I hope it's fairly obvious what that does. The operator is defined for x in both operand positions, right and left (using self to denote where x goes).
The problem is what to do when you have two objects that define an operator in an open-ended way like this. For example, what do you do in this scenario:
A =
{ int x
  operator +
    self var[b]:
      ret x+b
}
B =
{ int x
  operator +
    var[a] self:
      ret x+a
}
A+B ;; is A's or B's + operator used?
So an easy answer to this question is "well duh, don't make ambiguous definitions", but it's not that simple. What if you include a module that defines an A-style object, and then write a B-style object yourself?
How do you create a language that guards against other objects hijacking what you want to do with your operators?
C++ has operator overloading defined as "members" of classes. How does C++ deal with ambiguity like this?
Most languages will give precedence to the class on the left. In C++, a member operator overload dispatches only on the left-hand operand: when you define a member operator+, you are defining addition for when this type is on the left, for anything on the right. (Handling a class type on the right-hand side requires a non-member function instead.)
In fact, it would not make sense for the same member operator to also apply when the type is on the right-hand side. That happens to work for +, but consider -. If type A defines operator- in a certain way, and I write x - y where x is an int and y is an A, I don't want A's operator- called as though y were on the left, because it would compute the subtraction in reverse!
In Python, which has more extensive operator overloading rules, there is a separate method for the reverse direction. For example, there is a __sub__ method which overloads the - operator when this type is on the left, and a __rsub__ which overloads the - operator when this type is on the right. This is similar to the capability, in your language, to allow the "self" to appear on the left or on the right, but it introduces ambiguity.
Python gives precedence to the thing on the left -- this works better in a dynamic language. If Python encounters x - y, it first calls x.__sub__(y) to see if x knows how to subtract y. This can either produce a result, or return a special value NotImplemented. If Python finds that NotImplemented was returned, it then tries the other way. It calls y.__rsub__(x), which would have been programmed knowing that y was on the right hand side. If that also returns NotImplemented, then a TypeError is raised, because the types were incompatible for that operation.
I think this is the ideal operator overloading strategy for dynamic languages.
Edit: To give a bit of a summary, you have an ambiguous situation, so you really have only three choices:
Give precedence to one side or the other (usually the one on the left). This prevents a class with a right-side overload from hijacking a class with a left-side overload, but not the other way around. (This works best in dynamic languages, as the methods can decide whether they can handle it, and dynamically defer to the other one.)
Make it an error (as #dave is suggesting in his answer). If there is ever more than one viable choice, it is a compiler error. (This works best in static languages, where you can catch this thing in advance.)
Only allow the left-most class to define operator overloads, as in C++. (Then your class B would be illegal.)
The only other option is to introduce a complex system of precedence to the operator overloads, but then you said you want to reduce the cognitive overhead.
I'm going to answer this question by saying "duh, don't make ambiguous definitions".
If I recreate your example in C++ (using a function f instead of the + operator and int/float instead of A/B, but there really isn't much difference)...
#include <iostream>

template<class t>
void f(int a, t b)
{
    std::cout << "me! me! me!";
}

template<class t>
void f(t a, float b)
{
    std::cout << "no, me!";
}

int main(void)
{
    f(1, 1.0f);
    return 0;
}
...the compiler will tell me precisely that: error C2668: 'f' : ambiguous call to overloaded function
If you create a language powerful enough, it's always going to be possible to create things in it that don't make sense. When this happens, it's probably ok to just throw up your hands and say "this doesn't make sense".
In C++, a op b on a member operator means a.op(b), so it's unambiguous; the order settles it. If, in C++, you want to define an operator whose left operand is a built-in type, then the operator has to be a global function with two arguments, not a member; again, though, the order of the operands determines which function is called. It is illegal to define an operator where both operands are of built-in types.
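For instance, a brief sketch (the Money type and its members are illustrative, not from the original post):
#include <iostream>

struct Money {
    int cents;
    // Member operator: applies only when Money is the LEFT operand.
    Money operator+(int extra) const { return Money{cents + extra}; }
};

// Global function: required when the built-in type is on the left.
Money operator+(int extra, const Money& m) {
    return Money{extra + m.cents};
}

int main() {
    Money m{100};
    Money a = m + 5;  // calls the member operator
    Money b = 5 + m;  // calls the global function
    std::cout << a.cents << ' ' << b.cents << '\n';
}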
I would suggest that given X + Y, the compiler should look for both X.op_plus(Y) and Y.op_added_to(X); each implementation should include an attribute indicating whether it is a 'preferred', 'normal', or 'fallback' implementation, and optionally also indicating that it is "common". If both implementations are defined and they are of different priorities (e.g. "preferred" and "normal"), use the priority to select which one is called. If both are defined with the same priority, and both are "common", favor the X.op_plus(Y) form. If both are defined with the same priority and they are not both "common", flag an error.
I would suggest that the ability to prioritize overloads and conversions would IMHO be a very important feature for a language to have. It is not helpful for languages to squawk about ambiguous overloads in cases where both candidates would do the same thing, but languages should squawk in cases where two possible overloads would have different meanings, each of which would be useful in certain contexts. For example, given someFloat==someDouble or someDouble==someLong, a compiler should squawk, since there can be usefulness in knowing whether the numerical quantities represented by two values match, and there can also be usefulness in knowing whether the left-hand operand holds the best possible representation (for its type) of the value in the right-hand operand. Java and C# do not flag ambiguity in either case, opting instead to use the first meaning for the first expression and the second for the second, even though either meaning might be useful in either case. I would suggest that it would be better to reject such comparisons than to have them implement inconsistent semantics.
Overall, I'd suggest as a philosophy that a good language design should let a programmer indicate what's important and what isn't. If a programmer knows that certain "ambiguities" aren't problems, but other ones are, it should be easy to have the compiler flag the latter but not the former.
Addendum
I looked briefly through your proposal; it seems you're expecting bindings to be fully dynamic. I've worked with a language like that (HyperTalk, circa 1988) and it was "interesting". Consider, for example, that "2X" < "3" < 4 < 10 < "11" < "2X". Double dispatch can sometimes be useful, but only in cases where operator overloads with different semantics (e.g. string and numeric comparisons) are limited to operating on disjoint sets of things. Forbidding ambiguous operations at compile time is a good thing, since the programmer will be in a position to specify what's intended. Having such ambiguity trigger a run-time error is a bad thing, because the programmer may be long gone by the time an error surfaces. Consequently, I really can't offer any advice for how to do run-time double dispatch for operators except to say "don't", unless at compile time you restrict the operands to combinations where any possible overload would always have the same semantics.
For example, if you had an abstract "immutable list of numbers" type, with members to report the length or return the number at a particular index, you could specify that two instances are equal if they have the same length and, for every index, return the same number. While it would be possible to compare any two instances for equality by examining every item, that could be inefficient if e.g. one instance was a "BunchOfZeroes" type which simply held an integer N=1000000 and didn't actually store any items, and the other was an "NCopiesOfArray" which held N=500000 and {0,0} as the array to be copied. If many instances of those types are going to be compared, efficiency could be improved by having such comparisons invoke a method which, after checking overall array length, checks whether the "template" array contains any non-zero elements. If it doesn't, then the lists can be reported as equal without having to perform 1,000,000 element comparisons. Note that the invocation of such a method by double dispatch would not alter the program's behavior; it would merely allow it to execute more quickly.
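A compact sketch of that idea (all names illustrative; the specialized overload returns the same answer as the element-wise definition, only faster):
#include <cstddef>
#include <utility>
#include <vector>

struct NumberList {
    virtual ~NumberList() = default;
    virtual std::size_t length() const = 0;
    virtual int at(std::size_t i) const = 0;
};

struct BunchOfZeroes : NumberList {
    std::size_t n;
    explicit BunchOfZeroes(std::size_t n) : n(n) {}
    std::size_t length() const override { return n; }
    int at(std::size_t) const override { return 0; }
};

struct NCopiesOfArray : NumberList {
    std::size_t copies;
    std::vector<int> pattern;
    NCopiesOfArray(std::size_t c, std::vector<int> p)
        : copies(c), pattern(std::move(p)) {}
    std::size_t length() const override { return copies * pattern.size(); }
    int at(std::size_t i) const override { return pattern[i % pattern.size()]; }
};

// Fast path: after the length check, only the small pattern is
// scanned instead of all copies * pattern.size() elements. The
// observable result matches the element-wise definition of equality.
bool equal(const BunchOfZeroes& z, const NCopiesOfArray& r) {
    if (z.length() != r.length()) return false;
    for (int x : r.pattern)
        if (x != 0) return false;
    return true;
}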