Does `equality_comparable_with` need to require `common_reference`? - c++

The concept equality_comparable_with<T, U> is intended to declare that objects of type T and U can be compared equal to each other, and if they are, then this has the expected meaning. That's fine.
However, this concept also requires common_reference_t<T&, U&> to exist. The primary impetus for common_reference and its attendant functionality seems to be to enable proxy iterators, to have a place to represent the relationship between reference and value_type for such iterators.
That's great, but... what does that have to do with testing if a T and a U can be compared equal to one another? Why does the standard require that T and U have a common reference relationship just to allow you to compare them equal?
This creates oddball situations where two types that are logically comparable cannot satisfy the concept, merely because they don't reasonably have a common-reference relationship. For example, vector<int> and pmr::vector<int> logically ought to be comparable. But they can't be, because there's no reasonable common reference between the two otherwise unrelated types.
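A minimal C++20 sketch of the situation (illustrative only):

#include <concepts>
#include <memory_resource>
#include <vector>

// Element-wise comparison of the contents would be perfectly meaningful,
// yet the concept rejects the pair: no common_reference exists between
// std::vector<int>& and std::pmr::vector<int>&.
static_assert(!std::equality_comparable_with<std::vector<int>,
                                             std::pmr::vector<int>>);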

This goes all the way back to the Palo Alto report, §3.3 and D.2.
For the cross-type concepts to be mathematically sound, you need to define what the cross-type comparison means. For equality_comparable_with, t == u generally means that t and u are equal, but what does it even mean for two values of different types to be equal? The design says that cross-type equality is defined by mapping them to the common (reference) type (this conversion is required to preserve the value).
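Simplified, the C++20 definition composes exactly this requirement; the following is a rough sketch, not the standard's wording:

#include <concepts>
#include <type_traits>

// Rough shape of std::equality_comparable_with (simplified: the real
// wording strips references and adds an exposition-only "weakly equality
// comparable with" requirement for the cross-type == and != expressions):
template <typename T, typename U>
concept equality_comparable_with_sketch =
    std::equality_comparable<T> &&
    std::equality_comparable<U> &&
    std::common_reference_with<const T&, const U&> &&
    std::equality_comparable<
        std::common_reference_t<const T&, const U&>>;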
Where the strong axioms of equality_comparable_with are not desirable, the standard uses the exposition-only concept weakly-equality-comparable-with, and that concept doesn't require a common reference. It is, however, a "semantic abomination" (in the words of Casey Carter) and is exposition-only for that reason: it allows having t == u and t2 == u but t != t2 (this is actually necessary for sentinels).
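A sketch of why sentinels need this laxity, using a hypothetical C-string iterator/sentinel pair (names are illustrative, not from the standard):

// Any iterator positioned on '\0' compares equal to the sentinel, so for
// iterators into two different strings, it1 == s and it2 == s can both
// hold while it1 != it2. (C++20 rewrites != from == automatically.)
struct cstr_sentinel {};

struct cstr_iter {
    const char* p;
    char operator*() const { return *p; }
    cstr_iter& operator++() { ++p; return *this; }
    bool operator==(const cstr_iter& o) const { return p == o.p; }
    bool operator==(cstr_sentinel) const { return *p == '\0'; }
};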

Related

Why doesn't the ranges v3 library's sort function use operator< defined on custom types while std::sort does? [duplicate]

Why was std::ranges::less introduced?

On cppreference on std::ranges::less, in notes we can see that:
Unlike std::less, std::ranges::less requires all six comparison operators <, <=, >, >=, == and != to be valid (via the totally_ordered_with constraint).
But... why? Why would we use std::ranges::less{} instead of std::less{}? What is the practical situation in which we want to less{} only if there are other comparison operators defined, not only the < one?
What is the practical situation in which we want to less{} only if there are other comparison operators defined, not only the < one?
Not everything about the Ranges library is based purely on what is "practical". Much of it is about making the language and library make logical sense.
Concepts as a language feature gives the standard library the opportunity to define meaningful combinations of object features. To say that a type has an operator< is useful from the purely practical perspective of telling you what operations are available to it. But it doesn't really say anything meaningful about the type.
If a type is totally ordered, then that logically means that you could use any of the comparison operators to compare two objects of that type. Under the idea of a total order, a < b and b > a are equivalent statements. So it makes sense that if code is restricted to types that provide a total order, that code should be permitted to use either statement.
ranges::less::operator() does not use any operator other than <. But this function is constrained to types modelling the totally_ordered concept. This constraint exists because that's what ranges::less is for: comparing types which are totally ordered. It could have a more narrow constraint, but that would be throwing away any meaning provided by total ordering.
It also prevents you from exposing arbitrary implementation details to users. For example, let's say that you've got a template that takes some type T and you want to use T in a ranges::less-based operation. If you constrain this template to just having an operator<, then you have effectively put your implementation into the constraint. You no longer have the freedom for the implementation to switch to ranges::greater internally. Whereas if you had put std::totally_ordered in your constraint, you would make it clear to the user what they need to do while giving yourself the freedom to use whatever functors you need.
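A sketch of that freedom, with a hypothetical min_of helper (illustrative, not a standard facility):

#include <concepts>

// Constrained on the concept rather than on one operator: the interface
// says "T must be totally ordered", and the body is free to use <, >, <=,
// or >= interchangeably without changing the declared requirements.
template <std::totally_ordered T>
const T& min_of(const T& a, const T& b) {
    return b < a ? b : a;   // could equally be written as: a > b ? b : a
}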
And since operator<=> exists and makes it easy to implement the ordering operators in one function, there's no practical downside. Well, except for code that has to compile on both C++17 and C++20.
Essentially, you shouldn't be writing types that are "ordered" by just writing operator< to begin with.
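For instance, a minimal C++20 sketch, assuming nothing beyond a defaulted operator<=>:

#include <compare>
#include <concepts>

// Defaulting operator<=> implicitly declares a defaulted operator== too,
// so all six comparison operators come from this single line.
struct Point {
    int x, y;
    auto operator<=>(const Point&) const = default;
};

static_assert(std::totally_ordered<Point>);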
As far as I can tell from the proposal, the idea is simply to simplify the design of the function objects. std::less is a class template which requires a template parameter and represents a homogeneous comparison. This template parameter can be omitted to default to std::less<void>, which allows heterogeneous comparisons. The argument seems to be that the homogeneous case is unnecessary, as it's handled fine by the heterogeneous approach, so the design can be simplified considerably and a class template isn't needed at all.
As to why the other operators besides operator< are required I'm not completely sure. My best guess is that this is just part of what it means to have a total order defined in C++ between two, possibly different, types.

Practical meaning of std::strong_ordering and std::weak_ordering

I've been reading a bit about C++20's consistent comparison (i.e. operator<=>) but couldn't understand what's the practical difference between std::strong_ordering and std::weak_ordering (the same goes for the _equality versions, for that matter).
Other than being very descriptive about the substitutability of the type, does it actually affect the generated code? Does it add any constraints for how one could use the type?
Would love to see a real-life example that demonstrates this.
Does it add any constraints for how one could use the type?
One very significant constraint (which wasn't intended by the original paper) was P0732's adoption of strong_ordering as the indicator that a class type can be used as a non-type template parameter. weak_ordering isn't sufficient for this case due to how template equivalence has to work. This is no longer the case, as non-type template parameters no longer work this way (see P1907R0 for an explanation of the issues and P1907R1 for the wording of the new rules).
Generally, it's possible that some algorithms simply require weak_ordering but other algorithms require strong_ordering, so being able to annotate that on the type might mean a compile error (insufficiently strong ordering provided) instead of simply failing to meet the algorithm's requirements at runtime and hence just being undefined behavior. But all the algorithms in the standard library and the Ranges TS that I know of simply require weak_ordering. I do not know of one that requires strong_ordering off the top of my head.
Does it actually affect the generated code?
Outside of the cases where strong_ordering is required, or an algorithm explicitly chooses different behavior based on the comparison category, no.
There really isn't any reason to have std::weak_ordering. It's true that the standard describes operations like sorting in terms of a "strict" weak order, but a strict weak ordering is equivalent to a total order on the partition of the original set into incomparability equivalence classes. It's rare to encounter generic code that is interested both in the order structure (which considers each equivalence class to be one "value") and in some possibly finer notion of equivalence: note that when the standard library uses < (or <=>) it does not use == (which might be finer).
The usual example for std::weak_ordering is a case-insensitive string, since for instance printing two strings that differ only by case certainly produces different behavior despite their equivalence (under any operator). However, lots of types can have different behavior despite being ==: two std::vector<int> objects, for instance, might have the same contents and different capacities, so that appending to them might invalidate iterators differently.
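A sketch of that usual example, with a hypothetical CaseInsensitiveName type (illustrative only):

#include <algorithm>
#include <cctype>
#include <compare>
#include <cstddef>
#include <string>

// "abc" and "ABC" are equivalent under this ordering yet still behave
// differently when printed, so weak_ordering is the honest category here.
struct CaseInsensitiveName {
    std::string value;

    std::weak_ordering operator<=>(const CaseInsensitiveName& o) const {
        const std::size_t n = std::min(value.size(), o.value.size());
        for (std::size_t i = 0; i < n; ++i) {
            const int a = std::tolower(static_cast<unsigned char>(value[i]));
            const int b = std::tolower(static_cast<unsigned char>(o.value[i]));
            if (a != b) return a <=> b;  // strong_ordering converts to weak
        }
        return value.size() <=> o.value.size();
    }
    bool operator==(const CaseInsensitiveName& o) const {
        return (*this <=> o) == 0;
    }
};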
The simple fact is that the "equality" implied by std::strong_ordering::equivalent but not by std::weak_ordering::equivalent is irrelevant to the very code that stands to benefit from it, because generic code doesn't depend on the implied behavioral changes, and non-generic code doesn't need to distinguish between the ordering types because it knows the rules for the type on which it operates.
The standard attempts to give the distinction meaning by talking about "substitutability", but that is inevitably circular because it can sensibly refer only to the very state examined by the comparisons. This was discussed prior to publishing C++20, but (perhaps for the obvious reasons) not much of the planned further discussion has taken place.

Equality operator overloads: Is (x!=y) == (!(x==y))?

Does the C++ standard guarantee that (x!=y) always has the same truth value as !(x==y)?
I know there are many subtleties involved here: The operators == and != may be overloaded. They may be overloaded to have different return types (which only have to be implicitly convertible to bool). Even the !-operator might be overloaded on the return type. That's why I handwavingly referred to the "truth value" above, but trying to elaborate it further, exploiting the implicit conversion to bool, and trying to eliminate possible ambiguities:
bool ne = (x!=y);
bool e = (x==y);
bool result = (ne == (!e));
Is result guaranteed to be true here?
The C++ standard specifies the equality operators in section 5.10, but mainly seems to define them syntactically (and some semantics regarding pointer comparisons). The concept of being EqualityComparable exists, but there is no dedicated statement about the relationship of its operator == to the != operator.
There exist related documents from C++ working groups, saying that...
It is vital that equal/unequal [...] behave as boolean negations of each other. After all, the world would make no sense if both operator==() and operator!=() returned false! As such, it is common to implement these operators in terms of each other
However, this only reflects the Common Sense™, and does not specify that they have to be implemented like this.
Some background: I'm just trying to write a function that checks whether two values (of unknown type) are equal, and print an error message if this is not the case. I'd like to say that the required concept here is that the types are EqualityComparable. But for this, one would still have to write if (!(x==y)) {…} and could not write if (x!=y) {…}, because this would use a different operator, which is not covered with the concept of EqualityComparable at all, and which might even be overloaded differently...
I know that the programmer basically can do whatever he wants in his custom overloads. I just wondered whether he is really allowed to do everything, or whether there are rules imposed by the standard. Maybe one of these subtle statements that suggest that deviating from the usual implementation causes undefined behavior, like the one that NathanOliver mentioned in a comment, but which seemed to only refer to certain types. For example, the standard explicitly states that for container types, a!=b is equivalent to !(a==b) (section 23.2.1, table 95, "Container requirements").
But for general, user-defined types, it currently seems that there are no such requirements. The question is tagged language-lawyer, because I hoped for a definite statement/reference, but I know that this may nearly be impossible: While one could point out the section where it said that the operators have to be negations of each other, one can hardly prove that none of the ~1500 pages of the standard says something like this...
In doubt, and unless there are further hints, I'll upvote/accept the corresponding answers later, and for now assume that comparing for inequality of EqualityComparable types should be done with if (!(x==y)) to be on the safe side.
Does the C++ standard guarantee that (x!=y) always has the same truth value as !(x==y)?
No it doesn't. Absolutely nothing stops me from writing:
struct Broken {
    bool operator==(const Broken&) const { return true; }
    bool operator!=(const Broken&) const { return true; }
};
Broken x, y;
That is perfectly well-formed code. Semantically, it's broken (as the name might suggest), but there's certainly nothing wrong with it from a pure C++ code functionality perspective.
The standard also clearly indicates this is okay in [over.oper]/7:
The identities among certain predefined operators applied to basic types (for example, ++a ≡ a+=1) need not hold for operator functions. Some predefined operators, such as +=, require an operand to be an lvalue when applied to basic types; this is not required by operator functions.
In the same vein, nothing in the C++ standard guarantees that operator< actually implements a valid Ordering (or that x<y <==> !(x>=y), etc.). Some standard library implementations will actually add instrumentation to attempt to debug this for you in the ordered containers, but that is just a quality of implementation issue and not a standards-compliant-based decision.
Library solutions like Boost.Operators exist to at least make this a little easier on the programmer's side:
struct Fixed : equality_comparable<Fixed> {
    bool operator==(const Fixed&) const;
    // a consistent operator!= is provided for you
};
In C++14, Fixed is no longer an aggregate with the base class. However, in C++17 it's an aggregate again (by way of P0017).
With the adoption of P1185 for C++20, the library solution has effectively become a language solution - you just have to write this:
struct Fixed {
    bool operator==(Fixed const&) const;
};

bool ne(Fixed const& x, Fixed const& y) {
    return x != y;
}
The body of ne() becomes a valid expression that evaluates as !x.operator==(y) -- so you don't have to worry about keeping the two comparisons in line nor rely on a library solution to help out.
In general, I don't think you can rely on it, because it doesn't always make sense for operator== and operator!= to correspond, so I don't see how the standard could ever require it.
For example, consider the built-in floating point types, like doubles, for which NaNs always compare false, so operator== and operator!= can both return false at the same time. (Edit: Oops, this is wrong; see hvd's comment.)
As a result, if I'm writing a new class with floating point semantics (maybe a really_long_double), I have to implement the same behaviour to be consistent with the primitive types, so my operator== would have to behave the same and compare two NaNs as false, even though operator!= also compares them as false.
This might crop up in other circumstances, too. For example, if I was writing a class to represent a database nullable value I might run into the same issue, because all comparisons to database NULL are false. I might choose to implement that logic in my C++ code to have the same semantics as the database.
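A hypothetical sketch of such a type (not real database code):

#include <optional>

// SQL-style semantics collapsed to bool: any comparison involving a null
// value is simply "not true", so x == y and x != y can both be false.
template <typename T>
struct DbNullable {
    std::optional<T> v;

    bool operator==(const DbNullable& o) const {
        return v && o.v && *v == *o.v;
    }
    bool operator!=(const DbNullable& o) const {
        return v && o.v && *v != *o.v;
    }
};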
In practice, though, for your use case, it might not be worth worrying about these edge cases. Just document that your function compares the objects using operator== (or operator !=) and leave it at that.
No. You can write operator overloads for == and != that do whatever you wish. It probably would be a bad idea to do so, but the definition of C++ does not constrain those operators to be each other's logical opposites.

What are the differences between concepts and template constraints?

I want to know what are the semantic differences between the C++ full concepts proposal and template constraints (for instance, constraints as appeared in Dlang or the new concepts-lite proposal for C++1y).
What are full-fledged concepts capable of doing than template constraints cannot do?
The following information is out of date. It needs to be updated according to the latest Concepts Lite draft.
Section 3 of the constraints proposal covers this in reasonable depth.
The concepts proposal has been put on the back burner for a short while in the hope that constraints (i.e. concepts-lite) can be fleshed out and implemented on a shorter time scale, currently aiming for at least something in C++14. The constraints proposal is designed to act as a smooth transition to a later definition of concepts. Constraints are part of the concepts proposal and are a necessary building block in its definition.
In Design of Concept Libraries for C++, Sutton and Stroustrup consider the following relationship:
Concepts = Constraints + Axioms
To quickly summarise their meanings:
Constraint - A predicate over statically evaluable properties of a type. Purely syntactic requirements. Not a domain abstraction.
Axioms - Semantic requirements of types that are assumed to be true. Not statically checked.
Concepts - General, abstract requirements of algorithms on their arguments. Defined in terms of constraints and axioms.
So if you add axioms (semantic properties) to constraints (syntactic properties), you get concepts.
Concepts-Lite
The concepts-lite proposal brings us only the first part, constraints, but this is an important and necessary step towards fully-fledged concepts.
Constraints
Constraints are all about syntax. They give us a way of statically discerning properties of a type at compile-time, so that we can restrict the types used as template arguments based on their syntactic properties. In the current proposal for constraints, they are expressed with a subset of propositional calculus using logical connectives like && and ||.
Let's take a look at a constraint in action:
template <typename Cont>
requires Sortable<Cont>()
void sort(Cont& container);
Here we are defining a function template called sort. The new addition is the requires clause. The requires clause gives some constraints over the template arguments for this function. In particular, this constraint says that the type Cont must be a Sortable type. A neat thing is that it can be written in a more concise form as:
template <Sortable Cont>
void sort(Cont& container);
Now if you attempt to pass anything that is not considered Sortable to this function, you'll get a nice error that immediately tells you that the type deduced for Cont is not a Sortable type. If you had done this in C++11, you'd have had some horrible error thrown from inside the sort function that makes no sense to anybody.
Constraint predicates are very similar to type traits. They take some template argument type and give you some information about it. Constraints attempt to answer the following kinds of questions about a type:
Does this type have such-and-such operator overloaded?
Can these types be used as operands to this operator?
Does this type have such-and-such trait?
Is this constant expression equal to that? (for non-type template arguments)
Does this type have a function called yada-yada that returns that type?
Does this type meet all the syntactic requirements to be used as that?
However, constraints are not meant to replace type traits. Instead, they will work hand in hand. Some type traits can now be defined in terms of concepts and some concepts in terms of type traits.
Examples
So the important thing about constraints is that they do not care about semantics one iota. Some good examples of constraints are listed below (a sketch of how one might be written follows the list):
Equality_comparable<T>: Checks whether the type has == with both operands of that same type.
Equality_comparable<T,U>: Checks whether there is a == with left and right operands of the given types.
Arithmetic<T>: Checks whether the type is an arithmetic type.
Floating_point<T>: Checks whether the type is a floating point type.
Input_iterator<T>: Checks whether the type supports the syntactic operations that an input iterator must support.
Same<T,U>: Checks whether the given types are the same.
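For a concrete feel, here is roughly how the first of these might be written in today's C++20 syntax (the concepts-lite proposal itself used constexpr predicate functions, so this is an approximation, not the proposal's wording):

#include <concepts>

// Approximation of Equality_comparable<T, U>: purely syntactic -- it
// checks that == exists and yields something bool-like, and says nothing
// about what the comparison means.
template <typename T, typename U>
concept Equality_comparable = requires(T t, U u) {
    { t == u } -> std::convertible_to<bool>;
};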
You can try all this out with a special concepts-lite build of GCC.
Beyond Concepts-Lite
Now we get into everything beyond the concepts-lite proposal. This is even more futuristic than the future itself. Everything from here on out is likely to change quite a bit.
Axioms
Axioms are all about semantics. They specify relationships, invariants, complexity guarantees, and other such things. Let's look at an example.
While the Equality_comparable<T,U> constraint will tell you that there is an operator== that takes types T and U, it doesn't tell you what that operation means. For that, we will have the axiom Equivalence_relation. This axiom says that when objects of these two types are compared with operator== giving true, these objects are equivalent. This might seem redundant, but it's certainly not. You could easily define an operator== that instead behaved like an operator<. You'd be evil to do that, but you could.
Another example is a Greater axiom. It's all well and good to say two objects of type T can be compared with > and < operators, but what do they mean? The Greater axiom says that x is greater than y if and only if y is less than x. The proposed specification of such an axiom looks like:
template <typename T>
axiom Greater(T x, T y) {
    (x > y) == (y < x);
}
So axioms answer the following types of questions:
Do these two operators have this relationship with each other?
Does this operator for such-and-such type mean this?
Does this operation on that type have this complexity?
Does this result of that operator imply that this is true?
That is, they are concerned entirely with the semantics of types and operations on those types. These things cannot be statically checked. If this needs to be checked, a type must in some way proclaim that it adheres to these semantics.
Examples
Here are some common examples of axioms:
Equivalence_relation: If two objects compare ==, they are equivalent.
Greater: Whenever x > y, then y < x.
Less_equal: Whenever x <= y, then !(y < x).
Copy_equality: For x and y of type T: if x == y, then a new object created by copy construction satisfies T{x} == y, and still x == y (that is, copying is non-destructive).
Concepts
Now concepts are very easy to define; they are simply the combination of constraints and axioms. They provide an abstract requirement over the syntax and semantics of a type.
As an example, consider the following Ordered concept:
concept Ordered<Regular T> {
    requires constraint Less<T>;
    requires axiom Strict_total_order<less<T>, T>;
    requires axiom Greater<T>;
    requires axiom Less_equal<T>;
    requires axiom Greater_equal<T>;
}
First note that for the template type T to be Ordered, it must also meet the requirements of the Regular concept. The Regular concept is a very basic requirement that the type is well-behaved - it can be constructed, destroyed, copied and compared.
In addition to those requirements, the Ordered requires that T meet one constraint and four axioms:
Constraint: An Ordered type must have an operator<. This is statically checked so it must exist.
Axioms: For x and y of type T:
x < y gives a strict total ordering.
When x is greater than y, y is less than x, and vice versa.
When x is less than or equal to y, y is not less than x, and vice versa.
When x is greater than or equal to y, y is not greater than x, and vice versa.
Combining constraints and axioms like this gives you concepts. They define the syntactic and semantic requirements for abstract types for use with algorithms. Algorithms currently have to assume that the types used will support certain operations and express certain semantics. With concepts, we'll be able to ensure that requirements are met.
In the latest concepts design, the compiler will only check that the syntactic requirements of a concept are fulfilled by the template argument. The axioms are left unchecked. Since axioms denote semantics that are not statically evaluable (or often impossible to check entirely), the author of a type would have to explicitly state that their type meets all the requirements of a concept. This was known as concept mapping in previous designs but has since been removed.
Examples
Here are some examples of concepts:
Regular types are constructible, destructible, copyable, and can be compared.
Ordered types support operator<, and have a strict total ordering and other ordering semantics.
Copyable types are copy constructible, destructible, and if x is equal to y and x is copied, the copy will also compare equal to y.
Iterator types must have associated types value_type, reference, difference_type, and iterator_category which themselves must meet certain concepts. They must also support operator++ and be dereferenceable.
The Road to Concepts
Constraints are the first step towards a full concepts feature of C++. They are a very important step, because they provide the statically enforceable requirements of types so that we can write much cleaner template functions and classes. Now we can avoid some of the difficulties and ugliness of std::enable_if and its metaprogramming friends.
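To illustrate, a side-by-side sketch (the constrained version is shown in C++20 syntax for concreteness; twice_old and twice_new are made-up names):

#include <type_traits>

// C++11: the requirement is buried in SFINAE machinery.
template <typename T,
          typename = typename std::enable_if<std::is_arithmetic<T>::value>::type>
T twice_old(T x) { return x + x; }

// With constraints, the requirement is stated directly in the interface.
template <typename T>
    requires std::is_arithmetic_v<T>
T twice_new(T x) { return x + x; }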
However, there are a number of things that the constraints proposal does not do:
It does not provide a concept definition language.
Constraints are not concept maps. The user does not need to specifically annotate their types as meeting certain constraints. They are statically checked using simple compile-time language features.
The implementations of templates are not constrained by the constraints on their template arguments. That is, if your function template does anything with an object of constrained type that it shouldn't do, the compiler has no way to diagnose that. A fully featured concepts proposal would be able to do this.
The constraints proposal has been designed specifically so that a full concepts proposal can be introduced on top of it. With any luck, that transition should be a fairly smooth ride. The concepts group are looking to introduce constraints for C++14 (or in a technical report soon after), while full concepts might start to emerge sometime around C++17.
See also "what's 'lite' about concepts lite" in section 2.3 of the recent (March 12) Concepts telecon minutes and record of discussion, which were posted the same day here: http://isocpp.org/blog/2013/03/new-paper-n3576-sg8-concepts-teleconference-minutes-2013-03-12-herb-sutter.
My 2 cents:
The concepts-lite proposal is not meant to do "type checking" of template implementations. That is, concepts-lite will ensure (notionally) interface compatibility at the template instantiation site. Quoting from the paper: "concepts lite is an extension of C++ that allows the use of predicates to constrain template arguments". And that's it. It does not say that the template body will be checked (in isolation) against the predicates. That probably means there is no first-class notion of archetypes when you are talking about concepts-lite. Archetypes, if I remember correctly, are types in the concepts-heavy proposal that offer no less and no more than what is needed to satisfy the implementation of the template.
Concepts-lite uses glorified constexpr functions with a bit of syntactic trickery supported by the compiler. There are no changes to the lookup rules.
Programmers are not required to write concept maps.
Finally, quoting again "The constraints proposal does not directly address the specification or use of semantics; it is targeted only at checking syntax." That would mean axioms are not within the scope (so far).