Why does double negation change the value of a C++ concept?

A friend of mine showed me a C++20 program with concepts that puzzled me:
struct A { static constexpr bool a = true; };
template <typename T>
concept C = T::a || T::b;
template <typename T>
concept D = !!(T::a || T::b);
static_assert( C<A> );
static_assert( !D<A> );
It is accepted by all compilers: https://gcc.godbolt.org/z/e67qKoqce
Here the concept D is the same as the concept C; the only difference is the double negation operator !!, which at first sight should not change the concept's value. Still, for the struct A the concept C is true and the concept D is false.
Could you please explain why it is so?

Here the concept D is the same as the concept C
They are not. Constraints (and concept-ids) are normalized when checked for satisfaction and broken down to atomic constraints.
[temp.names]
8 A concept-id is a simple-template-id where the template-name is
a concept-name. A concept-id is a prvalue of type bool, and does not
name a template specialization. A concept-id evaluates to true if the
concept's normalized constraint-expression ([temp.constr.decl]) is
satisfied ([temp.constr.constr]) by the specified template arguments
and false otherwise.
And the || is regarded differently in C and D:
[temp.constr.normal]
2 The normal form of an expression E is a constraint that is defined
as follows:
The normal form of an expression ( E ) is the normal form of E.
The normal form of an expression E1 || E2 is the disjunction of the normal forms of E1 and E2.
The normal form of an expression E1 && E2 is the conjunction of the normal forms of E1 and E2.
The normal form of a concept-id C<A1, A2, ..., An> is the normal form of the constraint-expression of C, after substituting A1,
A2, ..., An for C's respective template parameters in the
parameter mappings in each atomic constraint. If any such substitution
results in an invalid type or expression, the program is ill-formed;
no diagnostic is required.
The normal form of any other expression E is the atomic constraint whose expression is E and whose parameter mapping is the identity
mapping.
For C the atomic constraints are T::a and T::b.
For D there is only one atomic constraint that is !!(T::a || T::b).
A substitution failure in an atomic constraint makes it unsatisfied, so it evaluates to false. C<A> is a disjunction of one constraint that is satisfied and one that is not, so it's true. D<A> is false since its one and only atomic constraint has a substitution failure.
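As a side illustration (a sketch of my own reusing the struct A from the question; the name C2 is invented), even if the invalid operand comes first, a disjunction simply treats it as unsatisfied and moves on, with no hard error:
template <typename T>
concept C2 = T::b || T::a;  // still two atomic constraints: T::b and T::a
static_assert( C2<A> );     // T::b fails substitution (not satisfied), T::a is satisfied, so the disjunction holds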

The important thing to realize is that per [temp.constr.constr], atomic constraints are composed only via conjunctions (through top-level &&) and disjunctions (through top-level ||). Negation must be thought of as part of a constraint, not the negation of a constraint. There's even a non-normative note pointing this out explicitly.
With that in mind, we can examine the two cases. C is a disjunction of two atomic constraints: T::a and T::b. Per /3, disjunctions employ short-circuiting behaviour when checking for satisfaction. This means that T::a is checked first. Since it succeeds, the entire constraint C is satisfied without ever checking the second.
D, on the other hand, is one atomic constraint: !!(T::a || T::b). The || does not create a disjunction in any way; it's simply part of the expression. We look to [temp.constr.atomic]/3 to see that template parameters are substituted in. This means that both T::a and T::b have substitution performed. This paragraph also states that if substitution fails, the constraint is not satisfied. As the earlier note suggests, the negations out front are not even considered yet. In fact, having only one negation yields the same result.
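A minimal sketch of that last point (again assuming the struct A from the question; the concept name E is invented for illustration):
template <typename T>
concept E = !(T::a || T::b);  // a single atomic constraint: the whole negated expression
static_assert( !E<A> );       // substitution reaches T::b and fails, so E<A> is false, just like D<A>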
Now the obvious question is why concepts were designed this way. Unfortunately, I don't remember coming across any reasoning for it in the designer's conference talks and other communications. The best I've been able to find was this bit from the original proposal:
While negation has turned out to be fairly common in our constraints (see Section 5.3), we have not found it necessary to assign deeper semantics to the operator.
In my opinion, this is probably really underselling the thought that was put into the decision. I'd love to see the designer elaborate on this, as I'm confident he has more to say than this small quotation.

Related

Using concepts in an unevaluated context gives inconsistent results

Consider the following useless concept C:
template<class T>
concept C = static_cast<T>(true);
If we pass an arbitrary type to C in an unevaluated context, then all three compilers will compile successfully:
struct S {};
decltype(C<S>) x = 0;
But if we pass int to C in the unevaluated context:
decltype(C<int>) y = 0;
GCC still accepts it, while Clang and MSVC reject it with the same error message:
<source>:2:13: error: atomic constraint must be of type 'bool' (found 'int')
Is the above code still well-formed? Which compiler should I trust?
Concept names do not work on the basis of evaluating an expression as we would normally think of it. A concept name resolves to a boolean that tells if the constraint-expression is satisfied or not:
A concept-id is a prvalue of type bool, and does not name a template specialization. A concept-id evaluates to true if the concept's normalized constraint-expression is satisfied ([temp.constr.constr]) by the specified template arguments and false otherwise
Constraint expressions are broken down into atomic pieces. Fortunately, your constraint expression has only one atomic piece: static_cast<T>(true). The way we resolve whether an atomic constraint is satisfied is simple. There are several parts. Part one is:
To determine if an atomic constraint is satisfied, the parameter mapping and template arguments are first substituted into its expression. If substitution results in an invalid type or expression, the constraint is not satisfied.
This is why the compilers allow the first one. static_cast<S>(true) is not a valid expression, as there is no conversion from a bool to an S. Therefore, the atomic constraint is not satisfied, so C<S> is false.
However, static_cast<int>(true) is a valid expression. So we move on to part 2:
Otherwise, the lvalue-to-rvalue conversion is performed if necessary, and E shall be a constant expression of type bool.
And that's where we run into the word "shall". In standard-ese, "shall" means "if the user provides code where this is not the case, there is a compile error". An int is not a "constant expression of type bool". Therefore, the code does not conform to this requirement. And a compile error results.
I imagine that GCC just treats the errors as substitution failures (that or it automatically coerces it into a bool), but the standard requires MSVC/Clang's behavior of erroring out.
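For what it's worth, here is a sketch of my own (not from the question) showing how making the atomic constraint itself a bool avoids the hard error in both cases, reusing the struct S from the question:
template<class T>
concept C_bool = static_cast<bool>(static_cast<T>(true));

static_assert( !C_bool<S> );  // static_cast<S>(true) is invalid: substitution failure, not satisfied
static_assert( C_bool<int> ); // static_cast<bool>(static_cast<int>(true)) is a constant expression of type bool: true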

Satisfied and modeled concept?

Introduction
The standard specifies that each concept is related to two predicates:
predicate "is statisfied by": a concept is satisfied by a sequence of template argument when it evaluates to true. This is almost a syntactic check.
predicate "is modeled by": A sequence Args of template arguments is said to model a concept C if Args satisfies C ([temp.constr.decl]) and meets all semantic requirements (if any) given in the specification of C. [res.on.requirements]
For some concepts, the requirements that make a satisfied concept modeled are clearly expressed. Example: [concept.assignable]
LHS and RHS model assignable_from<LHS, RHS> only if
addressof(lhs = rhs) == addressof(lcopy)
[...]
But I wonder if the syntactic requirements also implicitly imply semantic requirements.
Question
Do the syntactic predicates implicitly imply requirements for the concept to be modeled?
I see two kinds of implicit requirements:
The concept is satisfied because the syntactically checked expressions are unevaluated; if those expressions were actually evaluated, the program would be ill-formed.
The concept is satisfied because the syntactically checked expressions are not evaluated, but evaluating those expressions would result in the program having undefined behavior.
Examples
For example, let's consider the default_initializable concept, defined here: [concept.default.init].
default_initializable is satisfied by A<int> but the program is ill-formed if a variable of type A<int> is default-initialized (demo):
template <class T>
struct A {
    A() {
        f(T{});
    }
};
static_assert (default_initializable <A<int>>); // A<int> satisfies default_initializable
A<int> a{}; // compile time error: f not declared in this scope
default_initializable is satisfied by A but default-initialization of A results in undefined behavior (when the default-initialization is not preceded by a zero-initialization) (demo):
struct A {
    int c;
    A() {
        c++;
    }
};
static_assert (default_initializable <A>); // A satisfies default_initializable
auto p = new A; // undefined behavior: indeterminate value as operand of operator ++
a concept is satisfied by a sequence of template arguments when it evaluates to true. This is almost a syntactic check.
No, it is not "almost" anything: it is a syntactic check. The constraints specified by a requires clause (for example) verify that a specific syntax is legal syntax for that type. This is all that "satisfying a concept" means.
Do the syntactic predicates implicitly imply requirements for the concept to be modeled?
... no. If satisfying a concept also implied modeling the concept, then the standard wouldn't need different terms for these.
The point of having such a distinction is the recognition that the concept language feature can't specify every requirement that concepts as a concept should encapsulate. So satisfying-a-concept is just the language part, while modelling-a-concept includes things that the language can't do.
But that question is kind of separate from what your two examples show. Your examples represent the difference between "valid syntax" and "can be compiled/executed". Satisfying a concept only cares about the former. And modelling a concept only cares about the latter to the extent that said semantic behavior is explicitly specified.
There is nothing in the standard about implicit semantic requirements. There is no statement to the effect of "all expressions/statements in a concept must be able to be compiled and/or executed in order for it to be modeled". Nor is there intended to be.
However much we try to pretend it's more than this, concepts as it exists in C++20 is nothing more than a more convenient mechanism for performing SFINAE. SFINAE can't test compilable/executable validity of the contents of some expression, so neither can concepts. And neither does concepts attempt to pretend that it can.
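To make the gap concrete, here is a small sketch of my own (not from the question): a type that satisfies std::equality_comparable but clearly does not model it, because the semantic requirements of [concept.equalitycomparable] are stated only in prose and are never checked by the compiler:
#include <concepts>

struct Weird {
    bool operator==(const Weird&) const { return false; }  // never equal, not even to itself
};

static_assert( std::equality_comparable<Weird> );  // satisfied: the syntax checks out
// Whether Weird "models" the concept is purely a promise by its author;
// nothing in the language can verify that == is an equivalence relation here.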

Why is the switch-case statement not allowed upon Equality-Comparable classes?

Why is a C++ class satisfying the EqualityComparable concept NOT allowed to be used in a switch-case statement? What is the rationale behind this decision?
Here follows the EqualityComparable definition:
template <class T>
concept bool EqualityComparable() {
    return requires(T a, T b) {
        {a == b} -> Boolean; // Boolean is the concept defining a type usable in boolean context
        {a != b} -> Boolean;
    };
}
The switch statement was designed with branch tables in mind, and so it requires that it operates on integral types 1). To me this is a historical reason, as I can easily imagine a relaxed rule where you can have any kind of comparable type, or even supply your own comparator.
Even as it is now, the compiler is not forced to use branch tables for switch (it is an implementation detail 2)), so having a switch statement that cannot create these branch tables (with non-integer types) would not be an issue in my humble opinion.
1) or enumeration type, or of a class type contextually implicitly convertible to an integral or enumeration type
http://en.cppreference.com/w/cpp/language/switch
2) in fact the compiler can do all sorts of crazy things, e.g. generate a hybrid of classic conditional jumps and multi-level branch tables.
The switch-case statement can only be used for integral types, not arbitrary equality comparable types: http://en.cppreference.com/w/cpp/language/switch
The intention behind the switch-case construct is that it creates a jump table instead of just a chain of if-then-else constructs, which is more efficient in most cases. It may not always be more efficient on modern CPUs with branch prediction, but the language was created long before branch prediction was a thing (and branch prediction is still not present on every CPU even now, for example on some small embedded ARM cores).
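As a hedged illustration of what this means in practice (my own example, unrelated to the code above), dispatching on a non-integral equality-comparable type currently has to fall back to a chain of comparisons:
#include <string>

// A switch over s would not compile, because std::string is not an integral type.
int colour_index(const std::string& s) {
    if (s == "red")   return 0;
    if (s == "green") return 1;
    if (s == "blue")  return 2;
    return -1;
}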

define type before use

According to the MLton documentation:
Standard ML requires types to be defined before they are used. [link]
Not all implementations enforce this requirement (for example, SML/NJ doesn't), but the above-linked page makes a good case for why it might be needed for soundness (depending on how the implementation handles the value restriction), and it accords with some of the commentary in the Definition:
Although not assumed in our definitions, it is intended that every context C = T, U, E has the property that tynames E ⊆ T. Thus T may be thought of, loosely, as containing all type names which "have been generated". […] Of course, remarks about what "has been generated" are not precise in terms of the semantic rules. But the following precise result may easily be demonstrated:
Let S be a sentence T, U, E ⊢ phrase ⇒ A such that tynames E ⊆ T, and let S′ be a sentence T′, U′, E′ ⊢ phrase′ ⇒ A′ occurring in a proof of S; then also tynames E′ ⊆ T′.
[page 21]
But I'm doubly confused by this.
Firstly — the above theorem seems backward. If I correctly understand the phrase "occurring in a proof of S", then this seems to mean (by contrapositive) "once you have a context that violates the intention that tynames E ⊆ T, all subsequent contexts will also violate that intention". Even if that's true, it seems that it would be much more useful and meaningful to assert the converse, namely, "if all contexts so far conform to the intention that tynames E ⊆ T, then any subsequently inferable context will also conform to that intention". No?
Secondly — neither MLton's statement nor the Definition's statement actually seems to be supported by the inference rules (or the "Further Restrictions" that follow them). A few inference rules have "tynames τ ⊆ T of C" or "tynames VE ⊆ T of C" as a side-condition, but none of those rules is needed for this program (given in the above-linked documentation):
val r = ref NONE
datatype t = A | B
val () = r := SOME A
(Specifically: rule (4) has to do with let, rule (14) with =>, and rule (26) with rec. None of those is used in this program.)
And coming at it from the other direction, rule (17), which covers datatype declarations, requires only that the generated type names not be in T of C; so it doesn't prevent the generation of a type name used in the existing value environment (except insofar as it's already true that tynames VE ⊆ T of C).
I feel like I'm probably missing something pretty basic here, but I have no idea what it could be!
Regarding your first question, I'm not sure why you suggest that reading. The result basically says that if you have a derivation S (think of it as a tree) whose context satisfies the condition, then all its subderivations (think subtrees) will have contexts that also satisfy the condition. In other words, all rules maintain the condition. Think of the condition as the well-formedness requirement for contexts C.
Regarding your second question, note the use of ⊕ in the sequencing rule (24), which extends T of C as needed. More concretely, if r was assigned type t option ref, then the first declaration would produce an environment E1 with the corresponding t ∈ tynames E1. Then, according to the sequencing rule (24), the second declaration would have to be elaborated under the context C' = C ⊕ E1, which is defined as C + (tynames E1, E1) in Section 4.3. Hence, t ∈ T of C', as required for well-formedness, and consequently, rule (17) would not be able to pick the same t as the denotation of t.

What are the differences between concepts and template constraints?

I want to know what the semantic differences are between the C++ full concepts proposal and template constraints (for instance, constraints as they appear in Dlang or the new concepts-lite proposal for C++1y).
What are full-fledged concepts capable of doing that template constraints cannot do?
The following information is out of date. It needs to be updated according to the latest Concepts Lite draft.
Section 3 of the constraints proposal covers this in reasonable depth.
The concepts proposal has been put on the back burner for a short while in the hope that constraints (i.e. concepts-lite) can be fleshed out and implemented on a shorter time scale, currently aiming for at least something in C++14. The constraints proposal is designed to act as a smooth transition to a later definition of concepts. Constraints are part of the concepts proposal and are a necessary building block in its definition.
In Design of Concept Libraries for C++, Sutton and Stroustrup consider the following relationship:
Concepts = Constraints + Axioms
To quickly summarise their meanings:
Constraint - A predicate over statically evaluable properties of a type. Purely syntactic requirements. Not a domain abstraction.
Axioms - Semantic requirements of types that are assumed to be true. Not statically checked.
Concepts - General, abstract requirements of algorithms on their arguments. Defined in terms of constraints and axioms.
So if you add axioms (semantic properties) to constraints (syntactic properties), you get concepts.
Concepts-Lite
The concepts-lite proposal brings us only the first part, constraints, but this is an important and necessary step towards fully-fledged concepts.
Constraints
Constraints are all about syntax. They give us a way of statically discerning properties of a type at compile-time, so that we can restrict the types used as template arguments based on their syntactic properties. In the current proposal for constraints, they are expressed with a subset of propositional calculus using logical connectives like && and ||.
Let's take a look at a constraint in action:
template <typename Cont>
requires Sortable<Cont>()
void sort(Cont& container);
Here we are defining a function template called sort. The new addition is the requires clause. The requires clause gives some constraints over the template arguments for this function. In particular, this constraint says that the type Cont must be a Sortable type. A neat thing is that it can be written in a more concise form as:
template <Sortable Cont>
void sort(Cont& container);
Now if you attempt to pass anything that is not considered Sortable to this function, you'll get a nice error that immediately tells you that the type deduced for Cont is not a Sortable type. If you had done this in C++11, you'd have had some horrible error thrown from inside the sort function that makes no sense to anybody.
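For comparison, here is roughly what the same restriction looks like in C++11 (a sketch of mine; Is_sortable is an ad-hoc detection trait invented for the example, checking only for begin()/end()):
#include <iterator>
#include <type_traits>
#include <utility>

// Ad-hoc stand-in for the Sortable constraint, using the C++11 detection idiom.
template <typename Cont, typename = void>
struct Is_sortable : std::false_type {};

template <typename Cont>
struct Is_sortable<Cont, decltype(std::begin(std::declval<Cont&>()),
                                  std::end(std::declval<Cont&>()),
                                  void())> : std::true_type {};

// The "constraint" is buried in an enable_if; a bad argument produces a wall
// of substitution errors rather than a clean "Cont is not Sortable".
template <typename Cont,
          typename std::enable_if<Is_sortable<Cont>::value, int>::type = 0>
void sort(Cont& container);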
Constraint predicates are very similar to type traits. They take some template argument type and give you some information about it. Constraints attempt to answer the following kinds of questions about a type:
Does this type have such-and-such operator overloaded?
Can these types be used as operands to this operator?
Does this type have such-and-such trait?
Is this constant expression equal to that? (for non-type template arguments)
Does this type have a function called yada-yada that returns that type?
Does this type meet all the syntactic requirements to be used as that?
However, constraints are not meant to replace type traits. Instead, they will work hand in hand. Some type traits can now be defined in terms of concepts and some concepts in terms of type traits.
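As a quick sketch of that interplay (written in the concepts-lite syntax of the time, which current compilers no longer accept), a constraint can simply wrap a standard type trait:
#include <type_traits>

// A constraint defined directly in terms of a type trait.
template <typename T>
concept bool Arithmetic() { return std::is_arithmetic<T>::value; }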
Examples
So the important thing about constraints is that they do not care about semantics one iota. Some good examples of constraints are:
Equality_comparable<T>: Checks whether the type has == with both operands of that same type.
Equality_comparable<T,U>: Checks whether there is a == with left and right operands of the given types
Arithmetic<T>: Checks whether the type is an arithmetic type.
Floating_point<T>: Checks whether the type is a floating point type.
Input_iterator<T>: Checks whether the type supports the syntactic operations that an input iterator must support.
Same<T,U>: Checks whether the given types are the same.
You can try all this out with a special concepts-lite build of GCC.
Beyond Concepts-Lite
Now we get into everything beyond the concepts-lite proposal. This is even more futuristic than the future itself. Everything from here on out is likely to change quite a bit.
Axioms
Axioms are all about semantics. They specify relationships, invariants, complexity guarantees, and other such things. Let's look at an example.
While the Equality_comparable<T,U> constraint will tell you that there is an operator== that takes types T and U, it doesn't tell you what that operation means. For that, we will have the axiom Equivalence_relation. This axiom says that when objects of these two types are compared with operator== giving true, these objects are equivalent. This might seem redundant, but it's certainly not. You could easily define an operator== that instead behaved like an operator<. You'd be evil to do that, but you could.
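For instance (an invented example of mine), nothing syntactic stops a type like this, which would satisfy Equality_comparable while violating the Equivalence_relation axiom:
// operator== here actually performs a less-than comparison; the syntax is
// fine, so the constraint is satisfied, but the axiom is broken.
struct Evil {
    int v;
    friend bool operator==(Evil a, Evil b) { return a.v < b.v; }
    friend bool operator!=(Evil a, Evil b) { return !(a == b); }
};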
Another example is a Greater axiom. It's all well and good to say two objects of type T can be compared with > and < operators, but what do they mean? The Greater axiom says that whenever x is greater than y, then y is less than x. The proposed specification of such an axiom looks like:
template<typename T>
axiom Greater(T x, T y) {
    (x > y) == (y < x);
}
So axioms answer the following types of questions:
Do these two operators have this relationship with each other?
Does this operator for such-and-such type mean this?
Does this operation on that type have this complexity?
Does this result of that operator imply that this is true?
That is, they are concerned entirely with the semantics of types and operations on those types. These things cannot be statically checked. If this needs to be checked, a type must in some way proclaim that it adheres to these semantics.
Examples
Here are some common examples of axioms:
Equivalence_relation: If two objects compare ==, they are equivalent.
Greater: Whenever x > y, then y < x.
Less_equal: Whenever x <= y, then !(y < x).
Copy_equality: For x and y of type T: if x == y, then a new object of the same type created by copy construction satisfies T{x} == y, and still x == y (that is, copying is non-destructive).
Concepts
Now concepts are very easy to define; they are simply the combination of constraints and axioms. They provide an abstract requirement over the syntax and semantics of a type.
As an example, consider the following Ordered concept:
concept Ordered<Regular T> {
    requires constraint Less<T>;
    requires axiom Strict_total_order<less<T>, T>;
    requires axiom Greater<T>;
    requires axiom Less_equal<T>;
    requires axiom Greater_equal<T>;
}
First note that for the template type T to be Ordered, it must also meet the requirements of the Regular concept. The Regular concept is a very basic requirement that the type is well-behaved - it can be constructed, destroyed, copied and compared.
In addition to those requirements, the Ordered concept requires that T meet one constraint and four axioms:
Constraint: An Ordered type must have an operator<. This is statically checked so it must exist.
Axioms: For x and y of type T:
x < y gives a strict total ordering.
When x is greater than y, y is less than x, and vice versa.
When x is less than or equal to y, y is not less than x, and vice versa.
When x is greater than or equal to y, y is not greater than x, and vice versa.
Combining constraints and axioms like this gives you concepts. They define the syntactic and semantic requirements for abstract types for use with algorithms. Algorithms currently have to assume that the types used will support certain operations and express certain semantics. With concepts, we'll be able to ensure that requirements are met.
In the latest concepts design, the compiler will only check that the syntactic requirements of a concept are fulfilled by the template argument. The axioms are left unchecked. Since axioms denote semantics that are not statically evaluable (or often impossible to check entirely), the author of a type would have to explicitly state that their type meets all the requirements of a concept. This was known as concept mapping in previous designs but has since been removed.
Examples
Here are some examples of concepts:
Regular types are constructible, destructible, copyable, and can be compared.
Ordered types support operator<, and have a strict total ordering and other ordering semantics.
Copyable types are copy constructible, destructible, and if x is equal to y and x is copied, the copy will also compare equal to y.
Iterator types must have associated types value_type, reference, difference_type, and iterator_category which themselves must meet certain concepts. They must also support operator++ and be dereferenceable.
The Road to Concepts
Constraints are the first step towards a full concepts feature of C++. They are a very important step, because they provide the statically enforceable requirements of types so that we can write much cleaner template functions and classes. Now we can avoid some of the difficulties and ugliness of std::enable_if and its metaprogramming friends.
However, there are a number of things that the constraints proposal does not do:
It does not provide a concept definition language.
Constraints are not concept maps. The user does not need to specifically annotate their types as meeting certain constraints. They are statically checked using simple compile-time language features.
The implementations of templates are not constrained by the constraints on their template arguments. That is, if your function template does anything with an object of a constrained type that it shouldn't do, the compiler has no way to diagnose that (see the sketch below). A fully featured concepts proposal would be able to do this.
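A sketch of that limitation (written with C++20 syntax purely for readability, since the concepts-lite syntax was never shipped in this form; the names are invented):
#include <concepts>

template <typename T>
concept Less_comparable = requires(T a, T b) { { a < b } -> std::convertible_to<bool>; };

template <Less_comparable T>
T smaller_doubled(T a, T b) {
    T s = a < b ? a : b;
    return s + s;  // uses operator+, which Less_comparable never promised;
}                  // nothing diagnoses this until an instantiation lacks +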
The constraints proposal has been designed specifically so that a full concepts proposal can be introduced on top of it. With any luck, that transition should be a fairly smooth ride. The concepts group are looking to introduce constraints for C++14 (or in a technical report soon after), while full concepts might start to emerge sometime around C++17.
See also "what's 'lite' about concepts lite" in section 2.3 of the recent (March 12) Concepts telecon minutes and record of discussion, which were posted the same day here: http://isocpp.org/blog/2013/03/new-paper-n3576-sg8-concepts-teleconference-minutes-2013-03-12-herb-sutter .
My 2 cents:
The concepts-lite proposal is not meant to do "type checking" of template implementations. I.e., concepts-lite will ensure (notionally) interface compatibility at the template instantiation site. Quoting from the paper: "concepts lite is an extension of C++ that allows the use of predicates to constrain template arguments". And that's it. It does not say that the template body will be checked (in isolation) against the predicates. That probably means there is no first-class notion of archetypes when you are talking about concepts-lite. Archetypes, if I remember correctly, are types in the concepts-heavy proposal that offer no less and no more than what is needed to satisfy the implementation of the template.
Concepts-lite uses glorified constexpr functions with a bit of syntactic trickery supported by the compiler. There are no changes to the lookup rules.
Programmers are not required to write concept maps.
Finally, quoting again "The constraints proposal does not directly address the specification or use of semantics; it is targeted only at checking syntax." That would mean axioms are not within the scope (so far).