Using concepts in an unevaluated context gives inconsistent results

Consider the following useless concept C:
template<class T>
concept C = static_cast<T>(true);
If we pass an arbitrary type to C in an unevaluated context, all three major compilers (GCC, Clang, and MSVC) accept it:
struct S {};
decltype(C<S>) x = 0;
But if we pass int to C in the unevaluated context:
decltype(C<int>) y = 0;
GCC still accepts it, while Clang and MSVC reject it with the same error message:
<source>:2:13: error: atomic constraint must be of type 'bool' (found 'int')
Is the above code still well-formed? Which compiler should I trust?

Concept names do not work by evaluating an expression as we would normally think of it. A concept name resolves to a boolean that tells whether the constraint-expression is satisfied:
A concept-id is a prvalue of type bool, and does not name a template specialization. A concept-id evaluates to true if the concept's normalized constraint-expression is satisfied ([temp.constr.constr]) by the specified template arguments and false otherwise.
Constraint expressions are broken down into atomic pieces. Fortunately, your constraint expression has only one atomic piece: static_cast<T>(true). The way we resolve whether an atomic constraint is satisfied is simple. There are several parts. Part one is:
To determine if an atomic constraint is satisfied, the parameter mapping and template arguments are first substituted into its expression. If substitution results in an invalid type or expression, the constraint is not satisfied.
This is why the compilers allow the first one. static_cast<S>(true) is not a valid expression, as there is no conversion from a bool to an S. Therefore, the atomic constraint is not satisfied, so C<S> is false.
However, static_cast<int>(true) is a valid expression. So we move on to part 2:
Otherwise, the lvalue-to-rvalue conversion is performed if necessary, and E shall be a constant expression of type bool.
And that's where we run into the word "shall". In standard-ese, "shall" means "if the user provides code where this is not the case, there is a compile error". An int is not a "constant expression of type bool". Therefore, the code does not conform to this requirement. And a compile error results.
I imagine that GCC just treats the errors as substitution failures (that or it automatically coerces it into a bool), but the standard requires MSVC/Clang's behavior of erroring out.
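For what it's worth, if you want a variant of this concept that is accepted in both cases, one option is to force the atomic constraint itself to have type bool. A minimal sketch (C2 is a hypothetical name; S is the struct from the question):
// Hypothetical rewrite: the outer static_cast<bool> guarantees the
// atomic constraint is a constant expression of type bool.
template<class T>
concept C2 = static_cast<bool>(static_cast<T>(true));

decltype(C2<S>) x2 = 0;   // OK everywhere: static_cast<S>(true) is invalid, so C2<S> is false
decltype(C2<int>) y2 = 0; // OK everywhere: the constraint now has type bool, so C2<int> is true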

Related

Why does double negation change the value of C++ concept?

A friend of mine showed me a C++20 program with concepts, which puzzled me:
struct A { static constexpr bool a = true; };
template <typename T>
concept C = T::a || T::b;
template <typename T>
concept D = !!(T::a || T::b);
static_assert( C<A> );
static_assert( !D<A> );
It is accepted by all compilers: https://gcc.godbolt.org/z/e67qKoqce
Here the concept D is the same as the concept C; the only difference is the double-negation operator !!, which at first sight should not change the concept's value. Yet for the struct A the concept C is true and the concept D is false.
Could you please explain why it is so?
Here the concept D is the same as the concept C
They are not. Constraints (and concept-ids) are normalized when checked for satisfaction and broken down to atomic constraints.
[temp.names]
8 A concept-id is a simple-template-id where the template-name is a concept-name. A concept-id is a prvalue of type bool, and does not name a template specialization. A concept-id evaluates to true if the concept's normalized constraint-expression ([temp.constr.decl]) is satisfied ([temp.constr.constr]) by the specified template arguments and false otherwise.
And the || is regarded differently in C and D:
[temp.constr.normal]
2 The normal form of an expression E is a constraint that is defined as follows:
The normal form of an expression ( E ) is the normal form of E.
The normal form of an expression E1 || E2 is the disjunction of the normal forms of E1 and E2.
The normal form of an expression E1 && E2 is the conjunction of the normal forms of E1 and E2.
The normal form of a concept-id C<A1, A2, ..., An> is the normal form of the constraint-expression of C, after substituting A1, A2, ..., An for C's respective template parameters in the parameter mappings in each atomic constraint. If any such substitution results in an invalid type or expression, the program is ill-formed; no diagnostic is required.
The normal form of any other expression E is the atomic constraint whose expression is E and whose parameter mapping is the identity mapping.
For C the atomic constraints are T::a and T::b.
For D there is only one atomic constraint that is !!(T::a || T::b).
Substitution failure in an atomic constraint makes it not satisfied, and it evaluates to false. C<A> is a disjunction of one constraint that is satisfied and one that is not, so it is true. D<A> is false since its one and only atomic constraint has a substitution failure.
The important thing to realize is that per [temp.constr.constr], atomic constraints are composed only via conjunctions (through top-level &&) and disjunctions (through top-level ||). Negation must be thought of as part of a constraint, not the negation of a constraint. There's even a non-normative note pointing this out explicitly.
With that in mind, we can examine the two cases. C is a disjunction of two atomic constraints: T::a and T::b. Per /3, disjunctions employ short-circuiting behaviour when checking for satisfaction. This means that T::a is checked first. Since it succeeds, the entire constraint C is satisfied without ever checking the second.
D, on the other hand, is one atomic constraint: !!(T::a || T::b). The || does not create a disjunction in any way, it's simply part of the expression. We look to [temp.constr.atomic]/3 to see that template parameters are substituted in. This means that both T::a and T::b have substitution performed. This paragraph also states that if substitution fails, the constraint is not satisfied. As the earlier note suggests, the negations out front are not even considered yet. In fact, having only one negation yields the same result.
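To see that negation really is just part of the expression, consider a sketch with a single negation (the concept E is a hypothetical addition, reusing the struct A from the question):
template <typename T>
concept E = !(T::a || T::b);
// E's normal form is one atomic constraint: !(T::a || T::b).
// Substituting A for T fails on T::b, so the constraint is not
// satisfied and E<A> is false, even though A::a is true.
static_assert( !E<A> );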
Now the obvious question is why concepts were designed this way. Unfortunately, I don't remember coming across any reasoning for it in the designer's conference talks and other communications. The best I've been able to find was this bit from the original proposal:
While negation has turned out to be fairly common in our constraints (see Section 5.3), we have not found it necessary to assign deeper semantics to the operator.
In my opinion, this is probably really underselling the thought that was put into the decision. I'd love to see the designer elaborate on this, as I'm confident he has more to say than this small quotation.

Satisfied and modeled concept?

Introduction
The standard specifies that each concept is related to two predicates:
predicate "is statisfied by": a concept is satisfied by a sequence of template argument when it evaluates to true. This is almost a syntactic check.
predicate "is modeled by": A sequence Args of template arguments is said to model a concept C if Args satisfies C ([temp.constr.decl]) and meets all semantic requirements (if any) given in the specification of C. [res.on.requirements]
For some concepts, the requirements that make a satisfied concept modeled are clearly expressed. Example [concept.assignable]:
LHS and RHS model assignable_from<LHS, RHS> only if
addressof(lhs = rhs) == addressof(lcopy)
[...]
But I wonder if the syntactic requirements also implicitly imply semantic requirements.
Question
Do the syntactic predicates implicitly imply requirements for the concept to be modeled?
I see two kinds of implicit requirements:
The concept is satisfied only because the syntactically checked expressions are unevaluated; if those expressions were actually evaluated, the program would be ill-formed.
The concept is satisfied because the syntactically checked expressions are not evaluated, but evaluating those expressions would result in the program having undefined behavior.
Examples
For example, let's consider the default_initializable concept, defined here: [concept.default.init].
default_initializable is satisfied by A<int> but the program is ill-formed if a variable of type A<int> is default-initialized (demo):
#include <concepts>

template <class T>
struct A {
    A() {
        f(T{});  // f is never declared anywhere
    }
};

static_assert(std::default_initializable<A<int>>); // A<int> satisfies default_initializable
A<int> a{}; // compile-time error: 'f' was not declared in this scope
default_initializable is satisfied by A but default-initialization of A results in undefined behavior (when the default-initialization is not preceded by a zero-initialization) (demo):
#include <concepts>

struct A {
    int c;
    A() {
        c++;  // reads the indeterminate value of c
    }
};

static_assert(std::default_initializable<A>); // A satisfies default_initializable
auto p = new A; // undefined behavior: indeterminate value as operand of operator ++
a concept is satisfied by a sequence of template arguments when it evaluates to true. This is almost a syntactic check.
No, it is not "almost" anything: it is a syntactic check. The constraints specified by a requires clause (for example) verify that a specific syntax is legal syntax for that type. This is all that "satisfying a concept" means.
Do the syntactic predicates implicitly imply requirements for the concept to be modeled?
... no. If satisfying a concept also implied modeling the concept, then the standard wouldn't need different terms for these.
The point of having such a distinction is the recognition that the concept language feature can't specify every requirement that concepts as a concept should encapsulate. So satisfying-a-concept is just the language part, while modeling-a-concept includes things that the language can't do.
But that question is kind of separate from what your two examples show. Your examples represent the difference between "valid syntax" and "can be compiled/executed". Satisfying a concept only cares about the former. And modeling a concept only cares about the latter to the extent that said semantic behavior is explicitly specified.
There is nothing in the standard about implicit semantic requirements. There is no statement to the effect of "all expressions/statements in a concept must be able to be compiled and/or executed in order for it to be modeled", nor is one intended.
However much we try to pretend it's more than this, concepts as it exists in C++20 is nothing more than a more convenient mechanism for performing SFINAE. SFINAE can't test compilable/executable validity of the contents of some expression, so neither can concepts. And neither does concepts attempt to pretend that it can.
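To make the distinction concrete, here is a minimal sketch (the type Weird is hypothetical): the syntax of == checks out, so the concept is satisfied, but the semantic requirement that == be an equivalence relation is violated, so the type does not model the concept:
#include <concepts>
#include <cstdlib>

struct Weird {
    // Right signature, wrong semantics: results are not even consistent.
    bool operator==(const Weird&) const { return std::rand() % 2 == 0; }
};

static_assert(std::equality_comparable<Weird>); // satisfied: syntax only
// Not modeled: [concept.equalitycomparable] additionally requires == to
// be an equivalence relation, which no language rule can check.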

Strange operator ?: usage with decltype

I am reading a book, that explains C++ traits and there is an example from C++ type_traits header with a strange ?: usage, here is the quote from the corresponding /usr/include/c++/... file:
template<typename _Tp, typename _Up>
static __success_type<typename decay<decltype
    (true ? std::declval<_Tp>()
          : std::declval<_Up>())>::type> _S_test(int);
Setting aside the purpose of the given declaration, the ?: operator usage puzzles me in this code. If the first operand is true, then std::declval<_Tp>() will always be chosen as the result of the evaluation.
How does that declval operand selection actually work?
Edit: originally read in Nicolai M. Josuttis's "The C++ Standard Library: A Tutorial and Reference, 2nd ed.", p.125. But there it is given in a slightly simplified form as compared to what my GCC header files have.
In the expression true ? std::declval<_Tp>() : std::declval<_Up>() the first alternative is always selected, but the whole expression must still be a valid expression. That means the second and third operands must have a common type: _Tp and _Up must be the same type, or one of them must be implicitly convertible to the other; otherwise the ternary operator would not be able to determine the type of the whole expression.
This technique is called SFINAE (substitution failure is not an error). The idea is that if template instantiation fails, then it is not an error, and this template is just ignored and compiler searches for another one.
The idea here is that ?: requires that the second and third operands have the same type, or that one type is convertible to the other.
Otherwise the instantiation of the function will fail, and some other overload is selected.
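A minimal sketch of the same trick outside the standard headers (has_common_type is an illustrative name, not a real trait): the decltype of the conditional expression is well-formed only when the two types have a common type, and std::void_t turns that into a detectable yes/no:
#include <string>
#include <type_traits>
#include <utility>

template<typename T, typename U, typename = void>
struct has_common_type : std::false_type {};

template<typename T, typename U>
struct has_common_type<T, U, std::void_t<decltype(
    true ? std::declval<T>() : std::declval<U>())>> : std::true_type {};

static_assert(has_common_type<int, double>::value);        // int converts to double
static_assert(!has_common_type<int, std::string>::value);  // no common type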

Is there any guarantee on the order of substitution in a function template after type deduction?

Consider this function template:
template<typename T>
typename soft_error<T>::type foo(T, typename hard_error<T>::type)
{ }
After deducing type T from the type of the first argument in the call to foo(), the compiler will proceed to substitute T and instantiate the function signature.
If substitution for the return type gets executed first, causing a simple substitution failure, the compiler will discard this function template when computing the overload set and search for other viable overloads (SFINAE).
On the other hand, if substitution for the second function parameter occurs first, causing a hard error (e.g. because of a substitution failure in a non-immediate context), the entire compilation would fail.
QUESTION: Is there any guarantee on the order in which substitution will be performed for the function parameters and return types?
NOTE: This example seems to show that on all major compilers (VC11 was tested separately and gave identical results) substitution for the return type occurs before substitution for the parameter types.
[NOTE: This was not originally meant to be a self-answered question, but I happened to find out the solution while crafting the question]
Is there any guarantee on the order in which substitution will be performed for the function parameters and return types?
Not in the current standard.
However, this Defect Report (courtesy of Xeo) shows that this is indeed intended to be the case. Here is the proposed new wording for Paragraph 14.8.2/7 of the C++11 Standard (which has become part of the n3485 draft):
The substitution occurs in all types and expressions that are used in the function type and in template parameter declarations. The expressions include not only constant expressions such as those that appear in array bounds or as nontype template arguments but also general expressions (i.e., non-constant expressions) inside sizeof, decltype, and other contexts that allow non-constant expressions. The substitution proceeds in lexical order and stops when a condition that causes deduction to fail is encountered. [...]
As correctly pointed out by Nicol Bolas in the comments to the question, lexical order means that a trailing return type would be substituted after the parameter types, as shown in this live example.
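A minimal sketch of the scenario (giving soft_error and hard_error from the question some illustrative definitions): because substitution proceeds in lexical order, the soft failure in the return type is hit first and the hard error hidden in the parameter is never reached:
// No ::type member: substituting it is a soft failure (SFINAE).
template<typename T> struct soft_error {};

// Instantiating this triggers a hard error outside the immediate context.
template<typename T> struct hard_error {
    static_assert(sizeof(T) == 0, "boom");
    using type = T;
};

template<typename T>
typename soft_error<T>::type foo(T, typename hard_error<T>::type);

void foo(...) {}  // fallback overload

int main() {
    foo(42);  // OK: return-type substitution fails first, the template is discarded
}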

Where in the C++11 standard does it specify when a constexpr function can be evaluated during translation?

Just because a function (or constructor)...
is declared constexpr and
the function definition meets the constexpr requirements
...doesn't mean that the compiler will evaluate the constexpr function during translation. I've been looking through the C++11 FDIS (N3242, available at http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2011/) to try and determine two things:
When is the compiler obligated to evaluate a constexpr function during translation?
When is the compiler allowed to evaluate a constexpr function during translation?
Section 5.19 Paragraph 1 says that constant expressions can be evaluated during translation. As far as I can comprehend, the remainder of Section 5.19 sets forth the rules for what is valid in the definition of a constexpr function.
I understand that I can force constexpr evaluation during translation by declaring the result of the constexpr function as constexpr. Like this:
// Declaration
constexpr double eulers_num() { return 2.718281828459045235360287471; }
// Forced evaluation during translation
constexpr double twoEulers = eulers_num() * 2.0;
static_assert(twoEulers > 5.0, "Yipes!");
So far I've been unable to find the paragraphs in the FDIS that:
Force twoEulers to be evaluated during translation or
Specify other situations when the compiler may or must evaluate a constexpr function during translation.
What I'm particularly interested in discovering is whether constexpr evaluation during translation is triggered by:
When all arguments passed to the constexpr function are literals, or
The implied object argument during overload resolution (Section 13.3.1 Paragraph 3) is either constexpr or requires a literal (such as for array dimensions), or
Something else entirely.
Please, if possible, in your responses cite sections of the FDIS that I can look up or key phrases I can search in the FDIS. The English in the standard is somewhat obtuse, so I may have been reading the relevant paragraphs and have entirely missed their meaning or intent.
It is "allowed" to evaluate the constexpr call at compile time whenever it is actually possible to do so. Remember that the specification operates under the "as if" rule. So if you can't tell the difference, the compiler can do whatever it wants.
The compiler is required to evaluate constexpr calls at compile time when it actually needs the answer at compile time. For example:
#include <array>

constexpr int foo() { return 5; }
std::array<float, foo()> arr;
The compiler needs to know the array size at compile time. Therefore, it must evaluate the constant expression at compile time. If the constexpr function cannot be executed at compile time, you get a compile-time error.
Nicol Bolas is 100% correct, but there is one other interesting aspect: whether the expression is evaluated at translation-time and whether it is evaluated at run-time are completely independent questions. Since the constant expression cannot have side-effects, it can be evaluated an arbitrary number of times, and nothing stops it from being evaluated at both translation-time and run-time.
Suppose the constant expression were a large array (not a std::array, just an array), which is entirely legal, and the program does not indicate that it has static storage. Suppose also that only element 7 of the array is used in a context in which compile-time computation is necessary. It is quite reasonable for the compiler to compute the entire array, use element 7, discard it, and insert code to compute it at run-time in the scope in which it is used, rather than bloating the binary with the whole computed array. I believe this is not a theoretical issue; I've observed it with various compilers in various contexts. constexpr does not imply static.
Of course, if the compiler can determine that the array is not used at runtime, it might not even insert code to compute it, but that's yet another issue.
If you do use such an object at run-time, and you want to indicate to the compiler that it would be worth keeping it around for the duration of the program, you should declare it as static.
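A minimal sketch of that advice (square and lookup are illustrative names): declaring the object static constexpr tells the compiler the translation-time-computed data is worth keeping for the duration of the program:
constexpr int square(int x) { return x * x; }

int lookup(int i) {
    // static: keep the translation-time-computed table around instead of
    // (potentially) recomputing it on each call.
    static constexpr int table[4] = { square(0), square(1), square(2), square(3) };
    return table[i];
}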
By combing the FDIS I have found three places that specify where a constexpr expression must be evaluated during translation.
Section 3.6.2 Initialization of non-local variables, paragraph 2 says if an object with static or thread local storage duration is initialized with a constexpr constructor then the constructor is evaluated during translation:
Constant initialization is performed:
if an object with static or thread storage duration is initialized by a constructor call, if the constructor is a constexpr constructor, if all constructor arguments are constant expressions (including conversions), and if, after function invocation substitution (7.1.5), every constructor call and full-expression in the mem-initializers is a constant expression;
Section 7.1.5 The constexpr specifier, paragraph 9 says if an object declaration includes the constexpr specifier, that object is evaluated during translation (i.e., is a literal):
A constexpr specifier used in an object declaration declares the object as const. Such an object shall have literal type and shall be initialized. If it is initialized by a constructor call, that call shall be a constant expression (5.19). Otherwise, every full-expression that appears in its initializer shall be a constant expression. Each implicit conversion used in converting the initializer expressions and each constructor call used for the initialization shall be one of those allowed in a constant expression (5.19).
I’ve heard people argue that this paragraph leaves room for an implementation to postpone the initialization until runtime unless the effect can be detected during translation due to, say, a static_assert. That is probably not an accurate view because whether a value is initialized during translation is, under some circumstances, observable. This view is reinforced by Section 5.19 Constant expressions paragraph 4:
[ Note: Although in some contexts constant expressions must be evaluated during program translation, others may be evaluated during program execution. Since this International Standard imposes no restrictions on the accuracy of floating-point operations, it is unspecified whether the evaluation of a floating-point expression during translation yields the same result as the evaluation of the same expression (or the same operations on the same values) during program execution... — end note ]
Section 9.4.2 Static data members, paragraph 3 says if a const static data member of literal type is initialized by a constexpr function or constructor, then that function or constructor must be evaluated during translation:
If a static data member is of const literal type, its declaration in the class definition can specify a brace-or-equal-initializer in which every initializer-clause that is an assignment-expression is a constant expression. A static data member of literal type can be declared in the class definition with the constexpr specifier; if so, its declaration shall specify a brace-or-equal-initializer in which every initializer-clause that is an assignment-expression is a constant expression. [ Note: In both these cases, the member may appear in constant expressions. — end note ]
Interestingly, I did not find anything in the FDIS that required a constexpr expression to be evaluated if its result is used as an array dimension. I'm quite sure the standard committee expects that to be the case. But I also could have missed that in my search.
Outside of those circumstances the C++11 standard allows computations in constexpr functions and constructors to be performed during translation. But it does not require it. The computations could occur at runtime. Which computations the compiler performs during translation are, to a certain extent, a quality of implementation question.
In all three of the situations I located, the trigger for translation-time evaluation is based on the requirements of the target using the result of the constexpr call. Whether or not the arguments to the constexpr function are literal is never considered (although it is a pre-requisite for valid evaluation).
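To illustrate with eulers_num from the question (the array bound is just an illustrative consumer): the very same call is forced to translation time, or not, depending only on the target that uses the result:
constexpr double eulers_num() { return 2.718281828459045235360287471; }

// The array bound needs an integral constant expression, so the call
// must be evaluated during translation.
char buf[eulers_num() > 2.0 ? 4 : 8];

// A plain (non-constexpr) initializer imposes no such requirement; the
// implementation may evaluate this at run time instead.
double d = eulers_num() * 2.0;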
So, to get to the real point of this, it appears that constexpr evaluation during translation is triggered by:
The implied object argument during overload resolution (Section 13.3.1 Paragraph 3) is either constexpr or requires a literal.
I hope that's helpful to someone besides me. Thanks to everyone who contributed.
I don't think it's forced anywhere. I had a look too, it's tricky because there's not one paper on constexpr in that list; they all seem to add/remove from the previous collection of papers.
I think the general idea is that when the inputs to a constexpr function are themselves constexpr, it will all be done at compile time; and by extension, non-function constexpr declarations, which are literals anyway, will be evaluated at compile time if you're using a half-intelligent compiler.
If a constexpr function or constructor is called with arguments which aren't constant expressions, the call behaves as if the function were not constexpr, and the resulting value is not a constant expression.
from Wikipedia,
which seems to get the info from this PDF:
constexpr functions: A constexpr function is one which is "sufficiently simple" so that it delivers a constant expression when called with arguments that are constant values (see §2.1).
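A minimal sketch of that rule (twice and get_runtime_value are illustrative names): the same constexpr function yields a constant expression or an ordinary run-time call depending purely on its arguments:
constexpr int twice(int x) { return x * 2; }

constexpr int a = twice(21);  // constant expression: evaluated during translation

int get_runtime_value();      // hypothetical non-constexpr source of a value
int b = twice(get_runtime_value());  // behaves as a non-constexpr call at run time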