Rationale for [dcl.constexpr]p5 in the C++ standard

What is the rationale for [dcl.constexpr]p5 (http://eel.is/c++draft/dcl.constexpr#5)?
For a non-template, non-defaulted constexpr function or a
non-template, non-defaulted, non-inheriting constexpr constructor, if
no argument values exist such that an invocation of the function or
constructor could be an evaluated subexpression of a core constant
expression ([expr.const]), or, for a constructor, a constant
initializer for some object ([basic.start.init]), the program is
ill-formed; no diagnostic required.
If a program violates this rule, declaring the offending function constexpr is useless. So what? Isn't it better to accept useless uses of the decl-specifier constexpr instead of triggering undefined behaviour (via "no diagnostic required")? In addition to the problem of undefined behaviour, we also have the added complexity of having the rule [dcl.constexpr]p5 in the standard.
An implementation can still provide useful diagnostic messages in some cases that it is able to detect (warnings by convention). Just like in the following case:
int main() { 0; }
The expression in main there is well-formed but useless. Some compilers issue a diagnostic message anyway (and they are allowed to) in the form of a warning.
I understand that [dcl.constexpr]p5 cannot require diagnostics, so I'm not asking about that. I'm just asking why this rule is even in the standard.

The reason it's ill-formed is because making it ill-formed allows implementations to reject constexpr function definitions that cannot possibly form constant expressions. Rejecting them early means getting more useful diagnostics.
The reason no diagnostic is required is because it may be unrealistic for an implementation to determine that for each and every possible combination of arguments, the result is not a constant expression.
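A minimal sketch of the kind of definition the rule targets, adapted from the example in [dcl.constexpr] (the function names are mine):
constexpr int cond(bool b) { return b ? throw 0 : 0; }  // OK: cond(false) is a constant expression
constexpr int bad() { return cond(true); }               // ill-formed, NDR: no invocation of bad()
                                                          // can ever be a constant expression
A compiler that notices this can reject the definition of bad() outright; one that doesn't is not required to diagnose it.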
The fact that ill-formed, no diagnostic required, effectively means the same thing as making the behaviour undefined seems unfortunate to me, but it was presumably picked for lack of a better option. I'd be highly surprised if the intent were actually to allow arbitrary run-time behaviour, but there is no concept of "may be diagnosed as an error, but if not, must behave as specified" for any language feature in C++.

Related

Is nodiscard necessary on operators?

Is the [[nodiscard]] attribute necessary on operators? Or is it safe to assume the compiler will emit a warning like it does for most suspiciously discarded things?
E.g. an overloaded operator+, should one apply the attribute? What about special operators like function-cast operators or new operators? When is it pedantic?
Let me cite the following paper by N. Josuttis: "[[nodiscard]] in the library" (with some omissions, see the full paper):
C++17 introduced the [[nodiscard]] attribute. The question is, where to apply it now in the standard library. It should be added where:
not using the return value always is a “huge mistake” (e.g. always resulting in resource leak),
not using the return value is a source of trouble and easily can happen (not obvious that something is wrong).
It should not be added when:
not using the return value is a possible/common way of programming at least for some input,
not using the return value makes no sense but doesn’t hurt and is usually not an error.
So, [[nodiscard]] should not signal bad code if this
can be useful not to use the return value,
is common not to use the return value,
doesn’t hurt and probably no state change was meant that doesn’t happen.
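For illustration, a case that clearly meets those criteria (my own example, not taken from the paper): discarding the result of a container's empty() is almost always a mistake, because the caller probably meant clear():
#include <vector>

void drop_all(std::vector<int>& v)
{
    v.empty();   // result discarded: almost certainly a bug (clear() was probably intended)
    v.clear();   // what was actually meant; its void return gives nothing to discard
}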
It is never necessary to add the [[nodiscard]] attribute. From cppreference:
If a function declared nodiscard or a function returning an enumeration or class declared nodiscard by value is called from a discarded-value expression other than a cast to void, the compiler is encouraged to issue a warning.
Note the last part: "... the compiler is encouraged to issue a warning." There is no guarantee, as far as the standard is concerned, that there actually will be a warning. It's a quality-of-implementation issue. If your compiler does emit a warning (read the docs) and if you are treating such warnings as errors, then [[nodiscard]] can be of great use.
It is pedantic to use the attribute on operators where discarding the return value is only potentially an error. I would only use it when calling the operator and discarding the result is always a logic error. Many operators return a value merely to enable chaining, and [[nodiscard]] would be more of an annoyance than a help on such operators. There are cases where the decision is not so obvious, and it is then a matter of opinion and style what you choose.
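A hedged sketch of that distinction (the type and operators are made up for illustration):
struct Vec {
    double x, y;

    // Discarding a + b is always a logic error: the operands are left untouched,
    // so [[nodiscard]] pulls its weight here.
    [[nodiscard]] friend Vec operator+(Vec a, Vec b) { return {a.x + b.x, a.y + b.y}; }

    // operator+= returns *this only to enable chaining; callers routinely ignore it,
    // so marking it [[nodiscard]] would mostly generate noise.
    Vec& operator+=(Vec b) { x += b.x; y += b.y; return *this; }
};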
Is nodiscard necessary on operators?
No. nodiscard and other attributes are optional.
Or is it safe to assume the compiler will emit a warning like it does for most suspiciously discarded things?
There is no guarantee about any warning in the language except when the program is ill-formed.
I would also not assume a warning without nodiscard, because there are many cases where the result of an operation is intentionally discarded. A common example:
a = b; // result of assignment was discarded
In fact, if all discarded results resulted in a warning, then there would not be any purpose for the nodiscard attribute.

Why are constexpr functions possibly ill-formed, NDR (10.1.5)?

Paragraph 10.1.5 says that a program is ill-formed, no diagnostic required, if a function is declared constexpr but no set of arguments exists that would make it evaluable at compile time.
What's the rationale behind this?
Since it's not feasible for the compiler to check that precondition, how can it benefit from this rule?
However, the only alternative I can see is to declare such programs well-formed (thus barely enforcing constexpr at all, making it rather a kind of hint to the compiler and the reader). But wouldn't this still be preferable to having more UB in C++, with all its undesirable consequences? Maybe constexpr is indeed going in the wrong direction...

Is missing a required include undefined behavior?

As I wrote an answer to How is it possible to use pow without including cmath library, I fear I have proven that missing an include of a needed header is actually undefined behavior, but since I have not found any consensus on that fact, I would like to pose the formal question:
Is missing a required header i.e.
#include <iostream>
int main()
{
    std::cout << std::pow(10, 2);
}
1. Ill-formed ([defns.ill.formed]) code?
2. Invoking undefined behavior ([defns.undefined])?
3. If it is not 1 and 2, is it unspecified behavior ([defns.unspecified]) or implementation-defined behavior ([defns.impl.defined])?
4. If not 1, i.e. if this code is well-formed, wouldn't that contradict [using.headers] and [intro.compliance] "accept and correctly execute a well-formed program"?
As in my answer, I tend to affirm both questions, but [using.headers] is very confusing because of Difference between Undefined Behavior and Ill-formed, no diagnostic message required. As [defns.well.formed] implies that a program constructed according to the ODR is well-formed, and there is no specification of whether, for example, <iostream> must not declare pow, one could argue this is still unspecified behavior ([defns.unspecified]). I don't want to rely only on my standard-interpretation skills for a definitive answer to such an important question. Note that the accepted, i.e. the only other, answer does not say whether the code is UB, nor does the question ask it.
It is unspecified whether this program is well-formed or ill-formed (with a required diagnostic, because name lookup doesn’t find pow). The possibilities arise from the statement that one C++ header may include another, which grants permission to the implementation to give this program either of just two possible interpretations.
Several similar rules (e.g., that a template must have at least one valid potential specialization) are described as rendering the program ill-formed, no diagnostic required, but in this situation that freedom is not extended to the implementation (which is arguably preferable). That said, an implementation is allowed to process an ill-formed program in an arbitrary fashion so long as it issues at least one diagnostic message, so it’s not completely unreasonable to group this situation with true undefined behavior even though the symptoms differ usefully in practice.
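As an aside, the portable fix is of course to include the header that is specified to declare std::pow, rather than relying on <iostream> happening to drag it in:
#include <cmath>     // specified to declare std::pow
#include <iostream>

int main()
{
    std::cout << std::pow(10, 2);
}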

`noexcept` behavior of `constexpr` functions

The wording of [expr.unary.noexcept] changed in C++17.
Previously (n4140, 5.3.7 noexcept operator [expr.unary.noexcept]), my emphasis:
The result of the noexcept operator is false if in a potentially-evaluated context the expression would contain
(3.1) a potentially-evaluated call to a function, member function,
function pointer, or member function pointer that does not have a
non-throwing exception-specification ([except.spec]), unless the call
is a constant expression ([expr.const]) ...
Now1 (7.6.2.6 noexcept operator [expr.unary.noexcept]):
The result of the noexcept operator is true unless the expression is potentially-throwing ([except.spec]).
And then in 14.5 Exception specifications [except.spec]:
If a declaration of a function does not have a noexcept-specifier, the declaration has a potentially throwing exception specification unless ...
but the unless list of 14.5(3) doesn't list constexpr, leaving it as potentially throwing...
1 a link to C++17 n4659 added by L.F. in a comment.
Test code
#include <iostream>
constexpr int f(int i) { return i; }

int main() {
    std::cout << std::boolalpha << noexcept(f(7)) << std::endl;
    int a = 7;
    std::cout << std::boolalpha << noexcept(f(a)) << std::endl;
}
used to print (with gcc 8.3):
true
false
both when compiled with -std=c++11 and -std=c++2a
However the same code prints now (with gcc 9.2):
false
false
both when compiled with -std=c++11 and -std=c++2a
Clang, by the way, has been very consistent since 3.4.1, and goes with:
false
false
What is the right behavior per each spec?
Was there a real change in the spec? If so, what is the reason for this change?
If there is a change in the spec that affects or contradicts past behavior, would it be a common practice to emphasize that change and its implications? If the change is not emphasized can it imply that it might be an oversight?
If this is a real intended change, was it considered a bug fix that should go back to previous versions of the spec, are compilers right with aligning the new behavior retroactively to C++11?
Side Note: the noexcept deduction on a constexpr function affects this trick.
Summary
What is the right behavior per each spec?
true false before C++17, false false since C++17.
Was there a real change in the spec? If so, what is the reason for this change?
Yes. See the quote from the Clang bug report below.
If there is a change in the spec that affects or contradicts past
behavior, would it be a common practice to emphasize that change and
its implications? If the change is not emphasized can it imply that it
might be an oversight?
Yes; yes (but CWG found a reason to justify the oversight later, so it was kept as-is).
If this is a real intended change, was it considered a bug fix that
should go back to previous versions of the spec, are compilers right
with aligning the new behavior retroactively to C++11?
I'm not sure. See the quote from the Clang bug report below.
Detail
I have searched many places, and so far the closest thing I can find is the comments on relevant bug reports:
GCC Bug 87603 - [C++17] noexcept isn't special cased for constant expressions anymore
CWG 1129 (which ended up in C++11) added a special case to noexcept
for constant expressions, so that:
constexpr void f() {} static_assert(noexcept(f()));
CWG 1351 (which ended up in C++14) changed the wording significantly,
but the special case remained, in a different form.
P0003R5 (which ended up in C++17) changed the wording again, but the
special case was removed (by accident), so now:
constexpr void f() {} static_assert(!noexcept(f()));
According to Richard Smith in LLVM 15481, CWG discussed this but decided to keep the behavior as-is. Currently, clang does the right
thing for C++17 (and fails for C++14 and C++11, on purpose). g++,
however, implemented the special case for C++11 already, but not the
change for C++17. Currently, icc and msvc seem to behave like g++.
Clang Bug 15481 - noexcept should check whether the expression is a constant expression
The constant expression special case was removed -- apparently by accident -- by wg21.link/p0003. I'm investigating whether it's going
to stay gone or not.
Did you do anything to avoid quadratic runtime on deeply-nested
expressions?
[...]
Conclusion from CWG discussion: we're going to keep this as-is. noexcept has no special rule for constant expressions.
It turns out this is actually essential for proper library
functionality: e.g., if noexcept tries evaluating its operand, then
(for example) is_nothrow_swappable is broken by making std::swap
constexpr, because std::swap<T> then often ends up getting
instantiated before T is complete.
As a result of that, I'm also going to consider this change as an
effective DR against C++11 and C++14... but I'm open to reconsidering
if we see many user complaints.
In other words, the special rule was accidentally removed by P0003, but CWG decided to keep the removal.
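A small sketch of the practical consequence (my own example, not from the bug reports): since C++17, the noexcept operator cares only about the exception specification of the constexpr function, not about whether the call is a constant expression:
constexpr int f(int i) { return i; }           // no noexcept-specifier: potentially throwing
constexpr int g(int i) noexcept { return i; }  // non-throwing by declaration

static_assert(!noexcept(f(7)));  // holds since C++17, even though f(7) is a constant expression
static_assert(noexcept(g(7)));   // the noexcept-specifier, not constexpr, is what counts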

Why does [[nodiscard]] only encourage compiler to issue a warning and does not require it?

The [[nodiscard]] attribute was introduced in the C++17 standard, and in the case of a
... potentially-evaluated discarded-value expression,..., implementations are encouraged to issue a warning in such cases.
Source: n4659, C++17 final working draft.
Similar phrasing is used on cppreference, that in case of "violation":
the compiler is encouraged to issue a warning.
Why is the word encouraged used instead of required? Are there situations (except the explicit cast to void) when a compiler is better off not issuing a warning? What is the reason behind softening the standard language in the case of a relatively safe requirement to issue a warning no matter what (again, except, say, an explicit cast to void)?
The C++ standard specifies the behavior of a valid C++ program. In so doing, it also defines what "valid C++ program" means.
Diagnostics are only required for code which is ill-formed, code which is syntactically or semantically incorrect (and even then, there are some ill-formed circumstances that don't require diagnostics). Either the code is well-formed, or it is ill-formed and (usually) a diagnostic is displayed.
So the very idea of a "warning" is just not something the C++ standard recognizes, or is meant to recognize. Notice that even the "implementations are encouraged to issue a warning" statement appears in a non-normative note, rather than in a legitimate specification of behavior.
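A short sketch of the escape hatch the wording carves out (the function name is made up): a cast to void marks the discard as intentional, so implementations that do warn typically stay silent:
[[nodiscard]] int compute() { return 42; }

int main()
{
    compute();                     // implementations are encouraged to warn here
    static_cast<void>(compute());  // explicit discard: excluded from that encouragement
}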