Usage of lambda in constant expression - c++

Take the following code:
template <typename T, typename U>
constexpr bool can_represent(U&& w) noexcept
{
    return [] (auto&& x) {
        try {
            return T(std::forward<U>(x)) == std::forward<U>(x);
        } catch(...) {
            return false;
        }
    } (std::forward<U>(w));
}
I am using this function in a constant expression (template).
gcc compiles it without a problem. clang and MSVC don't, lamenting that the function does not result in a constant expression.
Indeed, gcc did not immediately accept this either; it was getting hung up on the try, which normally wouldn't be allowed in a constexpr function. That's why I had to use an immediately invoked lambda expression. However, now it works, and considering it only works with gcc I'm quite confused.
Which compiler is correct?
Is there a property of the lambda that permits this to work in a constexpr context, or is this some kind of non-standard gcc extension?
[I've used godbolt to compile with clang and MSVC, whereas I have gcc 8.1.0 on my machine]

[gcc] was getting hung up on the try, which normally wouldn't be allowed in a constexpr function.
This is correct for a C++17 program. (C++20 relaxed this, so a try block can now be used in a constexpr function. However, it is only the try that is allowed; it is not allowed for execution to hit something that throws an exception.)
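For illustration, here is a small C++20 sketch (my own, not the asker's code, using a hypothetical parse_digit function): the try block itself is accepted in a constexpr function, but constant evaluation must never actually reach a throw.
constexpr int parse_digit(char c)
{
    try {
        return c - '0';         // fine at compile time
    } catch (...) {
        return -1;              // the handler is never entered during constant evaluation
    }
}
static_assert(parse_digit('7') == 7);   // OK in C++20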
That's why I had to use an immediately invoked lambda expression.
The implication here is that your approach made your code valid. This is incorrect. Using an immediately invoked lambda did not work around the problem; it swept the problem under the rug. The try is still a problem, but now compilers do not have to tell you it is a problem.
Using a lambda switches the constexpr criterion from the straight-forward "the function body must not contain a try-block" to the indirect "there exists at least one set of argument values such that an invocation of the function could be an evaluated subexpression of a core constant expression". The tricky part here is that a violation of the latter criterion is "no diagnostic required", meaning that all the compilers are correct, whether or not they complain about this code. Hence my characterization of this as sweeping the problem under the rug.
So why is... that criterion is a long thing to repeat... what's the issue involving "core constant expressions"? C++17 removed the prohibition against lambdas in core constant expressions, so that much looks good. However, there is still a requirement that all function calls within the constexpr function also be themselves constexpr. Lambdas can become constexpr in two ways. First, they can be explicitly marked constexpr (but if you do that here, the complaint about the try block should come back). Second, they can simply satisfy the constexpr function requirements. However, your lambda contains a try, so it is not constexpr (in C++17).
Your lambda is not a valid constexpr function. Hence calling it is not allowed in a core constant expression. There is no execution path through can_represent() that avoids invoking your lambda. Therefore, can_represent is not a valid constexpr function, no diagnostic required.
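For what it's worth, here is a hedged C++17 sketch of how the function could be rewritten so that the lambda genuinely satisfies the constexpr requirements: simply drop the try, since an exception can never propagate out of a constant evaluation anyway (this is my rewrite, not code from the question).
#include <utility>

template <typename T, typename U>
constexpr bool can_represent(U&& w) noexcept
{
    return [] (auto&& x) {
        // no try block: the lambda now satisfies the C++17 constexpr requirements
        return T(std::forward<U>(x)) == std::forward<U>(x);
    } (std::forward<U>(w));
}

static_assert(can_represent<int>(42L));     // 42 round-trips through int
static_assert(!can_represent<char>(1000));  // on a typical 8-bit char, 1000 does not round-trip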

Related

`noexcept` behavior of `constexpr` functions

The wording of [expr.unary.noexcept] changed in C++17.
Previously (n4140, 5.3.7 noexcept operator [expr.unary.noexcept]), my emphasis:
The result of the noexcept operator is false if in a potentially-evaluated context the expression would contain
(3.1) a potentially-evaluated call to a function, member function,
function pointer, or member function pointer that does not have a
non-throwing exception-specification ([except.spec]), unless the call
is a constant expression ([expr.const]) ...
Now¹ (7.6.2.6 noexcept operator [expr.unary.noexcept]):
The result of the noexcept operator is true unless the expression is potentially-throwing ([except.spec]).
And then in 14.5 Exception specifications [except.spec]:
If a declaration of a function does not have a noexcept-specifier, the declaration has a potentially-throwing exception specification unless ...
but the "unless" list in 14.5(3) doesn't mention constexpr, leaving a constexpr function as potentially throwing...
¹ A link to C++17 (n4659) was added by L.F. in a comment.
Test code
#include <iostream>
constexpr int f(int i) { return i; }
int main() {
    std::cout << std::boolalpha << noexcept(f(7)) << std::endl;
    int a = 7;
    std::cout << std::boolalpha << noexcept(f(a)) << std::endl;
}
used to print (with gcc 8.3):
true
false
both when compiled with -std=c++11 and -std=c++2a
However, the same code now prints (with gcc 9.2):
false
false
both when compiled with -std=c++11 and -std=c++2a
Clang, by the way, has been very consistent since 3.4.1 and prints:
false
false
What is the right behavior per each spec?
Was there a real change in the spec? If so, what is the reason for this change?
If there is a change in the spec that affects or contradicts past behavior, would it be common practice to emphasize that change and its implications? If the change is not emphasized, can that imply it might be an oversight?
If this is a real intended change, was it considered a bug fix that should go back to previous versions of the spec? Are compilers right to align the new behavior retroactively back to C++11?
Side Note: the noexcept deduction on a constexpr function affects this trick.
Summary
What is the right behavior per each spec?
true false before C++17, false false since C++17.
Was there a real change in the spec? If so, what is the reason for this change?
Yes. See the quote from the Clang bug report below.
If there is a change in the spec that affects or contradicts past behavior, would it be common practice to emphasize that change and its implications? If the change is not emphasized, can that imply it might be an oversight?
Yes; yes (but CWG found a reason to justify the oversight later, so it was kept as-is).
If this is a real intended change, was it considered a bug fix that should go back to previous versions of the spec? Are compilers right to align the new behavior retroactively back to C++11?
I'm not sure. See the quote from the Clang bug report below.
Detail
I have searched many places, and so far the closest thing I can find is the comments on relevant bug reports:
GCC Bug 87603 - [C++17] noexcept isn't special cased for constant expressions anymore
CWG 1129 (which ended up in C++11) added a special case to noexcept
for constant expressions, so that:
constexpr void f() {} static_assert(noexcept(f()));
CWG 1351 (which ended up in C++14) changed the wording significantly,
but the special case remained, in a different form.
P0003R5 (which ended up in C++17) changed the wording again, but the
special case was removed (by accident), so now:
constexpr void f() {} static_assert(!noexcept(f()));
According to Richard Smith in LLVM 15481, CWG discussed this but decided to keep the behavior as-is. Currently, clang does the right
thing for C++17 (and fails for C++14 and C++11, on purpose). g++,
however, implemented the special case for C++11 already, but not the
change for C++17. Currently, icc and msvc seem to behave like g++.
Clang Bug 15481 - noexcept should check whether the expression is a constant expression
The constant expression special case was removed -- apparently by accident -- by wg21.link/p0003. I'm investigating whether it's going
to stay gone or not.
Did you do anything to avoid quadratic runtime on deeply-nested
expressions?
[...]
Conclusion from CWG discussion: we're going to keep this as-is. noexcept has no special rule for constant expressions.
It turns out this is actually essential for proper library
functionality: e.g., if noexcept tries evaluating its operand, then
(for example) is_nothrow_swappable is broken by making std::swap
constexpr, because std::swap<T> then often ends up getting
instantiated before T is complete.
As a result of that, I'm also going to consider this change as an
effective DR against C++11 and C++14... but I'm open to reconsidering
if we see many user complaints.
In other words, the special rule was accidentally removed by P0003, but CWG decided to keep the removal.
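As an aside, a minimal sketch of the portable workaround, assuming code relied on the old special case: spell the noexcept-specifier out on the constexpr function yourself, and the operator then reports true in every language mode.
constexpr int f(int i) noexcept { return i; }
static_assert(noexcept(f(7)), "true before and after C++17");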

Rationale for [dcl.constexpr]p5 in the c++ standard

What is the rationale for [dcl.constexpr]p5 (http://eel.is/c++draft/dcl.constexpr#5)?
For a non-template, non-defaulted constexpr function or a
non-template, non-defaulted, non-inheriting constexpr constructor, if
no argument values exist such that an invocation of the function or
constructor could be an evaluated subexpression of a core constant
expression ([expr.const]), or, for a constructor, a constant
initializer for some object ([basic.start.init]), the program is
ill-formed; no diagnostic required.
If a program violates this rule, declaring the offending function constexpr was simply useless. So what? Isn't it better to accept useless uses of the decl-specifier constexpr than to trigger undefined behaviour (via "no diagnostic required")? On top of the undefined behaviour, we also have the added complexity of carrying the rule [dcl.constexpr]p5 in the standard.
An implementation can still provide useful diagnostic messages in some cases that it is able to detect (warnings by convention). Just like in the following case:
int main() { 0; }
The expression in main there is well-formed but useless. Some compilers issue a diagnostic message anyway (and they are allowed to) in the form of a warning.
I understand that [dcl.constexpr]p5 cannot require diagnostics, so I'm not asking about that. I'm just asking why this rule is even in the standard.
The reason it's ill-formed is because making it ill-formed allows implementations to reject constexpr function definitions that cannot possibly form constant expressions. Rejecting them early means getting more useful diagnostics.
The reason no diagnostic is required is because it may be unrealistic for an implementation to determine that for each and every possible combination of arguments, the result is not a constant expression.
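A small sketch (hypothetical names, modeled on the standard's own example) of the kind of definition the rule lets implementations reject early:
constexpr int ok(bool b) { return b ? 0 : throw 0; }  // OK: ok(true) is a constant expression
constexpr int bad()      { return ok(false); }        // ill-formed, no diagnostic required:
                                                       // no invocation can ever be constant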
The fact that "ill-formed, no diagnostic required" effectively means the same thing as undefined behaviour seems unfortunate to me, but it was presumably picked for lack of a better option. I'd be highly surprised if the intent were actually to allow arbitrary run-time behaviour, but there is no concept of "may be diagnosed as an error, but if not, must behave as specified" for any language feature in C++.

How to effectively debug constexpr functions?

In C++14 we get an upgraded version of constexpr, meaning that it is now possible to use loops, if statements, and switches.
Recursion is already possible as in C++11.
I understand that constexpr functions/code should be quite simple, but still the question arises: how do you effectively debug them?
Even in "The C++ Programming Language, 4th Edition" there is a sentence that debugging can be hard.
There are two important aspects for debugging constexpr functions.
1) Make sure they compute the correct result
Here you can use regular unit-testing, asserts or a runtime debugger to step through your code. There is nothing new compared to testing regular functions here.
2) Make sure they can be evaluated at compile-time
This can be tested by evaluating the function on the right-hand side of a constexpr variable initialization.
constexpr auto my_var = my_fun(my_arg);
In order for this to work, my_fun can a) only take compile-time constant expressions as actual arguments, i.e. my_arg is a literal (builtin or user-defined), a previously computed constexpr variable, a template parameter, etc., and b) only call constexpr functions in its implementation (so no virtuals, no lambda expressions, etc.).
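A minimal sketch of both checks, using a hypothetical square function (not from the original answer):
#include <cassert>

constexpr int square(int x) { return x * x; }

// 2) compile-time check: this fails to compile if square(4) is not a constant expression
constexpr auto compile_time_result = square(4);
static_assert(compile_time_result == 16, "wrong result at compile time");

// 1) runtime check: same code path, but now you can step through it in a debugger
int main()
{
    int runtime_arg = 4;
    assert(square(runtime_arg) == 16);
}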
Note: it is very hard to actually debug the compiler's implementation of code generation during the compile-time evaluation of your constexpr function. You would have to attach a debugger to your compiler and actually be able to interpret the code path. Maybe some future version of Clang will let you do this, but this is not feasible with current technology.
Fortunately, because you can decouple the runtime and compile-time behavior of constexpr functions, debugging them isn't half as hard as debugging template metaprograms (which can only be run at compile-time).
The answer I wrote on 3 April '15 is clearly wrong. I can't understand what I was thinking.
Here is the "real" answer - the method I use now.
a) write your constexpr function as you normally would. So far it doesn't work.
b) when the function is invoked at compile time, compilation fails with nothing more than a message to the effect of "invalid constexpr function". This makes it hard to know what the problem actually is.
c) Make a small test program which calls the function with parameters known only at runtime. Run your test program with the debugger. You'll find that you can trace through the function in the normal manner.
It took me an embarrassingly long time to figure this out.
If you are using gcc, you can try this, and there is an introduction about it.
If by debugging you mean "make it known that a certain expression is not of a desired value", you could check it at runtime
#include <stdexcept>
#include <iostream>

constexpr int test(int x) { return x > 0 ? x : (throw std::domain_error("wtf")); }

int main()
{
    test(42);
    std::cout << "42\n";
    test(-1);
    std::cout << "-1\n";
}

Why is constexpr not automatic? [duplicate]

This question already has answers here: Why do we need to mark functions as constexpr?
As far as I understand it, constexpr can be seen as a hint to the compiler to check whether given expressions can be evaluated at compile-time and do so if possible.
I know that it also imposes some restriction on the function or initialization declared as constexpr but the final goal is compile-time evaluation, isn't it?
So my question is, why can't we leave that to the compiler? It is obviously capable of checking the pre-conditions, so why doesn't it do this for each expression and evaluate at compile-time where possible?
I have two ideas on why this might be the case but I am not yet convinced that they hit the point:
a) It might take too long during compile-time.
b) Since my code can use constexpr functions in locations where normal functions would not be allowed, the specifier is also kind of part of the declaration. If the compiler did everything by itself, one could use a function in a C-array definition with one version of the function, but with the next version there might be a compiler error, because the pre-conditions for compile-time evaluation are no longer satisfied.
constexpr is not a "hint" to the compiler about anything; constexpr is a requirement. It doesn't require that an expression actually be executed at compile time; it requires that it could.
What constexpr does (for functions) is restrict what you're allowed to put into the function definition, so that the compiler can easily execute that code at compile time where possible. It's a contract between you, the programmer, and the compiler. If your function violates the contract, the compiler will error immediately.
Once the contract is established, you are now able to use these constexpr functions in places where the language requires a compile time constant expression. The compiler can then check the elements of a constant expression to see that all function calls in the expression call constexpr functions; if they don't, again a compiler error results.
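A minimal sketch of that contract, with hypothetical function names:
constexpr int twice(int x) { return 2 * x; }   // promises compile-time usability
int runtime_twice(int x)   { return 2 * x; }   // identical body, no promise

int a[twice(3)];              // OK: constexpr function in a context requiring a constant expression
// int b[runtime_twice(3)];   // error: not a constant expression, despite the identical body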
Your attempt to make this implicit would result in two problems. First, without an explicit contract as defined by the language, how would I know what I can and cannot do in a constexpr function? How do I know what will make a function not constexpr?
And second, without the contract being in the compiler, via a declaration of my intent to make the function constexpr, how would the compiler be able to verify that my function conforms to that contract? It couldn't; it would have to wait until I use it in a constant expression before I find that it isn't actually a proper constexpr function.
Contracts are best stated explicitly and up-front.
constexpr can be seen as a hint to the compiler to check whether given expressions can be evaluated at compile-time and do so if possible
No, see below
the final goal is compile-time evaluation
No, see below.
so why doesn't it do this for each expression and evaluate at compile-time where possible?
Optimizers do things like that, as allowed under the as-if rule.
constexpr is not used to make things faster, it is used to allow usage of the result in context where a runtime-variable expression is illegal.
This is only my evaluation, but I believe your (b) reason is correct (that it forms part of the interface that the compiler can enforce). The interface requirement serves both for the writer of the code and the client of the code.
The writer may intend something to be usable in a compile-time context, but not actually use it in this way. If the writer violates the rules for constexpr, they might not find out until after publication when clients who try to use it constexpr fail. Or, more realistically, the library might use the code in a constexpr sense in version 1, refactor this usage out in version 2, and break constexpr compatibility in version 3 without realizing it. By checking constexpr-compliance, the breakage in version 3 will be caught before deployment.
The interface for the client is more obvious --- an inline function won't silently become constexpr-required because it happened to work and someone used it that way.
I don't believe your (a) reason (that it could take too long for the compiler) is applicable because (1) the compiler has to check much of the constexpr constraints anyway when the code is marked, (2) without the annotation, the compiler would only have to do the checking when used in a constexpr way (so most functions wouldn't have to be checked), and (3) IIUC the D programming language actually does allow functions to be compile-time evaluated if they meet requirements without any declaration assistance, so apparently it can be done.
I think I remember watching an early talk by Bjarne Stroustrup where he mentioned that programmers wanted fine grained control on this "dangerous" feature, from which I understand that they don't want things "accidentally" executed at compile time without them knowing. (Even if that sound like a good thing.)
There can be many reasons for that, but I think the only valid one is ultimately compilation speed ((a) in your list).
It would be too much burden on the compiler to determine for every function if it could be computed at compile time.
This argument is weaker as compilation times in general go down.
Like many other features of C++, what ends up happening is that we end up with the "wrong defaults".
So you have to say when you want constexpr instead of when you don't want constexpr (runtimeexpr); you have to say when you want const instead of when you want mutable, etc.
Admittedly, you can imagine functions that take an absurd amount of time to run at compile time and that cannot be amortized (with other kinds of machine resources) at runtime.
(I am not aware that "time-out" can be a criterion in a compiler for constexpr, but it could be so.)
Or it could be that one is compiling in a system that is always expected to finish compilation in a finite time but an unbounded runtime is admissible (or debuggable).
I know that this question is old, but time has illuminated that it actually makes sense to have constexpr as default:
In C++17, for example, you can declare a lambda constexpr, but more importantly, lambdas are constexpr by default if they can be.
https://learn.microsoft.com/en-us/cpp/cpp/lambda-expressions-constexpr
Note that lambdas have all the "right" (opposite) defaults: members (captures) are const by default, arguments are templated (auto) by default, and now the call operator is constexpr by default when it can be.
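For example, this C++17 sketch (my own, not from the linked page) compiles without writing constexpr anywhere, because the captureless lambda's call operator is implicitly constexpr:
auto square = [](int x) { return x * x; };
static_assert(square(3) == 9, "implicitly constexpr in C++17");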

Literals and constexpr functions, compile-time evaluation

Attempting to implement a pleasing (simple, straightforward, no TMP, no macros, no unreadable convoluted code, no weird syntax when using it) compile-time hash via user-defined literals, I found that apparently GCC's understanding of what's a constant expression is grossly different from my understanding.
Since code and compiler output say more than a thousand words, without further ado:
#include <cstdio>

constexpr unsigned int operator"" _djb(const char* const str, unsigned int len)
{
    static_assert(__builtin_constant_p(str), "huh?");
    return len ? str[0] + (33 * ::operator"" _djb(str+1, len-1)) : 5381;
}

int main()
{
    printf("%u\n", "blah"_djb);
    return 0;
}
The code is pretty straightforward, not much to explain, and not much to ask about -- except it does not evaluate at compile-time. I tried using a pointer dereference instead of using an array index as well as having the recursion break at !*str, all to the same result.
The static_assert was added later when fishing in troubled waters for why the hash just wouldn't evaluate at compile-time when I firmly believed it should. Well, surprise, that only puzzled me more, but didn't clear up anything! The original code, without the static_assert, is well-accepted and compiles without warnings (gcc 4.7.2).
Compiler output :
[...]\main.cpp: In function 'constexpr unsigned int operator"" _djb(const char*, unsigned int)':
[...]\main.cpp:5:2: error: static assertion failed: huh?
My understanding is that a string literal is, well... a literal. In other words, a compile-time constant. Specifically, it is a compile-time-known sequence of constant characters starting at a constant address assigned by the compiler (and thus, known), terminated by '\0'. This logically implies that the literal's compiler-calculated length, as supplied to operator"", is a constexpr as well.
Also, my understanding is that calling a constexpr function with only compile-time parameters makes it eligible as an initializer for an enumeration or as a template parameter; in other words, it should result in evaluation at compile time.
Of course it is in principle always allowable for the compiler to evaluate a constexpr function at runtime, but being able to move the evaluation to compile-time is the entire point of having constexpr, after all.
Where is my fallacy, and is there a way of implementing a user-defined literal that can take a string literal so it actually evaluates at compile-time?
Possibly relevant similar questions:
Can a string literal be subscripted in a constant expression?
User defined literal arguments are not constexpr?
The first one seems to suggest that at least for char const (&str)[N] this works, and GCC accepts it, though I admittedly can't follow the conclusion.
The second one uses integer literals, not string literals, and finally addresses the issue by using template metaprogramming (which I don't want). So apparently the issue is not limited to string literals?
I don't have GCC 4.7.2 at hand to try, but your code without the static assertion (more on that later) compiles fine and executes the function at compile-time with both GCC 4.7.3 and GCC 4.8. I guess you will have to update your compiler.
The compiler is not always allowed to move the evaluation to runtime: some contexts, like template arguments, and static_assert, require evaluation at compile-time or an error if not possible. If you use your UDL in a static_assert you will force the compiler to evaluate it at compile-time if possible. In both my tests it does so.
Now, on to __builtin_constant_p(str). To start with, as documented, __builtin_constant_p can produce false negatives (i.e. it can return 0 for constant expressions sometimes).
str is not provably a constant expression because it is a function argument. You can force the compiler to evaluate the function at compile-time in some contexts, but that doesn't mean it can never evaluate it at runtime: some contexts never force compile-time evaluation (and in fact, in some of those contexts compile-time evaluation is just impossible). str can be a non-constant expression.
The static assertions are tested when the compiler sees the function, not once for each call the compiler sees. That makes the fact that you always call it in compile-time contexts irrelevant: only the body matters. Because str can sometimes be a non-constant expression, __builtin_constant_p(str) in that context cannot be true: it can produce false negatives, but it does not produce false positives.
To make it more clear: static_assert(__builtin_constant_p("blah"), "") will pass (well, in theory it could fail, but I doubt the compiler would produce a false negative here), because "blah" is always a constant expression, but str is not the same expression as "blah".
For completeness, if the argument in question was of a numeric type (more on that later), and you did the test outside of a static assertion, you could get the test to return true if you passed a constant, and false if you passed a non-constant. In a static assertion, it always fails.
But! The docs for __builtin_constant_p reveal one interesting detail:
However, if you use it in an inlined function and pass an argument of the function as the argument to the built-in, GCC will never return 1 when you call the inline function with a string constant or compound literal (see Compound Literals) and will not return 1 when you pass a constant numeric value to the inline function unless you specify the -O option.
As you can see, the built-in has a limitation that makes the test always return false if the expression given is a string constant.
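For completeness, here is a hedged sketch of the same hash written so it provably evaluates at compile time: drop the static_assert from the operator body, use the standard-mandated std::size_t parameter type, and force the result into a constexpr variable (this is my rewrite, not code from the question or the answer).
#include <cstddef>

constexpr unsigned int operator"" _djb(const char* str, std::size_t len)
{
    return len ? str[0] + (33 * ::operator"" _djb(str + 1, len - 1)) : 5381;
}

constexpr unsigned int h = "blah"_djb;        // ill-formed unless evaluated at compile time
static_assert(h == "blah"_djb, "compile-time evaluation forced");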