I know that similar questions have been asked before (for example, about non-constexpr calls in constexpr functions), but consider the following code:
consteval int factorial(int n)
{
    return n <= 1 ? 1 : (n * factorial(n - 1));
}
factorial(5);
All is OK. We are guaranteed that the expression factorial(5) is resolved at compile time, because the function is consteval. Right? If so, I would expect the recursive call factorial(n - 1) inside the call factorial(5) to be resolved at compile time too. However, we also know that in the declaration consteval int factorial(int n), the parameter n is just a variable, not a constexpr one. And this matters if we try to do something like this:
consteval int factorial(int n)
{
    // 1
    constexpr auto res = factorial(n - 1); // error: ‘n’ is not a constant expression
    // 2
    return n <= 1 ? 1 : (n * factorial(n - 1)); // hhhhmmmmmm...but all is ok..
}
factorial(5);
What do we have?
1. We call the consteval function with a literal constant. OK.
2. Within the consteval function we make a recursive call to the same function with the non-constexpr parameter, at the line marked // 2. And all is OK, even though we are calling a consteval function with a non-constexpr value. Well, we might guess that the compiler knows that the base call factorial(5) was made as a proper consteval call, and that the whole resulting expression (with all the internal code of factorial) should be interpreted as consteval. Yes? Or why? Because...
3. At the line marked // 1 we explicitly make a constexpr call with a non-constexpr value. And we get an error.
My question is: for the explicit consteval call factorial(5), why does the compiler treat the explicit and the implicit constexpr recursive calls of factorial differently? Is this a bug or a feature?
Let's review what a constant expression is. A core constant expression is an expression whose evaluation does not cause any of a long list of "bad" behaviors. A constant expression is a core constant expression whose result is "allowed" by some other rules (not important here). In particular, note that these conditions are heavily non-syntactic: constant expressions are not defined positively, by saying which expressions are constant expressions, but negatively, by saying what constant expressions cannot do.
A result of this definition is that an expression can be a constant expression even if it requires the evaluation of many non-constant expressions (even non-core constant expressions). In the definitions
consteval int factorial1(int n) {
    if (n == 0) return 1;
    else { // making this correct since undefined behavior interferes with constant expressions
        /*constexpr*/ auto rec = factorial1(n - 1);
        return n * rec;
    }
}

consteval int factorial2(int n) {
    return n == 0 ? 1 : n * factorial2(n - 1);
}
the factorial1(n - 1) in factorial1 is not a constant expression, so adding constexpr to rec is an error. Similarly, the n == 0 ? 1 : n * factorial2(n - 1) in factorial2 is also not a constant expression. The reason is the same: both of these expressions read the value of (perform lvalue-to-rvalue conversion on) the object n, which did not start its lifetime within the expression. But this is fine: the bodies of constexpr/consteval functions are simply not checked for being constant expressions. All constexpr really does is whitelist a function's calls for appearing in constant expressions. And, again, an expression can be constant (like factorial1(5)) even if you need to evaluate a non-constant expression on the way (like factorial1(n - 1)). (In this case, when evaluating factorial1(5), the n object that is the parameter to factorial1 does start its lifetime within the expression being checked, so it can be read during evaluation.)
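To make the lifetime point concrete, here is a small sketch of my own, reusing factorial1 from above:

constexpr int f5 = factorial1(5); // OK: f5 == 120. During this one evaluation, the
                                  // parameter n of every recursive call begins its
                                  // lifetime within the expression being checked,
                                  // so the reads of n are permitted.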
Two places where an expression will be checked for being a constant expression are initializations of constexpr variables and "non-protected" calls to consteval functions. The first one explains why adding constexpr to rec in factorial1 is an error: you're adding an additional constant-expression check that is not performed in the correct version of factorial1, and this extra check (correctly) fails. This should answer your point 3.
For your point 2: yes, there's a special "protection" for consteval functions called from other consteval functions. Usually, a call to a consteval function is, right at the point it is written, checked for being a constant expression. As we've been discussing, this check would fail for the calls factorial1(n - 1) and factorial2(n - 1) in the above definitions. There is a special case built into the language to save them: a call to a consteval function in an immediate function context (basically, whose immediately enclosing function is also consteval) is not required to be a constant expression.
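As a rough sketch of that special case (my own example; sq, f, and g are made-up names, C++20 rules):

consteval int sq(int n) { return n * n; }

consteval int f(int n)
{
    return sq(n); // OK: immediate function context, so sq(n) is not itself
                  // required to be a constant expression here
}

int g(int n)
{
    return sq(n); // error: sq(n) must be a constant expression, but n is not
}

constexpr int x = f(3); // OK: the full call f(3) is a constant expression (x == 9)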
Why doesn't the following code compile?
// source.cpp
int main()
{
constexpr bool result = (0 == ("abcde"+1));
}
The compile command:
$ g++ -std=c++14 -c source.cpp
The output:
source.cpp: In function ‘int main()’:
source.cpp:4:32: error: ‘((((const char*)"abcde") + 1u) == 0u)’ is not a constant expression
constexpr bool result = (0 == ("abcde"+1));
~~~^~~~~~~~~~~~~~~
I'm using gcc 6.4.
The restrictions on what can be used in a constant expression are defined mostly as a list of negatives. There's a bunch of things you're not allowed to evaluate ([expr.const]/2 in C++14), and certain properties that the resulting values must satisfy ([expr.const]/4 in C++14). This list changes from standard to standard, becoming more permissive with time.
In trying to evaluate:
constexpr bool result = (0 == ("abcde"+1));
there is nothing that we're not allowed to evaluate, and we don't have any results that we're not allowed to have. No undefined behavior, etc. It's a perfectly valid, if odd, expression. Just one that gcc 6.3 happens to disallow - which is a compiler bug. gcc 7+, clang 3.5+, msvc all compile it.
There seems to be a lot of confusion around this question, with many comments suggesting that since the value of a string literal like "abcde" is not known until runtime, you cannot do anything with such a pointer during constant evaluation. It's important to explain why this is not true.
Let's start with a declaration like:
constexpr char const* p = "abcde";
This pointer has some value. Let's say N. The crucial thing is - just about anything you can do to try to observe N during constant evaluation would be ill-formed. You cannot cast it to an integer to read the value. You cannot compare it to a different, unrelated string† (by way of [expr.rel]/4.3):
constexpr char const* q = "hello";
p > q; // ill-formed
p <= q; // ill-formed
p != q; // ok, false
We can say for sure that p != q because, wherever they point, they clearly point at different strings. But we cannot say which one comes first. The result of such a relational comparison is unspecified, and comparisons with unspecified results are disallowed in constant expressions.
You can really only compare to pointers within the same array:
constexpr char const* a = p + 1; // ok
constexpr char const* b = p + 17; // ill-formed
a > p; // ok, true
Wherever it is that p points to, we know that a points after it. But we don't need to know N to determine this.
As a result, the actual value N during constant evaluation is more or less immaterial.
"abcde" is... somewhere. "abcde"+1 points to one later than that, and has the value "bcde". Regardless of where it points, you can compare it to a null pointer (0 is a null pointer constant) and it is not a null pointer, hence that comparison evaluates as false.
This is a perfectly well-formed constant evaluation, which gcc 6.3 happens to reject.
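To drive the point home, here is a small sketch of my own (the variable s is made up) with comparisons that are all valid during constant evaluation:

constexpr char const* s = "abcde";
static_assert(s != nullptr, "non-null, wherever it points");
static_assert(s + 1 != nullptr, "still inside the same array, still non-null");
static_assert(!(0 == (s + 1)), "the question's expression, negated");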
†Although we simply state by fiat that std::less()(p, q) provides some value that gives a consistent total order at compile time and that it gives the same answer at runtime. Which is... an interesting conundrum.
I'm confused about what it means to be known at compile time. In the code below, can the compiler not calculate the value of n, even though I have passed the literal constant 90 as the argument? Why does it give me the error that the expression must have a constant value?
constexpr int MAX_expr = 100;
const int MAX = 90;

void foo(int n)
{
    constexpr int cExpr1 = MAX_expr + 7; // OK: MAX_expr is a constant expression
    constexpr int cExpr2 = n + 7;        // error: expression must have a constant value
    constexpr int cExpr3 = MAX + 7;      // OK: a const int initialized with a constant expression is usable here
    constexpr int cExpr4 = n + 7;        // error: expression must have a constant value
    const int cExpr5 = MAX_expr + 7;     // OK
    const int cExpr6 = n + 7;            // OK: merely const, initialized at run time
    const int cExpr7 = MAX + 7;          // OK
    const int cExpr8 = n + 7;            // OK: merely const, initialized at run time
}

int main() {
    foo(90);
    const int i = factorials(90); // factorials is defined below
}
By that same logic, shouldn't factorials(int i) give an error? It does not know what argument is going to be passed, so the compiler won't be able to compute what is going to be returned:
constexpr int factorials(int i) {
return i > 1 ? i * factorials(i - 1) : 1;
}
The constexpr keyword can be confusing. It can be applied to both variables and functions, but with totally different meanings, except that they both have something to do with constant expressions.
A variable declared with constexpr must be initialized by a constant expression. In your code, n + 7 is not a constant expression because the value of n is not known until the function is called and may vary from one call to the next. What if the user entered some integer, and then you passed that integer to foo? Obviously, that number plus 7 is not something you can call "known at compile time". Because of that, a function definition such as foo is not allowed. You cannot promise the compiler that you will only ever pass a constant expression argument. If you can, then promote n to a template parameter, and the code will work.
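A sketch of that template-parameter promotion (my own example):

template <int N>
void foo()
{
    constexpr int cExpr = N + 7; // OK: N is a constant expression in every instantiation
}

int main()
{
    foo<90>(); // the argument is now fixed at compile time for this instantiation
}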
In contrast, constexpr applied to a function doesn't guarantee that calling the function produces a constant expression. It allows the function to be called in a constant expression, and places constraints on the definition in order to make this possible. While factorials will certainly not produce a constant expression if given a runtime argument, it will produce a constant expression if given an integer constant expression as an argument (assuming no overflow). Thus, unlike initializers of constexpr variables, a constexpr function is allowed to contain constructs that may or may not have compile-time constant values.
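To illustrate, here is a sketch of my own showing the same constexpr function used in both a compile-time and a runtime context:

#include <iostream>

constexpr int factorials(int i) {
    return i > 1 ? i * factorials(i - 1) : 1;
}

int main()
{
    constexpr int a = factorials(5); // OK: the argument is a constant expression,
                                     // so this is computed at compile time
    int n = 0;
    std::cin >> n;                   // n is only known at run time
    int b = factorials(n);           // also OK: an ordinary call, evaluated at run time
}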
There's been some debate going on in this question about whether the following code is legal C++:
std::list<item*>::iterator i = items.begin();
while (i != items.end())
{
    bool isActive = (*i)->update();
    if (!isActive)
    {
        items.erase(i++); // *** Is this undefined behavior? ***
    }
    else
    {
        other_code_involving(*i);
        ++i;
    }
}
The problem here is that erase() will invalidate the iterator in question. If that happens before i++ is evaluated, then incrementing i like that is technically undefined behavior, even if it appears to work with a particular compiler. One side of the debate says that all function arguments are fully evaluated before the function is called. The other side says, "the only guarantees are that i++ will happen before the next statement and after i++ is used. Whether that is before erase(i++) is invoked or afterwards is compiler dependent."
I opened this question to hopefully settle that debate.
Quoth the C++ standard 1.9.16:
When calling a function (whether or not the function is inline), every value computation and side effect associated with any argument expression, or with the postfix expression designating the called function, is sequenced before execution of every expression or statement in the body of the called function. (Note: Value computations and side effects associated with the different argument expressions are unsequenced.)
So it would seem to me that this code:
foo(i++);
is perfectly legal. It will increment i and then call foo with the previous value of i. However, this code:
foo(i++, i++);
yields undefined behavior because paragraph 1.9.16 also says:
If a side effect on a scalar object is unsequenced relative to either another side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined.
To build on Kristo's answer,
foo(i++, i++);
yields undefined behavior because the order in which function arguments are evaluated is unspecified, and, more importantly, because the two unsequenced side effects on the same object make the behavior undefined (in general, if you read a variable twice in an expression in which you also write it, without any sequencing between the two, the behavior is undefined). You don't know which argument will be incremented first.
int i = 1;
foo(i++, i++);
might result in a function call of
foo(2, 1);
or
foo(1, 2);
or even
foo(1, 1);
Run the following to see what happens on your platform:
#include <iostream>
using namespace std;

void foo(int a, int b)
{
    cout << "a: " << a << endl;
    cout << "b: " << b << endl;
}

int main()
{
    int i = 1;
    foo(i++, i++);
}
On my machine I get
$ ./a.out
a: 2
b: 1
every time, but this code is not portable, so I would expect to see different results with different compilers.
The standard says the side effect happens before the call, so the code behaves like:

std::list<item*>::iterator i_before = i;
++i;                   // i moves on while it is still valid
items.erase(i_before); // only the erased element's iterator is invalidated

rather than like:

items.erase(i); // invalidates i
++i;            // undefined behavior: increments an invalidated iterator
So it is safe in this case, because list.erase() specifically doesn't invalidate any iterators other than the one erased.
That said, it's bad style - the erase function of the sequence containers (and, since C++11, of all standard containers) returns the next iterator specifically so that you don't have to worry about iterators invalidated by the erase (or, for vectors, by reallocation), so the idiomatic code:
i = items.erase(i);
will be safe for lists, and will also be safe for vectors, deques and any other sequence container should you want to change your storage.
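Applied to the loop from the question, the idiomatic version would look something like this (a sketch reusing the question's names):

std::list<item*>::iterator i = items.begin();
while (i != items.end())
{
    bool isActive = (*i)->update();
    if (!isActive)
    {
        i = items.erase(i); // erase returns the iterator to the next element
    }
    else
    {
        other_code_involving(*i);
        ++i;
    }
}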
Depending on your compiler and warning settings, you also might not get the original code to compile without warnings - you'd have to write

(void)items.erase(i++);

to avoid a warning about an unused return value, which would be a big clue that you're doing something odd.
It's perfectly OK.
The value passed would be the value of i before the increment.
++Kristo!
The C++ standard 1.9.16 makes a lot of sense with respect to how one implements operator++ (postfix) for a class. When that operator++(int) method is called, it increments the object and returns a copy of the original value. Exactly as the C++ spec says.
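For reference, a minimal sketch of the canonical postfix operator++ (my own example):

struct Counter
{
    int value = 0;
    Counter operator++(int)   // the dummy int parameter marks the postfix form
    {
        Counter old = *this;  // copy the original value
        ++value;              // the increment happens here, inside the call
        return old;           // the caller sees the pre-increment copy
    }
};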
It's nice to see standards improving!
However, I distinctly remember using older (pre-ANSI) C compilers wherein:
foo -> bar(i++) -> charlie(i++);
did not do what you might think! Instead, it compiled to the equivalent of:
foo -> bar(i) -> charlie(i); ++i; ++i;
And this behavior was compiler-implementation dependent. (Making porting fun.)
It's easy enough to test and verify that modern compilers now behave correctly:
#include <iostream>
using namespace std;

#define SHOW(S,X) cout << S << ": " # X " = " << (X) << endl

struct Foo
{
    Foo & bar(const char * theString, int theI)
    { SHOW(theString, theI); return *this; }
};

int main()
{
    Foo f;
    int i = 0;
    f . bar("A",i) . bar("B",i++) . bar("C",i) . bar("D",i);
    SHOW("END ",i);
}
Responding to comment in thread...
...And building on pretty much EVERYONE's answers... (Thanks guys!)
I think we need to spell this out a bit better:
Given:
baz(g(),h());
Then we don't know whether g() will be invoked before or after h(). It is "unspecified".
But we do know that both g() and h() will be invoked before baz().
Given:
bar(i++,i++);
Again, we don't know which i++ will be evaluated first, and perhaps not even whether i will be incremented once or twice before bar() is called. The results are undefined! (Given i=0, this could be bar(0,0) or bar(1,0) or bar(0,1) or something really weird!)
Given:
foo(i++);
We now know that i will be incremented before foo() is invoked. As Kristo pointed out from the C++ standard section 1.9.16:
When calling a function (whether or not the function is inline), every value computation and side effect associated with any argument expression, or with the postfix expression designating the called function, is sequenced before execution of every expression or statement in the body of the called function. [ Note: Value computations and side effects associated with different argument expressions are unsequenced. -- end note ]
Though I think section 5.2.6 says it better:
The value of a postfix ++ expression is the value of its operand. [ Note: the value obtained is a copy of the original value -- end note ] The operand shall be a modifiable lvalue. The type of the operand shall be an arithmetic type or a pointer to a complete effective object type. The value of the operand object is modified by adding 1 to it, unless the object is of type bool, in which case it is set to true. [ Note: this use is deprecated, see Annex D. -- end note ] The value computation of the ++ expression is sequenced before the modification of the operand object. With respect to an indeterminately-sequenced function call, the operation of postfix ++ is a single evaluation. [ Note: Therefore, a function call shall not intervene between the lvalue-to-rvalue conversion and the side effect associated with any single postfix ++ operator. -- end note ] The result is an rvalue. The type of the result is the cv-unqualified version of the type of the operand. See also 5.7 and 5.17.
The standard, in section 1.9.16, also lists (as part of its examples):
i = 7, i++, i++; // i becomes 9 (valid)
f(i = -1, i = -1); // the behavior is undefined
And we can trivially demonstrate this with:
#include <iostream>
using namespace std;

#define SHOW(X) cout << # X " = " << (X) << endl

int i = 0; /* Yes, it's global! */
void foo(int theI) { SHOW(theI); SHOW(i); }
int main() { foo(i++); } // prints theI = 0, then i = 1
So, yes, i is incremented before foo() is invoked.
All this makes a lot of sense from the perspective of:
class Foo
{
public:
    Foo operator++(int) {...} /* Postfix variant */
};

int main() { Foo f; delta( f++ ); } // delta() assumed declared elsewhere
Here Foo::operator++(int) must be invoked prior to delta(). And the increment operation must be completed during that invocation.
In my (perhaps overly complex) example:
f . bar("A",i) . bar("B",i++) . bar("C",i) . bar("D",i);
f.bar("A",i) must be executed to obtain the object used for object.bar("B",i++), and so on for "C" and "D".
So we know that i++ increments i prior to calling bar("B",i++) (even though bar("B",...) is invoked with the old value of i), and therefore i is incremented prior to bar("C",i) and bar("D",i).
Getting back to j_random_hacker's comment:
j_random_hacker writes: +1, but I had to read the standard carefully to convince myself that this was OK. Am I right in thinking that, if bar() was instead a global function returning say int, f was an int, and those invocations were connected by say "^" instead of ".", then any of A, C and D could report "0"?
This question is a lot more complicated than you might think...
Rewriting your question as code...
int bar(const char * theString, int theI) { SHOW(theString, theI); return i; }

bar("A",i) ^ bar("B",i++) ^ bar("C",i) ^ bar("D",i);
Now we have only ONE expression. According to the standard (section 1.9, page 8, pdf page 20):
Note: operators can be regrouped according to the usual mathematical rules only where the operators really are associative or commutative.(7) For example, in the following fragment: a=a+32760+b+5; the expression statement behaves exactly the same as: a=(((a+32760)+b)+5); due to the associativity and precedence of these operators. Thus, the result of the sum (a+32760) is next added to b, and that result is then added to 5 which results in the value assigned to a. On a machine in which overflows produce an exception and in which the range of values representable by an int is [-32768,+32767], the implementation cannot rewrite this expression as a=((a+b)+32765); since if the values for a and b were, respectively, -32754 and -15, the sum a+b would produce an exception while the original expression would not; nor can the expression be rewritten either as a=((a+32765)+b); or a=(a+(b+32765)); since the values for a and b might have been, respectively, 4 and -8 or -17 and 12. However on a machine in which overflows do not produce an exception and in which the results of overflows are reversible, the above expression statement can be rewritten by the implementation in any of the above ways because the same result will occur. -- end note ]
So we might think that, due to precedence, our expression would be the same as:
(
    (
        ( bar("A",i) ^ bar("B",i++) )
        ^ bar("C",i)
    )
    ^ bar("D",i)
);
But, because (a^b)^c == a^(b^c), with no possible overflow situations, it could seemingly be rewritten in any order...
But, because bar() is being invoked, and could hypothetically involve side effects, this expression cannot be rewritten in just any order. Rules of precedence still apply.
Which nicely determines the order of evaluation of the bar()'s.
Now, when does that i+=1 occur? Well it still has to occur before bar("B",...) is invoked. (Even though bar("B",....) is invoked with the old value.)
So it's deterministically occurring before bar(C) and bar(D), and after bar(A).
Answer: NO. We will always have "A=0, B=0, C=1, D=1", if the compiler is standards-compliant.
But consider another problem:
i = 0;
int & j = i;
R = i ^ i++ ^ j;
What is the value of R?
If the i+=1 occurred before j, we'd have 0^0^1=1. But if the i+=1 occurred after the whole expression, we'd have 0^0^0=0.
In fact, R is zero. The i+=1 does not occur until after the expression has been evaluated.
Which I reckon is why:
i = 7, i++, i++; // i becomes 9 (valid)
is legal... It has three expressions:
i = 7
i++
i++
And in each case, the value of i is changed at the conclusion of each expression. (Before any subsequent expressions are evaluated.)
PS: Consider:
int foo(int theI) { SHOW(theI); SHOW(i); return theI; }
i = 0;
int & j = i;
R = i ^ i++ ^ foo(j);
In this case, i+=1 has to be evaluated before foo(j). theI is 1. And R is 0^0^1=1.
To build on MarkusQ's answer: ;)
Or rather, Bill's comment to it:
(Edit: Aw, the comment is gone again... Oh well)
They're allowed to be evaluated in parallel. Whether or not it happens in practice is technically speaking irrelevant.
You don't need thread parallelism for this to occur though, just evaluate the first step of both (take the value of i) before the second (increment i). Perfectly legal, and some compilers may consider it more efficient than fully evaluating one i++ before starting on the second.
In fact, I'd expect it to be a common optimization. Look at it from an instruction scheduling point of view. You have the following evaluations to perform:

1. Take the value of i for the right argument
2. Increment i in the right argument
3. Take the value of i for the left argument
4. Increment i in the left argument
But there's really no dependency between the left and the right argument. Argument evaluation happens in an unspecified order, and need not be done sequentially either (which is why a new-expression in a function argument can leak memory even when immediately wrapped in a smart pointer).
It's also undefined what happens when you modify the same variable twice in the same expression.
We do have a dependency between 1 and 2, however, and between 3 and 4.
So why would the compiler wait for 2 to complete before computing 3? That introduces added latency, and it'll take even longer than necessary before 4 becomes available.
Assuming there's a 1-cycle latency between each step, it'll take 3 cycles from the time step 1 completes until the result of step 4 is ready and we can call the function.
But if we reorder them and evaluate in the order 1, 3, 2, 4, we can do it in 2 cycles. Steps 1 and 3 can be started in the same cycle (or even merged into one instruction, since it's the same expression), and in the following cycle, steps 2 and 4 can be evaluated.
All modern CPUs can execute 3-4 instructions per cycle, and a good compiler should try to exploit that.
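As a sketch (my own pseudo-steps, not real compiler output), the 1, 3, 2, 4 schedule for foo(i++, i++) would behave like this, which also explains the foo(1, 1) outcome in MarkusQ's answer below:

int right = i;     // step 1: read i for the right argument
int left  = i;     // step 3: read i for the left argument (no dependency on step 2)
i = right + 1;     // step 2: the increment belonging to the right argument
i = left + 1;      // step 4: the increment belonging to the left argument
foo(left, right);  // both arguments received the original value of i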
Sutter's Guru of the Week #55 (and the corresponding piece in "More Exceptional C++") discusses this exact case as an example.
According to him, it is perfectly valid code, and in fact a case where trying to transform the statement into two lines:
items.erase(i); // invalidates i
i++;            // undefined behavior: increments an invalidated iterator
does not produce code that is semantically equivalent to the original statement.
To build on Bill the Lizard's answer:
int i = 1;
foo(i++, i++);
might also result in a function call of
foo(1, 1);
(meaning that the actual arguments are evaluated in parallel, and then the post-increments are applied).
-- MarkusQ