C++ constexpr conditional initialization [duplicate]

For example:
void foo()
{
    if constexpr (...)
        int x = 5;
    else
        double x = 10.0;
    bar(x); // calls different overloads of bar with different values
}
This is a common pattern in the D language, but I couldn't find any information about it for C++17.
Of course, it is possible to use something like
std::conditional<..., int, double>::type x;
but only in elementary cases. Even different initializers (as above) create a big problem.

There is no way this code could work. The problem is that x is out of scope when you are calling bar. But there is a workaround:
constexpr auto t = []() -> auto {
    if constexpr (/* condition */) return 1;
    else return 2.9;
}();
bar(t);
To explain a little: it uses an immediately invoked lambda expression along with automatic return type deduction. We give t its value in place, so it never goes out of scope.
Of course, this wouldn't work if the condition couldn't be evaluated at compile time. And if you want to do some runtime work inside the lambda you cannot declare t as constexpr, but the technique still works.
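Putting the workaround together, here is a minimal self-contained sketch; the bar overloads and the UseInt template parameter are illustrative assumptions standing in for the question's bar and condition:
#include <iostream>

void bar(int v)    { std::cout << "int: " << v << '\n'; }
void bar(double v) { std::cout << "double: " << v << '\n'; }

template <bool UseInt>   // stand-in for the question's compile-time condition
void foo() {
    // Immediately invoked lambda: the deduced type of t depends on the branch taken.
    constexpr auto t = []() {
        if constexpr (UseInt) return 5;      // t deduced as int
        else                  return 10.0;   // t deduced as double
    }();
    bar(t);  // picks the matching overload
}

int main() {
    foo<true>();   // prints "int: 5"
    foo<false>();  // prints "double: 10"
}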

There are two ways this won't work.
First, a variable is limited to the scope in which it's declared. Leaving out the braces isn't going to fool the compiler: int x = 5; is still in its own scope and disappears immediately after it appears.
Second, the relaxed grammar rules for if constexpr only apply within the if constexpr's body. It would be infeasible to allow the context created in the body to leak out to the surrounding scope, since by definition it may not be well-formed, or consistent between the then/else block. (What if the else-block declared x as a typename?)
Bottom line, you'll need to move bar() into the if-body, or template foo() itself and have the type and value of x determined by your ....
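For reference, a sketch combining both suggestions (bar() moved into each branch and foo() templated); Cond is a hypothetical parameter standing in for the question's ..., and the bar overloads are assumed as in the question:
void bar(int) {}      // overload set as in the question
void bar(double) {}

template <bool Cond>  // Cond stands in for the question's ... condition
void foo() {
    if constexpr (Cond) {
        int x = 5;
        bar(x);       // calls bar(int)
    } else {
        double x = 10.0;
        bar(x);       // calls bar(double)
    }
}

int main() {
    foo<true>();
    foo<false>();
}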

Related

lambda capture in C++17

[expr.prim.lambda.capture]/7:
If an expression potentially references a local entity within a scope in which it is odr-usable, and the expression would be potentially evaluated if the effect of any enclosing typeid expressions ([expr.typeid]) were ignored, the entity is said to be implicitly captured by each intervening lambda-expression with an associated capture-default that does not explicitly capture it. The implicit capture of *this is deprecated when the capture-default is =; see [depr.capture.this]. [Example 4:
void f(int, const int (&)[2] = {});   // #1
void test() {
    const int x = 17;
    auto g = [](auto a) {
        f(x);  // OK, calls #1, does not capture x
    };
    auto g1 = [=](auto a) {
        f(x);  // OK, calls #1, captures x
    };
}
... Within g1, an implementation can optimize away the capture of x as it is not odr-used. — end example]
So an entity is captured even if it is not odr-used by the lambda body, and the quoted example notes that implementations can optimize the capture away. Since implementations can optimize it away anyway, why add such a rule? What's the point of it?
cppreference says it was added in C++17. What's the original proposal?
P0588R0 contains the rationale explaining why the rules for implicit capture were changed in order to capture some variables that are not going to be odr-used anyway. It's very subtle.
Basically, in order to determine the size of a lambda closure type, you need to know which variables are captured and which ones are not, but under the old rules:
in order to determine whether a variable is captured implicitly, you have to know whether it's odr-used, and
in order to know whether the variable is odr-used, you have to perform substitution into the lambda's body, and
if the function call operator is generic, it means the above substitution is done at a point where the function call operator itself is not ready to be instantiated yet (since its own template parameters aren't yet known). This causes problems that could be avoided by not doing the substitution in the first place.
The paper gives the following example:
template<typename T> void f(T t) {
    auto lambda = [&](auto a) {
        if constexpr (Copyable<T>() && sizeof(a) == 32) {
            T u = t;
        } else {
            // ... do not use t ...
        }
    };
    // ...
}
When f is instantiated with a non-copyable type, ideally we would like the branch with T u = t; to be discarded; that's what the if constexpr is there for. But an if constexpr statement does not discard the untaken branch until the point at which the condition is no longer dependent, meaning that the type of a must be known before T u = t; can be discarded. Unfortunately, under the old rules, T must be substituted into T u = t; in order to determine whether t is captured, and this happens before any opportunity to discard the statement.
The new rules simply declare that t is captured; therefore, the compiler doesn't have to perform any substitution into the lambda body until the point at which a specialization of the function call operator is referenced (and thus, its body can be fully instantiated). And if the compiler is somehow able to prove that t will never be odr-used by any possible specialization of the function call operator, it's free to optimize it out.
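A minimal sketch of the practical effect, using std::is_copy_constructible_v as a stand-in for the paper's Copyable concept; how a pre-P0588 compiler diagnosed this varied in practice:
#include <type_traits>

struct NoCopy {
    NoCopy() = default;
    NoCopy(const NoCopy&) = delete;
};

template <typename T>
void f(T t) {
    auto lambda = [&](auto a) {
        if constexpr (std::is_copy_constructible_v<T> && sizeof(a) == 32) {
            T u = t;   // only ever instantiated for copyable T
            (void)u;
        }
    };
    (void)lambda;      // under the new rules t is simply treated as captured;
                       // no substitution into the lambda body is needed here
}

int main() {
    f(NoCopy{});       // OK: the copying branch is never instantiated
}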

Why does `auto` not adopt the constexpr'ness of its initializing expression?

Why doesn't defining a variable with the auto keyword carry over the constexpr'ness of the expression used to initialize it?
As an example, consider the following code:
#include <string_view>

constexpr std::string_view f() { return "hello"; }

static constexpr std::string_view g() {
    constexpr auto x = f(); // (*)
    return x.substr(1, 3);
}

int foo() { return g().length(); }
With GCC 10.2 and --std=c++20 -fsanitize=undefined -O3, this compiles into:
foo():
        mov eax, 3
        ret
But if we remove the constexpr on line (*), we would get a 27-line program with a bunch of pointers, a long string constant etc.
Notes:
I marked this question C++20, but I have no reason to believe this behavior is different from C++11's.
This question is not about the example; it is about the general behavior of auto w.r.t. constexpr'ness. The example simply shows that GCC does not treat x as constexpr if we don't explicitly tell it to.
auto is intended to enable type deduction; it is not a replacement for "everything useful you would have typed here". constexpr is not part of an expression's type, and is thus ignored by auto (as opposed to const and volatile, which are part of an expression's type and are deduced).
But if we remove the constexpr on line (*), we would get a 27-line program with a bunch of pointers, a long string constant etc.
That is a choice for your compiler. It has 100% of the information it needs to make that code go away. The fact that it didn't is not the C++ standard's concern.
This is a "quality of implementation" issue, not a standardization issue. If an implementation won't run as much of your code at compile-time as you desire, you can complain to them about it.
Remember: constexpr isn't meant to be a runtime optimization per se. It's meant to allow you to write things that you otherwise couldn't write, like std::get<g()>(some_tuple) or whatever. That code has to run at compile time, since it's being used in a template parameter.
I'm not asking about some kind of deep deduction, only about the case of the function explicitly being constexpr.
Let's forget for a moment that auto is for type deduction and that constexpr is not part of the type system. Let's focus instead on what would happen if auto were supposed to deduce constexpr anyway. So what you want is for auto to deduce constexpr only if <expr> is specifically a call to a function that is designated constexpr.
So let's look at some code:
auto x = constexpr_func();
auto y = constexpr_func() + 5;
auto z = constexpr_func() + constexpr_func();
auto w = constexpr_func_2() + constexpr_func_2();
Which of these variables are constexpr? If we had what you want, then x would be constexpr, but y would not. I personally would find this both surprising and annoying.
Worse, if we assume constexpr_func() returns an int, then z is also not constexpr. But if constexpr_func_2() returns a user-defined literal type that has a constexpr operator+, then w would be constexpr.
Isn't that all very weird? So I highly suspect that this is not what you really want.
What you really want is for auto x = <expr>; to deduce constexpr if constexpr auto x = <expr>; would be valid.
But really, that goes back to the original point. If you make a variable constexpr, that should mean you want it to be used in a place where being constexpr is required by some process. Given that fact, deducing constexpr makes no sense, because you should need it to be constexpr lest you get a compile error.
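To make the distinction concrete, a small sketch (the names are made up for illustration):
#include <array>

constexpr int constexpr_func() { return 3; }

int main() {
    auto x = constexpr_func();            // x is an ordinary int: the constexpr'ness
                                          // of the initializer is not deduced
    constexpr auto y = constexpr_func();  // y is usable in constant expressions

    // std::array<int, x> a{};            // error: x is not a constant expression
    std::array<int, y> b{};               // OK
    return x + static_cast<int>(b.size());
}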

Why design a language with unique anonymous types?

This is something that has always bugged me about C++ lambda expressions: the type of a C++ lambda expression is unique and anonymous; I simply cannot write it down. Even if I create two lambdas that are syntactically exactly the same, the resulting types are defined to be distinct. The consequence is that (a) lambdas can only be passed to template functions, which allow the compile-time, unspeakable type to be passed along with the object, and (b) lambdas are otherwise only usable once they are type-erased via std::function<>.
Ok, but that's just the way C++ does it, I was ready to write it off as just an irksome feature of that language. However, I just learned that Rust seemingly does the same: Each Rust function or lambda has a unique, anonymous type. And now I'm wondering: Why?
So, my question is this:
What is the advantage, from a language designer point of view, to introduce the concept of a unique, anonymous type into a language?
Many standards (especially C++) take the approach of minimizing how much they demand from compilers. Frankly, they demand enough already! If they don't have to specify something to make it work, they have a tendency to leave it implementation defined.
Were lambdas not anonymous, we would have to define their types. This would have to say a great deal about how variables are captured. Consider the case of a lambda [=](){...}. The type would have to specify which types actually got captured by the lambda, which could be non-trivial to determine. Also, what if the compiler successfully optimizes out a variable? Consider:
static const int i = 5;
auto f = [i]() { return i; };
An optimizing compiler could easily recognize that the only possible value of i that could be captured is 5, and replace this with auto f = []() { return 5; }. However, if the type is not anonymous, this could change the type or force the compiler to optimize less, storing i even though it didn't actually need it. This is a whole bag of complexity and nuance that simply isn't needed for what lambdas were intended to do.
And, on the off-case that you actually do need a non-anonymous type, you can always construct the closure class yourself, and work with a functor rather than a lambda function. Thus, they can make lambdas handle the 99% case, and leave you to code your own solution in the 1%.
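For instance, a hand-written closure type corresponding to the [i]() { return i; } lambda above might look like this; the name ReturnI is made up:
// A named closure type doing the same job as [i]() { return i; }.
// Unlike the lambda's type, this one can be spelled anywhere.
class ReturnI {
    int i;
public:
    explicit ReturnI(int i) : i(i) {}
    int operator()() const { return i; }
};

static const int i = 5;
ReturnI f{i};   // usable wherever the lambda object f was used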
Deduplicator pointed out in comments that I did not address uniqueness as much as anonymity. I am less certain of the benefits of uniqueness, but it is worth noting that the behavior of the following is clear if the types are unique (action will be instantiated twice).
int counter()
{
    static int count = 0;
    return count++;
}

template <typename FuncT>
void action(const FuncT& func)
{
    static int ct = counter();
    func(ct);
}

...

for (int i = 0; i < 5; i++)
    action([](int j) { std::cout << j << std::endl; });
for (int i = 0; i < 5; i++)
    action([](int j) { std::cout << j << std::endl; });
If the types were not unique, we would have to specify what behavior should happen in this case. That could be tricky. Some of the issues that were raised on the topic of anonymity also raise their ugly head in this case for uniqueness.
Lambdas are not just functions, they are a function and a state. Therefore both C++ and Rust implement them as an object with a call operator (operator() in C++, the 3 Fn* traits in Rust).
Basically, [a] { return a + 1; } in C++ desugars to something like
struct __SomeName {
    int a;
    int operator()() {
        return a + 1;
    }
};
then using an instance of __SomeName where the lambda is used.
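Concretely, using the hypothetical __SomeName from the sketch above:
int a = 1;
auto add_one = [a] { return a + 1; };  // the lambda expression...
__SomeName desugared{a};               // ...roughly corresponds to this object
int r1 = add_one();                    // 2
int r2 = desugared();                  // 2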
In Rust, || a + 1 will desugar to something like
{
    struct __SomeName {
        a: i32,
    }

    impl FnOnce<()> for __SomeName {
        type Output = i32;

        extern "rust-call" fn call_once(self, args: ()) -> Self::Output {
            self.a + 1
        }
    }

    // And FnMut and Fn when necessary

    __SomeName { a }
}
This means that most lambdas must have different types.
Now, there are a few ways we could do that:
With anonymous types, which is what both languages implement. A consequence is that all lambdas must have different types. But for language designers this has a clear advantage: lambdas can be described in terms of simpler, already existing parts of the language; they are just syntax sugar around existing bits of the language.
With some special syntax for naming lambda types. This is not necessary, however, since lambdas can already be used with templates in C++ or with generics and the Fn* traits in Rust. Neither language ever forces you to type-erase lambdas to use them (with std::function in C++ or Box<Fn*> in Rust).
Also note that both languages do agree that trivial lambdas that do not capture context can be converted to function pointers.
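For example, a minimal C++ sketch:
int (*fp)(int) = [](int x) { return x + 1; };       // OK: captureless lambda converts
// int y = 1;
// int (*fp2)(int) = [y](int x) { return x + y; };  // error: a capturing lambda does not

int main() { return fp(41); }  // returns 42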
Describing complex features of a language in terms of simpler features is pretty common. For example, both C++ and Rust have range-for loops, and both describe them as syntax sugar for other features.
C++ defines
for (auto&& [first, second] : mymap) {
    // use first and second
}
as being equivalent to
{
    init-statement
    auto && __range = range_expression ;
    auto __begin = begin_expr ;
    auto __end = end_expr ;
    for ( ; __begin != __end; ++__begin) {
        range_declaration = *__begin;
        loop_statement
    }
}
and Rust defines
for <pat> in <head> { <body> }
as being equivalent to
let result = match ::std::iter::IntoIterator::into_iter(<head>) {
    mut iter => {
        loop {
            let <pat> = match ::std::iter::Iterator::next(&mut iter) {
                ::std::option::Option::Some(val) => val,
                ::std::option::Option::None => break
            };
            SemiExpr(<body>);
        }
    }
};
which, while they seem more complicated to a human, are both simpler for a language designer or a compiler.
Cort Ammon's accepted answer is good, but I think there's one more important point to make about implementability.
Suppose I have two different translation units, "one.cpp" and "two.cpp".
// one.cpp
struct A { int operator()(int x) const { return x+1; } };
auto b = [](int x) { return x+1; };
using A1 = A;
using B1 = decltype(b);
extern void foo(A1);
extern void foo(B1);
The two overloads of foo use the same identifier (foo) but have different mangled names. (In the Itanium ABI used on POSIX-ish systems, the mangled names are _Z3foo1A and, in this particular case, _Z3fooN1bMUliE_E.)
// two.cpp
struct A { int operator()(int x) const { return x + 1; } };
auto b = [](int x) { return x + 1; };
using A2 = A;
using B2 = decltype(b);
void foo(A2) {}
void foo(B2) {}
The C++ compiler must ensure that the mangled name of void foo(A1) in "two.cpp" is the same as the mangled name of extern void foo(A2) in "one.cpp", so that we can link the two object files together. This is the physical meaning of two types being "the same type": it's essentially about ABI-compatibility between separately compiled object files.
The C++ compiler is not required to ensure that B1 and B2 are "the same type." (In fact, it's required to ensure that they're different types; but that's not as important right now.)
What physical mechanism does the compiler use to ensure that A1 and A2 are "the same type"?
It simply burrows through typedefs, and then looks at the fully qualified name of the type. It's a class type named A. (Well, ::A, since it's in the global namespace.) So it's the same type in both cases. That's easy to understand. More importantly, it's easy to implement. To see if two class types are the same type, you take their names and do a strcmp. To mangle a class type into a function's mangled name, you write the number of characters in its name, followed by those characters.
So, named types are easy to mangle.
What physical mechanism might the compiler use to ensure that B1 and B2 are "the same type," in a hypothetical world where C++ required them to be the same type?
Well, it couldn't use the name of the type, because the type doesn't have a name.
Maybe it could somehow encode the text of the body of the lambda. But that would be kind of awkward, because actually the b in "one.cpp" is subtly different from the b in "two.cpp": "one.cpp" has x+1 and "two.cpp" has x + 1. So we'd have to come up with a rule that says either that this whitespace difference doesn't matter, or that it does (making them different types after all), or that maybe it does (maybe the program's validity is implementation-defined, or maybe it's "ill-formed no diagnostic required"). Anyway, mangling lambda types the same way across multiple translation units is certainly a harder problem than mangling named types like A.
The easiest way out of the difficulty is simply to say that each lambda expression produces values of a unique type. Then two lambda types defined in different translation units definitely are not the same type. Within a single translation unit, we can "name" lambda types by just counting from the beginning of the source code:
auto a = [](){};  // a has type $_0
auto b = [](){};  // b has type $_1

auto f(int x) {
    return [x](int y) { return x+y; };  // f(1) and f(2) both have type $_2
}

auto g(float x) {
    return [x](int y) { return x+y; };  // g(1) and g(2) both have type $_3
}
Of course these names have meaning only within this translation unit. This TU's $_0 is always a different type from some other TU's $_0, even though this TU's struct A is always the same type as some other TU's struct A.
By the way, notice that our "encode the text of the lambda" idea had another subtle problem: lambdas $_2 and $_3 consist of exactly the same text, but they should clearly not be considered the same type!
By the way, C++ does require the compiler to know how to mangle the text of an arbitrary C++ expression, as in
template<class T> void foo(decltype(T())) {}
template void foo<int>(int); // _Z3fooIiEvDTcvT__EE, not _Z3fooIiEvT_
But C++ doesn't (yet) require the compiler to know how to mangle an arbitrary C++ statement. decltype([](){ ...arbitrary statements... }) is still ill-formed even in C++20.
Also notice that it's easy to give a local alias to an unnamed type using typedef/using. I have a feeling that your question might have arisen from trying to do something that could be solved like this.
#include <cassert>

auto f(int x) {
    return [x](int y) { return x+y; };
}

// Give the type an alias, so I can refer to it within this translation unit
using AdderLambda = decltype(f(0));

int of_one(AdderLambda g) { return g(1); }

int main() {
    auto f1 = f(1);
    assert(of_one(f1) == 2);
    auto f42 = f(42);
    assert(of_one(f42) == 43);
}
EDITED TO ADD: From reading some of your comments on other answers, it sounds like you're wondering why
#include <type_traits>

int add1(int x) { return x + 1; }
int add2(int x) { return x + 2; }
static_assert(std::is_same_v<decltype(add1), decltype(add2)>);

auto add3 = [](int x) { return x + 3; };
auto add4 = [](int x) { return x + 4; };
static_assert(not std::is_same_v<decltype(add3), decltype(add4)>);
That's because captureless lambdas are default-constructible. (In C++ only as of C++20, but it's always been conceptually true.)
template<class T>
int default_construct_and_call(int x) {
    T t;
    return t(x);
}
assert(default_construct_and_call<decltype(add3)>(42) == 45);
assert(default_construct_and_call<decltype(add4)>(42) == 46);
If you tried default_construct_and_call<decltype(&add1)>, t would be a default-initialized function pointer and you'd probably segfault. That's, like, not useful.
(Adding to Caleth's answer, but too long to fit in a comment.)
The lambda expression is just syntactic sugar for an anonymous struct (a Voldemort type, because you can't say its name).
You can see the similarity between an anonymous struct and the anonymity of a lambda in this code snippet:
#include <iostream>
#include <typeinfo>
using std::cout;

int main() {
    struct { int x; } foo{5};
    struct { int x; } bar{6};
    cout << foo.x << " " << bar.x << "\n";
    cout << typeid(foo).name() << "\n";
    cout << typeid(bar).name() << "\n";

    auto baz = [x = 7]() mutable -> int& { return x; };
    auto quux = [x = 8]() mutable -> int& { return x; };
    cout << baz() << " " << quux() << "\n";
    cout << typeid(baz).name() << "\n";
    cout << typeid(quux).name() << "\n";
}
If that is still unsatisfying for a lambda, it should be likewise unsatisfying for an anonymous struct.
Some languages allow a kind of duck typing that is a little more flexible. And even though C++ has templates, that doesn't really help when you want a template-generated object with a member field that can hold a lambda directly rather than through a std::function wrapper.
Why design a language with unique anonymous types?
Because there are cases where names are irrelevant, not useful, or even counter-productive. In this case, the ability to abstract away their existence is useful because it reduces name pollution and sidesteps one of the two hard problems in computer science (naming things). For the same reason, temporary objects are useful.
The uniqueness is not special to lambdas, or even to anonymous types; it applies to named types in the language as well. Consider the following:
struct A {
    void operator()() {}
};
struct B {
    void operator()() {}
};

void foo(A);
Note that I cannot pass B into foo, even though the classes are identical. This same property applies to unnamed types.
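To see it fail, a self-contained version of the snippet above:
struct A { void operator()() {} };
struct B { void operator()() {} };

void foo(A) {}

int main() {
    foo(A{});     // OK
    // foo(B{});  // error: no matching function for call to 'foo(B)'
}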
lambdas can only be passed to template functions that allow the compile time, unspeakable type to be passed along with the object ... erased via std::function<>.
There's a third option for a subset of lambdas: Non-capturing lambdas can be converted to function pointers.
Note that if the limitations of an anonymous type are a problem for a use case, then the solution is simple: A named type can be used instead. Lambdas don't do anything that cannot be done with a named class.
C++ lambdas need distinct types for distinct operations, as C++ binds statically. They are only copy/move-constructible, so mostly you don't need to name their type. But that's all somewhat of an implementation detail.
I'm not sure whether C# lambdas have a type, as they are "anonymous function expressions" and they immediately get converted to a compatible delegate type or expression tree type. If they do, it's probably an unpronounceable type.
C++ also has anonymous structs, where each definition leads to a unique type. Here the name isn't unpronounceable; it simply doesn't exist as far as the standard is concerned.
C# has anonymous data types, which it carefully forbids from escaping the scope in which they are defined. The implementation gives those a unique, unpronounceable name too.
Having an anonymous type signals to the programmer that they shouldn't poke around inside their implementation.
Aside:
You can give a name to a lambda's type.
auto foo = []{};
using Foo_t = decltype(foo);
If you don't have any captures, you can use a function pointer type
void (*pfoo)() = foo;
Why use anonymous types?
For types that are automatically generated by the compiler, the choice is to either (1) honor a user's request for the name of the type, or (2) let the compiler choose one on its own.
In the former case, the user is expected to explicitly provide a name each time such a construct appears (C++/Rust: whenever a lambda is defined; Rust: whenever a function is defined). This is a tedious detail for the user to provide each time, and in the majority of cases the name is never referred to again. Thus it makes sense to let the compiler figure out a name automatically and to use existing features such as decltype or type inference to refer to the type in the few places where it is needed.
In the latter case, the compiler needs to choose a unique name for the type, which would probably be an obscure, unreadable name such as __namespace1_module1_func1_AnonymousFunction042. The language designer could specify precisely how this name is constructed in glorious and delicate detail, but this needlessly exposes an implementation detail to the user, one that no sensible user could rely upon, since the name would no doubt be brittle in the face of even minor refactors. This also unnecessarily constrains the evolution of the language: future feature additions may cause the existing name-generation algorithm to change, leading to backward-compatibility issues. Thus, it makes sense to simply omit this detail and assert that the auto-generated type is unutterable by the user.
Why use unique (distinct) types?
If a value has a unique type, then an optimizing compiler can track it across all of its use sites with guaranteed fidelity. As a corollary, the user can be certain of the places where the provenance of this particular value is fully known to the compiler.
As an example, the moment the compiler sees:
let f: __UniqueFunc042 = || { ... }; // definition of __UniqueFunc042 (assume it has a nontrivial closure)
/* ... intervening code */
let g: __UniqueFunc042 = /* some expression */;
g();
the compiler has full confidence that g must necessarily originate from f, without even knowing the provenance of g. This would allow the call to g to be devirtualized. The user would know this too, since the user has taken great care to preserve the unique type of f through the flow of data that led to g.
Necessarily, this constrains what the user can do with f. The user is not at liberty to write:
let q = if some_condition { f } else { || {} }; // ERROR: type mismatch
as that would lead to the (illegal) unification of two distinct types.
To work around this, the user could upcast the __UniqueFunc042 to the non-unique type &dyn Fn(),
let f2 = &f as &dyn Fn(); // upcast
let q2 = if some_condition { f2 } else { &|| {} }; // OK
The trade-off made by this type erasure is that uses of &dyn Fn() complicate the reasoning for the compiler. Given:
let g2: &dyn Fn() = /*expression */;
the compiler has to painstakingly examine the /* expression */ to determine whether g2 originates from f or from some other function(s), and under which conditions that provenance holds. In many circumstances the compiler may give up: perhaps a human could tell that g2 really comes from f in all situations, but the path from f to g2 is too convoluted for the compiler to decipher, resulting in a virtual call to g2 with pessimistic performance.
This becomes more evident when such objects are delivered to generic (template) functions:
fn h<F: Fn()>(f: F);
If one calls h(f) where f: __UniqueFunc042, then h is specialized to a unique instance:
h::<__UniqueFunc042>(f);
This enables the compiler to generate specialized code for h, tailored for the particular argument of f, and the dispatch to f is quite likely to be static, if not inlined.
In the opposite scenario, where one calls h(f2) with f2: &Fn(), h is instantiated as
h::<&Fn()>(f2);
which is shared among all functions of type &Fn(). From within h, the compiler knows very little about an opaque function of type &Fn() and so could only conservatively call f with a virtual dispatch. To dispatch statically, the compiler would have to inline the call to h::<&Fn()>(f) at its call site, which is not guaranteed if h is too complex.
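The same trade-off can be sketched in C++, where a function template plays the role of the generic h and std::function plays the role of the erased &dyn Fn(); the names here are illustrative:
#include <functional>
#include <iostream>

// Generic version: instantiated once per closure type, so the call to f
// can be dispatched statically and typically inlined.
template <class F>
void h_template(F f) { f(); }

// Type-erased version: one shared function; the call goes through
// std::function's indirection.
void h_erased(const std::function<void()>& f) { f(); }

int main() {
    int captured = 42;
    auto lam = [captured] { std::cout << captured << '\n'; };
    h_template(lam);  // specialized for this closure's unique type
    h_erased(lam);    // the unique type is erased to std::function<void()>
}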
First, lambdas without captures are convertible to function pointers, so they provide some form of genericity.
Now, why are lambdas with captures not convertible to function pointers? Because the function would need to access the state of the lambda, so that state would have to appear as a function argument.
To avoid name collisions with user code.
Even two lambdas with the same implementation will have different types. That is okay: two objects can have different types even if their memory layout is identical.

Rules for constexpr functions

In the following example:
// Case 1
constexpr int doSomethingMore(int x)
{
    return x + 1;
}

// Case 2
constexpr int doSomething(int x)
{
    return ++x;
}

int main()
{}
Output:
prog.cpp: In function ‘constexpr int doSomething(int)’:
prog.cpp:12:1: error: expression ‘++ x’ is not a constant-expression
Why is Case 1 allowed but Case 2 is not allowed?
Case 1 doesn't modify anything, case 2 modifies a variable. Seems pretty obvious to me!
Modifying a variable requires it not to be constant; you need mutable state, and the expression ++x modifies that state. Since a constexpr function can be evaluated at compile time, there isn't really any "variable" there to modify, because no code is executing; we're not at run time yet.
As others have said, C++14 allows constexpr functions to modify their local variables, which allows more interesting things like for loops, as sketched below. There still isn't really a "variable" there, so the compiler is required to act as a simplified interpreter at compile time and allow limited forms of local state to be manipulated during translation. That's quite a significant change from the far more limited C++11 rules.
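For example, this is valid under the C++14 rules (a minimal sketch):
// Valid in C++14 and later: a constexpr function with a local variable,
// a loop, and mutation of local state, all evaluated during translation.
constexpr int sum_up_to(int n)
{
    int total = 0;
    for (int i = 1; i <= n; ++i)
        total += i;
    return total;
}

static_assert(sum_up_to(4) == 10, "evaluated at compile time");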
Your argument is valid in that, by the spirit of constexpr, x + 1 and ++x are the same, since x is a variable local to the function; hence there should be no error in either case.
This is fixed in C++14: the same code compiles fine under C++14.
Constant expressions are defined in the last few pages of clause 5.
As a rough description, they are side-effect-free expressions that can be evaluated at compile-time (during translation). The rules surrounding them are created with this principle in mind.