While working on an RAII-style guard object, I ended up encoding some of the guard state in a template parameter. This seems reasonable if, for instance, you want a recursive / nested guard object that's aware of how many levels deep it is but without space overhead (being pedantic, I know) or to eliminate some runtime overheads. This turned into an academic curiosity though...
Something like this would be an example:
template <unsigned depth>
class guard {
public:
    unsigned get_depth() const { return depth; }
};

guard<2> g2;
std::cout << reinterpret_cast<guard<5>*>(&g2)->get_depth(); // works? crazy? useful?
I cannot for the life of me think of a legitimate reason to do this, but it got me wondering whether this is legal C++, how the compiler ought to handle something like this (if it can at all), or whether it's just silly through and through.
I assume that, because the cast target needs to be known at compile time, the relevant template is instantiated for the cast. Has anyone found something like this useful, assuming it works at all, and if so, where could it be utilized?
The general question, I guess, is: can reinterpret_cast alter constant template parameters? If so, is this just a type hack (for want of a better term), so that g2 in this case would always return 2 (after casting)? Or should it return 5 (after casting)?
This is undefined but not because of the strict aliasing rule. The call to get_depth neither reads nor modifies the value of any object (a template non-type parameter isn't an object), so it doesn't access (as defined in [defns.access]) anything within the meaning of the strict aliasing rule.
This is controlled instead by [class.mfct.non-static]/2:
If a non-static member function of a class X is called for an object that is not of type X, or of a type derived from X, the behavior is undefined.
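If the goal is just to read the depth without paying for storage, note that the value is part of the type and can be recovered without any cast. A minimal sketch, with an illustrative static member added to the question's class (not part of the original code):

#include <iostream>

template <unsigned depth>
class guard {
public:
    static constexpr unsigned value = depth;      // illustrative addition
    unsigned get_depth() const { return depth; }
};

int main() {
    guard<2> g2;
    std::cout << g2.get_depth() << '\n';   // 2: the only well-defined way to ask g2
    std::cout << guard<5>::value << '\n';  // 5: a distinct type; no object or cast needed
}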
Related
Example:
std::function<std::monostate()> convert(std::function<void()> func) {
    return *reinterpret_cast<std::function<std::monostate()>*>(&func);
}
Are std::function<void()> and std::function<std::monostate()> considered "similar" enough for reinterpret_cast to be safe?
Edit: someone asked me to clarify what I am asking. I am not asking if the general case of foo<X> and foo<Y> are similar but whether foo<void> and foo<std::monostate> are.
No, this is unsafe and leads to undefined behavior. In particular, there's no guarantee that the two layouts will be compatible. Of course, you might get away with it with some compiler and runtime combinations, but then it might break if some future release of your compiler decides to implement certain forms of control flow integrity.
The safe way to do what you want, albeit at a small cost in performance, is just to return a new lambda, as in:
std::function<std::monostate()> convert(std::function<void()> func) {
    return [func = std::move(func)]() -> std::monostate { func(); return {}; };
}
Are std::function<void()> and std::function<std::monostate()> considered "similar" enough for reinterpret_cast to be safe?
No. Given a template foo and distinct types X and Y, the instantiations foo<X> and foo<Y> are not similar, regardless of any perceived relationship between X and Y (as long as they are not the same type, which is why they were qualified as "distinct"). Different template instantiations are unrelated unless documented otherwise. There is no such documentation for std::function.
The rules for "similar" make allowances for digging into pointer types, but there is nothing special for templates. (Nor could there be, since an explicit specialization can look radically different from the primary template.) Different types as template arguments yield dissimilar templated classes; there is no need to dig deeper into those arguments.
I am not asking if the general case of foo<X> and foo<Y> are similar but whether foo<void> and foo<std::monostate> are.
There is nothing special about void and std::monostate that would make them two names for the same type. (In fact, they cannot be the same type, as the former has zero values, while the latter has exactly one value.) So, asking about foo<void> and foo<std::monostate> is the same as asking about the general case, just with a greater possibility of seeing connections that do not exist.
Also, the question is not about foo<void> and foo<std::monostate> but about foo<void()> and foo<std::monostate()>. The types used as template arguments are function types, not object types. Function types are very particular in that two function types are the same only when all of their parameter and return types are exact matches; none of the conversions allowed when invoking a function are considered. (Not that there is a conversion from void to std::monostate.) The function types are different, so again the templates instantiated from those types are not similar.
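A small sketch of that last point (the aliases F1, F2, F3 are illustrative): the three names below denote three distinct, unrelated function types.

#include <type_traits>
#include <variant>  // std::monostate

using F1 = void();
using F2 = std::monostate();
using F3 = void(int);

static_assert(!std::is_same_v<F1, F2>, "return types differ, so the function types differ");
static_assert(!std::is_same_v<F1, F3>, "parameter lists differ, so the function types differ");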
Perhaps a more focused version of this question would have asked about function pointers instead of std::function objects.
(from a comment:) I was looking at the assembly code of std::monostate() functions and void() functions and they generate the same assembly verbatim.
Generated assembly means nothing as far as the language is concerned. At best, you have evidence that with your compiler, it seems likely that you could get away with invoking a function pointer after casting it from void (*)() to std::monostate (*)(). Not "safe" so much as "works for now". And that assumes that you use the function pointer directly instead of burying it inside a std::function (a complex adapter of types).
C++ is a strongly typed language. Different types are different even if they are treated the same at the level of assembly code. This might be more readily apparent if we switch to more familiar types. On many common systems, char is signed, making it equivalent to signed char at the assembly code level. However, this does not affect the similarity of functions. The following code is illegal, even if changing char to signed char has no effect on the assembly code generated for foo().
char foo() { return 'c'; }

int main()
{
    signed char (*fun)() = foo; // <-- Error: invalid conversion
    // ^^^^^^ -- because the return type is signed char, not char
}
One can make this error go away with a reinterpret_cast; after all, it is legal to cast a function pointer to any other function pointer type. However, it is not safe to invoke the function through the cast pointer (unless it is first cast back to its original type). Invoking it might work very reliably on your system, but that is due to your system, not the language. When you ask about "safe", you are asking for guidance from the language specs, not merely about what will probably work on your system.
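A minimal sketch of those function-pointer rules, assuming the pointer is only ever invoked through its original type:

#include <cstdio>

void greet() { std::puts("hello"); }

int main()
{
    using other_fn = int (*)();                    // a deliberately mismatched signature
    auto p = reinterpret_cast<other_fn>(&greet);   // legal: the cast itself is well-formed
    // p();                                        // undefined behavior: wrong type for the call
    auto q = reinterpret_cast<void (*)()>(p);      // legal: cast back to the original type
    q();                                           // well-defined: calls greet()
}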
C++11 allows functions declared with the constexpr specifier to be used in constant expressions such as template arguments. There are stringent requirements about what is allowed to be constexpr; in C++11, essentially, the body of such a function may contain little more than a single return statement. (Edit: this is relaxed in C++14, but the question stands.)
Why require the keyword at all? What is gained?
It does help to reveal the intent of an interface, but it doesn't validate that intent by guaranteeing that the function is actually usable in constant expressions. After writing a constexpr function, a programmer must still:
Write a test case or otherwise ensure it's actually used in a constant expression.
Document what parameter values are valid in a constant expression context.
Contrary to revealing intent, decorating functions with constexpr may add a false sense of security since tangential syntactic constraints are checked while ignoring the central semantic constraint.
In short: Would there be any undesirable effect on the language if constexpr in function declarations were merely optional? Or would there be any effect at all on any valid program?
Preventing client code from expecting more than you're promising
Say I'm writing a library and have a function in there that currently returns a constant:
awesome_lib.hpp:
inline int f() { return 4; }
If constexpr wasn't required, you - as the author of client code - might go away and do something like this:
client_app.cpp:
#include <awesome_lib.hpp>
#include <array>
std::array<int, f()> my_array; // needs CT template arg
int my_c_array[f()]; // needs CT array dimension
Then, should I change f() to, say, return a value from a config file, your client code would break, but I'd have no idea that I'd risked breaking your code. Indeed, it might only be when you have some production issue and go to recompile that you discover this additional issue frustrating your rebuild.
By changing only the implementation of f(), I'd have effectively changed the usage that could be made of the interface.
Instead, C++11 onwards provides constexpr so I can denote that client code can reasonably expect the function to remain constexpr, and use it as such; I'm aware of, and endorsing, such usage as part of my interface. Just as in C++03, the compiler continues to guarantee that client code isn't built to depend on non-constexpr functions, preventing the "unwanted/unknown dependency" scenario above; that's more than documentation - it's compile-time enforcement.
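For illustration, here is what the opted-in version of the hypothetical f() might look like, folded into a single translation unit with the client code from above:

#include <array>

constexpr int f() { return 4; }   // the library function, now explicitly opted in

static_assert(f() == 4, "f() is usable in constant expressions");
std::array<int, f()> my_array;    // OK: f() is a constant expression
int my_c_array[f()];              // OK: constant array dimension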
It's noteworthy that this continues the C++ trend of offering better alternatives for traditional uses of preprocessor macros (consider #define F 4, and how the client programmer knows whether the lib programmer considers it fair game to change to say #define F config["f"]), with their well-known "evils" such as being outside the language's namespace/class scoping system.
Why isn't there a diagnostic for "obviously" never-const functions?
I think the confusion here is due to constexpr not proactively ensuring there is any set of arguments for which the result is actually compile-time const: rather, it requires the programmer to take responsibility for that (otherwise §7.1.5/5 in the Standard deems the program ill-formed but doesn't require the compiler to issue a diagnostic). Yes, that's unfortunate, but it doesn't remove the above utility of constexpr.
So, perhaps it's helpful to switch from the question "what's the point of constexpr" to consider "why can I compile a constexpr function that can never actually return a const value?".
Answer: because there'd be a need for exhaustive branch analysis that could involve any number of combinations. It could be excessively costly in compile time and/or memory - even beyond the capability of any imaginable hardware - to diagnose. Further, even when it is practical, having to diagnose such cases accurately is a whole new can of worms for compiler writers (who have better uses for their time). There would also be implications for the program, such as the definitions of functions called from within the constexpr function needing to be visible when the validation was performed (and the functions those functions call, and so on).
Meanwhile, lack of constexpr continues to forbid use as a const value: the strictness is on the sans-constexpr side. That's useful as illustrated above.
Comparison with non-`const` member functions
lack of constexpr prevents int x[f()], just as lack of const prevents const X x; x.f(); - both ensure client code doesn't hardcode an unwanted dependency (see the sketch below)
in both cases, you wouldn't want the compiler to determine const[expr]-ness automatically:
you wouldn't want client code to call a member function on a const object when you can already anticipate that function will evolve to modify the observable value, breaking the client code
you wouldn't want a value used as a template parameter or array dimension if you already anticipated it later being determined at runtime
they differ in that the compiler enforces const use of other members within a const member function, but does not enforce a compile-time constant result with constexpr (due to practical compiler limitations)
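A minimal sketch of the parallel (all names here are illustrative):

struct X {
    int observed() const { return 42; }  // const: callable on a const X
};

constexpr int dimension() { return 8; }  // constexpr: usable as an array bound

const X x{};
int n = x.observed();     // valid only because observed() is declared const
int buffer[dimension()];  // valid only because dimension() is declared constexpr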
When I pressed Richard Smith, a Clang author, he explained:
The constexpr keyword does have utility.
It affects when a function template specialization is instantiated (constexpr function template specializations may need to be instantiated if they're called in unevaluated contexts; the same is not true for non-constexpr functions since a call to one can never be part of a constant expression). If we removed the meaning of the keyword, we'd have to instantiate a bunch more specializations early, just in case the call happens to be a constant expression.
It reduces compilation time, by limiting the set of function calls that implementations are required to try evaluating during translation. (This matters for contexts where implementations are required to try constant expression evaluation, but it's not an error if such evaluation fails -- in particular, the initializers of objects of static storage duration.)
This all didn't seem convincing at first, but if you work through the details, things do unravel without constexpr. A function need not be instantiated until it is ODR-used, which essentially means used at runtime. What is special about constexpr functions is that they can violate this rule and require instantiation anyway.
Function instantiation is a recursive procedure. Instantiating a function results in instantiation of the functions and classes it uses, regardless of the arguments to any particular call.
If something went wrong while instantiating this dependency tree (potentially at significant expense), it would be difficult to swallow the error. Furthermore, class template instantiation can have runtime side-effects.
Given an argument-dependent compile-time function call in a function signature, overload resolution may incur instantiation of function definitions merely auxiliary to the ones in the overload set, including the functions that don't even get called. Such instantiations may have side effects including ill-formedness and runtime behavior.
It's a corner case to be sure, but bad things can happen if you don't require people to opt-in to constexpr functions.
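To make the second of those points concrete, here is a small hypothetical sketch; the initializer of an object with static storage duration is one of the contexts where the implementation must attempt constant evaluation, and for a constexpr call that attempt forces instantiation during translation:

template <typename T>
constexpr T square(T t) { return t * t; }

// `global` has static storage duration, so the implementation must work out at
// translation time whether its initializer is a constant expression (constant
// initialization vs. dynamic initialization). Because square is constexpr, that
// check requires instantiating and evaluating square<int>; for a non-constexpr
// function the compiler could skip the attempt, since such a call can never be
// a constant expression.
int global = square(3);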
We can live without constexpr, but in certain cases it makes the code easier and more intuitive.
For example, suppose we have a class which declares an array whose length is given by a template parameter:
template<typename T, size_t SIZE>
struct MyArray
{
    T a[SIZE];
};
Conventionally you might declare MyArray as:
int a1[100];
MyArray<std::remove_reference_t<decltype(*a1)>,
        sizeof(a1) / sizeof(a1[0])> obj;  // decltype(*a1) alone is int&, which won't do as an element type
Now see how it goes with constexpr:
template<typename T, size_t SIZE>
constexpr size_t getSize(const T (&a)[SIZE]) { return SIZE; }

int a1[100];
MyArray<std::remove_reference_t<decltype(*a1)>, getSize(a1)> obj;
In short, a call such as getSize(a1) can be used as a template argument only if the compiler recognizes the function as constexpr.
constexpr can also be used the other way round, to enforce that a given value really is a compile-time constant (which const alone does not do). For example:
int i = 5;
const int j = i;     // ok, but j is not a compile-time constant
constexpr int k = i; // error: i is not a constant expression
Without the keyword, the compiler could not diagnose such mistakes; it would have no way to tell you that a function fails to satisfy the constexpr requirements. Although you said this provides a "false sense of security", I believe it is better to pick up these errors as early as possible.
I am increasingly finding scoped enums unwieldy to use. I am trying to write a set of function overloads including a template for scoped enums that sets/initializes a value by reference--something like this:
void set_value(int& val);
void set_value(double& val);

template <typename ENUM>
void set_value(ENUM& val);
However, I don't quite see how to write the templated version of set_value without introducing multiple temporary values:
template <typename ENUM>
void set_value(ENUM& val)
{
    std::underlying_type_t<ENUM> raw_val;
    set_value(raw_val); // Calls the appropriate "primitive" overload
    val = static_cast<ENUM>(raw_val);
}
I believe the static_cast introduces a second temporary value in addition to raw_val. I suppose it's possible that one or both of these could be optimized away by the compiler, and in any case it shouldn't really make much difference in terms of performance since the set_value call will also generate temporary values (assuming it's not inlined), but this still seems inelegant. What I would like to do would be something like this:
template <typename ENUM>
void set_value(ENUM& val)
{
    set_value(static_cast<std::underlying_type_t<ENUM>&>(val));
}
... but this isn't valid (nor is the corresponding code using pointers directly instead of references) because scoped enums aren't related to their underlying primitives via inheritance.
I could use reinterpret_cast, which, from some preliminary testing, appears to work (and I can't think of any reason why it wouldn't work), but that seems to be frowned upon in C++.
Is there a "standard" way to do this?
I could use reinterpret_cast, which, from some preliminary testing, appears to work (and I can't think of any reason why it wouldn't work), but that seems to be frowned upon in C++.
Indeed, that reinterpret_cast invokes undefined behavior by violating the strict aliasing rule.
Eliminating a single mov instruction (or otherwise, more or less, copying a register's worth of data) is premature micro-optimization. The compiler is likely to be able to take care of it.
If performance is really important, then follow the optimization process: profile, disassemble, understand the compiler's interpretation, and work together with it within the defined rules.
At a glance, you (and the compiler) might have an easier time with functions like T get_value() instead of void set_value(T&). The flow of data and initialization makes more sense, although template argument deduction is lost. You can regain the deduction through tag types, if that's really important.
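A minimal sketch of that shape, with an illustrative tag type to keep the deduction; the primitive overloads here are stand-ins, not the question's real functions:

#include <type_traits>

template <typename T> struct tag {};

int    get_value(tag<int>)    { return 42;  }  // stand-in for the int "primitive" overload
double get_value(tag<double>) { return 3.5; }  // stand-in for the double overload

template <typename ENUM>
ENUM get_value(tag<ENUM>)
{
    using U = std::underlying_type_t<ENUM>;
    return static_cast<ENUM>(get_value(tag<U>{}));  // one conversion, no aliasing tricks
}

enum class Color : int { red, green, blue };

int main()
{
    Color c = get_value(tag<Color>{});  // the tag selects the overload and carries the type
    (void)c;
}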