I have two enum class types: Type and SocketType. The following code won't compile and fails with the message mentioned in the question, in VC++ 2017:
static constexpr std::map<Type, SocketType> PacketTypeMap =
{
    {Type::JUSTJOINED,     SocketType::TCP},
    {Type::CHAT_MESSAGE,   SocketType::TCP},
    {Type::REQUEST_WORLD,  SocketType::TCP},
    {Type::DATA_WORLD,     SocketType::TCP},
    {Type::DATA_PLAYER,    SocketType::UDP},
    {Type::RESPAWN_PLAYER, SocketType::TCP}
};
Been trying some variations and nothing works, but I'm sure I'm just missing something simple with the syntax.
std::map is not compatible with constexpr. There exists an experimental(?) library called frozen, which provides a constexpr-compatible frozen::map (besides frozen::unordered_map, frozen::string, and others).
However, most probably you just want to pick a simpler solution (e.g., a switch statement in a constexpr function).
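For illustration, a minimal sketch of that switch-based alternative (C++14 or later, since it uses a switch in a constexpr function), built from the enumerators shown in the question; the function name socketTypeFor and the exact enum definitions are made up:

enum class Type { JUSTJOINED, CHAT_MESSAGE, REQUEST_WORLD, DATA_WORLD, DATA_PLAYER, RESPAWN_PLAYER };
enum class SocketType { TCP, UDP };

// Compile-time lookup without std::map: a switch inside a constexpr function.
constexpr SocketType socketTypeFor(Type t)
{
    switch (t) {
    case Type::DATA_PLAYER: return SocketType::UDP;   // the only UDP entry in the question
    default:                return SocketType::TCP;
    }
}

static_assert(socketTypeFor(Type::DATA_PLAYER) == SocketType::UDP, "usable in constant expressions");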
Migrating the answer from the comments section into the answer section.
There are no constexpr maps. std::map uses dynamic allocation, which is not possible with constexpr. Get rid of constexpr, or use a different container as a compile-time map.
Take the following code:
template <typename T, typename U>
constexpr bool can_represent(U&& w) noexcept
{
    return [] (auto&& x) {
        try {
            return T(std::forward<U>(x)) == std::forward<U>(x);
        } catch (...) {
            return false;
        }
    } (std::forward<U>(w));
}
I am using this function in a constant expression (template).
gcc compiles it without a problem. clang and MSVC don't, lamenting that the function does not result in a constant expression.
Indeed, gcc did not immediately accept this either; it was getting hung up on the try, which normally wouldn't be allowed in a constexpr function. That's why I had to use an immediately invoked lambda expression. However, now it works, and considering it only works with gcc I'm quite confused.
Which compiler is correct?
Is there a property of the lambda that permits this to work in a constexpr context, or is this some kind of non-standard gcc extension?
[I've used godbolt to compile with clang and MSVC, whereas I have gcc 8.1.0 on my machine]
[gcc] was getting hung up on the try, which normally wouldn't be allowed in a constexpr function.
This is correct for a C++17 program. (C++20 relaxed this, so a try block can now be used in a constexpr function. However, it is only the try that is allowed; it is not allowed for execution to hit something that throws an exception.)
That's why I had to use an immediately invoked lambda expression.
The implication here is that your approach made your code valid. This is incorrect. Using an immediately invoked lambda did not work around the problem; it swept the problem under the rug. The try is still a problem, but now compilers do not have to tell you it is a problem.
Using a lambda switches the constexpr criterion from the straightforward "the function body must not contain a try-block" to the indirect "there exists at least one set of argument values such that an invocation of the function could be an evaluated subexpression of a core constant expression". The tricky part here is that a violation of the latter criterion is "no diagnostic required", meaning that all the compilers are correct, whether or not they complain about this code. Hence my characterization of this as sweeping the problem under the rug.
So why does the code fall foul of that (admittedly long-winded) criterion? What's the issue involving "core constant expressions"? C++17 removed the prohibition against lambdas in core constant expressions, so that much looks good. However, there is still a requirement that all function calls within the constexpr function also be themselves constexpr. Lambdas can become constexpr in two ways. First, they can be explicitly marked constexpr (but if you do that here, the complaint about the try block should come back). Second, they can simply satisfy the constexpr function requirements. However, your lambda contains a try, so it is not constexpr (in C++17).
Your lambda is not a valid constexpr function. Hence calling it is not allowed in a core constant expression. There is no execution path through can_represent() that avoids invoking your lambda. Therefore, can_represent is not a valid constexpr function, no diagnostic required.
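For what it's worth, a minimal C++17-friendly sketch is to drop the try/catch entirely and keep only the round-trip comparison. This changes the semantics of the original function for throwing conversions, so it assumes the conversion itself never throws for the values you test:

template <typename T, typename U>
constexpr bool can_represent(U x) noexcept
{
    // Round-trip check: narrowing shows up as the values comparing unequal.
    return T(x) == x;
}

static_assert(can_represent<unsigned char>(200));    // 200 fits in unsigned char
static_assert(!can_represent<unsigned char>(1000));  // 1000 truncates to 232, so it does not fit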
I'm having trouble assigning an element in an enum a max value. First:
protected:
enum {DEFAULT_PADDING=std::numeric_limits<enum>::max()};
Results in:
./basecode.h:30:51: error: expected identifier or '{'
enum {DEFAULT_PADDING=std::numeric_limits<enum>::max()};
^
./basecode.h:30:59: error: expected a type
enum {DEFAULT_PADDING=std::numeric_limits<enum>::max()};
(and a couple of others)
Second, switching to:
protected:
enum {DEFAULT_PADDING=std::numeric_limits<unsigned int>::max()};
Results in:
./basecode.h:30:27: error: expression is not an integral constant expression
enum {DEFAULT_PADDING=std::numeric_limits<unsigned int>::max()};
How do I have numeric_limits give me a value that I can use at compile time for an enum?
The library is older, so it supports a lot of older compilers and IDEs. I need something that is at least C++03 and preferably C++98.
And the standard caveats apply: this is a simple make-based project. It does not use Autotools, it does not use CMake, it does not use Boost, etc.
In C++03, std::numeric_limits<T>::max() was simply static. In C++11, it became static constexpr. You need the latter for it to be usable in an integral constant expression, so simply compiling with -std=c++11 will do.
If you can't use C++11, you can just use UINT_MAX.
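As a rough sketch of both variants (the class name BaseCode is made up, standing in for whatever lives in basecode.h; the first enumerator needs -std=c++11, the second works all the way back to C++98):

#include <climits>   // UINT_MAX, available in C++98
#include <limits>    // std::numeric_limits

class BaseCode {
protected:
    // C++11 and later: max() is constexpr, so it is a valid enumerator initializer.
    enum { DEFAULT_PADDING = std::numeric_limits<unsigned int>::max() };

    // C++98/C++03 fallback: the <climits> macro is an integral constant expression.
    enum { LEGACY_PADDING = UINT_MAX };
};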
In C++14 we get an upgraded version of constexpr, meaning that it is now possible to use loops, if statements, and switches.
Recursion was already possible in C++11.
I understand that constexpr functions/code should be quite simple, but still the question arises: how to effectively debug it?
Even in "The C++ Programming Language, 4th Edition" there is a sentence that debugging can be hard.
There are two important aspects for debugging constexpr functions.
1) Make sure they compute the correct result
Here you can use regular unit-testing, asserts or a runtime debugger to step through your code. There is nothing new compared to testing regular functions here.
2) Make sure they can be evaluated at compile-time
This can be tested by evaluating the function as the right-hand side of a constexpr variable initialization.
constexpr auto my_var = my_fun(my_arg);
In order for this to work, my_fun can a) only have compile-time constant expressions as actual arguments, i.e. my_arg is a literal (builtin or user-defined), a previously computed constexpr variable, a template parameter, etc., and b) only call constexpr functions in its implementation (so no virtuals, no lambda expressions, etc.).
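As a concrete sketch with a hypothetical constexpr function:

constexpr int square(int x) { return x * x; }

constexpr auto my_var = square(3);  // compiles only if square(3) is a constant expression
static_assert(my_var == 9, "and the result can be checked at compile time as well");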
Note: it is very hard to actually debug the compiler's implementation of code generation during the compile-time evaluation of your constexpr function. You would have to attach a debugger to your compiler and actually be able to interpret the code path. Maybe some future version of Clang will let you do this, but this is not feasible with current technology.
Fortunately, because you can decouple the runtime and compile-time behavior of constexpr functions, debugging them isn't half as hard as debugging template metaprograms (which can only be run at compile-time).
The answer I wrote on 3 April '15 is clearly wrong. I can't understand what I was thinking.
Here is the "real" answer - the method I use now.
a) write your constexpr function as you normally would. So far it doesn't work.
b) when the function is invoked at compile time, compilation fails with nothing more than a message to the effect of "invalid constexpr function". This makes it hard to know what the problem actually is.
c) make a small test program that calls the function with parameters known only at runtime. Run your test program with the debugger. You'll find that you can trace through the function in the normal manner.
It took me an embarrassingly long time to figure this out.
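A small sketch of that workflow, using a made-up constexpr function sum_to() (C++14 for the loop):

#include <iostream>

constexpr int sum_to(int n)
{
    int total = 0;
    for (int i = 1; i <= n; ++i)   // C++14 relaxed constexpr: locals and loops are fine
        total += i;
    return total;
}

int main(int argc, char**)
{
    static_assert(sum_to(10) == 55, "the compile-time path still works");

    // argc is only known at runtime, so this call is evaluated at runtime and a
    // debugger can step through sum_to() line by line like any ordinary function.
    std::cout << sum_to(argc + 9) << '\n';
}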
If you are using gcc, you can try this,
and there is an introduction about it.
If by debugging you mean "make it known that a certain expression is not of a desired value", you could check it at runtime
#include <stdexcept>
#include <iostream>
constexpr int test(int x) { return x > 0 ? x : (throw std::domain_error("wtf")); }

int main()
{
    test(42);
    std::cout << "42\n";
    test(-1);
    std::cout << "-1\n";
}
As far as I understand it, constexpr can be seen as a hint to the compiler to check whether given expressions can be evaluated at compile-time and do so if possible.
I know that it also imposes some restrictions on the function or initialization declared as constexpr, but the final goal is compile-time evaluation, isn't it?
So my question is, why can't we leave that to the compiler? It is obviously capable of checking the pre-conditions, so why doesn't it do so for each expression and evaluate at compile time where possible?
I have two ideas on why this might be the case but I am not yet convinced that they hit the point:
a) It might take too long at compile time.
b) Since my code can use constexpr functions in locations where normal functions would not be allowed, the specifier is also kind of part of the declaration. If the compiler did everything by itself, one could use a function in a C-array definition with one version of the function, but with the next version there might be a compiler error, because the pre-conditions for compile-time evaluation are no longer satisfied.
constexpr is not a "hint" to the compiler about anything; constexpr is a requirement. It doesn't require that an expression actually be executed at compile time; it requires that it could.
What constexpr does (for functions) is restrict what you're allowed to put into the function definition, so that the compiler can easily execute that code at compile time where possible. It's a contract between you the programmer and the compiler. If your function violates the contract, the compiler will error immediately.
Once the contract is established, you are now able to use these constexpr functions in places where the language requires a compile time constant expression. The compiler can then check the elements of a constant expression to see that all function calls in the expression call constexpr functions; if they don't, again a compiler error results.
Your attempt to make this implicit would result in two problems. First, without an explicit contract as defined by the language, how would I know what I can and cannot do in a constexpr function? How do I know what will make a function not constexpr?
And second, without the contract being in the compiler, via a declaration of my intent to make the function constexpr, how would the compiler be able to verify that my function conforms to that contract? It couldn't; it would have to wait until I use it in a constant expression before I find that it isn't actually a proper constexpr function.
Contracts are best stated explicitly and up-front.
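A tiny sketch of both sides of that contract, with a made-up get_size():

constexpr int get_size() { return 16; }   // author side: the body must obey the constexpr rules

int buffer[get_size()];   // client side: an array bound requires a constant expression,
                          // and the compiler verifies that get_size() qualifies

// If get_size() were changed to call something non-constexpr (say std::rand()), the error
// would point at the constexpr function itself rather than only at call sites like this one.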
constexpr can be seen as a hint to the compiler to check whether given expressions can be evaluated at compile-time and do so if possible
No, see below
the final goal is compile-time evaluation
No, see below.
so why doesn't it do so for each expression and evaluate at compile time where possible?
Optimizers do things like that, as allowed under the as-if rule.
constexpr is not used to make things faster, it is used to allow usage of the result in contexts where a runtime-variable expression is illegal.
This is only my evaluation, but I believe your (b) reason is correct (that it forms part of the interface that the compiler can enforce). The interface requirement serves both the writer of the code and the client of the code.
The writer may intend something to be usable in a compile-time context, but not actually use it in this way. If the writer violates the rules for constexpr, they might not find out until after publication when clients who try to use it constexpr fail. Or, more realistically, the library might use the code in a constexpr sense in version 1, refactor this usage out in version 2, and break constexpr compatibility in version 3 without realizing it. By checking constexpr-compliance, the breakage in version 3 will be caught before deployment.
The interface for the client is more obvious --- an inline function won't silently become constexpr-required because it happened to work and someone used it that way.
I don't believe your (a) reason (that it could take too long for the compiler) is applicable because (1) the compiler has to check much of the constexpr constraints anyway when the code is marked, (2) without the annotation, the compiler would only have to do the checking when used in a constexpr way (so most functions wouldn't have to be checked), and (3) IIUC the D programming language actually does allow functions to be compile-time evaluated if they meet requirements without any declaration assistance, so apparently it can be done.
I think I remember watching an early talk by Bjarne Stroustrup where he mentioned that programmers wanted fine-grained control over this "dangerous" feature, from which I understand that they don't want things "accidentally" executed at compile time without them knowing. (Even if that sounds like a good thing.)
There can be many reasons for that, but the only valid one is ultimately compilation speed, I think ((a) in your list).
It would be too much burden on the compiler to determine for every function if it could be computed at compile time.
This argument is weaker as compilation times in general go down.
Like many other features of C++, what ends up happening is that we are left with the "wrong defaults".
So you have to say when you want constexpr instead of when you don't want constexpr (runtimeexpr); you have to say when you want const instead of when you want mutable, etc.
Admittedly, you can imagine functions that take an absurd amount of time to run at compile time and that cannot be amortized (with other kinds of machine resources) at runtime.
(I am not aware that "time-out" can be a criterion in a compiler for constexpr, but it could be so.)
Or it could be that one is compiling in a system that is always expected to finish compilation in a finite time but an unbounded runtime is admissible (or debuggable).
I know that this question is old, but time has illuminated that it actually makes sense to have constexpr as the default:
In C++17, for example, you can declare a lambda constexpr, but more importantly, lambdas are constexpr by default if they can be.
https://learn.microsoft.com/en-us/cpp/cpp/lambda-expressions-constexpr
Note that lambdas have all the "right" (opposite) defaults: members (captures) are const by default, arguments can be templated by default (auto), and now the call operator is constexpr by default.
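A minimal sketch of that default in action (C++17):

int main()
{
    // Not marked constexpr anywhere, but it satisfies the requirements,
    // so in C++17 it is implicitly constexpr.
    auto answer = [](int n) { return 32 + n; };

    constexpr int response = answer(10);   // evaluated at compile time
    static_assert(response == 42);
}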
In VC++ when I need to specify an array bound for a class member variable I do it this way:
class Class {
private:
    static const int numberOfColors = 16;
    COLORREF colors[numberOfColors];
};
(please don't tell me about using std::vector here)
This way I have a constant that can be used as an array bound and later in the class code to specify loop-statement constraints and at the same time it is not visible anywhere else.
The question is whether this usage of static const int member variables only allowed by VC++ or is it typically allowed by other widespread compilers?
This is valid C++ and most (all?) reasonably modern compilers support it. If you are using Boost, you can get portable support for this feature in the form of the BOOST_STATIC_CONSTANT macro:
class Class {
private:
    BOOST_STATIC_CONSTANT(int, numberOfColors = 16);
    COLORREF colors[numberOfColors];
};
The macro expands to static const int numberOfColors = 16 if the compiler supports this; otherwise it resorts to enum { numberOfColors = 16 };.
That behavior is valid according to the C++ Standard. Any recent compiler should support it.
I believe that Visual Studio 2005 and later support it. The Xcode C++ compiler does as well (it is actually gcc).
If you want to be safe you could always use the old enum hack that I learned from Effective C++. It goes like this:
class Class {
private:
    enum {
        numberOfColors = 16
    };
    COLORREF colors[numberOfColors];
};
Hope this helps.
This has been standard C++ for more than a decade now. It's even supported by VC -- what more could you want? (#Neil: What about SunCC? :^>)
Yes, it's 100% legal and should be portable. The C++ standard says this in 5.19 "Constant expressions" (emphasis mine):
In several places, C++ requires expressions that evaluate to an integral or enumeration constant: as array bounds (8.3.4, 5.3.4), as case-expressions (6.4.2), as bit-field lengths (9.6), as enumerator initializers (7.2), as static member initializers (9.4.2), and as integral or enumeration non-type template arguments (14.3).
constant-expression:
conditional-expression
An integral constant-expression can involve only literals (2.13), enumerators, const variables or static data members of integral or enumeration types initialized with constant expressions (8.5), non-type template parameters of integral or enumeration types, and sizeof expressions.
That said, it appears that VC6 doesn't support it. See StackedCrooked's answer for a good workaround. In fact, I generally prefer the enum method StackedCrooked mentions for this type of thing.
As an FYI, the "static const" technique works in VC9, GCC 3.4.5 (MinGW), Comeau and Digital Mars.
And don't forget that if you use a static const member, strictly speaking you'll need a definition for it in addition to the declaration. However, virtually all compilers will let you get away with skipping the definition in this case.
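For completeness, a sketch of that out-of-class definition (needed pre-C++17 whenever the member is ODR-used, e.g. its address is taken or it is bound to a const reference):

// In exactly one .cpp file:
const int Class::numberOfColors;   // no initializer here; the value stays in the class definition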
Besides the other answers, you can use the following function to determine the number of elements in statically allocated arrays:
template<typename T, size_t length>
size_t arrayLength(T (&a)[length])
{
    return length;
}
I'm pretty sure that this will also work with gcc and Solaris, but I can't verify this at the moment.
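A usage sketch (with a plain int array so it stands alone; a COLORREF array works the same way):

int values[16];
size_t count = arrayLength(values);   // count == 16, deduced from the array type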
In the future you could extend the idea like this:
template<int size>
class Class {
private:
    COLORREF colors[size];
};
and use it like this:
Class<5> c;
so that you are not limited to exactly one buffer size in your application.
I've stopped bothering about the portability of that years ago. There are perhaps still compilers which don't support it, but I haven't met any of them recently.
It is possible to answer questions like this by referencing the ISO C++ specification, but the spec is hard for people to get and harder to read.
I think the simplest answer hinges on two things:
Microsoft Visual Studio 2005 and up is a relatively conformant C++ implementation. If it allows you to do something, chances are it's standard.
Download something like Code::Blocks to get a GCC compiler to try stuff out. If it works in MS and GCC, chances really are it's standard.