Given this innocent snippet:
#include <cstdint>
template <unsigned int n> constexpr uint64_t bit = (1ull << n);
template <unsigned int n> constexpr uint64_t mask = (n == 64) ? ~0ull : bit<n> - 1;
namespace this_works_fine
{
    template <unsigned int n> constexpr uint64_t bit = (1ull << n);
    template <unsigned int n> constexpr uint64_t mask = []() constexpr { if constexpr (n == 64) return ~0ull; else return bit<n> - 1; }();
}

int main()
{
    auto a = mask<64>;
    (void)a;
}
... I expected that to "just work": zero errors, zero warnings. It's quite clear and simple, and there's not much room for doing something wrong. The only thing to be aware of is that shifting by the type's width or more is UB (which happens here for n == 64), but that is being explicitly taken care of. It'll probably produce a warning/error for values larger than 64, but that's fine; no need for an explicit error check.
The conditional operator evaluates only the second or the third operand, depending on the result of the first. So as long as the code as a whole is in principle syntactically correct, we should be good to go.
Now, GCC (9.1.0) tells me the following:
g++.exe -Wall -fexceptions -O2 --std=c++17 -c main.cpp -o obj\main.o
g++.exe -o lib\gcc-bug.exe obj\main.o -s
main.cpp: In instantiation of 'constexpr const uint64_t bit<64>':
main.cpp:4:73: required from 'constexpr const uint64_t mask<64>'
main.cpp:14:12: required from here
main.cpp:3:59: error: right operand of shift expression '(1 << 64)' is >= than the precision of the left operand [-fpermissive]
3 | template <unsigned int n> constexpr uint64_t bit = (1ull << n);
| ~~~~~~^~~~~
The exact same thing rewritten with if constexpr instead compiles (and, of course, works) without any trouble. No error, no warning. No surprise; why wouldn't it work?
While I was about to submit a bug report to GCC which is "obviously broken", it occurred to me that I might first check with version 9.2 (which isn't available for MinGW yet) as well as trunk on Godbolt, and while we're at it with Clang as well since that's just one more click.
Unsurprisingly, the other GCC versions produce the same error, but much to my surprise, Clang doesn't compile it either. It claims that (1ull << n) is not a constant expression. Which is another story, but equally stunning.
So I'm a bit unsettled here. Am I not understanding the rules of the conditional operator correctly? Is there some special exception for templates or variable templates where it evaluates differently?
When you use if constexpr, this part of the code
else return bit<n> - 1;
is not instantiated when n is equal to 64.
From the C++ Standard (9.4.1 The if statement)
2 If the if statement is of the form if constexpr, the value of the
condition shall be a contextually converted constant expression of
type bool (8.6); this form is called a constexpr if statement. If the
value of the converted condition is false, the first substatement is a
discarded statement, otherwise the second substatement, if present, is
a discarded statement. During the instantiation of an enclosing
templated entity (Clause 17), if the condition is not
value-dependent after its instantiation, the discarded substatement
(if any) is not instantiated.
In contrast, in this code
template <unsigned int n> constexpr uint64_t mask = (n == 64) ? ~0ull : bit<n> - 1;
all parts of the initializer are instantiated, including bit<64>, so the compiler issues an error.
Just try the following semantically equivalent code and you will get the same error.
#include <cstdint>
template <unsigned int n> constexpr uint64_t bit = (1ull << n);
template uint64_t bit<64>;
int main()
{
}
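If you want to keep the conditional operator rather than switch to if constexpr, one possible workaround (a sketch of mine, not part of the original answer; mask2 is just an illustrative name) is to make the shift amount itself well-defined for n == 64, so that every sub-expression the compiler looks at is a valid constant expression:

#include <cstdint>

// The inner conditional clamps the shift amount to 0 for n >= 64, so the shift
// is always valid; the outer conditional still selects ~0ull for n == 64.
template <unsigned int n>
constexpr uint64_t mask2 = (n >= 64) ? ~0ull : (1ull << (n < 64 ? n : 0)) - 1;

static_assert(mask2<64> == ~0ull, "all bits set");
static_assert(mask2<3> == 7, "lowest three bits set");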
Related
I was curious how far I could push gcc as far as compile-time evaluation is concerned, so I made it compute the Ackermann function, specifically with input values of 4 and 1 (anything higher than that is impractical):
consteval unsigned int A(unsigned int x, unsigned int y)
{
    if (x == 0)
        return y + 1;
    else if (y == 0)
        return A(x - 1, 1);
    else
        return A(x - 1, A(x, y - 1));
}
unsigned int result = A(4, 1);
(I think the recursion depth is bounded at ~16K but just to be safe I compiled this with -std=c++20 -fconstexpr-depth=100000 -fconstexpr-ops-limit=12800000000)
Not surprisingly, this takes up an obscene amount of stack space (in fact, it causes the compiler to crash if run with the default process stack size of 8 MB) and takes several minutes to compute. However, it does eventually get there, so evidently the compiler can handle it.
After that I decided to try implementing the Ackermann function using templates, with metafunctions and partial specialization pattern matching. Amazingly, the following implementation only takes a few seconds to evaluate:
template<unsigned int x, unsigned int y>
struct A {
    static constexpr unsigned int value = A<x-1, A<x, y-1>::value>::value;
};

template<unsigned int y>
struct A<0, y> {
    static constexpr unsigned int value = y+1;
};

template<unsigned int x>
struct A<x, 0> {
    static constexpr unsigned int value = A<x-1, 1>::value;
};
unsigned int result = A<4,1>::value;
(compile with -ftemplate-depth=17000)
Why is there such a dramatic difference in evaluation time? Aren't these essentially equivalent? I guess I can understand the consteval solution requiring slightly more memory and evaluation time because, semantically, it consists of a bunch of function calls, but that doesn't explain why this exact same (non-consteval) function, computed at runtime, takes only slightly longer than the metafunction version (compiled without optimizations).
Why is consteval so slow? And how can the metafunction version be so fast? It's actually not much slower than optimized machine code.
In the template version of A, when a particular specialization, say A<2,3>, is instantiated, the compiler remembers this type, and never needs to instantiate it again. This comes from the fact that types are unique, and each "call" to this meta-function is just computing a type.
The consteval function version is not optimized to do this, and so A(2,3) may be evaluated multiple times, depending on the control flow, resulting in the performance difference you observe. There's nothing stopping compilers from "caching" the results of function calls, but these optimizations likely just haven't been implemented yet.
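To make the "instantiated only once" point concrete, here is a small sketch of mine (Fibonacci instead of Ackermann, so it stays tiny). Each specialization Fib<k> is instantiated exactly once per translation unit, so the naively exponential recursion collapses into a linear number of instantiations, and mentioning the same specialization again costs nothing:

template<unsigned n>
struct Fib {
    // Fib<n-1> and Fib<n-2> overlap heavily, but each Fib<k> is still
    // instantiated only once -- the type system memoizes for free.
    static constexpr unsigned long long value = Fib<n - 1>::value + Fib<n - 2>::value;
};
template<> struct Fib<1> { static constexpr unsigned long long value = 1; };
template<> struct Fib<0> { static constexpr unsigned long long value = 0; };

constexpr auto fibA = Fib<40>::value;  // instantiates Fib<40> ... Fib<0> once each
constexpr auto fibB = Fib<40>::value;  // reuses the already-instantiated specializations
static_assert(fibA == fibB, "same specialization, same value");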
constexpr uint32_t BitPositionToMask(int i, int Size) {
    static_assert(i < Size, "bit position out of range");
    return 1 << i;
}
this generates:
error: non-constant condition for static assertion
on GCC 4.6.2. Am I not getting something, or is this a GCC bug?
A constexpr function can also be invoked with arguments evaluated at run time (in that case, it simply executes like any regular function). See, for instance, this live example.
A static_assert(), on the other hand, strictly requires its condition to be a constant expression. Inside the function body, the parameters i and Size are not constant expressions (the call might be made with run-time values), so the assertion cannot be checked at compile time.
This answer was posted by odinthenerd (under the CC BY-SA 3.0 license) as an edit to the question Why is comparing two parameters of a constexpr function not a constant condition for static assertion?. Reposted here to conform to the site's Q&A format.
If the values are known at compile time, they can be passed as template parameters and it works as intended.
template<int i, int Size>
constexpr uint32_t BitPositionToMask() {
    static_assert(i < Size, "bit position out of range");
    return 1 << i;
}
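To make the compile-time/run-time distinction concrete, here is a small sketch of mine (not from the original answer): the very same constexpr function can be checked with static_assert at a call site where the arguments are constants, and still be called with run-time values elsewhere:

#include <cstdint>

constexpr uint32_t BitPositionToMask(int i, int /*Size*/) {
    return 1u << i;  // no assertion on the parameters inside the body
}

int main() {
    // Here the call is a constant expression, so static_assert can check it.
    static_assert(BitPositionToMask(3, 8) == 8u, "bit 3 set");

    int i = 3;                              // a run-time value
    uint32_t m = BitPositionToMask(i, 8);   // executes like any regular function
    (void)m;
}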
I'm exploring constant expressions in combination with templates in C++, and I've run into a problem I cannot understand.
I want to check if the value of a template argument (in my case an unsigned int) is zero, but the compiler never thinks the value is zero even though I have a static_assert that confirms that it has actually passed below zero.
I'm implementing a simple (or at least I thought so) template function that is supposed to just sum up all integer values in the range from e.g. 5 down to zero.
It's supposed to perform a recursive invocation of the template function until it reaches zero, and then it should stop, but the compiler never thinks the template parameter value is zero.
Here's my problematic function:
template <unsigned int Value>
constexpr unsigned int sumAllValues()
{
    static_assert (Value >= 0, "Value is less than zero!");
    return Value == 0 ? 0 : Value + sumAllValues<Value - 1>();
}
And it's invoked like this:
constexpr unsigned int sumVals = sumAllValues<5>();
For some reason the compiler never thinks Value == 0, and hence it continues until it stops on the static_assert. If I remove the assert, the compiler continues until it reaches the max instantiation depth:
error: template instantiation depth exceeds maximum of 900 (use -ftemplate-depth= to increase the maximum)
return Value == 0 ? 0 : Value + sumAllValues<Value - 1>();
What am I doing wrong in the above function? Can I not check the value of the template parameter itself?
I was inspired by an example I found on Wikipedia:
template<int B, int N>
struct Pow
{
    // recursive call and recombination.
    enum { value = B * Pow<B, N-1>::value };
};

template<int B>
struct Pow<B, 0>
{
    // N == 0 condition of termination.
    enum { value = 1 };
};
int quartic_of_three = Pow<3, 4>::value;
See reference: C++11
And I actually do have a working example which is constructed more in the way the above Pow example code is:
template <unsigned int Value>
constexpr unsigned int sumAllValues()
{
    static_assert (Value > 0, "Value too small!");
    return Value + sumAllValues<Value - 1>();
}

template <>
constexpr unsigned int sumAllValues<0>()
{
    return 0;
}
And it's invoked in the same way as my problematic function:
constexpr unsigned int sumVals = sumAllValues<5>();
It also performs a recursive invocation of the template function until it reaches zero. The zero case has been specialized to interrupt the recursion. This code works, and produces the value 15 if I input 5 as template argument.
But I thought I could simplify it with the function I'm having problems with.
I'm developing on Linux (Ubuntu 18.04) in Qt 5.12.2.
Update:
StoryTeller suggests a working solution, which is to use the C++17 feature if constexpr to halt the recursion:
template <unsigned int Value>
constexpr unsigned int sumAllValues()
{
    if constexpr (Value > 0)
        return Value + sumAllValues<Value - 1>();

    return 0;
}
Instantiating the body of a function template means instantiating everything it uses. What does the body of sumAllValues<0> look like? It's something like this:
template <>
constexpr unsigned int sumAllValues<0>()
{
    static_assert (0 >= 0, "Value is less than zero!");
    return 0 == 0 ? 0 : 0 + sumAllValues<0 - 1>();
}
See the call to sumAllValues<-1>? While it's not going to be evaluated, it still appears there and must therefore be instantiated. But Value is unsigned, so you get wrap around. (unsigned)-1 is a very large unsigned number, not something less than zero. So the recursion continues, and it may continue indefinitely if not for the implementation having its limits.
The version with the specialization doesn't have the same function body for sumAllValues<0>, so it never tries to instantiate sumAllValues<-1>. The recursion really stops at 0 there.
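A quick way to convince yourself of the wrap-around (my addition, not part of the answer):

#include <limits>

// 0u - 1 wraps around to the largest value an unsigned int can hold.
static_assert(0u - 1 == std::numeric_limits<unsigned int>::max(),
              "Value - 1 wraps to UINT_MAX when Value is 0");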
Prior to C++17, the specialization is probably the shortest way to get at the functionality you want. But with the addition of if constexpr, we can reduce the code to one function:
template <unsigned int Value>
constexpr unsigned int sumAllValues()
{
    if constexpr (Value > 0)
        return Value + sumAllValues<Value - 1>();

    return 0;
}
if constexpr will entirely discard the code in its branch if the condition isn't met. So for the 0 argument, there will not be a recursive call present in the body of the function at all, and so nothing will need to be instantiated further.
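In other words, the instantiated sumAllValues<0> effectively reduces to this (my illustration, mirroring the earlier expansion):

template <>
constexpr unsigned int sumAllValues<0>()
{
    // the recursive branch was discarded; only this statement remains
    return 0;
}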
In addition to StoryTeller's answer:
An interesting detail on how if constexpr works (inverting the condition for illustration):
if constexpr (Value == 0)
    return 0;
return Value + sumAllValues<Value - 1>();
While the code after the if won't be executed for Value == 0, it is not discarded: it is still part of the instantiated body and must be compiled, so you run into the same error you had already. In contrast to:
if constexpr (Value == 0)
    return 0;
else
    return Value + sumAllValues<Value - 1>();
Now, since the recursive call resides in the else branch of the constexpr if, it is entirely discarded when the condition is met, and we are fine again...
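Spelled out as a complete function, the inverted-condition version then looks roughly like this (my own write-up of the snippet above), and it produces the value from the question:

template <unsigned int Value>
constexpr unsigned int sumAllValues()
{
    if constexpr (Value == 0)
        return 0;
    else
        return Value + sumAllValues<Value - 1>();
}

static_assert(sumAllValues<5>() == 15, "5 + 4 + 3 + 2 + 1 + 0");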
I want to write a template that returns me the smallest signed integer type that can represent a given number. This is my solution:
/**
 * Helper for IntTypeThatFits.
 * Template parameters indicate whether the given number fits into 8, 16 or 32
 * bits. If none of them is true, it is assumed to need 64 bits.
 */
template <bool fits8, bool fits16, bool fits32>
struct IntTypeThatFitsHelper { };
// specializations for picking the right type
// these are all valid combinations of the flags
template<> struct IntTypeThatFitsHelper<true, true, true> { typedef int8_t Result; };
template<> struct IntTypeThatFitsHelper<false, true, true> { typedef int16_t Result; };
template<> struct IntTypeThatFitsHelper<false, false, true> { typedef int32_t Result; };
template<> struct IntTypeThatFitsHelper<false, false, false> { typedef int64_t Result; };
/// Finds the smallest integer type that can represent the given number.
template <int64_t n>
struct IntTypeThatFits
{
    typedef typename IntTypeThatFitsHelper<
        (n <= numeric_limits<int8_t>::max()) && (n >= numeric_limits<int8_t>::min()),
        (n <= numeric_limits<int16_t>::max()) && (n >= numeric_limits<int16_t>::min()),
        (n <= numeric_limits<int32_t>::max()) && (n >= numeric_limits<int32_t>::min())
    >::Result Result;
};
However, GCC does not accept this code. I get an error "comparison is always true due to limited range of data type [-Werror=type-limits]". Why does that happen? n is a signed 64-bit integer, and each of the comparisons may be true or false for different values of n, or am I overlooking something?
I will be glad for any help.
Edit: I should mention that I am using C++11.
It's an issue with gcc; warnings in templated code can be frustrating. You can either change the warning settings or use another approach.
As you may know, templated code is analyzed twice:
once when first encountered (parsing)
once when instantiated for a given type/value
The problem here is that at instantiation the check is trivial (yes, 65 fits into an int, thank you), and the compiler fails to realize that the warning does not hold for all instantiations :( It is very frustrating indeed for those of us who strive to have a warning-free compiling experience with warnings on.
You have 3 possibilities:
deactivate this warning, or demote it to a non-error
use a pragma to selectively deactivate it for this code (see the sketch below)
rework the code in another format so that it does not trigger the warning any longer
Note that sometimes the 3rd possibility involves a massive change and a much more complicated solution. I advise against complicating one's code just to get rid of spurious warnings.
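For the second possibility, GCC's diagnostic pragmas can wrap just this template, roughly like this (a sketch of mine, applied to the code from the question; I have not re-verified it against that exact GCC version):

#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wtype-limits"   // silence the warning only here

template <int64_t n>
struct IntTypeThatFits
{
    typedef typename IntTypeThatFitsHelper<
        (n <= numeric_limits<int8_t>::max()) && (n >= numeric_limits<int8_t>::min()),
        (n <= numeric_limits<int16_t>::max()) && (n >= numeric_limits<int16_t>::min()),
        (n <= numeric_limits<int32_t>::max()) && (n >= numeric_limits<int32_t>::min())
    >::Result Result;
};

#pragma GCC diagnostic pop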
EDIT:
One possible workaround:
template <int64_t n>
struct IntTypeThatFits {
    static int64_t const max8  = std::numeric_limits<int8_t>::max();
    static int64_t const min8  = std::numeric_limits<int8_t>::min();
    static int64_t const max16 = std::numeric_limits<int16_t>::max();
    static int64_t const min16 = std::numeric_limits<int16_t>::min();
    static int64_t const max32 = std::numeric_limits<int32_t>::max();
    static int64_t const min32 = std::numeric_limits<int32_t>::min();

    typedef typename IntTypeThatFitsHelper<
        (n <= max8 ) && (n >= min8 ),
        (n <= max16) && (n >= min16),
        (n <= max32) && (n >= min32)
    >::Result Result;
};
... by changing the type of the data used in the comparison, it should silence the compiler warning. I suppose explicit casting (int64_t(std::numeric_limits<int8_t>::max())) could work too, but I found this more readable.
The error is happening because you asked GCC to give you errors about this warning with -Werror=type-limits. The warning -Wtype-limits gives you a warning if you ever do a comparison which will always be true due to the ranges of the given data types, for example:
uint8_t x;
if(x >= 0) { ... }   // always true, unsigned integers are non-negative
if(x >= 256) { ... } // always false

int32_t x;
if(x < 9223372036854775807LL) { ... } // always true for a 32-bit int
This warning can sometimes be useful, but in many cases including this it's just useless pedantry and can be ignored. It's normally a warning (enabled as part of -Wextra, if you use that), but with -Werror or -Werror=type-limits, GCC makes it an error.
Since in this case it's not actually indicative of a potential problem with your code, just turn off the warning with -Wno-type-limits, or keep it as a warning rather than an error with -Wno-error=type-limits if you don't mind seeing those warnings in the compiler output.
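For example (the file name is just a placeholder):

g++ -std=c++11 -Wall -Wextra -Wno-type-limits -c int_type_that_fits.cpp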
typedef typename IntTypeThatFitsHelper<
    (n <= numeric_limits<int8_t>::max()) && (n >= numeric_limits<int8_t>::min()),
    (n <= numeric_limits<int16_t>::max()) && (n >= numeric_limits<int16_t>::min()),
    (n <= numeric_limits<int32_t>::max()) && (n >= numeric_limits<int32_t>::min())
>::Result Result;
You can't do that in C++03 (in C++11 you can): numeric_limits<int8_t>::max() is not a compile-time constant there. Are you using C++11?
BTW, Boost provides this for you already: http://www.boost.org/doc/libs/1_49_0/libs/integer/doc/html/boost_integer/integer.html
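Since C++11 the numeric_limits member functions are constexpr, so they can be used in constant expressions; a quick sanity check (my addition):

#include <cstdint>
#include <limits>

// int8_t is an exact-width type, so its maximum value is exactly 127.
static_assert(std::numeric_limits<std::int8_t>::max() == 127, "constexpr since C++11");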
I think the other answers about what the problem is are wrong. I don't believe this is a case of an over-eager compiler; I believe it's a compiler bug. This code still fires the warning:
template<int64_t n>
bool a() {
    return (n <= static_cast<int64_t>(std::numeric_limits<int8_t>::max()));
}
When calling a<500>(), but this code does not:
template<int64_t n>
bool a() {
    return (n <= static_cast<int64_t>(127));
}
std::numeric_limits<int8_t>::max() evaluates to 127. I'll file a bugzilla report for this later today if no one else does.
You get the warning because, for some instantiations of template <int64_t n> struct IntTypeThatFits with small n (smaller than 2^32), some of the comparisons are always true (sic!) because of the range of the operand at compile time.
This could be considered a bogus warning in this case, because your code relies on those comparisons. On the other hand, you explicitly requested that the warning be treated as an error with -Werror or a similar command-line switch, so you basically get what you asked for.