The C++ standard allows the implicit conversion of a zero integer constant to a pointer of any type.
The following code is invalid, because the value v is not constant here:
float* foo()
{
    int v = 0;
    return v; // Error
}
But the following code is correct:
float* foo()
{
    const int v = 0;
    return v; // OK in C++98 mode, error in C++11 mode
}
The question is: why do gcc and clang (I tried different versions) compile the code without complaint in C++98/03 mode but emit a warning/error in C++11/14 mode (-std=c++11)? I tried to find the relevant changes in the C++11 working draft PDF, but had no success.
The Intel compiler 16.0 and the VS2015 compiler show no errors or warnings in either case.
GCC and Clang behave differently with -std=c++11 because C++11 changed the definition of a null pointer constant, and C++14 changed it again: see Core DR 903, which restricted the C++14 rules so that only literals are null pointer constants.
In C++03 4.10 [conv.ptr] said:
A null pointer constant is an integral constant expression (5.19) rvalue of integer type that evaluates to zero.
That allows all sorts of expressions, as long as they are constant and evaluate to zero: enumerators, false, (5 - 5), etc. This used to cause lots of problems in C++03 code.
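For illustration, a minimal sketch of the kinds of expressions C++03 accepted as null pointer constants (the enumerator name is made up):

enum { Zero };           // enumerator with value 0

float* a = false;        // accepted in C++03; rejected once DR 903 is applied
float* b = (5 - 5);      // accepted in C++03; rejected once DR 903 is applied
float* c = Zero;         // accepted in C++03; rejected once DR 903 is applied
float* d = 0;            // fine in every standard: 0 is an integer literal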
In C++11 it says:
A null pointer constant is an integral constant expression (5.19) prvalue of integer type that evaluates to zero or a prvalue of type std::nullptr_t.
And in C++14 it says:
A null pointer constant is an integer literal (2.14.2) with value zero or a prvalue of type std::nullptr_t.
This is a much more restrictive rule, and makes far more sense.
Related
Consider the following code.
void f(double p) {}
void f(double* p) {}
int main()
{ f(1-1); return 0; }
MSVC 2017 doesn't compile that. It figures there is an ambiguous overloaded call, as 1-1 is the same as 0 and therefore can be converted into double*. Other tricks, like 0x0, 0L, or static_cast<int>(0), do not work either. Even declaring a const int Zero = 0 and calling f(Zero) produces the same error. It only works properly if Zero is not const.
It looks like the same issue applies to GCC 5 and below, but not GCC 6. I am curious if this is a part of C++ standard, a known MSVC bug, or a setting in the compiler. A cursory Google did not yield results.
MSVC considers 1-1 to be a null pointer constant. This was correct by the standard for C++03, where all integral constant expressions with value 0 were null pointer constants, but it was changed so that only zero integer literals are null pointer constants for C++11 with CWG issue 903. This is a breaking change, as you can see in your example and as is also documented in the standard, see [diff.cpp03.conv] of the C++14 standard (draft N4140).
MSVC applies this change only in conformance mode. So your code will compile with the /permissive- flag, but I think the change was implemented only in MSVC 2019, see here.
In the case of GCC, GCC 5 defaults to C++98 mode, while GCC 6 and later default to C++14 mode, which is why the change in behavior seems to depend on the GCC version.
If you call f with a null pointer constant as the argument, then the call is ambiguous, because the null pointer constant can be converted to a null pointer value of any pointer type, and this conversion has the same rank as the conversion of int (or any integral type) to double.
The compiler works correctly, in accordance to [over.match] and [conv], more specifically [conv.fpint] and [conv.ptr].
A standard conversion sequence is [blah blah] Zero or one [...] floating-integral conversions, pointer conversions, [...].
and
A prvalue of an integer type or of an unscoped enumeration type can be converted to a prvalue of a floating-point type. The result is exact if possible [blah blah]
and
A null pointer constant is an integer literal with value zero or [...]. A null pointer constant can be converted to a pointer type; the result is the null pointer value of that type [blah blah]
Now, overload resolution is to choose the best match among all candidate functions (which, as a fun feature, need not even be accessible at the call location!). The best match is the one with exact parameters or, alternatively, the fewest possible conversions. Zero or one standard conversions may happen (... for every parameter), and zero is "better" than one.
(1-1) is not itself an integer literal, but under the older rules (which MSVC applies by default) it is a constant expression with value 0 and therefore a null pointer constant.
A compiler that treats (1-1) as a null pointer constant can convert it to either double or double* with exactly one conversion each, and the two conversions have the same rank. So, with both overloads declared (as in the example), there is more than one candidate, all candidates are equally good, and no best match exists. The call is ambiguous, and such a compiler is right to complain.
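A minimal sketch of ways to state the intent so the call is unambiguous on any compiler (reusing the two overloads above):

f(0.0);      // exact match for f(double); 0.0 is never a null pointer constant
f(nullptr);  // only f(double*) is viable; nullptr does not convert to double
f(static_cast<double*>(nullptr));  // fully explicit about the target overload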
Is comparing a pointer to '\0' legal?
On the trunk version of clang++ (25836be2c)
const char *a = "foo";
if(a == '\0')
gives an error: comparison between pointer and integer ('const char *' and 'int')
whereas
if(a == 0)
does not give any error as expected.
Isn't the null character equivalent to the null pointer for comparisons with pointer? Is this a compiler bug?
Another point is that this error does not show up with the "-std=c++03" flag but does show up with the "-std=c++11" flag. However, I don't get the error under either standard when I use g++ (v4.8.5).
This was a change from C++03 to C++14. In C++03, [conv.ptr]p1 says:
A null pointer constant is an integral constant expression rvalue of integer type that evaluates to zero.
A character literal is an integral constant expression.
In C++14, [conv.ptr]p1 says:
A null pointer constant is an integer literal with value zero or a prvalue of type std::nullptr_t.
A character literal is not an integer literal, nor of type std::nullptr_t.
The originally published version of C++11 didn't contain this change; however, it was introduced due to defect report DR903 and incorporated into the standard sometime after January 2013 (the date of the last comment on that DR).
Because the change is the result of a DR, compilers treat it as a bugfix to the existing standard, not part of the next one, and so Clang and GCC both made the behavior change when -std=c++11, not just when -std=c++14. However, apparently this change wasn't implemented in GCC until after version 4.8. (Specifically, it seems to have only been implemented in GCC 7 and up.)
From [conv.ptr]§1:
A null pointer constant is an integer literal with value zero or a prvalue of type std::nullptr_t. [...]
'\0' is not an integer literal; it's a character literal, so the conversion does not apply.
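If the intent was to check for a null pointer or for an empty string, a minimal conforming sketch:

const char *a = "foo";

if (a == nullptr) { /* a is a null pointer */ }
if (*a == '\0')   { /* a points to an empty string; compares characters, not pointers */ }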
int main() {
    const int x = 0;
    int* y = x;   // line 3
    int* z = x+x; // line 4
}
Quoth the standard (C++11 §4.10/1)
A null pointer constant is an integral constant expression (5.19) prvalue of integer type that evaluates to zero or a prvalue of type std::nullptr_t. A null pointer constant can be converted to a pointer type; ...
There are four possibilities:
Line 4 is OK, but line 3 isn't. This is because x and x+x are both constant expressions that evaluate to 0, but only x+x is a prvalue. It appears that gcc takes this interpretation (live demo).
Lines 3 and 4 are both OK. Although x is an lvalue, the lvalue-to-rvalue conversion is applied, giving a prvalue constant expression equal to 0. The clang on my system (clang-3.0) accepts both lines 3 and 4.
Lines 3 and 4 are both not OK. clang-3.4 errors on both lines (live demo).
Line 3 is OK, but line 4 isn't. (Included for the sake of completeness even though no compiler I tried exhibits this behaviour.)
Who is right? Does it depend on which version of the standard we are considering?
The wording in the standard changed as a result of DR 903. The new wording is
A null pointer constant is an integer literal (2.14.2) with value zero or a prvalue of type std::nullptr_t.
Issue 903 involves a curious corner case where it is impossible to produce the "correct" overload resolution in certain cases where a template parameter is a (possibly 0) integer constant.
Apparently a number of possible resolutions were considered, but
There was a strong consensus among the CWG that only the literal 0 should be considered a null pointer constant, not any arbitrary zero-valued constant expression as is currently specified.
So, yes, it depends on whether the compiler has implemented the resolution to DR 903 or not.
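Under the resolved wording, neither x nor x+x is an integer literal, so both line 3 and line 4 are ill-formed; a conforming sketch:

int main() {
    int* y = nullptr;  // or the literal 0, which remains a null pointer constant
    int* z = nullptr;
}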
The following code compiles under gcc 4.8 and Clang 3.2:
int main()
{
    int size = 10;
    int arr[size];
}
8.3.4/1 of the C++ Standard says that the size of an array must be an integral constant expression, which size does not seem to be. Is this a bug in both compilers, or am I missing something?
The latest VC++ CTP rejects the code with this interesting message:
error C2466: cannot allocate an array of constant size 0
The interesting part is how it seems to think that size is zero. But at least it rejects the code. Shouldn't gcc and Clang do the same?
This is a variable length array, or VLA, which is a C99 feature; gcc and clang support it as an extension in C++, while Visual Studio does not. So Visual Studio is adhering to the standard in this case and is technically correct. That is not to say that extensions are bad: the Linux kernel depends on many gcc extensions, so they can be useful in certain contexts.
If you add the -pedantic flag both gcc and clang will warn you about this, for example gcc says (see it live):
warning: ISO C++ forbids variable length array 'arr' [-Wvla]
int arr[size];
^
Using the -pedantic-errors flag will turn this into an error. You can read more about extensions in these documents: Language Standards Supported by GCC and clang's Language Compatibility section.
Update
The draft C++ standard covers what an integral constant expression is in section 5.19 Constant expressions, paragraph 3, which says:
An integral constant expression is an expression of integral or unscoped enumeration type, implicitly converted to a prvalue, where the converted expression is a core constant expression. [...]
It is not intuitively obvious from reading this what all the possibilities are, but Boost's Coding Guidelines for Integral Constant Expressions does a great job of laying them out.
In this case, since you are initializing size with a literal, using const would suffice to make it an integral constant expression (see [expr.const]p2.9.1) and also bring the code back to standard C++:
const int size = 10;
using constexpr would work too:
constexpr int size = 10;
It would probably help to read Difference between constexpr and const.
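If the size is genuinely only known at run time, the idiomatic standard C++ alternative to a VLA is a dynamically sized container; a minimal sketch:

#include <vector>

int main()
{
    int size = 10;               // imagine this value arrives at run time
    std::vector<int> arr(size);  // standard C++; no VLA extension required
}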
For reference the equivalent section to 8.3.4 paragraph 1 in the C99 draft standard would be section 6.7.5.2 Array declarators paragraph 4 which says (emphasis mine):
If the size is not present, the array type is an incomplete type. If the size is * instead of being an expression, the array type is a variable length array type of unspecified size, which can only be used in declarations with function prototype scope;124) such arrays are nonetheless complete types. If the size is an integer constant expression and the element type has a known constant size, the array type is not a variable length array type; otherwise, the array type is a variable length array type.
With the code,
const double rotationStep = 0.001;
const int N = 2*int(M_PI/rotationStep) + 3;
static unsigned int counts[N];
g++ gives the error:
array bound is not an integer constant before »]« token
I am using g++/gcc version 4.6.1
Can anybody tell me why g++ complains about the expression?
As of the ISO C++ standard of 2003, that's not an integral constant-expression. Quoting section 5.19 of the standard:
An integral constant-expression can involve only literals (2.13), enumerators, const variables or static data members of integral or enumeration types initialized with constant expressions (8.5), non-type template parameters of integral or enumeration types, and sizeof expressions. Floating literals (2.13.3) can appear only if they are cast to integral or enumeration types.
You could change this:
const double rotationStep = 0.001;
const int N = 2*int(M_PI/rotationStep) + 3;
to this:
const int inverseRotationStep = 1000;
const int N = 2*int(M_PI)*inverseRotationStep + 3;
(That's assuming M_PI is defined somewhere; it's not specified in the standard, but it's a common extension. Note that the rewrite computes a slightly different value, because int(M_PI) truncates π before scaling; the restructuring is needed because C++03 allows a floating literal in an integral constant expression only if the literal itself is cast to an integral or enumeration type.)
The 2011 ISO C++ standard loosens this up a bit. 5.19p3 (quoting the N3337 draft) says:
An integral constant expression is a literal constant expression of integral or unscoped enumeration type.
I think 2*int(M_PI/rotationStep) + 3, and therefore N, qualifies under the new rules, but it's likely your compiler doesn't yet implement them.
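One way to remove any doubt under C++11 is to declare both variables constexpr, since C++11 permits reading a floating-point variable in a constant expression only if it is declared constexpr. A minimal sketch (assuming M_PI is available, which is itself a common extension):

#include <cmath>  // M_PI is a common extension, not guaranteed by the standard

constexpr double rotationStep = 0.001;
constexpr int N = 2*int(M_PI/rotationStep) + 3;  // evaluated at compile time
static unsigned int counts[N];                   // valid array bound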
The problem is that...
g++ gives: array bound is not an integer constant before »]« token
A const value is not a constant expression (though it's quite understandable why this would confuse you).
EDIT: I assumed C when I first read this. The problem here is that this expression is not being evaluated at compile time:
const int N = 2*int(M_PI/rotationStep) + 3;
While this would be
const int N = 10;
As #ildjarn noted in the comments, floating point arithmetic is not guaranteed to be evaluated at compile time. Here is a related SO post I found.
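A sketch of the contrast under the C++03 rules (assuming M_PI is defined):

const int N1 = 10;                     // initialized with a literal: an integral constant expression
const int N2 = 2*int(M_PI/0.001) + 3;  // initializer involves floating arithmetic: not an ICE

static unsigned int a1[N1];  // OK as an array bound
static unsigned int a2[N2];  // rejected in C++03 mode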
As Ed already pointed out, optimizations of floating point operations, including constant folding, are not guaranteed to happen at compile time. Intel's page on the subject gives a few examples, but mainly it's that the rounding behavior may be different and that floating point operations may throw exceptions. This paper goes a bit more in-depth (section 8.3, "Arithmetic Reduction").
GCC only supports
"floating-point expression contraction such as forming of fused multiply-add operations if the target has native support for them"
as mentioned in the description of the -ffp-contract flag in the compiler optimization options manual.
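For illustration, a sketch of the kind of expression that contraction affects (hypothetical function; compile with, e.g., -ffp-contract=fast on a target with FMA instructions):

double axpy(double a, double x, double y)
{
    return a*x + y;  // may be contracted into a single fused multiply-add
}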