int numeral -> pointer conversion rules - c++

Consider the following code.
void f(double p) {}
void f(double* p) {}

int main()
{
    f(1 - 1);
    return 0;
}
MSVC 2017 doesn't compile this: it reports an ambiguous overloaded call, because 1-1 is the same as 0 and can therefore be converted to double*. Other spellings, like 0x0, 0L, or static_cast<int>(0), do not work either. Even declaring const int Zero = 0 and calling f(Zero) produces the same error. It only compiles if Zero is not const.
It looks like the same issue applies to GCC 5 and below, but not GCC 6. I am curious if this is a part of C++ standard, a known MSVC bug, or a setting in the compiler. A cursory Google did not yield results.

MSVC considers 1-1 to be a null pointer constant. This was correct per the C++03 standard, where all integral constant expressions with value 0 were null pointer constants, but CWG issue 903 changed this so that only zero integer literals are null pointer constants; the resolution landed in C++14. This is a breaking change, as you can see in your example and as is also documented in the standard, see [diff.cpp03.conv] of the C++14 standard (draft N4140).
MSVC applies this change only in conformance mode. So your code will compile with the /permissive- flag, but I think the change was implemented only in MSVC 2019, see here.
In the case of GCC, GCC 5 defaults to C++98 mode, while GCC 6 and later default to C++14 mode, which is why the change in behavior seems to depend on the GCC version.
If you call f with a null pointer constant as the argument, the call is ambiguous, because a null pointer constant can be converted to a null pointer value of any pointer type, and this conversion has the same rank as the conversion from int (or any integral type) to double.
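For illustration, a sketch of how a caller can sidestep the ambiguity (assuming the two overloads from the question):

f(static_cast<double>(1 - 1));    // unambiguous: calls f(double)
f(static_cast<double*>(nullptr)); // unambiguous: calls f(double*)
f(nullptr);                       // unambiguous: nullptr_t converts to double* but not to double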

The compiler works correctly, in accordance with [over.match] and [conv], more specifically [conv.fpint] and [conv.ptr].
A standard conversion sequence is [blah blah] Zero or one [...] floating-integral conversions, pointer conversions, [...].
and
A prvalue of an integer type or of an unscoped enumeration type can be converted to a prvalue of a floating-point type. The result is exact if possible [blah blah]
and
A null pointer constant is an integer literal with value zero or [...]. A null pointer constant can be converted to a pointer type; the result is the null pointer value of that type [blah blah]
Now, overload resolution chooses the best match among all candidate functions (which, as a fun feature, need not even be accessible at the call location!). The best match is the one with exact parameters or, alternatively, the fewest possible conversions. Zero or one standard conversion may happen (for every parameter), and zero is "better" than one.
(1-1) is a constant expression with value 0. It is not an integer literal, but under the pre-C++14 rules described above it still qualifies as a null pointer constant.
You can convert that zero constant to either double or double* (or std::nullptr_t) with exactly one conversion. So, with more than one of these functions declared (as in the example), there is more than a single candidate, all candidates are equally good, and no best match exists. The call is ambiguous, and the compiler is right to complain.
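To see the C++14 rule change in action, a sketch (assuming a conforming C++14 compiler and the overloads from the question):

f(1 - 1); // C++14: not a null pointer constant, calls f(double); pre-C++14 rules: ambiguous
f(0);     // ambiguous in all modes: 0 is an integer literal, hence a null pointer constant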

Related

c++ function call with square parenthesis

#include <iostream>
using namespace std;

int func(int n)
{ return n; }

int main()
{
    cout << func[4];
    cout << func[4,3,5];
}
What do these actually mean? I guess it is about accessing func + 4, as if func were allocated space that func[4] indexes into.
But func[4,3,5] just looks absurd.
The reason this code compiles and func[4] is not a syntax error is:
1. Function types can implicitly convert to pointers of the same type.
So, if we have code like this:
int f(int);
using func_t = int(*)(int);
void g(func_t);
we can write
g(f)
and aren't forced to write g(&f). The &, taking us from type int(int) to int(*)(int), happens implicitly.
2. In C (and necessarily in C++ for compatibility) pointers are connected to arrays, and when p is a pointer, p[x] is the same as *(p + x). So func[4] is the same as *(func + 4).
3. *(p + x) has the type of a function, int(int), but again can implicitly decay to a pointer type whenever necessary. So *(func + 4) can implicitly become (func + 4).
4. Pointers of any type are streamable to std::cout.
Note that just because it isn't a syntax error doesn't mean it is valid. Of course it is undefined behavior, and as the compiler warnings emitted by gcc and clang indicate, pointer arithmetic with a function pointer is generally wrong, because you cannot make an array of functions. The implementation places functions however it likes. (You can make an array of function pointers, but that is something else entirely; see the sketch below.)
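As an aside, the valid alternative mentioned above, an array of function pointers, looks like this (a minimal sketch):

#include <iostream>

int f1(int n) { return n; }
int f2(int n) { return n * 2; }

int main()
{
    int (*table[])(int) = { f1, f2 };  // an array of function pointers, not of functions
    std::cout << table[1](21) << '\n'; // prints 42
}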
Edit: I should correct myself -- this answer is not entirely correct. func[4] is not valid, because the pointer is not a pointer to an object type. holyblackcat's answer is correct; see it for the reference into the standard.
This code should be ill-formed, and gcc only compiles it without an error because they are using a nonstandard extension by default. Clang and msvc correctly reject this code.
I'm surprised no answer mentions it, but:
The code in the question is simply not valid C++.
It's rejected by Clang and MSVC with no flags. GCC rejects it with -pedantic-errors.
a[b] (in absence of operator overloading) is defined as *(a + b), and the builtin operator + requires the pointer operand to be a pointer to an object type (which function pointers are not).
[expr.add]/1
...either both operands shall have arithmetic or unscoped enumeration type, or one operand shall be a pointer to a completely-defined object type and the other shall have integral or unscoped enumeration type.
(Emphasis mine.)
GCC compiles the code because an extension allowing arithmetic on function pointers is enabled by default.
Due to function-to-pointer decay, func[4] is treated as &(&func)[4], which effectively means &func + 4, which (as the link explains) simply adds 4 to the numerical value of the pointer. Calling the resulting pointer will most likely cause a crash or unpredictable results.
std::cout doesn't have an overload of << suitable for printing function pointers, and the best suitable overload the compiler is able to find is the one for printing bools. The pointer gets converted to bool, and since it's non-null, it becomes true, which is then printed as 1.
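A minimal demonstration of that bool conversion (reusing func from the question):

std::cout << func << '\n';                   // function pointer -> bool: prints 1
std::cout << std::boolalpha << func << '\n'; // prints true, making the conversion visible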
Lastly, func[4,3,5] has the same effect as func[5], since in this context , is the comma operator, and x, y evaluates x, discards the result, and yields y.
Since it has not been mentioned yet: func[3, 4, 5] is identical to func[5] - the commas in there are the builtin comma operator which evaluates the left hand side expression, discards it and then evaluates the right hand side expression. There is no function call happening here and the commas in the code are not delimiting function parameters.
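The same comma-operator behavior can be seen outside of subscripting (a trivial sketch):

int x = (4, 3, 5); // x == 5: 4 and 3 are evaluated and discarded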
Yes, it is about accessing func + 4, which is not a defined function, so using it leads to garbage. The compiler indicates this with the following warning message:
Program: In function 'int main()':
Program:7: warning: pointer to a function used in arithmetic

C++11 backwards compatibility (conversion of null integer constant to pointer)

The C++ standard allows the implicit conversion of a zero integer constant to a pointer of any type.
The following code is invalid, because the value v is not constant here:

float* foo()
{
    int v = 0;
    return v; // Error
}

But the following code is correct:

float* foo()
{
    const int v = 0;
    return v; // OK in C++98 mode, error in C++11 mode
}
The question is: why do gcc and clang (I tried different versions) compile the code without complaint in C++98/03 mode but emit a warning/error when compiling in C++11/14 mode (-std=c++11)? I tried to find the change in the C++11 working draft PDF, but had no success.
Intel compiler 16.0 and VS2015 show no errors or warnings in either case.
GCC and Clang behave differently with -std=c++11 because C++11 changed the definition of a null pointer constant, and then C++14 changed it again, see Core DR 903 which changed the rules in C++14 so that only literals are null pointer constants.
In C++03 4.10 [conv.ptr] said:
A null pointer constant is an integral constant expression (5.19) rvalue of integer type that evaluates to zero.
That allows all sorts of expressions, as long as they are constant and evaluate to zero: enumerations, false, (5 - 5), etc. This used to cause lots of problems in C++03 code.
In C++11 it says:
A null pointer constant is an integral constant expression (5.19) prvalue of integer type that evaluates to zero or a prvalue of type std::nullptr_t.
And in C++14 it says:
A null pointer constant is an integer literal (2.14.2) with value zero or a prvalue of type std::nullptr_t.
This is a much more restrictive rule, and makes far more sense.
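To see the practical effect of the three wordings, a sketch (which lines compile depends on the -std= mode and on whether the compiler applies CWG 903 retroactively):

float* a = 0;       // OK in every standard: 0 is an integer literal
float* b = 5 - 5;   // null pointer constant under the C++03/C++11 wording, ill-formed under C++14
float* c = false;   // likewise a constant expression evaluating to zero, but not a literal
const int zero = 0;
float* d = zero;    // the question's case: accepted under the older wording, rejected under C++14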

using Boolean to represent integers

In C++, bool is used to represent a Boolean value, that is, it holds true or false. But in some cases it looks as though we can use bool to represent integers as well.
What is the meaning of bool a = 5; in C++?
"what is the meaning of bool a=5; in c++?"
It's actually equivalent to writing
bool a = (5 != 0);
"But in some case we can use bool to represent integers also."
Not really. The bool value only represents whether the integer used to initialize it was zero (-> false) or not (-> true).
The other way round (as mentioned in Daniel Frey's comment), false will be converted back to integer 0, and true will become 1.
So the original integer value (or any other expression result, such as a pointer other than nullptr, or a double value not exactly representing 0.0) is lost.
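A short round trip makes this visible (a minimal sketch):

#include <iostream>

int main()
{
    bool a = 5;                      // non-zero -> true
    int round_trip = a;              // true -> 1: the original 5 is lost
    std::cout << round_trip << '\n'; // prints 1
}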
Conclusion
As mentioned in LRiO's answer, it's not possible to store information other than false or true in a bool variable.
There are guaranteed rules of conversion though (citation from cppreference.com):
The safe bool problem
Until the introduction of explicit conversion functions in C++11, designing a class that should be usable in boolean contexts (e.g. if(obj) { ... }) presented a problem: given a user-defined conversion function, such as T::operator bool() const;, the implicit conversion sequence allowed one additional standard conversion sequence after that function call, which means the resultant bool could be converted to int, allowing such code as obj << 1; or int i = obj;.
One early solution for this can be seen in std::basic_ios, which defines operator! and operator void* (until C++11), so that the code such as if(std::cin) {...} compiles because void* is convertible to bool, but int n = std::cout; does not compile because void* is not convertible to int. This still allows nonsense code such as delete std::cout; to compile, and many pre-C++11 third party libraries were designed with a more elaborate solution, known as the Safe Bool idiom.
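For comparison, the C++11 approach the quote alludes to, using an explicit conversion function (a minimal sketch with a made-up Handle type):

struct Handle
{
    void* p = nullptr;
    explicit operator bool() const { return p != nullptr; } // C++11
};

int main()
{
    Handle h;
    if (h) { /* contextual conversion to bool is still allowed */ }
    // int n = h; // ill-formed: explicit blocks the onward conversion to int
    bool ok = static_cast<bool>(h); // explicit requests remain fine
    (void)ok;
}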
No, we can't.
It's just that there's an implicit conversion from integer to boolean.
Zero becomes false; anything else becomes true.
In fact this declaration
bool a=5;
is equivalent to
bool a=true;
except that in the first declaration 5, as it is not equal to zero, is implicitly converted to true.
From the C++ Standard (4.12 Boolean conversions):
1 A prvalue of arithmetic, unscoped enumeration, pointer, or pointer to member type can be converted to a prvalue of type bool. A zero value, null pointer value, or null member pointer value is converted to false; any other value is converted to true. For direct-initialization (8.5), a prvalue of type std::nullptr_t can be converted to a prvalue of type bool; the resulting value is false.
One more funny example
bool a = new int;
Of course it does not mean that we may use bool to represent pointers. It is simply that if the allocation is successful, the returned pointer is not equal to zero and is implicitly converted to true, according to the quote I showed.
Take into account that even now some compilers have a bug and compile this code successfully
bool a = nullptr;
though according to the same quote a valid declaration would look like
bool a( nullptr );
And, basically, I would opine that bool b=5 is "just plain wrong."
Both C and C++ have a comparatively weak system of typing. Nevertheless, I'm of the opinion that this should have produced a compile error ... and, if not, certainly a rejected code review, because it is (IMHO) more likely to be an unintentional mistake than intentional.
You should examine the code to determine whether b is or is not "really" being treated as a bool value. Barring some "magic bit-twiddling purpose" (which C/C++ programs are sometimes called upon to serve ...), this is probably a typo or a bug.
Vlad's response (below) shows what the standard says will happen if you do this, but I suggest that, since it does not make human sense to do this, it's probably an error and should be treated accordingly by the team.
A "(typecast)?" Maybe. A data-type mismatch such as this? Nyet.

std::nullptr_t arguments in variadic functions

So apparently a std::nullptr_t argument is converted to a null pointer of type void * (Section 5.2.2/7 of N3337) when it is passed to the ellipsis (...) of a variadic function, without a matching parameter. This means that to properly pass a null char * pointer, for example, a cast is still needed:
some_variadic_function("a", "b", "c", (const char *) nullptr);
since there is no guarantee that a null void * has the same bit pattern as a null char *. Correct?
This also means that there is no advantage to nullptr over 0 in such cases, except perhaps for clarity.
You ask:
since there is no guarantee that a null void * has the same bit pattern as a null char *. Correct?
Well, actually, that guarantee does exist, Deduplicator's answer already shows where the standard requires this. But that is not relevant to your question.
Passing void * to a variadic function, and accessing it using va_arg as char *, is specifically allowed as a special exception.
C++11:
18.10 Other runtime support [support.runtime]
1 Headers <csetjmp> (nonlocal jumps), <csignal> (signal handling), <cstdalign> (alignment), <cstdarg> (variable arguments), <cstdbool> (__bool_true_false_are_defined), <cstdlib> (runtime environment getenv(), system()), and <ctime> (system clock clock(), time()) provide further compatibility with C code.
2 The contents of these headers are the same as the Standard C library headers <setjmp.h>, <signal.h>, <stdalign.h>, <stdarg.h>, <stdbool.h>, <stdlib.h>, and <time.h>, respectively, with the following changes:
[... nothing about va_arg]
C99:
7.15.1.1 The va_arg macro
[...] If there is no actual next argument, or if type is not compatible with the type of the actual next argument (as promoted according to the default argument promotions), the behavior is undefined, except for the following cases:
-- one type is a signed integer type, the other type is the corresponding unsigned integer type, and the value is representable in both types;
-- one type is pointer to void and the other is a pointer to a character type.
However, this does mean that in other cases where two types T1 and T2 have the same representation and alignment requirements, the behaviour is undefined if T1 is passed to a variadic function, and it is retrieved as T2.
An example of this: passing (void *) 0 and accessing it as char * is allowed, passing (void *) 0 and accessing it as unsigned char * is also allowed, but passing (char *) 0 and accessing it as unsigned char * is not allowed. If a compiler is capable of inlining calls to variadic functions, and optimises based on the strict requirements of the standard, such mismatches could break badly.
This also means that there is no advantage to nullptr over 0 in such cases, except perhaps for clarity.
I would definitely not use nullptr without casting it, even though in this one special case it is valid. It is far too hard to see that it is valid. And if a cast is included anyway, (char *) 0 is just as clear as a null pointer value.
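To make the special case concrete, a sketch (print_all is a made-up function; the sentinel is cast as advised above):

#include <cstdarg>
#include <cstdio>

// Made-up variadic function: prints C strings until a null pointer sentinel.
void print_all(const char* first, ...)
{
    va_list args;
    va_start(args, first);
    for (const char* s = first; s != nullptr; s = va_arg(args, const char*))
        std::puts(s);
    va_end(args);
}

int main()
{
    print_all("a", "b", "c", (const char*)nullptr);
}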
You are wrong. One of the few guarantees is that a char* has the same size and representation as the corresponding void*.
3.9.2 Compound Types §4
A pointer to cv-qualified (3.9.3) or cv-unqualified void can be used to point to objects of unknown type. Such a pointer shall be able to hold any object pointer. An object of type cv void* shall have the same representation and alignment requirements as cv char*.
Edit: Looks like this answer by hvd is better, showing a few more traps specific to the variadic function part of the question.

Why is it okay to compare a pointer with '\0'? (but not 'A')

I found a bug in my code where I compared the pointer with '\0'.
Wondering why the compiler didn't warn me about this bug I tried the following.
#include <cassert>

struct Foo
{
    char bar[5];
};

int main()
{
    Foo f;
    Foo* p = &f;
    p->bar[0] = '\0';
    assert(p->bar == '\0');    // #1. I forgot []. Comparing a pointer with NULL; fails.
    assert(p->bar == 'A');     // #2. error: ISO C++ forbids comparison between pointer and integer
    assert(p->bar[0] == '\0'); // #3. What I intended; passes.
    return 0;
}
What is special about '\0' which makes #1 legal and #2 illegal?
Please add a reference or quotation to your answer.
What makes it legal and well defined is the fact that '\0' is a null pointer constant so it can be converted to any pointer type to make a null pointer value.
ISO/IEC 14882:2011 4.10 [conv.ptr] / 1:
A null pointer constant is an integral constant expression prvalue of integer type that evaluates to zero or a prvalue of type std::nullptr_t. A null pointer constant can be converted to a pointer type; the result is the null pointer value of that type and is distinguishable from every other value of object pointer or function pointer type. Such a conversion is called a null pointer conversion.
'\0' meets the requirements of "integral constant expression prvalue of integer type that evaluates to zero" because char is an integer type and \0 has the value zero.
Other integers can only be explicitly converted to a pointer type via a reinterpret_cast and the result is only meaningful if the integer was the result of converting a valid pointer to an integer type of sufficient size.
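For contrast, a sketch under the C++11 wording quoted above (note that C++14 later narrowed null pointer constants to integer literals, so '\0' no longer qualifies there):

char* p1 = '\0';                          // OK under C++11: '\0' is a null pointer constant
// char* p2 = 'A';                        // ill-formed: no implicit integer-to-pointer conversion
char* p3 = reinterpret_cast<char*>('A');  // compiles, but only meaningful if the value
                                          // came from converting a real pointer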
'\0' is simply a different way of writing 0. I would guess that this is legal because comparing pointers to 0 makes sense no matter how you wrote the 0, while there is almost never any valid meaning in comparing a pointer to any other non-pointer type.
This is a design error of C++. The rule says that any integer constant expression with value zero can be considered a null pointer constant.
This highly questionable decision allows '\0' to be used as a null pointer (as you found), but also things like (1==2) or even !!!!!!!!!!!1 (an example similar to one present in "The C++ Programming Language"; no idea if Stroustrup thinks this is indeed a "cool" feature).
This ambiguity IMO even creates a loophole in the language definition when mixed with the ternary operator's semantics and the implicit conversion rules: I remember finding a case in which, out of three compilers, one refused to compile the code and the other two compiled it with different semantics ... and after wasting a day reading the standard and asking experts on comp.lang.c++.moderated, I was not able to decide which of the three compilers was right.