Checking for NULL pointer in C/C++ [closed] - c++

In a recent code review, a contributor is trying to enforce that all NULL checks on pointers be performed in the following manner:
int * some_ptr;
// ...
if (some_ptr == NULL)
{
// Handle null-pointer error
}
else
{
// Proceed
}
instead of
int * some_ptr;
// ...
if (some_ptr)
{
// Proceed
}
else
{
// Handle null-pointer error
}
I agree that his way is a little more clear in the sense that it's explicitly saying "Make sure this pointer is not NULL", but I would counter that by saying that anyone who's working on this code would understand that using a pointer variable in an if statement is implicitly checking for NULL. Also I feel the second method has a smaller chance of introducing a bug of the ilk:
if (some_ptr = NULL)
which is just an absolute pain to find and debug.
Which way do you prefer and why?

In my experience, tests of the form if (ptr) or if (!ptr) are preferred. They do not depend on the definition of the symbol NULL. They do not expose the opportunity for the accidental assignment. And they are clear and succinct.
Edit: As SoapBox points out in a comment, they are compatible with C++ classes such as unique_ptr, shared_ptr, auto_ptr that are objects that act as pointers and which provide a conversion to bool to enable exactly this idiom. For these objects, an explicit comparison to NULL would have to invoke a conversion to pointer which may have other semantic side effects or be more expensive than the simple existence check that the bool conversion implies.
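For example, a minimal sketch of that idiom with std::unique_ptr (my own illustration, not from SoapBox's comment):
#include <cstdio>
#include <memory>

// The same if (ptr) test works for smart pointers via their explicit operator bool.
void demo(const std::unique_ptr<int>& p)
{
    if (p)
        std::printf("value: %d\n", *p);
    else
        std::printf("empty\n");
}

int main()
{
    demo(std::make_unique<int>(42));  // prints "value: 42"
    demo(nullptr);                    // prints "empty"
    return 0;
}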
I have a preference for code that says what it means without unneeded text. if (ptr != NULL) has the same meaning as if (ptr) but at the cost of redundant specificity. The next logical thing is to write if ((ptr != NULL) == TRUE) and that way lies madness. The C language is clear that a boolean tested by if, while or the like has a specific meaning of non-zero value is true and zero is false. Redundancy does not make it clearer.

if (foo) is clear enough. Use it.

I'll start off with this: consistency is king, the decision is less important than the consistency in your code base.
In C++
NULL is defined as 0 or 0L in C++.
If you've read The C++ Programming Language, Bjarne Stroustrup suggests using 0 explicitly rather than the NULL macro when assigning. I'm not sure whether he did the same with comparisons; it's been a while since I read the book, but I think he just wrote if (some_ptr) without an explicit comparison, though I am fuzzy on that.
The reason for this is that the NULL macro is deceptive (as nearly all macros are): it is actually the literal 0, not a unique type as the name suggests. Avoiding macros is one of the general guidelines in C++. On the other hand, 0 looks like an integer, and it is not one when compared to or assigned to pointers. Personally I could go either way, but I typically skip the explicit comparison (though some people dislike this, which is probably why you have a contributor suggesting a change in the first place).
Regardless of personal feelings this is largely a choice of least evil as there isn't one right method.
This is clear and a common idiom and I prefer it, there is no chance of accidentally assigning a value during the comparison and it reads clearly:
if (some_ptr) {}
This is clear if you know that some_ptr is a pointer type, but it may also look like an integer comparison:
if (some_ptr != 0) {}
This is clear-ish, in common cases it makes sense... But it's a leaky abstraction, NULL is actually 0 literal and could end up being misused easily:
if (some_ptr != NULL) {}
C++11 has nullptr which is now the preferred method as it is explicit and accurate, just be careful about accidental assignment:
if (some_ptr != nullptr) {}
Until you are able to migrate to C++11 I would argue it's a waste of time worrying about which of these methods you use; they are all insufficient, which is why nullptr was invented (along with the generic-programming issues that perfect forwarding brought up). The most important thing is to maintain consistency.
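For illustration, a quick sketch of the overload ambiguity that nullptr resolves (foo here is a hypothetical function of my own, not from the question):
#include <cstddef>

void foo(char*) {}
void foo(int)   {}

int main()
{
    foo(0);        // always calls foo(int)
    // foo(NULL);  // calls foo(int) if NULL is plain 0, ambiguous if it is 0L;
                   // either way, never the pointer overload you probably meant
    foo(nullptr);  // C++11: unambiguously calls foo(char*)
    return 0;
}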
In C
C is a different beast.
In C, NULL can be defined as 0 or as ((void *)0); the standard allows any implementation-defined null pointer constant. So it actually comes down to the implementation's definition of NULL, and you will have to inspect it in your standard library.
Macros are very common and in general they are used a lot to make up for deficiencies in generic programming support in the language and other things as well. The language is much simpler and reliance on the preprocessor more common.
From this perspective I'd probably recommend using the NULL macro definition in C.

I use if (ptr), but this is completely not worth arguing about.
I like my way because it's concise, though others say == NULL makes it easier to read and more explicit. I see where they're coming from, I just disagree the extra stuff makes it any easier. (I hate the macro, so I'm biased.) Up to you.
I disagree with your argument. If you're not getting warnings for assignments in a conditional, you need to turn your warning levels up. Simple as that. (And for the love of all that is good, don't switch them around.)
Note in C++0x, we can do if (ptr == nullptr), which to me does read nicer. (Again, I hate the macro. But nullptr is nice.) I still do if (ptr), though, just because it's what I'm used to.
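For illustration, a small sketch of the kind of accidental assignment those warnings catch, assuming GCC or Clang with -Wall (the exact diagnostic text varies by compiler):
// Build with: g++ -Wall example.cpp
// The compiler warns about the assignment used as a condition below.
int main()
{
    int* some_ptr = nullptr;
    if (some_ptr = nullptr) {   // typo: '=' instead of '=='; -Wall flags this line
        return 1;
    }
    return 0;
}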

Frankly, I don't see why it matters. Either one is quite clear and anyone moderately experienced with C or C++ should understand both. One comment, though:
If you plan to recognize the error and not continue executing the function (i.e., you are going to throw an exception or return an error code immediately), you should make it a guard clause:
int f(void* p)
{
if (!p) { return -1; }
// p is not null
return 0;
}
This way, you avoid "arrow code."

Personally I've always used if (ptr == NULL) because it makes my intent explicit, but at this point it's just a habit.
Using = in place of == will be caught by any competent compiler with the correct warning settings.
The important point is to pick a consistent style for your group and stick to it. No matter which way you go, you'll eventually get used to it, and the loss of friction when working in other people's code will be welcome.

Just one more point in favor of the foo == NULL practice:
If foo is, say, an int * or a bool *, then the if (foo) check can accidentally be interpreted by a reader as testing the value of the pointee, i.e. as if (*foo). The NULL comparison here is a reminder that we're talking about a pointer.
But I suppose a good naming convention makes this argument moot.

The C Programming Language (K&R) would have you check for null == ptr to avoid an accidental assignment.

Actually, I use both variants.
There are situations where you first check the validity of a pointer, and if it is NULL, you just return/exit out of the function. (I know this can lead to the discussion "should a function have only one exit point".)
Most of the time, you check the pointer, then do what you want, and then resolve the error case. The result can be ugly, multiply nested code with multiple ifs.

If style and format are going to be part of your reviews, there should be an agreed upon style guide to measure against. If there is one, do what the style guide says. If there's not one, details like this should be left as they are written. It's a waste of time and energy, and distracts from what code reviews really ought to be uncovering. Seriously, without a style guide I would push to NOT change code like this as a matter of principle, even when it doesn't use the convention I prefer.
And not that it matters, but my personal preference is if (ptr). The meaning is more immediately obvious to me than even if (ptr == NULL).
Maybe he's trying to say that it's better to handle error conditions before the happy path? In that case I still don't agree with the reviewer. I don't know that there's an accepted convention for this, but in my opinion the most "normal" condition ought to come first in any if statement. That way I've got less digging to do to figure out what the function is all about and how it works.
The exception to this is if the error causes me to bail from the function, or I can recover from it before moving on. In those cases, I do handle the error first:
if (error_condition)
{
    bool fixed = bail_or_fix();  // assuming bail_or_fix() reports whether it recovered
    if (!fixed)
        return;
}
// If I'm still here, I'm on the happy path
By dealing with the unusual condition up front, I can take care of it and then forget about it. But if I can't get back on the happy path by handling it up front, then it should be handled after the main case because it makes the code more understandable. In my opinion.
But if it's not in a style guide then it's just my opinion, and your opinion is just as valid. Either standardize or don't. Don't let a reviewer pseudo-standardize just because he's got an opinion.

It is one of the fundamentals of both languages that a pointer can be used directly as a control expression: it converts to bool in C++, and in C any scalar value (including a pointer) is simply tested against zero. Just use it.

I'm a huge fan of the fact that C/C++ doesn't check types in the boolean conditions in if, for and while statements. I always use the following:
if (ptr)
if (!ptr)
even on integers or other type that converts to bool:
while(i--)
{
// Something to do i times
}
while(cin >> a >> b)
{
// Do something while you've input
}
Coding in this style is more readable and clearer to me. Just my personal opinion.
Recently, while working on OKI 431 microcontroller, I've noticed that the following:
unsigned char chx;
if (chx) // ...
is more efficient than
if (chx == 1) // ...
because in the latter case the compiler has to compare the value of chx to 1, even though chx is just a true/false flag.

Pointers are not booleans
Modern C/C++ compilers emit a warning when you write if (foo = bar) by accident.
Therefore I prefer
if (foo == NULL)
{
// null case
}
else
{
// non null case
}
or
if (foo != NULL)
{
// non null case
}
else
{
// null case
}
However, if I were writing a set of style guidelines I would not be putting things like this in it, I would be putting things like:
Make sure you do a null check on the pointer.

Most compilers I've used will at least warn on the if assignment without further syntax sugar, so I don't buy that argument. That said, I've used both professionally and have no preference for either. The == NULL is definitely clearer though in my opinion.

Related

How to force a compile error in C++(17) if a function return value isn't checked? Ideally through the type system

We are writing safety-critical code and I'd like a stronger way than [[nodiscard]] to ensure that checking of function return values is caught by the compiler.
[Update]
Thanks for all the discussion in the comments. Let me clarify that this question may seem contrived, or not "typical use case", or not how someone else would do it. Please take this as an academic exercise if that makes it easier to ignore "well why don't you just do it this way?". The question is exactly whether it's possible to create a type(s) that fails compiling if it is not assigned to an l-value as the return result of a function call .
I know about [[nodiscard]], warnings-as-errors, and exceptions, and this question asks if it's possible to achieve something similar, that is a compile time error, not something caught at run-time. I'm beginning to suspect it's not possible, and so any explanation why is very much appreciated.
Constraints:
MSVC++ 2019
Something that doesn't rely on warnings
Warnings-as-Errors also doesn't work
It's not feasible to constantly run static analysis
Macros are OK
Not a runtime check, but caught by the compiler
Not exception-based
I've been trying to think how to create a type(s) that, if it's not assigned to a variable from a function return, the compiler flags an error.
Example:
struct MustCheck
{
bool success;
...???...
};
MustCheck DoSomething( args )
{
...
return MustCheck{true};
}
int main(void) {
MustCheck res = DoSomething(blah);
if( !res.success ) { exit(-1); }
DoSomething( bloop ); // <------- compiler error
}
If such a thing is provably impossible through the type system, I'll also accept that answer ;)
(EDIT) Note 1: I have been thinking about your problem and reached the conclusion that the question is ill-posed. It is not clear what you are looking for because of a small detail: what counts as checking? How do the checks compose, and how far from the point of the call may they happen?
For example, does this count as checking? Note that composition with boolean values (results) and/or other runtime variables matters.
bool b = true; // for example
auto res1 = DoSomething1(blah);
auto res2 = DoSomething2(blah);
if((res1 and res2) or b){...handle error...};
The composition with other runtime variables makes it impossible to make any guarantee at compile-time and for composition with other "results" you will have to exclude certain logical operators, like OR or XOR.
(EDIT) Note 2: I should have asked before, but 1) if the handling is supposed to always abort: why not abort from the DoSomething function directly? 2) if the handling does a specific action on failure, then pass it as a lambda to DoSomething (after all, you control what it returns and what it takes). 3) composition or propagation of failures is the only non-trivial case, and it is not well defined in your question.
Below is the original answer.
This doesn't fulfill all the (edited) requirements you have (I think they are excessive) but I think this is the only path forward really.
Below my comments.
As you hinted, for doing this at runtime there are recipes online about "exploding" types (they assert/abort on destruction if they were not checked, tracked by an internal flag).
Note that this doesn't use exceptions (but it is runtime and it is not that bad if you test the code often, it is after all a logical error).
For compile time it is trickier: returning (for example) a bool with [[nodiscard]] is not enough, because there are ways of not discarding without checking, for example assigning it to a (bool) variable.
I think the next layer is to activate -Wunused-variable -Wunused-expression -Wunused-parameter (and treat them as errors, -Werror=...).
Then it is much harder not to check the bool, because comparison is pretty much the only operation you can really do with a bool.
(You can assign to another bool but then you will have to use that variable).
I guess that's quite enough.
There are still Machiavellian ways to mark a variable as used.
For that you can invent a bool-like type (class) that 1) is [[nodiscard]] itself (classes can be marked nodiscard), and 2) supports only ==(bool) and !=(bool) (maybe it is not even copyable), and return that from your function. (As a bonus, you don't need to mark your function as [[nodiscard]] because it is automatic.)
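For concreteness, here is a minimal, untested sketch of such a CheckedBool in the same spirit as the rest of this answer (the exact names and operations are placeholders):
// Sketch: a [[nodiscard]] bool-like result type; the attribute on the class means
// every function returning it by value behaves as if it were [[nodiscard]].
struct [[nodiscard]] CheckedBool
{
    explicit CheckedBool(bool v) : value(v) {}

    // The only supported operations: comparison against a bool.
    friend bool operator==(CheckedBool r, bool b) { return r.value == b; }
    friend bool operator!=(CheckedBool r, bool b) { return r.value != b; }

private:
    bool value;
};

CheckedBool DoSomething()
{
    return CheckedBool{true};
}

int main()
{
    DoSomething();                    // warning: result discarded (an error under -Werror or /WX)
    if (DoSomething() == false) {     // the intended usage pattern
        return -1;
    }
    return 0;
}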
I guess it is impossible to avoid something like (void)b; but that in itself becomes a flag.
Even if you cannot avoid the absence of checking, you can force patterns that will immediately raise eyebrows at least.
You can even combine the runtime and compile time strategy.
(Make CheckedBool exploding.)
This will cover so many cases that you have to be happy at this point.
If compiler flags don’t protect you, you will still have a backup that can be detected in unit tests (regardless of taking the error path!).
(And don’t tell me now that you don’t unit test critical code.)
What you want is a special case of substructural types. Rust is famous for implementing a special case called "affine" types, where you can "use" something "at most once". Here, you instead want "relevant" types, where you have to use something at least once.
C++ has no official built-in support for such things. Maybe we can fake it? I thought not. In the "appendix" to this answer I include my original logic for why I thought so. Meanwhile, here's how to do it.
(Note: I have not tested any of this; I have not written any C++ in years; use at your own risk.)
First, we create a protected destructor in MustCheck. Thus, if we simply ignore the return value, we will get an error. But how do we avoid getting an error if we don't ignore the return value? Something like this.
(This looks scary: don't worry, we wrap most of it in a macro.)
int main(){
struct Temp123 : MustCheck {
void f() {
MustCheck* mc = this;
*mc = DoSomething();
}
} res;
res.f();
if(!res.success) { /* print "oops" */ }
}
Okay, that looks horrible, but after defining a suitable macro, we get:
int main(){
CAPTURE_RESULT(res, DoSomething());
if(!res.success) { /* print "oops" */ }
}
I leave the macro as an exercise to the reader, but it should be doable. You should probably use __LINE__ or something to generate the name Temp123, but it shouldn't be too hard.
Disclaimer
Note that this is all sorts of hacky and terrible, and you likely don't want to actually use this. Using [[nodiscard]] has the advantage of allowing you to use natural return types, instead of this MustCheck thing. That means that you can create a function, and then one year later add nodiscard, and you only have to fix the callers that did the wrong thing. If you migrate to MustCheck, you have to migrate all the callers, even those that did the right thing.
Another problem with this approach is that it is unreadable without macros, but IDEs can't follow macros very well. If you really care about avoiding bugs then it really helps if your IDE and other static analyzers understand your code as well as possible.
As mentioned in the comments you can use [[nodiscard]] as per:
https://learn.microsoft.com/en-us/cpp/cpp/attributes?view=msvc-160
And modify to use this warning as compile error:
https://learn.microsoft.com/en-us/cpp/preprocessor/warning?view=msvc-160
That should cover your use case.
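For example, a small sketch of that combination; I am assuming the relevant MSVC warning is C4834 ("discarding return value of function with 'nodiscard' attribute"), so check the exact number against your toolset:
// Promote the "nodiscard result discarded" diagnostic to a hard error (MSVC).
#pragma warning(error : 4834)

[[nodiscard]] bool DoSomething()
{
    return true;
}

int main()
{
    DoSomething();               // compile error under the pragma above
    bool ok = DoSomething();     // fine: the result is used
    return ok ? 0 : -1;
}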

difference between if(pointer) vs if(pointer != NULL) in c++, cpplint issue

I already checked this post Can I use if (pointer) instead of if (pointer != NULL)? and some other posts on the net, but they do not state any difference between the two statements.
Problem: As I run cpplint.py on my cpp code, I found issues where I check pointers for NULL.
I preferred to check using simple
if(pointer) //statement1
but cpplint says you should check like
if(pointer != NULL) //statement2
So I just want to know: are there any benefits of statement2 over statement1? Are there some scenarios in which statement1 may create a problem?
Working: As far as I know, there is no difference in the behavior of the two statements; it's just a change of coding style.
I prefer to use statement1, because:
It's simple and readable
There is no risk of accidentally writing assignment (=) instead of equality (==) in the comparison
But cpplint is raising this as issue, then there might be some benefit that I missed.
Note: Java also doesn't support statement1.
No, if pointer is really a pointer type there is no difference, so everything here is a question of coding style. Coding style in turn depends on habits in different communities so there can't be a general recommendation.
I personally prefer the first because it is shorter and more to the point and avoids the use of the bogus macro NULL.
In C, NULL can be very different things (an integer or a pointer), and in C++ its use is discouraged nowadays. You should at least use nullptr there.
You are using Hungarian notation, where it's possible to tell if a variable is a pointer. As long as it is - either native or smart - there's no difference. However, when someone changes it to another indirect type (e.g., std::optional<>), then the second will fail. So my suggestion is to keep on using the first: it's not Java, it's C++.
In C++, assuming ptr is a pointer, the comparisons if (ptr) and if (ptr != NULL) are functionally equivalent.
In C++11 and later, it is often considered preferable to use the alternative if (ptr != nullptr).
For a simple check of a pointer, the differences in these options are really stylistic. The mechanisms might differ slightly, but the end result is the same.
cpplint, like most automated checkers, tends to - by default - complain about breaches of some style guidelines more than others. Whether any particular set of guidelines is right or wrong depends on what is needed for your project.
For class types that can sensibly be compared with a pointer (e.g. smart pointer types) the preferred test depends on what set of operations (comparison operators, implicit conversions, etc) that type supports.
In C, consider:
int *ptr=malloc(10*sizeof *ptr);
free(ptr); // though the memory is freed, the ptr is not auto-set to NULL
if (ptr)
{
printf ("ptr is not null\n");
}
So you are expected to put
ptr=NULL; // ptr is explicitly made to point at nothing
// The above step is mandatory.
after the free.
So for the test in the if statement, one might recommend doing
if ( ptr == NULL ) // This is mostly a coding style & improves readability?
or better
if ( NULL == ptr ) // less chances of error
Well, the [site] says about cpplint that it is:
An automated checker to make sure a C++ file follows Google's C++ style guide
So again, it is somebody's style that matters. Say, if you contribute to somebody's code at Google, they expect you to follow this style because it facilitates easy collaboration.
There is one scenario that may create a problem using statement1.
Consider the following code which could have two different meanings.
bool* is_done = ...;
// Is this checking if `is_done` is not null, or actually
// intended to check if `*is_done` is true?
if (is_done) {
...
}
If you intended to do a null check, you're fine. But if your original intent is to check if *is_done is true but missed an asterisk by accident, this code may result in a totally unwanted behavior and require you to spend X hours to figure out the culprit.
This could have been avoided by writing the check explicitly, like
// Now this results in a compile error and forces you to write
// `*is_done` instead.
if (is_done == true) {
...
}
This is applicable to any types that could be implicitly converted to bool like std::unique_ptr.
Someone may argue that the above case is too rare and still prefer the statement1 in favor of simplicity. I think it is fair and both styles are acceptable. But some organizations, like Google, may encourage you to follow their coding style to keep the lesson they previously learned.
There is no difference between if(pointer) and if(pointer != NULL); if(pointer) is simply the shorter way to write it.

What is the difference between these (bCondition == NULL) and (NULL==bCondition)? [duplicate]

While exploring MSDN sites, I noticed that in most places where conditions are checked they use (NULL == bCondition).
What is the purpose of using this notation?
Please provide some sample to explain it.
Thanks.
The use of NULL == condition provides more useful behaviour in the case of a typo, when an assignment operator = is accidentally used rather than the comparison operator ==:
if (bCondition = NULL) // typo here
{
// code never executes
}
if (NULL = bCondition) // error -> compiler complains
{
// ...
}
A C compiler gives a warning in the former case; many other languages have no such warning.
It's called a Yoda Condition. (The original link requires high reputation to see.)
It's meant to guard against accidental assignment = in conditions where an equality comparison == was intended. If you stick to Yoda by habit and make a typo by writing = instead of ==, the code will fail to compile because you cannot assign to an rvalue.
Is it worth the awkwardness? Some disagree, saying that compilers do issue a warning when they see = in conditional expressions. I have made this mistake only two or three times in my life, which does not justify changing all the MLOCs I have written to this convention.
There is no difference. It is an ancient way of defensive programming that has been obsolete for over 20 years. The purpose was to protect from accidentally typing = instead of == when comparing two values. Pascal programmers migrating to C were especially prone to write this bug.
From Borland Turbo C released in 1990 and forward, every known compiler warns against "possibly incorrect assignment", when you manage to type out this bug.
So writing (NULL == bCondition) is not better or worse practice than the opposite, unless your compiler is extremely ancient. You don't need to bother about writing them in any particular order.
What you should bother with is to adopt a coding style where you never write assignments inside if/loop conditions (a small sketch follows the references below). There is never a reason to do so. It is a completely superfluous, risky and ugly feature of the C language. All industry de-facto coding standards ban assignment inside conditions.
References:
MISRA C:2004 13.1
CERT C EXP18-C
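A minimal sketch of the hoisted-assignment style those guidelines point toward, using the classic getchar loop as my own illustration:
#include <cstdio>

int main()
{
    // Instead of: while ((c = std::getchar()) != EOF) { ... }
    int c = std::getchar();
    while (c != EOF) {
        std::putchar(c);       // process c ...
        c = std::getchar();    // the assignment lives on its own line, not in the condition
    }
    return 0;
}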
Many people prefer writing NULL == bCondition so that they don't accidentally assign the NULL value to bCondition.
Because of typo it happens that instead of writing
bCondition == NULL
they end up writing
bCondition = NULL // the value will be assigned here.
In case of
NULL = bCondition // there will be an error
It's simply a good defensive measure. Some may also find it more convenient to read. In case of a mistyped assignment instead of the equality operator, the compiler will attempt to modify NULL, which is not an lvalue and will produce an error message. When using bCondition = NULL, the compiler might produce a warning about using an assignment as a truth value, a message which can get lost and go unnoticed.
While usually there is no difference between variable == value and value == variable, and in principle there shouldn't be, in C++ there sometimes can be a difference in the general case if operator overloading is involved. For example, although == is expected to be symmetric, someone could write a pathological implementation that isn't.
Microsoft's _bstr_t string class suffers from an asymmetry problem in its operator== implementation.

How to define NULL using #define

I want to redefine NULL in my program, like this:
#define MYNULL ((void*)0)
But this definition is not working in the following statement:
char *ch = MYNULL;
Error : can not convert from void* to char *
What would be the best way to define NULL?
#define MYNULL NULL
is the safest. I see no reason for doing so, but if you really want to, go ahead.
Here's how C and C++ do it respectively:
#define NULL 0 //C++
#define NULL ((void*)0) //C
Generally speaking, defining 0 for NULL is a bad habit; you actually want it to be part of the language. C++0x addresses this.
This is what Bjarne Stroustrup has to say on this:
Should I use NULL or 0?
In C++, the definition of NULL is 0, so there is only an aesthetic difference. I prefer to avoid macros, so I use 0. Another problem with NULL is that people sometimes mistakenly believe that it is different from 0 and/or not an integer. In pre-standard code, NULL was/is sometimes defined to something unsuitable and therefore had/has to be avoided. That's less common these days.
If you have to name the null pointer, call it nullptr; that's what it's called in C++11. Then, "nullptr" will be a keyword.
#ifdef __cplusplus
#define MYNULL 0
#else
#define MYNULL ((void*)0)
#endif
will work in both of them.
What exactly is the problem with getting your NULL from where you're supposed to? I.e.,
#include <stddef.h>
or
#include <cstddef>
As alluded to in @Johannes Rudolph's answer, any trickery you do is not likely to be very future-proof in the face of things like nullptr etc.
EDIT: while stdlib (and many others) are mandated to include a NULL, stddef is the most canonical header [and has been for decades].
PS In general, it's just a bad idea to get involved in this sort of trickery unless you have a really good reason. You didn't expand on the thinking that led you to feeling the need to do this. If you could add some detail on that, it's likely to lead to better answers. Other people answering the question should have pointed this out in their answers too, but I guess FGITW does as FGITW does best :D
EDIT 2: As pointed out by @Yossarian: the single justification for doing this is if there isn't a NULL defined in an appropriately language-agnostic form elsewhere in your system. Naked compilers with no headers and/or writing your own custom standard library from scratch are examples of such a circumstance. (In such a bare-bones scenario, I'd go with @lilburne's answer: be sure to use 0 as much as possible.)
#define MYNULL 0
will work in C++
Don't do this. There is nothing that says that NULL has to be the value zero, it's implementation specific.
It could be a value that represents the end of memory, some special place in memory, or even an object that represents no value exists.
Doing this is very dangerous, may break portability, and will most certainly screw with code-aware editors. It isn't buying you anything, trust your library's definition.
EDIT: Evan is correct! The code itself will say zero, under the hood the compiler can do what it wants with implementation specific details. Thanks Evan!
I think that anyone that doesn't know that setting a pointer in C/C++ to 0 is the same as setting it to NULL, nullptr, or any other equivalent shouldn't be messing with code. The difference in readability between
char* ch = NULL;
and
char* ch = 0;
is minimal. When it comes to expressions the forms
if (NULL == ch) {
}
if (0 == ch) {
}
if (nullptr == ch) {
}
are no more readable than
if (!ch) {
}
In contrast to what some people state here, 0 is a perfectly valid definition for NULL in C. Thus you have to be careful when you give NULL as an argument to a variadic function, because it may be mistaken as the integer value 0, ending in non-portability.
http://c-faq.com/null/null2.html
BTW, the comp.lang.c FAQ is a highly recommended read for every C programmer. See for example here:
http://c-faq.com/null/null1.html
containing such gems of nearly-forgotten wisdom like "As mentioned above, there is a null pointer for each pointer type, and the internal values of null pointers for different types may be different." Which means that calloc or memset are NOT a portable initialization for pointers.
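As a quick sketch of the portable alternative (written as C++ here, but the same point holds in C):
#include <cstring>

struct Node {
    Node* next;
    int value;
};

int main()
{
    Node n;
    std::memset(&n, 0, sizeof n);  // all-bits-zero: NOT guaranteed to make n.next a null pointer
    n.next = nullptr;              // portable: assign the null pointer constant explicitly
    n.value = 0;
    return 0;
}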
#define NULL 0 //for C
is a valid definition of NULL in C.
E.g.:
char *ch = NULL;
*ch++; // error: dereferences a null pointer
The increment statement is an error because ch points to nothing: dereferencing a null pointer is undefined behavior. On typical systems the page at address 0 is deliberately left unmapped, so such an access faults immediately instead of silently corrupting memory.

Do you use NULL or 0 (zero) for pointers in C++?

In the early days of C++ when it was bolted on top of C, you could not use NULL as it was defined as (void*)0. You could not assign NULL to any pointer other than void*, which made it kind of useless. Back in those days, it was accepted that you used 0 (zero) for null pointers.
To this day, I have continued to use zero as a null pointer but those around me insist on using NULL. I personally do not see any benefit to giving a name (NULL) to an existing value - and since I also like to test pointers as truth values:
if (p && !q)
do_something();
then using zero makes more sense (as in if you use NULL, you cannot logically use p && !q - you need to explicitly compare against NULL, unless you assume NULL is zero, in which case why use NULL).
Is there any objective reason to prefer zero over NULL (or vice versa), or is all just personal preference?
Edit: I should add (and meant to originally say) that with RAII and exceptions, I rarely use zero/NULL pointers, but sometimes you do need them still.
Here's Stroustrup's take on this: C++ Style and Technique FAQ
In C++, the definition of NULL is 0, so there is only an aesthetic difference. I prefer to avoid macros, so I use 0. Another problem with NULL is that people sometimes mistakenly believe that it is different from 0 and/or not an integer. In pre-standard code, NULL was/is sometimes defined to something unsuitable and therefore had/has to be avoided. That's less common these days.
If you have to name the null pointer, call it nullptr; that's what it's called in C++11. Then, nullptr will be a keyword.
That said, don't sweat the small stuff.
There are a few arguments (one of which is relatively recent) which I believe contradict Bjarne's position on this.
Documentation of intent
Using NULL allows for searches on its use and it also highlights that the developer wanted to use a NULL pointer, irrespective of whether it is being interpreted by the compiler as NULL or not.
Overload of pointer and 'int' is relatively rare
The example that everybody quotes is:
void foo(int*);
void foo (int);
void bar() {
foo (NULL); // Calls 'foo(int)'
}
However, at least in my opinion, the problem with the above is not that we're using NULL for the null pointer constant: it's that we have overloads of foo() which take very different kinds of arguments. The parameter must be exactly int, too; any other type would make the call ambiguous and so generate a helpful compiler error.
Analysis tools can help TODAY!
Even in the absence of C++0x, there are tools available today that verify that NULL is being used for pointers, and that 0 is being used for integral types.
C++ 11 will have a new std::nullptr_t type.
This is the newest argument to the table. The problem of 0 and NULL is being actively addressed for C++0x, and you can guarantee that for every implementation that provides NULL, the very first thing that they will do is:
#define NULL nullptr
For those who use NULL rather than 0, the change will be an improvement in type-safety with little or no effort - if anything it may also catch a few bugs where they've used NULL for 0. For anybody using 0 today... well, hopefully they have a good knowledge of regular expressions...
Use NULL. NULL shows your intent. That it is 0 is an implementation detail that should not matter.
I always use:
NULL for pointers
'\0' for chars
0.0 for floats and doubles
where 0 would do fine. It is a matter of signaling intent. That said, I am not anal about it.
I stopped using NULL in favor of 0 long ago (as well as most other macros). I did this not only because I wanted to avoid macros as much as possible, but also because NULL seems to have become over-used in C and C++ code. It seems to be used whenever a 0 value is needed, not just for pointers.
On new projects, I put this in a project header:
static const int nullptr = 0;
Now, when C++0x compliant compilers arrive, all I have to do is remove that line.
A nice benefit of this is that Visual Studio already recognizes nullptr as a keyword and highlights it appropriately.
cerr << sizeof(0) << endl;
cerr << sizeof(NULL) << endl;
cerr << sizeof(void*) << endl;
On a 64-bit gcc RHEL platform you get:
4
8
8
The moral of the story. You should use NULL when you're dealing with pointers.
1) It declares your intent (don't make me search through all your code trying to figure out if a variable is a pointer or some numeric type).
2) In certain API calls that expect variable arguments, they'll use a NULL-pointer to indicate the end of the argument list. In this case, using a '0' instead of NULL can cause problems. On a 64-bit platform, the va_arg call wants a 64-bit pointer, yet you'll be passing only a 32-bit integer. Seems to me like you're relying on the other 32-bits to be zeroed out for you? I've seen certain compilers (e.g. Intel's icpc) that aren't so gracious -- and this has resulted in runtime errors.
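As a small sketch of that failure mode (print_all is a made-up example of mine, not a real API):
#include <cstdarg>
#include <cstdio>

// Hypothetical variadic function that expects a null POINTER as the end-of-list sentinel.
void print_all(const char* first, ...)
{
    va_list args;
    va_start(args, first);
    for (const char* s = first; s != nullptr; s = va_arg(args, const char*))
        std::puts(s);
    va_end(args);
}

int main()
{
    print_all("a", "b", static_cast<const char*>(NULL));  // OK: an actual pointer is passed
    // print_all("a", "b", 0);  // risky: 0 is passed as an int, which may be narrower than a pointer
    return 0;
}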
If I recall correctly, NULL is defined differently in the headers that I have used. For C it is defined as (void*)0, and for C++ it's defined as just 0. The code looked something like:
#ifndef __cplusplus
#define NULL (void*)0
#else
#define NULL 0
#endif
Personally, I still use the NULL value to represent null pointers; it makes it explicit that you're using a pointer rather than some integral type. Yes, internally the NULL value is still 0, but it isn't represented as such.
Additionally I don't rely on the automatic conversion of integers to boolean values but explicitly compare them.
For example prefer to use:
if (pointer_value != NULL || integer_value == 0)
rather than:
if (pointer_value || !integer_value)
Suffice to say that this is all remedied in C++11 where one can simply use nullptr instead of NULL, and also nullptr_t that is the type of a nullptr.
I would say history has spoken and those who argued in favour of using 0 (zero) were wrong (including Bjarne Stroustrup). The arguments in favour of 0 were mostly aesthetics and "personal preference".
After the creation of C++11, with its new nullptr type, some compilers have started complaining (with default parameters) about passing 0 to functions with pointer arguments, because 0 is not a pointer.
If the code had been written using NULL, a simple search and replace could have been performed through the codebase to make it nullptr instead. If you are stuck with code written using the choice of 0 as a pointer it is far more tedious to update it.
And if you have to write new code right now to the C++03 standard (and can't use nullptr), you really should just use NULL. It'll make it much easier for you to update in the future.
I usually use 0. I don't like macros, and there's no guarantee that some third party header you're using doesn't redefine NULL to be something odd.
You could use a nullptr object as proposed by Scott Meyers and others until C++ gets a nullptr keyword:
const // It is a const object...
class nullptr_t
{
public:
    template<class T>
    operator T*() const // convertible to any type of null non-member pointer...
    { return 0; }

    template<class C, class T>
    operator T C::*() const // or any type of null member pointer...
    { return 0; }

private:
    void operator&() const; // Can't take address of nullptr
} nullptr = {};
Google "nullptr" for more info.
I once worked on a machine where 0 was a valid address and NULL was defined as a special octal value. On that machine (0 != NULL), so code such as
char *p;
...
if (p) { ... }
would not work as you expect. You HAD to write
if (p != NULL) { ... }
Although I believe most compilers define NULL as 0 these days I still remember the lesson from those years ago: NULL is not necessarily 0.
I think the standard guarantees that NULL == 0, so you can do either. I prefer NULL because it documents your intent.
Using either 0 or NULL will have the same effect.
However, that doesn't mean that they are both good programming practices. Given that there is no difference in performance, choosing a low-level-aware option over an agnostic/abstract alternative is a bad programming practice. Help readers of your code understand your thought process.
NULL, 0, 0.0, '\0', 0x00 and so on all translate to the same thing, but are different logical entities in your program. They should be used as such. NULL is a pointer, 0 is a quantity, 0x0 is a value whose bits are interesting, etc. You wouldn't assign '\0' to a pointer whether it compiles or not.
I know some communities encourage demonstrating in-depth knowledge of an environment by breaking the environment's contracts. Responsible programmers, however, make maintainable code and keep such practices out of their code.
Strange that nobody, including Stroustrup, mentioned that. While talking a lot about standards and aesthetics, nobody noticed that it is dangerous to use 0 in NULL's stead, for instance, in a variable argument list on an architecture where sizeof(int) != sizeof(void*). Like Stroustrup, I prefer 0 for aesthetic reasons, but one has to be careful not to use it where its type might be ambiguous.
I try to avoid the whole question by using C++ references where possible. Rather than
void foo(const Bar* pBar) { ... }
you might often be able to write
void foo(const Bar& bar) { ... }
Of course, this doesn't always work; but null pointers can be overused.
I'm with Stroustrup on this one :-)
Since NULL is not part of the language, I prefer to use 0.
Mostly personal preference, though one could make the argument that NULL makes it quite obvious that the object is a pointer which currently doesn't point to anything, e.g.
void *ptr = &something;
/* lots o' code */
ptr = NULL; // more obvious that it's a pointer and not being used
IIRC, the standard does not require NULL to be 0, so using whatever is defined in <stddef.h> is probably best for your compiler.
Another facet to the argument is whether you should use logical comparisons (implicit cast to bool) or explicitly check against NULL, but that comes down to readability as well.
I prefer to use NULL as it makes clear that your intent is the value represents a pointer not an arithmetic value. The fact that it's a macro is unfortunate, but since it's so widely ingrained there's little danger (unless someone does something really boneheaded). I do wish it were a keyword from the beginning, but what can you do?
That said, I have no problem with using pointers as truth values in themselves. Just as with NULL, it's an ingrained idiom.
C++0x will add the nullptr construct, which I think is long overdue.
I always use 0. Not for any real thought out reason, just because when I was first learning C++ I read something that recommended using 0 and I've just always done it that way. In theory there could be a confusion issue in readability but in practice I have never once come across such an issue in thousands of man-hours and millions of lines of code. As Stroustrup says, it's really just a personal aesthetic issue until the standard becomes nullptr.
Someone told me once... I am going to redefine NULL to 69. Since then I don't use it :P
It makes your code quite vulnerable.
Edit:
Not everything in the standard is perfect. The macro NULL is an implementation-defined C++ null pointer constant that is not fully compatible with the C NULL macro; besides hiding its type, that makes it a useless and error-prone tool.
NULL does not behave as a null pointer but as a 0/0L literal.
Tell me the next example is not confusing:
void foo(char *);
void foo(int);
foo(NULL); // calls int version instead of pointer version!
It is because of all that that std::nullptr_t appears in the new standard.
If you don't want to wait for the new standard and want to use a nullptr, use at least a decent one like the one proposed by Meyers (see jon.h's comment).
Well, I argue for not using 0 or NULL pointers at all whenever possible.
Using them will sooner or later lead to segmentation faults in your code. In my experience this, and pointers in general, is one of the biggest sources of bugs in C++.
Also, it leads to "if-not-null" statements all over your code. It is much nicer if you can always rely on a valid state.
There is almost always a better alternative.
Setting a pointer to 0 is just not that clear, especially if you come from a language other than C++. This includes C as well as JavaScript.
I recently dealt with some code like so:
virtual void DrawTo(BITMAP *buffer) =0;
for a pure virtual function for the first time. I thought it was some magic gibberish for a week. When I realized it was basically just setting the function pointer to null (as virtual functions are mostly implemented as function pointers in C++), I kicked myself.
virtual void DrawTo(BITMAP *buffer) =null;
would have been less confusing to my new eyes than that bastardization without proper spacing. Actually, I am wondering why C++ doesn't employ lowercase null much like it employs lowercase false and true now.