Is it safe to #define NULL nullptr?

I have seen the macro below in many top-level header files:
#define NULL 0 // C++03
Throughout the code, NULL and 0 are used interchangeably. If I change the definition to
#define NULL nullptr // C++11
will it cause any bad side effects? The only (good) side effect I can think of is that the following usage becomes ill-formed:
int i = NULL;
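To make the expected effect concrete, here is a minimal sketch (assuming a C++11 compiler and the proposed redefinition):
#define NULL nullptr // the proposed C++11 definition
int i = NULL;   // error: cannot convert 'std::nullptr_t' to 'int'
char *p = NULL; // still fine: pointer context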

I have seen the macro below in top-level header files:
You shouldn't have seen that; the standard library defines NULL in <cstddef> (and <stddef.h>). And, IIRC, according to the standard, redefining names defined by standard headers results in undefined behaviour. So from a purely standardese viewpoint, you shouldn't do that.
I've seen people do the following, for whatever reason their broken mind thought of:
struct X {
    virtual void f() = NULL;
};
(As in [incorrectly]: "set the virtual table pointer to NULL")
This is only valid if NULL is defined as 0, because = 0 is the required token sequence for declaring a pure virtual function (§9.2 [class.mem]).
That said, if NULL was correctly used as a null pointer constant, then nothing should break.
However, beware that, even if seemingly used correctly, this will change:
void f(int){}
void f(char*){}
f(0); // calls f(int)
f(nullptr); // calls f(char*)
However, if any call site actually relied on that, it was almost certainly broken anyway.

Far better is to search and replace NULL with nullptr throughout the code.
It may be syntactically safe, but where would you put the #define? It creates code organisation problems.

No. You're not allowed to (re)define standard macros. And if you see
#define NULL 0
at the top of any file other than a standard header (and even there, it
should be in include guards, and typically in additional guards as
well), then that file is broken. Remove it.
Note that good compilers will typically define NULL with something
like:
#define NULL __builtin_null
to access a compiler builtin that triggers a warning if it is used in a non-pointer context.
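For example, here is a sketch of what that looks like with g++, which defines NULL as the builtin __null (the exact warning text varies by compiler and version):
#include <cstddef>
int main() {
    int i = NULL;    // g++: warning: converting to non-pointer type 'int' from NULL [-Wconversion-null]
    char *p = NULL;  // fine: pointer context
    (void)i; (void)p;
}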

You shouldn't be defining it at all, unless you're writing your own version of <cstddef>; it certainly shouldn't be in "many topmost header files".
If you are implementing your own standard library, then the only requirement is
18.2/3 The macro NULL is an implementation-defined C++ null pointer constant
so either 0 or nullptr is acceptable, and nullptr is better (if your compiler supports it) for the reason you give.
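As a sketch, such an implementation-provided definition might look like this (the guard macro name is hypothetical):
#ifndef MYLIB_NULL_DEFINED
#define MYLIB_NULL_DEFINED
#ifdef __cplusplus
#  if __cplusplus >= 201103L
#    define NULL nullptr  /* C++11: a null pointer constant, per 18.2/3 */
#  else
#    define NULL 0        /* C++03 */
#  endif
#else
#  define NULL ((void*)0) /* C */
#endif
#endif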

Maybe Not
If you have a pair of overloads like this:
void foo(int);
void foo(char*);
Then the behaviour of the code:
foo(NULL);
will change depending on whether NULL is changed to nullptr or not.
Of course, there's another question as to whether it's safe to write such code as is present in this answer...
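Here is a complete, runnable sketch of that behaviour change (assuming a C++11 compiler):
#include <cstdio>
void foo(int)   { std::puts("foo(int)"); }
void foo(char*) { std::puts("foo(char*)"); }
int main() {
    foo(0);       // always calls foo(int)
    foo(nullptr); // calls foo(char*), so foo(NULL) changes meaning if NULL becomes nullptr
}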

While it might break backwards-compatibility with older stuff that was badly written (either that, or overly clever...), for your newer code, this is a non-issue. You should use nullptr, and not NULL, where you mean nullptr. Also, you should use 0 where you mean zero.

Related

Mark variable as not NULL after BOOST_REQUIRE in PVS-Studio

I'm using PVS-Studio to analyze my test code. There are often constructs of the form
const noAnimal* animal = dynamic_cast<noAnimal*>(...);
BOOST_REQUIRE(animal);
BOOST_REQUIRE_EQUAL(animal->GetSpecies(), ...);
However I still get a warning V522 There might be dereferencing of a potential null pointer 'animal' for the last line.
I know it is possible to mark functions as "not returning NULL", but is it also possible to mark a function as a valid NULL check, or to otherwise make PVS-Studio aware that animal can't be NULL after BOOST_REQUIRE(animal);?
This also happens if the pointer is checked via any assert flavour first.
Thank you for the interesting example. We'll think about what we can do with the BOOST_REQUIRE macro.
At the moment, I can advise you the following solution:
Somewhere after
#include <boost/test/included/unit_test.hpp>
you can write:
#ifdef PVS_STUDIO
#undef BOOST_REQUIRE
#define BOOST_REQUIRE(expr) do { if (!(expr)) throw "PVS-Studio"; } while (0)
#endif
This way, you give the analyzer a hint that a false condition interrupts the control flow.
It is not the most beautiful solution, but I think it was worth telling you about.
Responding to a comment with another large comment is a bad idea, so here is my detailed response on the following subject:
Although this is possible, it would be a pain to include that define in
all test case files. Also, this is not limited to BOOST_REQUIRE only but
also applies to assert, SDL_Assert or any other custom macro the user
might use.
One should understand that there are three types of test macros and each should be discussed separately.
Macros of the first type simply warn you that something went wrong in the Debug version. A typical example is the assert macro. The following code causes the PVS-Studio analyzer to generate a warning:
T* p = dynamic_cast<T *>(x);
assert(p);
p->foo();
The analyzer will point out a possible null-pointer dereferencing here and will be right. A check that uses assert is not sufficient because it will be removed from the Release version. That is, it turns out there’s no check. A better way to implement it is to rewrite the code into something like this:
T* p = dynamic_cast<T *>(x);
if (p == nullptr)
{
    assert(false);
    throw Error; // substitute your project's error/exception mechanism here
}
p->foo();
This code won’t trigger the warning.
You may argue that you are 100% sure that dynamic_cast will never return nullptr. I don’t accept this argument. If you are totally sure that the cast is ALWAYS correct, you should use the faster static_cast. If you are not that sure, you must test the pointer before dereferencing it.
Well, OK, I see your point. You are sure that the code is alright, but you need to have that check with dynamic_cast just in case. OK, use the following code then:
assert(dynamic_cast<T *>(x) != nullptr);
T* p = static_cast<T *>(x);
p->foo();
I don’t like it, but at least it’s faster, since the slower dynamic_cast operator will be left out in the Release version, while the analyzer will keep silent.
Moving on to the next type of macros.
Macros of the second type are also used to warn you that something went wrong, but they appear in tests. What makes them different from the previous type is that they stop the algorithm under test when the condition is false and generate an error message.
The basic problem with these macros is that the functions are not marked as non-returning. Here’s an example.
Suppose we have a function that generates an error message by throwing an exception. This is what its declaration looks like:
void Error(const char *message);
And this is how the test macro is declared:
#define ENSURE(x) do { if (!(x)) Error("zzzz"); } while (0)
Using the pointer:
T* p = dynamic_cast<T *>(x);
ENSURE(p);
p->foo();
The analyzer will issue a warning about a possible null-pointer dereferencing, but the code is actually safe. If the pointer is null, the Error function will throw an exception and thus prevent the pointer dereferencing.
We simply need to tell the analyzer about that by using one of the function annotation means, for example:
[[noreturn]] void Error(const char *message);
or:
__declspec(noreturn) void Error(const char *message);
This will help eliminate the false warning. So, as you can see, it’s quite easy to fix things in most cases when using your own macros.
It might be trickier, however, if you deal with carelessly implemented macros from third-party libraries.
This leads us to the third type of macros. You can’t change them, and the analyzer can’t figure out how exactly they work. This is a common situation, as macros may be implemented in quite exotic ways.
There are three options left for you in this case:
suppress the warning using one of the false-positive suppression means described in the documentation;
use the technique I described in the previous answer;
email us.
We are gradually adding support for various tricky macros from popular libraries. In fact, the analyzer is already familiar with most of the specific macros you might encounter, but programmers’ imagination is inexhaustible and we just can’t foresee every possible implementation.

Why use 'function address == NULL' instead of 'false'?

Browsing through some legacy code, I found this function:
static inline bool EmptyFunc()
{
    return (void*) EmptyFunc == NULL;
}
What are the differences from this one:
static inline bool EmptyFunc()
{
    return false;
}
This code was created to compile under several different platforms, like PS2, Wii, PC... Is there any reason to use the first function? Like better optimization, or avoiding some strange compiler misbehavior?
Semantically both functions are the same: they always return false*. Folding the first expression to the constant value "false" is completely allowed by the standard, since it does not change any observable side-effects (of which there are none). Since the compiler sees the entire function, it is also free to optimize away any calls to it and replace them with a constant "false" value.
That is, there is no general value in the first form, and it is likely a mistake on the part of the programmer. The only possibility is that it exploits some special behaviour (or defect) in a specific compiler/version; to what end, I don't know. If you wish to prevent inlining, a compiler-specific attribute would be the correct approach; anything else is prone to breaking should the compiler change.
(*This assumes that NULL is never defined to be EmptyFunc, which would result in true being returned.).
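For reference, a sketch of the attribute-based approach mentioned above (GCC/Clang and MSVC spellings shown; this is an illustration, not a claim about what the original author intended):
#if defined(__GNUC__) || defined(__clang__)
__attribute__((noinline))
#elif defined(_MSC_VER)
__declspec(noinline)
#endif
static bool EmptyFunc()
{
    return false; // same observable behaviour, no pointer trick needed
}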
Strictly speaking, a function pointer may not be cast to a void pointer; what happens then is outside the scope of the standard. The C11 standard lists it as a "common extension" in J.5.7 (I suspect that the same applies in C++). So the only difference between the two cases is that the former is non-portable.
It would seem that the most likely cause of the former version is either a confused programmer or a confused compiler. We can tell for certain that the programmer was confused/sloppy by the lack of an explanatory comment.
It doesn't really make much sense to declare a function as inline and then try to trick the compiler into not inlining the code by including the function address in the code. So I think we can rule out that theory, unless of course the programmer was confused and thought it made sense.

Is NULL defined as nullptr in C++11?

Will C++11 implementations define NULL as nullptr?
Would this be prescribed by the new C++ standard?
From the horse's mouth
C.3.2.4 Macro NULL [diff.null]
1/ The macro NULL, defined in any of <clocale>, <cstddef>, <cstdio>, <cstdlib>, <cstring>, <ctime>, or <cwchar>, is an implementation-defined C++ null pointer constant in this International Standard (18.2).
It is up to each implementation to provide its own definition; gcc, if I recall correctly, defines it to __null, for which it has special checks (it verifies, for example, that it is not used in arithmetic contexts).
So it is possible to define it as nullptr, you will have to check your compiler/Standard Library documentation to see what has been done.
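One quick way to probe your own implementation (a minimal sketch, assuming a C++11 compiler) is to check whether the type of NULL is std::nullptr_t:
#include <cstddef>
#include <type_traits>
#include <iostream>
int main() {
    // prints 1 if this implementation defines NULL as nullptr, 0 otherwise
    std::cout << std::is_same<decltype(NULL), std::nullptr_t>::value << '\n';
}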
No, NULL is still the same as before. Too many people used the NULL macro in surprising ways; redefining it as nullptr would have broken a lot of code.
To elaborate: people have used NULL for example for many kinds of handle typedefs. If the real type behind such a typedef is not a pointer, defining NULL as nullptr would be a problem. Also, it seems some people have indeed used NULL to initialize numeric types.
At least that is what Microsoft found when they added the nullptr to MSVC10, and why they decided to keep NULL as it always was. Other compilers might choose a different path, but I don't think they would.
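A sketch of the kind of (questionable but real-world) code this decision protects, with hypothetical names; it compiles while NULL is 0 but becomes ill-formed if NULL is nullptr:
#include <cstddef>
typedef unsigned long HANDLE_T; // a 'handle' typedef that is not a pointer type
HANDLE_T h = NULL;   // OK while NULL is 0; an error if NULL were nullptr
int flags = NULL;    // likewise: a numeric variable initialized from NULL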
In the FDIS of the upcoming C++11 standard, an integral constant expression that evaluates to zero is still a null pointer constant. The NULL macro is still implementation-defined, but it must be a null pointer constant; so in practice it is as good as 0, or it can be nullptr.
Your code that used either 0 or NULL will work just as before.
NULL comes from C, so its definition must be compatible with both C and C++. So it can't be nullptr, because that is a C++ keyword, not a C one, unless it is defined differently for C and C++ (we can use #ifdef __cplusplus to distinguish between them). So NULL is usually defined as something compiler-specific, like __null in gcc, or just ((void*)0).

How to define NULL using #define

I want to redefine NULL in my program, along the lines of
#define MYNULL ((void*)0)
But this definition is not working in the following statement:
char *ch = MYNULL;
Error : can not convert from void* to char *
What would be the best way to define NULL?
#define MYNULL NULL
is the safest; I see no reason to do so, but if you really want to, go ahead.
Here's how C and C++ do it respectively:
#define NULL 0 //C++
#define NULL ((void*)0) //C
Generally speaking, defining 0 for NULL is a bad habit; you actually want it to be part of the language. C++0x addresses this.
This is what Bjarne Stroustrup has to say on this:
Should I use NULL or 0?
In C++, the definition of NULL is 0, so there is only an aesthetic difference. I prefer to avoid macros, so I use 0. Another problem with NULL is that people sometimes mistakenly believe that it is different from 0 and/or not an integer. In pre-standard code, NULL was/is sometimes defined to something unsuitable and therefore had/has to be avoided. That's less common these days.
If you have to name the null pointer, call it nullptr; that's what it's called in C++11. Then, "nullptr" will be a keyword.
#ifdef __cplusplus
#define MYNULL 0
#else
#define MYNULL ((void*)0)
#endif
will work in both of them.
What exactly is the problem with getting your NULL from where you're supposed to, i.e.,
#include <stddef.h>
or
#include <cstddef>
as alluded to in @Johannes Rudolph's answer, any trickery you do is not likely to be very future-proof in the face of things like nullptr etc.
EDIT: while <stdlib.h> (and many others) are mandated to include a NULL definition, <stddef.h> is the most canonical header [and has been for decades].
PS In general, it's just a bad idea to get involved in this sort of trickery unless you have a really good reason. You didn't expand on the thinking that led you to feel the need to do this. If you could add some detail on that, it's likely to lead to better answers. Other people answering the question should have pointed this out in their answers too, but I guess FGITW does as FGITW does best :D
EDIT 2: As pointed out by @Yossarian: the single justification for doing this is if there isn't a NULL defined in an appropriately language-agnostic form elsewhere in your system. Naked compilers with no headers and/or writing your own custom standard library from scratch are examples of such a circumstance. (In such a bare-bones scenario, I'd go with @lilburne's answer: be sure to use 0 as much as possible.)
#define MYNULL 0
will work in C++
Don't do this. There is nothing that says that NULL has to be the value zero; it's implementation-specific.
It could be a value that represents the end of memory, some special place in memory, or even an object representing that no value exists.
Doing this is very dangerous, may break portability, and will most certainly screw with code-aware editors. It isn't buying you anything; trust your library's definition.
EDIT: Evan is correct! The code itself will say zero; under the hood, the compiler can do what it wants with implementation-specific details. Thanks Evan!
I think that anyone who doesn't know that setting a pointer in C/C++ to 0 is the same as setting it to NULL, nullptr, or any other equivalent shouldn't be messing with code. The difference in readability between
char* ch = NULL;
and
char* ch = 0;
is minimal. When it comes to expressions the forms
if (NULL == ch) {
}
if (0 == ch) {
}
if (nullptr == ch) {
}
are no more readable than
if (!ch) {
}
In contrast to what some people state here, 0 is a perfectly valid definition for NULL in C. Thus you have to be careful when you pass NULL as an argument to a variadic function, because it may be mistaken for the integer value 0, resulting in non-portable code.
http://c-faq.com/null/null2.html
BTW, the comp.lang.c FAQ is a highly recommended read for every C programmer. See for example here:
http://c-faq.com/null/null1.html
containing such gems of nearly-forgotten wisdom like "As mentioned above, there is a null pointer for each pointer type, and the internal values of null pointers for different types may be different." Which means that calloc or memset are NOT a portable initialization for pointers.
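To illustrate that last point with a sketch: zero-filling memory does not portably produce null pointers, because a null pointer's internal representation need not be all-bits-zero.
#include <cstring>
struct Node { Node *next; };
int main() {
    Node n;
    std::memset(&n, 0, sizeof n); // all-bits-zero: NOT guaranteed to make n.next a null pointer
    n.next = NULL;                // portable: assigning a null pointer constant
}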
#define NULL 0 // for C
is a perfectly valid definition in C, e.g.
char *ch = NULL;
*ch = 'a'; // will cause a runtime error
The write fails because ch points to nothing: dereferencing a null pointer is undefined behaviour. On typical platforms the first page of the address space (starting at address 0) is deliberately kept unmapped, precisely so that reads and writes through a null pointer trap instead of silently touching memory.

Do you use NULL or 0 (zero) for pointers in C++?

In the early days of C++ when it was bolted on top of C, you could not use NULL as it was defined as (void*)0. You could not assign NULL to any pointer other than void*, which made it kind of useless. Back in those days, it was accepted that you used 0 (zero) for null pointers.
To this day, I have continued to use zero as a null pointer but those around me insist on using NULL. I personally do not see any benefit to giving a name (NULL) to an existing value - and since I also like to test pointers as truth values:
if (p && !q)
do_something();
then using zero makes more sense (as in, if you use NULL, you cannot logically use p && !q; you would need to compare explicitly against NULL, unless you assume NULL is zero, in which case why use NULL?).
Is there any objective reason to prefer zero over NULL (or vice versa), or is all just personal preference?
Edit: I should add (and meant to originally say) that with RAII and exceptions, I rarely use zero/NULL pointers, but sometimes you do need them still.
Here's Stroustrup's take on this: C++ Style and Technique FAQ
In C++, the definition of NULL is 0, so there is only an aesthetic difference. I prefer to avoid macros, so I use 0. Another problem with NULL is that people sometimes mistakenly believe that it is different from 0 and/or not an integer. In pre-standard code, NULL was/is sometimes defined to something unsuitable and therefore had/has to be avoided. That's less common these days.
If you have to name the null pointer, call it nullptr; that's what it's called in C++11. Then, nullptr will be a keyword.
That said, don't sweat the small stuff.
There are a few arguments (one of which is relatively recent) which I believe contradict Bjarne's position on this.
Documentation of intent
Using NULL allows for searches on its use and it also highlights that the developer wanted to use a NULL pointer, irrespective of whether it is being interpreted by the compiler as NULL or not.
Overload of pointer and 'int' is relatively rare
The example that everybody quotes is:
void foo(int*);
void foo (int);
void bar() {
    foo (NULL); // Calls 'foo(int)'
}
However, at least in my opinion, the problem with the above is not that we're using NULL for the null pointer constant: it's that we have overloads of foo() which take very different kinds of arguments. The parameter must be exactly an int, too, as any other type would make the call ambiguous and so produce a helpful compiler diagnostic.
Analysis tools can help TODAY!
Even in the absence of C++0x, there are tools available today that verify that NULL is being used for pointers, and that 0 is being used for integral types.
C++11 will have a new std::nullptr_t type.
This is the newest argument to the table. The problem of 0 and NULL is being actively addressed for C++0x, and you can guarantee that for every implementation that provides NULL, the very first thing that they will do is:
#define NULL nullptr
For those who use NULL rather than 0, the change will be an improvement in type-safety with little or no effort - if anything it may also catch a few bugs where they've used NULL for 0. For anybody using 0 today... well, hopefully they have a good knowledge of regular expressions...
Use NULL. NULL shows your intent. That it is 0 is an implementation detail that should not matter.
I always use:
NULL for pointers
'\0' for chars
0.0 for floats and doubles
where 0 would do fine. It is a matter of signaling intent. That said, I am not anal about it.
I stopped using NULL in favor of 0 long ago (as well as most other macros). I did this not only because I wanted to avoid macros as much as possible, but also because NULL seems to have become over-used in C and C++ code. It seems to be used whenever a 0 value is needed, not just for pointers.
On new projects, I put this in a project header:
static const int nullptr = 0;
Now, when C++0x compliant compilers arrive, all I have to do is remove that line.
A nice benefit of this is that Visual Studio already recognizes nullptr as a keyword and highlights it appropriately.
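Usage of that stop-gap looks like this (pre-C++11 compilers only, since nullptr is a keyword from C++11 on; in C++03 a const int equal to 0 is a valid null pointer constant):
static const int nullptr = 0; // the project-header line from above
struct Foo {};
Foo *p = nullptr; // OK in C++03: converts to a null Foo*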
cerr << sizeof(0) << endl;
cerr << sizeof(NULL) << endl;
cerr << sizeof(void*) << endl;
On a 64-bit gcc RHEL platform you get:
4
8
8
The moral of the story: you should use NULL when you're dealing with pointers.
1) It declares your intent (don't make me search through all your code trying to figure out if a variable is a pointer or some numeric type).
2) In certain API calls that expect variable arguments, they'll use a NULL-pointer to indicate the end of the argument list. In this case, using a '0' instead of NULL can cause problems. On a 64-bit platform, the va_arg call wants a 64-bit pointer, yet you'll be passing only a 32-bit integer. Seems to me like you're relying on the other 32-bits to be zeroed out for you? I've seen certain compilers (e.g. Intel's icpc) that aren't so gracious -- and this has resulted in runtime errors.
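A classic concrete instance of point 2 is the terminator argument of the POSIX execl() call; the same applies to any variadic function that expects a pointer (shown as a sketch):
#include <unistd.h>
int main() {
    execl("/bin/ls", "ls", "-l", (char*)NULL); // portable: the cast guarantees a pointer-sized argument
    // execl("/bin/ls", "ls", "-l", 0);        // risky: 0 may be passed as a 32-bit int
    return 1; // only reached if execl fails
}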
If I recall correctly, NULL is defined differently in the headers that I have used. For C it is defined as (void*)0, and for C++ it's defined as just 0. The code looked something like:
#ifndef __cplusplus
#define NULL (void*)0
#else
#define NULL 0
#endif
Personally I still use the NULL value to represent null pointers, it makes it explicit that you're using a pointer rather than some integral type. Yes internally the NULL value is still 0 but it isn't represented as such.
Additionally I don't rely on the automatic conversion of integers to boolean values but explicitly compare them.
For example prefer to use:
if (pointer_value != NULL || integer_value == 0)
rather than:
if (pointer_value || !integer_value)
Suffice to say that this is all remedied in C++11, where one can simply use nullptr instead of NULL, along with std::nullptr_t, which is the type of nullptr.
I would say history has spoken and those who argued in favour of using 0 (zero) were wrong (including Bjarne Stroustrup). The arguments in favour of 0 were mostly aesthetics and "personal preference".
After the creation of C++11, with its new nullptr type, some compilers have started complaining (with their default flags) about passing 0 to functions with pointer arguments, because 0 is not a pointer.
If the code had been written using NULL, a simple search and replace could have been performed through the codebase to make it nullptr instead. If you are stuck with code written using the choice of 0 as a pointer it is far more tedious to update it.
And if you have to write new code right now to the C++03 standard (and can't use nullptr), you really should just use NULL. It'll make it much easier for you to update in the future.
I usually use 0. I don't like macros, and there's no guarantee that some third party header you're using doesn't redefine NULL to be something odd.
You could use a nullptr object as proposed by Scott Meyers and others until C++ gets a nullptr keyword:
const                            // It is a const object...
class nullptr_t
{
public:
    template<class T>
    operator T*() const          // convertible to any type of null non-member pointer...
    { return 0; }

    template<class C, class T>
    operator T C::*() const     // or any type of null member pointer...
    { return 0; }

private:
    void operator&() const;     // Can't take address of nullptr
} nullptr = {};
Google "nullptr" for more info.
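Assuming the emulated nullptr_t class above is in scope (and a pre-C++11 compiler, since the name collides with the C++11 keyword), usage looks like this:
#include <cstdio>
struct Widget {};
void take(Widget*) { std::puts("take(Widget*)"); }
void take(int)     { std::puts("take(int)"); }
int main() {
    Widget *w = nullptr; // the emulated object converts to a null Widget*
    take(nullptr);       // unambiguously calls take(Widget*); take(0) would call take(int)
    (void)w;
}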
I once worked on a machine where 0 was a valid address and NULL was defined as a special octal value. On that machine (0 != NULL), so code such as
char *p;
...
if (p) { ... }
would not work as you expect. You HAD to write
if (p != NULL) { ... }
Although I believe most compilers define NULL as 0 these days I still remember the lesson from those years ago: NULL is not necessarily 0.
I think the standard guarantees that NULL == 0, so you can do either. I prefer NULL because it documents your intent.
Using either 0 or NULL will have the same effect.
However, that doesn't mean that they are both good programming practices. Given that there is no difference in performance, choosing a low-level-aware option over an agnostic/abstract alternative is a bad programming practice. Help readers of your code understand your thought process.
NULL, 0, 0.0, '\0', 0x00 and what else all translate to the same thing, but are different logical entities in your program. They should be used as such. NULL is a pointer, 0 is a quantity, 0x0 is a value whose bits are interesting, etc. You wouldn't assign '\0' to a pointer whether it compiles or not.
I know some communities encourage demonstrating in-depth knowledge of an environment by breaking the environment's contracts. Responsible programmers, however, make maintainable code and keep such practices out of their code.
Strangely, nobody, including Stroustrup, mentioned that. While talking a lot about standards and aesthetics, nobody noticed that it is dangerous to use 0 in NULL's stead, for instance, in a variable argument list on an architecture where sizeof(int) != sizeof(void*). Like Stroustrup, I prefer 0 for aesthetic reasons, but one has to be careful not to use it where its type might be ambiguous.
I try to avoid the whole question by using C++ references where possible. Rather than
void foo(const Bar* pBar) { ... }
you might often be able to write
void foo(const Bar& bar) { ... }
Of course, this doesn't always work; but null pointers can be overused.
I'm with Stroustrup on this one :-)
Since NULL is not part of the language, I prefer to use 0.
Mostly personal preference, though one could make the argument that NULL makes it quite obvious that the object is a pointer which currently doesn't point to anything, e.g.
void *ptr = &something;
/* lots o' code */
ptr = NULL; // more obvious that it's a pointer and not being used
IIRC, the standard does not require NULL to be 0, so using whatever is defined in <stddef.h> is probably best for your compiler.
Another facet to the argument is whether you should use logical comparisons (implicit cast to bool) or explicitly check against NULL, but that comes down to readability as well.
I prefer to use NULL as it makes clear that your intent is the value represents a pointer not an arithmetic value. The fact that it's a macro is unfortunate, but since it's so widely ingrained there's little danger (unless someone does something really boneheaded). I do wish it were a keyword from the beginning, but what can you do?
That said, I have no problem with using pointers as truth values in themselves. Just as with NULL, it's an ingrained idiom.
C++0x will add the nullptr construct, which I think is long overdue.
I always use 0. Not for any real thought out reason, just because when I was first learning C++ I read something that recommended using 0 and I've just always done it that way. In theory there could be a confusion issue in readability but in practice I have never once come across such an issue in thousands of man-hours and millions of lines of code. As Stroustrup says, it's really just a personal aesthetic issue until the standard becomes nullptr.
Someone told me once... I am going to redefine NULL to 69. Since then I don't use it :P
It makes your code quite vulnerable.
Edit:
Not everything in the standard is perfect. The macro NULL is an implementation-defined C++ null pointer constant that is not fully compatible with the C NULL macro; besides hiding its type behind an implicit conversion, that makes it a useless and error-prone tool.
NULL does not behave as a null pointer but as a 0/0L literal.
Tell me the following example is not confusing:
void foo(char *);
void foo(int);
foo(NULL); // calls int version instead of pointer version!
It is because of all this that std::nullptr_t appears in the new standard.
If you don't want to wait for the new standard and want to use a nullptr, at least use a decent one like that proposed by Meyers (see jon.h's comment).
Well, I argue for not using 0 or NULL pointers at all, whenever possible.
Using them will sooner or later lead to segmentation faults in your code. In my experience this, and pointers in general, is one of the biggest sources of bugs in C++.
Also, it leads to "if-not-null" statements all over your code. It's much nicer if you can rely on an always-valid state.
There is almost always a better alternative.
Setting a pointer to 0 is just not that clear, especially if you come from a language other than C++. This includes C as well as JavaScript.
I recently dealt with some code like so:
virtual void DrawTo(BITMAP *buffer) =0;
for a pure virtual function, for the first time. I thought it was some magic gibberish for a week. When I realized it was basically just setting the function pointer to null (as virtual functions are just function pointers in most cases in C++), I kicked myself.
virtual void DrawTo(BITMAP *buffer) =null;
would have been less confusing than that bastardization without proper spacing, at least to my new eyes. Actually, I am wondering why C++ doesn't employ a lowercase null, much like it now employs lowercase false and true.
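For anyone else puzzled by that syntax, a short clarifying sketch: the = 0 is the pure-specifier, a fixed piece of grammar, not an assignment of 0 (or NULL) to anything.
struct Drawable {
    virtual void DrawTo(void *buffer) = 0; // pure virtual: '= 0' is required syntax, not a value
    virtual ~Drawable() {}
};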