Which version of safe_delete is better?

#define SAFE_DELETE(a) if( (a) != NULL ) delete (a); (a) = NULL;
OR
template<typename T> void safe_delete(T*& a) {
    delete a;
    a = NULL;
}
Or is there any other, better way?

I would say neither, as both will give you a false sense of security. For example, suppose you have a function:
void Func( SomePtr * p ) {
    // stuff
    SafeDelete( p );
}
You set p to NULL, but the copies of p outside the function are unaffected.
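For instance, building on Func above (a hypothetical sketch):
void Caller() {
    SomePtr * q = new SomePtr;
    Func( q );   // Func's local copy p is set to NULL...
    // ...but q still holds the old address: it now dangles,
    // and an "if( q != NULL )" guard will happily pass.
}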
However, if you must do this, go with the template - macros will always have the potential for tromping on other names.

Clearly the function, for a simple reason: the macro evaluates its argument multiple times (here, up to three times), which can have evil side effects. Also, the function can be scoped. Nothing better than that :)
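A sketch of both failure modes, using the SAFE_DELETE macro from the question (ptrs, i, cond and reportNotAllocated are illustrative fragments):
// Multiple evaluation: the macro expands ptrs[i++] three times,
// so i may be incremented up to three times by one "call".
SAFE_DELETE( ptrs[i++] );

// Scoping: the expansion begins with an unbraced if followed by two
// statements, so attaching an else no longer compiles.
if( cond )
    SAFE_DELETE( p );
else
    reportNotAllocated();   // error: 'else' without a matching 'if'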

delete a;
ISO C++ specifies that delete on a NULL pointer just doesn't do anything.
Quote from ISO 14882:
5.3.5 Delete [expr.delete]
2 [...] In either alternative, if the value of the operand of delete is the
null pointer the operation has no effect. [...]
/edit: I didn't notice the a = NULL; in the original post, so the new version is: delete a; a = NULL; However, the problem with setting a = NULL has already been pointed out (false sense of security).

Generally, prefer inline functions over macros, as macros don't respect scope, and may conflict with some symbols during preprocessing, leading to very strange compile errors.
Of course, sometimes templates and functions won't do, but here this is not the case.
Additionally, a better safe-delete is not necessary, as you could use smart pointers, which encapsulate the cleanup instead of requiring client code to remember to use this method.
(edit) As others have pointed out, safe-delete is not safe: even if nobody forgets to use it, it still may not have the desired effect. So it is actually completely worthless, because using safe_delete correctly takes more thought than simply setting the pointer to 0 yourself.
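For example, a minimal sketch with std::unique_ptr (C++11; std::auto_ptr or boost::scoped_ptr play the same role in older code):
#include <memory>

void client() {
    std::unique_ptr<int> p(new int(42));
    // ... use *p ...
}   // the int is deleted here, on every exit path, with no SAFE_DELETE
    // and no dangling raw pointer left behind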

You don't need to test for null before calling delete; deleting a null pointer is a no-op. (a) = NULL makes me lift an eyebrow. The second option is better.
However, if you have a choice, you should use smart pointers, such as std::auto_ptr or tr1::shared_ptr, which already do this for you.

I think
#define SAFE_DELETE(pPtr) { delete pPtr; pPtr = NULL; }
is better.
It's OK to call delete if pPtr is NULL, so the if check is not required.
And if you call SAFE_DELETE(ptr+i), it will result in a compilation error, because ptr+i cannot be assigned to.
A template definition will create a separate instance of the function for each pointer type it is used with. In my opinion, these multiple definitions do not add any value in this case.
Moreover, with a template function definition, you have the overhead of a function call.

Usage of SAFE_DELETE really appears to be a C programmer's approach to commandeering the built-in memory management in C++. My question is: will C++ allow this method of using a SAFE_DELETE on pointers that have been properly encapsulated as private? Would this macro only work on pointers that are declared public? OOP BAD!!

As mentioned quite a bit above, the second one is the better one: it is not a macro with potential unintended side effects, and it doesn't have the unneeded check against NULL (although I suspect you are doing that as a type check). But neither promises any safety. If you do use something like tr1::shared_ptr, please make sure you read the docs on it and are sure that it has the right semantics for your task. I just recently had to hunt down and clean up a huge memory leak due to a co-worker putting shared_ptrs into a data structure with circular links :) (he should have used weak_ptrs for the back references)
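A sketch of that leak (the Node type is hypothetical; tr1 / C++11 names):
#include <memory>

struct Node {
    std::shared_ptr<Node> next;   // owning forward link
    std::weak_ptr<Node> prev;     // back reference; weak_ptr breaks the cycle
};

void build() {
    std::shared_ptr<Node> a(new Node);
    std::shared_ptr<Node> b(new Node);
    a->next = b;
    b->prev = a;   // had prev been a shared_ptr, a and b would keep each
}                  // other alive after build() returns, and both would leak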

I prefer this version:
~scoped_ptr() {
    delete this->ptr_; // this-> for emphasis; ptr_ is owned by this
}
Setting the pointer to null after deleting it is quite pointless, as the only reason that you would use pointers is to allow an object to be referenced in multiple places at once. Even if the pointer in one part of the program is 0 there may well be others that are not set to 0.
Furthermore the safe_delete macro / function template is very difficult to use right, because there are only two places that it can be used if there is code that may throw between the new and delete for the given pointer.
1) Inside a catch (...) block that rethrows the exception, duplicated next to that catch (...) block for the path that doesn't throw (and duplicated again next to every break, return, continue, etc. that may let the pointer fall out of scope).
2) Inside a destructor for an object that owns the pointer (unless there is no code between the new and delete that can throw).
Even if there is no code that could throw when you write the code, this could change in the future (all it takes is for someone to come along and add another new after the first one). It is better to write code in a way that stays correct even in the face of exceptions.
Option 1 creates so much code duplication and is so easy to get wrong that I hesitate even to call it an option.
Option 2 makes safe_delete redundant, as the ptr_ that you are setting to 0 will go out of scope on the next line.
In summary -- don't use safe_delete, as it is not safe at all (it is very difficult to use correctly, and leads to redundant code even when its use is correct). Use SBRM (scope-bound resource management, better known as RAII) and smart pointers.
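To make the exception argument concrete, a sketch (Widget and doStuff are hypothetical names):
#include <memory>

struct Widget {};
void doStuff();   // may throw

void leaky() {
    Widget* w = new Widget;
    doStuff();    // if this throws, the cleanup below never runs: leak
    delete w;     // reached only on the non-throwing path
    w = NULL;
}

void notLeaky() {
    std::unique_ptr<Widget> w(new Widget);
    doStuff();    // whether this throws or returns, ~unique_ptr frees the Widget
}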


Runtime error on calling a function after deletion of 'this' pointer [duplicate]

Is it allowed to delete this; if the delete-statement is the last statement that will be executed on that instance of the class? Of course I'm sure that the object represented by the this-pointer was created with new.
I'm thinking about something like this:
void SomeModule::doStuff()
{
    // in the controller, "this" object of SomeModule is the "current module"
    // now, if I want to switch over to a new Module, eg:
    controller->setWorkingModule(new OtherModule());
    // since the new "OtherModule" object will take the lead,
    // I want to get rid of this "SomeModule" object:
    delete this;
}
Can I do this?
The C++ FAQ Lite has an entry specifically for this:
https://isocpp.org/wiki/faq/freestore-mgmt#delete-this
I think this quote sums it up nicely
As long as you're careful, it's OK for an object to commit suicide (delete this).
Yes, delete this; has defined results, as long as (as you've noted) you assure the object was allocated dynamically, and (of course) never attempt to use the object after it's destroyed.
Over the years, many questions have been asked about what the standard says specifically about delete this;, as opposed to deleting some other pointer. The answer to that is fairly short and simple: it doesn't say much of anything. It just says that delete's operand must be an expression that designates a pointer to an object, or an array of objects. It goes into quite a bit of detail about things like how it figures out what (if any) deallocation function to call to release the memory, but the entire section on delete (§[expr.delete]) doesn't mention delete this; specifically at all. The section on destructors does mention delete this in one place (§[class.dtor]/13):
At the point of definition of a virtual destructor (including an implicit definition (15.8)), the non-array deallocation function is determined as if for the expression delete this appearing in a non-virtual destructor of the destructor’s class (see 8.3.5).
That tends to support the idea that the standard considers delete this; to be valid -- if it was invalid, its type wouldn't be meaningful. That's the only place the standard mentions delete this; at all, as far as I know.
Anyway, some consider delete this a nasty hack, and tell anybody who will listen that it should be avoided. One commonly cited problem is the difficulty of ensuring that objects of the class are only ever allocated dynamically. Others consider it a perfectly reasonable idiom, and use it all the time. Personally, I'm somewhere in the middle: I rarely use it, but don't hesitate to do so when it seems to be the right tool for the job.
The primary time you use this technique is with an object that has a life that's almost entirely its own. One example James Kanze has cited was a billing/tracking system he worked on for a phone company. When you start to make a phone call, something takes note of that and creates a phone_call object. From that point onward, the phone_call object handles the details of the phone call (making a connection when you dial, adding an entry to the database to say when the call started, possibly connect more people if you do a conference call, etc.) When the last people on the call hang up, the phone_call object does its final book-keeping (e.g., adds an entry to the database to say when you hung up, so they can compute how long your call was) and then destroys itself. The lifetime of the phone_call object is based on when the first person starts the call and when the last people leave the call -- from the viewpoint of the rest of the system, it's basically entirely arbitrary, so you can't tie it to any lexical scope in the code, or anything on that order.
For anybody who might care about how dependable this kind of coding can be: if you make a phone call to, from, or through almost any part of Europe, there's a pretty good chance that it's being handled (at least in part) by code that does exactly this.
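A much-simplified sketch of such a self-owning object (the names are illustrative, not from the actual system):
class PhoneCall {
public:
    // factory: the only way to create a call, so it always lives on the heap
    static PhoneCall* start() { return new PhoneCall(); }

    void personJoined() { ++participants_; }
    void personHungUp() {
        if( --participants_ == 0 ) {
            // final book-keeping (e.g. record the call duration) goes here
            delete this;   // the call's lifetime ends with the call itself
        }
    }

private:
    PhoneCall() : participants_(1) {}   // private: no stack instances possible
    int participants_;
};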
If it scares you, there's a perfectly legal hack:
void myclass::delete_me()
{
    std::unique_ptr<myclass> bye_bye(this);
}
I think delete this is idiomatic C++ though, and I only present this as a curiosity.
There is a case where this construct is actually useful - you can delete the object after throwing an exception that needs member data from the object. The object remains valid until after the throw takes place.
void myclass::throw_error()
{
    std::unique_ptr<myclass> bye_bye(this);
    throw std::runtime_error(this->error_msg);
}
Note: if you're using a compiler older than C++11, you can use std::auto_ptr instead of std::unique_ptr; it will do the same thing.
One of the reasons that C++ was designed was to make it easy to reuse code. In general, C++ should be written so that it works whether the class is instantiated on the heap, in an array, or on the stack. "Delete this" is a very bad coding practice because it will only work if a single instance is defined on the heap; and there had better not be another delete statement, which is typically used by most developers to clean up the heap. Doing this also assumes that no maintenance programmer in the future will cure a falsely perceived memory leak by adding a delete statement.
Even if you know in advance that your current plan is to only allocate a single instance on the heap, what if some happy-go-lucky developer comes along in the future and decides to create an instance on the stack? Or, what if he cuts and pastes certain portions of the class to a new class that he intends to use on the stack? When the code reaches "delete this" it will go off and delete it, but then when the object goes out of scope, it will call the destructor. The destructor will then try to delete it again and then you are hosed. In the past, doing something like this would screw up not only the program but the operating system and the computer would need to be rebooted. In any case, this is highly NOT recommended and should almost always be avoided. I would have to be desperate, seriously plastered, or really hate the company I worked for to write code that did this.
It is allowed (just do not use the object after that), but I wouldn't write such code in practice. I think delete this should appear only in functions called release or Release, looking like: void release() { ref--; if (ref < 1) delete this; }.
Well, in the Component Object Model (COM) the delete this construction can be part of the Release method that is called whenever you want to release an acquired object:
void IMyInterface::Release()
{
    --instanceCount;
    if(instanceCount == 0)
        delete this;
}
This is the core idiom for reference-counted objects.
Reference-counting is a strong form of deterministic garbage collection: it ensures objects manage their OWN lifetime instead of relying on 'smart' pointers, etc. to do it for them. The underlying object is only ever accessed via "Reference" smart pointers, designed so that the pointers increment and decrement a member integer (the reference count) in the actual object.
When the last reference drops off the stack or is deleted, the reference count will go to zero. Your object's default behavior will then be a call to "delete this" to garbage-collect. The libraries I write provide a protected virtual "CountIsZero" call in the base class so that you can override this behavior for things like caching.
The key to making this safe is not allowing users access to the CONSTRUCTOR of the object in question (make it protected), but instead making them call some static member (the FACTORY), like "static Reference CreateT(...)". That way you KNOW for sure that they're always built with ordinary "new" and that no raw pointer is ever available, so "delete this" won't ever blow up.
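A minimal sketch of that scheme (RefCounted and CountIsZero are illustrative names, not a real library):
class RefCounted {
public:
    void addRef()  { ++count_; }                 // called by the Reference pointers
    void release() { if( --count_ == 0 ) CountIsZero(); }

protected:
    RefCounted() : count_(0) {}                  // protected: clients must use a factory
    virtual ~RefCounted() {}
    virtual void CountIsZero() { delete this; }  // override e.g. for caching

private:
    int count_;
};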
You can do so. However, you can't assign to this. Thus the reason you state for doing this, "I want to change the view," seems very questionable. The better method, in my opinion, would be for the object that holds the view to replace that view.
Of course, you're using RAII objects and so you don't actually need to call delete at all...right?
This is an old, answered question, but @Alexandre asked "Why would anyone want to do this?", and I thought that I might provide an example usage that I am considering this afternoon.
Legacy code. Uses naked pointers Obj* obj with a delete obj at the end.
Unfortunately I sometimes, not often, need to keep the object alive longer.
I am considering making it a reference-counted smart pointer. But there would be lots of code to change if I were to use ref_cnt_ptr<Obj> everywhere. And if you mix naked Obj* and ref_cnt_ptr, you can get the object implicitly deleted when the last ref_cnt_ptr goes away, even though there are Obj* still alive.
So I am thinking about creating an explicit_delete_ref_cnt_ptr. I.e. a reference counted pointer where the delete is only done in an explicit delete routine. Using it in the one place where the existing code knows the lifetime of the object, as well as in my new code that keeps the object alive longer.
Incrementing and decrementing the reference count as explicit_delete_ref_cnt_ptr get manipulated.
But NOT freeing when the reference count is seen to be zero in the explicit_delete_ref_cnt_ptr destructor.
Only freeing when the reference count is seen to be zero in an explicit delete-like operation. E.g. in something like:
template<typename T> class explicit_delete_ref_cnt_ptr {
private:
    T* ptr;
    int rc;
    ...
public:
    void delete_if_rc0() {
        if( this->ptr ) {
            this->rc--;
            if( this->rc == 0 ) {
                delete this->ptr;
            }
            this->ptr = 0;
        }
    }
};
OK, something like that. It's a bit unusual to have a reference counted pointer type not automatically delete the object pointed to in the rc'ed ptr destructor. But it seems like this might make mixing naked pointers and rc'ed pointers a bit safer.
But so far no need for delete this.
But then it occurred to me: if the object pointed to, the pointee, knows that it is being reference counted, e.g. if the count is inside the object (or in some other table), then the routine delete_if_rc0 could be a method of the pointee object, not the (smart) pointer.
class Pointee {
private:
    int rc;
    ...
public:
    void delete_if_rc0() {
        this->rc--;
        if( this->rc == 0 ) {
            delete this;
        }
    }
};
Actually, it doesn't need to be a member method at all, but could be a free function:
map<void*,int> keepalive_map;

template<typename T>
void delete_if_rc0(T* ptr) {
    void* tptr = (void*)ptr;
    if( keepalive_map[tptr] == 1 ) {
        delete ptr;
    }
}
(BTW, I know the code is not quite right - it becomes less readable if I add all the details, so I am leaving it like this.)
delete this is legal as long as the object is on the heap.
You would need to require the object to be heap-only.
The only way to do that is to make the destructor protected; this way delete may be called ONLY from inside the class, so you would need a method that performs the deletion.
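For example (a sketch; the class name is hypothetical):
class HeapOnly {
public:
    static HeapOnly* create() { return new HeapOnly(); }  // factory forces heap allocation
    void destroy() { delete this; }     // deletion must go through the class

protected:
    HeapOnly() {}
    ~HeapOnly() {}   // protected: "HeapOnly h;" and a plain "delete p;" won't compile
};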

Is "delete p; p = NULL(nullptr);" an antipattern?

Searching something on SO, I stumbled across this question, and one of the comments to the most voted answer (the fifth comment to that most voted answer) suggests that delete p; p = NULL; is an antipattern. I must confess that I happen to use it quite often, pairing it sometimes/most of the time with the check if (NULL != p). The Man himself seems to suggest it (please see the destroy() function example), so I'm really confused as to why it might be such a dreaded thing to be considered an antipattern. I use it for the following reasons:
when I'm releasing a resource, I also want to invalidate it for further usage, and NULL is the right tool to say a pointer is invalid
I don't want to leave behind dangling pointers
I want to avoid double/multiple free bugs - deleting a NULL pointer is like a no-op, but deleting a dangling pointer is like "shooting yourself in the foot"
Please note that I'm not asking the question in the context of the "this" pointer and let's assume we don't live in a perfect C++ world and that legacy code does exist and it has to be maintained, so please do not suggest any kind of smart pointer :).
Yes; I would not recommend doing it.
The reason is that the extra setting to null will only help in very limited contexts. If you are in a destructor, the pointer itself will not exist right after the destructor execution, which means that whether it is null or not does not matter.
If the pointer was copied, setting p to null will not set the rest of the pointers, and you can then hit the same problems, with the extra issue that you will be expecting deleted pointers to be null, and it won't make sense how your pointer became non-zero and yet the object is not there...
Additionally, it might hide other errors. For example, if your application tries to delete a pointer twice, setting the pointer to null converts the second delete into a no-op, and while the application will not crash, the error in the logic is still there. (Consider that a later edit accesses the pointer right before the delete that is no longer failing... how can that fail?)
I recommend doing so.
Obviously, it's valid in a context where NULL is a valid value of the pointer. This, of course, means if it's used in other places, it must be checked.
Even if the pointer may, technically, not be NULL, it does help in real-world scenarios when customers send you a crash dump. If it's NULL and it's not supposed to be (and it wasn't trapped in testing with the assert() that you should do), then it's easy to see that this is the problem - you'll crash at something like mov eax, [edx+4] or something, you'll see that edx is 0, and you know what the problem is. If, on the other hand, you don't NULL the pointer, but it is deleted, then all sorts of things may happen. It may work, it may crash immediately, it may crash later, it may show weird things - at this point, anything that happens is soft-non-deterministic.
Defensive programming is King. That includes setting a pointer to NULL extraneously, even if you think you don't have to, and doing that extra NULL check in a few places even though you technically know it isn't supposed to happen.
Even if you have the logic error of a pointer going through delete twice, it's better to not crash and handle it safely than to crash. That may mean you do that extra check, and you'll probably want to log that, or maybe even end the program gracefully, but it's not just a crash. Customers don't like that.
Personally, I use an inline function that takes the pointer as a reference and sets it to NULL.
No, it's not an anti-pattern.
Setting a pointer to NULL is a perfectly good way of indicating that the pointer is no longer pointing at anything valid. In fact, that's exactly what NULL value is intended to mean.
If there's an anti-pattern to be avoided here, the anti-pattern would be not having a simple and well-defined set of rules/conventions for how your program manages its memory. It doesn't matter so much what those rules are, as long as they work to avoid leaks and crashes, and as long as you can and do follow them consistently.
It depends on how the pointer is used. If all the code that can see the pointer should "know" that it's no longer valid — especially in a destructor where the pointer is about to go out of scope anyway — there's no need to set it to null.
On the other hand, the pointer may represent an object that sometimes exists, sometimes doesn't, and you have code like if (p) { p->doStuff(); } to act upon the object when it exists. In that case, obviously, you should set it to null after deleting the object.
The important distinction in the latter case is that the lifetime of the pointer variable is much longer than the lifetime of the objects it (sometimes) points to, and its null-ness carries some significant meaning that has to be communicated to other parts of the program.
In my opinion the anti-pattern is not delete p; p = NULL;, it's assert(this != NULL).
I use the anti-pattern, as you call it, for two reasons: first, to enhance the likelihood that bad code will crash spectacularly instead of hiding the problem, and second, to make the core problem more obvious when debugging. I wouldn't litter my code with asserts just on the off chance that it might catch something.
Although I think @Mark Ransom is on sort of the right track, I would suggest there's an even more fundamental problem than just assert(this != NULL).
I think it's much more telling that you're using raw pointers and new (directly) often enough to care. While this isn't (necessarily) a code smell/anti-pattern/whatever in itself, it often points toward code that's unnecessarily similar to C, and isn't taking advantage of the things like containers in the standard library.
Even if the standard containers don't fit your needs well, you can still normally wrap your pointer up into a small enough, simple enough package that techniques like this are irrelevant -- you've restricted access to the pointer in question to such a small amount of code that you can glance through it and assure that it's only ever used when it's valid. As Hoare said, there are two ways of doing things: one is to keep the code so simple there are obviously no deficiencies; the other is to make the code so complex there are no obvious deficiencies. This technique only appears relevant (to me) when you're already dealing with the latter case.
Ultimately, I see the desire to do this at all as basically admitting defeat -- rather than even attempting to assure that the code is correct, you've done the equivalent of setting up a bug-zapper next to a swamp. It reduces the appearance of bugs in a small area by a small percentage, but leaves so many more free to breed that if there's any effect on the total population, it's far too small to measure.
The reason is that the extra setting to null will only help in very limited contexts. If you are in a destructor, the pointer itself will not exist right after the destructor execution, which means that whether it is null or not does not matter.
Got to correct this statement since it is false in C++.
The other functions of an object being destroyed may be called while the object is getting destroyed (because somehow the destruction process requires it). It is generally viewed as ugly, but there is not always a good workaround.
Therefore, clearing your pointers may be the only good solution to avoid problems (i.e., those other functions can then test whether the object is valid before using it).
Yet, a good idea in C++ is to use smart objects (probably what you are talking about): more or less, a class that holds a reference to an object and makes sure said object is released when the destructor is hit, rather than one object having to destroy many others all at once (although the result is the same, it is cleaner).

C++ Dynamic Allocation Mismatch: Is this problematic?

I have been assigned to work on some legacy C++ code in MFC. One of the things I am finding all over the place are allocations like the following:
struct Point
{
    float x,y,z;
};
...
void someFunc( void )
{
    int numPoints = ...;
    Point* pArray = (Point*)new BYTE[ numPoints * sizeof(Point) ];
    ...
    //do some stuff with points
    ...
    delete [] pArray;
}
I realize that this code is atrociously wrong on so many levels (C-style cast, using new like malloc, confusing, etc). I also realize that if Point had defined a constructor it would not be called and weird things would happen at delete [] if a destructor had been defined.
Question: I am in the process of fixing these occurrences wherever they appear as a matter of course. However, I have never seen anything like this before and it has got me wondering. Does this code have the potential to cause memory leaks/corruption as it stands currently (no constructor/destructor, but with pointer type mismatch) or is it safe as long as the array just contains structs/primitive types?
Formally the code causes undefined behavior because of the pointer type mismatch in new[]/delete[]. In practice it should work fine.
The pointer type mismatch issue can easily be fixed by adding a cast to the delete-expression
delete [] (BYTE *) pArray;
If Point type is defined as shown in the question (i.e. with trivial constructor and destructor), then this correction solves all formal issues there are in this code. From the language point of view, the lifetime of an object with trivial constructor (destructor) begins (ends) simultaneously with its storage duration. I.e. there's no requirement to perform the actual invocation of constructor (destructor).
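If the code has to stay this way for a while, one extra safeguard (a C++11 suggestion of mine, not part of the original answer) is to assert the triviality that the trick relies on, so a later change to Point fails loudly at compile time:
#include <type_traits>

static_assert(std::is_trivially_constructible<Point>::value &&
              std::is_trivially_destructible<Point>::value,
              "Point must stay trivial for the BYTE-array allocation trick");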
As long as the constructors and destructors do nothing, then you're safe.
As long as you assure that it really does invoke a matching delete[] for every new[], it shouldn't leak -- but if an exception might be thrown by any of the code that's been commented out, that's going to be difficult to assure (basically, you need to catch any possible exceptions, delete the memory, then re-throw the exception).
I would first try to figure out why the code was written that way in the first place. It might be simply because the programmer didn't know any better, or because they were trying to work around some funky defect in the compiler. But there might be a real reason that you are unaware of. If there is, then unless you understand that reason and its side effects, you may introduce a defect by changing this code.
That out of the way, and assuming there is no particular reason why the code needs to be this way now, you should be safe in changing the code to use more modern and correct constructs.
But why? I understand the motivation to make the code more correct. But what do you really gain by this? If the code works the way it is now (a big assumption), then by changing the code you possibly gain the benefit of making the code more understandable to future programmers, but every line of code you change introduces the possibility for a new bug to be written.
And if finally you do decide to go ahead with the change, why stop halfway? Consider getting rid of all the news and deletes altogether, and replace them with vectors etc.
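For instance, the function from the question could shrink to something like this (a sketch):
#include <vector>

void someFunc( void )
{
    int numPoints = ...;                    // as in the original
    std::vector<Point> points( numPoints );
    // do some stuff with points
}   // no delete at all: the vector releases its storage automatically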

Is it good programming practice to always check for null pointers before using an object in C++?

This seems like a lot of work: checking for null each time an object is used.
I have been advised that it is a good idea to check for null pointers so you don't have to spend time looking for where segmentation faults occur.
Just wondering what the community here thinks?
Use references whenever you can, because they can't be null, therefore you don't have to check if they are null.
It's good practice to check for null in function parameters and other places you may be dealing with pointers someone else is passing you. However, in your own code, you might have pointers you know will always be pointing to a valid object, so a null check is probably overkill... just use your common sense.
I don't know if it really helps with debugging because any debugger will be showing you pretty clearly that a null pointer was used and it won't take long to find it. It's more about making sure you don't crash if another programmer passes in NULL, or that the mistake is picked up by an assert in a debug build.
No. You should instead make sure the pointers were not set to NULL in the first place. Note that in Standard C++:
int * p = new int;
then p can never be NULL because new will throw an exception if the allocation fails.
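To illustrate (standard behavior, nothing project-specific):
#include <new>   // std::bad_alloc, std::nothrow

void example()
{
    try {
        int * p = new int;   // throws std::bad_alloc on failure...
        *p = 42;             // ...so p can never be NULL here
        delete p;
    } catch ( const std::bad_alloc & ) {
        // handle allocation failure
    }

    int * q = new (std::nothrow) int;   // only the nothrow form returns NULL
    if ( q != NULL )
        delete q;
}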
If you are writing functions that can take a pointer as a parameter, you should treat them like this
// does something with p
// note that p cannot be NULL
void f( int * p );
In other words you should document the requirements of the function. You can also use assert() to check if someone has ignored your documentation (if they have, it's their problem, not yours), but I must admit I have gone off this as time has gone on - simply say what the function requires, and leave the responsibility with the caller.
A third bit of advice is simply not to use pointers - most C++ code that I've seen overuses pointers to a ridiculous extent - you should use values and references wherever possible.
In general, I would advise against doing this, as it makes your code harder to read and you also have to come up with some sensible way of dealing with the situation if a pointer is actually NULL.
In my C++ projects, I check whether a pointer (if I am using pointers at all) is NULL only if NULL could be a valid state of the pointer. Checking for NULL if the pointer should never actually be NULL is a bit pointless, because you are then trying to work around some programming error you should fix instead.
Additionally, when you feel the need to check if a pointer is NULL, you probably should define more clearly who owns pointer/object.
Also, you never have to check if new returns NULL, because it never will return NULL. It will throw an exception if it could not create an object.
I hate the amount of code checking for nulls adds, so I only do it for functions I export to another person.
If I use the function internally, and I know how I use it, I don't check for nulls, since that would make the code too messy.
The answer is yes, if you are not in control of the object: that is, if the object is returned from some method you do not control, or if in your own code you expect (or it is possible) that an object can be null.
It also depends on where the code will run. If you are writing professional code that customers/users will see, it's generally bad for them to see null pointer problems; it's better if you can detect the problem beforehand and print out some debugging information, or otherwise report it to them in a "nicer" way.
If it's just code you are using informally, you will probably be able to understand the source of the null pointer without any additional information.
I figure I can do a whole lot of checks for NULL pointers for the cost of (debugging) just one segfault.
And the performance hit is negligible. TWO INSTRUCTIONS. Test for register == zero, branch if test succeeds. Depending on the machine, maybe only ONE instruction, if the register load sets the condition codes (and some do).
Others (AshleysBrain and Neil Butterworth) already answered correctly, but I will summarize it here:
Use references as much as possible
If using pointers, initialize them either to NULL or to a valid memory address/object
If using pointers, always verify if they are NULL before using them
Use references (again)... This is C++, not C.
Still, there is one corner case where a reference can be invalid/NULL:
void foo(T & t)
{
    t.blah();
}

void bar()
{
    T * t = NULL;
    foo(*t);
}
The compiler will probably compile this, and then, at execution, the code will crash at the t.blah() line (if T::blah() uses this one way or another).
Still, this is cheating/sabotage: the writer of the bar() function dereferenced t without verifying that t was NOT null. So, even if the crash happens in foo(), the error is in the code of bar(), and the writer of bar() is responsible.
So, yes, use references as much as possible, know this corner case, and don't bother to protect against sabotaged references...
And if you really need to use a pointer in C++, unless you are 100% sure the pointer is not NULL (some functions guarantee that kind of thing), then always test the pointer.
I think that is a good idea for a debug version.
In a release version, checking for null pointers can result in performance degradation.
Moreover, there are cases where you can check the pointer value in a parent function and avoid the checks in the functions it calls.
If the pointers are coming to you as parameters to a function, then make sure they are valid at the beginning of the function. Otherwise, there is not much point. new throws an exception on failure.

Checking for null before pointer usage

Most people use pointers like this...
if ( p != NULL ) {
    DoWhateverWithP();
}
However, if the pointer is null for whatever reason, the function won't be called.
My question is, could it possibly be more beneficial to just not check for NULL? Obviously on safety critical systems this isn't an option, but your program crashing in a blaze of glory is more obvious than a function not being called if the program can still run without it.
In relation to the first question, do you always check for NULL before you use pointers?
Secondly, consider you have a function that takes a pointer as an argument, and you use this function multiple times on multiple pointers throughout your program. Do you find it more beneficial to test for NULL in the function (the benefit being you don't have to test for NULL all over the place), or on the pointer before calling the function (the benefit being no overhead from calling the function)?
You are right in thinking that NULL pointers often result in immediate crashes, but do not forget that if you are indexing into a large array through a NULL pointer, you might indeed get a valid memory address if your index is high enough. And then, you'll get memory corruption or incorrect memory reads, which will be much harder to locate.
Whenever I can assume that calling a function with NULL is a bug, which should never happen in production code, I prefer using ASSERT guards in the function, which are only compiled into real code in a debug build, and not checking for NULL otherwise.
And from my point of view, generally, a function should check its arguments, not the caller. You should always assume that your callers might have been a bit sloppy about the checking, or that they might contain bugs...
Moral: check for NULL in the function being called, either through some if() statement that throws, or using some ASSERT construct (possibly with a clear message of why this happened). Also check for NULL in the callers, but only if the callers know that this condition might happen in a normal program execution, and act accordingly.
When it's acceptable for the program to just crash if a NULL pointer comes up, I'm partial to:
assert(p);
DoWhateverWithP();
This will only check the pointer in debug builds, since defining NDEBUG turns assert() into a no-op at the preprocessor level. It documents your assumption and assists with debugging, but has zero performance impact on the released binary (though, to be fair, checking for a NULL pointer should have effectively zero impact on performance in the vast majority of circumstances).
As a side benefit, this is legal for C as well as C++ and, in the latter case, doesn't require exceptions to be enabled in your compiler/runtime.
Concerning your second question, I prefer to put the assertions at the beginning of the subroutine. Again, the beauty of assert() is the fact that there's really no 'overhead' to speak of. As such, there's nothing to weigh against the benefits of only requiring one assertion in the subroutine definition.
Of course, the caveat is that you never want to assert an expression with side-effects:
assert(p = malloc(1)); // NEVER DO THIS!
DoSomethingWithP(); // If NDEBUG was defined, malloc() was never called!
Don't make it a rule to just check for null and do nothing if you find it.
If the pointer is allowed to be null, then you have to think about what your code does in the case that it actually is null. Usually, just doing nothing is the wrong answer. With care it's possible to define APIs which work like that, but this requires more than just scattering a few NULL checks about the place.
So, if the pointer is allowed to be null, then you must check for null, and you must do whatever is appropriate.
If the pointer is not allowed to be null, then it's perfectly reasonable to write code which invokes undefined behaviour if it is null. It's no different from writing string-handling routines which invoke undefined behaviour if the input is not NUL-terminated, or writing buffer-using routines which invoke undefined behaviour if the caller passes in the wrong value for the length, or writing a function that takes a FILE* parameter and invokes undefined behaviour if the user passes in a file descriptor reinterpret_cast to FILE*. In C and C++, you simply have to be able to rely on what your caller tells you. Garbage in, garbage out.
However, you might like to write code which helps out your caller (who is probably you, after all) when the most likely kinds of garbage are passed in. Asserts and exceptions are good for this.
Taking up the analogy from Franci's comment on the question: most people do not look for cars when crossing a footpath, or before sitting down on their sofa. They could still be hit by a car. It happens. But it would generally be considered paranoid to spend any effort checking for cars in those circumstances, or for the instructions on a can of soup to say "first, check for cars in your kitchen. Then, heat the soup".
The same goes for your code. It's much easier to pass an invalid value to a function than it is to accidentally drive your car into someone's kitchen. But it's still the fault of the driver if they do so and hit someone, not a failure of the cook to exercise due care. You don't necessarily want cooks (or callees) to clutter up their recipes (code) with checks that ought to be redundant.
There are other ways to find problems, such as unit tests and debuggers. In any case it is much safer to create a car-free environment except where necessary (roads), than it is to drive cars willy-nilly all over the place and hope everybody can cope with them at all times. So, if you do check for null in cases where it isn't allowed, you shouldn't let this give people the idea that it is allowed after all.
[Edit - I literally just hit an example of a bug where checking for null would not find an invalid pointer. I'm going to use a map to hold some objects. I will be using pointers to those objects (to represent a graph), which is fine because map never relocates its contents. But I haven't defined an ordering for the objects yet (and it's going to be a bit tricky to do so). So, to get things moving and prove that some other code works, I used a vector and a linear search instead of a map. That's right, I didn't mean vector, I meant deque. So after the first time the vector resized, I wasn't passing null pointers into functions, but I was passing pointers to memory which had been freed.
I make dumb errors which pass invalid garbage approximately as often as I make dumb errors which pass null pointers invalidly. So regardless of whether I add checking for null, I still need to be able to diagnose problems where the program just crashes for reasons I can't check. Since this will also diagnose null pointer accesses, I usually don't bother checking for null unless I'm writing code to generally check the preconditions on entry to the function. In that case it should if possible do a lot more than just check null.]
I prefer this style:
if (p == NULL) {
    // throw some exception here
}
DoWhateverWithP();
This means that whatever function this code lives in will fail quickly in the event that p is NULL. You are correct that if p is NULL there is no way that DoWhateverWithP can execute, but using a null pointer or simply not executing the function are both unacceptable ways to handle the fact that p is NULL.
The important thing to remember is to exit early and fail fast - this kind of approach yields code that is easier to debug.
In addition to the other answers, it depends upon what NULL signifies. For example, this code is perfectly OK, and is pretty idiomatic:
while (fgets(buf, sizeof buf, fp) != NULL) {
    process(buf);
}
Here, NULL value indicates not only error, but end-of-file condition as well. Similarly, strtok() returns NULL to say, "there are no more tokens" (although one should avoid strtok() to begin with, but I digress). In cases like this, it is perfectly OK to call a function if the returned pointer is not NULL, and do nothing otherwise.
Edit: another example, closer to what was asked:
const char *data = "this;is;a;test;";
const char *curr = data;
const char *p;
while ((p = strchr(curr, ';')) != NULL) {
    /* process data in [curr, p) */
    process(curr, p);
    curr = p + 1;
}
Once again, NULL here is an indication from strchr() that it couldn't find a ;, and that we should stop processing the data further.
Having said that, if NULL is not used as an indication, then it depends:
If the pointer can't be NULL at this point in the code, it's useful to have an assert(p != NULL); when developing, along with a fprintf(stderr, "Can't happen\n"); or equivalent statement, and then to take whatever action is appropriate (abort() or similar is probably the only sane choice at this point).
If the pointer can be NULL, and it's not critical, it might be better to just bypass the usage of the null pointer. Suppose you're trying to allocate memory for writing a log message, and malloc() fails. You shouldn't abort the program because of this. If malloc() succeeds, you want to call a function (sprintf()/whatever) to fill the buffer.
If the pointer can be NULL, and it's critical: in this case, you probably want to fail, and hopefully such conditions don't happen too often.
Secondly, consider you have a function that takes a pointer as an argument, and you use this function multiple times on multiple pointers throughout your program. Do you find it more beneficial to test for NULL in the function (the benefit being you don't have to test for NULL all over the place), or on the pointer before calling the function (the benefit being no overhead from calling the function)?
This depends upon a lot of factors. If I can be sure sometimes or most of the time that the pointer passed to a function cannot be NULL, the extra check in the function is wasteful. If the pointer passed comes from a lot of places, and it's tricky to put a check everywhere, then sure, the check is good to have in the function itself.
The standard library functions, for the most part, don't check for NULL: str*, mem* functions for example. An exception is free(), it does check for NULL.
A comment about assert: assert is a no-op if NDEBUG is defined, so one should not use it for error handling; its only use is during development, to catch programming errors. Also, in C89, assert takes an int, so assert(p != NULL) is better in such cases than just a plain assert(p).
This non-NULLness check can be avoided by using references instead of pointers. This way, the compiler ensures the parameter passed is not NULL. For example:
void f(Param& param)
{
// "param" is a pointer that is guaranteed not to be NULL
}
In this case, it is up to the client to do the checking. However, mostly the client situation will be like this:
Param instance;
f(instance);
No non-NULLness checking is needed.
When working with objects allocated on the heap, you can do the following:
Param& instance = *new Param();
f(instance);
Update: As user Crashworks remarks, it is still possible to make you program crash. However, when using references, it is the responsibility of the client to pass a valid reference, and as I show in the example, this is very easy to do.
How about: a comment clarifying the intent? If the intent is "this can't happen", then perhaps an assert would be the right thing to do instead of the if statement.
On the other hand, if a null value is normal, perhaps an "else comment" explaining why we can skip the "then" step would be in order. Steve McConnell has a good section in "Code Complete" about if/else statements, and how a missing else is a very common error (distracted, forgot it?).
Consequently, I usually put a comment in for a "no-op else", unless it is something of the form "if done, return/break".
When you check for NULL, it is not a good idea to just skip the function call. You should have an else-part that does something meaningful in the case of NULL, for example throwing an error or returning an error code to the upper level.
On the other hand, NULL is not always an error. It is often used to indicate for example that end of data has been reached. In such case, you will have to handle the situation as normal program flow.
Well, the answer to the first question is: you are talking about an ideal situation; most of the code that I see which uses if ( p != NULL ) is legacy. Also, suppose you want to return an evaluator, and then call the evaluator with the data, but say there is no evaluator for that data; it makes logical sense to return NULL and check for NULL before calling the evaluator.
The answer to the second question is: it depends on the situation. For example, delete checks for the NULL pointer, whereas lots of other functions don't. Sometimes, if you test the pointer inside the function, then you might have to test it in lots of functions, like:
ABC(p);
a = DEF(p);
d = GHI(a);
JKL(p, d);
but this code would be much better:
if(p)
{
    ABC(p);
    a = DEF(p);
    d = GHI(a);
    JKL(p, d);
}
Could it possibly be more beneficial to just not check for NULL?
I wouldn't do it; I favor assertions on the front line and some form of recovery in the body past that. What would assertions not provide to you that not checking for null would? Similar effect, with easier interpretation and a formal acknowledgement.
In relation to the first question, do you always check for NULL before you use pointers?
It really depends on the code and the time available, but I am irritatingly good at it; a good chunk of 'implementation' in my programs consists of what a program should not do, rather than the usual 'what it should do'.
Secondly, consider you have a function that takes a pointer as an argument...
I test it in the function, as the function is (hopefully) the code that is reused more frequently. I also tend to test it before making the call; without that test, the error loses localization (which is useful for reporting and isolation).
I think this is the pattern I've seen more of. This way you don't proceed if you know it's going to blow up anyway:
if (NULL == p)
{
    goto FunctionExit; // or some other common label to exit the function
}
I think it is better to check for null, although you can cut down on the number of checks you need to make.
For most cases I prefer a simple guard clause at the top of a function:
if (p == NULL) return;
That said, I typically only put the check on functions that are publicly exposed.
However, when the null pointer is unexpected I will throw an exception. (There are some functions it doesn't make any sense to call with null, and the consumer should be responsible enough to use them right.)
Constructor initialization can be used as an alternative to checking for null all the time. This is especially useful when the class contains a collection. The collection can be used throughout the class without checking whether it has been initialized.
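For example (a sketch; the names are illustrative):
#include <cstddef>
#include <vector>

struct Observer { void notify() {} };   // stand-in for a real observer type

class Subject {
public:
    Subject() : observers_() {}         // the collection exists from construction on
    void add( Observer * o ) { observers_.push_back( o ); }
    void notifyAll() {
        // no null check on the collection itself is ever needed
        for ( std::size_t i = 0; i < observers_.size(); ++i )
            observers_[i]->notify();
    }
private:
    std::vector<Observer*> observers_;
};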
Dereferencing a null pointer is undefined behavior. If you want to crash if the pointer is null, use an assert or something similar (and, depending on the defined behavior of your class, that can be a perfectly valid response - it's certainly better than continuing to run when people may be expecting something to have been done!).
Since the behavior of dereferencing a null pointer is undefined, it can do anything. Crash, corrupt memory, create a wormhole to an alternate dimension allowing the Elder Gods to come forth and devour all of mankind... anything. While bugs happen, depending upon undefined behavior is, by definition, a bug. So don't do it deliberately.