This is definitely subjective, but I'd like to try to avoid it
becoming argumentative. I think it could be an interesting question if
people treat it appropriately.
In several of my recent projects I implemented architectures where long delegation chains are common.
Two-level delegation chains come up very often:
bool Exists = Env->FileSystem->FileExists( "foo.txt" );
And triple delegation is not rare at all:
Env->Renderer->GetCanvas()->TextStr( ... );
Delegation chains of higher order exist but are quite rare.
In the examples above, no run-time NULL checks are performed, since the objects used are always there: they are vital to the functioning of the program and are explicitly constructed when execution starts. Basically, I split a delegation chain in these cases:
1) I reuse the object obtained through a delegation chain:
{ // make C invisible to the parent scope
clCanvas* C = Env->Renderer->GetCanvas();
C->TextStr( ... );
C->TextStr( ... );
C->TextStr( ... );
}
2) An intermediate object somewhere in the middle of the delegation chain should be checked for NULL before usage, e.g.:
clCanvas* C = Env->Renderer->GetCanvas();
if ( C ) C->TextStr( ... );
I used to handle case (2) by providing proxy objects, so that a method can always be invoked on a non-NULL object, producing an empty result.
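For illustration, here is a minimal sketch of such a proxy (assuming clCanvas exposes virtual drawing methods; the real class isn't shown here):
class clCanvas {
public:
    virtual ~clCanvas() {}
    virtual void TextStr( const char* S ) = 0; // real canvases draw here
};
// The proxy: GetCanvas() can return this instead of NULL, so callers
// may invoke TextStr() unconditionally, with an empty result.
class clNullCanvas : public clCanvas {
public:
    virtual void TextStr( const char* ) { /* deliberately do nothing */ }
};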
My questions are:
Is either of cases (1) or (2) a pattern or an antipattern?
Is there a better way to deal with long delegation chains in C++?
Here are some pros and cons I considered while making my choice:
Pros:
it is very descriptive: a single line of code makes it clear where the object came from
long delegation chains look nice
Cons:
interactive debugging is labored since it is hard to inspect more than one temporary object in the delegation chain
I would like to know other pros and cons of the long delegation chains. Please, present your reasoning and vote based on how well-argued opinion is and not how well you agree with it.
I wouldn't go so far as to call either an anti-pattern. However, the first has the disadvantage that your variable C is visible even after it's logically relevant (its scope is wider than necessary).
You can get around this by using this syntax:
if (clCanvas* C = Env->Renderer->GetCanvas()) {
C->TextStr( ... );
/* some more things with C */
}
This is allowed in C++ (while it's not in C) and allows you to keep proper scope (C is scoped as if it were inside the conditional's block) and check for NULL.
Asserting that something is not NULL is by all means better than getting killed by a SegFault. So I wouldn't recommend simply skipping these checks, unless you're 100% sure that that pointer can never, ever be NULL.
Additionally, you could encapsulate your checks in an extra free function, if you feel particularly dandy:
#include <cassert>

template <typename T>
T notNULL(T value) {
    assert(value); // fails fast in a debug build if the pointer is NULL
    return value;
}
// e.g.
notNULL(notNULL(Env)->Renderer->GetCanvas())->TextStr();
In my experience, chains like that often contain getters that are less than trivial, leading to inefficiencies. I think that (1) is a reasonable approach. Using proxy objects seems like overkill; I would rather see a crash on a NULL pointer than use proxy objects.
Such long chains of delegation should not happen if you follow the Law of Demeter. I've often argued with some of its proponents that they were holding themselves to it too conscientiously, but if you have come to the point of wondering how best to handle long delegation chains, you should probably be a little more compliant with its recommendations.
Interesting question, I think this is open to interpretation, but:
My Two Cents
Design patterns are just reusable solutions to common problems which are generic enough to be widely applied in object oriented (usually) programming. Many common patterns will start you out with interfaces, inheritance chains, and/or containment relationships that will result in you using chaining to call things to some extent. The patterns are not trying to solve a programming issue like this though - chaining is just a side effect of them solving the functional problems at hand. So, I wouldn't really consider it a pattern.
Equally, anti-patterns are approaches that (in my mind) counter-act the purpose of design patterns. For example, design patterns are all about structure and the adaptability of your code. People consider a singleton an anti-pattern because it (often, not always) results in spider-web like code due to the fact that it inherently creates a global, and when you have many, your design deteriorates fast.
So, again, your chaining problem doesn't necessarily indicate good or bad design - it's not related to the functional objectives of patterns or the drawbacks of anti-patterns. Some designs just have a lot of nested objects even when designed well.
What to do about it:
Long delegation chains can definitely be a pain in the butt after a while, and as long as your design dictates that the pointers in those chains won't be reassigned, I think saving a temporary pointer to the point in the chain you're interested in is completely fine (at function scope or narrower, preferably).
Personally though, I'm against saving a permanent pointer to a part of the chain as a class member as I've seen that end up in people having 30 pointers to sub objects permanently stored, and you lose all conception of how the objects are laid out in the pattern or architecture you're working with.
One other thought - I'm not sure if I like this or not, but I've seen some people create a private (for your sanity) function that navigates the chain so you can recall that and not deal with issues about whether or not your pointer changes under the covers, or whether or not you have nulls. It can be nice to wrap all that logic up once, put a nice comment at the top of the function stating which part of the chain it gets the pointer from, and then just use the function result directly in your code instead of using your delegation chain each time.
Performance
My last note would be that this wrap-in-function approach as well as your delegation chain approach both suffer from performance drawbacks. Saving a temporary pointer lets you avoid the extra two dereferences potentially many times if you're using these objects in a loop. Equally, storing the pointer from the function call will avoid the overhead of an extra function call every loop cycle.
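As a hedged illustration (lines and lineCount are hypothetical names), the loop case looks like this:
clCanvas* C = Env->Renderer->GetCanvas(); // walk the chain once
for ( int i = 0; i < lineCount; ++i )
{
    C->TextStr( lines[i] ); // no repeated chain traversal inside the loop
}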
For bool Exists = Env->FileSystem->FileExists( "foo.txt" ); I'd rather go for an even more detailed breakdown of your chain, so in my ideal world, there are the following lines of code:
Environment* env = GetEnv();
FileSystem* fs = env->FileSystem;
bool exists = fs->FileExists( "foo.txt" );
and why? Some reasons:
readability: my attention gets lost by the time I reach the end of the line in the case of bool Exists = Env->FileSystem->FileExists( "foo.txt" ); it's just too long for me.
validity: regardless of your assurance that the objects are always there, if your company hires a new programmer tomorrow and he starts writing code, the day after tomorrow the objects might not be there. These long lines are pretty unfriendly; new people might get scared of them and do something interesting such as "optimising" them... which will take a more experienced programmer extra time to fix.
debugging: if by any chance (and after you have hired the new programmer) the application throws a segmentation fault in the long chain, it is pretty difficult to find out which object was the guilty one. The more detailed the breakdown, the easier it is to find the location of the bug.
speed: if you need to make lots of calls for the same chain elements, it might be faster to "pull out" a local variable from the chain instead of calling a "proper" getter function each time. I don't know if your code is production code or not, but it seems to be missing the "proper" getter functions; it seems to access the attributes directly.
Long delegation chains are a bit of a design smell to me.
What a delegation chain tells me is that one piece of code has deep access to an unrelated piece of code, which makes me think of high coupling, which goes against the SOLID design principles.
The main problem I have with this is maintainability. If you're reaching two levels deep, that is two independent pieces of code that could evolve on their own and break under you. This quickly compounds when you have functions inside the chain, because they can contain chains of their own - for example, Renderer->GetCanvas() could be choosing the canvas based on information from another hierarchy of objects and it is difficult to enforce a code path that does not end up reaching deep into objects over the life time of the code base.
The better way would be to create an architecture that obeyed the SOLID principles and used techniques like Dependency Injection and Inversion Of Control to guarantee your objects always have access to what they need to perform their duties. Such an approach also lends itself well to automated and unit testing.
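A minimal sketch of what constructor injection might look like here (clTextWidget is a hypothetical consumer):
class clTextWidget
{
public:
    explicit clTextWidget( clCanvas& Canvas ) : FCanvas( Canvas ) {}
    void Draw() { FCanvas.TextStr( "hello" ); } // no reaching through Env
private:
    clCanvas& FCanvas; // injected dependency, guaranteed to exist
};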
Just my 2 cents.
If possible, I would use references instead of pointers, so that delegates are guaranteed to return valid objects or throw an exception.
clCanvas & C = Env.Renderer().GetCanvas();
For objects which may not exist, I would provide additional methods such as Has, Is, etc.
if ( Env.HasRenderer() ) { clCanvas& C = Env.Renderer().GetCanvas(); /* use C */ }
If you can guarantee that all the objects exist, I don't really see a problem in what you're doing. As others have mentioned, even if you think that NULL will never happen, it may just happen anyway.
This being said, I see that you use bare pointers everywhere. What I would suggest is that you start using smart pointers instead. Standard smart pointers don't check for NULL on the -> operator (dereferencing a null one is still undefined behaviour), but a checking smart pointer class can throw instead, so you avoid a SegFault. Not only that: if you use smart pointers, you can keep copies and the objects don't just disappear under your feet. You have to explicitly reset each smart pointer before the pointer goes to NULL.
This being said, it wouldn't prevent the -> operator from throwing once in a while.
Otherwise I would rather use the approach proposed by AProgrammer. If object A needs a pointer to object C pointed to by object B, then the work that object A is doing is probably something that object B should actually be doing. So A can guarantee that it has a pointer to B at all times (because it holds a shared pointer to B, which thus cannot go NULL), and it can always call a function on B to do action Z on object C. In function Z, B knows whether it always has a pointer to C or not; that's part of B's implementation.
Note that with C++11 you have std::shared_ptr<> and std::unique_ptr<>, so use them!
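To be concrete about the checking idea above, here is a small wrapper sketch (the class and its names are mine, not a standard facility):
#include <memory>
#include <stdexcept>

template <typename T>
class checked_ptr
{
public:
    explicit checked_ptr( std::shared_ptr<T> P = std::shared_ptr<T>() ) : FPtr( P ) {}
    T* operator->() const
    {
        if ( !FPtr ) throw std::runtime_error( "NULL dereference" ); // instead of a SegFault
        return FPtr.get();
    }
private:
    std::shared_ptr<T> FPtr;
};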
This seems like a lot of work; to check for null each time an object is used.
I have been advised that it is a good idea to check for null pointers so you don't have to spend time looking for where segmentation faults occur.
Just wondering what the community here thinks?
Use references whenever you can, because they can't be null, therefore you don't have to check if they are null.
It's good practice to check for null in function parameters and other places you may be dealing with pointers someone else is passing you. However, in your own code, you might have pointers you know will always be pointing to a valid object, so a null check is probably overkill... just use your common sense.
I don't know if it really helps with debugging because any debugger will be showing you pretty clearly that a null pointer was used and it won't take long to find it. It's more about making sure you don't crash if another programmer passes in NULL, or that the mistake is picked up by an assert in a debug build.
No. You should instead make sure the pointers were not set to NULL in the first place. Note that in Standard C++:
int * p = new int;
then p can never be NULL because new will throw an exception if the allocation fails.
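(For completeness, the only form of new that reports failure by returning NULL is the nothrow variant:)
#include <new>

int * p = new ( std::nothrow ) int;
if ( p == NULL )
{
    // allocation failed; only this form returns NULL instead of throwing
}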
If you are writing functions that can take a pointer as a parameter, you should treat them like this
// does something with p
// note that p cannot be NULL
void f( int * p );
In other words you should document the requirements of the function. You can also use assert() to check if someone has ignored your documentation (if they have, it's their problem, not yours), but I must admit I have gone off this as time has gone on - simply say what the function requires, and leave the responsibility with the caller.
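For example, a sketch of combining the documented requirement with an assert:
#include <cassert>
#include <cstddef>

// does something with p
// note that p cannot be NULL
void f( int * p )
{
    assert( p != NULL && "caller violated f()'s documented contract" );
    // ...
}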
A third bit of advice is simply not to use pointers - most C++ code that I've seen overuses pointers to a ridiculous extent - you should use values and references wherever possible.
In general, I would advise against doing this, as it makes your code harder to read and you also have to come up with some sensible way of dealing with the situation if a pointer is actually NULL.
In my C++ projects, I only check whether a pointer (if I am using pointers at all) is NULL if NULL could be a valid state of the pointer. Checking for NULL if the pointer should never actually be NULL is a bit pointless, because you are then trying to work around some programming error you should fix instead.
Additionally, when you feel the need to check if a pointer is NULL, you probably should define more clearly who owns the pointer/object.
Also, you never have to check if new returns NULL, because it never will return NULL. It will throw an exception if it could not create an object.
I hate the amount of code that checking for nulls adds, so I only do it for functions I export to other people.
If I use the function internally and I know how I use it, I don't check for nulls, since it would make the code too messy.
The answer is yes, if you are not in control of the object: that is, if the object is returned from some method you do not control, or if in your own code you expect (or it is possible) that an object can be null.
It also depends on where the code will run. If you are writing professional code that customers/users will see, it's generally bad for them to see null pointer problems; it's better if you can detect the problem beforehand and print out some debugging information or otherwise report it to them in a "nicer" way.
If it's just code you are using informally, you will probably be able to understand the source of the null pointer without any additional information.
I figure I can do a whole lot of checks for NULL pointers for the cost of (debugging) just one segfault.
And the performance hit is negligible. TWO INSTRUCTIONS. Test for register == zero, branch if test succeeds. Depending on the machine, maybe only ONE instruction, if the register load sets the condition codes (and some do).
Others (AshleysBrain and Neil Butterworth) already answered correctly, but I will summarize it here:
Use references as much as possible
If using pointers, initialize them either to NULL or to a valid memory address/object
If using pointers, always verify if they are NULL before using them
Use references (again)... This is C++, not C.
Still, there is one corner case where a reference can be invalid/NULL:
void foo(T & t)
{
t.blah() ;
}
void bar()
{
T * t = NULL ;
foo(*t) ;
}
The compiler will probably compile this, and then, at execution, the code will crash at the t.blah() line (if T::blah() uses this one way or another).
Still, this is cheating/sabotage: the writer of the bar() function dereferenced t without verifying that t was NOT null. So, even if the crash happens in foo(), the error is in the code of bar(), and the writer of bar() is responsible.
So, yes, use references as much as possible, know this corner case, and don't bother to protect against sabotaged references...
And if you really need to use a pointer in C++, unless you are 100% sure the pointer is not NULL (some functions guarantee that kind of thing), then always test the pointer.
I think that is a good idea for a debug version.
In a release version, checking for null pointers can result in performance degradation.
Moreover, there are cases where you can check the pointer value in a parent function and avoid the checking in its children.
If the pointers are coming to you as parameters to a function, then make sure they are valid at the beginning of the function. Otherwise, there is not much point. new throws an exception on failure.
We had a great discussion this morning about the subject of defensive programming. We had a code review where a pointer was passed in and was not checked if it was valid.
Some people felt that only a check for a null pointer was needed. I questioned whether it could be checked at a higher level, rather than in every method it is passed through, and noted that checking for null is a very limited check if the object at the other end of the pointer does not meet certain requirements.
I understand and agree that a check for null is better than nothing, but it feels to me that checking only for null provides a false sense of security since it is limited in scope. If you want to ensure that the pointer is usable, check for more than the null.
What are your experiences on the subject? How do you write defenses in to your code for parameters that are passed to subordinate methods?
In Code Complete 2, in the chapter on error handling, I was introduced to the idea of barricades. In essence, a barricade is code which rigorously validates all input coming into it. Code inside the barricade can assume that any invalid input has already been dealt with, and that the inputs that are received are good. Inside the barricade, code only needs to worry about invalid data passed to it by other code within the barricade. Asserting conditions and judicious unit testing can increase your confidence in the barricaded code. In this way, you program very defensively at the barricade, but less so inside the barricade. Another way to think about it is that at the barricade, you always handle errors correctly, and inside the barricade you merely assert conditions in your debug build.
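A small sketch of the barricade idea (parsePort and openPort are hypothetical names; std::stoi assumes C++11):
#include <cassert>
#include <stdexcept>
#include <string>

// At the barricade: validate untrusted input rigorously, throwing on bad data.
int parsePort( const std::string& s )
{
    int port = std::stoi( s ); // throws std::invalid_argument on junk input
    if ( port < 1 || port > 65535 )
        throw std::out_of_range( "port out of range" );
    return port;
}

// Inside the barricade: assume the data was already validated, merely assert.
void openPort( int port )
{
    assert( port >= 1 && port <= 65535 );
    // ...
}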
As far as using raw pointers goes, usually the best you can do is assert that the pointer is not null. If you know what is supposed to be in that memory then you could ensure that the contents are consistent in some way. This raises the question of why that memory is not wrapped up in an object which can verify its consistency itself.
So, why are you using a raw pointer in this case? Would it be better to use a reference or a smart pointer? Does the pointer contain numeric data, and if so, would it be better to wrap it up in an object which managed the lifecycle of that pointer?
Answering these questions can help you find a way to be more defensive, in that you'll end up with a design that is easier to defend.
The best way to be defensive is not to check pointers for null at runtime, but to avoid using pointers that may be null to begin with.
If the object being passed in must not be null, use a reference! Or pass it by value! Or use a smart pointer of some sort.
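A sketch of what that buys you, reusing the question's clCanvas (the function names are made up):
void render( clCanvas& canvas );      // cannot be called without an object

void drawFrame( clCanvas* maybeCanvas )
{
    if ( maybeCanvas )
        render( *maybeCanvas );       // the one place the null check happens
}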
The best way to do defensive programming is to catch your errors at compile-time.
If it is considered an error for an object to be null or point to garbage, then you should make those things compile errors.
Ultimately, you have no way of knowing if a pointer points to a valid object. So rather than checking for one specific corner case (which is far less common than the really dangerous ones, pointers pointing to invalid objects), make the error impossible by using a data type that guarantees validity.
I can't think of another mainstream language that allows you to catch as many errors at compile-time as C++ does. Use that capability.
There is no way to check if a pointer is valid.
In all seriousness, it depends on how many bugs you'd like to have inflicted upon you.
Checking for a null pointer is definitely something that I would consider necessary but not sufficient. There are plenty of other solid principles you can use starting with entry points of your code (e.g., input validation = does that pointer point to something useful) and exit points (e.g., you thought the pointer pointed to something useful but it happened to cause your code to throw an exception).
In short, if you assume that everyone calling your code is going to do their best to ruin your life, you'll probably find a lot of the worst culprits.
EDIT for clarity: some other answers are talking about unit tests. I firmly believe that test code is sometimes more valuable than the code that it's testing (depending on who's measuring the value). That said, I also think that units tests are also necessary but not sufficient for defensive coding.
Concrete example: consider a 3rd party search method that is documented to return a collection of values that match your request. Unfortunately, what wasn't clear in the documentation for that method is that the original developer decided that it would be better to return a null rather than an empty collection if nothing matched your request.
So now, you call your defensive and well unit-tested method (which is sadly lacking an internal null pointer check) and boom! A NullPointerException that, without an internal check, you have no way of dealing with:
defensiveMethod(thirdPartySearch("Nothing matches me"));
// You just passed a null to your own code.
I'm a big fan of the "let it crash" school of design. (Disclaimer: I don't work on medical equipment, avionics, or nuclear power-related software.) If your program blows up, you fire up the debugger and figure out why. In contrast, if your program keeps running after illegal parameters have been detected, by the time it crashes you'll probably have no idea what went wrong.
Good code consists of many small functions/methods, and adding a dozen lines of parameter-checking to every one of those snippets of code makes it harder to read and harder to maintain. Keep it simple.
I may be a bit extreme, but I don't like Defensive Programming; I think it's laziness that has introduced the principle.
For this particular example, there is no sense in asserting that the pointer is not null. If you want a non-null pointer, there is no better way to actually enforce it (and document it clearly at the same time) than to use a reference instead. And it's documentation that will actually be enforced by the compiler and costs zilch at runtime!!
In general, I tend not to use 'raw' types directly. Let's illustrate:
void myFunction(std::string const& foo, std::string const& bar);
What are the possible values of foo and bar ? Well that's pretty much limited only by what a std::string may contain... which is pretty vague.
On the other hand:
void myFunction(Foo const& foo, Bar const& bar);
is much better!
if people mistakenly reverse the order of the arguments, it's detected by the compiler
each class is solely responsible for checking that the value is right; the users are not burdened.
I have a tendency to favor Strong Typing. If I have an entry that should be composed only of alphabetical characters and be up to 12 characters, I'd rather create a small class wrapping a std::string, with a simple validate method used internally to check the assignments, and pass that class around instead. This way I know that if I test the validation routine ONCE, I don't have to actually worry about all the paths through which that value can get to me: it will be validated when it reaches me.
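A minimal sketch of such a wrapper (AlphaCode is a made-up name):
#include <cctype>
#include <stdexcept>
#include <string>

class AlphaCode
{
public:
    explicit AlphaCode( const std::string& s ) : FValue( s ) { Validate(); }
    const std::string& Str() const { return FValue; }
private:
    void Validate() const
    {
        if ( FValue.empty() || FValue.size() > 12 )
            throw std::invalid_argument( "AlphaCode: bad length" );
        for ( std::string::size_type i = 0; i < FValue.size(); ++i )
            if ( !std::isalpha( static_cast<unsigned char>( FValue[i] ) ) )
                throw std::invalid_argument( "AlphaCode: non-alphabetic" );
    }
    std::string FValue; // validated once, trusted everywhere after
};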
Of course, that doesn't mean that the code should not be tested. It's just that I favor strong encapsulation, and validation of an input is part of knowledge encapsulation in my opinion.
And as no rule can come without an exception... an exposed interface is necessarily bloated with validation code, because you never know what might come upon you. However, with self-validating objects in your BOM it's quite transparent in general.
"Unit tests verifying the code does what it should do" > "production code trying to verify its not doing what its not supposed to do".
I wouldn't even check for null myself, unless it's part of a published API.
It very much depends; is the method in question ever called by code external to your group, or is it an internal method?
For internal methods, you can test enough to make this a moot point, and if you're building code where the goal is highest possible performance, you might not want to spend the time on checking inputs you're pretty darn sure are right.
For externally visible methods - if you have any - you should always double check your inputs. Always.
From debugging point of view, it is most important that your code is fail-fast. The earlier the code fails, the easier to find the point of failure.
For internal methods, we usually stick to asserts for these kinds of checks. That does get errors picked up in unit tests (you have good test coverage, right?) or at least in integration tests that are running with assertions on.
Checking for a null pointer is only half of the story;
you should also assign a null value to every unassigned pointer.
Most responsible APIs will do the same.
Checking for a null pointer comes very cheap in CPU cycles; having an application crash once it's delivered can cost you and your company money and reputation.
You can skip null pointer checks if the code is in a private interface you have complete control of and/or you check for null by running a unit test or some debug-build test (e.g. assert).
There are a few things at work here in this question which I would like to address:
Coding guidelines should specify that you either deal with a reference or a value directly instead of using pointers. By definition, pointers are value types that just hold an address in memory; the validity of a pointer is platform-specific and can mean many things (range of addressable memory, platform, etc.)
If you find yourself ever needing a pointer for any reason (like for dynamically generated and polymorphic objects) consider using smart pointers. Smart pointers give you many advantages with the semantics of "normal" pointers.
If a type for instance has an "invalid" state then the type itself should provide for this. More specifically, you can implement the NullObject pattern that specifies how an "ill-defined" or "un-initialized" object behaves (maybe by throwing exceptions or by providing no-op member functions).
You can create a smart pointer that does the NullObject default that looks like this:
#include <memory>

template <class Type, class NullTypeDefault>
struct possibly_null_ptr {
    possibly_null_ptr() : p(new NullTypeDefault) {}  // default to the null object
    possibly_null_ptr(Type* p_) : p(p_) {}
    Type * operator->() { return p.get(); }
    ~possibly_null_ptr() {}
private:
    std::shared_ptr<Type> p;
    template <class T, class N>
    friend T & operator*(possibly_null_ptr<T,N>&);   // grant access to p
};

template <class Type, class NullTypeDefault>
Type & operator*(possibly_null_ptr<Type,NullTypeDefault> & p) {
    return *p.p;
}
Then use the possibly_null_ptr<> template in cases where you support possibly null pointers to types that have a default derived "null behavior". This makes it explicit in the design that there is an acceptable behavior for "null objects", and this makes your defensive practice documented in the code -- and more concrete -- than a general guideline or practice.
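A hypothetical usage sketch, with a logger whose null default silently ignores messages:
struct Logger
{
    virtual void Log( const char* ) = 0;
    virtual ~Logger() {}
};
struct NullLogger : Logger
{
    void Log( const char* ) {} // the default "null behavior": do nothing
};

possibly_null_ptr<Logger, NullLogger> log_ptr; // no real logger supplied
log_ptr->Log( "safe to call even when nothing was configured" );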
Pointers should only be used if you need to do something with the pointer itself, such as pointer arithmetic to traverse some data structure. And if possible, that should be encapsulated in a class.
If the pointer is passed into the function to do something with the object to which it points, then pass in a reference instead.
One method for defensive programming is to assert almost everything that you can. At the beginning of the project it is annoying but later it is a good adjunct to unit testing.
A number of answers address the question of how to write defenses into your code, but not much was said about "how defensive should you be?". That's something you have to evaluate based on the criticality of your software components.
We're doing flight software and the impacts of a software error range from a minor annoyance to loss of aircraft/crew. We categorize different pieces of software based on their potential adverse impacts which affects coding standards, testing, etc. You need to evaluate how your software will be used and the impacts of errors and set what level of defensiveness you want (and can afford). The DO-178B standard calls this "Design Assurance Level".
Most people use pointers like this...
if ( p != NULL ) {
DoWhateverWithP();
}
However, if the pointer is null for whatever reason, the function won't be called.
My question is, could it possibly be more beneficial to just not check for NULL? Obviously on safety critical systems this isn't an option, but your program crashing in a blaze of glory is more obvious than a function not being called if the program can still run without it.
In relation to the first question, do you always check for NULL before you use pointers?
Secondly, consider you have a function that takes a pointer as an argument, and you use this function multiple times on multiple pointers throughout your program. Do you find it more beneficial to test for NULL in the function (the benefit being you don't have to test for NULL all over the place), or on the pointer before calling the function (the benefit being no overhead from calling the function)?
You are right in thinking that NULL pointers often result in immediate crashes, but do not forget that if you are indexing into a large array through a NULL pointer, you might indeed get a valid memory address if your index is high enough. And then, you'll get memory corruption or incorrect memory reads, which will be much harder to locate.
Whenever I can assume that calling a function with NULL is a bug, which should never happen in production code, I prefer using ASSERT guards in the function, which are only compiled into real code in a debug build, and not checking for NULL otherwise.
And from my point of view, generally, a function should check its arguments, not the caller. You should always assume that your callers might have been a bit sloppy about the checking, or that they might contain bugs...
The moral: check for NULL in the function being called, either through some if() statement that throws, or using some ASSERT construct (possibly with a clear message of why this happened). Also check for NULL in the callers, but only if the callers know that this condition might happen in a normal program execution, and act accordingly.
When it's acceptable for the program to just crash if a NULL pointer comes up, I'm partial to:
assert(p);
DoWhateverWithP();
This will only check the pointer in debug builds, since defining NDEBUG turns assert() into a no-op at the preprocessor level. It documents your assumption and assists with debugging but has zero performance impact on the released binary (though, to be fair, checking for a NULL pointer should have effectively zero impact on performance in the vast majority of circumstances).
As a side benefit, this is legal for C as well as C++ and, in the latter case, doesn't require exceptions to be enabled in your compiler/runtime.
Concerning your second question, I prefer to put the assertions at the beginning of the subroutine. Again, the beauty of assert() is the fact that there's really no 'overhead' to speak of. As such, there's nothing to weigh against the benefits of only requiring one assertion in the subroutine definition.
Of course, the caveat is that you never want to assert an expression with side-effects:
assert(p = malloc(1)); // NEVER DO THIS!
DoSomethingWithP(); // If NDEBUG was defined, malloc() was never called!
Don't make it a rule to just check for null and do nothing if you find it.
If the pointer is allowed to be null, then you have to think about what your code does in the case that it actually is null. Usually, just doing nothing is the wrong answer. With care it's possible to define APIs which work like that, but this requires more than just scattering a few NULL checks about the place.
So, if the pointer is allowed to be null, then you must check for null, and you must do whatever is appropriate.
If the pointer is not allowed to be null, then it's perfectly reasonable to write code which invokes undefined behaviour if it is null. It's no different from writing string-handling routines which invoke undefined behaviour if the input is not NUL-terminated, or writing buffer-using routines which invoke undefined behaviour if the caller passes in the wrong value for the length, or writing a function that takes a FILE* parameter, and invokes undefined behaviour if the user passes in a file descriptor reinterpret_cast to FILE*. In C and C++, you simply have to be able to rely on what your caller tells you. Garbage in, garbage out.
However, you might like to write code which helps out your caller (who is probably you, after all) when the most likely kinds of garbage are passed in. Asserts and exceptions are good for this.
Taking up the analogy from Franci's comment on the question: most people do not look for cars when crossing a footpath, or before sitting down on their sofa. They could still be hit by a car. It happens. But it would generally be considered paranoid to spend any effort checking for cars in those circumstances, or for the instructions on a can of soup to say "first, check for cars in your kitchen. Then, heat the soup".
The same goes for your code. It's much easier to pass an invalid value to a function than it is to accidentally drive your car into someone's kitchen. But it's still the fault of the driver if they do so and hit someone, not a failure of the cook to exercise due care. You don't necessarily want cooks (or callees) to clutter up their recipes (code) with checks that ought to be redundant.
There are other ways to find problems, such as unit tests and debuggers. In any case it is much safer to create a car-free environment except where necessary (roads), than it is to drive cars willy-nilly all over the place and hope everybody can cope with them at all times. So, if you do check for null in cases where it isn't allowed, you shouldn't let this give people the idea that it is allowed after all.
[Edit - I literally just hit an example of a bug where checking for null would not find an invalid pointer. I'm going to use a map to hold some objects. I will be using pointers to those objects (to represent a graph), which is fine because map never relocates its contents. But I haven't defined an ordering for the objects yet (and it's going to be a bit tricky to do so). So, to get things moving and prove that some other code works, I used a vector and a linear search instead of a map. That's right, I didn't mean vector, I meant deque. So after the first time the vector resized, I wasn't passing null pointers into functions, but I was passing pointers to memory which had been freed.
I make dumb errors which pass invalid garbage approximately as often as I make dumb errors which pass null pointers invalidly. So regardless of whether I add checking for null, I still need to be able to diagnose problems where the program just crashes for reasons I can't check. Since this will also diagnose null pointer accesses, I usually don't bother checking for null unless I'm writing code to generally check the preconditions on entry to the function. In that case it should if possible do a lot more than just check null.]
I prefer this style:
if (p == NULL) {
// throw some exception here
}
DoWhateverWithP();
This means that whatever function this code lives in will fail quickly in the event that p is NULL. You are correct that if p is NULL there is no way that DoWhateverWithP can execute, but using a null pointer or simply not executing the function are both unacceptable ways to handle the fact that p is NULL.
The important thing to remember is to exit early and fail fast - this kind of approach yields code that is easier to debug.
In addition to the other answers, it depends upon what NULL signifies. For example, this code is perfectly OK, and is pretty idiomatic:
while (fgets(buf, sizeof buf, fp) != NULL) {
process(buf);
}
Here, NULL value indicates not only error, but end-of-file condition as well. Similarly, strtok() returns NULL to say, "there are no more tokens" (although one should avoid strtok() to begin with, but I digress). In cases like this, it is perfectly OK to call a function if the returned pointer is not NULL, and do nothing otherwise.
Edit: another example, closer to what was asked:
const char *data = "this;is;a;test;";
const char *curr = data;
const char *p;
while ((p = strchr(curr, ';')) != NULL) {
/* process data in [curr, p) */
process(curr, p);
curr = p + 1;
}
Once again, NULL here is an indication from strchr() that it couldn't find a ;, and that we should stop processing the data further.
Having said that, if NULL is not used as an indication, then it depends:
If the pointer can't be NULL at this point in code, it's useful to have an assert(p != NULL); when developing, and also having a fprintf(stderr, "Can't happen\n"); or equivalent statement, and then take whatever action as appropriate (abort() or similar is probably the only sane choice at this point).
If the pointer can be NULL, and it's not critical, it might be better to just bypass the usage of the null pointer. Suppose you're trying to allocate memory for writing a log message, and malloc() fails. You shouldn't abort the program because of this. If malloc() succeeds, you want to call a function (sprintf()/whatever) to fill the buffer.
If the pointer can be NULL, and it's critical. In this case, you probably want to fail, and hopefully such conditions don't happen too often.
Secondly, consider you have a function that takes a pointer as an argument, and you use this function multiple times on multiple pointers throughout your program. Do you find it more beneficial to test for NULL in the function (the benefit being you don't have to test for NULL all over the place), or on the pointer before calling the function (the benefit being no overhead from calling the function)?
This depends upon a lot of factors. If I can be sure, sometimes or most of the time, that the pointer passed to a function cannot be NULL, the extra check in the function is wasteful. If the pointer passed comes out of a lot of places, and it's tricky to put in a check everywhere, sure, then the check is good to have in the function itself.
The standard library functions, for the most part, don't check for NULL: str*, mem* functions for example. An exception is free(), it does check for NULL.
A comment about assert: assert is a no-op if NDEBUG is defined, so one should not use it for debugging—its only use is during development to catch programming errors. Also, in C89, assert takes an int, so assert(p != NULL) is better in such cases than a just plain assert(p).
This non-NULLness check can be avoided by using references instead of pointers. This way, the compiler ensures the parameter passed is not NULL. For example:
void f(Param& param)
{
// "param" is a pointer that is guaranteed not to be NULL
}
In this case, it is up to the client to do the checking. However, mostly the client situation will be like this:
Param instance;
f(instance);
No non-NULLness checking is needed.
When using with objects allocated on the heap, you can do the following:
Param& instance = *new Param(); // note: someone must eventually delete this object
f(instance);
Update: As user Crashworks remarks, it is still possible to make you program crash. However, when using references, it is the responsibility of the client to pass a valid reference, and as I show in the example, this is very easy to do.
How about: a comment clarifying the intent? If the intent is "this can't happen", then perhaps an assert would be the right thing to do instead of the if statement.
On the other hand, if a null value is normal, perhaps an "else comment" explaining why we can skip the "then" step would be in order. Steve McConnell has a good section in "Code Complete" about if/else statements, and how a missing else is a very common error (distracted? forgot it?).
Consequently, I usually put a comment in for a "no-op else", unless it is something of the form "if done, return/break".
When you check for NULL, it is not a good idea to just skip the function call. You should have an else-part that does something meaningful in case of NULL, for example throws an error or returns an error code to the upper level.
On the other hand, NULL is not always an error. It is often used to indicate for example that end of data has been reached. In such case, you will have to handle the situation as normal program flow.
Well, the answer to the first question is: you are talking about an ideal situation; most of the code that I see which uses if ( p != NULL ) is legacy. Also suppose you want to return an evaluator, and then call the evaluator with the data, but there is no evaluator for that data; it makes logical sense to return NULL and check for NULL before calling the evaluator.
The answer to the second question is: it depends on the situation. For example, delete checks for a NULL pointer, whereas lots of other functions don't. If you test the pointer inside the function, then you might have to test it in lots of functions, like:
ABC(p);
a = DEF(p);
d = GHI(a);
JKL(p, d);
but this code would be much better:
if(p)
{
ABC(p);
a = DEF(p);
d = GHI(a);
JKL(p, d);
}
Could it possibly be more beneficial to just not check for NULL?
I wouldn't do it, I favor assertions on the frontline and some form of recovery in the body past that. What would assertions not provide to you, that not checking for null would? Similar effect, with easier interpretation and a formal acknowledgement.
In relation to the first question, do you always check for NULL before you use pointers?
It really depends on the code and the time available, but I am irritatingly good at it; a good chunk of 'implementation' in my programs consists of what a program should not do, rather than the usual 'what it should do'.
Secondly, consider you have a function that takes a pointer as an argument...
I test it in the function, as the function is (hopefully) the program that is reused more frequently. I also tend to test it before making the call; without that test, the error loses localization (useful for reporting and isolation).
I think this is the form I've seen more of; this way you don't proceed if you know it's going to blow up anyway.
if (NULL == p)
{
goto FunctionExit; // or some other common label to exit the function.
}
I think it is better to check for null. Although, you can cut down on the amount of checks you need to make.
For most cases I prefer a simple guard clause at the top of a function:
if (p == NULL) return;
That said, I typically only put the check on functions that are publicly exposed.
However, when the null pointer is unexpected I will throw an exception. (There are some functions it doesn't make any sense to call with null, and the consumer should be responsible enough to use them right.)
Constructor initialization can be used as an alternative to checking for null all the time. This is especially useful when the class contains a collection. The collection can be used throughout the class without checking whether it has been initialized.
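A short sketch of what I mean (MessageLog is a made-up example):
#include <string>
#include <vector>

class MessageLog
{
public:
    MessageLog() : FMessages() {} // the collection always exists, possibly empty
    void Add( const std::string& M ) { FMessages.push_back( M ); }
    std::vector<std::string>::size_type Count() const
    {
        return FMessages.size(); // no null check ever needed
    }
private:
    std::vector<std::string> FMessages;
};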
Dereferencing a null pointer is undefined behavior. If you want to crash if the pointer is null, use an assert or something similar (and, depending on the defined behavior of your class, that can be a perfectly valid response - it's certainly better than continuing to run when people may be expecting something to have been done!).
Since the behavior of dereferencing a null pointer is undefined, it can do anything. Crash, corrupt memory, create a wormhole to an alternate dimension allowing the Elder Gods to come forth and devour all of mankind... anything. While bugs happen, depending upon undefined behavior is, by definition, a bug. So don't do it deliberately.
I know most people think of this as a bad practice, but when you are trying to make your class's public interface work only with references, keeping pointers inside and using them only when necessary, there seems to be no way to return something telling the caller that the value they are looking for doesn't exist in the container.
class list {
public:
value &get(type key);
};
Assuming you don't want dangerous pointers showing up in the public interface of the class, how do you report "not found" in this case? By throwing an exception?
What is your approach to that? Do you return an empty value and check for the empty state of it? I actually use the throw approach but I introduce a checking method:
class list {
public:
bool exists(type key);
value &get(type key);
};
So when I forget to check that the value exists first, I get an exception, which really is an exception.
How would you do it?
The STL deals with this situation by using iterators. For example, the std::map class has a similar function:
iterator find( const key_type& key );
If the key isn't found, it returns 'end()'. You may want to use this iterator approach, or to use some sort of wrapper for your return value.
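For example, with std::map:
#include <map>
#include <string>

std::map<std::string, int> table;
std::map<std::string, int>::iterator it = table.find( "key" );
if ( it != table.end() )
    it->second += 1; // found: safe to use through the iterator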
The correct answer (according to Alexandrescu) is:
Optional and Enforce
First of all, do use the accessor, but in a safer way and without reinventing the wheel:
boost::optional<X> get_X_if_possible();
Then create an enforce helper:
#include <boost/optional.hpp>
#include <stdexcept>

// note: the default template argument requires C++11
template <class T, class E = std::runtime_error>
T& enforce(boost::optional<T>& opt, E e = E("enforce failed"))
{
    if(!opt)
    {
        throw e;
    }
    return *opt;
}
// and an overload for boost::optional<T> const &
This way, depending on what the absence of the value might mean, you either check explicitly:
if(boost::optional<X> maybe_x = get_X_if_possible())
{
X& x = *maybe_x;
// use x
}
else
{
oops("Hey, we got no x again!");
}
or let enforce do the check (the optional must be named, since a reference into a temporary would dangle):
boost::optional<X> maybe_x = get_X_if_possible();
X& x = enforce(maybe_x);
// use x
You use the first way when you’re concerned about efficiency, or when you want to handle the failure right where it occurs. The second way is for all other cases.
The problem with exists() is that you'll end up searching twice for things that do exist (first check if it's in there, then find it again). This is inefficient, particularly if (as its name of "list" suggests) your container is one where searching is O(n).
Sure, you could do some internal caching to avoid the double search, but then your implementation gets messier, your class becomes less general (since you've optimised for a particular case), and it probably won't be exception-safe or thread-safe.
Don't use an exception in such a case. C++ has a nontrivial performance overhead for such exceptions, even if no exception is thrown, and it additionally makes reasoning about the code much harder (cf. exception safety).
Best-practice in C++ is one of the two following ways. Both get used in the STL:
As Martin pointed out, return an iterator. Actually, your iterator can well be a typedef for a simple pointer; there's nothing speaking against it. In fact, since this is consistent with the STL, you could even argue that this way is superior to returning a reference.
Return a std::pair<bool, yourvalue>. This makes it impossible to modify the value, though, since a copy constructor of the pair is called, and pairs don't work with reference members.
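A sketch of that second style (the function and values are made up):
#include <utility>

std::pair<bool, int> tryGet( int key )
{
    if ( key == 42 )
        return std::make_pair( true, 1234 ); // found
    return std::make_pair( false, 0 );       // not found; the value is a dummy
}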
/EDIT:
This answer has spawned quite some controversy, visible from the comments and not so visible from the many downvotes it got. I've found this rather surprising.
This answer was never meant as the ultimate point of reference. The "correct" answer had already been given by Martin: exceptions reflect the behaviour in this case rather poorly. It's semantically more meaningful to use some other signalling mechanism than exceptions.
Fine. I completely endorse this view. No need to mention it once again. Instead, I wanted to give an additional facet to the answers. While minor speed boosts should never be the first rationale for any decision-making, they can provide further arguments and in some (few) cases, they may even be crucial.
Actually, I've mentioned two facets: performance and exception safety. I believe the latter to be rather uncontroversial. While it's extremely hard to give strong exception guarantees (the strongest, of course, being "nothrow"), I believe it's essential: any code that is guaranteed to not throw exceptions makes the whole program easier to reason about. Many C++ experts emphasize this (e.g. Scott Meyers in item 29 of "Effective C++").
About speed. Martin York has pointed out that this no longer applies in modern compilers. I respectfully disagree. The C++ language makes it necessary for the environment to keep track, at runtime, of code paths that may be unwound in the case of an exception. Now, this overhead isn't really all that big (and it's quite easy to verify this). “nontrivial” in my above text may have been too strong.
However, I find it important to draw the distinction between languages like C++ and many modern, "managed" languages like C#. The latter has no additional overhead as long as no exception is thrown, because the information necessary to unwind the stack is kept anyway. By and large, I stand by my choice of words.
STL Iterators?
The "iterator" idea proposed before me is interesting, but the real point of iterators is navigation through a container. Not as an simple accessor.
If you're accessor is one among many, then iterators are the way to go, because you will be able to use them to move in the container. But if your accessor is a simple getter, able to return either the value or the fact there is no value, then your iterator is perhaps only a glorified pointer...
Which leads us to...
Smart pointers?
The point of smart pointers is to simplify pointer ownership. With a shared pointer, you'll get a resource (memory) which will be shared, at the cost of an overhead (shared pointers need to allocate an integer as a reference counter...).
You have to choose: either your Value is already inside a shared pointer, and then you can return this shared pointer (or a weak pointer), or your Value is behind a raw pointer, and then you can return the raw pointer. You don't want to return a shared pointer if your resource is not already inside one: a world of funny things will happen when your shared pointer goes out of scope and deletes your Value without telling you...
:-p
Pointers?
If your interface is clear about its ownership of its resources, and about the fact that the returned value can be NULL, then you could return a simple raw pointer. If the user of your code is dumb enough to ignore the interface contract of your object, or to play arithmetic games or whatever with your pointer, then he/she will be dumb enough to break any other way you choose to return the value, so don't bother with the mentally challenged...
Undefined Value
Unless your Value type really already has some kind of "undefined" value, and the user knows that and will accept to handle it, this is a possible solution, similar to the pointer or iterator solution.
But do not add an "undefined" value to your Value class because of the problem you asked about: you'll end up raising the "references vs. pointers" war to another level of insanity. Code users want the objects you give them either to be OK, or to not exist. Having to test on every other line of code whether this object is still valid is a pain, and will needlessly complicate the user code, through your fault.
Exceptions
Exceptions are usually not as costly as some people would like them to be. But for a simple accessor, the cost could be nontrivial, if your accessor is used often.
For example, the STL std::vector has two accessors to its value through an index:
T & std::vector::operator[]( /* index */ )
and:
T & std::vector::at( /* index */ )
The difference is that [] is non-throwing, so if you access outside the range of the vector, you're on your own, probably risking memory corruption and a crash sooner or later. So you should really be sure you have verified the code using it.
On the other hand, at is throwing. This means that if you access outside the range of the vector, then you'll get a clean exception. This method is better if you want to delegate to another code the processing of an error.
Personally, I use [] when I'm accessing the values inside a loop or something similar. I use at when I feel an exception is the right way to tell the current code (or the calling code) that something went wrong.
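For example:
#include <stdexcept>
#include <vector>

std::vector<int> v( 3 );
int a = v[1];           // unchecked: out of range here is undefined behaviour
try
{
    int b = v.at( 10 ); // checked: throws std::out_of_range
}
catch ( const std::out_of_range& )
{
    // delegate the error processing to this handler
}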
So what?
In your case, you must choose:
If you really need lightning-fast access, then the throwing accessor could be a problem. But this means you have already used a profiler on your code to determine that this is a bottleneck, haven't you?
;-)
If you know that not having a value can happen often, and/or you want your client to propagate a possible null/invalid/whatever semantic pointer to the value accessed, then return a pointer (if your value is inside a simple pointer) or a weak/shared pointer (if your value is owned by a shared pointer).
But if you believe the client won't propagate this "null" value, or that they should not propagate a NULL pointer (or smart pointer) in their code, then use the reference protected by the exception. Add a "hasValue" method returning a boolean, and throw should the user try to get the value even if there is none.
Last but not least, consider the code that will be used by the user of your object:
// If you want your user to have this kind of code, then choose either
// pointer or smart pointer solution
void doSomething(MyClass & p_oMyClass)
{
MyValue * pValue = p_oMyClass.getValue() ;
if(pValue != NULL)
{
// Etc.
}
}
MyValue * doSomethingElseAndReturnValue(MyClass & p_oMyClass)
{
MyValue * pValue = p_oMyClass.getValue() ;
if(pValue != NULL)
{
// Etc.
}
return pValue ;
}
// ==========================================================
// If you want your user to have this kind of code, then choose the
// throwing reference solution
void doSomething(MyClass & p_oMyClass)
{
if(p_oMyClass.hasValue())
{
MyValue & oValue = p_oMyClass.getValue() ;
}
}
So, if your main problem is choosing between the two user codes above, your problem is not about performance, but "code ergonomy". Thus, the exception solution should not be put aside because of potential performance issues.
:-)
Accessor?
The "iterator" idea proposed before me is interesting, but the real point of iterators is navigation through a container. Not as an simple accessor.
I agree with paercebal: an iterator is for iterating. I don't like the way the STL does it. But the idea of an accessor seems more appealing. So what do we need? A container-like class that feels like a boolean for testing but behaves like the original return type. That would be feasible with cast operators.
#include <stdexcept>

template <typename T>
class Accessor {
public:
    Accessor(): _value(NULL)
    {}
    Accessor(T &value): _value(&value)
    {}
    operator T &() const
    {
        if (!_value)
            throw std::runtime_error("that is a problem and you made a mistake somewhere.");
        else
            return *_value;
    }
    operator bool () const
    {
        return _value != NULL;
    }
private:
    T *_value;
};
Now, any foreseeable problem? An example usage:
Accessor <type> value = list.get(key);
if (value) {
type &v = value;
v.doSomething();
}
How about returning a shared_ptr as the result. This can be null if the item wasn't found. It works like a pointer, but it will take care of releasing the object for you.
(I realize this is not always the right answer, and my tone a bit strong, but you should consider this question before deciding for other more complex alternatives):
So, what's wrong with returning a pointer?
I've seen this many times in SQL, where people will try their earnest to never deal with NULL columns, as if they had some contagious disease or something. Instead, they cleverly come up with a "blank" or "not-there" artificial value like -1, 9999 or even something like '#X-EMPTY-X#'.
My answer: the language already has a construct for "not there"; go ahead, don't be afraid to use it.
What I prefer doing in situations like this is having a throwing "get", and for those circumstances where performance matters or failure is common, a "tryGet" function along the lines of bool tryGet(type key, value **pp), whose contract is that if true is returned then *pp is a valid pointer to some object, else *pp is null.
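As a sketch, reusing the question's placeholder names list, type and value:
class list {
public:
    value & get( type key );             // throws when the key is absent
    bool tryGet( type key, value **pp ); // returns false and sets *pp to null
};

// given a list l and a type key:
value *pv = NULL;
if ( l.tryGet( key, &pv ) )
    pv->doSomething(); // the cheap path: no exception machinery involved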
#aradtke, you said.
I agree with paercebal: an iterator is for iterating. I don't like the way the STL does it. But the idea of an accessor seems more appealing. So what do we need? A container-like class that feels like a boolean for testing but behaves like the original return type. That would be feasible with cast operators. [..] Now, any foreseeable problem?
First, YOU DO NOT WANT OPERATOR bool. See Safe Bool idiom for more info. But about your question...
Here's the problem: users now need to explicitly cast in some cases. Pointer-like proxies (such as iterators, ref-counted pointers, and raw pointers) have a concise "get" syntax. Providing a conversion operator is not very useful if callers have to invoke it with extra code.
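(For reference, a minimal pre-C++11 safe-bool sketch; the conversion target is a member-function pointer, which supports testing but forbids arithmetic and accidental int conversions. SafeBoolDemo is a hypothetical stand-in for Accessor.)
class SafeBoolDemo
{
    typedef void (SafeBoolDemo::*bool_type)() const;
    void comparisons_not_supported() const {}
public:
    explicit SafeBoolDemo( bool valid ) : FValid( valid ) {}
    operator bool_type() const
    {
        return FValid ? &SafeBoolDemo::comparisons_not_supported : 0;
    }
private:
    bool FValid;
};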
Starting with your reference-like example, the most concise way to write it:
// 'reference' style, check before use
if (Accessor<type> value = list.get(key)) {
type &v = value;
v.doSomething();
}
// or
if (Accessor<type> value = list.get(key)) {
static_cast<type&>(value).doSomething();
}
This is okay, don't get me wrong, but it's more verbose than it has to be. Now consider if we know, for some reason, that list.get will succeed. Then:
// 'reference' style, skip check
type &v = list.get(key);
v.doSomething();
// or
static_cast<type&>(list.get(key)).doSomething();
Now lets go back to iterator/pointer behavior:
// 'pointer' style, check before use
if (Accessor<type> value = list.get(key)) {
value->doSomething();
}
// 'pointer' style, skip check
list.get(key)->doSomething();
Both are pretty good, but pointer/iterator syntax is just a bit shorter. You could give 'reference' style a member function 'get()'... but that's already what operator*() and operator->() are for.
The 'pointer' style Accessor now has operator 'unspecified bool', operator*, and operator->.
And guess what... raw pointer meets these requirements, so for prototyping, list.get() returns T* instead of Accessor. Then when the design of list is stable, you can come back and write the Accessor, a pointer-like Proxy type.
Interesting question. It's a problem in C++ to use references exclusively, I guess; in Java references are more flexible and can be null. I can't remember if it's legal C++ to force a null reference:
MyType *pObj = nullptr;
return *pObj; // undefined behaviour: dereferences a null pointer
But I consider this dangerous. Again in Java I'd throw an exception as this is common there, but I rarely see exceptions used so freely in C++.
If I was making a public API for a reusable C++ component and had to return a reference, I guess I'd go the exception route.
My real preference is to have the API return a pointer; I consider pointers an integral part of C++.