Why no sanity checks in legacy strcpy() - C++

The following is the most popular implementation of strcpy in traditional systems. Why are dest and src not checked for NULL at the start? I heard once that in the old days memory was limited, so short code was always preferred. Would you implement strcpy and other similar functions with NULL pointer checks at the start nowadays? Why or why not?
char *strcpy(char *dest, const char *src)
{
    char *save = dest;
    while (*dest++ = *src++);
    return save;
}

NULL is a bad pointer, but so is (char*)0x1. Should it also check for that? In my opinion (I don't know the definitive reason why), sanity checks in such a low-level operation are uncalled for. strcpy() is so fundamental that it should be treated something like an asm instruction, and you should do your own sanity checks in the caller if needed. Just my 2 cents :)

There are no sanity checks because one of the most important underlying ideologies of C is that the developer supplies the sanity. When you assume that the developer is sane, you end up with a language that can be used to do just about anything, anywhere.
This is not an explicitly stated goal — it's quite possible for someone to come up with an implementation that does check for this, and more. Maybe they have. But I doubt that many people used to C would clamour to use it, since they'd need to put the checks in anyway if there was any chance that their code would be ported to a more usual implementation.

The whole C language is written with the motto "We'll behave correctly provided the programmer knows what he's doing." The programmer is expected to know to make all the checks he needs to make. It's not just checking for NULL, it's ensuring that dest points to enough allocated memory to hold src, it's checking the return value of fopen to make sure the file really did open successfully, knowing when memcpy is safe and when memmove is required, and so on.
Getting strcpy to check for NULL won't change the language paradigm. You will still need to ensure that dest points to enough space -- and this is something that strcpy can't check for without changing the interface. You will also need to ensure that src is '\0'-terminated, which again strcpy can't possibly check.
There are some C standard library functions which do check for NULL: for example, free(NULL) is always safe. But in general, C expects you to know what you're doing.
[C++ generally eschews the <cstring> library in favour of std::string and friends.]

It's usually better for the library to let the caller decide what it wants the failure semantics to be. What would you have strcpy do if either argument is NULL? Silently do nothing? Fail an assert (which isn't an option in non-debug builds)?
It's easier to opt-in than it is to opt-out. It's trivial to write your own wrapper around strcpy that validates the inputs and to use that instead. If, however, the library did this itself, you would have no way of choosing not to perform those checks short of re-implementing strcpy. (For example, you might already know that the arguments you pass to strcpy aren't NULL, and it might be something you care about if you're calling it in a tight loop or are concerned about minimizing power usage.) In general, it's better to err on the side of granting more freedom (even if that freedom comes with additional responsibility).
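As a minimal sketch of such an opt-in wrapper (the name checked_strcpy and the assert-on-NULL policy are just one possible choice, not a standard API):
#include <cassert>
#include <cstring>

// Hypothetical opt-in wrapper: validate, then defer to the standard strcpy.
// Callers that already know their pointers are non-NULL can keep calling
// strcpy directly and pay nothing.
char *checked_strcpy(char *dest, const char *src)
{
    assert(dest != nullptr && "checked_strcpy: dest is NULL");
    assert(src != nullptr && "checked_strcpy: src is NULL");
    return std::strcpy(dest, src);
}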

The most likely reason is: Because strcpy is not specified to work with NULL inputs (i.e. its behaviour in this case is undefined).
So, what should a library implementer choose to do if a NULL is passed in? I would argue that the best thing to do is to let the application crash. Think of it this way: A crash is a fairly obvious sign that something has gone wrong... silently ignoring a NULL input, on the other hand, may mask a bug that will be much harder to detect.

NULL checks were not implemented because C's earliest targets supported strong memory protections. When a process attempted to read from or write to NULL, the memory controller would signal the CPU that an out-of-range memory access was attempted (segmentation violation), and the kernel would kill the offending process.
This was an alright answer, because code attempting to read from or write to a NULL pointer is broken; the only answer is to re-write the code to check return values from malloc(3) and friends and take corrective action. By the time you're trying to use pointers to unallocated memory, it is too late to make a correct decision about how to fix the situation.

You should think of the C standard library functions as the thinnest possible additional layer of abstraction above the assembly code that you don't want to churn out to get your stuff over the door. Everything beyond that, like error checking, is your responsibility.

In my view, any function you would want to define has a precondition and a postcondition.
Taking care of the preconditions should never be part of the function itself. The following is a precondition for using strcpy, taken from the man page.
The strcpy() function copies the string pointed to by src (including the terminating '\0' character) to the array pointed to by dest. The strings may not overlap, and the destination string dest must be large enough to receive the copy.
Now if the precondition is not met, the behaviour is undefined.
Would I include a NULL check in my strcpy now? I would rather have a separate safe_strcpy. Giving safety the priority, I would definitely include NULL checks and handle overflow conditions, and my precondition gets modified accordingly.
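A rough sketch of what such a safe_strcpy might look like, assuming the modified precondition is "dest points to at least dest_size bytes" and that violations are reported with a return value rather than left undefined (the name and signature are illustrative, not a standard API):
#include <cstddef>
#include <cstring>

// Hypothetical safe_strcpy: returns false instead of invoking undefined
// behaviour when the precondition is not met.
bool safe_strcpy(char *dest, std::size_t dest_size, const char *src)
{
    if (dest == nullptr || src == nullptr || dest_size == 0)
        return false;                       // NULL / empty-buffer check
    std::size_t len = std::strlen(src);
    if (len + 1 > dest_size)
        return false;                       // copy would overflow dest
    std::memcpy(dest, src, len + 1);        // copy including the '\0'
    return true;
}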

There is simply no error semantic defined for it. In particular there is no way for strcpy to return an error value. C99 simply states:
The strcpy function returns the value of s1.
So for a conforming implementation there wouldn't even be a possibility to return the information that something went wrong. So why bother with it?
All this is deliberate, I think, since strcpy is replaced by most compilers directly with very efficient assembler. Error checks are up to the caller.

Related

In either C or C++, should I check pointer parameters against NULL/nullptr?

This question was inspired by this answer.
I've always been of the philosophy that the callee is never responsible when the caller does something stupid, like passing of invalid parameters. I have arrived at this conclusion for several reasons, but perhaps the most important one comes from this article:
Everything not defined is undefined.
If a function doesn't say in its docs that it's valid to pass nullptr, then you damn well better not be passing nullptr to that function. I don't think it's the responsibility of the callee to deal with such things.
However, I know there are going to be some who disagree with me. I'm curious whether or not I should be checking for these things, and why.
If you're going to check for NULL pointer arguments where you have not entered into a contract to accept and interpret them, do it with an assert, not a conditional error return. This way the bugs in the caller will be immediately detected and can be fixed, and it makes it easy to disable the overhead in production builds. I question the value of the assert except as documentation however; a segfault from dereferencing the NULL pointer is just as effective for debugging.
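As a sketch of the contrast between the two approaches (function names are invented for illustration):
#include <cassert>
#include <cstddef>
#include <cstring>

// Assert-based: a buggy caller is caught immediately in debug builds,
// and the check compiles away entirely when NDEBUG is defined.
std::size_t name_length(const char *name)
{
    assert(name != nullptr);        // not part of the contract, just a trap
    return std::strlen(name);
}

// Error-return-based: a caller buggy enough to pass NULL will most likely
// also ignore the sentinel value, so the failure surfaces much later.
std::size_t name_length_lenient(const char *name)
{
    if (name == nullptr)
        return 0;                   // silently masks the caller's bug
    return std::strlen(name);
}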
If you return an error code to a caller which has already proven itself buggy, the most likely result is that the caller will ignore the error, and bad things will happen much later down the line when the original cause of the error has become difficult or impossible to track down. Why is it reasonable to assume the caller will ignore the error you return? Because the caller already ignored the error return of malloc or fopen or some other library-specific allocation function which returned NULL to indicate an error!
In C++, if you don't want to accept NULL pointers, then don't take the chance: accept a reference instead.
While in general I don't see the value in detecting NULL (why NULL and not some other invalid address?), for a public API I'd probably still do it simply because many C and C++ programmers expect such behavior.
Defense in Depth principle says yes. If this is an external API then totally essential. Otherwise, at least an assert to assist in debugging misuse of your API.
You can document the contract until you are blue in the face, but you cannot in callee code prevent ill-advised or malicious misuse of your function. The decision you have to make is what's the likely cost of misuse.
In my view, it's not a question of responsibility. It's a question of robustness.
Unless I have full control on the caller and I must optimize for even the minute speed improvement, I always check for NULL.
I lean heavily on the side of 'don't trust your user's input to not blow up your system' and in defensive programming in general. Since I have made APIs in a past life, I have seen users of the libraries pass in null pointers and then application crashes result.
If it is truly an internal library and I'm the only person (or only a select few) have the ability to use it, then I might ease up on null pointer checks as long as everyone agrees to abide by general contracts. I can't trust the user base at large to adhere to that.
The answer is going to be different for C and C++.
C++ has references. The only difference between passing a pointer and passing a reference is that the pointer can be null. So, if the writer of the called function expects a pointer argument and forgets to do something sane when it's null, he's silly, crazy or writing C-with-classes.
Either way, this is not a matter of who wears the responsibility hat. In order to write good software, the two programmers must co-operate, and it is the responsibility of all programmers to (1) avoid special cases that would require this kind of decision and (2) when that fails, write code that blows up in a non-ambiguous and documented way in order to help with debugging.
So, sure, you can point and laugh at the caller because he messed up and "everything not defined is undefined" and had to spend one hour debugging a simple null pointer bug, but your team wasted some precious time on that.
My philosophy is: Your users should be allowed to make mistakes, your programming team should not.
What this means is that the only place you should check for invalid parameters including NULL, is in the top-level user interface. Everywhere the user can provide input to your code, you should check for errors, and handle them as gracefully as possible.
Everywhere else, you should use ASSERTS to ensure the programmers are using the functions correctly.
If you are writing an API, then only the top-level functions should catch and handle bad input. It is pointless to keep checking for a NULL pointer three or four levels deep into your call stack.
I am pro defensive programming.
Unless you can show by profiling that these nullptr checks happen in a bottleneck of your application (in which case it is conceivable that you should not do those pointer tests at those points), comparing a pointer with 0 is a really cheap operation.
I think it is a shame to leave potential crash bugs in rather than spend so little CPU.
So: test your pointers against NULL!
I think that you should strive to write code that is robust for every conceivable situation. Passing a NULL pointer to a function is very common; therefore, your code should check for it and deal with it, usually by returning an error value. Library functions should NOT crash an application.
For C++, if your function doesn't accept a null pointer, then use a reference argument. In general.
There are some exceptions. For example, many people, including myself, think it's better with a pointer argument when the actual argument will most naturally be a pointer, especially when the function stores away a copy of the pointer. Even when the function doesn't support a null pointer argument.
How much to defend against invalid argument depends, including that it depends on subjective opinion and gut-feeling.
Cheers & hth.,
One thing you have to consider is what happens if some caller DOES misuse your API. In the case of passing NULL pointers, the result is an obvious crash, so it's OK not to check. Any misuse will be readily apparent to the calling code's developer.
The infamous glibc debacle is another thing entirely. The misuse resulted in actually useful behavior for the caller, and the API stayed that way for decades. Then they changed it.
In this case, the API developers should have checked values with an assert or some similar mechanism. But you can't go back in time to correct an error. The wailing and gnashing of teeth were inevitable. Read all about it here.
If you don't want a NULL then don't make the parameter a pointer.
By using a reference you guarantee that the object will not be NULL.
He who performs invalid operations on invalid or nonexistent data only deserves his system state to become invalid.
I consider it complete nonsense that functions which expect input should check for NULL. Or whatever other value for that matter. The sole job of a function is to do a task based on its input or scope state, nothing else. If you have no valid input, or no input at all, then don't even call the function. Besides, a NULL check doesn't detect the other millions and millions of possible invalid values. You know beforehand you would be passing NULL, so why would you still pass it, waste valuable cycles on yet another function call with parameter passing, an in-function comparison of some pointer, and then check the function output again for success or not? Sure, I might have done so when I was 6 years old back in 1982, but those days have long since gone.
There is of course the argument to be made for public APIs. Like some DLL offering idiot-proof checking. You know, those arguments: "If the user supplies NULL you don't want your application to crash." What a non-argument. It is the user who passes bogus data in the first place; it's an explicit choice and nothing else than that. If one feels that is quality, well... I prefer solid logic and performance over such things. Besides, a programmer is supposed to know what he's doing. If he's operating on invalid data for the particular scope, then he has no business calling himself a programmer. I see no reason to downgrade the performance, increase the power consumption, and increase the binary size, which in turn affects instruction caching and branch prediction, of my products in order to support such users.
I don't think it's the responsibility of the callee to deal with such things
If it doesn't take this responsibility it might create bad results, like dereferencing NULL pointers. The problem is that it always implicitly takes this responsibility. That's why I prefer graceful handling.
In my opinion, it's the callee's responsibility to enforce its contract.
If the callee shouldn't accept NULL, then it should assert that.
Otherwise, the callee should be well behaved when it's handed a NULL. That is, either it should functionally be a no-op, return an error code, or allocate its own memory, depending on the contract that you specified for it. It should do whatever seems to be the most sensible from the caller's perspective.
As the user of the API, I want to be able to continue using it without having the program crash; I want to be able to recover at the least or shut down gracefully at worst.
One side effect of that approach is that when your library crashes in response to being passed an invalid argument, you will tend to get the blame.
There is no better example of this than the Windows operating system. Initially, Microsoft's approach was to eliminate many tests for bogus arguments. The result was an operating system that was more efficient.
However, the reality is that invalid arguments are passed all the time. From programmers that aren't up to snuff, or from code just using values returned by other functions that weren't expected to be NULL. Now, Windows performs more validation and is less efficient as a result.
If you want to allow your routines to crash, then don't test for invalid parameters.
Yes, you should check for null pointers. You don't want to crash an application because the developer messed something up.
Overhead of development time + runtime performance has a trade-off with the robustness of the API you are designing.
If the API you are publishing has to run inside the process of the calling routine, you SHOULD NOT check for NULL or invalid arguments. In this scenario, if you crash, the client program crashes and the developer using your API should mend his ways.
However, if you are providing a runtime/framework which will run the client program inside it (e.g., you are writing a virtual machine or a middleware which can host the code, or an operating system), you should definitely check the correctness of the arguments passed. You don't want your program to be blamed for the mistakes of a plugin.
There is a distinction between what I would call legal and moral responsibility in this case. As an analogy, suppose you see a man with poor eyesight walking towards a cliff edge, blithely unaware of its existence. As far as your legal responsibility goes, it would in general not be possible to successfully prosecute you if you fail to warn him and he carries on walking, falls off the cliff and dies. On the other hand, you had an opportunity to warn him -- you were in a position to save his life, and you deliberately chose not to do so. The average person tends to regard such behaviour with contempt, judging that you had a moral responsibility to do the right thing.
How does this apply to the question at hand? Simple -- the callee is not "legally" responsible for the actions of the caller, stupid or otherwise, such as passing in invalid input. On the other hand, when things go belly up and it is observed that a simple check within your function could have saved the caller from his own stupidity, you will end up sharing some of the moral responsibility for what has happened.
There is of course a trade-off going on here, dependent on how much the check actually costs you. Returning to the analogy, suppose that you found out that the same stranger was inching slowly towards a cliff on the other side of the world, and that by spending your life savings to fly there and warn him, you could save him. Very few people would judge you too harshly if, in this particular situation, you neglected to do so (let's assume that the telephone has not been invented, for the purposes of this analogy). In coding terms, however, if the check is as simple as checking for NULL, you are remiss if you fail to do so, even if the "real" blame in the situation lies with the caller.

How many bits to ignore when checking for NULL?

The following crashes with a seg-V:
// my code
int* ipt;
bool set = false;

void Set(int* i) {
    ASSERT(i);
    ipt = i;
    set = true;
}

int Get() {
    return set ? *ipt : 0;
}

// code that I don't control.
struct S { int I; int J; };

int main() {
    S* ip = NULL;
    // code that, as a bug, forgets to set ip...
    Set(&ip->J);
    // gobs of code
    return Get();
}
This is because while i is not NULL it still isn't valid. The same problem can happen if the calling code takes the address of an array index operation from a NULL pointer.
One solution to this is to trim the low order bits:
void Set(int* i) {
    ASSERT((reinterpret_cast<size_t>(i)) >> 10);
    ipt = i;
    set = true;
}
But how many bits should/can I get rid of?
Edit: I'm not worried about undefined behavior, as I'll be aborting (but more cleanly than a seg-V) in that case anyway.
FWIW: this is a semi-hypothetical situation. The bug that caused me to think of this was fixed before I posted, but I've run into it before and am thinking of how to work with it in the future.
Things that can be assumed for the sake of argument:
If Set is called with something that will seg-v, that's a bug
Set may be called by code that isn't my job to fix. (E.g. I file a bug)
Set may be called by code I'm trying to fix. (E.g. I'm adding sanity checks as part of my debugging work.)
Get may be called in a way that provides no information about where Set was called. (I.e. allowing Get to seg-V isn't an effective way to debug anything.)
The code needn't be portable or catch 100% of bad pointers. It need only work on my current system often enough to let me find where things are going wrong.
There is no portable way to test for any invalid pointer except NULL. Evaluating &ip[3] gives undefined behaviour, before you do anything with it; the only solution is to test for NULL before doing any arithmetic on the pointer.
If you don't need portability, and don't need to guarantee that you catch all errors, then on most mainstream platforms you could check whether the address is within the first page of memory; it's common to define NULL to be address zero, and to reserve the first page to trap most null pointer dereferences. On a POSIX platform, this would look something like
static size_t page_size = sysconf(_SC_PAGESIZE);
assert(reinterpret_cast<uintptr_t>(i) >= page_size);
But this isn't a complete solution. The only real solution is to fix whatever is abusing null pointers in the first place.
You shouldn't be doing pointer arithmetic (including array indexing) off of a null pointer at all.
And you should use 0, not NULL, in C++. NULL is a feature of C, still supported but not idiomatic in C++.
In regard to BCS's many comments and the edit: that changes the question from the rather naive one on the surface to a much deeper one. But... it is not going to be easy, in a language as permissive as C++, to protect yourself against people doing stupid things before calling your code.
Trying to work around undefined behavior will always be very dependent on your platform, compiler, version, etc., if it is possible at all.
Common *nixes never map the first page of the address space, precisely to catch null pointer access, thus you might get away with checking if the pointer value is between 0 and 4096 (or whatever page size your system uses).
But don't do this; you can't guard against everything that can go wrong. Focus instead on getting the code right. If someone passes you an invalid pointer, chances are there's something gravely wrong anyway that a pointer validation check can't fix.
Is there any way you can exert some influence to get that bad code corrected? There is no possible way this can turn out well. Legally, just creating an invalid pointer is undefined behavior.
If Set is always going to be passed a small offset from ip, and ip will always be initialized to NULL, you are probably going to be OK with what you are doing. Most modern systems do represent the null pointer as all bits zero, and most will do the natural thing. There is of course absolutely no guarantee that it will work on any given system with any given compiler and any given compiler options, and changing any of those might cause it to fail.
Since any use of bad pointers can cause program failure, you should consider what happens when the code triggers a memory violation.
Also, I don't know what your ASSERT macro does, but assert, in most implementations, is only activated in debug mode. If you want to push this piece of junk into production, or run in optimized mode, you might want to make sure it will still fail more gently.
If you don't mind a really bad hack, you can force a memory access with volatile (n.b. volatile is evil). According to the GCC docs, volatile accesses must be ordered across sequence points, so you can do something like this:
int test = *(volatile int *)i;
*(volatile int *)i = test;
I don't think = is a sequence point, but the following might also work:
*(volatile int *)i = *(volatile int *)i;
I really wouldn't recommend trying to work around a bug in somebody else's code. If you're not running everything you write through a debugger while you're developing code, no amount of checks is going to help you catch all the problems. Get them to fix their code.
If you're not using a debugger, get a decent crash handler that dumps the callstack for each thread and as much additional information regarding the program state as possible. Try to figure out what could be going wrong from that.
Regularly running your code through static analysis tools can also help here.
Remember, that it might not be someone forgetting to initialise a pointer, it could be someone else overwriting that pointer through a bad memory write from somewhere completely unrelated. There are tools which can help track down such things too.
Regarding the NULL Vs 0 debate, #define NULL 0 is better for a couple of reasons:
1) You can more easily see when you're dealing with a pointer.
2) Using NULL offers no less or more safety than using 0. So why not make your code more readable?
3) When C++11 is finally released #define NULL nullptr is a lot easier to change than all those zeros. (You could go the other way and #define nullptr 0 today I suppose, but that will probably cause problems in the future if you're developing cross platform code.)
And for the record, the C++ standard explicitly states that a null pointer constant is an integral constant expression rvalue that evaluates to zero. So please let's not have any more nonsense about null pointers not having to equal zero.
One reason, among many, you cannot do this in a portable fashion is that NULL is not guaranteed to be 0. It is only specified that null pointers will compare equal to 0. You may write a 0 (or the preprocessor macro "NULL") in your code, but the compiler knows that this 0 is in a pointer context so it generates the appropriate code to compare it to a null pointer, whatever the actual implementation of a null pointer is. See here and here for more information on that. Reinterpreting a NULL pointer as an integral type may cause it to have a true value instead of false.
You'd have to consider your particular operating system and hardware architecture. If you're only interested in detecting pointers that are "close to null" then you could use ASSERT(reinterpret_cast<uintptr_t>(i) > pageSize), assuming that the first page is always write protected in your OS.
But ... the obvious question is: Why bother? The OS will detect the null in this case and SEGV as you pointed out, which is just as good as an ASSERT, isn't it?

Is it good programming practice to always check for null pointers before using an object in C++?

This seems like a lot of work: checking for null each time an object is used.
I have been advised that it is a good idea to check for null pointers so you don't have to spend time looking for where segmentation faults occur.
Just wondering what the community here thinks?
Use references whenever you can, because they can't be null, therefore you don't have to check if they are null.
It's good practice to check for null in function parameters and other places you may be dealing with pointers someone else is passing you. However, in your own code, you might have pointers you know will always be pointing to a valid object, so a null check is probably overkill... just use your common sense.
I don't know if it really helps with debugging because any debugger will be showing you pretty clearly that a null pointer was used and it won't take long to find it. It's more about making sure you don't crash if another programmer passes in NULL, or that the mistake is picked up by an assert in a debug build.
No. You should instead make sure the pointers were not set to NULL in the first place. Note that in Standard C++:
int * p = new int;
then p can never be NULL because new will throw an exception if the allocation fails.
If you are writing functions that can take a pointer as a parameter, you should treat them like this
// does something with p
// note that p cannot be NULL
void f( int * p );
In other words you should document the requirements of the function. You can also use assert() to check if someone has ignored your documentation (if they have, it's their problem, not yours), but I must admit I have gone off this as time has gone on - simply say what the function requires, and leave the responsibility with the caller.
A third bit of advice is simply not to use pointers - most C++ code that I've seen overuses pointers to a ridiculous extent - you should use values and references wherever possible.
In general, I would advise against doing this, as it makes your code harder to read and you also have to come up with some sensible way of dealing with the situation if a pointer is actually NULL.
In my C++ projects, I only check whether a pointer (if I am using pointers at all) is NULL if NULL could be a valid state of the pointer. Checking for NULL if the pointer should never actually be NULL is a bit pointless, because you are then trying to work around some programming error you should fix instead.
Additionally, when you feel the need to check if a pointer is NULL, you probably should define more clearly who owns pointer/object.
Also, you never have to check if new returns NULL, because it never will return NULL. It will throw an exception if it could not create an object.
I hate the amount of code checking for nulls adds, so I only do it for functions I export to another person.
If I use the function internally, and I know how I use it, I don't check for nulls since it would get the code too messy.
The answer is yes, if you are not in control of the object. That is, if the object is returned from some method you do not control, or if in your own code you expect (or it is possible) that an object can be null.
It also depends on where the code will run. If you are writing professional code that customers/users will see, it's generally bad for them to see null pointer problems. It's better if you can detect it beforehand and print out some debugging information or otherwise report it to them in a "nicer" way.
If it's just code you are using informally, you will probably be able to understand the source of the null pointer without any additional information.
I figure I can do a whole lot of checks for NULL pointers for the cost of (debugging) just one segfault.
And the performance hit is negligible. TWO INSTRUCTIONS. Test for register == zero, branch if test succeeds. Depending on the machine, maybe only ONE instruction, if the register load sets the condition codes (and some do).
Others (AshleysBrain and Neil Butterworth) already answered correctly, but I will summarize it here:
Use references as much as possible
If using pointers, initialize them either to NULL or to a valid memory address/object
If using pointers, always verify if they are NULL before using them
Use references (again)... This is C++, not C.
Still, there is one corner case where a reference can be invalid/NULL:
void foo(T & t)
{
    t.blah() ;
}

void bar()
{
    T * t = NULL ;
    foo(*t) ;
}
The compiler will probably compile this, and then, at execution, the code will crash at the t.blah() line (if T::blah() uses this one way or another).
Still, this is cheating/sabotage : The writer of the bar() function dereferenced t without verifying t was NOT null. So, even if the crash happens in foo(), the error is in the code of bar(), and the writer of bar() is responsible.
So, yes, use references as much as possible, know this corner case, and don't bother to protect against sabotaged references...
And if you really need to use a pointer in C++, unless you are 100% sure the pointer is not NULL (some functions guarantee that kind of thing), then always test the pointer.
I think that is a good idea for a debug version.
In a release version, checking for null pointers can result in a performance degradation.
Moreover, there are cases where you can check the pointer value in a parent function and avoid the checking in its children.
If the pointers are coming to you as parameters to a function, then make sure they are valid at the beginning of the function. Otherwise, there is not much point. new throws an exception on failure.

How defensive should you be? [duplicate]

Possible Duplicate:
Defensive programming
We had a great discussion this morning about the subject of defensive programming. We had a code review where a pointer was passed in and was not checked if it was valid.
Some people felt that only a check for null pointer was needed. I questioned whether it could be checked at a higher level, rather than in every method it is passed through, and that checking for null was a very limited check if the object at the other end of the pointer did not meet certain requirements.
I understand and agree that a check for null is better than nothing, but it feels to me that checking only for null provides a false sense of security since it is limited in scope. If you want to ensure that the pointer is usable, check for more than the null.
What are your experiences on the subject? How do you write defenses in to your code for parameters that are passed to subordinate methods?
In Code Complete 2, in the chapter on error handling, I was introduced to the idea of barricades. In essence, a barricade is code which rigorously validates all input coming into it. Code inside the barricade can assume that any invalid input has already been dealt with, and that the inputs that are received are good. Inside the barricade, code only needs to worry about invalid data passed to it by other code within the barricade. Asserting conditions and judicious unit testing can increase your confidence in the barricaded code. In this way, you program very defensively at the barricade, but less so inside the barricade. Another way to think about it is that at the barricade, you always handle errors correctly, and inside the barricade you merely assert conditions in your debug build.
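A rough sketch of the barricade idea, with invented names: the public entry point validates everything that arrives from outside, while the code behind it only asserts.
#include <cassert>
#include <stdexcept>
#include <string>

// Inside the barricade: input is assumed to have been validated already,
// so we only assert (checked in debug builds).
static void process_record(const std::string &record)
{
    assert(!record.empty());
    // ... real work ...
}

// At the barricade: the public entry point rigorously checks everything
// coming in from the outside and converts bad input into an error.
void handle_request(const char *raw_record)
{
    if (raw_record == nullptr || *raw_record == '\0')
        throw std::invalid_argument("handle_request: empty or null record");
    process_record(raw_record);
}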
As far as using raw pointers goes, usually the best you can do is assert that the pointer is not null. If you know what is supposed to be in that memory then you could ensure that the contents are consistent in some way. This begs the question of why that memory is not wrapped up in an object which can verify its consistency itself.
So, why are you using a raw pointer in this case? Would it be better to use a reference or a smart pointer? Does the pointer contain numeric data, and if so, would it be better to wrap it up in an object which managed the lifecycle of that pointer?
Answering these questions can help you find a way to be more defensive, in that you'll end up with a design that is easier to defend.
The best way to be defensive is not to check pointers for null at runtime, but to avoid using pointers that may be null to begin with.
If the object being passed in must not be null, use a reference! Or pass it by value! Or use a smart pointer of some sort.
The best way to do defensive programming is to catch your errors at compile-time.
If it is considered an error for an object to be null or point to garbage, then you should make those things compile errors.
Ultimately, you have no way of knowing if a pointer points to a valid object. So rather than checking for one specific corner case (which is far less common than the really dangerous ones, pointers pointing to invalid objects), make the error impossible by using a data type that guarantees validity.
I can't think of another mainstream language that allows you to catch as many errors at compile-time as C++ does. Use that capability.
There is no way to check if a pointer is valid.
In all seriousness, it depends on how many bugs you'd like to have inflicted upon you.
Checking for a null pointer is definitely something that I would consider necessary but not sufficient. There are plenty of other solid principles you can use starting with entry points of your code (e.g., input validation = does that pointer point to something useful) and exit points (e.g., you thought the pointer pointed to something useful but it happened to cause your code to throw an exception).
In short, if you assume that everyone calling your code is going to do their best to ruin your life, you'll probably find a lot of the worst culprits.
EDIT for clarity: some other answers are talking about unit tests. I firmly believe that test code is sometimes more valuable than the code that it's testing (depending on who's measuring the value). That said, I also think that units tests are also necessary but not sufficient for defensive coding.
Concrete example: consider a 3rd party search method that is documented to return a collection of values that match your request. Unfortunately, what wasn't clear in the documentation for that method is that the original developer decided that it would be better to return a null rather than an empty collection if nothing matched your request.
So now, you call your defensive and well unit-tested method (which is sadly lacking an internal null pointer check) and boom! You get a NullPointerException that, without an internal check, you have no way of dealing with:
defensiveMethod(thirdPartySearch("Nothing matches me"));
// You just passed a null to your own code.
I'm a big fan of the "let it crash" school of design. (Disclaimer: I don't work on medical equipment, avionics, or nuclear power-related software.) If your program blows up, you fire up the debugger and figure out why. In contrast, if your program keeps running after illegal parameters have been detected, by the time it crashes you'll probably have no idea what went wrong.
Good code consists of many small functions/methods, and adding a dozen lines of parameter-checking to every one of those snippets of code makes it harder to read and harder to maintain. Keep it simple.
I may be a bit extreme, but I don't like Defensive Programming; I think it's laziness that has introduced the principle.
For this particular example, there is no sense in asserting that the pointer is not null. If you want a non-null pointer, there is no better way to actually enforce it (and document it clearly at the same time) than to use a reference instead. And it's documentation that will actually be enforced by the compiler and costs zilch at runtime!!
In general, I tend not to use 'raw' types directly. Let's illustrate:
void myFunction(std::string const& foo, std::string const& bar);
What are the possible values of foo and bar ? Well that's pretty much limited only by what a std::string may contain... which is pretty vague.
On the other hand:
void myFunction(Foo const& foo, Bar const& bar);
is much better!
If people mistakenly reverse the order of the arguments, it's detected by the compiler.
Each class is solely responsible for checking that the value is right; the users are not burdened.
I have a tendency to favor Strong Typing. If I have an entry that should be composed only of alphabetical characters and be up to 12 characters, I'd rather create a small class wrapping a std::string, with a simple validate method used internally to check the assignments, and pass that class around instead. This way I know that if I test the validation routine ONCE, I don't have to worry about all the paths through which that value can get to me: it will be validated when it reaches me.
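A bare-bones sketch of that kind of wrapper, reusing the 12-character alphabetic example above (the class name and the choice of throwing on invalid input are illustrative):
#include <cctype>
#include <stdexcept>
#include <string>

// Validates once on construction; every function that accepts a ShortName
// can then assume the invariant holds and never needs to re-check it.
class ShortName {
public:
    explicit ShortName(const std::string &value) : value_(value)
    {
        if (value_.empty() || value_.size() > 12)
            throw std::invalid_argument("ShortName: must be 1-12 characters");
        for (unsigned char c : value_)
            if (!std::isalpha(c))
                throw std::invalid_argument("ShortName: alphabetic characters only");
    }
    const std::string &str() const { return value_; }
private:
    std::string value_;
};

void myFunction(const ShortName &foo);  // no validation needed inside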
Of course, that doesn't mean that the code should not be tested. It's just that I favor strong encapsulation, and validation of an input is part of knowledge encapsulation in my opinion.
And as no rule can come without an exception... an exposed interface is necessarily bloated with validation code, because you never know what might come upon you. However, with self-validating objects in your BOM it's quite transparent in general.
"Unit tests verifying the code does what it should do" > "production code trying to verify its not doing what its not supposed to do".
I wouldn't even check for null myself, unless it's part of a published API.
It very much depends; is the method in question ever called by code external to your group, or is it an internal method?
For internal methods, you can test enough to make this a moot point, and if you're building code where the goal is highest possible performance, you might not want to spend the time on checking inputs you're pretty darn sure are right.
For externally visible methods - if you have any - you should always double check your inputs. Always.
From debugging point of view, it is most important that your code is fail-fast. The earlier the code fails, the easier to find the point of failure.
For internal methods, we usually stick to asserts for these kinds of checks. That does get errors picked up in unit tests (you have good test coverage, right?) or at least in integration tests that are running with assertions on.
Checking for a null pointer is only half of the story:
you should also assign a null value to every unassigned pointer.
Most responsible APIs will do the same.
Checking for a null pointer comes very cheap in CPU cycles; having an application crash once it's delivered can cost you and your company money and reputation.
You can skip null pointer checks if the code is in a private interface you have complete control of, and/or you check for null by running a unit test or some debug-build test (e.g. assert).
There are a few things at work here in this question which I would like to address:
Coding guidelines should specify that you either deal with a reference or a value directly instead of using pointers. By definition, pointers are value types that just hold an address in memory -- validity of a pointer is platform specific and means many things (range of addressable memory, platform, etc.)
If you find yourself ever needing a pointer for any reason (like for dynamically generated and polymorphic objects) consider using smart pointers. Smart pointers give you many advantages with the semantics of "normal" pointers.
If a type for instance has an "invalid" state then the type itself should provide for this. More specifically, you can implement the NullObject pattern that specifies how an "ill-defined" or "un-initialized" object behaves (maybe by throwing exceptions or by providing no-op member functions).
You can create a smart pointer that does the NullObject default that looks like this:
#include <memory>

template <class Type, class NullTypeDefault>
struct possibly_null_ptr {
    possibly_null_ptr() : p(new NullTypeDefault) {}
    possibly_null_ptr(Type* p_) : p(p_) {}
    Type * operator->() { return p.get(); }
    ~possibly_null_ptr() {}
private:
    std::shared_ptr<Type> p;

    template <class T, class N>
    friend T & operator*(possibly_null_ptr<T,N>&);
};

template <class Type, class NullTypeDefault>
Type & operator*(possibly_null_ptr<Type,NullTypeDefault> & p) {
    return *p.p;
}
Then use the possibly_null_ptr<> template in cases where you support possibly null pointers to types that have a default derived "null behavior". This makes it explicit in the design that there is an acceptable behavior for "null objects", and this makes your defensive practice documented in the code -- and more concrete -- than a general guideline or practice.
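A possible usage sketch, assuming a Widget base class with a do-nothing NullWidget default (both types are invented for illustration):
// Hypothetical types, for illustration only.
struct Widget {
    virtual void draw() { /* real drawing */ }
    virtual ~Widget() {}
};

struct NullWidget : Widget {
    void draw() override { /* deliberately a no-op */ }
};

void render(possibly_null_ptr<Widget, NullWidget> w)
{
    w->draw();   // safe even when the caller had no real Widget to pass
}

int main()
{
    render(new Widget());                              // a real object
    render(possibly_null_ptr<Widget, NullWidget>());   // the "null" default
}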
A pointer should only be used if you need to do something with the pointer itself, such as pointer arithmetic to traverse some data structure. Then, if possible, that should be encapsulated in a class.
If the pointer is passed into the function to do something with the object to which it points, then pass in a reference instead.
One method for defensive programming is to assert almost everything that you can. At the beginning of the project it is annoying but later it is a good adjunct to unit testing.
A number of answers address the question of how to write defenses into your code, but not much was said about "how defensive should you be?". That's something you have to evaluate based on the criticality of your software components.
We're doing flight software and the impacts of a software error range from a minor annoyance to loss of aircraft/crew. We categorize different pieces of software based on their potential adverse impacts which affects coding standards, testing, etc. You need to evaluate how your software will be used and the impacts of errors and set what level of defensiveness you want (and can afford). The DO-178B standard calls this "Design Assurance Level".

Checking for null before pointer usage

Most people use pointers like this...
if ( p != NULL ) {
    DoWhateverWithP();
}
However, if the pointer is null for whatever reason, the function won't be called.
My question is, could it possibly be more beneficial to just not check for NULL? Obviously on safety critical systems this isn't an option, but your program crashing in a blaze of glory is more obvious than a function not being called if the program can still run without it.
In relation to the first question, do you always check for NULL before you use pointers?
Secondly, consider you have a function that takes a pointer as an argument, and you use this function multiple times on multiple pointers throughout your program. Do you find it more beneficial to test for NULL in the function (the benefit being you don't have to test for NULL all over the place), or on the pointer before calling the function (the benefit being no overhead from calling the function)?
You are right in thinking that NULL pointers often result in immediate crashes, but do not forget that if you are indexing into a large array through a NULL pointer, you might indeed get a valid memory address if your index is high enough. And then, you'll get memory corruption or incorrect memory reads, which will be much harder to locate.
Whenever I can assume that calling a function with NULL is a bug, which should never happen in production code, I prefer using ASSERT guards in the function, which are only compiled into real code in a debug build, and not checking for NULL otherwise.
And from my point of view, generally, a function should check its arguments, not the caller. You should always assume that your callers might have been a bit sloppy about the checking, or that they might contain bugs...
Moral: check for NULL in the function being called, either through some if() statement that throws, or using some ASSERT construct (possibly with a clear message of why this happened). Also check for NULL in the callers, but only if the callers know that this condition might happen in a normal program execution, and act accordingly.
When it's acceptable for the program to just crash if a NULL pointer comes up, I'm partial to:
assert(p);
DoWhateverWithP();
This will only check the pointer in debug builds, since defining NDEBUG disables assert() at the preprocessor level. It documents your assumption and assists with debugging but has zero performance impact on the released binary (though, to be fair, checking for a NULL pointer should have effectively zero impact on performance in the vast majority of circumstances).
As a side benefit, this is legal for C as well as C++ and, in the latter case, doesn't require exceptions to be enabled in your compiler/runtime.
Concerning your second question, I prefer to put the assertions at the beginning of the subroutine. Again, the beauty of assert() is the fact that there's really no 'overhead' to speak of. As such, there's nothing to weigh against the benefits of only requiring one assertion in the subroutine definition.
Of course, the caveat is that you never want to assert an expression with side-effects:
assert(p = malloc(1)); // NEVER DO THIS!
DoSomethingWithP(); // If NDEBUG was defined, malloc() was never called!
Don't make it a rule to just check for null and do nothing if you find it.
If the pointer is allowed to be null, then you have to think about what your code does in the case that it actually is null. Usually, just doing nothing is the wrong answer. With care it's possible to define APIs which work like that, but this requires more than just scattering a few NULL checks about the place.
So, if the pointer is allowed to be null, then you must check for null, and you must do whatever is appropriate.
If the pointer is not allowed to be null, then it's perfectly reasonable to write code which invokes undefined behaviour if it is null. It's no different from writing string-handling routines which invoke undefined behaviour if the input is not NUL-terminated, or writing buffer-using routines which invoke undefined behaviour if the caller passes in the wrong value for the length, or writing a function that takes a FILE* parameter, and invokes undefined behaviour if the user passes in a file descriptor reinterpret_cast to FILE*. In C and C++, you simply have to be able to rely on what your caller tells you. Garbage in, garbage out.
However, you might like to write code which helps out your caller (who is probably you, after all) when the most likely kinds of garbage are passed in. Asserts and exceptions are good for this.
Taking up the analogy from Franci's comment on the question: most people do not look for cars when crossing a footpath, or before sitting down on their sofa. They could still be hit by a car. It happens. But it would generally be considered paranoid to spend any effort checking for cars in those circumstances, or for the instructions on a can of soup to say "first, check for cars in your kitchen. Then, heat the soup".
The same goes for your code. It's much easier to pass an invalid value to a function than it is to accidentally drive your car into someone's kitchen. But it's still the fault of the driver if they do so and hit someone, not a failure of the cook to exercise due care. You don't necessarily want cooks (or callees) to clutter up their recipes (code) with checks that ought to be redundant.
There are other ways to find problems, such as unit tests and debuggers. In any case it is much safer to create a car-free environment except where necessary (roads), than it is to drive cars willy-nilly all over the place and hope everybody can cope with them at all times. So, if you do check for null in cases where it isn't allowed, you shouldn't let this give people the idea that it is allowed after all.
[Edit - I literally just hit an example of a bug where checking for null would not find an invalid pointer. I'm going to use a map to hold some objects. I will be using pointers to those objects (to represent a graph), which is fine because map never relocates its contents. But I haven't defined an ordering for the objects yet (and it's going to be a bit tricky to do so). So, to get things moving and prove that some other code works, I used a vector and a linear search instead of a map. That's right, I didn't mean vector, I meant deque. So after the first time the vector resized, I wasn't passing null pointers into functions, but I was passing pointers to memory which had been freed.
I make dumb errors which pass invalid garbage approximately as often as I make dumb errors which pass null pointers invalidly. So regardless of whether I add checking for null, I still need to be able to diagnose problems where the program just crashes for reasons I can't check. Since this will also diagnose null pointer accesses, I usually don't bother checking for null unless I'm writing code to generally check the preconditions on entry to the function. In that case it should if possible do a lot more than just check null.]
I prefer this style:
if (p == NULL) {
    // throw some exception here
}
DoWhateverWithP();
This means that whatever function this code lives in will fail quickly in the event that p is NULL. You are correct that if p is NULL there is no way that DoWhateverWithP can execute, but using a null pointer or simply not executing the function are both unacceptable ways to handle the fact that p is NULL.
The important thing to remember is to exit early and fail fast - this kind of approach yields code that is easier to debug.
In addition to the other answers, it depends upon what NULL signifies. For example, this code is perfectly OK, and is pretty idiomatic:
while (fgets(buf, sizeof buf, fp) != NULL) {
process(buf);
}
Here, NULL value indicates not only error, but end-of-file condition as well. Similarly, strtok() returns NULL to say, "there are no more tokens" (although one should avoid strtok() to begin with, but I digress). In cases like this, it is perfectly OK to call a function if the returned pointer is not NULL, and do nothing otherwise.
Edit: another example, closer to what was asked:
const char *data = "this;is;a;test;";
const char *curr = data;
const char *p;
while ((p = strchr(curr, ';')) != NULL) {
    /* process data in [curr, p) */
    process(curr, p);
    curr = p + 1;
}
Once again, NULL here is an indication from strchr() that it couldn't find a ;, and that we should stop processing the data further.
Having said that, if NULL is not used as an indication, then it depends:
If the pointer can't be NULL at this point in code, it's useful to have an assert(p != NULL); when developing, and also having a fprintf(stderr, "Can't happen\n"); or equivalent statement, and then take whatever action as appropriate (abort() or similar is probably the only sane choice at this point).
If the pointer can be NULL, and it's not critical, it might be better to just bypass the usage of the null pointer. Suppose you're trying to allocate memory for writing a log message, and malloc() fails. You shouldn't abort the program because of this. If malloc() succeeds, you want to call a function (sprintf()/whatever) to fill the buffer; a short sketch of this case follows the list.
If the pointer can be NULL, and it's critical. In this case, you probably want to fail, and hopefully such conditions don't happen too often.
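A sketch of the second, non-critical case above, assuming a hypothetical log_message helper: if the buffer cannot be allocated, the message is simply dropped rather than aborting the program.
#include <cstdarg>
#include <cstdio>
#include <cstdlib>

// Non-critical failure: degrade gracefully when malloc() returns NULL.
void log_message(const char *fmt, ...)
{
    char *buf = static_cast<char *>(std::malloc(1024));
    if (buf == nullptr)
        return;                      // drop the message instead of aborting
    va_list args;
    va_start(args, fmt);
    std::vsnprintf(buf, 1024, fmt, args);
    va_end(args);
    std::fputs(buf, stderr);
    std::fputc('\n', stderr);
    std::free(buf);
}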
Secondly, consider you have a function that takes a pointer as an argument, and you use this function multiple times on multiple pointers throughout your program. Do you find it more beneficial to test for NULL in the function (the benefit being you don't have to test for NULL all over the place), or on the pointer before calling the function (the benefit being no overhead from calling the function)?
This depends upon a lot of factors. If I can be sure sometimes or most of the times that the pointer passed to a function cannot be NULL, the extra check in the function is wasteful. If the pointer passed comes out of a lot of places, and it's tricky to put in a check everywhere, sure, then the check is good to have in the function itself.
The standard library functions, for the most part, don't check for NULL: str*, mem* functions for example. An exception is free(), it does check for NULL.
A comment about assert: assert is a no-op if NDEBUG is defined, so one should not rely on it for error handling; its only use is during development to catch programming errors. Also, in C89, assert takes an int, so assert(p != NULL) is better in such cases than just a plain assert(p).
This non-NULLness check can be avoided by using references instead of pointers. This way, the compiler ensures the parameter passed is not NULL. For example:
void f(Param& param)
{
    // "param" is a reference, so a (legitimate) caller cannot pass NULL
}
In this case, it is up to the client to do the checking. However, mostly the client situation will be like this:
Param instance;
f(instance);
No non-NULLness checking is needed.
When using with objects allocated on the heap, you can do the following:
Param& instance = *new Param();
f(instance);
Update: As user Crashworks remarks, it is still possible to make your program crash. However, when using references, it is the responsibility of the client to pass a valid reference, and as I show in the example, this is very easy to do.
How about: a comment clarifying the intent? If the intent is "this can't happen", then perhaps an assert would be the right thing to do instead of the if statement.
On the other hand, if a null value is normal, perhaps an "else comment" explaining why we can skip the "then" step would be in order. Steve McConnell has a good section in "Code Complete" about if/else statements, and how a missing else is a very common error (distracted, forgot it?).
Consequently, I usually put a comment in for a "no-op else", unless it is something of the form "if done, return/break".
When you check for NULL, it is not good idea just to skip the function call. You should have an else-part that does something meaningful in case of NULL, for example throws an error or returns error code to upper level.
On the other hand, NULL is not always an error. It is often used to indicate for example that end of data has been reached. In such case, you will have to handle the situation as normal program flow.
Well, the answer to the first question is: you are talking about an ideal situation; most of the code that I see which uses if ( p != NULL ) is legacy. Also suppose you want to return an evaluator, and then call the evaluator with the data, but say there is no evaluator for that data; it makes logical sense to return NULL and check for NULL before calling the evaluator.
The answer to the second question is: it depends on the situation. For example, delete checks for a NULL pointer, whereas lots of other functions don't. Sometimes, if you test the pointer inside the function, then you might have to test it in lots of functions, like:
ABC(p);
a = DEF(p);
d = GHI(a);
JKL(p, d);
but this code would be much better:
if (p)
{
    ABC(p);
    a = DEF(p);
    d = GHI(a);
    JKL(p, d);
}
Could it possibly be more beneficial to just not check for NULL?
I wouldn't do it, I favor assertions on the frontline and some form of recovery in the body past that. What would assertions not provide to you, that not checking for null would? Similar effect, with easier interpretation and a formal acknowledgement.
In relation to the first question, do you always check for NULL before you use pointers?
It really depends on the code and the time available, but I am irritatingly good at it; a good chunk of 'implementation' in my programs consists of what a program should not do, rather than the usual 'what it should do'.
Secondly, consider you have a function that takes a pointer as an argument...
I test it in the function, as the function is (hopefully) the code that is reused more frequently. I also tend to test it before making the call; without that test, the error loses localization (useful for reporting and isolation).
The following is the pattern I think I've seen more of. This way you don't proceed if you know it's going to blow up anyway.
if (NULL == p)
{
    goto FunctionExit; // or some other common label to exit the function.
}
I think it is better to check for null. Although, you can cut down on the amount of checks you need to make.
For most cases I prefer a simple guard clause at the top of a function:
if (p == NULL) return;
That said, I typically only put the check on functions that are publicly exposed.
However, when the null pointer is unexpected I will throw an exception. (There are some functions it doesn't make any sense to call with null, and the consumer should be responsible enough to use it right.)
Constructor initialization can be used as an alternative to checking for null all the time. This is especially useful when the class contains a collection. The collection can be used throughout the class without checking whether it has been initialized.
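For instance, a minimal sketch (class and member names are invented):
#include <cstddef>
#include <string>
#include <vector>

// Because the collection is initialized in the constructor, no member
// function ever needs to test it against null or "not yet created"
// before using it.
class Roster {
public:
    Roster() : names_() {}                       // always a valid, empty vector
    void add(const std::string &name) { names_.push_back(name); }
    std::size_t size() const { return names_.size(); }
private:
    std::vector<std::string> names_;
};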
Dereferencing a null pointer is undefined behavior. If you want to crash if the pointer is null, use an assert or something similar (and, depending on the defined behavior of your class, that can be a perfectly valid response - it's certainly better than continuing to run when people may be expecting something to have been done!).
Since the behavior of dereferencing a null pointer is undefined, it can do anything. Crash, corrupt memory, create a wormhole to an alternate dimension allowing the Elder Gods to come forth and devour all of mankind... anything. While bugs happen, depending upon undefined behavior is, by definition, a bug. So don't do it deliberately.