Invalid pointer handling policy - C++

I think that if a null pointer is passed into a function, we should just let it GP (general protection fault) so we can find the root cause easily. But my teammate says we should avoid GP faults in production code: clients may be upset if the application crashes often, even though the root cause could have been masked by some null pointer protection.
Which method would you use when you need to validate that a pointer is null?
HRESULT function(const int* pNumber)
{
    // POINTER CHECK for pNumber...
    ...
}
Method 1 - Ignore the invalid case
if(pNumber)
{
int a = *pNumber;
}
No GP
May enter an abnormal flow
Hard to find the root cause
Method 2 - Assert pointer, warning in debug mode
assert(pNumber);
int a = *pNumber;
May GP in release mode
Never enters an abnormal flow
Easy to find the root cause
Method 3 - Leave debug message and return error code
if(!pNumber)
{
OutputDebugString(L"Error Null pointer in function.\n");
return E_POINTER;
}
No GP
Never enters an abnormal flow inside the function. The client may enter an abnormal flow outside it if the returned E_POINTER is ignored
Fails silently; hard to find the root cause
Method 4 - Throw a logic_error exception - Let caller catch
if(!pNumber)
{
throw std::logic_error("Null pointer of pNumber in function");
};
No GP
Possible resource leak while the stack is unwinding, in code without resource management (RAII)
Never enters an abnormal flow
Hard to find where the exception is thrown

If you dereference a nullptr, you enter the land of undefined behaviour. This means your compiler isn't obliged to do anything sensible, so this should really be avoided. It may also decide that since the dereference is illegal, it never happened, remove the corresponding code (thereby optimizing it away), and leave you with logic errors without ever hitting a general protection fault.
I personally prefer the assert case if a nullptr is absolutely invalid, but in that case a reference might be more sensible anyway. I don't think there is a general policy, because it depends heavily on the surrounding logic.
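The "use a reference instead" suggestion can be sketched as follows; the function names are illustrative, not from the question:

```cpp
#include <cassert>

// Pointer version: the null case must be asserted (or handled) away.
int doubled_via_pointer(const int* pNumber)
{
    assert(pNumber); // programming error if this ever fires
    return *pNumber * 2;
}

// Reference version: the caller cannot pass "nothing" without first
// invoking undefined behaviour themselves, so the check disappears.
int doubled_via_reference(const int& number)
{
    return number * 2;
}
```

The reference variant moves the burden of producing a valid object onto the caller, which is exactly where the asserted precondition says it belongs.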

Exceptions are a serious breach of contract, and that makes sense if you have two modules that communicate via an interface - not so much for a static function local to a .cpp unit. Think about accessing an array past the end. Throwing also assumes the other party will catch it.
None of the others is good enough alone.
assert(pNumber); alone is weak: there might be behavior specific to release mode that you will not catch. Furthermore, it is limited to the range of inputs you have tested in debug (which is far from all of them).
Ignoring the invalid case, as shown above, is burying your head in the sand like an ostrich.
OutputDebugString is weaker than assert. You, the developer, will eventually let things slide; you will get so used to the error messages that you stop reading them.
So if I am not using exceptions, I will use:
assert(pNumber);
if (pNumber)
{
    // normal flow
}
else
{
    // Log with a logger, at whatever logging level you see fit
}

I prefer Method 2
assert(pNumber);
int a = *pNumber;
because in debug mode you can easily identify where the assert failure occurred, and it documents that a null value must not continue into the body of the function. The user will not see any abnormal behavior and the application will work as normal.

Related

How to check for and handle precondition violations?

C++20 is delivering some amazing new features around contracts - which for templates is going to make life much better - where constraints around types or other compile-time requirements can be baked into the template definition and enforced with decent diagnostics by the compiler. Yay!
However, I'm very concerned by the push towards terminating unconditionally when a runtime precondition violation occurs.
https://en.cppreference.com/w/cpp/language/attributes/contract
A program may be translated with one of two violation continuation modes:
off (default if no continuation mode is selected): after the execution of the violation handler completes, std::terminate is called;
on: after the execution of the violation handler completes, execution continues normally.
Implementations are encouraged to not provide any programmatic way to query, set, or modify the build level or to set or modify the violation handler.
I've written extensive user-facing software which traps all exceptions to a core execution loop where errors are logged and the user is informed of the failure.
In many cases, the user is better off saving if possible and exiting, but in many other cases, the error can be addressed by changing something in the design / data file they're working on.
This is to say - simply by altering their design (e.g. a CAD design) - the operation they wished to perform will now succeed. E.g. it's possible that the code was executed with too tight a tolerance, which miscomputed a result based on it. Simply rerunning the procedure after changing the tolerance would succeed (the offending precondition somewhere in the underlying code would no longer be violated).
But the push to make preconditions simply terminate and have no capacity to trap such an error and retry the operation? This sounds like a serious degradation in feature set to me. Admittedly, there are domains in which this is exactly desirable. Fail fast, fail early, and for preconditions or postconditions, the problem is in the way the code is written, and the user cannot remedy the situation.
But... and this is a big but... most software executes against an unknown data set that is supplied at runtime - to claim that all software must terminate, and that there is no way a user can be expected to rectify the situation, seems to me to be bizarre.
Herb Sutter's discussion at the ACCU seems to be aligned strongly with the perspective that precondition & postcondition violations are simply terminate conditions:
https://www.youtube.com/watch?v=os7cqJ5qlzo
I'm interested in what other C++ pros are thinking, informed by whatever your coding experience tells you.
I know that many projects disallow exceptions. If you're working on one such project, does that mean you write your code to simply terminate whenever invalid input occurs? Or do you back out using error states to some parent code point that is able to continue in some way?
Maybe more to the point - maybe I'm misunderstanding what C++20 runtime contracts are intended for?
Please keep this civil - and if your suggestion is to close this - perhaps you could be so kind as to point to a more appropriate forum for this discussion?
Most generally, I'm trying to answer, to my satisfaction:
How to check for and handle precondition violations (using best possible practices)?
It really comes down to this question: what do you mean when you say the word "precondition"?
The way you seem to use the word is to refer to "a thing that gets checked when you call this function." The way Herb, the C++ standard, and therefore the C++ contract system mean it is "a thing which must be true for the valid execution of this function, and if it is not true, then you have done a wrong thing and the world is broken."
And this view really comes down to what a "contract" means. Consider vector::operator[] vs. vector::at(). at does not have a precondition contract in the C++ standard; it throws if the index is out-of-range. That is, it is part of the interface of at that you can pass it out-of-range values, and it will respond in an expected, predictable way.
That is not the case for operator[]. It is not part of the interface of that function that you can pass it out-of-range indices. As such, it has a precondition contract that the index is not out-of-range. If you pass it an out-of-range index, you get undefined behavior.
So, let's look at some simplistic examples. I'm going to build a vector and then read an integer from the user, then use that to access the vector I built in three different ways:
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> ivec = {10, 209, 184, 96};
    int ix;
    std::cin >> ix;

    //case 1:
    try
    {
        std::cout << ivec.at(ix);
    }
    catch(const std::exception &)
    {
        std::cout << "Invalid input!\n";
    }

    //case 2:
    if(0 <= ix && ix < static_cast<int>(ivec.size()))
        std::cout << ivec[ix];
    else
        std::cout << "Invalid Input!\n";

    //case 3:
    std::cout << ivec[ix];
    return 0;
}
In case 1, we see the use of at. In the case of bad input, we catch the exception and process it.
In case 2, we see the use of operator[]. We check to see if the input is in the valid range, and if so, call operator[].
In case 3, we see... a bug in our code. Why? Because nobody sanitized the input. Someone had to, and operator[]'s precondition says that it is the caller's job to do it. The caller fails to sanitize its inputs and thus represents broken code.
That is what it means to establish a contract: if the code breaks the contract, it's the code's fault for breaking it.
But as we can see, a contract appears to be a fundamental part of a function's interface. If so, why is this part of the interface sitting in the standard's text instead of being in the function's visible declaration where people can see it? That right there is the entire point of the contracts language feature: to allow users to express this specific kind of thing within the language.
To sum up, contracts are assumptions that a piece of code makes about the state of the world. If that assumption is incorrect, then that represents a state of the world that should not exist, and therefore your program has a bug in it. That's the idea underlying the contract language feature design. If your code tests it, it's not something you assume, and you shouldn't use preconditions to define it.
If it's an error condition, then you should use your preferred error mechanism, not a contract.
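The distinction can be sketched with a hypothetical pair of accessors (names are illustrative): the contract-style version only asserts its precondition, while the error-condition version makes bad input part of its interface:

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Contract style: "i < v.size()" is a precondition. Violating it is a
// bug in the caller, so we only assert; there is no defined behaviour
// for a bad index.
int get_unchecked(const std::vector<int>& v, std::size_t i)
{
    assert(i < v.size());
    return v[i];
}

// Error-condition style: an out-of-range index is part of the
// interface, so it is reported through the error mechanism (here, an
// exception), not treated as a contract violation.
int get_checked(const std::vector<int>& v, std::size_t i)
{
    if (i >= v.size())
        throw std::out_of_range("index out of range");
    return v[i];
}
```

This mirrors the operator[] / at() split discussed above: the first function assumes, the second one checks.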

Difference between using try-Catch exception handler and if else condition check? [duplicate]

This question already has answers here:
Is there a general consensus in the C++ community on when exceptions should be used? [closed]
(11 answers)
Closed 9 years ago.
I have used if...else statements in many places; however, I'm new to exception handling. What is the main difference between the two?
For example:
int *ptr = new (std::nothrow) int[1000];
if (ptr == NULL) {
// Handle error cases here...
}
OR
try
{
int* myarray= new int[1000];
}
catch (exception& e)
{
cout << "Standard exception: " << e.what() << endl;
}
So here we are using the standard exception class, which has some built-in functions like e.what(). That may be an advantage. Other than that, all the other handling we could do using if...else as well. Are there any other merits to using exception handling?
To collect what the comments say in an answer:
since the standardization in 1998, new does not return a null pointer on failure but throws an exception, namely std::bad_alloc. This is different from C's malloc and maybe from some early pre-standard implementations of C++, where new might have returned NULL as well (I don't know, tbh).
There is also a possibility in C++ to get a null pointer on allocation failure instead of an exception:
int *ptr = new(std::nothrow) int[1000];
So in short, the first code you have will not work as intended, as it is an attempt at C-style error handling in the presence of C++ exceptions. If allocation fails, the exception will be thrown, the if block will never be entered, and the program will probably be terminated since you don't catch the bad_alloc.
There are lots of articles comparing general error handling with exceptions vs. return codes, and it would go way too far to try to cover the topic here. Amongst the reasons for exceptions are:
Function return types are not occupied by the error handling but can return real values - no "output" function parameters needed.
You do not need to handle the return of every single function call in every single function but can just catch the exception some levels up the call stack where you actually can handle the error
Exceptions can pass arbitrary information to the error handling site, as compared to a single global errno variable and a single returned error code.
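The last point can be sketched with a hypothetical exception type that carries context a plain error code could not:

```cpp
#include <stdexcept>
#include <string>

// ParseError is an illustrative type: besides the message, it carries
// the offending line number - something a single errno-style integer
// could not convey to the handler.
struct ParseError : std::runtime_error
{
    int line;
    ParseError(const std::string& what, int line_no)
        : std::runtime_error(what), line(line_no) {}
};

int parse_digit(char c, int line_no)
{
    if (c < '0' || c > '9')
        throw ParseError(std::string("not a digit: ") + c, line_no);
    return c - '0';
}
```

A handler several levels up the call stack can then log both e.what() and e.line without any function in between having to forward that information.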
The main difference is that the version using exception handling at least might work, where the one using the if statement can't possibly work.
Your first snippet:
int *ptr = new int[1000];
if (ptr == NULL) {
// Handle error cases here...
}
...seems to assume that new will return a null pointer in case of failure. While that was true at one time, it hasn't been in a long time. With any reasonably current compiler, the new has only two possibilities: succeed or throw. Therefore, your second version aligns with how C++ is supposed to work.
If you really want to use this style, you can rewrite the code to get it to return a null pointer in case of failure:
int *ptr = new(std::nothrow) int[1000];
if (ptr == NULL) {
// Handle error cases here...
}
In most cases, you shouldn't be using new directly anyway -- you should really use std::vector<int> p(1000); and be done with it.
With that out of the way, I feel obliged to add that for an awful lot of code, it probably makes the most sense to do neither and simply assume that the memory allocation will succeed.
At one time (MS-DOS) it was fairly common for memory allocation to actually fail if you tried to allocate more memory than was available -- but that was a long time ago. Nowadays, things aren't so simple (as a rule). Current systems use virtual memory, which makes the situation much more complicated.
On Linux, what'll typically happen is that even if the memory isn't really available, Linux will do what's called an "overcommit". You'll still get a non-null pointer as if the allocation had succeeded - but when you try to use the memory, bad things will happen. Specifically, Linux has what's called an "OOM killer" that basically assumes that running out of memory is a sign of a bug, so if it happens, it tries to find the buggy program(s) and kills it/them. For most practical purposes, this means your program will probably be killed, and other (semi-arbitrarily chosen) ones may be as well.
Windows stays a little closer to the model C++ expects, so if (for example) your code were running on an unattended server, the allocation might actually fail. Long before it fails, however, it'll drag the rest of the machine to its knees, madly swapping in a doomed attempt at making the allocation succeed. If the user is actually operating the machine at the time, they'll typically either kill your program or else kill some others to free up enough memory for your code to get the requested memory fairly quickly.
In none of these cases is it particularly realistic to program against the assumption that an allocation can fail though. For most practical purposes, one of two things happens: either the allocation succeeds, or the program dies.
That leads back to the previous advice: in a typical case, you should generally just use std::vector, and assume your allocation will succeed. If you need to provide availability beyond that, you just about need to do it some other way (such as re-starting the process if it dies, preferably in a way that uses less memory).
As already mentioned, your original if-else example would still throw an exception from C++98 onwards, though adding nothrow (as edited) should make it work as desired (return null, and thus trigger the if statement).
Below I'll assume, for simplicity, that for if-else to handle errors we have functions returning false on failure.
Some advantages of exceptions above if-else, off the top of my head:
You know the type of the exception for logging / debugging / bug fixing
Example:
When a function throws an exception, you can, to a reasonable extent, tell whether there may be a problem with the code or something that you can't do much about like an out of memory exception.
With the if-else, when a function returns false, you have no idea what happened in that function.
You can of course have separate logging to record this information, but why not just return an exception with the exception details included instead?
You needn't have a mess of if-else conditions to propagate the exception to the calling function
Example: (comments included to indicate behaviour)
bool someFunction() // may return false on exception
{
    if (!someFunction2()) // may return false on exception
        return false;
    if (!someFunction3()) // may return false on exception
        return false;
    return someFunction4(); // may return false on exception
}
(There are many people who don't like having functions with multiple return statements. In this case, you'll have an even messier function.)
As opposed to:
void someFunction() // may throw an exception
{
    someFunction2(); // may throw an exception
    someFunction3(); // may throw an exception
    someFunction4(); // may throw an exception
}
An alternative to, or extension of, if-else is error codes. For this, the second point will remain. See this for more on the comparison between that and exceptions.
If you handle the error locally, if ... else is cleaner. If the function where the error occurs doesn't handle the error, then throw an exception to pass off to someone higher in the call chain.
First of all, your first code with the if statement will terminate the program in case of an exception thrown by the new[] operator, because the exception is not handled. You can check this here, for example: http://www.cplusplus.com/reference/new/operator%20new%5B%5D/
Also, exceptions are thrown in many other cases, not only when allocation fails, and their main feature (in my eyes) is moving control up through the application (to the place where the exception is handled). I recommend you read some more about exceptions; a good read would be "More Effective C++" by Scott Meyers - there is a great chapter on exceptions.

Where should assert() be used in C resp. C++?

Where specifically should we use the assert() function? If it's a situation like determining whether an integer value is greater than zero or a pointer is null, we could simply write a private function to check it. In this kind of situation, when should we use assert() over a custom-written check?
Context: I write server software for a living, the kind that stays up for weeks before the next version is loaded. So my answers may be biased toward highly defensive code.
The principle.
Before we delve into the specifics of where to use assert, it's important to understand the principle behind it.
assert is an essential tool in Defensive Programming. It helps validate assumptions (assert them, actually) and thus catch programming errors (to be distinguished from user errors). The goal of assert is to detect erroneous situations, from which recovery is generally not immediately possible.
Example:
char const* strstr(char const* haystack, char const* needle) {
assert(haystack); assert(needle);
// ...
}
Alternatives.
In C? There is little alternative, unless your function has been designed to return an error code or a sentinel value, and this is duly documented.
In C++, exceptions are a perfectly acceptable alternative. However, an assert can help produce a memory dump so that you can see exactly what state the program is in at the moment the erroneous situation is detected (which helps debugging), while an exception will unwind the stack and thus lose the context (oops...).
Also, an exception might (unfortunately) get caught by a high-level handler (or an unsavory catch from a fellow developer (you would not do that, of course)), in which case you could miss the error completely until it's too late.
Where NOT to use it.
First, it should be understood that assert is only ever useful in Debug code. In Release, NDEBUG is defined and no code is generated. As a corollary, in Release assert has the same worth as a comment.
Never use it for checks that are necessary to the good behavior of the software. Error conditions should be checked and dealt with. Always.
Second, it should be understood that malformed input is part of your life. Would you want your compiler to display an assert message each time you made an error? Hmm! Therefore:
Never use it for input data validation. Input data should be validated and errors appropriately reported to the user. Always.
Third, it should be understood that crashes are not appreciated. It is expected of your program that it will run smoothly. Therefore, one should not get tempted to leave asserts on in Release mode: Release code ends up in the end user hands and should never crash, ever. At worst, it should shutdown while displaying an error message. It is expected that no user data is lost during this process, and even better if upon restarting the user is taken back to where she was: that is what modern browsers do, for example.
Never leave asserts on in Release.
Note: for server code, upon "hitting" an assertion, we manage to get back in position for treating the next query in most cases.
Where to use it.
assert is on in Debug mode, and so should be used for debugging: whenever you test new code, whenever your test suite runs, whenever the software is in your (or your teammates') hands, whenever the software is in your QA department's hands. Asserts let you spot errors and give you the full context of the error so that you can repair it.
Use it during the development and testing cycles.
Even better: since you know the code will not be executed in Release, you can afford to perform expensive checks.
Note: you should also test the Release binary, if only to check the performance.
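For example (a sketch; the function is illustrative): an O(n) sanity check that would be unacceptable in a hot Release path costs nothing there, because assert compiles away under NDEBUG:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Counts occurrences of value via binary search; requires sorted input.
// The O(n) is_sorted check runs in Debug builds only - in Release
// (NDEBUG defined) the assert, and its cost, disappear entirely.
int count_sorted(const std::vector<int>& v, int value)
{
    assert(std::is_sorted(v.begin(), v.end())); // expensive, Debug-only
    auto range = std::equal_range(v.begin(), v.end(), value);
    return static_cast<int>(range.second - range.first);
}
```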
And in Release ?
Well, in the codebase I work on, we replace the inexpensive asserts (the others are ignored) by specific exceptions that are only caught by a high level handler that will log the issue (with backtrace), return a pre-encoded error response and resume the service. The development team is notified automatically.
In software that is deployed, the best practices I have seen imply to create a memory dump and stream it back to the developers for analysis while attempting not to lose any user data and behave as courteously as possible toward the unfortunate user. I feel really blessed to be working server-side when I contemplate the difficulty of this task ;)
I'm going to throw out my view of assert(). I can find what assert() does elsewhere, but Stack Overflow provides a good forum for suggestions on how and when to use it.
Both assert and static_assert serve similar functions. Let's say you have a function foo(void*) that assumes its argument is not null:
void foo(void* p) {
assert(p);
...
}
Your function has a couple people that care about it.
First is the developer who calls your function. He might just look at your documentation and miss the part about not allowing a null pointer as the argument. He may never read the code of the function, but when he runs it in debug mode the assert may catch his inappropriate usage of your function (especially if his test cases are good).
Second (and more important), is the developer who reads your code. To him, your assert says that after this line, p is not null. This is something that is sometimes overlooked, but I believe is the most useful feature of the assert macro. It documents and enforces conditions.
You should use asserts to encode this information whenever it is practical. I like to think of it as saying "at this point in the code, this is true" (and it says this in a way so much stronger than a comment would). Of course, if such a statement doesn't actually convey much/any information then it isn't needed.
I think there's a simple and powerful point to be made:
assert () is for checking internal consistency.
Use it to check preconditions, postconditions, and invariants.
When there may be inconsistency due to external factors, circumstances which the code can't control locally, then throw an exception. Exceptions are for when postconditions cannot be satisfied given the preconditions. Good examples:
new int is ok up to its preconditions, so if memory is unavailable, throwing is the only reasonable response. (The postcondition of malloc is "a valid pointer or NULL")
The postcondition of a constructor is the existence of an object whose invariants are established. If it can't construct a valid state, throwing is the only reasonable response.
assert should not be used for the above. By contrast,
void sort (int * begin, int * end) {
    // assert (begin <= end); // OPTIONAL precondition, possibly want to throw
    for (int * i = begin; i < end; ++i) {
        assert (is_sorted (begin, i)); // invariant
        // insert *i into sorted position ...
    }
}
Checking is_sorted is checking that the algorithm is behaving correctly given its preconditions. An exception is not a reasonable response.
To cut a long story short: assert is for things which WILL NEVER happen IF the program is LOCALLY correct, exceptions are for things which can go wrong even when the code is correct.
Whether or not invalid inputs trigger exceptions or not is a matter of style.
You usually use it when you want the program to abort and display a runtime error if a boolean condition is not true. It is usually used like this:
void my_func( char* str )
{
assert ( str != NULL );
/* code */
}
It can also be used with functions that return a NULL pointer on failure:
SDL_Surface* screen = SDL_SetVideoMode( 640, 480, 16, SDL_HWSURFACE );
assert ( screen != NULL );
The exact error message assert() gives depends on your compiler, but it usually goes along these lines:
Assertion failed: str, mysrc.c, line 5

On null pointer arg, better to crash or throw exception? [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
design by contract tests by assert or by exception?
What is the preferred way to handle a null pointer passed as an output argument to a function? I could ASSERT, but I feel it's not good to let a library crash the program. Instead, I was thinking about using exceptions.
Throw an exception! That's what they're for. Then the user of your library can decide if they want to handle it gracefully or crash and burn.
Another specific solution is to return an invalid value of a valid type, such as a negative integer for a method returning an index, but you can only use that in specific cases.
I would use an assertion if null pointers are not allowed. If you throw an exception for null pointers, you effectively allow them as arguments, because you specify behavior for such arguments. If you don't allow null pointers but you still get them, then some code around definitely has a bug. So in my opinion it does not make sense to "handle" it at some higher levels.
Either you want to allow callers to pass null pointers, handle this case by throwing an exception, and let the caller react properly (or let the exception propagate, as the caller wishes), or you don't allow null pointers and assert on them, possibly crashing in release mode (undefined behavior) or using a designated assertion macro that is still active in release mode. The latter philosophy is taken by functions such as strlen, while the former is taken by functions such as vector<>::at. vector<>::at explicitly dictates the behavior for out-of-bound values, while strlen simply declares the behavior undefined when a null pointer is passed.
In the end, how would you "handle" null pointers anyway?
try {
process(data);
} catch(NullPointerException &e) {
process(getNonNullData());
}
That's plain ugly, in my opinion. If you assert in the function that the pointer is not null, such code becomes
if(!data) {
process(getNonNullData());
} else {
process(data);
}
I think this is far superior, as it doesn't use exceptions for control flow (supplying a non-NULL source as the argument). If you don't handle the exception, then you could just as well fail already with an assertion in process, which will point you directly to the file and line number where the crash occurred (and with a debugger, you can actually get a stack trace).
In my applications, I always take the assert route. My philosophy is that null pointer arguments should be handled completely by non-exceptional paths, or asserted to be non-NULL.
Do both.
Any violation that can be caught during development will abort the process, which makes it obvious to the developer that they need to fix it.
And if one does make it past testing, there's still the exception that a robust program can handle.
And this is easy enough to put into a macro (it must be a macro and not an inline function so that assert properly reports the line number - thanks to @RogerPate for pointing this out):
#define require_not_null(ptr) \
do { assert(ptr); if (!(ptr)) throw std::logic_error("null ptr"); } while (0)
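Used in a function, the macro above gives you the Debug abort and the Release exception in one line (a sketch; string_length is an illustrative caller):

```cpp
#include <cassert>
#include <stdexcept>

#define require_not_null(ptr) \
    do { assert(ptr); if (!(ptr)) throw std::logic_error("null ptr"); } while (0)

// In Debug, a null argument aborts at the assert with file/line info;
// in Release (NDEBUG), the same line throws std::logic_error instead,
// which a robust caller can still catch.
int string_length(const char* s)
{
    require_not_null(s);
    int n = 0;
    while (s[n] != '\0') ++n;
    return n;
}
```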
If you value performance, assertions will be off in release. They're there to catch problems that should never happen, and shouldn't be used to catch stuff that may happen in real life. That's what exceptions are for.
But let's back up a second. Where is it guaranteed what will happen if you dereference a null pointer, whether writing to it or not? It may crash for you, but it won't crash on every OS, or with every compiler, or anything else. That it crashes for you is just good fortune on your part.
I'd say throw an exception if you're not going to create the object yourself and have a pointer to the pointer passed to you, the way I often see 'out' params passed.
If you are programming the autopilot system for the ultimate airplane, I would recommend trying to handle the error gracefully.
Please read the Eiffel specifications for "contract programming" (a very nice language indeed) and you'll be enlightened. NEVER crash if you can handle the event.
If you throw, a client can decide to re-throw, not handle the exception, crash, call exit, try to recover, or...
If you crash, the client crashes with you.
So throw, to give your client more flexibility.
I would neither raise an exception nor use assert, which is what the C++ Standard library does. Consider just about the simplest function in the library, strlen(). If it raised an exception, how would you possibly handle it? And assertions won't fire in production code. The only sensible thing to do is say explicitly that the function must not be called with a NULL pointer as a parameter, and that doing so results in undefined behaviour.
The benefit of using exceptions is that you let your client code decide how to handle the exceptional circumstance. That's for the case where the parameter being non-null is a stated precondition of the function. For functions taking optional out parameters, though, passing NULL can be an indication that the client is not interested in the value. Presumably, you're using the return value to signify success or failure, and if that's the case, you could simply detect the NULL and return an error code if the parameter is mandatory, or simply ignore it if the parameter is optional. This avoids the overhead of exceptions and still allows error handling on the client's part.
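The optional-out-parameter pattern described here might look like this (names are illustrative):

```cpp
// Returns false on failure; 'quotient' is an optional out parameter.
// Passing NULL simply means the caller is not interested in the value,
// so a null pointer here is an allowed input, not a contract violation.
bool try_divide(int numerator, int denominator, int* quotient)
{
    if (denominator == 0)
        return false;        // error reported via return value, no exception
    if (quotient)            // NULL out param: just skip writing the result
        *quotient = numerator / denominator;
    return true;
}
```

Callers that only want to probe for success can pass nullptr; callers that need the result pass the address of a local.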

How to catch the null pointer exception? [duplicate]

This question already has answers here:
Catching access violation exceptions?
(8 answers)
Closed 6 years ago.
try {
int* p = 0;
*p = 1;
} catch (...) {
cout << "null pointer." << endl;
}
I tried to catch the exception like this, but it doesn't work - any help?
There's no such thing as a "null pointer exception" in C++. The only exceptions you can catch are the exceptions explicitly thrown by throw expressions (plus, as Pavel noted, some standard C++ exceptions thrown intrinsically by the standard operator new, dynamic_cast, etc.). There are no other exceptions in C++. Dereferencing null pointers, division by zero, etc. do not generate exceptions in C++; they produce undefined behavior. If you want exceptions thrown in cases like that, it is your own responsibility to detect these conditions manually and throw explicitly. That's how it works in C++.
Whatever else you seem to be looking for has nothing to do with the C++ language, but is rather a feature of a particular implementation. In Visual C++, for example, system/hardware exceptions can be "converted" into C++ exceptions, but there's a price attached to this non-standard functionality, which is not normally worth paying.
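Detecting the condition manually and throwing, as the answer suggests, looks like this (a sketch; the function name is illustrative):

```cpp
#include <stdexcept>

// C++ will not turn a null dereference into an exception for you; the
// check and the throw have to be written out explicitly before the
// dereference happens.
int deref_or_throw(const int* p)
{
    if (p == nullptr)
        throw std::invalid_argument("null pointer passed to deref_or_throw");
    return *p;
}
```

With this in place, a catch block for std::invalid_argument behaves the way the questioner expected catch (...) to behave.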
You cannot. Dereferencing a null pointer is a system-level event, not a C++ exception.
On Linux, the OS raises signals in your application. Take a look at csignal to see how to handle signals. To "catch" one, you'd hook in a function that will be called in the case of SIGSEGV. There you could try to print some information before gracefully terminating the program.
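As a Linux-only demonstration of that hooking idea, the sketch below installs a SIGSEGV handler and uses sigsetjmp/siglongjmp to get back out of the fault. Note the loud caveat: jumping out of a SIGSEGV handler is not sanctioned by the C++ standard, and this is a diagnostic trick, not a recovery strategy for production code.

```cpp
#include <setjmp.h>
#include <signal.h>

static sigjmp_buf g_jump;

extern "C" void on_segv(int) {
    // siglongjmp out of a SIGSEGV handler is undefined behaviour per the
    // standard, but works on typical Linux/glibc setups for a demo.
    siglongjmp(g_jump, 1);
}

// Returns true if the deliberate null write was intercepted by the handler.
bool catch_null_deref() {
    struct sigaction sa {};
    sa.sa_handler = on_segv;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, nullptr);

    if (sigsetjmp(g_jump, 1) == 0) {
        volatile int* p = nullptr;  // volatile keeps the store from being optimized away
        *p = 1;                     // faults here; control resumes in on_segv
        return false;               // never reached
    }
    return true;                    // arrived here via siglongjmp
}
```

Calling catch_null_deref() returns true on a typical Linux setup; on other platforms, or with aggressive optimization, all bets are off.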
Windows uses structured exception handling. You could use the intrinsics __try/__except, as outlined in the previous link. The way I did it in a certain debug utility I wrote was with the function _set_se_translator (because it closely matches hooks). In Visual Studio, make sure you have SEH enabled. With that function, you can hook in a function to call when the system raises an exception in your application; in your case it would be called with EXCEPTION_ACCESS_VIOLATION. You can then throw a C++ exception and have it propagate out as if one had been thrown in the first place.
There is a very easy way to catch any kind of exception (division by zero, access violation, etc.) in Visual Studio using try -> catch (...) blocks.
A minor project tweak is enough. Just enable the /EHa option in the project settings: Project Properties -> C/C++ -> Code Generation -> change Enable C++ Exceptions to "Yes With SEH Exceptions". That's it!
See details here:
http://msdn.microsoft.com/en-us/library/1deeycx5(v=vs.80).aspx
Dereferencing a null (or pointer that's past-the-end of array, or a random invalid pointer) results in undefined behavior. There's no portable way to "catch" that.
C++ doesn't do pointer checking (although I suppose some implementations could). If you try to write to a null pointer it is most likely going to crash hard. It will not throw an exception. If you want to catch this you need to check the value of the pointer yourself before you try to write to it.
Generally you can't. Even if you could it would be like trying to put a band aid on a submarine that has sprung a leak.
A crippled application can do far more damage than one that has crashed. My advice here would be to let it crash then fix why it crashed. Rinse. Repeat.
As others have said, you can't do this in C++.
If I can make a broader point: even in a language that allows you to catch it, the better action is to not touch null pointers. Catching an error when it's already blown up in your face, then deciding to just move on like it didn't happen, is not a good coding strategy. Things like null pointer dereference, stack overflow, etc., should be seen as catastrophic events and defensively avoided, even if your language allows you to react to it differently.
There is no platform independent way to do this. Under Windows/MSVC++ you can use __try/__except
But I wouldn't recommend doing it anyway. You almost certainly cannot recover correctly from a segmentation fault.
If you wanted to, you could just do the pointer check yourself and throw...
if (p == nullptr) throw std::runtime_error("woot! a nullptr!");
p->foo();
Of course this would only be to debug the problem; the nullptr should not occur in the first place :)
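Wrapped into a self-contained sketch (derefChecked and throwsOnNull are invented names for illustration; note that std::exception itself has no string constructor in standard C++, which is why std::runtime_error is used here):

```cpp
#include <stdexcept>

// Hypothetical checked-dereference helper: the null test is done by hand
// and turned into an ordinary C++ exception the caller can catch.
int derefChecked(const int* p) {
    if (p == nullptr)
        throw std::runtime_error("woot! a nullptr!");
    return *p;
}

// Returns true when the null case is caught as a normal exception.
bool throwsOnNull() {
    try {
        derefChecked(nullptr);
    } catch (const std::runtime_error&) {
        return true;
    }
    return false;
}
```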
Short answer- you can't in a portable or standard way, because bugs like this are potentially corrupting the process itself.
Long answer- you can do more than you might think, and definitely more than the default of the program just crashing. However, you need to keep 3 things in mind:
1) These bugs are MORE severe than exceptions and often cannot be surfaced to your logic as exceptions.
2) Your detection and library handling of them WILL be platform-dependent on the back end, even though you can provide a clean abstract interface for public consumption.
3) There will always be some crashes that are so bad you cannot even detect them before the end.
Basically, faults like segfaults or heap corruption are not exceptions because they're corrupting the actual process running the program. Anything you coded into the program is part of the program, including exception handling, so anything beyond logging a nice error message before the process dies is inadvisable in the few cases it isn't impossible. In POSIX, the OS uses a signaling system to report faults like these, and you can register callback functions to log what the error was before you exit. In Windows, the OS can sometimes convert them into normal-looking exceptions which you can catch and recover from.
Ultimately, however, your best bet is to code defensively against such nightmares. On any given OS there will be some that are so bad that you cannot detect them, even in principle, before your process dies. For example, corrupting your own stack pointer is something that can crash you so badly that even your POSIX signal callbacks never see it.
In VC++ 2013 (and also earlier versions) you can put breakpoints on exceptions:
Press Ctrl + Alt + E (this will open the Exceptions dialog).
Expand 'Win32 Exceptions'
Ensure that "0xC0000005 Access Violation" exception is checked.
Now debug again: a breakpoint will be hit exactly when the null dereference happens.
There is no null pointer exception in C++, but if you still want to catch one, you need to provide your own class implementation.
Below is an example.
#include <string>
using std::string;

class Exception {
public:
    Exception(const string& msg, int val) : msg_(msg), e(val) {}
    ~Exception() {}
    string getMessage() const { return msg_; }
    int what() const { return e; }
private:
    string msg_;
    int e;
};
Now, guarded by a null pointer check, it can be thrown like: if (p == nullptr) throw Exception("NullPointerException", 0);
and below is the code for catching it.
catch (Exception& e) {
    cout << "Not a valid object: " << e.getMessage() << ": ";
    cout << "value=" << e.what() << endl;
}
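Assembled into one compilable unit, the answer's pattern might look like this sketch (readValue and describeNullCall are invented helper names; the int field is set to 0, which is all NULL is anyway):

```cpp
#include <string>

// The hand-rolled exception type from the answer, plus a checked accessor
// that throws it. This is a sketch, not a recommendation over the
// standard exception hierarchy (std::invalid_argument would usually do).
class Exception {
public:
    Exception(const std::string& msg, int val) : msg_(msg), e(val) {}
    std::string getMessage() const { return msg_; }
    int what() const { return e; }
private:
    std::string msg_;
    int e;
};

int readValue(const int* p) {
    if (p == nullptr)
        throw Exception("NullPointerException", 0);
    return *p;
}

// Exercises the catch path and reports the message it produced.
std::string describeNullCall() {
    try {
        readValue(nullptr);
    } catch (const Exception& e) {
        return "Not a valid object: " + e.getMessage();
    }
    return "no exception";
}
```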