Possible Duplicate:
design by contract tests by assert or by exception?
What is the preferred way to handle a null pointer passed in as an output argument to a function? I could ASSERT, but I feel it's not good to let a library crash the program. Instead, I was thinking about using exceptions.
Throw an exception! That's what they're for. Then the user of your library can decide if they want to handle it gracefully or crash and burn.
Another specific solution is to return an invalid value of a valid type, such as a negative integer for a method returning an index, but you can only use that in specific cases.
I would use an assertion if null pointers are not allowed. If you throw an exception for null pointers, you effectively allow them as arguments, because you specify behavior for such arguments. If you don't allow null pointers but still get them, then some code nearby definitely has a bug. So in my opinion it does not make sense to "handle" it at some higher level.
Either you allow callers to pass null pointers, handle that case by throwing an exception, and let the caller react properly (or let the exception propagate, as the caller wishes); or you don't allow null pointers and assert against them, possibly crashing in release mode (undefined behavior) or using a designated assertion macro that stays active in release mode. The latter philosophy is taken by functions such as strlen, the former by functions such as vector<>::at: vector<>::at explicitly dictates the behavior for out-of-bound indices, while strlen simply declares the behavior undefined when a null pointer is passed.
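A short illustration of the two contracts:

#include <cstring>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v(3);
    try {
        v.at(5);             // defined behavior: throws std::out_of_range
    } catch (const std::out_of_range&) {
        // the caller can react
    }
    std::strlen(nullptr);    // undefined behavior: the contract forbids null
}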
In the end, how would you "handle" null pointers anyway?
try {
process(data);
} catch(NullPointerException &e) {
process(getNonNullData());
}
That's plain ugly, in my opinion. If you instead assert in the function that the pointer is not null, such code becomes
if(!data) {
process(getNonNullData());
} else {
process(data);
}
I think this is far superior, as it doesn't use exceptions for control flow (supplying a non-null source as the argument). If you don't handle the exception anyway, then you could just as well fail with an assertion in process, which will point you directly to the file and line number at which the crash occurred (and with a debugger, you can actually get a stack trace).
In my applications, I always take the assert route. My philosophy is that null pointer arguments should be handled completely by non-exceptional paths, or asserted to be non-null.
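A minimal sketch of that assert route, reusing the hypothetical process/data names from above:

#include <cassert>

void process(const char* data) {
    assert(data && "process() requires a non-null pointer"); // a caller bug if this fires
    // ... work with data ...
}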
Do both.
Any violation caught during development will abort the process, which makes it obvious to the developer that it needs to be fixed.
And if one does make it past testing, there's still the exception that a robust program can handle.
And this is easy enough to put into a macro (it must be a macro and not an inline function so that assert properly reports the file and line number - thanks to @RogerPate for pointing this out):
#define require_not_null(ptr) \
do { assert(ptr); if (!(ptr)) throw std::logic_error("null ptr"); } while (0)
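Usage might then look like this (copyTo is a hypothetical function; the macro needs <cassert> and <stdexcept>):

#include <cassert>
#include <stdexcept> // std::logic_error

void copyTo(char* dest, const char* src) {
    require_not_null(dest); // asserts in debug builds, throws std::logic_error otherwise
    require_not_null(src);
    // ... copy ...
}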
If you value performance, assertions will be off in release. They're there to catch problems that should never happen, and shouldn't be used to catch stuff that may happen in real life. That's what exceptions are for.
But let's back up a second. Where is it guaranteed what will happen if you dereference a null pointer, whether writing through it or not? It may crash for you, but it won't crash on every OS, or with every compiler, or anything else. That it crashes for you is just good fortune on your part.
I'd say throw an exception if you're not going to create the object yourself and have a pointer to the pointer passed to you, the way 'out' params are often passed.
If you are programming the autopilot system for the ultimate airplane, I would recommend trying to handle the exception gracefully.
Please read the Eiffel specifications for "contract programming" (a very nice language indeed) and you'll be enlightened. NEVER crash if you can handle the event.
If you throw, a client can decide to re-throw, or not handle the exception, or crash, or call exit, or try to recover, or....
If you crash, the client crashes with you.
So throw, to give your client more flexibility.
I would neither raise an exception nor use assert, which is the approach the C++ Standard Library takes. Consider about the simplest function in the library, strlen(). If it raised an exception, how would you possibly handle it? And assertions won't fire in production code. The only sensible thing to do is to say explicitly that the function must not be called with a NULL pointer as a parameter, and that doing so results in undefined behaviour.
The benefit of using exceptions is that you let your client code decide how to handle the exceptional circumstance. That's for the case where the parameter being non-null is a stated precondition of the function. For functions taking optional out parameters, though, passing NULL can be an indication that the client is not interested in the value. Presumably, you're using the return value to signify success or failure, and if that's the case, you could simply detect the NULL and return an error code if the parameter is mandatory, or simply ignore it if the parameter is optional. This avoids the overhead of exceptions and still allows error handling on the client's part.
Related
The CppReference page for make_shared says (same with make_unique)
May throw std::bad_alloc or any exception thrown by the constructor of T.
If an exception is thrown, the functions have no effect.
This means that a std::bad_alloc exception can be thrown in case of failure. "The functions have no effect" implicitly means they cannot return a nullptr. If this is the case, why is it not common practice to wrap make_shared/make_unique in a try-catch block?
What is the proper way to use a make_shared?
Within a try-catch block, or by checking for nullptr?
I see two main reasons.
Failure of dynamic memory allocation is often considered a scenario that doesn't allow for graceful treatment. The program is terminated, and that's it. This implies that we often don't check for every possible std::bad_alloc. Or do you wrap every std::vector::push_back in a try-catch block because the underlying allocator could throw?
Not every possible exception must be caught right at the immediate call site. There are recommendations that the ratio of throw to catch should be much larger than one, which implies that you catch exceptions at a higher level, "collecting" multiple error paths into one handler. The case where the constructor of T throws can be treated the same way. After all, exceptions are exceptional. If construction of objects on the heap is so likely to throw that you have to check every such invocation, you should consider a different error handling scheme (std::optional, std::expected, etc.).
In any case, checking for nullptr is definitely not the right way of making sure std::make_unique succeeds. It never returns nullptr - either it succeeds, or it throws.
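A sketch of how this typically looks in practice, with one handler collecting many possible failure points (Widget and buildWidgets are hypothetical names):

#include <iostream>
#include <memory>
#include <new>
#include <vector>

struct Widget { /* ... */ };

std::vector<std::unique_ptr<Widget>> buildWidgets(std::size_t n) {
    std::vector<std::unique_ptr<Widget>> out;
    for (std::size_t i = 0; i < n; ++i)
        out.push_back(std::make_unique<Widget>()); // never returns nullptr
    return out;
}

int main() {
    try {
        auto widgets = buildWidgets(1000000);
    } catch (const std::bad_alloc&) {
        std::cerr << "allocation failed\n"; // one handler for many call sites
        return 1;
    }
}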
Throwing bad_alloc has two effects:
It allows the error to be caught and handled somewhere in the caller hierarchy.
It produces well-defined behaviour, regardless of whether or not such handling occurs.
The default for that well-defined behaviour is for the process to terminate in an expedited but orderly manner by calling std::terminate(). Note that it is implementation-defined (but, for a given implementation, well-defined nonetheless) whether the stack is unwound before the call to terminate().
This is rather different from an unhandled failed malloc(), for example, which (a) results in undefined behaviour when the returned null pointer is dereferenced, and (b) lets execution carry on blithely until (and beyond) that moment, usually accumulating further allocation failures along the way.
The next question, then, is where and how, if at all, calling code should catch and handle the exception.
The answer in most cases is that it shouldn't.
What's the handler going to do? Really there are two options:
Terminate the application in a more orderly fashion than the default unhandled exception handling.
Free up some memory somewhere else and retry the allocation.
Both approaches add complexity to the system (the latter especially), which needs to be justified in the specific circumstances - and, importantly, in the context of other possible failure modes and mitigations. (e.g. A critical system that already contains non-software failsafes might be better off terminating quickly to let those mechanisms kick in, rather than futzing around in software.)
In both cases, it likely makes more sense for any actual handling to be done higher up in the caller hierarchy than at the point making the failed allocation.
And if neither of these approaches adds any benefit, then the best approach is simply to let the default std::terminate() handling kick in.
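For the free-up-and-retry option, the standard hook is std::set_new_handler; here is a minimal sketch (the emergency reserve is one hypothetical strategy):

#include <new>

namespace {
    char* emergencyReserve = new char[1024 * 1024]; // released on first failure
}

void onOutOfMemory() {
    if (emergencyReserve) {
        delete[] emergencyReserve;     // free memory so operator new can retry
        emergencyReserve = nullptr;
    } else {
        std::set_new_handler(nullptr); // give up: the next failure throws std::bad_alloc
    }
}

int main() {
    std::set_new_handler(onOutOfMemory);
    // ... allocate as usual ...
}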
Say I'm using a library that has a function to_int that takes a string parameter. This function returns an int if the string is a character representation of a number, e.g. "23" would return 23. If the string isn't a number, it throws a std::runtime_error. Would it be better to:
if(is_all_digits(str))
x = to_int(str);
else
output("not an int, idiot. Try again");
Or
try
{
x = to_int(str);
}
catch(...)
{
output("not an int, idiot. Try again");
}
There are several different error handling techniques and each have advantages and disadvantages.
Let's consider a function getAllElements() which returns a container with some elements.
This may produce an error (a database connection failure or whatever), so you have the options
Errorcode getAllElements(std::vector<...> & cont);
or
std::vector<...> getAllElements(); //throws exceptions
This is the kind of general design question that usually depends on circumstances. I prefer the version with exceptions, for multiple reasons. I can just assign the result and don't need a predetermined container:
auto elements = getAllElements();
The next question is where you will handle your errors. If you handle them, say, five functions up the stack, you have to check for the error code at every level and hand it to the next function. An exception will automatically propagate until someone catches it and is able to deal with it.
Exceptions do have some disadvantages, though: they make the binary bigger, and throwing one is slow. In game development, exceptions are usually avoided for that reason (listen to this for more info: http://cppcast.com/2016/10/guy-davidson/ - they talk about why they don't use exceptions; I don't have the timestamp currently, though).
Also, exceptions should be used for exceptional cases: errors you cannot deal with immediately and that have to be dealt with somewhere higher up.
So if you don't need high performance or a small binary, I would suggest using exceptions where they are useful. They can mean typing less code (no checking of return codes), which leaves fewer places for introducing bugs.
Here is also a good discussion on error handling mechanisms from CppCon 2016:
CppCon 2016: Patrice Roy, "The Exception Situation"
There is no single answer to this question, as it depends on how the program as a whole can deal with bad input, and whether it can sensibly recover when errors are detected (regardless of whether those errors are reported using return codes or by throwing exceptions).
Can every function which calls to_int() recover from bad input immediately? If not, it is better to allow an exception to be thrown, so that it unwinds the stack until there is some caller (with a try/catch block) that can actually recover from the error.
What if you have numerous functions that call to_int() - do you want to do the check in every one? If so, this results in a lot of code duplication.
What if you have some function that calls to_int() which can recover immediately from the error and some others that cannot?
What if you want to report an error to the caller (e.g. to allow something more substantial than writing an error string)?
What if there is no is_all_digits() function? Or what if you implement it in a way that misses some errors that to_int() will detect? Then you have the worst of both worlds: you do the error checking in an attempt to prevent an exception from being thrown, but the function throws anyway. For example, there might be some global setting that causes to_int() to accept only octal digits (in the range 0 to 7), while your is_all_digits() function deems all decimal digits valid.
More generally, the real need is to define an error handling strategy that works for your program as a whole. Trying to decide, based on usage of a single function, between throwing exceptions or not throwing exceptions, is completely missing the point.
If it makes sense for your program to report errors using exceptions (e.g. with a single centralised try/catch block in main(), so that all errors propagate up the call stack and main() implements the recovery globally), then throw exceptions. If it makes sense for every function in your program to detect errors and silently deal with them on the spot, then avoid exceptions.
What I'm advocating is allow the dog (your program) to wag the tail (low level decisions on how to handle errors). Your question is essentially asking if it is appropriate to allow the tail to wag the dog.
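A sketch of the centralised variant, where main() is the single recovery point:

#include <iostream>
#include <stdexcept>

void run() {
    // ... calls to_int() and everything else; errors propagate as exceptions ...
}

int main() {
    try {
        run();
    } catch (const std::exception& e) {
        std::cerr << "error: " << e.what() << '\n'; // one recovery point for the whole program
        return 1;
    }
}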
If the caught exception is really specific, then you can use a specific try-catch (using a generic try-catch is not a good idea, since you are hiding a bunch of other possible errors).
Generically speaking my preference is to check the string before passing it to the function.
When you are using a library or an API, you want it to protect you from bad usage and other mistakes. It is the role of your function to check the integrity of the given parameters and to throw an exception. Note that you can also use assertions while developing your code, and then disable them for the production binary.
This question already has answers here:
Is there a general consensus in the C++ community on when exceptions should be used? [closed]
I have used if...else statements in many places; however, I'm new to exception handling. What is the main difference between these two?
For example:
int *ptr = new (std::nothrow) int[1000];
if (ptr == NULL) {
// Handle error cases here...
}
OR
try
{
int* myarray= new int[1000];
}
catch (exception& e)
{
cout << "Standard exception: " << e.what() << endl;
}
So here we are using the standard exception class, which has some built-in functions like e.what(). That may be an advantage. Other than that, all the other handling we can do using if...else as well. Are there any other merits to using exception handling?
To collect what the comments say in an answer:
since standardization in 1998, new does not return a null pointer on failure but throws an exception, namely std::bad_alloc. This is different from C's malloc and maybe from some early pre-standard implementations of C++, where new might have returned NULL as well (I don't know, tbh).
There is also a possibility in C++ to get a null pointer on allocation failure instead of an exception:
int *ptr = new(std::nothrow) int[1000];
So in short, the first code as originally posted (without nothrow) will not work as intended, as it is an attempt at C-style error handling in the presence of C++ exceptions. If allocation fails, the exception is thrown, the if block is never entered, and the program will probably be terminated since you don't catch the bad_alloc.
There are lots of articles comparing general error handling with exceptions vs return codes, and it would go way too far to try to cover the topic here. Amongst the reasons for exceptions are:
Function return types are not occupied by the error handling but can return real values - no "output" function parameters needed.
You do not need to check the return value of every single function call in every single function; you can just catch the exception some levels up the call stack, where you actually can handle the error.
Exceptions can pass arbitrary information to the error handling site, compared to one global errno variable and a single returned error code (see the sketch below).
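For the last point, a sketch of an exception type carrying structured context that a bare error code could not (ConfigError is a hypothetical name):

#include <stdexcept>
#include <string>

// Carries the offending file and line to whatever handler catches it.
struct ConfigError : std::runtime_error {
    std::string file;
    int line;
    ConfigError(std::string f, int l)
        : std::runtime_error("bad config in " + f + " at line " + std::to_string(l)),
          file(std::move(f)), line(l) {}
};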
The main difference is that the version using exception handling at least might work, where the one using the if statement can't possibly work.
Your first snippet:
int *ptr = new int[1000];
if (ptr == NULL) {
// Handle error cases here...
}
...seems to assume that new will return a null pointer in case of failure. While that was true at one time, it hasn't been in a long time. With any reasonably current compiler, new has only two possibilities: succeed or throw. Therefore, your second version aligns with how C++ is supposed to work.
If you really want to use this style, you can rewrite the code to get it to return a null pointer in case of failure:
int *ptr = new (std::nothrow) int[1000]; // requires <new>
if (ptr == NULL) {
// Handle error cases here...
}
In most cases, you shouldn't be using new directly anyway -- you should really use std::vector<int> p(1000); and be done with it.
With that out of the way, I feel obliged to add that for an awful lot of code, it probably makes the most sense to do neither and simply assume that the memory allocation will succeed.
At one time (MS-DOS) it was fairly common for memory allocation to actually fail if you tried to allocate more memory than was available -- but that was a long time ago. Nowadays, things aren't so simple (as a rule). Current systems use virtual memory, which makes the situation much more complicated.
On Linux, what will typically happen is that even if the memory isn't really available, Linux will do what's called an "overcommit". You'll still get a non-null pointer as if the allocation had succeeded - but when you try to use the memory, bad things will happen. Specifically, Linux has what's called an "OOM killer" that basically assumes that running out of memory is a sign of a bug, so if it happens, it tries to find the buggy program(s) and kills it/them. For most practical purposes, this means your program will probably be killed, and other (semi-arbitrarily chosen) ones may be as well.
Windows stays a little closer to the model C++ expects, so if (for example) your code were running on an unattended server, the allocation might actually fail. Long before it fails, however, it'll drag the rest of the machine to its knees, madly swapping in a doomed attempt at making the allocation succeed. If the user is actually operating the machine at the time, they'll typically either kill your program or else kill some others to free up enough memory for your code to get the requested memory fairly quickly.
In none of these cases is it particularly realistic to program against the assumption that an allocation can fail though. For most practical purposes, one of two things happens: either the allocation succeeds, or the program dies.
That leads back to the previous advice: in a typical case, you should generally just use std::vector, and assume your allocation will succeed. If you need to provide availability beyond that, you just about need to do it some other way (such as re-starting the process if it dies, preferably in a way that uses less memory).
As already mentioned, your original if-else example would still throw an exception from C++98 onwards, though adding nothrow (as edited) should make it work as desired (return null, thus trigger if-statement).
Below I'll assume, for simplicity, that for if-else to handle errors we have functions returning false where the exception-based versions would throw.
Some advantages of exceptions above if-else, off the top of my head:
You know the type of the exception for logging / debugging / bug fixing
Example:
When a function throws an exception, you can, to a reasonable extent, tell whether there may be a problem with the code or something that you can't do much about like an out of memory exception.
With the if-else, when a function returns false, you have no idea what happened in that function.
You can of course have separate logging to record this information, but why not just return an exception with the exception details included instead?
You needn't have a mess of if-else conditions to propagate the exception to the calling function
Example: (comments included to indicate behaviour)
bool someFunction() // may return false on error
{
    if (!someFunction2()) // may return false on error
        return false;
    if (!someFunction3()) // may return false on error
        return false;
    return someFunction4(); // may return false on error
}
(There are many people who don't like having functions with multiple return statements. In this case, you'll have an even messier function.)
As opposed to:
void someFunction() // may throw exception
{
someFunction2(); // may throw exception
someFunction3(); // may throw exception
someFunction4(); // may throw exception
}
An alternative to, or extension of, if-else is error codes. For this, the second point will remain. See this for more on the comparison between that and exceptions.
If you handle the error locally, if ... else is cleaner. If the function where the error occurs doesn't handle the error, then throw an exception to pass off to someone higher in the call chain.
First of all, your first code with the if statement will terminate the program if the new[] operator throws, because the exception is not handled. You can check such things here, for example: http://www.cplusplus.com/reference/new/operator%20new%5B%5D/
Also, exceptions are thrown in many other cases, not only when allocation fails, and their main feature (in my eyes) is moving control up the application, to the place where the exception is handled. I recommend you read some more about exceptions; a good read is "More Effective C++" by Scott Meyers - there is a great chapter on exceptions.
This question already has answers here:
Catching access violation exceptions?
try {
int* p = 0;
*p = 1;
} catch (...) {
cout << "null pointer." << endl;
}
I tried to catch the exception like this, but it doesn't work. Any help?
There's no such thing as a "null pointer exception" in C++. The only exceptions you can catch are the exceptions explicitly thrown by throw expressions (plus, as Pavel noted, some standard C++ exceptions thrown intrinsically by the standard operator new, dynamic_cast, etc.). There are no other exceptions in C++. Dereferencing null pointers, division by zero, etc. does not generate exceptions in C++; it produces undefined behavior. If you want exceptions thrown in cases like that, it is your own responsibility to manually detect these conditions and throw explicitly. That's how it works in C++.
Whatever else you seem to be looking for has nothing to do with the C++ language, but is rather a feature of a particular implementation. In Visual C++, for example, system/hardware exceptions can be "converted" into C++ exceptions, but there's a price attached to this non-standard functionality, which is not normally worth paying.
You cannot. De-referencing a null-pointer is a system thing.
On Linux, the OS raises signals in your application. Take a look at csignal to see how to handle signals. To "catch" one, you'd hook a function in that will be called in the case of SIGSEGV. Here you could try to print some information before you gracefully terminate the program.
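A minimal sketch of that signal route; note that almost nothing is legal inside a signal handler, so this only prints and exits:

#include <csignal>
#include <cstdio>
#include <cstdlib>

extern "C" void onSegv(int) {
    // Only async-signal-safe calls belong here.
    std::fputs("caught SIGSEGV, terminating\n", stderr);
    std::_Exit(1);
}

int main() {
    std::signal(SIGSEGV, onSegv);
    int* p = nullptr;
    *p = 1; // raises SIGSEGV; the handler runs instead of a plain crash
}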
Windows uses structured exception handling (SEH). You could use the intrinsics __try/__except, as outlined in the previous link. The way I did it in a certain debug utility I wrote was with the function _set_se_translator (because it closely matches hooks). In Visual Studio, make sure you have SEH enabled. With that function, you can hook in a function that is called when the system raises an exception in your application; in your case it would be called with EXCEPTION_ACCESS_VIOLATION. You can then throw a C++ exception and have it propagate back out as if an exception had been thrown in the first place.
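A sketch of that translator approach (MSVC-only, and the code must be compiled with /EHa):

#include <eh.h>        // _set_se_translator (MSVC-specific)
#include <windows.h>   // EXCEPTION_POINTERS, EXCEPTION_ACCESS_VIOLATION
#include <stdexcept>

void translateSeh(unsigned int code, EXCEPTION_POINTERS*) {
    if (code == EXCEPTION_ACCESS_VIOLATION)
        throw std::runtime_error("access violation");
    throw std::runtime_error("SEH exception");
}

int main() {
    _set_se_translator(translateSeh);
    try {
        int* p = nullptr;
        *p = 1; // the access violation surfaces as a C++ exception
    } catch (const std::runtime_error&) {
        // log and terminate gracefully
    }
}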
There is a very easy way to catch any kind of exception (division by zero, access violation, etc.) in Visual Studio using try -> catch (...) blocks.
A minor project tweak is enough. Just enable the /EHa option in the project settings: Project Properties -> C/C++ -> Code Generation -> set "Enable C++ Exceptions" to "Yes With SEH Exceptions". That's it!
See details here:
http://msdn.microsoft.com/en-us/library/1deeycx5(v=vs.80).aspx
Dereferencing a null pointer (or a pointer past the end of an array, or a random invalid pointer) results in undefined behavior. There's no portable way to "catch" that.
C++ doesn't do pointer checking (although I suppose some implementations could). If you try to write to a null pointer it is most likely going to crash hard. It will not throw an exception. If you want to catch this you need to check the value of the pointer yourself before you try to write to it.
Generally you can't. Even if you could it would be like trying to put a band aid on a submarine that has sprung a leak.
A crippled application can do far more damage than one that has crashed. My advice here would be to let it crash then fix why it crashed. Rinse. Repeat.
As others have said, you can't do this in C++.
If I can make a broader point: even in a language that allows you to catch it, the better action is to not touch null pointers. Catching an error when it's already blown up in your face, then deciding to just move on like it didn't happen, is not a good coding strategy. Things like null pointer dereference, stack overflow, etc., should be seen as catastrophic events and defensively avoided, even if your language allows you to react to it differently.
There is no platform independent way to do this. Under Windows/MSVC++ you can use __try/__except
But I wouldn't recommend doing it anyway. You almost certainly cannot recover correctly from a segmentation fault.
If you wanted to you could just do the pointer checking yourself and throw...
if (p == nullptr) throw std::runtime_error("woot! a nullptr!"); // std::exception has no string constructor in standard C++
p->foo();
Of course, this would only be to debug the problem; the nullptr should not occur in the first place :)
Short answer- you can't in a portable or standard way, because bugs like this are potentially corrupting the process itself.
Long answer- you can do more than you might think, and definitely more than the default of the program just crashing. However, you need to keep 3 things in mind:
1) These bugs are MORE severe than exceptions and often cannot present as exceptions to your logic.
2) Your detection and library handling of them WILL be platform-dependent on the back end, even though you can provide a clean abstract interface for public consumption.
3) There will always be some crashes that are so bad you cannot even detect them before the end.
Basically, faults like segfaults or heap corruption are not exceptions because they're corrupting the actual process running the program. Anything you coded into the program is part of the program, including exception handling, so anything beyond logging a nice error message before the process dies is inadvisable in the few cases it isn't impossible. In POSIX, the OS uses a signaling system to report faults like these, and you can register callback functions to log what the error was before you exit. In Windows, the OS can sometimes convert them into normal-looking exceptions which you can catch and recover from.
Ultimately, however, your best bet is to code defensively against such nightmares. On any given OS there will be some that are so bad that you cannot detect them, even in principle, before your process dies. For example, corrupting your own stack pointer is something that can crash you so badly that even your POSIX signal callbacks never see it.
In VC++ 2013 (and also earlier versions) you can put breakpoints on exceptions:
Press Ctrl + Alt + E (this will open the Exceptions dialog).
Expand 'Win32 Exceptions'
Ensure that "0xC0000005 Access Violation" exception is checked.
Now debug again: a breakpoint will be hit exactly when the null dereference happens.
There is no null pointer exception in C++, but if you still want to catch one, you need to provide your own class implementation for it.
Below is an example.
#include <string>

using std::string;

class Exception {
public:
    Exception(const string& msg, int val) : msg_(msg), e(val) {}
    ~Exception() {}
    string getMessage() const { return msg_; }
    int what() { return e; }
private:
    string msg_;
    int e;
};
Now, based on a null pointer check, it can be thrown like this: throw Exception("NullPointerException", 0);
and below is the code for catching it:
catch (Exception& e) {
    cout << "Not a valid object: " << e.getMessage() << ": ";
    cout << "value=" << e.what() << endl;
}
Possible Duplicate:
design by contract tests by assert or by exception?
Is there a rule of thumb to follow when deciding between exceptions and asserts (or vice versa)? Right now I only throw if it's something I think will happen at runtime on the user side (like a socket or file error); almost everywhere else I use asserts.
Also, if I were to throw instead of assert, what is a nice standard object to throw? If I recall correctly there is std::logic_error, but is that not a good object to throw? What would I throw for a missing file or unexpected input (such as from the command line rather than a frontend app)?
My rule of thumb:
Exceptions are used for run-time error conditions (IO errors, out of memory, can't get a database connection, etc.).
Assertions are used for coding errors (this method doesn't accept nulls, and the developer passed one anyway).
For libraries with public classes, throw exceptions on the public methods (because it makes sense to do so). Assertions are used to catch YOUR mistakes, not theirs.
EDIT: This may not be entirely clear, due to the null value example. My point is that you use assertions (as others have pointed out) for conditions that should NEVER happen, for conditions that should NEVER make it into production code. These conditions absolutely must fail during unit testing or QA testing.
Assert the stuff that you know cannot happen (i.e. if it happens, it's your fault for being incompetent).
Raise exceptions for exceptional situations that are not handled by the regular control flow of the program.
You use exceptions for exceptional situations. For example an out of memory situation or a network failure.
You use assert to ascertain that a certain precondition is met, for example that a pointer is not NULL or that an integer is within a certain range.
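For instance (setPercentage is a hypothetical function):

#include <cassert>

void setPercentage(int value) {
    // Precondition: callers must pass 0..100; a violation is a coding error.
    assert(value >= 0 && value <= 100);
    // ...
}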
I use asserts for things that should never happen, yet do. The sort of thing that when it happens, the developer needs to revisit incorrect assumptions.
I use exceptions for everything else.
In reusable code, I prefer an exception because it gives the caller a choice of handling or not handling the problem. Just try catching and handling an assert!
Assert is a means to verify that the program is in a valid state. If a function returns -1 when it should only return positive integers, and you have an assert that verifies that, your program should stop, because continuing would put it in a dangerous state.
As a general rule, I throw exceptions from:
public functions of a package to catch programming errors.
internal functions to report system errors or pass-through sub-system errors.
whereas I use asserts only internally, to catch implementation mistakes.
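A sketch combining the two (Package and its methods are hypothetical names):

#include <cassert>
#include <stdexcept>

class Package {
public:
    // Public boundary: validate the caller's input and throw.
    void store(const char* name) {
        if (!name)
            throw std::invalid_argument("name must not be null");
        storeImpl(name);
    }
private:
    // Internal: a null here means a bug in this class, so assert.
    void storeImpl(const char* name) {
        assert(name && "storeImpl() requires validated input");
        // ...
    }
};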