Why is the std::nothrow version of new not widely used? [duplicate]

According to the C++ reference, you can new an object by:
MyClass * p1 = new MyClass;
or by
MyClass * p2 = new (std::nothrow) MyClass;
The second one will return a null pointer instead of throwing an exception.
However, I hardly see this version in my experience.
For example, Google does not recommend using exceptions in their code, but as far as I can see they are not using the nothrow version in Chromium either.
Is there any reason to prefer the default one over the nothrow one, even in a project that is not using exceptions?
-- EDIT --
Follow-up question: should I check the return value of malloc()?
It looks like, on the contrary, many people advise checking the return value of malloc; some say because:
many allocation failures have nothing to do with being out of memory. Fragmentation can cause an allocation to fail because there's not enough contiguous space available even though there's plenty of memory free.
Is this true? Why do we treat malloc() and new differently in this case?

However, I hardly see this version in my experience.
You would use it (or, equivalently, catch the exception from the default version) if you can handle the failure locally; perhaps by freeing some other memory and then retrying, or by trying to allocate something smaller, or by using an alternative algorithm that doesn't need the extra memory.
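For illustration, a minimal sketch of one such local strategy: fall back to progressively smaller buffers before giving up. The function name and the halving policy are invented for this example.
#include <cstddef>
#include <new>

// Sketch: try the preferred size first, then fall back to smaller buffers.
// Returns nullptr only if even `minimum` bytes cannot be allocated.
char* allocate_buffer(std::size_t preferred, std::size_t minimum, std::size_t& got) {
    for (std::size_t n = preferred; n >= minimum; n /= 2) {
        if (char* p = new (std::nothrow) char[n]) {
            got = n;
            return p;  // caller owns the buffer and must delete[] it
        }
    }
    got = 0;
    return nullptr;
}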
Is there any reason to prefer the default one over the nothrow one?
The general principle of exceptions: if you can't handle it locally, then there's no point in checking locally. Unlike return values, exceptions can't be ignored, so there's no possibility of ploughing on regardless and using a null pointer.
Even in a project that is not using exceptions?
Often, an out-of-memory condition can't be handled at all. In that case, terminating the program is probably the best response; and that is the default response to an unhandled exception. So, even if you're not using exceptions, the default new is probably the best option in most situations.
should I check the return value of malloc()?
Yes: that's the only way to check whether it succeeded. If you don't, then you could end up using a null pointer, giving undefined behaviour: often a crash, but perhaps data corruption or other bizarre behaviour and long debugging sessions to (hopefully) figure out what went wrong.
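A minimal sketch of that check (the recovery action is application-specific; exiting is just the simplest option):
#include <cstdio>
#include <cstdlib>

int main() {
    std::size_t count = 1024;
    int* buf = static_cast<int*>(std::malloc(count * sizeof(int)));
    if (!buf) {  // the only way to detect that malloc failed
        std::fprintf(stderr, "malloc of %zu bytes failed\n", count * sizeof(int));
        return EXIT_FAILURE;
    }
    // ... use buf ...
    std::free(buf);
}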
Why do we treat malloc() and new differently in this case?
Because malloc forces us to check the return value, while new gives us the option of less intrusive error handling.

If you use the throwing version, you don't need to test the result of every new call to see whether it succeeded or failed. Typically, in most applications, if your allocation fails you can't do much except exit/abort, which the exception does for you automatically if you don't explicitly try/catch.
If you use the nothrow version you might wind up propagating a null pointer through your application and crashing/exiting MUCH later on at a point apparently totally unrelated to memory allocation, making debugging much harder.

Related

Using C++ std::vector push_back() or insert() with nothrow [duplicate]

The new operator (or for PODs, malloc/calloc) supports a simple and efficient way of failing when allocating large chunks of memory.
Say we have this:
const size_t sz = GetPotentiallyLargeBufferSize(); // 1M - 1000M
T* p = new (std::nothrow) T[sz];
if (!p) {
    return sorry_not_enough_mem_would_you_like_to_try_again;
}
...
Is there any such construct for the std::containers, or will I always have to handle an (expected!!) exception with std::vector and friends?
Would there maybe be a way to write a custom allocator that preallocates the memory and then pass this custom allocator to the vector, so that as long as the vector does not ask for more memory than you put into the allocator beforehand, it will not fail?
Afterthought: What really would be needed is a member function bool std::vector::reserve(std::nothrow) {...} in addition to the normal reserve function. But since that would only make sense if allocators were also extended to allow for nothrow allocation, it just won't happen. Seems (nothrow) new is good for something after all :-)
Edit: As to why I'm even asking this:
I thought of this question while debugging (1st chance / 2nd chance exception handling of the debugger): If I've set my debugger to 1st-chance catch any bad_alloc because I'm testing for low-memory conditions, it would be annoying if it also caught those bad_alloc exceptions that are already well-expected and handled in the code. It wasn't/isn't a really big problem but it just occurred to me that the sermon goes that exceptions are for exceptional circumstances, and something I already expect to happen every odd call in the code is not exceptional.
If new (nothrow) has its legitimate uses, then a vector-nothrow-reserve would have them too.
By default, the standard STL container classes use the std::allocator class under the hood to do their allocation, which is why they can throw std::bad_alloc if there's no memory available. Interestingly, the C++ ISO specification on allocators states that the return value of any allocator type must be a pointer to a block of memory capable of holding some number of elements, which automatically precludes you from building a custom allocator that could potentially use the nothrow version of new to have these sorts of silent allocation failures. You could, however, build a custom allocator that terminated the program if no memory was available, since then it's vacuously true that the returned memory is valid when no memory is left. :-)
In short, the standard containers throw exceptions by default, and any way you might try to customize them with a custom allocator to prevent exceptions from being thrown won't conform to the spec.
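As an aside, C++17's polymorphic allocators later came close to what the question asks for. A sketch, assuming <memory_resource> is available: grab the worst-case buffer up front with nothrow new, then let the vector draw from it. The slack constant is a made-up allowance for the allocator's bookkeeping.
#include <cstddef>
#include <memory_resource>
#include <new>
#include <vector>

bool process(std::size_t sz) {
    const std::size_t bytes = sz * sizeof(int) + 1024;   // made-up slack for bookkeeping
    char* buf = new (std::nothrow) char[bytes];
    if (!buf) return false;                              // report failure without throwing
    {
        // With null_memory_resource() as upstream, exhausting the pool throws
        // std::bad_alloc instead of silently falling back to the global heap.
        std::pmr::monotonic_buffer_resource pool(buf, bytes,
                                                 std::pmr::null_memory_resource());
        std::pmr::vector<int> v(&pool);
        v.reserve(sz);                                   // one allocation; fits in buf
        // ... fill and use v ...
    }
    delete[] buf;
    return true;
}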
Too often we hear "I don't want to use exceptions because they are inefficient".
Unless you are referring to an "embedded" environment where you want all runtime type information switched off, you should not be worrying too much about inefficiency of exceptions if they are being thrown in an appropriate way. Running out of memory is one of these appropriate ways.
Part of the contract of vector is that it will throw if it cannot allocate. If you wrote a custom allocator that returned NULL instead, that would be worse, as it would cause undefined behaviour.
If you have to, then use an allocator that first attempts a failed-allocation callback if one is available, and only throws if the allocation still cannot be satisfied; but either way you end up with an exception.
Can I give you a hint though: if you really are allocating such large amounts of data then vector is probably the wrong class to use, and you should use std::deque instead. Why? Because deque does not require a contiguous block of memory but still has constant-time lookup. And the advantages are two-fold:
Allocations will fail less frequently. Because you do not need a contiguous block, you may well have the memory available, albeit not in a single block.
There is no reallocation, just more allocations. Reallocations are expensive because they require all your objects to be moved; when you are in high-volume mode that can be a very costly operation.
When I worked on such a system in the past, we found we could actually store over 4 times as much data using deque as we could using vector because of reason 1 above, and it was faster because of reason 2.
Something else we did was allocate a 2MB spare buffer and when we caught a bad_alloc we freed the buffer and then threw anyway to show we had reached capacity. But with 2MB spare now we at least knew we had the memory to perform small operations to move the data from memory to temporary disk storage.
Thus we could sometimes catch the bad_alloc and take an appropriate action while retaining a consistent state; that is the purpose of exceptions, rather than assuming that running out of memory is always fatal and that we should never do anything other than terminate the program (or, even worse, invoke undefined behaviour).
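A rough sketch of that spare-buffer trick; the 2MB figure matches the description above, while Record and append are invented for the illustration:
#include <new>
#include <vector>

struct Record { char data[256]; };                 // stand-in payload

static char* g_spare = new char[2 * 1024 * 1024];  // 2MB emergency reserve

void append(std::vector<Record>& db, const Record& r) {
    try {
        db.push_back(r);
    } catch (const std::bad_alloc&) {
        delete[] g_spare;   // release the reserve so small follow-up operations
        g_spare = nullptr;  // (e.g. spilling data to disk) can still allocate
        throw;              // re-throw to signal that capacity has been reached
    }
}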
Standard containers use exceptions for this, you can't get around it other than attempting the allocation only once you know it will succeed. You can't do that portably, because the implementation will typically over-allocate by an unspecified amount. If you have to disable exceptions in the compiler then you're limited in what you can do with containers.
Regarding "simple and efficient", I think that the std containers are reasonably simple and reasonably efficient:
T* p = new (std::nothrow) T[sz];
if (!p) {
    return sorry_not_enough_mem_would_you_like_to_try_again;
}
... more code that doesn't throw ...
delete[] p;

versus:

try {
    std::vector<T> p(sz);
    ... more code that doesn't throw ...
} catch (const std::bad_alloc&) {
    return sorry_not_enough_mem_would_you_like_to_try_again;
}
It's the same number of lines of code. If it presents an efficiency problem in the failure case then your program must be failing hundreds of thousands of times per second, in which case I slightly question the program design. I also wonder under what circumstances the cost of throwing and catching an exception is significant compared with the cost of the system call that new probably makes to establish that it can't satisfy the request.
But even better, how about writing your APIs to use exceptions too:
std::vector<T> p(sz);
... more code that doesn't throw ...
Four lines shorter than your original code, and the caller who currently has to handle "sorry_not_enough_mem_would_you_like_to_try_again" can instead handle the exception. If this error code is passed up through several layers of callers, you might save four lines at each level. C++ has exceptions, and for almost all purposes you may as well accept this and write code accordingly.
Regarding "(expected!!)" - sometimes you know how to handle an error condition. The thing to do in that case is to catch the exception. It's how exceptions are supposed to work. If the code that throws the exception somehow knew that there was no point anyone ever catching it, then it could terminate the program instead.

New (std::nothrow) vs. New within a try/catch block

I did some research after learning that new, unlike the malloc() I am used to, does not return NULL for failed allocations, and found that there are two distinct ways of checking whether new succeeded. Those two ways are:
try
{
    ptr = new int[1024];
}
catch (std::bad_alloc& exc)
{
    assert(false);
}
and
ptr = new (std::nothrow) int[1024];
if (ptr == NULL)
    assert(false);
I believe the two ways accomplish the same goal (correct me if I am wrong, of course!), so my question is this:
which is the better option for checking whether new succeeded, based entirely on readability, maintainability, and performance, while disregarding de-facto C++ programming convention?
Consider what you are doing. You're allocating memory. And if for some reason memory allocation cannot work, you assert. Which is more or less exactly what will happen if you just let the std::bad_alloc propagate back to main. In a release build, where assert is a no-op, your program will crash when it tries to access the memory. So it's the same as letting the exception bubble up: halting the app.
So ask yourself a question: Do you really need to care what happens if you run out of memory? If all you're doing is asserting, then the exception method is better, because it doesn't clutter your code with random asserts. You just let the exception fall back to main.
If you do in fact have a special codepath in the event that you cannot allocate memory (that is, you can actually continue to function), exceptions may or may not be a way to go, depending on what the codepath is. If the codepath is just a switch set by having a pointer be null, then the nothrow version will be simpler. If instead, you need to do something rather different (pull from a static buffer, or delete some stuff, or whatever), then catching std::bad_alloc is quite good.
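As an illustration of that second case, a hedged sketch of falling back to a static emergency buffer when the normal allocation throws; the names and the 4KB size are invented:
#include <cstddef>
#include <new>

alignas(std::max_align_t) static unsigned char emergency[4096];
static bool emergency_in_use = false;

void* get_scratch(std::size_t n) {
    try {
        return ::operator new(n);                 // normal path
    } catch (const std::bad_alloc&) {
        if (!emergency_in_use && n <= sizeof(emergency)) {
            emergency_in_use = true;              // hand out the reserve exactly once
            return emergency;
        }
        throw;                                    // nothing left to offer
    }
}

void put_scratch(void* p) noexcept {              // release via this, never delete
    if (p == emergency) { emergency_in_use = false; return; }
    ::operator delete(p);
}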
It depends on the context of where the allocation is taking place. If your program can continue even if the allocation fails (maybe return an error code to the caller) then use the std::nothrow method and check for NULL. Otherwise you'd be using exceptions for control flow, which is not good practice.
On the other hand, if your program absolutely needs to have that memory allocated successfully in order to be able to function, use try-catch to catch (not necessarily in the immediate vicinity of the new) an exception and exit gracefully from the program.
From a pure performance perspective it matters little. There is inherent overhead with exception handling, though this overhead is generally worth the trade-off in application readability and maintenance. Memory-allocation failures of this nature should not be the common case in your application, so this should happen infrequently.
From a performance perspective you generally want to avoid the standard allocator due to its relatively poor performance anyway.
All this said, I generally accept the exception-throwing version, because generally our applications are in a state where, if memory allocation fails, there is little we can do other than exit gracefully with an appropriate error message; and we save performance by not requiring NULL checks on our newly allocated resources, because by definition an allocation failure transfers control out of the scope where those checks would matter.
new is used to create objects, not just to allocate memory, so your example is somewhat artificial.
Object constructors typically throw if they fail. Having stepped through the new implementation in Visual Studio more than a few times, I don't believe that the code catches any exceptions. It therefore generally makes sense to look for exceptions when creating objects.
I think std::bad_alloc is thrown only if the memory allocation part fails. I'm not sure what happens if you pass std::nothrow to new but the object constructor throws - there is ambiguity in the documents I have read.
The difference in performance between the 2 approaches is probably irrelevant since most of the processor time may easily be spent in the object constructor or searching the heap.
A rule-of-thumb is not always appropriate. For example, real-time systems typically restrict dynamic memory allocations, so new, if present, would probably be overloaded. In that case it might make use of a returned null pointer and handle failure locally.
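A sketch of that kind of overload, assuming C++17: a class-specific nothrow operator new hands out slots from a static pool, so exhaustion is reported as a null pointer and handled at the call site. All names and sizes (Packet, kSlots, kSlotSize) are invented for the illustration.
#include <cstddef>
#include <new>

class Packet {
    int  id = 0;
    char payload[64] = {};

public:
    static constexpr std::size_t kSlots = 32;
    static constexpr std::size_t kSlotSize = 128;   // must cover sizeof(Packet)

    static void* operator new(std::size_t, const std::nothrow_t&) noexcept {
        for (std::size_t i = 0; i < kSlots; ++i)
            if (!used_[i]) { used_[i] = true; return pool_[i]; }
        return nullptr;                             // pool exhausted: caller checks
    }
    static void operator delete(void* p) noexcept {
        if (p) used_[(static_cast<unsigned char*>(p) - pool_[0]) / kSlotSize] = false;
    }
    // matching placement delete, used if a constructor throws:
    static void operator delete(void* p, const std::nothrow_t&) noexcept {
        operator delete(p);
    }

private:
    alignas(std::max_align_t) inline static unsigned char pool_[kSlots][kSlotSize]{};
    inline static bool used_[kSlots]{};
};

static_assert(sizeof(Packet) <= Packet::kSlotSize, "slot too small");

// Usage at a call site that handles failure locally:
//   Packet* p = new (std::nothrow) Packet;
//   if (!p) { /* drop the packet, signal backpressure, ... */ }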

Difference in implementation of malloc and new. Stack implementation?

While allocating memory, the new operator throws an exception if the memory is not available. On the other hand, malloc returns NULL. What is the reason for the difference in implementation? Also, on static memory allocation, i.e. on the stack, is there an exception if we run out of memory?
I have already gone through the link What is the difference between new/delete and malloc/free? but did not get my answer on the difference in implementation of the two.
The problem with C code is that you are supposed to check the return value of functions to make sure they worked correctly. But a lot of code was written that did not check the return value and, as a result, blew up very nicely when you least expected it.
In the worst-case scenario it does not even crash immediately but continues on, corrupting memory and crashing at some point miles downstream of the error.
Thus in C++ exceptions were born.
Now when there is an error the code does not continue (thus no memory corruption) but unwinds the stack (potentially forcing the application to quit). If you can handle the problem you have to explicitly add code to handle the situation before continuing. Thus you cannot accidentally forget to check the error condition; you either check it or the application will quit.
The use of new fits this design.
If you fail to allocate memory then you must explicitly handle the error.
There is no opportunity to forget to check for the NULL pointer. Thus you can't go and mess up memory by accidentally using a NULL pointer.
Also, on static memory allocation, i.e. on the stack, is there an exception if we run out of memory?
Unfortunately you cannot rely on this.
It is implementation-defined what happens on stack overflow. On a lot of systems it is not even possible to detect the situation, resulting in memory corruption and probably, ultimately, a crash.
Note
If you #include <new> then there is a no-throw version of new that you can use, which returns NULL when there is no memory left. It is best avoided unless you have some specialized need.
malloc cannot throw an exception, because that would break compatibility with C. new throws an exception because that is the preferred way in C++ to signal errors.
As far as I know, in early versions of C++, new did indeed return 0 on failure.
One important difference, I suppose, lies in the fact that :
malloc is the C way of allocating memory, and there were no exceptions in C;
new is the C++ (object-oriented and all) way, and there are exceptions in C++, and using them is cleaner.
Why keep malloc in C++? I suppose it's because a C++ compiler can also work with C code...
... But I've often heard (from teachers, while I was still at school, a couple of years ago) that using malloc in C++ is discouraged, and new should be used instead.
At the risk of perhaps adding some confusion...
malloc is a regular C function. Because it is C, it can only signal errors in ways that fit into a C program: using the return value, an argument passed by pointer, or a global variable (like errno).
new introduces a C++ expression: it calls operator new to obtain memory and then constructs the object. Either operator new or the constructor may throw.
Note: there is a no-throw version of the new expression.
Most operator new implementations are built in terms of malloc, but as I noted there is more to a new expression than simply getting memory, since it also builds the object(s).
It also takes care of managing the memory up until it releases it to you. That is, if the constructor throws, then it properly disposes of the memory that was allocated; and in the case of the new[] expression (which builds an array), it calls the destructors of those objects that were already built.
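To make that behaviour visible, a small self-contained sketch: logging replacements for the global operator new and operator delete show that when the constructor throws, the new expression frees the memory it obtained before the exception propagates (Widget is invented for the demonstration):
#include <cstdio>
#include <cstdlib>
#include <new>
#include <stdexcept>

void* operator new(std::size_t n) {
    std::printf("operator new(%zu)\n", n);
    if (void* p = std::malloc(n)) return p;
    throw std::bad_alloc();
}
void operator delete(void* p) noexcept {
    std::printf("operator delete\n");   // runs even though we never got a Widget*
    std::free(p);
}

struct Widget {
    Widget() { throw std::runtime_error("constructor failed"); }
};

int main() {
    try {
        Widget* w = new Widget;         // prints "operator new", then "operator delete"
        (void)w;
    } catch (const std::exception& e) {
        std::printf("caught: %s\n", e.what());
    }
}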
Regarding stack overflows: it depends on your compiler and your operating system. The OS might notice the issue and signal the error, the compiler might insert checks, etc.
Note that gcc introduced the split-stack option for compilation, which consists of allocating a minimal stack and then growing it on demand. This neatly sidesteps the issue of possible stack overflows, but introduces yet another binary-compatibility issue, since interaction with code that was not built with this option could get hazy; I don't know exactly how they plan to implement it.
For the sake of completeness, remember also that you can simulate the old (non-throwing) behaviour using nothrow -- this is especially suitable for performance-critical parts of your code:
// never throws
char* ptr = new (std::nothrow) char[1024 * 1024];
// check the pointer to see whether the allocation succeeded
if (!ptr) {
    ...  // handle error
}

nothrow or exception?

I am a student and have limited knowledge of C++, which I am trying to expand. This is more of a philosophical question; I am not trying to implement something.
Since
#include <new>
//...
T * t = new (std::nothrow) T();
if (t)
{
    //...
}
//...
will hide the exception, and since dealing with exceptions is heavier compared to a simple if(t), why isn't the normal new T() considered bad practice, considering we have to use try/catch to check whether a simple allocation succeeded (and if we don't, just watch the program die)?
What are the benefits (if any) of the normal new allocation compared to using a nothrow new? Is the exception's overhead in that case insignificant?
Also, assume that an allocation fails (e.g. no memory exists in the system). Is there anything the program can do in that situation, or just fail gracefully? There is no way to find free memory on the heap when all is reserved, is there?
In case an allocation fails and a std::bad_alloc is thrown, how can we assume that, since there is not enough memory to allocate an object (e.g. a new int), there will be enough memory to store an exception?
Thanks for your time. I hope the question is in line with the rules.
Since dealing with exceptions is heavier compared to a simple if(t), why isn't the normal new T() considered bad practice, considering we have to use try/catch to check whether a simple allocation succeeded (and if we don't, just watch the program die)? What are the benefits (if any) of the normal new allocation compared to using a nothrow new? Is the exception's overhead in that case insignificant?
The penalty for using exceptions is indeed very heavy, but (in a decently tuned implementation) the penalty is only paid when an exception is thrown - so the mainline case stays very fast, and there is unlikely to be any measurable performance difference between the two in your example.
The advantage of exceptions is that your code is simpler: if allocating several objects you don't have to do "allocate A; if (A) { allocate B; if (B) etc...". The cleanup and termination - in both the exception and mainline case - is best handled automatically by RAII (whereas if you're checking manually you will also have to free manually, which makes it all too easy to leak memory).
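A minimal sketch of that contrast, using std::make_unique (C++14) for the RAII side; A and B are placeholder types:
#include <memory>

struct A {};
struct B {};

void with_raii() {
    auto a = std::make_unique<A>();  // throws std::bad_alloc on failure
    auto b = std::make_unique<B>();  // if this throws, a is released automatically
    // ... use *a and *b ...
}                                    // both released on every exit path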
Also, assume that an allocation fails (e.g. no memory exists in the system). Is there anything the program can do in that situation, or just fail gracefully? There is no way to find free memory on the heap when all is reserved, is there?
There are many things that it can do, and the best thing to do will depend on the program being written. Failing and exiting (gracefully or otherwise) is certainly one option. Another is to reserve sufficient memory in advance, so that the program can carry on with its functions (perhaps with reduced functionality or performance). It may be able to free up some of its own memory (e.g. if it maintains caches that can be rebuilt when needed). Or (in the case of a server process), the server may refuse to process the current request (or refuse to accept new connections), but stay running so that clients don't drop their connections, and things can start working again once memory returns. Or in the case of an interactive/GUI application, it might display an error to the user and carry on (allowing them to fix the memory problem and try again - or at least save their work!).
In case an allocation fails and a std::bad_alloc is thrown, how can we assume that, since there is not enough memory to allocate an object (e.g. a new int), there will be enough memory to store an exception?
You don't have to assume: the standard library will usually ensure, typically by setting aside a small amount of memory in advance, that there is enough memory for an exception to be raised in the event that memory is exhausted.
Nothrow was added to C++ primarily to support embedded-systems developers who want to write exception-free code. It is also useful if you actually want to handle memory errors locally, as a better solution than malloc() followed by a placement new. And finally it is essential for those who wished to continue to use (what were then current) C++ programming styles based on checking for NULL. [I proposed this solution myself; it's one of the few things I proposed that didn't get downvoted :]
FYI: throwing an exception on out-of-memory is very design-sensitive and hard to implement, because if you, for example, were to throw a string, you might double-fault since the string does heap allocation. Indeed, if you're out of memory because your heap crashed into the stack, you mightn't even be able to create a temporary! This particular case explains why the standard exceptions are fairly restricted. It is also why, if you're catching such an exception fairly locally, you should catch by reference rather than by value (to avoid a possible copy causing a double fault).
Because of all this, nothrow provides a safer solution for critical applications.
I think that the rationale behind using the regular new instead of the nothrow new is connected to the reason why exceptions are usually preferred to explicitly checking the return value of each function. Not every function that needs to allocate memory necessarily knows what to do if no memory can be found. For example, a deeply nested function that allocates memory as a subroutine of some algorithm probably has no idea what the proper course of action is if memory can't be found. Using a version of new that throws an exception allows the code that calls the subroutine, not the subroutine itself, to take a more appropriate course of action. This could be as simple as doing nothing and watching the program die (which is perfectly fine if you're writing a small toy program), or signalling some higher-level program construct to start throwing away memory.
In regards to the latter half of your question, there actually could be things you could do if your program ran out of memory that would make memory more available. For example, you might have a part of your program that caches old data, and could tell the cache to evict everything as soon as resources became tight. You could potentially page some less-critical data out to disk, which probably has more space than your memory. There are a whole bunch of tricks like this, and by using exceptions it's possible to put all the emergency logic at the top of the program, and then just have every part of the program that does an allocation not catch the bad_alloc and instead let it propagate up to the top.
Finally, it usually is possible to throw an exception even if memory is scarce. Many C++ implementations reserve some space in the stack (or some other non-heap memory segment) for exceptions, so even if the heap runs out of space it can be possible to find memory for exceptions.
Hope this helps!
Avoiding exceptions because they're "too expensive" is premature optimisation. There is practically no overhead from a try/catch if an exception is not thrown.
Is there anything the program can do in that situation?
Not usually. If there's no memory in the system, you probably can't even write anything to a log, or print to stdout, or anything. If you're out of memory, you're pretty much screwed.
Running out of memory is expected to be a rare event, so the overhead of throwing an exception when it happens isn't a problem. Implementations can "pre-allocate" any memory that's needed for throwing a std::bad_alloc, to ensure that it's available even when the program has otherwise run out of memory.
The reason for throwing an exception by default, instead of returning null, is that it avoids the need for null checks after every allocation. Many programmers wouldn't bother doing that, and if the program were to continue with a null pointer after a failed allocation, it would probably just crash later with something like a segmentation fault, which doesn't indicate the real cause of the problem. The use of an exception means that if the OOM condition isn't handled, the program will immediately terminate with an error that actually indicates what went wrong, which makes debugging much easier.
It's also easier to write handling code for out-of-memory situations if they throw exceptions: instead of having to individually check the result of every allocation, you can put a catch block somewhere high in the call stack to catch OOM conditions from many places throughout the program.
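A minimal sketch of that pattern; run_application is a stand-in for the program's real work:
#include <iostream>
#include <new>
#include <vector>

int run_application() {
    std::vector<int> data(100'000'000);  // allocates freely; may throw std::bad_alloc
    // ... the rest of the program ...
    return 0;
}

int main() {
    try {
        return run_application();
    } catch (const std::bad_alloc&) {
        // One handler near the top of the call stack covers every allocation below.
        std::cerr << "out of memory\n";
        return 1;
    }
}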
In Symbian C++ it works the other way around. If you want an exception thrown on OOM you have to write
T* t = new(ELeave) T();
And you're right that the logic of throwing an exception on OOM is strange: a scenario that is manageable suddenly becomes a program termination.

Is it useful to test the return of "new" in C++?

I almost never see the result of new tested in C++, and I was wondering why.
Foo *f = new Foo;
// f is assumed to be allocated; why does nobody usually test the result of new?
As per the current standard, new never returns NULL; it throws std::bad_alloc instead. If you don't want new to throw (as per the old standard) but rather return NULL, you should use the (std::nothrow) form.
i.e.
Foo* foo = new (std::nothrow) Foo;
Of course, if you have a very old or possibly broken toolchain it might not follow the standard.
It all depends on your compiler: VC++ up to version 6 returns NULL if the new operator fails in a non-MFC application.
Now the problem gets bigger when you use, for example, the STL with VC++ 6: because the STL follows the standard, it will never test for NULL when it needs to get some memory, and guess what will happen under low-memory conditions...
So, for everybody who still uses VC++ 6 with the STL, check this article for a fix:
Don't Let Memory Allocation Failures Crash Your Legacy STL Application
new throws std::bad_alloc by default. If you use the default, checking for null is redundant, but handling exceptions is necessary. This default behavior is consistent with the C++ exception-safety paradigm: it ensures that an object is either constructed or not allocated at all.
If you override the default by using new (std::nothrow), checking for null is necessary. new both allocates and commits pages, so it is possible to run out of memory either because you ran out of page descriptors or because there's no physical memory available.
Research your OS's memory management: the C/C++ abstract machine does not know how your OS manages memory, so relying on the language alone is not safe. For an example of memory allocation gone bad, read up on C's malloc() and Linux overcommit.
It all depends on which version of C++ the code targets.
The C++ specification has for a long time now stated that, by default at least, failure in new will cause a C++ exception, so any code testing the result would be entirely redundant.
Most programming nowadays also targets virtual-memory operating systems where it's almost impossible to run out of memory, and an out-of-memory condition is so fatal anyway that just letting the application crash on the next NULL access is as good a way as any of terminating.
It's only really in embedded programming, where exception handling is deemed too much of an overhead and memory is very limited, that programmers bother to check for new failures.
As quoted here
"In compilers conforming to the ISO C++ standard, if there is not enough memory for the allocation, the code throws an exception of type std::bad_alloc. All subsequent code is aborted until the error is handled in a try-catch block or the program exits abnormally. The program does not need to check the value of the pointer; if no exception was thrown, the allocation succeeded."
Usually no one tests the return of new in new code because Visual Studio now throws the way the standard says.
In old code, if a hack was done to avoid throwing, then you'd still better test.