While allocating memory, the new operator throws an exception if the memory is not available. On the other hand, malloc returns NULL. What is the reason for the difference in implementation? Also, on static memory allocation, i.e. on the stack, is there an exception if we run out of memory?
I have already gone through the link What is the difference between new/delete and malloc/free? but did not find an answer there about the difference in implementation of the two.
The problem with C code is that you are supposed to check the return value of functions to make sure they worked correctly. But a lot of code was written that did not check the return value and, as a result, blew up very nicely when you least expected it.
In the worst-case scenario it does not even crash immediately, but continues on, corrupting memory and crashing at some point miles downstream of the error.
Thus in C++ exceptions were born.
Now when there is an error the code does not continue (thus no memory corruption) but unwinds the stack (potentially forcing the application to quit). If you can handle the problem, you have to explicitly add code to handle the situation before continuing. Thus you cannot accidentally forget to check the error condition; you either check it or the application will quit.
The use of new fits this design.
If you fail to allocate memory then you must explicitly handle the error.
There is no opportunity to forget to check for the NULL pointer. Thus you can't go and mess up memory by accidentally using a NULL pointer.
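A minimal sketch of the contrast (the unchecked malloc line shows exactly the kind of latent bug described above):
#include <cstdlib>

int main() {
    // C style: nothing forces you to check the result...
    int* p = static_cast<int*>(std::malloc(100 * sizeof(int)));
    p[0] = 1;  // if malloc returned NULL, this is silent undefined behaviour

    // C++ style: a failed new throws std::bad_alloc, so execution cannot
    // quietly continue with an invalid pointer.
    int* q = new int[100];
    q[0] = 1;  // only reached if the allocation succeeded

    delete[] q;
    std::free(p);
    return 0;
}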
Also, on static memory allocation, i.e. on the stack, is there an exception if we run out of memory?
Unfortunately you cannot rely on this.
It is implementation defined what happens on stack-overflow. On a lot of systems it is not even possible to detect the situation resulting in memory corruption and probably ultimately a crash.
Note: if you #include <new>, there is a no-throw version of new that returns NULL when there is no memory left. It is best avoided unless you have some specialized need.
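A minimal sketch of that form, assuming nothing beyond the standard header:
#include <new>

int main() {
    int* p = new (std::nothrow) int[256];  // nullptr on failure instead of throwing
    if (p == nullptr) {
        return 1;  // handle the failure explicitly
    }
    delete[] p;
    return 0;
}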
malloc cannot throw an exception, because that would break compatibility with C. new throws an exception because that is the preferred way in C++ to signal errors. As far as I know, in early versions of C++, new did indeed return 0 on failure.
One important difference, I suppose, lies in the fact that:
malloc is the C way of allocating memory; there were no exceptions in C.
new is the C++ way, object-oriented and all; there are exceptions in C++, and using them is cleaner.
Why keep malloc in C++? I suppose it's because a C++ compiler can also work with C code...
... But I've often heard (from teachers, while I was still at school, a couple of years ago) that using malloc in C++ is discouraged, and new should be used instead.
At the risk of perhaps adding some confusion...
malloc is a regular C function. Because it is C, it can only signal errors in ways that fit into a C program: using the return value, an argument passed by pointer, or a global variable (like errno).
new is a C++ expression: it calls operator new to obtain memory and then constructs the object in it. Either operator new or the constructor may throw.
Note: there is a no-throw version of the new expression.
Most operator new implementations are built in terms of malloc, but as I noted there is more to a new expression than simply getting memory, since it also builds the object(s).
It also takes care of managing that memory up until it releases it to you. That is, if the constructor throws, then it properly disposes of the memory that was allocated; and in the case of the new[] expression (which builds an array), it calls the destructors of those objects that were already built.
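As an illustration of that decomposition, here is a deliberately simplified replacement of the global operator new built on malloc (it ignores the new-handler loop and zero-size requests, so it is a sketch rather than a conforming implementation):
#include <cstdio>
#include <cstdlib>
#include <new>

// Simplified global allocation functions: get raw memory, or report failure.
void* operator new(std::size_t size) {
    if (void* p = std::malloc(size))
        return p;
    throw std::bad_alloc();  // the allocation step may throw...
}

void operator delete(void* p) noexcept { std::free(p); }

struct Widget {
    Widget() { std::puts("constructor runs after memory was obtained"); }  // ...and so may the constructor
};

int main() {
    Widget* w = new Widget;  // the new expression: operator new, then Widget::Widget()
    delete w;                // the delete expression: ~Widget(), then operator delete
    return 0;
}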
Regarding stack overflows: it depends on your compiler and your operating system. The OS might notice the issue and signal the error, the compiler might insert checks, etc.
Note that gcc introduced the split-stack option for compilation, which consists of allocating a minimal stack and then growing it on demand. This neatly sidesteps the issue of possible stack overflows, but introduces yet another binary-compatibility issue, since interaction with code that was not built with this option could get hazy; I don't know exactly how they plan on implementing this.
For the sake of completeness, remember also that you can simulate the old (non-throwing) behaviour using nothrow -- this is especially suitable for performance-critical parts of your code:
// never throws
char* ptr = new (std::nothrow) char [1024*1024];
// check the pointer to see whether the allocation succeeded
if ( !ptr ) {
    // ... handle error
}
According to the C++ reference, you can new an object by:
MyClass * p1 = new MyClass;
or by
MyClass * p2 = new (std::nothrow) MyClass;
The second one will return a null pointer instead of throwing an exception.
However, I hardly see this version in my experience.
For example, Google does not recommend using exceptions in their code, but as far as I can see they are not using the nothrow version in Chromium either.
Is there any reason that we prefer the default one over the nothrow one? Even in a project that is not using exceptions?
-- EDIT --
Follow-up question: should I check the return value of malloc()?
It looks like, on the contrary, many people advise checking the return value of malloc; some say this is because:
many allocation failures have nothing to do with being out of memory. Fragmentation can cause an allocation to fail because there's not enough contiguous space available even though there's plenty of memory free.
Is this true? Why do we treat malloc() and new differently in this case?
However, I hardly see this version in my experience.
You would use it (or, equivalently, catch the exception from the default version) if you can handle the failure locally; perhaps by requesting to free some other memory and then retrying, or by trying to allocate something smaller, or using an alternative algorithm that doesn't need extra memory.
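For instance, a sketch of the "try something smaller" strategy (illustrative only):
#include <cstddef>
#include <new>

// Ask for `size` bytes, halving the request until something fits.
// On success, `size` holds the size actually obtained.
char* allocate_shrinking(std::size_t& size) {
    while (size > 0) {
        if (char* p = new (std::nothrow) char[size])
            return p;   // success at the current size
        size /= 2;      // shrink the request and retry
    }
    return nullptr;     // even the smallest request failed
}

int main() {
    std::size_t size = 1 << 20;
    char* buf = allocate_shrinking(size);
    if (!buf) return 1;
    delete[] buf;
    return 0;
}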
Is there any reason that we prefer the default one over the nothrow one?
The general principle of exceptions: if you can't handle it locally, then there's no point in checking locally. Unlike return values, exceptions can't be ignored, so there's no possibility of ploughing on regardless and using a null pointer.
Even in a project that is not using exceptions?
Often, an out-of-memory condition can't be handled at all. In that case, terminating the program is probably the best response; and that is the default response to an unhandled exception. So, even if you're not using exceptions, the default new is probably the best option in most situations.
should I check the return value of malloc()?
Yes: that's the only way to check whether it succeeded. If you don't, then you could end up using a null pointer, giving undefined behaviour: often a crash, but perhaps data corruption or other bizarre behaviour and long debugging sessions to (hopefully) figure out what went wrong.
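A minimal sketch of the check:
#include <cstdio>
#include <cstdlib>

int main() {
    int* buf = static_cast<int*>(std::malloc(1024 * sizeof(int)));
    if (buf == NULL) {                       // the only way to detect failure
        std::fputs("allocation failed\n", stderr);
        return 1;                            // handle it; never dereference NULL
    }
    buf[0] = 42;
    std::free(buf);
    return 0;
}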
Why do we treat malloc() and new differently in this case?
Because malloc forces us to check the return value, while new gives us the option of less intrusive error handling.
If you use the throwing version you don't need to test the result of every new call to see if it succeeded or failed. Typically speaking, in many/most applications, if your allocation fails you can't do much other than exit/abort, which the exception does for you automatically if you don't explicitly try/catch.
If you use the nothrow version you might wind up propagating a null pointer through your application and crashing/exiting MUCH later on at a point apparently totally unrelated to memory allocation, making debugging much harder.
I did some research after learning that new, unlike the malloc() I am used to, does not return NULL for failed allocations, and found there are two distinct ways of checking whether new has succeeded or not. Those two ways are:
try
{
    ptr = new int[1024];
}
catch (std::bad_alloc& exc)
{
    assert(false);  // allocation failed
}
and
ptr = new (std::nothrow) int[1024];
if (ptr == NULL)
    assert(false);  // allocation failed
I believe the two ways accomplish the same goal (correct me if I am wrong, of course!), so my question is this:
which is the better option for checking whether new succeeded, based entirely on readability, maintainability, and performance, while disregarding de facto C++ programming convention?
Consider what you are doing. You're allocating memory. And if for some reason memory allocation cannot work, you assert. Which is more or less exactly what will happen if you just let the std::bad_alloc propagate back to main. In a release build, where assert is a no-op, your program will crash when it tries to access the memory. So it's the same as letting the exception bubble up: halting the app.
So ask yourself a question: Do you really need to care what happens if you run out of memory? If all you're doing is asserting, then the exception method is better, because it doesn't clutter your code with random asserts. You just let the exception fall back to main.
If you do in fact have a special codepath in the event that you cannot allocate memory (that is, you can actually continue to function), exceptions may or may not be a way to go, depending on what the codepath is. If the codepath is just a switch set by having a pointer be null, then the nothrow version will be simpler. If instead, you need to do something rather different (pull from a static buffer, or delete some stuff, or whatever), then catching std::bad_alloc is quite good.
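A sketch of the null-pointer-switch variant (the fallback_buffer and its policy are hypothetical, purely for illustration):
#include <cstddef>
#include <new>

static char fallback_buffer[4096];  // hypothetical emergency storage

// Return heap memory if available, else degrade to the static buffer.
char* get_buffer(std::size_t n) {
    char* p = new (std::nothrow) char[n];
    if (!p && n <= sizeof(fallback_buffer))
        p = fallback_buffer;  // the "switch" is simply the pointer's origin
    return p;  // caller must not delete[] the fallback, only heap pointers
}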
It depends on the context of where the allocation is taking place. If your program can continue even if the allocation fails (maybe return an error code to the caller) then use the std::nothrow method and check for NULL. Otherwise you'd be using exceptions for control flow, which is not good practice.
On the other hand, if your program absolutely needs to have that memory allocated successfully in order to be able to function, use try-catch to catch (not necessarily in the immediate vicinity of the new) an exception and exit gracefully from the program.
From a pure performance perspective it matters little. There is inherent overhead with exception handling, though this overhead is generally worth the trade-off in application readability and maintenance. Memory allocation failures of this nature should not be part of the common (99%) case of your application, so they will happen infrequently.
From a performance perspective you generally want to avoid the standard allocator due to its relatively poor performance anyway.
All this said, I generally accept the exception-throwing version, because generally our applications are in a state where, if memory allocation fails, there is little we can do other than exit gracefully with an appropriate error message; and we save performance by not requiring NULL checks on our newly allocated resources, because by definition an allocation failure will transfer control out of the scope where that matters.
new is used to create objects, not to allocate memory; therefore your example is somewhat artificial.
Object constructors typically throw if they fail. Having stepped through the new implementation in Visual Studio more than a few times, I don't believe that the code catches any exceptions. It therefore generally makes sense to look for exceptions when creating objects.
I believe std::bad_alloc is thrown only if the memory-allocation part fails. If you pass std::nothrow to new and the object constructor throws, the constructor's exception still propagates; std::nothrow only suppresses the bad_alloc from the allocation step itself.
The difference in performance between the 2 approaches is probably irrelevant since most of the processor time may easily be spent in the object constructor or searching the heap.
A rule-of-thumb is not always appropriate. For example, real-time systems typically restrict dynamic memory allocations, so new, if present, would probably be overloaded. In that case it might make use of a returned null pointer and handle failure locally.
I know free() won't call the destructor, but what else will this cause, besides the member variables not being destroyed properly?
Also, what if we delete a pointer that is allocated by malloc?
It is implementation defined whether new uses malloc under the hood. Mixing new with free and malloc with delete could cause a catastrophic failure at runtime if the code was ported to a new machine, a new compiler, or even a new version of the same compiler.
I know free() won't call the destructor
And that is reason enough not to do it.
In addition, there's no requirement for a C++ implementation to even use the same memory areas for malloc and new so it may be that you're trying to free memory from a totally different arena, something which will almost certainly be fatal.
Many points:
It's undefined behaviour, and hence inherently risky and subject to change or breakage at any time and for no reason at all.
(As you know) delete calls the destructor and free doesn't... you may have some POD type and not care, but it's easy for someone else to add, say, a string to that type without realising there are weird limitations on its content.
If you malloc raw memory and forget to use placement new to construct an object in it, then invoke a member function as if the object existed (including delete, which calls the destructor), the member function may attempt operations using pointers with garbage values (see the placement-new sketch after this list).
new and malloc may get memory from different heaps.
Even if new calls malloc to get its memory, there may not be a 1:1 correspondence between the new/delete and underlying malloc/free behaviour.
e.g. new may have extra logic such as small-object optimisations that have proven beneficial to typical C++ programs but harmful to typical C programs.
Someone may overload new, or link in a debug version of malloc/realloc/free, either of which could break if you're not using the functions properly.
Tools like Valgrind, Purify and Insure won't be able to differentiate between the deliberately dubious and the accidentally dubious.
In the case of arrays, delete[] invokes all the destructors and free() won't; moreover, the heap memory typically holds a counter of the array size (for 32-bit VC++ 2005 Release builds, for example, the array size is in the 4 bytes immediately before the pointer value visibly returned by new[]). This extra value may or may not be there for POD types (it is not for VC++ 2005), but if it is, free() certainly won't expect it. Not all heap implementations allow you to free a pointer that's been shifted from the value returned by malloc().
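For contrast, the correct pairing when you do start from malloc'd raw memory is placement new plus an explicit destructor call (a sketch, not a recommendation):
#include <cstdlib>
#include <new>
#include <string>

struct Record {
    std::string name;  // non-trivial member: needs real construction/destruction
};

int main() {
    void* raw = std::malloc(sizeof(Record));  // raw bytes, not yet an object
    if (!raw) return 1;

    Record* r = new (raw) Record{"hello"};    // construct the object in place

    r->~Record();                             // destroy it explicitly...
    std::free(raw);                           // ...then release the raw storage
    return 0;
}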
An important difference is that new and delete also call the constructor and destructor of the object. Thus, you may get unexpected behavior. That is the most important thing, I think.
Because it might not be the same allocator, which could lead to weird, unpredictable behaviour. Plus, you shouldn't be using malloc/free at all in C++, and you should avoid using new/delete where it's not necessary.
It totally depends on the implementation -- it's possible to write an implementation where this actually works fine. But there's no guarantee that the pool of memory new allocates from is the same pool that free() wants to return the memory to. Imagine that both malloc() and new use a few bytes of extra memory at the beginning of each allocated block to specify how large the block is. Further, imagine that malloc() and new use different formats for this info -- for example, malloc() uses the number of bytes, but new uses the number of 4-byte long words (just an example). Now, if you allocate with malloc() and free with delete, the info delete expects won't be valid, and you'll end up with a corrupted heap.
I am a student and I have little knowledge of C++, which I am trying to expand. This is more of a philosophical question; I am not trying to implement something.
Since
#include <new>
// ...
T* t = new (std::nothrow) T();
if (t)
{
    // ...
}
// ...
will hide the exception, and since dealing with exceptions is heavier compared to a simple if(t), why isn't the normal new T() considered bad practice, given that we have to use try/catch to check whether a simple allocation succeeded (and if we don't, just watch the program die)??
What are the benefits (if any) of the normal new allocation compared to using a nothrow new? Is the exception's overhead in that case insignificant?
Also, assume that an allocation fails (e.g. no memory is left in the system). Is there anything the program can do in that situation, or does it just fail gracefully? There is no way to find free memory on the heap when all of it is reserved, is there?
In case an allocation fails and a std::bad_alloc is thrown, how can we assume that, since there is not enough memory to allocate an object (e.g. a new int), there will be enough memory to store an exception??
Thanks for your time. I hope the question is in line with the rules.
Since dealing with exceptions is heavier compared to a simple if(t), why isn't the normal new T() considered bad practice, given that we have to use try/catch to check whether a simple allocation succeeded (and if we don't, just watch the program die)??
What are the benefits (if any) of the normal new allocation compared to using a nothrow new? Is the exception's overhead in that case insignificant?
The penalty for using exceptions is indeed very heavy, but (in a decently tuned implementation) the penalty is only paid when an exception is thrown - so the mainline case stays very fast, and there is unlikely to be any measurable performance difference between the two in your example.
The advantage of exceptions is that your code is simpler: if allocating several objects you don't have to write "allocate A; if (A) { allocate B; if (B) etc...". The cleanup and termination - in both the exception and mainline case - is best handled automatically by RAII (whereas if you're checking manually you will also have to free manually, which makes it all too easy to leak memory).
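A sketch of the contrast, assuming two hypothetical types A and B and RAII via std::unique_ptr:
#include <memory>
#include <new>

struct A {};  // hypothetical
struct B {};  // hypothetical

// With nothrow new, every allocation needs a manual check and manual cleanup:
bool setup_checked() {
    A* a = new (std::nothrow) A;
    if (!a) return false;
    B* b = new (std::nothrow) B;
    if (!b) { delete a; return false; }  // cleanup that is easy to forget
    // ... use a and b ...
    delete b;
    delete a;
    return true;
}

// With throwing new and RAII, cleanup on failure is automatic:
void setup_raii() {
    auto a = std::make_unique<A>();  // throws std::bad_alloc on failure
    auto b = std::make_unique<B>();  // if this throws, a is freed automatically
    // ... use a and b ...
}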
Also, assume that an allocation fails (e.g. no memory is left in the system). Is there anything the program can do in that situation, or does it just fail gracefully? There is no way to find free memory on the heap when all of it is reserved, is there?
There are many things that it can do, and the best thing to do will depend on the program being written. Failing and exiting (gracefully or otherwise) is certainly one option. Another is to reserve sufficient memory in advance, so that the program can carry on with its functions (perhaps with reduced functionality or performance). It may be able to free up some of its own memory (e.g. if it maintains caches that can be rebuilt when needed). Or (in the case of a server process), the server may refuse to process the current request (or refuse to accept new connections), but stay running so that clients don't drop their connections, and things can start working again once memory returns. Or in the case of an interactive/GUI application, it might display an error to the user and carry on (allowing them to fix the memory problem and try again - or at least save their work!).
In case an allocation fails and a std::bad_alloc is thrown, how can we assume that, since there is not enough memory to allocate an object (e.g. a new int), there will be enough memory to store an exception??
Usually the standard library will ensure, typically by allocating a small amount of memory in advance, that there will be enough memory for an exception to be raised in the event that memory is exhausted.
Nothrow was added to C++ primarily to support embedded-systems developers who want to write exception-free code. It is also useful if you actually want to handle memory errors locally, as a better solution than malloc() followed by a placement new. And finally it is essential for those who wished to continue to use (what were then current) C++ programming styles based on checking for NULL. [I proposed this solution myself; it's one of the few things I proposed that didn't get downvoted :]
FYI: throwing an exception on out-of-memory is very design-sensitive and hard to implement, because if you, for example, were to throw a string, you might double-fault, since the string does heap allocation. Indeed, if you're out of memory because your heap crashed into the stack, you might not even be able to create a temporary! This particular case explains why the standard exceptions are fairly restricted. It is also why, if you're catching such an exception fairly locally, you should catch by reference rather than by value (to avoid a possible copy causing a double fault).
Because of all this, nothrow provides a safer solution for critical applications.
I think that the rationale behind why you'd use the regular new instead of the nothrow new is connected to the reason why exceptions are usually preferred to explicitly checking the return value of each function. Not every function that needs to allocate memory necessarily knows what to do if no memory can be found. For example, a deeply nested function that allocates memory as a subroutine of some algorithm probably has no idea what the proper course of action is if memory can't be found. Using a version of new that throws an exception allows the code that calls the subroutine, not the subroutine itself, to take a more appropriate course of action. This could be as simple as doing nothing and watching the program die (which is perfectly fine if you're writing a small toy program), or signalling some higher-level program construct to start throwing away memory.
In regards to the latter half of your question, there actually could be things you could do if your program ran out of memory that would make memory more available. For example, you might have a part of your program that caches old data, and could tell the cache to evict everything as soon as resources became tight. You could potentially page some less-critical data out to disk, which probably has more space than your memory. There are a whole bunch of tricks like this, and by using exceptions it's possible to put all the emergency logic at the top of the program, and then just have every part of the program that does an allocation not catch the bad_alloc and instead let it propagate up to the top.
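A sketch of that structure (the cache and its eviction policy are hypothetical):
#include <cstdio>
#include <new>
#include <vector>

static std::vector<char> g_cache;  // hypothetical cache of rebuildable data

void evict_caches() {
    g_cache.clear();
    g_cache.shrink_to_fit();  // actually return the memory
}

void run_algorithm() {
    std::vector<char> work(1024u * 1024u);  // may throw std::bad_alloc
    // ... deeply nested code allocates freely, never checking locally ...
}

int main() {
    for (int attempt = 0; attempt < 2; ++attempt) {
        try {
            run_algorithm();
            return 0;
        } catch (const std::bad_alloc&) {
            std::fputs("out of memory; evicting caches and retrying\n", stderr);
            evict_caches();  // emergency logic sits at the top of the program
        }
    }
    return 1;
}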
Finally, it usually is possible to throw an exception even if memory is scarce. Many C++ implementations reserve some space in the stack (or some other non-heap memory segment) for exceptions, so even if the heap runs out of space it can be possible to find memory for exceptions.
Hope this helps!
Going around exceptions because they're "too expensive" is premature optimisation. There is practically no overhead of a try/catch if an exception is not thrown.
Is there anything the program can do in that situation
Not usually. If there's no memory in the system, you probably can't even write anything to a log, or print to stdout, or anything. If you're out of memory, you're pretty much screwed.
Running out of memory is expected to be a rare event, so the overhead of throwing an exception when it happens isn't a problem. Implementations can "pre-allocate" any memory that's needed for throwing a std::bad_alloc, to ensure that it's available even when the program has otherwise run out of memory.
The reason for throwing an exception by default, instead of returning null, is that it avoids the need for null checks after every allocation. Many programmers wouldn't bother doing that, and if the program were to continue with a null pointer after a failed allocation, it would probably just crash later with something like a segmentation fault, which doesn't indicate the real cause of the problem. The use of an exception means that if the OOM condition isn't handled, the program will immediately terminate with an error that actually indicates what went wrong, which makes debugging much easier.
It's also easier to write handling code for out-of-memory situations if they throw exceptions: instead of having to individually check the result of every allocation, you can put a catch block somewhere high in the call stack to catch OOM conditions from many places throughout the program.
In Symbian C++ it works the other way around. If you want an exception thrown when OOM you have to do
T* t = new(ELeave) T();
And you're right that the logic of throwing an exception on OOM is strange: a scenario that is manageable suddenly becomes a program termination.
I almost never see a test of new's result in C++ and I was wondering why.
Foo *f = new Foo;
// f is assumed to be allocated; why does almost nobody test the result of new?
As per the current standard, new never returns NULL; it throws std::bad_alloc instead. If you don't want new to throw (as per the old standard) but rather return NULL, you should use the form new (std::nothrow).
i.e.
Foo* foo = new (std::nothrow) Foo;
Of course, if you have a very old or possibly broken toolchain it might not follow the standard.
It all depends on your compiler: VC++ up to version 6 returns NULL if the new operator fails, in a non-MFC application.
Now the problem gets bigger when you use, for example, the STL with VC++ 6: because the STL follows the standard, it will never test for NULL when it needs to get some memory, and guess what will happen under low-memory conditions....
So for everybody that still uses VC++ 6 and STL check this article for a Fix.
Don't Let Memory Allocation Failures Crash Your Legacy STL Application
new throws std::bad_alloc by default. If you use the default, checking for null is pointless, but handling the exception is necessary. This default behavior is consistent with the C++ exception-safety paradigm: it usually guarantees that an object is either constructed or not allocated at all.
If you override the default by using new (std::nothrow), checking for null is necessary. new both allocates and commits pages, so it is possible to run out of memory either because you ran out of page descriptors or because there's no physical memory available.
Research your OS's memory management. The C/C++ abstract machine does not know how your OS manages memory, so relying on the language alone is not safe. For an example of memory allocation gone bad, read up on C malloc() + Linux overcommit.
It all depends on which version of C++ the code targets.
The C++ specification has stated for a long time now that, by default at least, a failure in new causes a C++ exception, so any code performing a test would be entirely redundant.
Most programming nowadays also targets virtual-memory operating systems where it's almost impossible to run out of memory, AND an out-of-memory condition is so fatal anyway that just letting the application crash on the next NULL access is as good a way as any of terminating.
It's really only in embedded programming, where exception handling is deemed too much of an overhead and memory is very limited, that programmers bother to check for new failures.
As quoted here
"In compilers conforming to the ISO C++ standard, if there is not enough memory for the allocation, the code throws an exception of type std::bad_alloc. All subsequent code is aborted until the error is handled in a try-catch block or the program exits abnormally. The program does not need to check the value of the pointer; if no exception was thrown, the allocation succeeded."
Usually no one tests the return of new in new code because Visual Studio now throws the way the standard says.
In old code, if a hack has been done to avoid throwing, then you'd still better test.