There is a method called foo that sometimes returns the following error:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Abort
Is there a way that I can use a try-catch block to stop this error from terminating my program (all I want to do is return -1)?
If so, what is the syntax for it?
How else can I deal with bad_alloc in C++?
In general you cannot, and should not try, to respond to this error. bad_alloc indicates that a resource cannot be allocated because not enough memory is available. In most scenarios your program cannot hope to cope with that, and terminating soon is the only meaningful behaviour.
Worse, modern operating systems often overcommit memory: on such systems, malloc and new can return a valid pointer even if there is not enough free memory left – std::bad_alloc will never be thrown, or is at least not a reliable sign of memory exhaustion. Instead, attempts to access the allocated memory will then result in a segmentation fault, which is not catchable (you can handle the segmentation fault signal, but you cannot resume the program afterwards).
The only thing you could do when catching std::bad_alloc is to perhaps log the error, and try to ensure a safe program termination by freeing outstanding resources (but this is done automatically in the normal course of stack unwinding after the error gets thrown if the program uses RAII appropriately).
In certain cases, the program may attempt to free some memory and try again, or use secondary memory (= disk) instead of RAM but these opportunities only exist in very specific scenarios with strict conditions:
The application must ensure that it runs on a system that does not overcommit memory, i.e. it signals failure upon allocation rather than later.
The application must be able to free memory immediately, without any further accidental allocations in the meantime.
It’s exceedingly rare that applications have control over point 1 — userspace applications never do, it’s a system-wide setting that requires root permissions to change.¹
OK, so let’s assume you’ve fixed point 1. What you can now do is for instance use an LRU cache for some of your data (probably some particularly large business objects that can be regenerated or reloaded on demand). Next, you need to put the actual logic that may fail into a function that supports retry — in other words, if it gets aborted, you can just relaunch it:
lru_cache<widget> widget_cache;

double perform_operation(int widget_id) {
    std::optional<widget> maybe_widget = widget_cache.find_by_id(widget_id);
    if (not maybe_widget) {
        maybe_widget = widget_cache.store(widget_id, load_widget_from_disk(widget_id));
    }
    return maybe_widget->frobnicate();
}

…

for (int num_attempts = 0; num_attempts < MAX_NUM_ATTEMPTS; ++num_attempts) {
    try {
        return perform_operation(widget_id);
    } catch (std::bad_alloc const&) {
        if (widget_cache.empty()) throw; // memory error elsewhere.
        widget_cache.remove_oldest();
    }
}

// Handle too many failed attempts here.
But even here, using std::set_new_handler instead of handling std::bad_alloc provides the same benefit and would be much simpler.
¹ If you’re creating an application that does control point 1, and you’re reading this answer, please shoot me an email, I’m genuinely curious about your circumstances.
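To make the set_new_handler alternative mentioned above concrete, here is a minimal sketch; cache_stub merely stands in for the hypothetical widget_cache so the snippet compiles on its own, and the eviction policy is only an assumption:

#include <deque>
#include <new>

// Stand-in for the widget_cache above, just enough to compile the sketch.
struct cache_stub {
    std::deque<int> items;
    bool empty() const { return items.empty(); }
    void remove_oldest() { items.pop_front(); }
} widget_cache;

void relieve_memory_pressure() {
    // Called by operator new in a loop until a retried allocation succeeds
    // or we give up below.
    if (widget_cache.empty()) {
        std::set_new_handler(nullptr);  // nothing left to evict: let new throw std::bad_alloc
        return;
    }
    widget_cache.remove_oldest();       // free something and let operator new retry
}

int main() {
    std::set_new_handler(relieve_memory_pressure);
    // ... run the application; no per-allocation try/catch is needed ...
}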
You can catch it like any other exception:
try {
    foo();
}
catch (const std::bad_alloc&) {
    return -1;
}
Quite what you can usefully do from this point is up to you, but it's definitely feasible technically.
What is the C++ Standard-specified behavior of new?
The usual notion is that if the new operator cannot allocate dynamic memory of the requested size, then it should throw an exception of type std::bad_alloc.
However, something more happens even before a bad_alloc exception is thrown:
C++03 Section 3.7.4.1.3 says:
An allocation function that fails to allocate storage can invoke the currently installed new_handler (18.4.2.2), if any. [Note: A program-supplied allocation function can obtain the address of the currently installed new_handler using the set_new_handler function (18.4.2.3).] If an allocation function declared with an empty exception-specification (15.4), throw(), fails to allocate storage, it shall return a null pointer. Any other allocation function that fails to allocate storage shall only indicate failure by throwing an exception of class std::bad_alloc (18.4.2.1) or a class derived from std::bad_alloc.
Consider the following code sample:
#include <iostream>
#include <cstdlib>
#include <new>     // std::set_new_handler

// function to call if operator new can't allocate enough memory or an error arises
void outOfMemHandler()
{
    std::cerr << "Unable to satisfy request for memory\n";
    std::abort();
}

int main()
{
    // set the new_handler
    std::set_new_handler(outOfMemHandler);

    // Request a huge memory size, that will cause ::operator new to fail
    int *pBigDataArray = new int[100000000L];

    return 0;
}
In the above example, operator new (most likely) will be unable to allocate space for 100,000,000 integers, and the function outOfMemHandler() will be called, and the program will abort after issuing an error message.
As seen here, the default behavior of the new operator when it is unable to fulfill a memory request is to call the new-handler function repeatedly until it can find enough memory or there are no more new-handlers. In the above example, unless we call std::abort(), outOfMemHandler() would be called repeatedly. Therefore, the handler should either ensure that the next allocation succeeds, register another handler, register no handler, or not return (i.e. terminate the program). If there is no new-handler and the allocation fails, the operator will throw an exception.
What are new_handler and set_new_handler?
new_handler is a typedef for a pointer to a function that takes and returns nothing, and set_new_handler is a function that takes and returns a new_handler.
Something like:
typedef void (*new_handler)();
new_handler set_new_handler(new_handler p) throw();
set_new_handler's parameter is a pointer to the function operator new should call if it can't allocate the requested memory. Its return value is a pointer to the previously registered handler function, or null if there was no previous handler.
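As a small illustration (the handler body here is just an assumption), the return value lets you save and later restore the previous handler:

#include <iostream>
#include <new>

void my_handler() {
    std::cerr << "allocation failed\n";
    std::set_new_handler(nullptr);  // give up: the next failed attempt throws std::bad_alloc
}

int main() {
    // set_new_handler returns the previously installed handler (null if none),
    // so it can be saved and restored later.
    std::new_handler previous = std::set_new_handler(my_handler);

    // ... allocations here will invoke my_handler on failure ...

    std::set_new_handler(previous);  // put the old handler back
}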
How to handle out of memory conditions in C++?
Given the behavior of new, a well designed user program should handle out of memory conditions by providing a proper new_handler which does one of the following:
Make more memory available: This may allow the next memory allocation attempt inside operator new's loop to succeed. One way to implement this is to allocate a large block of memory at program start-up, then release it for use in the program the first time the new-handler is invoked (see the sketch after this list).
Install a different new-handler: If the current new-handler can't make any more memory available, and if there is another new-handler that can, then the current new-handler can install the other new-handler in its place (by calling set_new_handler). The next time operator new calls the new-handler function, it will get the one most recently installed.
(A variation on this theme is for a new-handler to modify its own behavior, so the next time it's invoked, it does something different. One way to achieve this is to have the new-handler modify static, namespace-specific, or global data that affects the new-handler's behavior.)
Uninstall the new-handler: This is done by passing a null pointer to set_new_handler. With no new-handler installed, operator new will throw an exception ((convertible to) std::bad_alloc) when memory allocation is unsuccessful.
Throw an exception convertible to std::bad_alloc. Such exceptions will not be caught by operator new, but will propagate to the site originating the request for memory.
Not return: By calling abort or exit.
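As a rough sketch of combining the first and third options above (the reserve size and all names are illustrative assumptions, not a recommendation):

#include <cstddef>
#include <new>

namespace {
    char* reserve = nullptr;                                // "rainy day" block
    constexpr std::size_t reserve_size = 16 * 1024 * 1024;  // illustrative size
}

void out_of_memory_handler() {
    if (reserve) {
        delete[] reserve;               // make more memory available and let operator new retry
        reserve = nullptr;
    } else {
        std::set_new_handler(nullptr);  // uninstall: the next failure throws std::bad_alloc
    }
}

int main() {
    reserve = new char[reserve_size];
    std::set_new_handler(out_of_memory_handler);
    // ... rest of the program ...
}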
I would not suggest this, since bad_alloc means you are out of memory. It would be best to just give up instead of attempting to recover. However, here is the solution you are asking for:
try {
    foo();
} catch (const std::bad_alloc& e) {
    return -1;
}
I may suggest a simpler (and even faster) solution for this: the nothrow form of new returns null if the memory could not be allocated.
int fv() {
    T* p = new (std::nothrow) T[1000000];
    if (!p) return -1;
    do_something(p);
    delete[] p;   // array form of delete to match new[]
    return 0;
}
I hope this could help!
Let your foo program exit in a controlled way:
#include <stdlib.h> /* exit, EXIT_FAILURE */

try {
    foo();
} catch (const std::bad_alloc&) {
    exit(EXIT_FAILURE);
}
Then write a shell program that calls the actual program. Since the address spaces are separated, the state of your shell program is always well-defined.
Of course you can catch a bad_alloc, but I think the better question is how you can stop a bad_alloc from happening in the first place.
Generally, bad_alloc means that something went wrong in an allocation of memory - for example when you are out of memory. If your program is 32-bit, then this already happens when you try to allocate >4 GB. This happened to me once when I copied a C-string to a QString. The C-string wasn't '\0'-terminated which caused the strlen function to return a value in the billions. So then it attempted to allocate several GB of RAM, which caused the bad_alloc.
I have also seen bad_alloc when I accidentally accessed an uninitialized variable in the initializer-list of a constructor. I had a class foo with a member T bar. In the constructor I wanted to initialize the member with a value from a parameter:
foo::foo(T baz) // <-- mistyped: baz instead of bar
    : bar(bar)
{
}
Because I had mistyped the parameter, the constructor initialized bar with itself (so it read an uninitialized value!) instead of the parameter.
valgrind can be very helpful with such errors!
Related
When I learnt C99 I was told to always check the return value of malloc to see whether it succeeded or failed, but now I have started learning C++ and I was told that there is no need to do this with the keyword new, and that you can suppose it will always work for you.
But why is that?
new can still fail and throw a std::bad_alloc exception; your program may check whether it did, or simply let the exception propagate up. There is also a flag you can pass to new to make it act like malloc and return NULL on error. Take a look at the documentation.
Edit
Here are two examples:
try {
    char* arr = new char[20];
} catch (std::bad_alloc& e) {
    // Handle error
}
Or using the nothrow flag, making new act like malloc:
char* arr = new (std::nothrow) char[20];
if (!arr) {
    // Handle error
}
Any dynamic allocation can fail.
malloc signals this by returning NULL. Unless you explicitly check its return value, the program will continue even though malloc failed, most likely resulting in undefined behavior when you try to access memory via the pointer returned by malloc (which is NULL). This is why you should always check for malloc failure.
new, on the other hand, signals this by throwing a std::bad_alloc exception (the default behavior). If you don't catch the exception, it will bubble up to the top and terminate your program. This is desired, so you don't need to do anything.
Also please note that in C++ you should never explicitly call new/delete. Use standard containers like std::vector or smart pointers.
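For instance, a minimal sketch of that style (the sizes are arbitrary):

#include <iostream>
#include <memory>
#include <new>
#include <vector>

int main() {
    try {
        std::vector<int> values(1000000);             // owns and frees its own buffer
        auto buffer = std::make_unique<char[]>(4096); // freed automatically as well
        // ... use values and buffer ...
    } catch (const std::bad_alloc&) {
        std::cerr << "allocation failed\n";
        return -1;
    }
}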
new allocates memory and calls a constructor for object initialization: if it fails, it throws a std::bad_alloc exception. malloc allocates memory and does not call a constructor: if its allocation fails, it returns a null pointer, so you have to check what you get from it.
However, in C++ you cannot assume that new will always work: you can assume that if it doesn't work, it throws an exception.
There are several ways to check for allocation failure:
you could check the returned pointer if you use the nothrow version
you can use a try block at a much higher level
you can use set_new_handler to handle it for all uses of new.
I prefer the 2nd and 3rd.
In C++, unless you use advanced features, if memory cannot be allocated with the new operator, an exception is thrown, so there is no need to check whether the pointer obtained by new is null or not. Handling the exception is up to the programmer. If you don't, the program will terminate abruptly. It is actually tricky to handle this exception properly and restart the operations without memory or resource leaks, which is why allocation of objects with new and delete is now considered obsolete. Using containers and smart pointers is a better alternative.
Note that you can get the same behavior in C with a wrapper on malloc():
#include <stdio.h>
#include <stdlib.h>

void *xmalloc(size_t size) {
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "malloc failed for %zu bytes\n", size);
        exit(1);
    }
    return p;
}
There is no need to check for memory allocation failure by xmalloc() since such a failure causes an abrupt program termination automatically. This approach can be used for command line utilities where failure is not catastrophic and can be handled interactively.
Suppose I have a function that uses "new", do I need to set aside some emergency memory in case "new" fails? Such as:
static char* emerg_mem = new char[EMERG_MEM_SZ];

FooElement* Foo::createElement()
{
    try
    {
        FooElement* ptr;
        ptr = new FooElement();
        return ptr;
    }
    catch (bad_alloc ex)
    {
        delete[] emerg_mem;
        emerg_mem = NULL;
        return NULL;
    }
}
So that there is enough (EMERG_MEM_SZ) memory remaining for class destructor functions etc, and to gracefully exit the program?
I am using GCC on Linux Mint, but I suppose this question could apply to any platform.
So that there is enough (EMERG_MEM_SZ) memory remaining for class destructor functions etc, and to gracefully exit the program?
Before attempting to provide such memory for destructors, you should first be able to argue some reason why your destructors would need to allocate dynamic memory in the first place. Such a requirement is a serious red flag about the design of the class.
Is it necessary to put aside some emergency memory when new fails?
Not necessarily. Firstly, graceful exit is often possible without allocating any dynamic memory. Secondly, a program running within the protection of an operating system doesn't necessarily need to terminate gracefully in such a dire situation as lack of memory.
P.S. Some systems (Linux in particular, under certain configurations) "overcommit" memory and never throw std::bad_alloc. Instead, allocation always succeeds, physical memory isn't allocated until it is actually accessed, and if no memory is available at that time the process (or some other process) is killed to free some memory. On such a system there is no way in C++ to recover from lack of memory.
I would say no.
When your application is out of memory and throws an exception, the stack will start to unwind (thus destroying objects and releasing memory as it goes). As a general rule, destructors should not be allocating dynamic memory; rather, they should be releasing it.
Thus if you have correctly used RAII then you will gain memory back as the stack unwinds, which potentially allows you to catch and continue (if the thing throwing is a discrete task whose results can be discarded).
Also, in most situations your application will slow to an unusable crawl long before actually throwing an out of memory exception (as the OS tries to consolidate memory to get you that elusive slot).
This question already has answers here:
Is there a general consensus in the C++ community on when exceptions should be used? [closed]
I have used if...else statements in many places; however, I'm new to exception handling. What is the main difference between these two?
For example:
int *ptr = new (nothrow) int[1000];
if (ptr == NULL) {
    // Handle error cases here...
}
OR
try
{
    int* myarray = new int[1000];
}
catch (exception& e)
{
    cout << "Standard exception: " << e.what() << endl;
}
So here we are using the standard exception class, which has some built-in functions like e.what(), which may be an advantage. Other than that, all the other handling we could also do using if...else. Are there any other merits in using exception handling?
To collect what the comments say in an answer:
Since the standardization in 1998, new does not return a null pointer on failure but throws an exception, namely std::bad_alloc. This is different from C's malloc and maybe from some early pre-standard implementations of C++, where new might have returned NULL as well (I don't know, tbh).
There is also a possibility in C++ to get a null pointer on allocation failure instead of an exception:
int *ptr = new(std::nothrow) int[1000];
So in short, the first code you have will not work as intended, as it is an attempt of C-style error handling in the presence of C++ exceptions. If allocation fails, the exception will be thrown, the if block will never be entered and the program probably will be terminated since you don't catch the bad_alloc.
There are lots of articles comparing general error handling with exceptions vs return codes, and it would go way too far to try to cover the topic here. Amongst the reasons for exceptions are:
Function return types are not occupied by the error handling but can return real values - no "output" function parameters needed.
You do not need to handle the return of every single function call in every single function but can just catch the exception some levels up the call stack where you actually can handle the error
Exceptions can pass arbitrary information to the error handling site, as compared to one global errno variable and a single returned error code.
The main difference is that the version using exception handling at least might work, where the one using the if statement can't possibly work.
Your first snippet:
int *ptr = new int[1000];
if (ptr == NULL) {
    // Handle error cases here...
}
...seems to assume that new will return a null pointer in case of failure. While that was true at one time, it hasn't been in a long time. With any reasonably current compiler, the new has only two possibilities: succeed or throw. Therefore, your second version aligns with how C++ is supposed to work.
If you really want to use this style, you can rewrite the code to get it to return a null pointer in case of failure:
int *ptr = new(nothrow) int[1000];
if (ptr == NULL) {
    // Handle error cases here...
}
In most cases, you shouldn't be using new directly anyway -- you should really use std::vector<int> p(1000); and be done with it.
With that out of the way, I feel obliged to add that for an awful lot of code, it probably makes the most sense to do neither and simply assume that the memory allocation will succeed.
At one time (MS-DOS) it was fairly common for memory allocation to actually fail if you tried to allocate more memory than was available -- but that was a long time ago. Nowadays, things aren't so simple (as a rule). Current systems use virtual memory, which makes the situation much more complicated.
On Linux, what'll typically happen is that even if the memory isn't really available, Linux will do what's called an "overcommit". You'll still get a non-null pointer as if the allocation had succeeded -- but when you try to use the memory, bad things will happen. Specifically, Linux has what's called an "OOM Killer" that basically assumes that running out of memory is a sign of a bug, so if it happens, it tries to find the buggy program(s), and kills it/them. For most practical purposes, this means your program will probably be killed, and other (semi-arbitrarily chosen) ones may be as well.
Windows stays a little closer to the model C++ expects, so if (for example) your code were running on an unattended server, the allocation might actually fail. Long before it fails, however, it'll drag the rest of the machine to its knees, madly swapping in a doomed attempt at making the allocation succeed. If the user is actually operating the machine at the time, they'll typically either kill your program or else kill some others to free up enough memory for your code to get the requested memory fairly quickly.
In none of these cases is it particularly realistic to program against the assumption that an allocation can fail though. For most practical purposes, one of two things happens: either the allocation succeeds, or the program dies.
That leads back to the previous advice: in a typical case, you should generally just use std::vector, and assume your allocation will succeed. If you need to provide availability beyond that, you just about need to do it some other way (such as re-starting the process if it dies, preferably in a way that uses less memory).
As already mentioned, your original if-else example would still throw an exception from C++98 onwards, though adding nothrow (as edited) should make it work as desired (return null, thus trigger if-statement).
Below I'll assume, for simplicity, that, for if-else to handle exceptions, we have functions returning false on exception.
Some advantages of exceptions above if-else, off the top of my head:
You know the type of the exception for logging / debugging / bug fixing
Example:
When a function throws an exception, you can, to a reasonable extent, tell whether there may be a problem with the code or something that you can't do much about like an out of memory exception.
With the if-else, when a function returns false, you have no idea what happened in that function.
You can of course have separate logging to record this information, but why not just return an exception with the exception details included instead?
You needn't have a mess of if-else conditions to propagate the exception to the calling function
Example: (comments included to indicate behaviour)
bool someFunction() // may return false on exception
{
    if (!someFunction2()) // may return false on exception
        return false;
    if (!someFunction3()) // may return false on exception
        return false;
    return someFunction4(); // may return false on exception
}
(There are many people who don't like having functions with multiple return statements. In this case, you'll have an even messier function.)
As opposed to:
void someFunction() // may throw exception
{
    someFunction2(); // may throw exception
    someFunction3(); // may throw exception
    someFunction4(); // may throw exception
}
An alternative to, or extension of, if-else is error codes. For this, the second point will remain. See this for more on the comparison between that and exceptions.
If you handle the error locally, if ... else is cleaner. If the function where the error occurs doesn't handle the error, then throw an exception to pass off to someone higher in the call chain.
First of all, your first code with the if statement would terminate the program on an exception thrown by the new[] operator, because the exception is not handled. You can check such things here, for example: http://www.cplusplus.com/reference/new/operator%20new%5B%5D/
Also, exceptions are thrown in many other cases, not only when allocation fails, and their main feature (in my eyes) is moving control up in the application (to the place where the exception is handled). I recommend you read some more about exceptions; a good read would be "More Effective C++" by Scott Meyers, there is a great chapter on exceptions.
Question says it all: Are there any pitfalls with allocating exceptions on the heap?
I am asking because allocating exceptions on the heap, in combination with the polymorphic exception idiom, solves the problem of transporting exceptions between threads (for the sake of discussion, assume that I can't use exception_ptr). Or at least I think it does...
Some of my thoughts:
The handler of the exception will have to catch the exception and know how to delete it. This can be solved by actually throwing an auto_ptr with the appropriate deleter.
Are there other ways to transport exceptions across threads?
Are there any pitfalls with allocating exceptions on the heap?
One obvious pitfall is that a heap allocation might fail.
Interestingly, when an exception is thrown it actually throws a copy of the exception object that is the argument of throw. When using gcc, it creates that copy in the heap but with a twist. If heap allocation fails it uses a static emergency buffer instead of heap:
extern "C" void *
__cxxabiv1::__cxa_allocate_exception(std::size_t thrown_size) throw()
{
void *ret;
thrown_size += sizeof (__cxa_refcounted_exception);
ret = malloc (thrown_size);
if (! ret)
{
__gnu_cxx::__scoped_lock sentry(emergency_mutex);
bitmask_type used = emergency_used;
unsigned int which = 0;
if (thrown_size > EMERGENCY_OBJ_SIZE)
goto failed;
while (used & 1)
{
used >>= 1;
if (++which >= EMERGENCY_OBJ_COUNT)
goto failed;
}
emergency_used |= (bitmask_type)1 << which;
ret = &emergency_buffer[which][0];
failed:;
if (!ret)
std::terminate ();
}
}
So, one possibility is to replicate this functionality to protect from heap allocation failures of your exceptions.
The handler of the exception will have to catch the exception and know how to delete it. This can be solved by actually throwing an auto_ptr with the appropriate deleter.
Not sure if using auto_ptr<> is a good idea. This is because copying auto_ptr<> destroys the original, so that after catching by value as in catch(std::auto_ptr<std::exception> e) a subsequent throw; with no argument to re-throw the original exception may throw a NULL auto_ptr<> because it was copied from (I didn't try that).
I would probably throw a plain pointer for this reason, like throw new my_exception(...) and catch it by value and manually delete it. Because manual memory management leaves a way to leak memory I would create a small library for transporting exceptions between threads and put such low level code in there, so that the rest of the code doesn't have to be concerned with memory management issues.
Another issue is that requiring a special syntax for throw, like throw new exception(...), may be a bit too intrusive: there may be existing code or third-party libraries that can't be changed and that throw in the standard manner, like throw exception(...). It may be better to stick to the standard throw syntax and, in a top-level catch block in the worker thread, catch all possible exception types (which must be known in advance; as a fall-back, slice the exception and copy only a base-class sub-object). Copy that exception and re-throw the copy in the other thread, probably on join or in the function that extracts the worker's result. (The throwing thread may be stand-alone and yield no result at all, but that is another issue entirely; here we assume some kind of worker thread with a limited lifetime.) This way the exception handler in the other thread can catch the exception in the standard way, by reference or by value, without having to deal with the heap. I would probably choose this path.
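One possible shape of that approach, as a sketch only - the worker class, the exception types it preserves, and the slicing fall-back are all assumptions:

#include <iostream>
#include <memory>
#include <stdexcept>
#include <thread>

class worker {
public:
    void start() {
        thread_ = std::thread([this] {
            try {
                do_work();
            } catch (const std::runtime_error& e) {
                error_ = std::make_unique<std::runtime_error>(e);   // copy a known type
            } catch (const std::exception& e) {
                // Fall-back: slice and keep only the base-class message.
                error_ = std::make_unique<std::runtime_error>(e.what());
            }
        });
    }

    void join() {
        thread_.join();
        if (error_)
            throw *error_;   // re-throw the copy in the joining thread
    }

private:
    void do_work() { throw std::runtime_error("something went wrong"); }

    std::thread thread_;
    std::unique_ptr<std::runtime_error> error_;
};

int main() {
    worker w;
    w.start();
    try {
        w.join();
    } catch (const std::exception& e) {
        std::cerr << "worker failed: " << e.what() << '\n';
    }
}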
You may also take a look at Boost: Transporting of Exceptions Between Threads.
There are two evident ones:
One - easy - is that doing throw new myexception risks throwing a bad_alloc (not a bad_alloc*), so a catch (exception*) doesn't catch the eventual impossible-to-allocate exception. And throwing new (nothrow) myexception may end up throwing ... a null pointer.
Another - more a design issue - is "who has to catch". If it is not yourself, consider a situation where your client, who may also be a client of somebody else, has to decide - depending on who's throwing - whether to delete or not. That may result in a mess.
A typical way to solve the problem is throwing a static variable by reference (or address): it doesn't need to be deleted and doesn't have to be copied while going down the unwound stack.
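A minimal sketch of what throwing a static object by address might look like (the exception type is made up for illustration):

#include <exception>
#include <iostream>

class transport_error : public std::exception {
public:
    const char* what() const noexcept override { return "transport error"; }
};

// Static instance: the thrown pointer always refers to this object,
// so the catch site never has to delete anything.
static transport_error the_transport_error;

void worker() {
    throw &the_transport_error;               // throw the address
}

int main() {
    try {
        worker();
    } catch (const transport_error* e) {      // catch the pointer
        std::cerr << e->what() << '\n';       // no delete required
    }
}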
If the reason you're throwing an exception is that there's a problem with the heap (out of memory or otherwise) allocating your exception on the heap will just cause more problems.
There is another catch when throwing std::auto_ptr<SomeException>, and also boost shared pointers: while runtime_error is derived from exception, auto_ptr<runtime_error> is not derived from auto_ptr<exception>.
As a result, a catch(auto_ptr<exception> &) won't catch an auto_ptr<runtime_error>.
Can the new operator throw an exception in real life?
And if so, do I have any options for handling such an exception apart from killing my application?
Update:
Do any real-world, new-heavy applications check for failure and recover when there is no memory?
See also:
How often do you check for an exception in a C++ new instruction?
Is it useful to test the return of “new” in C++?
Will new return NULL in any case?
Yes, new can and will throw if allocation fails. This can happen if you run out of memory or you try to allocate a block of memory too large.
You can catch the std::bad_alloc exception and handle it appropriately. Sometimes this makes sense, other times (read: most of the time) it doesn't. If, for example, you were trying to allocate a huge buffer but could work with less space, you could try allocating successively smaller blocks.
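For example, a sketch of the successively-smaller-blocks idea (the helper and its parameters are made up for illustration):

#include <cstddef>
#include <memory>
#include <new>

// Try the desired size first and halve the request on each std::bad_alloc
// until a minimum acceptable size is reached.
std::unique_ptr<char[]> allocate_buffer(std::size_t desired,
                                        std::size_t minimum,
                                        std::size_t& actual) {
    for (std::size_t size = desired; size >= minimum; size /= 2) {
        try {
            std::unique_ptr<char[]> buffer(new char[size]);
            actual = size;
            return buffer;
        } catch (const std::bad_alloc&) {
            // Too big: fall through and retry with half the size.
        }
    }
    throw std::bad_alloc();  // even the minimum could not be satisfied
}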
The new operator and the new[] operator should throw std::bad_alloc, but this is not always the case, as the behavior can sometimes be overridden.
One can use std::set_new_handler, and suddenly something entirely different can happen instead of throwing std::bad_alloc. The standard requires that the handler either make more memory available, terminate, or throw an exception of type std::bad_alloc (or derived from it), but of course a given handler may not comply.
Disclaimer: I am not suggesting to do this.
If you are running on a typical embedded processor running Linux without virtual memory it is quite likely your process will be terminated by the operating system before new fails if you allocate too much memory.
If you are running your program on a machine with less physical memory than the maximum of virtual memory (2 GB on standard Windows) you will find that once you have allocated an amount of memory approximately equal to the available physical memory, further allocations will succeed but will cause paging to disk. This will bog your program down and you might not actually be able to get to the point of exhausting virtual memory. So you might not get an exception thrown.
If you have more physical memory than the virtual memory, and you simply keep allocating memory, you will get an exception when you have exhausted virtual memory to the point where you can not allocate the block size you are requesting.
If you have a long-running program that allocates and frees in many different block sizes, including small blocks, with a wide variety of lifetimes, the virtual memory may become fragmented to the point where new will be unable to find a large enough block to satisfy a request. Then new will throw an exception. If you happen to have a memory leak that leaks the occasional small block in a random location that will eventually fragment memory to the point where an arbitrarily small block allocation will fail, and an exception will be thrown.
If you have a program error that accidentally passes a huge array size to new[], new will fail and throw an exception. This can happen for example if the array size is actually some sort of random byte pattern, perhaps derived from uninitialized memory or a corrupted communication stream.
All the above is for the default global new. However, you can replace global new and you can provide class-specific new. These too can throw, and the meaning of that situation depends on how you programmed it. It is usual for new to include a loop that attempts all possible avenues for getting the requested memory. It throws when all those are exhausted. What you do then is up to you.
You can catch an exception from new and use the opportunity it provides to document the program state around the time of the exception. You can "dump core". If you have a circular instrumentation buffer allocated at program startup, you can dump it to disk before you terminate the program. The program termination can be graceful, which is an advantage over simply not handling the exception.
I have not personally seen an example where additional memory could be obtained after the exception. One possibility however, is the following: Suppose you have a memory allocator that is highly efficient but not good at reclaiming free space. For example, it might be prone to free space fragmentation, in which free blocks are adjacent but not coalesced. You could use an exception from new, caught in a new_handler, to run a compaction procedure for free space before retrying.
Serious programs should treat memory as a potentially scarce resource, control its allocation as much as possible, monitor its availability and react appropriately if something seems to have gone dramatically wrong. For example, you could make a case that in any real program there is quite a small upper bound on the size parameter passed to the memory allocator, and anything larger than this should cause some kind of error handling, whether or not the request can be satisfied. You could argue that the rate of memory increase of a long-running program should be monitored, and if it can be reasonably predicted that the program will exhaust available memory in the near future, an orderly restart of the process should be begun.
In Unix systems, it's customary to run long-running processes with memory limits (using ulimit) so that it doesn't eat up all of a system's memory. If your program hits that limit, you will get std::bad_alloc.
Update for OP's edit: the most typical case of programs recovering from an out-of-memory condition is in garbage-collected systems, which then perform a GC and continue. Though, this sort of on-demand GC is really for last-ditch efforts only; usually, good programs try to GC periodically to reduce stress on the collector.
It's less usual for non-GC programs to recover from out-of-memory issues, but for Internet-facing servers, one way to recover is to simply reject the request that's causing the memory to run out with a "temporary" error. ("First in, first served" strategy.)
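A sketch of that "reject the offending request" strategy; the request type and handler are hypothetical:

#include <new>
#include <string>

// Hypothetical request type and handler, for illustration only.
struct request { std::string payload; };

void handle_request(const request& req) {
    // Pretend the request needs a buffer proportional to its payload.
    std::string copy(req.payload);
    // ... real work would go here ...
}

// A failed allocation becomes a temporary error for this one client
// instead of killing the whole server.
std::string serve(const request& req) {
    try {
        handle_request(req);
        return "200 OK";
    } catch (const std::bad_alloc&) {
        return "503 temporarily out of resources";
    }
}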
osgx said:
Do any real-world applications check a large number of news and recover when there is no memory?
I have answered this previously in my answer to this question, which is quoted below:
It is very difficult to handle this sort of situation. You may want to return a meaningful error to the user of your application, but if it's a problem caused by lack of memory, you may not even be able to afford the memory to allocate the error message. It's a bit of a catch-22 situation really.
There is a defensive programming technique (sometimes called a memory parachute or rainy day fund) where you allocate a chunk of memory when your application starts. When you then handle the bad_alloc exception, you free this memory up, and use the available memory to close down the application gracefully, including displaying a meaningful error to the user. This is much better than crashing :)
You don't need to handle the exception in every single new :) Exceptions can propagate. Design your code so that there are certain points in each "module" where that error is handled.
It depends on the compiler/runtime and on the operator new that you are using (e.g. certain versions of Visual Studio will not throw out of the box, but would rather return a NULL pointer a la malloc instead.)
You can always catch a std::bad_alloc exception, or explicitly use nothrow new to return NULL instead of throwing. (Also see past StackOverflow posts revolving around the subject.)
Note that operator new, like malloc, will fail when you have run out of memory, out of address space (e.g. 2-3GB in a 32-bit process depending on the OS), out of quota (ulimit was already mentioned) or out of contiguous address space (e.g. fragmented heap.)
Yes, new can throw std::bad_alloc (a subclass of std::exception), which you may catch.
If you absolutely want to avoid this exception, and instead are ready to test the result of new for a null pointer, you may add a nothrow argument:
T* p = new (nothrow) T(...);
if (p == 0)
{
    // Do something about the bad allocation!
}
else
{
    // Here you may use p.
}
Yes new will throw an exception if there is no more memory available, but that doesn't mean you should wrap every new in a try ... catch. Only catch the exception if your program can actually do something about it.
If the program cannot do anything to handle that exceptional situation, which is often the case when you run out of memory, there is no use in catching the exception. If the only thing you could reasonably do is abort the program, you can just as well let the exception bubble up to the top level, where it will terminate the program as well.
In many cases there's no reasonable recovery for an out of memory situation, in which case it's probably perfectly reasonable to let the application terminate. You might want to catch the exception at a high level to display a nicer error message than the compiler might give by default, but you might have to play some tricks to get even that to work (since the process is likely to be very low on resources at that point).
Unless you have a special situation that can be handled and recovered, there's probably no reason to spend a lot of effort trying to handle the exception.
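If you do want the nicer message, a top-level handler can be as simple as this sketch (run_application is a stand-in for the real program):

#include <iostream>
#include <new>

int run_application() {
    // The real program would live here.
    return 0;
}

int main() {
    try {
        return run_application();
    } catch (const std::bad_alloc&) {
        // Keep this path allocation-free: writing a string literal to a
        // pre-existing stream is about as cheap as it gets.
        std::cerr << "Fatal error: the program ran out of memory.\n";
        return 1;
    }
}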
Note that in Windows, very large new/mallocs will just allocate from virtual memory. In practice, your machine will crash before you see that exception.
char *pCrashMyMachine = new char[TWO_GIGABYTES];
Try it if you dare!
I use Mac OS X, and I've never seen malloc return NULL (which would imply an exception from new in C++). The machine bogs down, does its best to allocate dwindling memory to processes, and finally sends SIGSTOP and invites the user to kill processes rather than have them deal with allocation failure.
However, that's just one platform. CERTAINLY there are platforms where the default allocator does throw. And, as Chris says, ulimit may introduce an artificial constraint so that an exception would be the expected behavior.
Also, there are allocators besides the default one/malloc. If a class overrides operator new, you use custom arguments to new(…), or you pass an allocator object into a container, it probably defines its own conditions to throw bad_alloc.
The new operator will throw a std::bad_alloc exception when there is not enough memory available in the pool to fulfill a runtime request.
This can happen due to bad design or when allocated memory is not freed correctly.
How you handle such an exception depends on your design; one way is to pause and retry some time later, hoping more memory has been returned to the pool so that the request may succeed.
Most realistically, new will throw because of a deliberate decision to limit a resource. Say a memory-intensive class takes its memory out of a fixed physical pool; if too many objects draw from it (memory is also needed for other things like sound, textures, etc.), it may throw at allocation time instead of crashing later on, when something that should be able to allocate memory cannot (which would look like a weird side effect).
Overloading new can be useful on devices with restricted memory, such as handhelds, or on consoles where it's too easy to go overboard with cool effects.
Yes, new can and will throw.
Since you are asking about 'real' programs: I've worked on various shrink-wrapped commercial software applications for over 20 years. 'Real' programs with millions of users. That you can go and buy off the shelf today. Yes, new can throw.
There are various ways to handle this.
First, write your own new_handler (this is called before new gives up and throws - see the set_new_handler() function). When your new_handler is called, see if you can free some things you don't really need. Also warn the user that they are running low on memory. (Yes, it can be hard to warn the user about anything if you are really low on memory.)
Another approach is to pre-allocate some 'extra' memory at the start of your program. When you run out of memory, use this extra memory to help save a copy of the user's document to disk. Then warn, and maybe exit gracefully.
Etc. This is just an overview; obviously there is more to it.
Handling low memory is not easy.
The new-handler function is the function called by the allocation functions whenever an attempt by new to allocate memory fails. We can add our own logging or take some special action, e.g. arranging for more memory.
Its intended purpose is one of three things:
1) make more memory available
2) terminate the program (e.g. by calling std::terminate)
3) throw exception of type std::bad_alloc or derived from std::bad_alloc.
By default no new-handler is installed, and operator new simply throws std::bad_alloc on failure. The user can install his own new-handler, which may offer behavior different from the default. This should be used only when you really need it. See the example below for more clarification and the default behaviour:
#include <iostream>
#include <new>

void handler()
{
    std::cout << "Memory allocation failed, terminating\n";
    std::set_new_handler(nullptr);
}

int main()
{
    std::set_new_handler(handler);
    try {
        while (true) {
            new int[100000000ul];
        }
    } catch (const std::bad_alloc& e) {
        std::cout << e.what() << '\n';
    }
}
It's good to check for or catch this exception when you are allocating memory based on something given from outside (user input, the network, etc.), because it could be an attempt to compromise your application/service/system, and you shouldn't allow that to happen.
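For instance, a sketch of guarding an allocation whose size comes from untrusted input (the limit and the names are illustrative):

#include <cstdint>
#include <new>
#include <stdexcept>
#include <vector>

// A size field read from the network is validated against a sane upper
// bound before any allocation, and the allocation itself is still guarded,
// so a hostile or corrupted length cannot take the service down.
constexpr std::uint64_t kMaxMessageSize = 16 * 1024 * 1024;  // illustrative limit

std::vector<char> allocate_message_buffer(std::uint64_t claimed_size) {
    if (claimed_size > kMaxMessageSize)
        throw std::length_error("message size exceeds protocol limit");
    try {
        return std::vector<char>(static_cast<std::size_t>(claimed_size));
    } catch (const std::bad_alloc&) {
        throw std::runtime_error("not enough memory for this message");
    }
}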
The new operator will throw a std::bad_alloc exception when you run out of memory (virtual memory, to be precise).
If new throws an exception then it is a serious error:
more than the available VM is being allocated (and it fails eventually). You can try to reduce the amount of memory used, rather than exiting the program, by catching the std::bad_alloc exception.