How to catch or handle a segfault from free() or delete - C++

In C++, I have server code running continuously 24*7, but I sometimes get a segfault while freeing a buffer.
I tried a try/catch as well:
try {
    free(partialBuf);
} catch (...) {
    printf("Caught partial buf free error");
}
Thanks in advance!

Since you're apparently able to use try/catch, you're writing C++ code. It helps to know which language you're using.
The solution then is to use std::shared_ptr. You may have multiple places in which a pointer goes out of scope. With shared_ptr you no longer call free, and as a bonus shared_ptr will call delete only once (after the last pointer goes out of scope).
However, you should now allocate memory with new instead of malloc.
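For illustration, here is a minimal sketch of that idea (the fill helper and the request handler are invented for the example; since C++17 you could also use std::shared_ptr<char[]>):
#include <cstddef>
#include <memory>

// Hypothetical helper standing in for whatever fills the buffer.
void fill(char* /*data*/, std::size_t /*len*/) {}

void handle_request(std::size_t len)
{
    // Allocate with new[] and hand ownership to shared_ptr with a matching
    // array deleter. No free()/delete[] appears in user code, and the buffer
    // is released exactly once, when the last owner goes away.
    std::shared_ptr<char> partialBuf(new char[len], std::default_delete<char[]>());

    fill(partialBuf.get(), len);

    std::shared_ptr<char> copy = partialBuf;   // other code can share ownership
}   // whichever shared_ptr is destroyed last frees the buffer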

A segfault is not an exception in the sense of C++ exceptions, so you cannot catch it with try/catch. A segfault can have any number of causes, but in 99.9% of cases it's a memory access bug :-) If the segfault happens during a call to delete or free(), chances are you have a double-free issue.

You could use GDB to debug, and find out whether you are trying to free a pointer which was not allocated previously.

Exceptions on unique_ptr and make_unique

There is a method called foo that sometimes returns the following error:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Abort
Is there a way that I can use a try-catch block to stop this error from terminating my program (all I want to do is return -1)?
If so, what is the syntax for it?
How else can I deal with bad_alloc in C++?
In general you cannot, and should not try, to respond to this error. bad_alloc indicates that a resource cannot be allocated because not enough memory is available. In most scenarios your program cannot hope to cope with that, and terminating soon is the only meaningful behaviour.
Worse, modern operating systems often over-allocate: on such systems, malloc and new can return a valid pointer even if there is not enough free memory left – std::bad_alloc will never be thrown, or is at least not a reliable sign of memory exhaustion. Instead, attempts to access the allocated memory will then result in a segmentation fault, which is not catchable (you can handle the segmentation fault signal, but you cannot resume the program afterwards).
The only thing you could do when catching std::bad_alloc is to perhaps log the error, and try to ensure a safe program termination by freeing outstanding resources (but this is done automatically in the normal course of stack unwinding after the error gets thrown if the program uses RAII appropriately).
In certain cases, the program may attempt to free some memory and try again, or use secondary memory (= disk) instead of RAM, but these opportunities only exist in very specific scenarios with strict conditions:
The application must ensure that it runs on a system that does not overcommit memory, i.e. it signals failure upon allocation rather than later.
The application must be able to free memory immediately, without any further accidental allocations in the meantime.
It’s exceedingly rare that applications have control over point 1 — userspace applications never do, it’s a system-wide setting that requires root permissions to change.1
OK, so let’s assume you’ve fixed point 1. What you can now do is, for instance, use an LRU cache for some of your data (probably some particularly large business objects that can be regenerated or reloaded on demand). Next, you need to put the actual logic that may fail into a function that supports retry; in other words, if it gets aborted, you can just relaunch it:
lru_cache<widget> widget_cache;   // lru_cache and widget are this answer's hypothetical types

double perform_operation(int widget_id) {
    std::optional<widget> maybe_widget = widget_cache.find_by_id(widget_id);
    if (not maybe_widget) {
        // Not cached: load it and put it in the cache (this may allocate).
        maybe_widget = widget_cache.store(widget_id, load_widget_from_disk(widget_id));
    }
    return maybe_widget->frobnicate();
}

…

for (int num_attempts = 0; num_attempts < MAX_NUM_ATTEMPTS; ++num_attempts) {
    try {
        return perform_operation(widget_id);
    } catch (std::bad_alloc const&) {
        if (widget_cache.empty()) throw; // memory error elsewhere.
        widget_cache.remove_oldest();    // free something and retry
    }
}
// Handle too many failed attempts here.
But even here, using std::set_new_handler instead of handling std::bad_alloc provides the same benefit and would be much simpler.
1 If you’re creating an application that does control point 1, and you’re reading this answer, please shoot me an email, I’m genuinely curious about your circumstances.
You can catch it like any other exception:
try {
    foo();
}
catch (const std::bad_alloc&) {
    return -1;
}
Quite what you can usefully do from this point is up to you, but it's definitely feasible technically.
What is the C++ standard-specified behavior of new in C++?
The usual notion is that if the new operator cannot allocate dynamic memory of the requested size, it should throw an exception of type std::bad_alloc.
However, something more happens even before a bad_alloc exception is thrown:
C++03 Section 3.7.4.1.3 says:
An allocation function that fails to allocate storage can invoke the currently installed new_handler (18.4.2.2), if any. [Note: A program-supplied allocation function can obtain the address of the currently installed new_handler using the set_new_handler function (18.4.2.3).] If an allocation function declared with an empty exception-specification (15.4), throw(), fails to allocate storage, it shall return a null pointer. Any other allocation function that fails to allocate storage shall only indicate failure by throwing an exception of class std::bad_alloc (18.4.2.1) or a class derived from std::bad_alloc.
Consider the following code sample:
#include <iostream>
#include <cstdlib>
#include <new>     // std::set_new_handler

// function to call if operator new can't allocate enough memory
void outOfMemHandler()
{
    std::cerr << "Unable to satisfy request for memory\n";
    std::abort();
}

int main()
{
    // set the new_handler
    std::set_new_handler(outOfMemHandler);

    // request a huge amount of memory, so that ::operator new (most likely) fails
    int *pBigDataArray = new int[100000000L];

    return 0;
}
In the above example, operator new (most likely) will be unable to allocate space for 100,000,000 integers, and the function outOfMemHandler() will be called, and the program will abort after issuing an error message.
As seen here, the default behavior of operator new when it is unable to fulfill a memory request is to call the new-handler function repeatedly until it can find enough memory or there are no more new-handlers. In the above example, unless we call std::abort(), outOfMemHandler() would be called repeatedly. Therefore, the handler should either ensure that the next allocation succeeds, register another handler, register no handler, or not return (i.e. terminate the program). If there is no new-handler and the allocation fails, operator new throws an exception.
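To make that loop concrete, here is a rough, simplified sketch of what a conforming operator new does (not actual library source; it assumes C++11 for std::get_new_handler, and real implementations also deal with alignment, the nothrow overloads, and more):
#include <cstdlib>
#include <new>

void* operator new(std::size_t size)
{
    while (true) {
        if (void* p = std::malloc(size))   // try to obtain the memory
            return p;

        // Allocation failed: ask the currently installed new-handler for help.
        std::new_handler handler = std::get_new_handler();
        if (!handler)
            throw std::bad_alloc();        // no handler installed: give up

        handler();   // the handler must free memory, install another handler,
                     // or not return; otherwise we simply loop and try again
    }
}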
What is the new_handler and set_new_handler?
new_handler is a typedef for a pointer to a function that takes and returns nothing, and set_new_handler is a function that takes and returns a new_handler.
Something like:
typedef void (*new_handler)();
new_handler set_new_handler(new_handler p) throw();
set_new_handler's parameter is a pointer to the function operator new should call if it can't allocate the requested memory. Its return value is a pointer to the previously registered handler function, or null if there was no previous handler.
How to handle out of memory conditions in C++?
Given the behavior of new, a well-designed user program should handle out-of-memory conditions by providing a proper new_handler which does one of the following:
Make more memory available: This may allow the next memory allocation attempt inside operator new's loop to succeed. One way to implement this is to allocate a large block of memory at program start-up, then release it for use in the program the first time the new-handler is invoked (a sketch of this approach follows the list below).
Install a different new-handler: If the current new-handler can't make any more memory available, and if there is another new-handler that can, then the current new-handler can install the other new-handler in its place (by calling set_new_handler). The next time operator new calls the new-handler function, it will get the one most recently installed.
(A variation on this theme is for a new-handler to modify its own behavior, so the next time it's invoked, it does something different. One way to achieve this is to have the new-handler modify static, namespace-specific, or global data that affects the new-handler's behavior.)
Uninstall the new-handler: This is done by passing a null pointer to set_new_handler. With no new-handler installed, operator new will throw an exception ((convertible to) std::bad_alloc) when memory allocation is unsuccessful.
Throw an exception convertible to std::bad_alloc: Such exceptions will not be caught by operator new, but will propagate to the site originating the request for memory.
Not return: By calling abort or exit.
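As a concrete illustration of the first option, here is a hedged sketch of the reserve-block idea (the handler name and the 1 MiB size are invented for the example):
#include <cstdlib>
#include <new>

namespace {
    // Memory set aside at start-up; the new-handler releases it so that the
    // failing allocation (and, hopefully, a clean shutdown) can succeed.
    char* reserve = static_cast<char*>(std::malloc(1024 * 1024));   // size is arbitrary

    void lowMemoryHandler()
    {
        if (reserve) {
            std::free(reserve);        // hand the reserve back to the allocator
            reserve = nullptr;
            return;                    // operator new will retry the allocation
        }
        // Reserve already spent: uninstall the handler so that the next
        // failed attempt throws std::bad_alloc instead of looping.
        std::set_new_handler(nullptr);
    }
}

int main()
{
    std::set_new_handler(lowMemoryHandler);
    // ... rest of the program ...
}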
I would not suggest this, since bad_alloc means you are out of memory. It would be best to just give up instead of attempting to recover. However, here is the solution you are asking for:
try {
    foo();
} catch (const std::bad_alloc& e) {
    return -1;
}
May I suggest a simpler (and even faster) solution for this: the nothrow form of new returns a null pointer if memory cannot be allocated.
int fv() {
    T* p = new (std::nothrow) T[1000000];
    if (!p) return -1;
    do_something(p);
    delete[] p;   // delete[] to match new[]
    return 0;
}
I hope this could help!
Let your foo program exit in a controlled way:
#include <stdlib.h> /* exit, EXIT_FAILURE */
try {
    foo();
} catch (const std::bad_alloc&) {
    exit(EXIT_FAILURE);
}
Then write a shell program that calls the actual program. Since the address spaces are separated, the state of your shell program is always well-defined.
Of course you can catch a bad_alloc, but I think the better question is how you can stop a bad_alloc from happening in the first place.
Generally, bad_alloc means that something went wrong in an allocation of memory - for example when you are out of memory. If your program is 32-bit, then this already happens when you try to allocate >4 GB. This happened to me once when I copied a C-string to a QString. The C-string wasn't '\0'-terminated which caused the strlen function to return a value in the billions. So then it attempted to allocate several GB of RAM, which caused the bad_alloc.
I have also seen bad_alloc when I accidentally accessed an uninitialized variable in the initializer-list of a constructor. I had a class foo with a member T bar. In the constructor I wanted to initialize the member with a value from a parameter:
foo::foo(T baz) // <-- mistyped: baz instead of bar
    : bar(bar)  // so the member bar is "initialized" with its own indeterminate value
{
}
Because I had mistyped the parameter, the constructor initialized bar with itself (so it read an uninitialized value!) instead of the parameter.
valgrind can be very helpful with such errors!

What happens if the new-handler function is not written properly or can't free any more memory in C++

A well-written new-handler function, if it is not able to free any more memory, should either throw an exception or call exit() to terminate the program.
What happens if the new-handler is not written that way, and the function only tries to free memory? What will happen in that case, and where will control return?
And when the new-handler frees memory, who checks that the freed memory is now big enough to handle the new request?
If you mean the handler set by std::set_new_handler, then if the handler returns and new still can't allocate memory the handler will be called again, and again and again...
"Undefined Behavior". C++ doesn't try to protect you from yourself. If you violate the C++ rules, anything can happen.

How to make a smart pointer go out of scope at exit()

I've spent a bit of time writing an application for practice, and I've taken a liking to using smart pointers throughout so as to avoid memory leaks in case I forget to delete something. At the same time, I've also taken a liking to using exceptions to report failure in a constructor and attempting to handle it. When it cannot be handled, however, I would like the program to exit at that spot, either through a call to assert() or exit(). However, using the crtdbg library in MSVC, it reports a memory leak from the smart pointers that have anything dynamically allocated to them. This means one of two things to me: 1) the smart pointers never went out of the scope in which they were allocated and never deallocate, causing some memory leaks, or 2) crtdbg is not catching the deallocation because it doesn't exit at main. From this page though, using _CrtSetDbgFlag( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF ); at the beginning of the program will catch the leaks from any exit point, and I still get the memory leak errors using that.
So my question to you guys: will the memory actually be deallocated at exit or assert? And if not, might I be able to derive from std::shared_ptr and implement my own solution for cataloging dynamically allocated objects to be deallocated just before the call to exit or assert, or is that too much work compared to a simpler solution?
When the program exits, the memory is reclaimed by the OS anyway, so if leaking is worrying you, it shouldn't.
If, however, you have logic in your destructors, and the objects must be destroyed - calling exit explicitly bypasses all deallocation. A workaround for this is to throw an exception where you would call exit, catch it in main and return.
#include "stdlib.h"
void foo()
{
//exit(0);
throw killException();
}
int main
{
try
{
foo();
}
catch (killException& ex)
{
//exit program calling destructors
return EXIT_FAILURE;
}
}
The real problem is not with memory, but other resources. The OS will (in most cases, unless you are running an embedded system) recover the memory from the process when it terminates, so memory will not be leaked in the OS. The actual problem might be with other resources, external to your process that might need to be released before your process completes...
At any rate, why do you prefer to abort or exit rather than letting the exception propagate up? In general you should handle only the exceptions that you want to manage and let the others fall through. While you might not be able to recover from it, your caller might actually be able to. By capturing the exception and exiting the program on the spot you are removing the choice of handling from the users.

Any pitfalls with allocating exceptions on the heap?

Question says it all: Are there any pitfalls with allocating exceptions on the heap?
I am asking because allocating exceptions on the heap, in combination with the polymorphic exception idiom, solves the problem of transporting exceptions between threads (for the sake of discussion, assume that I can't use exception_ptr). Or at least I think it does...
Some of my thoughts:
The handler of the exception will have to catch the exception and know how to delete it. This can be solved by actually throwing an auto_ptr with the appropriate deleter.
Are there other ways to transport exceptions across threads?
Are there any pitfalls with allocating exceptions on the heap?
One obvious pitfall is that a heap allocation might fail.
Interestingly, when an exception is thrown, what is actually thrown is a copy of the exception object that is the argument of throw. When using gcc, that copy is created on the heap, but with a twist: if heap allocation fails, a static emergency buffer is used instead of the heap:
extern "C" void *
__cxxabiv1::__cxa_allocate_exception(std::size_t thrown_size) throw()
{
void *ret;
thrown_size += sizeof (__cxa_refcounted_exception);
ret = malloc (thrown_size);
if (! ret)
{
__gnu_cxx::__scoped_lock sentry(emergency_mutex);
bitmask_type used = emergency_used;
unsigned int which = 0;
if (thrown_size > EMERGENCY_OBJ_SIZE)
goto failed;
while (used & 1)
{
used >>= 1;
if (++which >= EMERGENCY_OBJ_COUNT)
goto failed;
}
emergency_used |= (bitmask_type)1 << which;
ret = &emergency_buffer[which][0];
failed:;
if (!ret)
std::terminate ();
}
}
So, one possibility is to replicate this functionality to protect from heap allocation failures of your exceptions.
The handler of the exception will have to catch the exception and know how to delete it. This can be solved by actually throwing an auto_ptr with the appropriate deleter.
Not sure if using auto_ptr<> is a good idea. This is because copying auto_ptr<> destroys the original, so that after catching by value as in catch(std::auto_ptr<std::exception> e) a subsequent throw; with no argument to re-throw the original exception may throw a NULL auto_ptr<> because it was copied from (I didn't try that).
I would probably throw a plain pointer for this reason, like throw new my_exception(...) and catch it by value and manually delete it. Because manual memory management leaves a way to leak memory I would create a small library for transporting exceptions between threads and put such low level code in there, so that the rest of the code doesn't have to be concerned with memory management issues.
Another issue is that requiring a special syntax for throw, like throw new exception(...), may be a bit too intrusive; there may be existing code or third-party libraries that can't be changed and that throw in the standard manner, like throw exception(...). It may be a better idea to stick to the standard throw syntax and, in a top-level catch block in the worker thread, catch all possible exception types (which must be known in advance; as a fall-back, slice the exception and copy only a base-class sub-object), copy that exception, and re-throw the copy in the other thread (probably on join, or in the function that extracts the worker thread's result; the throwing thread may be stand-alone and yield no result at all, but that is a completely different issue, and here we assume some kind of worker thread with a limited lifetime). This way the exception handler in the other thread can catch the exception in a standard way, by reference or by value, without having to deal with the heap. I would probably choose this path.
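A minimal sketch of that last approach (the worker class and do_work are invented for the example, C++11/14 facilities are used for brevity, and the set of catchable types is assumed to be known in advance, with a fall-back that slices to the what() message):
#include <memory>
#include <stdexcept>
#include <thread>

class worker {
public:
    void start() {
        thread_ = std::thread([this] {
            try {
                do_work();                                               // may throw
            } catch (const std::runtime_error& e) {
                error_ = std::make_unique<std::runtime_error>(e);        // known type: copy it
            } catch (const std::exception& e) {
                error_ = std::make_unique<std::runtime_error>(e.what()); // fall-back: slice to the message
            }
        });
    }

    // Join, then re-throw the stored copy in the caller's thread, if there is one.
    void join() {
        thread_.join();
        if (error_) throw *error_;
    }

private:
    static void do_work() { throw std::runtime_error("worker failed"); } // stand-in
    std::thread thread_;
    std::unique_ptr<std::runtime_error> error_;
};
The caller creates a worker, calls start(), and wraps join() in an ordinary try/catch for std::runtime_error. Everything is caught inside the lambda because std::thread calls std::terminate if the thread function lets an exception escape.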
You may also take a look at Boost: Transporting of Exceptions Between Threads.
There are two evident ones:
The first, an easy one, is that throw new myexception risks throwing a bad_alloc (not a bad_alloc*), so a catch (exception*) handler won't catch the eventual impossible-to-allocate exception. And throwing new (nothrow) myexception may throw ... a null pointer.
The second, more of a design issue, is "who has to catch". If it is not yourself, consider a situation where your client, who may also be a client of somebody else, has to decide, depending on who's throwing, whether to delete or not. That can result in a mess.
A typical way to solve the problem is to throw a static variable by reference (or address): it doesn't need to be deleted and doesn't need to be copied while the stack is being unwound.
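A small sketch of that idea (the worker_failure type and the function names are invented for the example):
#include <exception>

struct worker_failure : std::exception {
    const char* what() const noexcept override { return "worker thread failed"; }
};

// Static storage: nothing is heap-allocated at throw time and nothing needs deleting.
static worker_failure the_failure;

void worker()
{
    throw &the_failure;   // throw the address of the static object
}

void caller()
{
    try {
        worker();
    } catch (const worker_failure* e) {
        // handle it; do NOT delete e, it points at a static object
    }
}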
If the reason you're throwing an exception is that there's a problem with the heap (out of memory or otherwise) allocating your exception on the heap will just cause more problems.
There is another catch when throwing std::auto_ptr<SomeException>, and also boost shared pointers: while runtime_error is derived from exception, auto_ptr<runtime_error> is not derived from auto_ptr<exception>.
As a result, a catch(auto_ptr<exception> &) won't catch an auto_ptr<runtime_error>.

Can the C++ `new` operator ever throw an exception in real life?

Can the new operator throw an exception in real life?
And if so, do I have any options for handling such an exception apart from killing my application?
Update:
Do any real-world, new-heavy applications check for failure and recover when there is no memory?
See also:
How often do you check for an exception in a C++ new instruction?
Is it useful to test the return of “new” in C++?
Will new return NULL in any case?
Yes, new can and will throw if allocation fails. This can happen if you run out of memory or you try to allocate a block of memory too large.
You can catch the std::bad_alloc exception and handle it appropriately. Sometimes this makes sense, other times (read: most of the time) it doesn't. If, for example, you were trying to allocate a huge buffer but could work with less space, you could try allocating successively smaller blocks.
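For instance, a hedged sketch of that "successively smaller blocks" idea (the function name and the halving policy are just for illustration; minimum is assumed to be at least 1):
#include <cstddef>
#include <new>
#include <vector>

// Halve the requested size on each std::bad_alloc until a minimum acceptable
// size is reached; throw if even that cannot be satisfied.
std::vector<char> allocate_best_effort(std::size_t wanted, std::size_t minimum)
{
    for (std::size_t size = wanted; size >= minimum; size /= 2) {
        try {
            return std::vector<char>(size);   // may throw std::bad_alloc
        } catch (const std::bad_alloc&) {
            // fall through and retry with half the size
        }
    }
    throw std::bad_alloc();   // even the minimum could not be satisfied
}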
The new operator and the new[] operator should throw std::bad_alloc, but this is not always the case, as the behavior can sometimes be overridden.
One can use std::set_new_handler, and suddenly something entirely different from throwing std::bad_alloc can happen; the standard does require that the handler either make memory available, abort, or throw std::bad_alloc, but of course that may not be what it does.
Disclaimer: I am not suggesting doing this.
If you are running on a typical embedded processor running Linux without virtual memory it is quite likely your process will be terminated by the operating system before new fails if you allocate too much memory.
If you are running your program on a machine with less physical memory than the maximum of virtual memory (2 GB on standard Windows) you will find that once you have allocated an amount of memory approximately equal to the available physical memory, further allocations will succeed but will cause paging to disk. This will bog your program down and you might not actually be able to get to the point of exhausting virtual memory. So you might not get an exception thrown.
If you have more physical memory than the virtual memory, and you simply keep allocating memory, you will get an exception when you have exhausted virtual memory to the point where you can not allocate the block size you are requesting.
If you have a long-running program that allocates and frees in many different block sizes, including small blocks, with a wide variety of lifetimes, the virtual memory may become fragmented to the point where new will be unable to find a large enough block to satisfy a request. Then new will throw an exception. If you happen to have a memory leak that leaks the occasional small block in a random location that will eventually fragment memory to the point where an arbitrarily small block allocation will fail, and an exception will be thrown.
If you have a program error that accidentally passes a huge array size to new[], new will fail and throw an exception. This can happen for example if the array size is actually some sort of random byte pattern, perhaps derived from uninitialized memory or a corrupted communication stream.
All the above is for the default global new. However, you can replace global new, and you can provide class-specific new. These too can throw, and the meaning of that situation depends on how you programmed it. It is usual for new to include a loop that attempts all possible avenues for getting the requested memory, and to throw when all of those are exhausted. What you do then is up to you.
You can catch an exception from new and use the opportunity it provides to document the program state around the time of the exception. You can "dump core". If you have a circular instrumentation buffer allocated at program startup, you can dump it to disk before you terminate the program. The program termination can be graceful, which is an advantage over simply not handling the exception.
I have not personally seen an example where additional memory could be obtained after the exception. One possibility however, is the following: Suppose you have a memory allocator that is highly efficient but not good at reclaiming free space. For example, it might be prone to free space fragmentation, in which free blocks are adjacent but not coalesced. You could use an exception from new, caught in a new_handler, to run a compaction procedure for free space before retrying.
Serious programs should treat memory as a potentially scarce resource, control its allocation as much as possible, monitor its availability and react appropriately if something seems to have gone dramatically wrong. For example, you could make a case that in any real program there is quite a small upper bound on the size parameter passed to the memory allocator, and anything larger than this should cause some kind of error handling, whether or not the request can be satisfied. You could argue that the rate of memory increase of a long-running program should be monitored, and if it can be reasonably predicted that the program will exhaust available memory in the near future, an orderly restart of the process should be begun.
In Unix systems, it's customary to run long-running processes with memory limits (using ulimit) so that it doesn't eat up all of a system's memory. If your program hits that limit, you will get std::bad_alloc.
Update for OP's edit: the most typical case of programs recovering from an out-of-memory condition is in garbage-collected systems, which then performs a GC and continues. Though, this sort of on-demand GC is really for last-ditch efforts only; usually, good programs try to GC periodically to reduce stress on the collector.
It's less usual for non-GC programs to recover from out-of-memory issues, but for Internet-facing servers, one way to recover is to simply reject the request that's causing the memory to run out with a "temporary" error. ("First in, first served" strategy.)
osgx said:
Do any real-world applications check a large number of news and recover when there is no memory?
I have answered this previously in my answer to this question, which is quoted below:
It is very difficult to handle this sort of situation. You may want to return a meaningful error to the user of your application, but if it's a problem caused by lack of memory, you may not even be able to afford the memory to allocate the error message. It's a bit of a catch-22 situation, really.
There is a defensive programming technique (sometimes called a memory parachute or rainy day fund) where you allocate a chunk of memory when your application starts. When you then handle the bad_alloc exception, you free this memory up and use the available memory to close down the application gracefully, including displaying a meaningful error to the user. This is much better than crashing :)
You don't need to handle the exception in every single new :) Exceptions can propagate. Design your code so that there are certain points in each "module" where that error is handled.
It depends on the compiler/runtime and on the operator new that you are using (e.g. certain versions of Visual Studio will not throw out of the box, but would rather return a NULL pointer a la malloc instead.)
You can always catch a std::bad_alloc exception, or explicitly use nothrow new to return NULL instead of throwing. (Also see past StackOverflow posts revolving around the subject.)
Note that operator new, like malloc, will fail when you have run out of memory, out of address space (e.g. 2-3GB in a 32-bit process depending on the OS), out of quota (ulimit was already mentioned) or out of contiguous address space (e.g. fragmented heap.)
Yes, new can throw std::bad_alloc (a subclass of std::exception), which you may catch.
If you absolutely want to avoid this exception, and instead are ready to test the result of new for a null pointer, you may add a nothrow argument:
T* p = new (std::nothrow) T(...);   // requires <new> for std::nothrow
if (p == 0)
{
    // Do something about the bad allocation!
}
else
{
    // Here you may use p.
}
Yes, new will throw an exception if there is no more memory available, but that doesn't mean you should wrap every new in a try ... catch. Only catch the exception if your program can actually do something about it.
If the program cannot do anything to handle that exceptional situation, which is often the case when you run out of memory, there is no use in catching the exception. If the only thing you could reasonably do is abort the program, you may as well just let the exception bubble up to the top level, where it will terminate the program as well.
In many cases there's no reasonable recovery for an out of memory situation, in which case it's probably perfectly reasonable to let the application terminate. You might want to catch the exception at a high level to display a nicer error message than the compiler might give by default, but you might have to play some tricks to get even that to work (since the process is likely to be very low on resources at that point).
Unless you have a special situation that can be handled and recovered, there's probably no reason to spend a lot of effort trying to handle the exception.
Note that in Windows, very large new/mallocs will just allocate from virtual memory. In practice, your machine will crash before you see that exception.
char *pCrashMyMachine = new char[TWO_GIGABYTES];
Try it if you dare!
I use Mac OS X, and I've never seen malloc return NULL (which would imply an exception from new in C++). The machine bogs down, does its best to allocate dwindling memory to processes, and finally sends SIGSTOP and invites the user to kill processes rather than have them deal with allocation failure.
However, that's just one platform. CERTAINLY there are platforms where the default allocator does throw. And, as Chris says, ulimit may introduce an artificial constraint so that an exception would be the expected behavior.
Also, there are allocators besides the default one/malloc. If a class overrides operator new, you use custom arguments to new(…), or you pass an allocator object into a container, it probably defines its own conditions to throw bad_alloc.
The new operator will throw a std::bad_alloc exception when there is not enough available memory in the pool to fulfill a runtime request.
This can happen with a bad design or when allocated memory is not freed correctly.
Handling such an exception depends on your design; one way would be to pause and retry some time later, hoping more memory has been returned to the pool so that the request may succeed.
Most realistically, new will throw due to a decision to limit a resource. Say a class (which may be memory-intensive) takes memory out of a physical pool, and if too many objects take from it (we need memory for other things like sound, textures, etc.), it may throw instead of crashing later on when something that should be able to allocate memory takes it (which looks like a weird side effect).
Overloading new can be useful on devices with restricted memory, such as handhelds, or on consoles where it's too easy to go overboard with cool effects.
Yes, new can and will throw.
Since you are asking about 'real' programs: I've worked on various shrink-wrapped commercial software applications for over 20 years. 'Real' programs with millions of users. That you can go and buy off the shelf today. Yes, new can throw.
There are various ways to handle this.
First, write your own new_handler (this is called before new gives up and throws; see the set_new_handler() function). When your new_handler is called, see if you can free some things you don't really need. Also warn the user that they are running low on memory. (Yes, it can be hard to warn the user about anything if you are really low on memory.)
One thing is to have pre-allocated, at the start of your program, some 'extra' memory. When you run out of memory, use this extra memory to help save a copy of the user's document to disk. Then warn, and maybe exit gracefully.
Etc. This is just an overview; obviously there is more to it.
Handling low memory is not easy.
The new-handler function is the function called by the allocation functions whenever an attempt by new to allocate memory fails. In it we can do our own logging or take some special action, e.g. arranging for more memory.
Its intended purpose is one of three things:
1) make more memory available
2) terminate the program (e.g. by calling std::terminate)
3) throw an exception of type std::bad_alloc or derived from std::bad_alloc.
The default behavior (no handler installed) is for operator new to throw std::bad_alloc. The user can install his own new-handler, which may offer behavior different from the default. This should be used only when you really need it. See the example below for the default behavior and more clarification:
#include <iostream>
#include <new>

void handler()
{
    std::cout << "Memory allocation failed, terminating\n";
    std::set_new_handler(nullptr);   // uninstall the handler: the next failure throws
}

int main()
{
    std::set_new_handler(handler);
    try {
        while (true) {
            new int[100000000ul];    // keep allocating until it fails
        }
    } catch (const std::bad_alloc& e) {
        std::cout << e.what() << '\n';
    }
}
It's good to check for and catch this exception when you are allocating memory based on something given from outside (from user space, the network, etc.), because it could mean an attempt to compromise your application/service/system, and you shouldn't allow that to happen.
The new operator will throw a std::bad_alloc exception when you run out of memory (virtual memory, to be precise).
If new throws an exception, then it is a serious error:
More than the available VM is being allocated (and it fails eventually). You can try reducing the amount of memory used, rather than exiting the program, by catching the std::bad_alloc exception.