// cdebug is the author's debug output stream (think of it as std::cerr)
void newHandler() {
    cdebug << "memory allocation failure" << std::endl;
    throw std::bad_alloc();
}

int main() {
    std::set_new_handler(newHandler);
    // ...
}
Once newHandler is established as our error handler, it will be called when any heap allocation fails. The interesting thing about the error handler is that it will be called continuously until the memory allocation succeeds or the function throws an error.
My question about the above text is: what does the author mean by "until the memory allocation succeeds, or the function throws an error"? How can the function throw an error in this case? Please give an example to help me understand.
Thanks for your time and help.
Basically, your handler may behave in one of three ways:
It throws a bad_alloc (or a class derived from it).
It calls the exit or abort function, which stops the program execution.
It returns, in which case a new allocation attempt will occur.
refs: http://www.cplusplus.com/reference/new/set_new_handler/
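For example, here is a minimal sketch of the first behaviour (the handler name giveUpHandler and the deliberately huge request are illustrative): when the handler cannot free anything, throwing std::bad_alloc is what the quoted text means by "the function throws an error", and it is what breaks operator new's retry loop.

#include <cstddef>
#include <cstdio>
#include <new>

// A handler that gives up immediately: throwing std::bad_alloc
// stops operator new's retry loop and propagates to the caller.
void giveUpHandler() {
    std::puts("allocation failed and nothing can be freed");
    throw std::bad_alloc();
}

int main() {
    std::set_new_handler(giveUpHandler);
    try {
        std::size_t huge = static_cast<std::size_t>(-1) / 2; // likely to fail on most systems
        char* p = new char[huge];
        delete[] p;
    } catch (const std::bad_alloc&) {
        std::puts("caught bad_alloc thrown by the handler");
    }
}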
This is helpful if you don't want to handle allocation errors at each new call.
Depending on your system (e.g. one using a lot of memory), you can, for example, free some allocated memory (a cache) so that the next allocation attempt can succeed.
#include <cstdlib>
#include <iostream>
#include <new>

// cached_data stands for some application-level cache that can be dropped
void no_memory()
{
    if (cached_data.exist())
    {
        std::cout << "Free cache memory so the allocation can succeed!\n";
        cached_data.remove();
    }
    else
    {
        std::cout << "Failed to allocate memory!\n";
        std::exit(1); // or throw an exception...
    }
}

std::set_new_handler(no_memory); // registered e.g. at the start of main()
The intent is that the handler can free some memory, return, and then operator new can retry the allocation. operator new will call the handler for as long as the allocation keeps failing. The handler can abort these attempts by throwing bad_alloc, essentially saying 'I cannot free any more memory, so the allocation can't succeed'.
More details here:
http://www.cplusplus.com/reference/new/set_new_handler/
Related
When running the following code, I see the memory consumption of the process grow. Is there a memory leak in my code, is there a memory leak in the std implementation, or is it intended behaviour? It's running on a Windows 10 machine; both Visual Studio and the task manager show approximately 1MB memory growth per minute.
for (int i = 0; i < 10000; i++) {
    std::exception_ptr exptr = nullptr;
    std::string errmsg = "some error";
    try {
        throw std::runtime_error(errmsg.c_str());
    }
    catch (...) {
        exptr = std::current_exception();
    }
    if (exptr) {
        try {
            std::rethrow_exception(exptr);
        }
        catch (std::exception const& ex) {
            exptr = nullptr;
            std::cout << ex.what() << std::endl;
        }
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
When directly throwing and logging without delay (without using std::exception_ptr), the memory consumption doesn't grow.
std::exception_ptr is advertised as behaving like a smart pointer, so when resetting it (setting it to nullptr), the underlying resource should be destroyed. Therefore, I expect the underlying exception to be freed appropriately.
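To see that ownership behaviour in isolation, here is a minimal sketch (the Probe type is illustrative; note that an implementation is allowed to copy the exception object, so the destructor may run more than once):

#include <exception>
#include <iostream>

// An exception type whose destructor reports when the exception object dies.
struct Probe : std::exception {
    ~Probe() override { std::cout << "exception object destroyed\n"; }
};

int main() {
    std::exception_ptr p;
    try {
        throw Probe{};
    } catch (...) {
        p = std::current_exception(); // p now shares ownership of the exception object
    }
    std::cout << "before reset\n";
    p = nullptr; // releases p's reference; the exception object it owns is destroyed
}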
I have found out the following: when I copy the code into a fresh project, the memory consumption doesn't grow. I compared the compiler options, and the following compiler option needs to be activated to avoid growing memory consumption: /EHsc
From the MSVC compiler documentation I read: [...]The default exception unwinding code doesn't destroy automatic C++ objects outside of try blocks that go out of scope because of an exception. Resource leaks and undefined behavior may result when a C++ exception is thrown.[...] So the growing memory consumption probably was a memory leak.
This is super basic, but I can't find the answer anywhere. There are lots of posts out there about throwing and catching, but what actually happens if I throw from function1 and then call function1 from function2 but don't catch it there? Does that mean it just gets rethrown to the caller of function2? Judging from the following I'd say yes, but I wanted a solid, guru-like answer before I soldier on and assume:
#include <iostream>

void function1()
{
    throw 1;
}

void function2()
{
    function1();
}

int main()
{
    try
    {
        function2();
    }
    catch(...)
    {
        std::cout << "caught!";
        return 0;
    }
    return 0;
}
Output:
caught!
Yes, that's how exceptions work. When an exception is thrown, it is caught by the innermost (most recently entered) try block on the call stack that has a matching handler. Since control is transferred back to a function lower in the stack, the variables in the scope of the functions in the upper stack frames go out of scope, and therefore their destructors are called. This is called stack unwinding. It is really nice to combine that with RAII (look up RAII if you don't know what that is). However, if any destructor throws an exception during stack unwinding, std::terminate will be called. Typically your program will then end (and this is why you are always advised to write non-throwing destructors).
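Here is a minimal sketch of that combination (the Resource type is illustrative): the destructor runs during unwinding, before the handler is entered.

#include <iostream>

// RAII type: acquires in the constructor, releases in the destructor.
struct Resource {
    Resource()  { std::cout << "acquire\n"; }
    ~Resource() { std::cout << "release\n"; } // runs during stack unwinding
};

void use() {
    Resource r;
    throw 1; // r.~Resource() is called while the stack unwinds
}

int main() {
    try { use(); }
    catch (int) { std::cout << "caught\n"; }
}
// prints: acquire, release, caught (in that order)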
From cppreference.com:
Once the exception object is constructed, the control flow works
backwards (up the call stack) until it reaches the start of a try
block, at which point the parameters of the associated catch blocks
are compared with the thrown expression to find a match. If no match
is found, the control flow continues to unwind the stack until the
next try block, and so on. If a match is found, the control flow jumps
to the matching catch block (the exception handler), which executes
normally.
As the control flow moves up the call stack, destructors are invoked
for all objects with automatic storage duration constructed since the
corresponding try-block was entered, in reverse order of construction.
If an exception is thrown from a constructor, destructors are called
for all fully-constructed non-static non-variant members and base
classes. This process is called stack unwinding.
Since function2() and function1() don't catch the exception, it will propagate up the call stack until it is caught by the first suitable handler, which you have in main(). Local objects' destructors are called along the way, which is called stack unwinding. If you didn't have a suitable handler anywhere, the C++ runtime would call std::terminate(), which by default calls abort() and ends the program (the unexpected() function is only involved when a dynamic exception specification is violated).
Yes, but it doesn't get "rethrown" - simply, when you throw an exception it will walk the call stack until it can find a catch block that can handle it; this is one of the most important "selling points" of exceptions.
If no suitable handler is found, std::terminate is called and your program terminates abnormally (notice that in this case it's not guaranteed that destructors will be called).
does that mean it just gets rethrown to the caller of function2?
No, it's not rethrown; the original throw sends it as far up the call stack as necessary until a handler is found. In this case, there's no handler in function1 or function2, so it ends up in the handler in main.
If it's not caught at all and tries to leave main, then the program will terminate. (There are ways to change that behaviour, as sketched below, though that's not particularly relevant to this question.)
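One of those ways is installing a terminate handler; a minimal sketch (the handler name onTerminate is illustrative):

#include <cstdlib>
#include <exception>
#include <iostream>

// std::set_terminate changes what happens when no matching handler is found.
void onTerminate() {
    std::cout << "uncaught exception, shutting down\n";
    std::abort(); // a terminate handler must not return
}

int main() {
    std::set_terminate(onTerminate);
    throw 1; // no handler anywhere: onTerminate is invoked instead of the default
}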
Now consider the following program, which catches the exception and throws a new one at each level:
#include <iostream>

void function1()
{
    try
    {
        throw 1;
    }
    catch(...)
    {
        std::cout << "Exception caught in function1." << std::endl;
        throw 1;
    }
}

void function2()
{
    try
    {
        function1();
    }
    catch(...)
    {
        std::cout << "Exception caught in function2." << std::endl;
        throw 1;
    }
}

int main()
{
    try
    {
        function2();
    }
    catch(...)
    {
        std::cout << "Exception caught in main." << std::endl;
    }
    return 0;
}
Its output is
Exception caught in function1.
Exception caught in function2.
Exception caught in main.
You could throw without any try and catch block. For example,
std::string &someClass::operator[](unsigned position) {
    // no try block needed here; the throw propagates to the caller
    if (position < length) { // note: 'position >= 0' is always true for an unsigned value
        // return the string at that position;
    } else {
        throw "out of range";
    }
}
In this case, the function checks whether the position is within the dynamic array's length; otherwise, it throws a string which tells the user the position they chose was out of bounds. So there are ways to use a throw statement without a try and catch block (but try and catch themselves are always used together; you cannot have one without the other).
From std::set_new_handler
The new-handler function is the function called by allocation functions whenever a memory allocation attempt fails. Its intended purpose is one of three things:
make more memory available
terminate the program (e.g. by calling std::terminate)
throw exception of type std::bad_alloc or derived from std::bad_alloc
Will the following overload guarantee anything?
#include <cstdlib>
#include <new>

// note: dynamic exception specifications like throw(std::bad_alloc)
// are deprecated in C++11 and removed in C++17
void* operator new(std::size_t size) throw(std::bad_alloc) {
    while (true) {
        void* pMem = std::malloc(size);
        if (pMem)
            return pMem;
        std::new_handler Handler = std::set_new_handler(0); // read the current handler...
        std::set_new_handler(Handler);                      // ...and put it back
        if (Handler)
            (*Handler)(); // give the handler a chance to free memory, then retry
        else
            throw std::bad_alloc();
    }
}
std::set_new_handler doesn't make memory available; it sets a new-handler function to be used when allocation fails.
A user-defined new-handler function might be able to make more memory available, e.g. by clearing an in-memory cache or destroying objects that are no longer needed. The default new-handler does not do this; it's a null pointer, so a failure to allocate memory just throws an exception, because the standard library cannot know which objects in your program might no longer be needed. If you write your own new handler, you might be able to return some memory to the system based on your knowledge of the program and its requirements.
Here is a working example illustrating how a custom new handler functions.
#include <iostream>
#include <new>

/// buffer to be allocated after the custom new handler has been installed
char* g_pSafetyBuffer = NULL;

/// exceptional one-time release of a global reserve
void my_new_handler()
{
    if (g_pSafetyBuffer) {
        delete [] g_pSafetyBuffer;
        g_pSafetyBuffer = NULL;
        std::cout << "[Free some pre-allocated memory]";
        return;
    }
    std::cout << "[No memory to free, throw bad_alloc]";
    throw std::bad_alloc();
}

/// illustrates how a custom new handler may work
int main()
{
    enum { MEM_CHUNK_SIZE = 1000*1000 }; // adjust according to your system
    std::set_new_handler(my_new_handler);
    g_pSafetyBuffer = new char[801*MEM_CHUNK_SIZE];
    try {
        while (true) {
            std::cout << "Trying another new... ";
            new char[200*MEM_CHUNK_SIZE]; // intentionally leaked to exhaust memory
            std::cout << " ...succeeded.\n";
        }
    } catch (const std::bad_alloc& e) {
        std::cout << " ...failed.\n";
    }
    return 0;
}
I do not suggest the demonstrated strategy for production code; it is too hard to predict how many allocations will succeed after your new_handler has been called once. I observed some successful allocations on my system (play with the numbers to see what happens on yours). Here's one possible output:
Trying another new... ...succeeded.
Trying another new... ...succeeded.
Trying another new... ...succeeded.
Trying another new... ...succeeded.
Trying another new... ...succeeded.
Trying another new... [Free some pre-allocated memory] ...succeeded.
Trying another new... ...succeeded.
Trying another new... ...succeeded.
Trying another new... ...succeeded.
Trying another new... [No memory to free, throw bad_alloc] ...failed.
Instead, from my perspective, release a safety buffer only to terminate your program in a safe way. Even proper exception handling needs memory; if there isn't enough available, abort() is called (as I learned recently).
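A minimal sketch of that safer strategy (names like g_emergencyReserve are illustrative): the reserve exists only to guarantee enough memory for a clean shutdown path.

#include <cstdlib>
#include <iostream>
#include <new>

// Emergency reserve whose only purpose is to make a clean shutdown possible.
char* g_emergencyReserve = new char[64 * 1024];

void shutdownHandler() {
    delete[] g_emergencyReserve; // make room for the shutdown path itself
    g_emergencyReserve = nullptr;
    std::cerr << "out of memory, terminating safely\n";
    std::exit(EXIT_FAILURE);
}

int main() {
    std::set_new_handler(shutdownHandler);
    // ... application code ...
}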
I have overloaded operator new but unfortunately have never been able to get the global handler for requesting more memory to execute on my compiler. I also don't understand, looking at the code snippet below, how invoking the global handler to request more memory leads to memory being allocated to p.
I would appreciate it if anybody could throw some light on this.
#include <cstdlib>
#include <new>

void* Pool::operator new(size_t size) throw(const char*)
{
    while (true)
    {
        void* p = std::malloc(100000000L); // note: deliberately ignores size (see below)
        if (p == 0)
        {
            std::new_handler ghd = std::set_new_handler(0); // de-install the current handler
            std::set_new_handler(ghd);                      // re-install it (this was only a way to read it)
            if (ghd)
                (*ghd)(); // give the global handler a chance to free memory
            else
                throw "out of memory exception";
        }
        else
        {
            return p;
        }
    }
}
To have any effect, some other part of the program must have installed a global handler previously. That handler must also have some kind of memory to release when the handler is called (perhaps some buffers or cache that can be discarded).
The default new_handler is just a null pointer, so your code is very likely to end up throwing an exception.
Also, I would have thrown a bad_alloc exception to be consistent with other operator new overloads.
Here are two things to discuss: the first is using a new_handler, the second is overloading operator new.
set_new_handler()
When you want to use a new_handler, you have to register it; this is typically the first thing to do after entering main(). The handler itself is also provided by you.
#include <cstdlib>
#include <iostream>
#include <new>

void noMemory() throw()
{
    std::cout << "no memory" << std::endl;
    std::exit(-1);
}

int main()
{
    std::set_new_handler(noMemory);
    // this will probably fail and noMemory() will be called
    char* c = new char[100000000L];
    std::cout << "end" << std::endl;
}
When no memory can be allocated, your registered handler will be called, and you have the chance to free up some memory. When the handler returns, operator new makes another attempt to allocate the requested amount of memory.
operator new
The structure of the default operator new is similar to what you presented. From the new_handler's point of view, the important part is the while(1) loop, since it is responsible for retrying the allocation after the new_handler has been called.
There are two ways out of this while(1) loop:
getting a valid pointer
throwing an exception
You have to keep this in mind when you provide a new_handler: if you cannot do anything to free up memory, you should de-install the handler (or terminate, or throw an exception); otherwise you can get stuck in an endless loop.
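For instance, a minimal sketch of a handler that de-installs itself (the name lastChanceHandler is illustrative):

#include <new>

// Once there is nothing left to free, the handler removes itself;
// operator new's next failed attempt then throws std::bad_alloc
// instead of looping forever.
void lastChanceHandler() {
    std::set_new_handler(nullptr);
}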
I guess omitting the parameter size in your code is just for testing purposes.
Also see Scott Meyers' Effective C++ Item 7 for details. Since operator new must return a valid pointer even when size = 0, the first thing to do in your operator new should be to overwrite size with 1 when the caller wants to allocate 0 bytes. This trick is simple and fairly effective.
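A sketch of that convention, combined with the retry loop discussed above (this uses std::get_new_handler, which C++11 added as a direct way to read the current handler):

#include <cstdlib>
#include <new>

// A conforming operator new sketch: 0-byte requests are bumped to 1 byte,
// and the new handler is consulted in a loop on failure.
void* operator new(std::size_t size) {
    if (size == 0)
        size = 1; // a distinct, valid pointer is required even for 0 bytes
    while (true) {
        if (void* p = std::malloc(size))
            return p;
        std::new_handler h = std::get_new_handler(); // avoids the set/reset trick
        if (!h)
            throw std::bad_alloc();
        h(); // let the handler free some memory, then retry
    }
}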
I was debugging an application and encountered the following code:
int Func()
{
    try
    {
        CSingleLock aLock(&m_CriticalSection, TRUE);
        {
            //user code
        }
    }
    catch(...)
    {
        //exception handling
    }
    return -1;
}
m_CriticalSection is a CCriticalSection.
I found that the user code throws an exception such that m_CriticalSection is not released at all. That suggests that for some reason the stack is corrupted and hence unwinding failed.
My question is:
1) In what scenarios can stack unwinding fail?
2) What kinds of exceptions can be thrown such that stack unwinding fails?
3) Can I solve this problem by putting the CSingleLock outside of the try block?
Thanks,
Are you getting an abnormal program termination?
I believe your CCriticalSection object will be released by CSingleLock's destructor. The destructor will always get called, since this is an object on the stack. When the user code throws, all stack frames between the throw and the catch in your function will be unwound.
However, chances are that some other object in your user code, or even the CSingleLock destructor, has thrown another exception in the meantime. In this case the m_CriticalSection object will not get released properly; std::terminate is called and your program dies.
Here's a sample to demonstrate. Note: I am using a std::terminate handler function to notify me of the state. You can also use std::uncaught_exception to see if there are any uncaught exceptions. There is a nice discussion and sample code on this here.
#include <cstdlib>
#include <exception>
#include <iostream>

struct S {
    S() { std::cout << __FUNCTION__ << std::endl; }
    // Since C++11 destructors are noexcept by default, so a throwing
    // destructor must be declared noexcept(false).
    ~S() noexcept(false) { throw __FUNCTION__; }
};

void func() {
    try {
        S s;
        {
            throw 42; // starts unwinding; ~S() then throws a second exception
        }
    } catch(int e) {
        std::cout << "Exception: " << e << std::endl;
    }
}

void rip() {
    std::cout << " help me, O mighty Lord!\n"; // pray
    std::abort(); // a terminate handler must not return
}

int main() {
    std::set_terminate(rip);
    try {
        func();
    }
    catch(const char* se) {
        std::cout << "Exception: " << se << std::endl;
    }
}
Read this FAQ for clarity.
Can I solve this problem by putting the CSingleLock outside of the try block?
Hard to say without having a look at the stack and the error(s)/crashes. Why don't you give it a try? It may also introduce a subtle bug by hiding the real problem.
Let me start by saying that I don't know what CSingleLock and CCriticalSection do.
What I do know is that an exception thrown in your "user code" section should unwind the stack and destroy any variables that were created within the try { } block.
To my eyes, I would expect your aLock variable to be destroyed by an exception, but not m_CriticalSection. You are passing a pointer to m_CriticalSection to the aLock variable, but the m_CriticalSection object already exists, and was created elsewhere.
Are you sure that the lifetime of your m_CriticalSection is longer than that of the CSingleLock?
Maybe someone corrupted your stack?
3) Can I solve this problem by putting the CSingleLock outside of the try block?
In this case, yes. But remember, it is not a good thing for performance to hold a mutex across a large block of code.
By the way, catch(...) is not good practice in general. On Win32 it can catch SEH exceptions too (depending on compiler settings such as /EHa), not only C++ exceptions. Maybe you have a crash in this function and are catching it with catch(...).
My question is:
1) In what scenarios can stack unwinding fail?
If exit(), terminate(), abort(), or unexpected() is called.
Apart from direct calls, in which situations is any of these likely to happen?
An unhandled exception is thrown. (Does not apply here.)
An exception is thrown from a destructor while another exception is propagating.
2) What kinds of exceptions can be thrown such that stack unwinding fails?
An exception is thrown from the copy constructor of the exception object in a throw expression.
An exception is thrown from a destructor while another exception is propagating.
An exception is thrown that is never caught (it is implementation-defined whether this actually unwinds the stack).
An exception is thrown that is not listed in the function's exception specification.
An exception is thrown across a C ABI boundary.
An exception is thrown inside a thread and never caught (it is implementation-defined what happens).
3) Can I solve this problem by putting the CSingleLock outside of the try block?
No. All of the above cause the application to terminate without further unwinding of the stack.