If I've registered my very own vectored exception handler (VEH) and a stack overflow exception has occurred in my process, when execution reaches the VEH, will I be able to allocate more memory on the stack? Will the allocation overwrite some other memory? What will happen?
I know that in .NET this is why the entire stack is committed during the thread's creation, but let's say I'm writing native code and such a scenario occurs... what will I be able to do inside the VEH? What about memory allocation?
In the case of a stack overflow, you'll have a tiny bit of stack to work with. It's enough stack to start a new thread, which will have an entirely new stack. From there, you can do whatever you need to do before terminating.
You cannot recover from a stack overflow; it would involve unwinding the stack, but your entire program would be destroyed in the process. Here's some code I wrote for a stack-dumping utility:
// stack overflows cannot be handled, try to get output then quit
set_current_thread(get_current_thread());
boost::thread t(stack_fail_thread);
t.join(); // will never exit
All this did was get the thread's handle so the stack-dumping mechanism knew which thread to dump, start a new thread to do the dumping/logging, and wait for it to finish (which won't happen; the thread calls exit()).
For completeness, get_current_thread() looked like this:
const HANDLE process = GetCurrentProcess();
HANDLE thisThread = 0;
// GetCurrentThread() returns a pseudo-handle; duplicate it into a real
// handle that other threads (like the dumping thread) can use.
DuplicateHandle(process, GetCurrentThread(), process,
                &thisThread, 0, TRUE, DUPLICATE_SAME_ACCESS);
All of these are "simple" functions that don't require a lot of room to work (and keep in mind, the compiler will most likely inline these, removing a function call). By contrast, you cannot throw an exception: not only does that require much more work, but destructors can do quite a bit of work too (like deallocating memory), which tends to be complex as well.
Your best bet is to start a new thread, save as much information about your application as you can or want, then terminate.
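For illustration, here is a minimal sketch of how such a handler might be wired up, assuming the Win32 VEH API (the handler and thread routine names are made up, and the dump/log step is left as a comment):

#include <windows.h>

// Runs on a brand-new thread, which has a brand-new stack.
DWORD WINAPI ReportAndDie(LPVOID)
{
    // dump/log whatever you need here, then terminate
    ExitProcess(1);
}

LONG CALLBACK StackOverflowHandler(PEXCEPTION_POINTERS info)
{
    if (info->ExceptionRecord->ExceptionCode == EXCEPTION_STACK_OVERFLOW)
    {
        HANDLE t = CreateThread(NULL, 0, ReportAndDie, NULL, 0, NULL);
        if (t) WaitForSingleObject(t, INFINITE); // never returns
    }
    return EXCEPTION_CONTINUE_SEARCH;
}

// registered once at startup:
// AddVectoredExceptionHandler(1, StackOverflowHandler);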
No, you can't allocate memory in a vectored exception handler.
MSDN says it explicitly:
"The handler should not call functions that acquire synchronization objects or allocate memory, because this can cause problems. Typically, the handler will simply access the exception record and return."
Stacks need to be contiguous, so you can't just allocate any random memory but have to allocate the next part of the address space.
If you are willing to preallocate the address space (i.e. just reserve a range of addresses without actually allocating memory), you can use VirtualAlloc. First you call it with the MEM_RESERVE flag to set aside the address space. Later, in your exception handler you can call it again with MEM_COMMIT to allocate physical memory to your pre-reserved address space.
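A rough sketch of that reserve-then-commit pattern (the 1 MiB size is an assumed figure, and error handling is omitted):

#include <windows.h>

// At startup: set aside a contiguous range of address space,
// without allocating physical memory for it yet.
void* reserved = VirtualAlloc(NULL, 1 << 20, MEM_RESERVE, PAGE_READWRITE);

// Later, e.g. inside the exception handler: commit physical storage
// to the range that was already reserved.
void* committed = VirtualAlloc(reserved, 1 << 20, MEM_COMMIT, PAGE_READWRITE);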
Related
Suppose I have a function that uses "new". Do I need to set aside some emergency memory in case "new" fails? Such as:
static char* emerg_mem = new char[EMERG_MEM_SZ];

FooElement* Foo::createElement()
{
    try
    {
        FooElement* ptr = new FooElement();
        return ptr;
    }
    catch (const std::bad_alloc&)
    {
        // release the reserve so cleanup code has memory to work with
        delete[] emerg_mem;
        emerg_mem = NULL;
        return NULL;
    }
}
So that there is enough (EMERG_MEM_SZ) memory remaining for class destructor functions etc, and to gracefully exit the program?
I am using GCC on Linux Mint, but I suppose this question could apply to any platform.
So that there is enough (EMERG_MEM_SZ) memory remaining for class destructor functions etc, and to gracefully exit the program?
Before attempting to provide such memory for destructors, you should first be able to give some reason why your destructors would need to allocate dynamic memory in the first place. Such a requirement is a serious red flag about the design of the class.
Is it necessary to put aside some emergency memory when new fails?
Not necessarily. Firstly, graceful exit is often possible without allocating any dynamic memory. Secondly, a program running within the protection of an operating system doesn't necessarily need to terminate gracefully in such a dire situation as lack of memory.
P.S. Some systems (Linux in particular, given certain configurations) "overcommit" memory and never throw std::bad_alloc. Instead, allocation always succeeds, physical memory isn't allocated until it is actually accessed, and if no memory is available at that point, the process (or some other process) is killed to free some memory. On such a system there is no way in C++ to recover from lack of memory.
I would say no.
When your application is out of memory and throws an exception, the stack will start to unwind (thus destroying objects and releasing memory as it goes). As a general rule, destructors should not be allocating dynamic memory; rather, they should be releasing it.
Thus if you have correctly used RAII then you will gain memory back as the stack unwinds, which potentially allows you to catch and continue (if the thing throwing is a discrete task whose results can be discarded).
Also, in most situations your application will slow to an unusable crawl long before actually throwing an out-of-memory exception (as the OS tries to consolidate memory to get you that elusive slot).
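As a minimal sketch of that "discrete task" case (process_item is a made-up, memory-hungry operation; the sizes are illustrative):

#include <new>
#include <string>
#include <vector>

// Hypothetical task that may allocate heavily.
std::string process_item(int n)
{
    return std::string(static_cast<std::size_t>(n) * 1024 * 1024, 'x');
}

int main()
{
    std::vector<std::string> results;
    for (int i = 0; i < 1000; ++i)
    {
        try
        {
            results.push_back(process_item(i));
        }
        catch (const std::bad_alloc&)
        {
            // unwinding already released the task's RAII-held memory;
            // discard this item's result and carry on
        }
    }
}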
I have a C++ app I'm developing on Linux. I'm allocating some dynamic memory and ultimately calling forkpty(). The child process calls execl(), and as we know, execl() never returns if it succeeds in executing the command. Furthermore, as we know, forkpty() makes a copy of all the parent's data. So, if the child process never returns control back to my application to do memory cleanup, is it safe to say one had better not have any dynamic memory allocated at the time execl() is called from the child process? I can't believe I could not find this one on here... Thanks in advance.
Allocated memory is part of the process image; when you call execl, the entire process image is replaced, and any memory in it simply "disappears" like the rest of it, returning to the OS, which will then use it elsewhere.
All of the "forked" process memory is freed as part of execl() (if the call is successful).
If this weren't the case, there would be a lot of memory leaks all over a regular Linux system, as it's almost impossible to write anything even a little complex without allocating memory; and, for example, if the arguments to execl() are allocated, you couldn't possibly free them before calling execl().
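A small sketch of the pattern under discussion (the command is illustrative; link with -lutil for forkpty):

#include <cstdlib>
#include <pty.h>
#include <unistd.h>

int main()
{
    char* buffer = static_cast<char*>(std::malloc(4096)); // parent allocation
    int master = -1;
    pid_t pid = forkpty(&master, NULL, NULL, NULL);
    if (pid == 0)
    {
        // Child: its copy of buffer vanishes with the old image on execl;
        // nothing leaks, the OS reclaims the entire address space.
        execl("/bin/ls", "ls", static_cast<char*>(NULL));
        _exit(127); // reached only if execl failed
    }
    std::free(buffer); // the parent still owns, and frees, its own copy
    return 0;
}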
The general rule is that only objects allocated on the free store can cause memory leaks; objects created on the stack don't.
Here is my doubt:
int main()
{
    myclass x;
    ...
    throw;
    ...
}
If the throw is not handled, it calls terminate(), which in turn calls abort() and crashes the application. At this time, the objects on the stack are not destroyed (the destructor is not invoked).
My understanding is "When the application terminates (either by abort or by normal exit), it frees all the memory that was allocated for the application". Thus this cannot be considered as memory leak.
Am I correct?
In a hosted environment (e.g. your typical Unix / Windows / Mac OS X, even DOS, machine) when the application terminates all the memory it occupied is automatically reclaimed by the operating system. Therefore, it doesn't make sense to worry about such memory leaks.
In some cases, before an application terminates, you may want to release all the dynamic memory you allocated in order to detect potential memory leaks through a leak detector, like valgrind. However, even in such a case, the example you describe wouldn't be considered a memory leak.
In general, failing to call a destructor is not the same as causing a memory leak. Memory leaks stem from memory allocated on the heap (with new or malloc or container allocators). Memory allocated on the stack is automatically reclaimed when the stack is unwound. However, if an object holds some other resource (say, a file or a window handle), failing to call its destructor will cause a resource leak, which can also be a problem. Again, modern OSes will reclaim their resources when an application terminates.
edit: as mentioned by GMan, "throw;" re-throws a previously thrown exception, or if there is none, immediately terminates. Since there is none in this case, immediate termination is the result.
Terminating a process always cleans up any leftover userland memory in any modern OS, so is not typically considered a "memory leak", which is defined as unreferenced memory not deallocated in a running process. However, it's really up to the OS as to whether such a thing is considered a "memory leak".
The answer is: it depends on the OS. I can't think of a modern OS that does not do it this way. But on old systems (up to Windows 3.1, I think, and some old embedded Linux platforms), if a program closed without releasing its memory, the OS would hold those allocations until you rebooted.
Memory leaks are considered a problem because a long running application will slowly bleed away system memory and may in the worst case make the whole machine unusable due to low memory conditions. In your case, the application terminates and all memory allocated to the application will be given back to the system, so hardly a problem.
The real question is, "Does myclass allocate any memory itself that must be free/deleted?"
If it does not -- if the only memory it uses is its internal members -- then it exists entirely on the stack. Once it leaves that function (however it does so), the memory on the stack is reclaimed and reused. myclass is gone. That's just the way stacks work.
If myclass does allocate memory that needs to be freed in its dtor, then you are still in luck, as the dtor will be called as the stack is unwound during the throw. The dtor will have already been called before the exception is declared unhandled and terminate is called.
The only place you will have a problem is if myclass has a dtor and the dtor throws an exception of its own. The second throw occurring during the stack unwind from the first will cause terminate to be called immediately, without any more dtors being called.
From OP:

"If the throw is not handled, it calls terminate(), which in turn calls abort() and crashes the application. At this time, the objects on the stack are not destroyed (the destructor is not invoked)."
This is implementation-defined behavior.

§15.3/9: "If no matching handler is found in a program, the function terminate() is called; whether or not the stack is unwound before this call to terminate() is implementation-defined (15.5.1)."

Therefore, whether this constitutes a memory leak or not is also implementation-defined, I guess.
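A minimal way to observe that latitude (whether the destructor's message prints depends on the implementation):

#include <cstdio>

struct myclass
{
    ~myclass() { std::puts("~myclass"); } // may or may not run
};

int main()
{
    myclass x;
    // No handler anywhere: terminate() is called; whether the stack is
    // unwound first (running ~myclass) is implementation-defined.
    throw 42;
}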
My understanding is "When the application terminates (either by abort or by normal exit), it frees all the memory that was allocated for the application". Thus this cannot be considered as memory leak.
Am I correct?
A memory leak is a type of programming error that ranks somewhat lower on the scale of programming errors than an uncaught exception.

In other words, if the program doesn't terminate properly, a.k.a. crashes, then it is too soon to speak about memory leaks.

On another note, most memory analyzers I have worked with over the past decade would not raise a memory-leak alarm in this case, because they do not raise any alarms when a program simply crashes. One first has to make the program not crash; only then can one debug the memory leaks.
Recently I heard that memory on the stack is not shared with other threads, while memory on the heap is shared with other threads.
I normally do:
HWND otherThreadHwnd;
UINT commandId;
// initialize commandId and otherThreadHwnd

struct MyData {
    int data1_;
    long data2_;
    void* chunk_;
};

int abc() {
    MyData myData;
    // initialize myData
    SendMessage(otherThreadHwnd, commandId, 0,
                reinterpret_cast<LPARAM>(&myData));
    // read myData
    return 0;
}
Is it alright to do this?
Yes, it is safe in this instance.
Data on the stack only exists for the lifetime of the function call. Since SendMessage is a synchronous, blocking call, the data will be valid for the duration of that call.
This code would be broken if you replace SendMessage with a call to PostMessage, SendNotifyMessage, or SendMessageCallback, since they would not block and the function may have returned before the target window received the message.
I think two different issues are being confused by whoever told you that "memory in the stack is not shared with other threads":

object lifetime - the data on the stack is only valid as long as the thread hasn't left the scope of the variable. In the example you give, you handle this by making the call to the other thread synchronous.

memory address visibility - the address space of a process is shared among the various threads in that process, so variables addressable by one thread are addressable by other threads in that process. If you are passing the address to a thread in a different process, the situation is quite different, and you'd need some other mechanism (which might be to ensure that the memory block is mapped into both processes - but I don't think that can normally be done with stack memory).
Yes, it is okay.
SendMessage works in blocking mode. Even though myData is allocated on the stack, its address is still visible to all threads in the process. Each thread has its own private stack, but data on a thread's stack can be explicitly shared, as your code does. However, as you guessed, do not use PostThreadMessage in such a case.
What you heard about is a "potential infringement of privacy": sharing data on one thread's private stack with another thread.
Although it is not encouraged, it is only a "potential" problem--with correct synchronization, it can be done safely. In your case, this synchronization is done by ::SendMessage(); it will not return until the message is processed in the other thread, so the data will not go out of scope on the main thread's stack. But beware that whatever you do with this pointer in the worker thread, it must be done before returning from the message handler (if you're storing it somewhere, be sure to make a copy).
As others have said already, how you have it written is just fine, and in general, nothing will immediately fail when passing a pointer to an object on the stack to another thread as long as everything's synchronized. However, I tend to cringe a little when doing so because things that seem threadsafe can get out of their intended order when an exception occurs or if one of the threads is involved with asynchronous IO callbacks. In the case of an exception in the other thread during your call to SendMessage, it may return 0 immediately. If the exception is later handled in the other thread, you may have an access violation. Yet another potential hazard is that whatever's being stored on the stack can never be forcibly disposed of from another thread. If it gets stuck waiting for some callback, object, etc, forever and the user has decided to cancel or quit the application, there is no way for the working thread to be sure the stalled thread has tidied up whatever objects are on its stack.
My point is this: In simple scenarios as you've described where everything works perfectly, nothing ever changes, and no outside dependencies fail, sharing pointers to the local stack is safe - but since allocating on the heap is really just as simple, and it gives you the opportunity to explicitly control the object's lifetime from any thread in extenuating circumstances, why not just use the heap?
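For what it's worth, a sketch of that heap-based alternative, reusing the question's otherThreadHwnd and commandId (the ownership convention is an assumption: the receiver deletes the payload):

// Sender: allocate the payload and hand ownership to the receiver.
MyData* data = new MyData{ /* ... */ };
if (!PostMessage(otherThreadHwnd, commandId, 0,
                 reinterpret_cast<LPARAM>(data)))
    delete data; // the post failed, so we still own the payload

// Receiver, inside its window procedure, for the same message id:
// MyData* data = reinterpret_cast<MyData*>(lParam);
// ...use data, then...
// delete data;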
Finally, I strongly suggest that you be very careful with the void* chunk_ member of your MyData structure, as it is not threadsafe as described if it's copied in the other thread.
Can the new operator throw an exception in real life?
And if so, do I have any options for handling such an exception apart from killing my application?
Update:
Do any real-world, new-heavy applications check for failure and recover when there is no memory?
See also:
How often do you check for an exception in a C++ new instruction?
Is it useful to test the return of “new” in C++?
Will new return NULL in any case?
Yes, new can and will throw if allocation fails. This can happen if you run out of memory or if you try to allocate a block of memory that is too large.
You can catch the std::bad_alloc exception and handle it appropriately. Sometimes this makes sense, other times (read: most of the time) it doesn't. If, for example, you were trying to allocate a huge buffer but could work with less space, you could try allocating successively smaller blocks.
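A quick sketch of that fallback loop (the 4 KiB minimum is made up):

#include <cstddef>
#include <new>

// Try successively smaller buffers until one fits; returns nullptr if
// even the minimum can't be satisfied. On success, size holds the
// amount actually allocated.
char* allocate_best_effort(std::size_t& size)
{
    while (size >= 4096)
    {
        try
        {
            return new char[size];
        }
        catch (const std::bad_alloc&)
        {
            size /= 2; // halve the request and retry
        }
    }
    return nullptr;
}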
The new and new[] operators should throw std::bad_alloc, but this is not always the case, as the behavior can sometimes be overridden.

One can use std::set_new_handler, and then something entirely different can happen instead of throwing std::bad_alloc. The standard requires that the handler either make more memory available, terminate the program, or throw std::bad_alloc (or something derived from it). But of course it may not comply.
Disclaimer: I am not suggesting to do this.
If you are running on a typical embedded processor running Linux without virtual memory it is quite likely your process will be terminated by the operating system before new fails if you allocate too much memory.
If you are running your program on a machine with less physical memory than the maximum virtual memory (2 GB on standard Windows), you will find that once you have allocated an amount of memory approximately equal to the available physical memory, further allocations will succeed but will cause paging to disk. This will bog your program down, and you might not actually be able to get to the point of exhausting virtual memory. So you might not get an exception thrown.
If you have more physical memory than the virtual memory, and you simply keep allocating memory, you will get an exception when you have exhausted virtual memory to the point where you can not allocate the block size you are requesting.
If you have a long-running program that allocates and frees in many different block sizes, including small blocks, with a wide variety of lifetimes, the virtual memory may become fragmented to the point where new will be unable to find a large enough block to satisfy a request. Then new will throw an exception. If you happen to have a memory leak that leaks the occasional small block in a random location that will eventually fragment memory to the point where an arbitrarily small block allocation will fail, and an exception will be thrown.
If you have a program error that accidentally passes a huge array size to new[], new will fail and throw an exception. This can happen for example if the array size is actually some sort of random byte pattern, perhaps derived from uninitialized memory or a corrupted communication stream.
All the above applies to the default global new. However, you can replace global new, and you can provide class-specific new. These too can throw, and the meaning of that situation depends on how you programmed it. It is usual for new to include a loop that attempts all possible avenues for getting the requested memory, and to throw only when all of those are exhausted. What you do then is up to you.
You can catch an exception from new and use the opportunity it provides to document the program state around the time of the exception. You can "dump core". If you have a circular instrumentation buffer allocated at program startup, you can dump it to disk before you terminate the program. The program termination can be graceful, which is an advantage over simply not handling the exception.
I have not personally seen an example where additional memory could be obtained after the exception. One possibility however, is the following: Suppose you have a memory allocator that is highly efficient but not good at reclaiming free space. For example, it might be prone to free space fragmentation, in which free blocks are adjacent but not coalesced. You could use an exception from new, caught in a new_handler, to run a compaction procedure for free space before retrying.
Serious programs should treat memory as a potentially scarce resource, control its allocation as much as possible, monitor its availability and react appropriately if something seems to have gone dramatically wrong. For example, you could make a case that in any real program there is quite a small upper bound on the size parameter passed to the memory allocator, and anything larger than this should cause some kind of error handling, whether or not the request can be satisfied. You could argue that the rate of memory increase of a long-running program should be monitored, and if it can be reasonably predicted that the program will exhaust available memory in the near future, an orderly restart of the process should be begun.
On Unix systems, it's customary to run long-running processes with memory limits (using ulimit) so that they don't eat up all of a system's memory. If your program hits that limit, you will get std::bad_alloc.
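The same sort of cap can be set from inside the process; a sketch (the 256 MiB figure is arbitrary):

#include <sys/resource.h>
#include <new>

int main()
{
    // Cap this process's address space, similar in spirit to ulimit -v.
    rlimit lim = {256ul * 1024 * 1024, 256ul * 1024 * 1024};
    setrlimit(RLIMIT_AS, &lim);

    try
    {
        char* p = new char[512ul * 1024 * 1024]; // exceeds the cap
        delete[] p;
    }
    catch (const std::bad_alloc&)
    {
        // new fails cleanly instead of dragging the whole system down
    }
    return 0;
}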
Update for the OP's edit: the most typical case of programs recovering from an out-of-memory condition is in garbage-collected systems, which then perform a GC and continue. Though, this sort of on-demand GC is really for last-ditch efforts only; usually, good programs try to GC periodically to reduce stress on the collector.
It's less usual for non-GC programs to recover from out-of-memory issues, but for Internet-facing servers, one way to recover is to simply reject the request that's causing the memory to run out with a "temporary" error. ("First in, first served" strategy.)
osgx said:

"Do any real-world, new-heavy applications check for failure and recover when there is no memory?"
I have answered this previously in my answer to this question, which is quoted below:
It is very difficult to handle this sort of situation. You may want to return a meaningful error to the user of your application, but if it's a problem caused by lack of memory, you may not even be able to afford the memory to allocate the error message. It's a bit of a catch-22 situation, really.

There is a defensive programming technique (sometimes called a memory parachute or rainy day fund) where you allocate a chunk of memory when your application starts. When you then handle the bad_alloc exception, you free this memory up and use the available memory to close down the application gracefully, including displaying a meaningful error to the user. This is much better than crashing :)
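A bare-bones sketch of that parachute idea (the reserve size and the shutdown actions are placeholders, not a definitive implementation):

#include <cstdlib>
#include <new>

static char* parachute = new char[512 * 1024]; // rainy day fund, size assumed

void out_of_memory_handler()
{
    delete[] parachute; // release the reserve so there is room to work
    parachute = nullptr;
    // ...log, warn the user, save their document...
    std::exit(EXIT_FAILURE); // shut down gracefully instead of crashing
}

// installed early in main():
// std::set_new_handler(out_of_memory_handler);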
You don't need to handle the exception in every single new :) Exceptions can propagate. Design your code so that there are certain points in each "module" where that error is handled.
It depends on the compiler/runtime and on the operator new that you are using (e.g. certain versions of Visual Studio will not throw out of the box, but would rather return a NULL pointer a la malloc instead.)
You can always catch a std::bad_alloc exception, or explicitly use nothrow new to return NULL instead of throwing. (Also see past StackOverflow posts revolving around the subject.)
Note that operator new, like malloc, will fail when you have run out of memory, out of address space (e.g. 2-3GB in a 32-bit process depending on the OS), out of quota (ulimit was already mentioned) or out of contiguous address space (e.g. fragmented heap.)
Yes, new can throw std::bad_alloc (a subclass of std::exception), which you may catch.
If you absolutely want to avoid this exception and instead are ready to test the result of new for a null pointer, you may use the std::nothrow form:

T* p = new (std::nothrow) T(...);
if (p == 0)
{
    // Do something about the bad allocation!
}
else
{
    // Here you may use p.
}
Yes new will throw an exception if there is no more memory available, but that doesn't mean you should wrap every new in a try ... catch. Only catch the exception if your program can actually do something about it.
If the program cannot do anything to handle that exceptional situation, which is often the case when you run out of memory, there is no use in catching the exception. If the only thing you could reasonably do is abort the program, you may as well let the exception bubble up to the top level, where it will terminate the program just the same.
In many cases there's no reasonable recovery for an out of memory situation, in which case it's probably perfectly reasonable to let the application terminate. You might want to catch the exception at a high level to display a nicer error message than the compiler might give by default, but you might have to play some tricks to get even that to work (since the process is likely to be very low on resources at that point).
Unless you have a special situation that can be handled and recovered, there's probably no reason to spend a lot of effort trying to handle the exception.
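As a sketch of that high-level catch (run_application is a stand-in for the real program; note the handler itself avoids allocating):

#include <cstdio>
#include <cstdlib>
#include <new>

int run_application()
{
    // placeholder for the application proper
    return 0;
}

int main()
{
    try
    {
        return run_application();
    }
    catch (const std::bad_alloc&)
    {
        // use a static message; allocating one here would likely fail too
        std::fputs("fatal: out of memory\n", stderr);
        return EXIT_FAILURE;
    }
}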
Note that in Windows, very large new/mallocs will just allocate from virtual memory. In practice, your machine will crash before you see that exception.
char *pCrashMyMachine = new char[TWO_GIGABYTES];
Try it if you dare!
I use Mac OS X, and I've never seen malloc return NULL (which would imply an exception from new in C++). The machine bogs down, does its best to allocate dwindling memory to processes, and finally sends SIGSTOP and invites the user to kill processes rather than have them deal with allocation failure.
However, that's just one platform. CERTAINLY there are platforms where the default allocator does throw. And, as Chris says, ulimit may introduce an artificial constraint so that an exception would be the expected behavior.
Also, there are allocators besides the default one/malloc. If a class overrides operator new, you use custom arguments to new(…), or you pass an allocator object into a container, it probably defines its own conditions to throw bad_alloc.
The new operator will throw a std::bad_alloc exception when there is not enough memory available in the pool to fulfill a runtime request.

This can happen due to bad design or when allocated memory is not freed correctly.

Handling such an exception depends on your design; one way is to pause and retry some time later, hoping that more memory has been returned to the pool and the request may succeed.
Most realistically, new will throw due to a decision to limit a resource. Say a class (which may be memory-intensive) takes memory out of a physical pool; if too many objects take from it (we need memory for other things like sound, textures, etc.), it may throw instead of crashing later on, when something that should be able to allocate memory can't (which would look like a weird side effect).

Overloading new can be useful on devices with restricted memory, such as handhelds, or on consoles where it's too easy to go overboard with cool effects.
Yes, new can and will throw.
Since you are asking about 'real' programs: I've worked on various shrink-wrapped commercial software applications for over 20 years. 'Real' programs with millions of users. That you can go and buy off the shelf today. Yes, new can throw.
There are various ways to handle this.
First, write your own new_handler (this is called before new gives up and throws - see the set_new_handler() function). When your new_handler is called, see if you can free some things you don't really need, and warn the user that they are running low on memory (yes, it can be hard to warn the user about anything when memory is really low).

Another approach is to pre-allocate some 'extra' memory at the start of your program. When you run out of memory, use this reserve to help save a copy of the user's document to disk. Then warn, and maybe exit gracefully.

Etc. This is just an overview; obviously there is more to it.
Handling low memory is not easy.
The new-handler function is the function called by the allocation functions whenever an attempt to allocate memory fails. We can have it do our own logging or take some special action, e.g., arranging for more memory.
Its intended purpose is one of three things:
1) make more memory available
2) terminate the program (e.g. by calling std::terminate)
3) throw exception of type std::bad_alloc or derived from std::bad_alloc.
The default implementation throws std::bad_alloc. The user can install his own new-handler, which may offer behavior different from the default one. This should be used only when you really need it. See the example below for more clarification and the default behavior:
#include <iostream>
#include <new>

void handler()
{
    std::cout << "Memory allocation failed, terminating\n";
    std::set_new_handler(nullptr); // let the next failure throw std::bad_alloc
}

int main()
{
    std::set_new_handler(handler);
    try {
        while (true) {
            new int[100000000ul];
        }
    } catch (const std::bad_alloc& e) {
        std::cout << e.what() << '\n';
    }
}
It's good to check for/catch this exception when you are allocating memory based on something given from outside (from user input, the network, etc.), because it could indicate an attempt to compromise your application/service/system, and you shouldn't allow that to happen.
The new operator will throw a std::bad_alloc exception when you run out of memory (virtual memory, to be precise).

If new throws an exception, it is a serious error: more than the available VM is being allocated (and it eventually fails). You can try reducing the amount of memory requested rather than exiting the program, by catching the std::bad_alloc exception.