My program creates a minidump on crash (using MiniDumpWriteDump from DBGHELP.DLL) and I would like to keep the size of the dump as low as possible while still having the important memory information available. I have gone through the different combinations of flags and callback functionality you can pass to MiniDumpWriteDump (as documented on debuginfo.com and MSDN).
I think I am limited to these MINIDUMP_TYPE flags, since it has to work on an old WinXP machine:
MiniDumpNormal
MiniDumpWithDataSegs
MiniDumpWithFullMemory
MiniDumpWithHandleData
MiniDumpFilterMemory
MiniDumpScanMemory
I am looking for a way to combine these flags and the callback function to get a dump with the following requirements:
Relatively small size (a full memory dump results in a ~200MB file; I want at most 20MB)
Stack trace of the crashed thread, maybe also the stack traces of the other threads, but without memory info
Memory information for the whole stack of the crashed thread. This is where it gets complicated: including stack memory should be no problem in terms of size, but heap memory might be overkill.
The question is how I can limit the memory info to the crashed thread, and how to include the stack memory (local variables) of the whole call stack.
Is it also possible to include parts of the heap memory, e.g. only those parts that are referenced by the current call stack?
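For what it's worth, here is a minimal sketch (untested, a sketch rather than a definitive implementation) of the callback approach: it strips the stack memory of every thread except the crashed one, while keeping all thread contexts and module records. g_crashedThreadId is a hypothetical global that your exception filter would set before calling MiniDumpWriteDump.

#include <windows.h>
#include <dbghelp.h>

static DWORD g_crashedThreadId = 0;  // hypothetical: set by the SEH filter

static BOOL CALLBACK DumpCallback(PVOID /*param*/,
                                  const PMINIDUMP_CALLBACK_INPUT input,
                                  PMINIDUMP_CALLBACK_OUTPUT output)
{
    switch (input->CallbackType)
    {
    case ThreadCallback:
        // Drop the stack memory of every other thread. Their contexts
        // (registers) stay in the dump, but their full call stacks will
        // no longer be walkable.
        if (input->Thread.ThreadId != g_crashedThreadId)
            output->ThreadWriteFlags &= ~ThreadWriteStack;
        return TRUE;
    case IncludeThreadCallback:
    case IncludeModuleCallback:
    case ModuleCallback:
        return TRUE;  // keep all threads and modules
    default:
        return FALSE;
    }
}

Hook it up through a MINIDUMP_CALLBACK_INFORMATION structure passed as the last parameter of MiniDumpWriteDump, combined with e.g. MiniDumpNormal | MiniDumpScanMemory.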
Related
So let's say I have created a thread on Windows for which I know that the default stack size is much more than it needs. Can I then run the application and ask the thread how much stack it has actually used, so that I know what to set its stack size to instead of the default?
Windows usually doesn't commit the entire stack, it only reserves it. (Well, unless you ask it to, e.g. by specifying a non-zero stack size argument for CreateThread without also passing the STACK_SIZE_PARAM_IS_A_RESERVATION flag).
You can use this to figure out how much stack your thread has ever needed while running, including any CRT, WinAPI or third-party library calls.
To do that, simply read the StackBase and StackLimit values from the TEB - see the answers to this question for how to do that. The difference between the two values should be the amount of stack memory that has been committed, i.e. the amount of stack memory that the thread has actually used.
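A minimal sketch of the TEB approach for the current thread, assuming the usual (undocumented but long-stable) trick that the TEB begins with an NT_TIB:

#include <windows.h>
#include <cstdio>

static size_t CommittedStackBytes()
{
    const NT_TIB* tib = reinterpret_cast<const NT_TIB*>(NtCurrentTeb());
    // StackBase is the highest stack address, StackLimit the lowest
    // committed one, so the difference is the committed (used) stack.
    return static_cast<char*>(tib->StackBase) -
           static_cast<char*>(tib->StackLimit);
}

int main()
{
    std::printf("committed stack: %lu bytes\n",
                (unsigned long)CommittedStackBytes());
}

Since the committed region only ever grows, query it as late as possible in the thread's lifetime, e.g. just before the thread exits.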
Alternatively, if a manual process is sufficient: Simply start the application in WinDBG, set a breakpoint before the thread exits and then dump the TEB with the !teb command. You can also dump the entire memory map with !address and look for committed areas with usage Stack.
I have a native C++ application from which I am calling a .NET DLL (an external function). I see that when I call into managed code, the full stack size specified with the /STACK linker option is committed for the thread; however, if I make only native function calls, only the stack required for the calculations is committed.
Below are my observations:
With /stack option set to 80MB, and with calls to managed external function.
With /stack option set to 1MB, and with calls to managed external function.
With /stack option set to 80MB, and with calls to native internal function.
When we make calls to a .NET external function, there are also some extra threads related to the GC. The threads in our application also use considerably more stack space compared to the case where we do not call the .NET external function. I am not sure if the managed stack is placed on top of the native stack. Can someone help me understand why the full stack for a thread is committed when we make calls to a .NET external function, and how memory management works in a mixed-mode application?
OK I finally found the answer.
The CLR always commits the whole stack memory for managed threads as soon as a managed thread is created, or lazily when a native thread becomes a managed thread. This is done to ensure that stack overflow can be dealt with predictably by the execution engine.
In managed code, the System.Threading.Thread class's constructor provides two overloads that accept a maxStackSize parameter. Since the full stack is committed at creation time for all managed threads, the maxStackSize parameter represents both the reserve and the commit size: they are effectively the same.
Just to clarify, there are three steps to using memory on the stack:
Reserve virtual address space for the stack in the process
Commit the pages
Use the pages
The default behaviour in normal Win32 programs is to do only step 1 when a thread starts up. The problem with that is that dynamic stack growth can fail under high load - you might find that when you want to add a stack frame, the system doesn't have any free virtual memory available for you.
By doing step 2, the CLR ensures that memory will be kept in reserve so that step 3 will never fail.
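To make the reserve/commit distinction concrete, here is a hedged Win32 sketch using the STACK_SIZE_PARAM_IS_A_RESERVATION flag mentioned above (the 8MB size is arbitrary):

#include <windows.h>

static DWORD WINAPI Worker(LPVOID) { return 0; }

int main()
{
    // Step 1 only: reserve 8MB of address space, commit pages on demand.
    HANDLE h1 = CreateThread(NULL, 8 * 1024 * 1024, Worker, NULL,
                             STACK_SIZE_PARAM_IS_A_RESERVATION, NULL);
    // Steps 1 and 2: without the flag, dwStackSize is the *commit* size,
    // which is essentially what the CLR does for managed threads.
    HANDLE h2 = CreateThread(NULL, 8 * 1024 * 1024, Worker, NULL, 0, NULL);

    WaitForSingleObject(h1, INFINITE);
    WaitForSingleObject(h2, INFINITE);
    CloseHandle(h1);
    CloseHandle(h2);
}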
Any useful information regarding this is still appreciated.
Thank you.
We have a front end written in Visual Basic 6.0 that calls several back end DLLs written in mixed C/C++. The problem is that each DLL appears to have its own heap and one of them isn’t big enough. The heap collides with the program stack when we’ve allocated enough memory.
Each DLL is written entirely in C, except for the basic DLL wrapper, which is written in C++. Each DLL has a handful of entry points. Each entry point immediately calls a C routine. We would like to increase the size of the heap in the DLL, but haven’t been able to figure out how to do that. I searched for guidance and found these MSDN articles:
http://msdn.microsoft.com/en-us/library/hh405351(v=VS.85).aspx
These articles are interesting but provide conflicting information. In our problem it appears that each DLL has its own heap. This matches the “Heaps: Pleasures and Pains” article, which says that the C Run-Time (CRT) library creates its own heap on startup. The “Managing Heap Memory” article says that the CRT library allocates out of the default process heap. The “Memory management options in Win32” article says the behavior depends on the version of the CRT library being used.
We’ve temporarily solved the problem by allocating memory from a private heap. However, in order to improve the structure of this very large, complex program, we want to switch from C with a thin C++ wrapper to real C++ with classes. We’re pretty certain that the new and delete operators won’t allocate memory from our private heap, and we’re wondering how to control the size of the heap C++ uses to allocate objects in each DLL. The application needs to run on all desktop versions of Windows NT, from 2000 through 7.
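(For context, not definitive documentation: one way to route a DLL's C++ allocations into a private heap is to replace the global new and delete operators inside that DLL. A rough sketch, assuming the DLL statically links the CRT so the replacement stays local to it; the 64MB initial size is a placeholder.)

#include <windows.h>
#include <new>

namespace {
    HANDLE PrivateHeap()
    {
        // Initial size is committed up front; a maximum of 0 lets it grow.
        // (A real version would create the heap in DllMain; this static
        // initialization is not thread-safe on old compilers.)
        static HANDLE heap = HeapCreate(0, 64 * 1024 * 1024, 0);
        return heap;
    }
}

void* operator new(size_t size)
{
    void* p = HeapAlloc(PrivateHeap(), 0, size);
    if (!p) throw std::bad_alloc();
    return p;
}

void operator delete(void* p)
{
    if (p) HeapFree(PrivateHeap(), 0, p);
}

// operator new[]/delete[] and the nothrow overloads need the same treatment.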
The Question
Can anyone point us to definitive and correct documentation that explains how to control the size of the heap C++ uses to allocate objects?
Several people have asserted that stack corruption due to heap allocations overwriting the stack is impossible. Here is what we observed. The VB front end uses four DLLs that it dynamically loads. Each DLL is independent of the others and provides a handful of methods called by the front end. All the DLLs communicate via data structures written to files on disk. These data structures are all statically structured. They contain no pointers, just value types and fixed-size arrays of value types. The problem DLL is invoked by a single call where a file name is passed. It is designed to allocate about 20MB of data structures required to complete its processing. It does a lot of calculation, writes the results to disk, releases the 20MB of data structures, and returns an error code. The front end then unloads the DLL.
While debugging the problem under discussion, we set a breakpoint at the beginning of the data structure allocation code, watched the memory values returned from the calloc calls, and compared them with the current stack pointer. We watched as the allocated blocks approached the stack. After the allocation was complete, the stack began to grow until it overlapped the heap. Eventually the calculations wrote into the overlapping region and corrupted the stack. As the stack unwound, it tried to return to an invalid address and crashed with a segmentation fault.
Each of our DLLs is statically linked to the CRT, so each DLL has its own CRT heap and heap manager. Microsoft says in http://msdn.microsoft.com/en-us/library/ms235460(v=vs.80).aspx:
Each copy of the CRT library has a separate and distinct state.
As such, CRT objects such as file handles, environment variables, and
locales are only valid for the copy of the CRT where these objects are
allocated or set. When a DLL and its users use different copies of the
CRT library, you cannot pass these CRT objects across the DLL boundary
and expect them to be picked up correctly on the other side.
Also, because each copy of the CRT library has its own heap manager,
allocating memory in one CRT library and passing the pointer across a
DLL boundary to be freed by a different copy of the CRT library is a
potential cause for heap corruption.
We don't pass pointers between DLLs. We aren't experiencing heap corruption, we are experiencing stack corruption.
OK, the question is:
Can anyone point us to definitive and correct documentation that explains how to control the size of the heap C++ uses to allocate objects?
I am going to answer my own question. I got the answer from reading Raymond Chen's blog The Old New Thing, specifically the post "There's also a large object heap for unmanaged code, but it's inside the regular heap". In that article Raymond recommends Advanced Windows Debugging by Mario Hewardt and Daniel Pravat. This book has very specific information on both stack and heap corruption, which is what I wanted to know. As a plus, it provides all sorts of information about how to debug these problems.
Could you please elaborate on this statement of yours:
The heap collides with the program stack when we’ve allocated enough memory.
If we're talking about Windows (or any other mature platform), this should not be happening: the OS makes sure that stacks, heaps, mapped files and other objects never intersect.
Also:
Can anyone point us to definitive and correct documentation that explains how to control the size of the heap C++ uses to allocate objects?
The heap size is not fixed on Windows: it grows as the application uses more and more memory. It will grow until all available virtual memory space for the process is used. It is pretty easy to confirm this: just write a simple test app which keeps allocating memory and counts how much has been allocated. On a default 32-bit Windows you'll reach almost 2GB. Since the heap clearly doesn't occupy all of that space initially, it must grow in the process.
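For instance, a trivial version of that test might look like this (it leaks on purpose; the OS reclaims everything at exit):

#include <cstdio>
#include <cstdlib>

int main()
{
    const size_t chunk = 1024 * 1024;  // allocate 1MB at a time
    unsigned long megabytes = 0;
    while (std::malloc(chunk) != NULL)  // stop once the heap cannot grow
        ++megabytes;
    std::printf("allocated about %lu MB before malloc failed\n", megabytes);
}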
Without many details about the "collision" it's hard to tell what's happening in your case. However, looking at the tags to this question prompts me to one possibility. It is possible (and happens quite often, unfortunately) that ownership of allocated memory areas is being passed between modules (DLLs in your case). Here's the scenario:
There are two DLLs, A and B. Both of them have created their own heaps.
DLL A allocates an object on its heap and passes the pointer, and ownership, to B.
DLL B receives the pointer, uses the memory, and deallocates the object.
If the heaps are different, most heap managers will not check whether the memory region being deallocated actually belongs to them (mostly for performance reasons). So B's heap manager deallocates something that doesn't belong to it, and by doing that corrupts the other module's heap. This may (and often does) lead to a crash. But not always. Depending on your luck (and the particular heap manager implementation), this operation may change one of the heaps in such a way that the next allocation happens outside of the area where the heap is located.
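A contrived illustration of that scenario (all names here are hypothetical), with each DLL statically linked against its own CRT:

#include <stdlib.h>

// In A.dll (has its own CRT heap):
extern "C" __declspec(dllexport) char* A_CreateBuffer(void)
{
    return (char*)malloc(256);  // block lives in A's heap
}

// In B.dll (has a different CRT heap):
extern "C" __declspec(dllexport) void B_ConsumeBuffer(char* p)
{
    /* ... use the buffer ... */
    free(p);  // WRONG: B's heap manager releases a block it never
              // allocated, silently corrupting heap bookkeeping
}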
This often happens when one module is managed code and the other is native. Since you have the VB6 tag on the question, I'd check if this is the case.
If the stack grows large enough to hit the heap, a prematurely aborted stack overflow may be the problem: invalid data is passed that does not satisfy the exit condition of some recursion (loop detection is missing or broken) in the problem DLL, so that an infinite recursion consumes a ridiculously large amount of stack space. One would expect such a DLL to terminate with a stack overflow exception, but perhaps due to compiler/linker optimizations or large foreign heap sizes it crashes elsewhere.
Heaps are created by the CRT. That is to say, the malloc heap is created by the CRT, and is unrelated to HeapCreate(). It's not used for large allocations, though, which are handed off to the OS directly.
With multiple DLLs, you might have multiple heaps (newer VC versions are better at sharing, but even VC6 had no problem if you used MSVCRT.DLL - that one is shared).
The stack, on the other hand, is managed by the OS. Here you see why multiple heaps don't matter: The OS allocation for the different heaps will never collide with the OS allocation for the stack.
Mind you, the OS may allocate heap space close to the stack. The rule is just no overlap; after all, there's no guaranteed "unused separation zone". If you then have a buffer overflow, it could very well overflow into the stack space.
So, any solutions? Yes: move to VC2010. It has buffer security checks, implemented in quite an efficient way. They're the default even in release mode.
I have a C++ application that has some minimal leaks, and I would like to fix them. I am using AppVerifier to dump the leaked objects, and I can get the addresses and first few bytes of the allocated memory.
Unfortunately, those first bytes and the raw address are not enough to pinpoint the allocation stack trace. Is there a method to get a complete allocation data dump and find the stack that allocated the memory?
I could call _CrtSetBreakAlloc with the leak number, but unfortunately it's a threaded application and those numbers float up and down.
Does anyone have a suggestion what I could try?
With the gflags utility you can enable storing call stack information (gflags +ust). However, your application will now run slower and take more memory.
Side-remark: To be honest, I never got all those Microsoft utilities (leak-tracing in the C-RunTime, Gflags, UMDH, AppVerifier, LeakDiag) to do exactly what I wanted. In the end, I simply wrote my own memory allocator in which I can add whatever tracing I want (call stack, red zone marking, delayed freeing, consistency checking, ...).
You could try using UMDH to track memory leaks. You first have to use GFlags to turn on storing call stack tracing whenever memory is allocated. The docs on UMDH state how to use it.
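In case it saves someone a lookup, the usual sequence is roughly as follows (MyApp.exe and the PID are placeholders):

gflags /i MyApp.exe +ust
umdh -p:<PID> -f:before.log
(reproduce the suspected leak)
umdh -p:<PID> -f:after.log
umdh before.log after.log > diff.log

The diff lists the allocation call stacks whose outstanding byte counts grew between the two snapshots.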
But recently I've finally tried out Visual Leak Detector, and it works fabulously on my monstrous, big app.
http://vld.codeplex.com
An application I am working with is exhibiting the following behaviour:
During a particular high-memory operation, the memory usage of the process under Task Manager (Mem Usage stat) reaches a peak of approximately 2.5GB (Note: A registry key has been set to allow this, as usually there is a maximum of 2GB for a process under 32-bit Windows)
After the operation is complete, the process size slowly starts decreasing at a rate of 1MB per second.
I am trying to figure out the easiest way to quickly determine who is freeing this memory, and where it is being free'd.
I am having trouble attaching a memory profiler to my code, and I don't particularly want to override the new/delete operators to track the allocations/deallocations (IOW, I want to do this without re-compiling my code).
Can anyone offer any useful suggestions of how I could do this via the Visual Studio debugger?
Update
I should also mention that it's a multi-threaded application, so pausing the application and analysing the call stack through the debugger is not the most desirable option. I considered freezing different threads one at a time to see if the memory stops reducing, but I'm fairly certain this will cause the application to crash.
Ahh! You're looking at the wrong counter!
Mem Usage doesn't tell you that memory is being freed. Only that the working set is being purged! This could mean some other application needs memory, or the VMM decided to mark some of your process's pages as Stand By for some other process to quickly use. It does not mean that VirtualFree, HeapFree or any other free function is being called.
Look at the commit size (VM Size, Private Bytes, etc).
But if you still want to know when memory is being decommitted or freed or what-have-you, then break on some free calls. E.g. (for Visual C++)
{,,kernel32.dll}HeapFree
or
{,,msvcr80.dll}free
etc.
Or just a regular function breakpoint on the above. Just make sure it resolves the address.
cdb/WinDbg let you do it via
bp kernel32!HeapFree
bp msvcrt!free
etc.
Names may vary depending on which CRT version you use and how you link against it (via /MT or /MD and its variants)
You might find this article useful:
http://www.gamasutra.com/view/feature/1430/monitoring_your_pcs_memory_usage_.php?print=1
Basically what I had in mind was hooking the low-level allocation functions.
A couple different ideas:
The C runtime has a set of memory debugging functions; you'd need to recompile though. You could get a snapshot at computation completion and later, and use _CrtMemDifference to see what changed.
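If recompiling is acceptable, a sketch of that snapshot/diff approach looks roughly like this (debug CRT only, i.e. /MTd or /MDd builds):

#include <crtdbg.h>

static _CrtMemState s_before;

void SnapshotAtComputationCompletion()
{
    _CrtMemCheckpoint(&s_before);
}

void CompareLater()
{
    _CrtMemState after, diff;
    _CrtMemCheckpoint(&after);
    if (_CrtMemDifference(&diff, &s_before, &after))
        _CrtMemDumpStatistics(&diff);  // dumps what changed to the
                                       // debugger output window
}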
Or, you can attach to the process in your debugger, and cause it to dump a core before and after the memory freeing. Using NTSD, you can see what heaps are around, and the sizes of things. (You'll need a lot of disk space, and a fair amount of patience.) There's a setting (I think you get it through gflags, but I don't remember) that causes it to save a piece of the call stack as part of the dump; using that you can figure out what kind of object is being deallocated. Unfortunately, it only stores 4 or 5 stack frames, so you'll likely have to do something more clever as the next step to figure out where it's being freed. Either look at the code ("oh yeah, there's only one place where that can happen") or put in breakpoints on those destructors, or add tracing to the allocations and deallocations.
If your memory manager wipes free'd data to a known value (usually something like 0xfeeefeee), you can set a data breakpoint on a particular instance of something you're interested in. When it gets free'd, the breakpoint will trigger when the memory gets wiped.
I recommend checking out the UMDH tool that comes with Debugging Tools for Windows (you can find usage instructions and samples in the debugging tools help). You can snapshot a running process's heap allocations with stack traces and compare the snapshots.
You could try Memory Validator to monitor the allocations and deallocations. Memory Validator has a couple of features that will help you identify where data is being deallocated:
Hotspots view. This can show you a tree of all allocations and deallocations, or just all allocations, or just all deallocations. It presents the data as a percentage of memory activity (based on the amount of memory (de)allocated at a given location).
Analysis view. You can perform queries asking for data in a given address range. You can restrict these queries to any of alloc, realloc, dealloc behaviours.
Objects view. You can view allocations by type and see the maximum number of objects of each type (plus lots of other stats). Right-click on a type to get a context menu and choose "show all deallocations" - this will show the deallocation locations for that type on the Analysis tab.
I think the Hotspots view may give you the insight you need.