Visual C++: possible to limit heap size? - c++

I have a problem with an application I'm debugging. Steady state memory usage is a few hundred megabytes. Occasionally (after several hours) it gets into a state where its memory usage soars to many gigabytes. I would like to be able to stop the program as soon as this happens.
Where control passes through my own code, I can trap excessive memory use with code like this:
#include <windows.h>
#include <psapi.h>

bool usingTooMuchMemory()
{
    PROCESS_MEMORY_COUNTERS pmc;
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof pmc))
        return pmc.WorkingSetSize > 0x80000000u; // 2 GB working set
    return false;
}
This doesn't help me because I need to test working set size at the right point. I really want the program to break on the first malloc or new that takes either working set or heap size over some threshold. And ideally I'd like to have this done by the CRT heap itself with minimal overhead because the library likes to allocate huge numbers of small blocks.
The suspect code is in a DLL running in a thread created by my calling code. The DLL links statically to the CRT and has no special heap management. I have source code for the DLL.
Any ideas? Am I missing something obvious?

You can set memory allocation and deallocation hooks, using _CrtSetAllocHook.
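For example, a hedged sketch (debug CRT only, since _CrtSetAllocHook has no effect in release builds; and because the DLL links the CRT statically, the hook would have to be installed from code inside that DLL). It combines the hook with the usingTooMuchMemory() check from the question:

#include <windows.h>
#include <crtdbg.h>

bool usingTooMuchMemory();   // defined as in the question above

// Break into the debugger on the first allocation that pushes the process
// over the threshold.
int __cdecl breakOnExcessiveMemoryHook(int allocType, void* /*userData*/,
                                       size_t /*size*/, int /*blockType*/,
                                       long /*requestNumber*/,
                                       const unsigned char* /*filename*/,
                                       int /*lineNumber*/)
{
    if (allocType == _HOOK_ALLOC && usingTooMuchMemory())
        __debugbreak();   // stop exactly at the offending allocation
    return TRUE;          // allow the allocation to proceed
}

void installMemoryHook()
{
    _CrtSetAllocHook(breakOnExcessiveMemoryHook);
}

Note that querying the working set on every allocation is not free; since the library allocates huge numbers of small blocks, you may want to run the check only every Nth allocation.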

You can hook the HeapAlloc function, which malloc calls internally, by using the Detours library.

http://msdn.microsoft.com/en-us/library/aa366778%28v=vs.85%29.aspx
If you clear the IMAGE_FILE_LARGE_ADDRESS_AWARE flag in VS's linker options (/LARGEADDRESSAWARE:NO), the program's address space, and therefore its heap, will be limited to 2 GB, and allocations that would take it past that limit will fail.
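If you would rather not relink, the editbin tool that ships with Visual Studio can toggle the same flag on an already-built binary (MyApplication.exe is a placeholder; run this from a Developer Command Prompt):

editbin /LARGEADDRESSAWARE:NO MyApplication.exe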


Possible memory leak with malloc()

I have this code:
#include <malloc.h>

int main()
{
    int x = 1000;
    //~600kb memory at this point
    auto m = (void**)malloc(x * sizeof(void*));
    for (int i = 0; i < x; i++)
        m[i] = malloc(x * sizeof(void*));
    for (int i = 0; i < x; i++)
        free(m[i]);
    free(m);
    //~1700kb memory at this point
    return 0;
}
When the program starts, memory consumption is about ~600 KB, and when it ends it is about ~1700 KB. Is this a memory leak or what?
malloc() acquires memory from the system using a variety of platform-specific methods. But free() does not necessarily always give memory back to the system.
One reason for the behavior you're seeing is that when you first call malloc(), it asks the system for a "chunk" of memory, say 1 MB as a hypothetical example. Subsequent calls are fulfilled from that same 1 MB chunk until it is exhausted, and then another chunk is allocated.
There is little reason to immediately return all allocated memory to the system just because the application is no longer using it. Indeed, the most likely possibilities are that (a) the application requests more memory, in which case the recently freed pieces can be doled out again, or (b) the application terminates, in which case the system can efficiently clean up all its allocated memory in a single operation.
Is this a memory leak or what?
No. Each malloc is matched by a free, so unless I'm missing something you have no memory leak.
Note that what you see in the process manager is the memory assigned to the process. This is not necessarily equal to the memory actually in use by the process; the OS does not immediately reclaim it when you call free.
If you have more complex code, you can use a tool like valgrind to inspect it for leaks. Better still, avoid manual memory management entirely and use standard containers instead (e.g. std::vector for dynamic arrays).
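For example, a typical invocation looks like this (assuming the program was built with -g so the report carries file and line information; ./myprogram is a placeholder):

valgrind --leak-check=full ./myprogram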
free() and malloc() are part of your standard library, and their behavior is implementation dependent. The standard does not require free() to release acquired memory back to your operating system.
How the memory is reserved is platform specific. On POSIX systems the mmap() and munmap() system calls can be used.
Please note that most operating systems implement paging and hand memory to processes in chunks anyway, so releasing every single byte immediately would only add performance overhead.
When your application runs under the management of an operating system, memory allocation in languages like C/C++ is two-fold. First, there is the business of actually requesting memory to be mapped into the process from the operating system. This is done through an OS-specific call, and the application need not bother with it. On Linux this call is either sbrk or mmap, depending on several factors. However, this OS-requested memory is not directly accessible to C++ code; the code has no control over it. Instead, the C++ runtime manages it.
The second part is actually providing a usable pointer to the C/C++ code. This is also done by the runtime when the code asks for it.
When C/C++ code calls malloc (or new), the runtime first determines whether it has enough contiguous memory under its management to hand a pointer back to the application code. (This is further complicated by efficiency optimizations inside malloc/new, which usually maintain several allocation arenas used depending on object size, but we won't go into that.) If it does, the memory region is marked as used by the application and a pointer is returned to the code.
If there is no memory available, a chunk of memory is requested from the OS. The size of that chunk will normally be far bigger than what was requested, because requesting memory from the OS is expensive!
When the memory is deleted/freed, the C++ runtime reclaims it. It might decide to return the memory to the OS (for example, if there is pressure on total memory consumption on the machine), or it can simply keep it for future use. In many cases the memory is never returned to the OS until the process exits, so an OS-level tool will show the process consuming memory even though the code has deleted/freed it all!
It is also worth noting that an application can usually request memory from the OS bypassing the C++ runtime, for example on Linux through the mmap call or, in bizarre cases, through sbrk (but in the latter case you won't be able to use the runtime's memory management at all afterwards). If you use those mechanisms, you will see process memory consumption grow or shrink immediately with every memory request or return.
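To illustrate that last point, here is a minimal sketch (Linux/POSIX assumed) that bypasses malloc and asks the kernel directly; with mmap/munmap the pages really are returned at munmap time, so tools like top reflect the change immediately:

#include <sys/mman.h>
#include <stdio.h>

int main()
{
    const size_t len = 64 * 1024 * 1024; // 64 MB, chosen arbitrarily

    // Ask the kernel directly for anonymous, private memory.
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    // ... use the memory ...

    // Unlike free(), this hands the pages back to the OS immediately.
    if (munmap(p, len) != 0) {
        perror("munmap");
        return 1;
    }
    return 0;
}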

Debugging Memory Leak on C/C++ Application running on Embedded Linux Device

I have an application running on an ARM Cortex-A9. When I enter a certain portion of the code, I can see in the Linux tasks view 'top' that the application grows in memory usage until it gets Killed due to running out of physical memory.
Now, I have done some research on this and tried to implement mtrace, but it didn't give me very concise results. Basically I get something like this
Memory not freed:
-----------------
Address Size Caller
0x03aafe18 0x38 at 0x76e73c18
0x53a004a8 0x38 at 0x76e73c18
And I do not even think this is the big problem (maybe another smaller issue).
I also cannot use Valgrind (which would probably work great) because there is not enough space on the device to install it and a compiler...
So I fear that I just have to go through the code and look for something that could be causing growing memory usage. Is there a guide for this somewhere? In the code, "malloc" or "new" is almost never used.
I do have access to use gdb, if that can help.
One thing I am not clear on is if the following is a problem:
while (someloop) {
    ...
    double *someptr;
    ...
}
or like
while (someloop) {
    ...
    int32 someArray[100] = {0};
    ...
}
There is a lot of this in the code. When that loop comes around and instantiates those variables or pointers, does it keep consuming new space, or does it reuse the space from the previous iteration?
If it is allocated on the stack, the memory is reused. However, anything you allocate on the heap you need to delete.
Also, if you allocate with double *ptr; ... ptr = new double[5]; you need to release it with delete[] ptr.
In C++ you can overload the global new and delete operators to print a message for debugging (a sketch follows below).
Best would be to debug using gdb and see which objects are created and not deleted.
It is possible that a class you use in your code does not delete something internally.
Tip: for small objects, allocating on the stack is both faster and safer.
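A minimal sketch of that overloading idea, assuming a C++11 compiler (printing from inside operator new can itself allocate, so this version only keeps counters and reports them on demand):

#include <cstdio>
#include <cstdlib>
#include <atomic>
#include <new>

static std::atomic<long> g_news{0};
static std::atomic<long> g_deletes{0};

void* operator new(std::size_t size)
{
    void* p = std::malloc(size);
    if (!p) throw std::bad_alloc();
    g_news.fetch_add(1, std::memory_order_relaxed);
    return p;
}

void operator delete(void* p) noexcept
{
    if (p) {
        g_deletes.fetch_add(1, std::memory_order_relaxed);
        std::free(p);
    }
}
// (for a complete picture you would also overload operator new[] and operator delete[])

// Call this periodically (e.g. once per loop iteration) to spot a growing gap.
void reportLiveAllocations()
{
    std::fprintf(stderr, "new: %ld  delete: %ld  live: %ld\n",
                 g_news.load(), g_deletes.load(),
                 g_news.load() - g_deletes.load());
}

If the "live" count keeps climbing while the loop runs, something allocated with new (possibly inside a library class) is not being deleted.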

C++ Change Max RAM Limit

How would I change the maximum amount of RAM for a program? I am constantly running out of memory (not the system max, but the ulimit max), and I do not wish to change the global memory limit. I looked around and saw that the vlimit() function might work, but I'm unsure exactly how to use it.
Edit: I'm on linux 2.6.38-11-generic
This is not a memory leak; I literally must allocate 100k instances of a given class, there is no way around it.
Do you allocate the objects on the stack and are in fact hitting the stack limit?
Do you, e.g., write something like this:
void SomeFunction() {
    MyObject aobject[100000];
    // do something with aobject
}
This array will be allocated on the stack. A better solution -- which automates heap allocation for you -- would be to write
void SomeFunction() {
    std::vector<MyObject> veccobject(100000); // 100,000 default-constructed objects
    // do something with veccobject
}
If for some reason, you really want a bigger stack, consult your compiler documentation for the appropriate flag:
How to increase the gcc executable stack size?
Can you set the size of the call stack in c++? (vs2008)
And you might want to consider:
When do you worry about stack size?
Do you understand why you are hitting the RAM limit? Are you sure that you don't have memory leaks (if you do have leaks, you'll need more and more RAM when running your application for a longer time).
Assuming a Linux machine, you might use valgrind to hunt and debug memory leaks, and you could also use Boehm's conservative garbage collector to often avoid them.
If these limits are being imposed on you by the system administrator, then no - you're stuck. If you're ulimiting yourself, sure - just raise the soft and hard limits.
As pointed out by Chris in the comments, if you're a privileged process, you could use setrlimit to raise your hard limit in process. However, one presumes if you're under a ulimit, then you're unlikely to be a privileged process.
If you have permissions you can use the setrlimit call (which supersedes vlimit) to set the limits at the start of your program. This will only affect this program and its offspring.
#include <sys/time.h>
#include <sys/resource.h>

int main() {
    struct rlimit limit;
    limit.rlim_cur = RLIM_INFINITY;
    limit.rlim_max = RLIM_INFINITY;
    if (0 != setrlimit(RLIMIT_AS, &limit)) {
        // error: the limit could not be raised
    }
}
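If you first want to see what you are currently limited to, a small sketch (RLIMIT_AS is the same address-space limit that ulimit -v controls):

#include <sys/resource.h>
#include <stdio.h>

int main() {
    struct rlimit limit;
    if (getrlimit(RLIMIT_AS, &limit) != 0) {
        perror("getrlimit");
        return 1;
    }
    // RLIM_INFINITY prints as a very large number and means "no limit".
    printf("soft limit: %llu\n", (unsigned long long)limit.rlim_cur);
    printf("hard limit: %llu\n", (unsigned long long)limit.rlim_max);
    return 0;
}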

C++/Windows: How to report an out-of-memory exception (bad_alloc)?

I'm currently working on an exception-based error reporting system for Windows MSVC++ (9.0) apps (i.e. exception structures & types / inheritance, call stack, error reporting & logging and so on).
My question now is: how to correctly report & log an out-of-memory error?
When this error occurs, e.g. as a bad_alloc thrown by the new operator, there may be many "features" unavailable, mostly anything that requires further memory allocation. Normally, I'd pass the exception up to the application if it was thrown in a lib, and then use message boxes and error log files to report and log it. Another way (mostly for services) is to use the Windows Event Log.
The main problem I have is to assemble an error message.
To provide some error information, I'd like to define a static error message (may be a string literal, better an entry in a message file, then using FormatMessage) and include some run-time info such as a call stack.
The functions / methods necessary for this use either
STL (std::string, std::stringstream, std::ofstream)
CRT (swprintf_s, fwrite)
or Win32 API (StackWalk64, MessageBox, FormatMessage, ReportEvent, WriteFile)
Besides being documented on MSDN, all of them are more (Win32) or less (STL) closed source on Windows, so I don't really know how they behave under low-memory conditions.
Just to prove there might be problems, I wrote a trivial small app provoking a bad_alloc:
int main()
{
    InitErrorReporter();

    try
    {
        for(int i = 0; i < 0xFFFFFFFF; i++)
        {
            for(int j = 0; j < 0xFFFFFFFF; j++)
            {
                char* p = new char;
            }
        }
    }
    catch(bad_alloc& e_b)
    {
        ReportError(e_b);
    }

    DeinitErrorReporter();

    return 0;
}
I ran two instances without a debugger attached (Release config, VS 2008), but "nothing happened", i.e. no error codes came back from the ReportEvent or WriteFile calls I use internally in the error reporting. Then I launched one instance with and one without a debugger and let them report their errors one after the other by putting a breakpoint on the ReportError line. That worked fine for the instance with the debugger attached (it correctly reported and logged the error, even using LocalAlloc without problems)! But Task Manager showed strange behaviour: a lot of memory is freed before the app exits, I suppose when the exception is thrown.
Please consider there may be more than one process [edit] and more than one thread [/edit] consuming much memory, so freeing pre-allocated heap space is not a safe solution to avoid a low memory environment for the process which wants to report the error.
Thank you in advance!
"Freeing pre-allocated heap space...". This was exactly that I thought reading your question. But I think you can try it. Every process has its own virtual memory space. With another processes consuming a lot of memory, this still may work if the whole computer is working.
Pre-allocate the buffer(s) you need.
Link statically and use _beginthreadex instead of CreateThread (otherwise CRT functions may fail) -- or implement the string concatenation / int-to-string conversion yourself.
Use MessageBox(MB_SYSTEMMODAL | MB_OK); MSDN mentions this for reporting OOM conditions (and an MS blogger described the behavior as intended: the message box will not allocate memory).
Logging is harder; at the very least, the log file needs to be open already.
It is probably best opened with FILE_FLAG_NO_BUFFERING and FILE_FLAG_WRITE_THROUGH, to avoid any buffering attempts. The first flag requires that writes and your memory buffers are sector aligned (i.e. you need to query GetDiskFreeSpace, align your buffer to that, and write only at "multiple of sector size" file offsets, in blocks that are multiples of the sector size). I am not sure if this is necessary, or even helps, but a system-wide OOM where every allocation fails is hard to simulate. A minimal sketch of the overall approach follows below.
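Here is a minimal sketch of what InitErrorReporter / ReportError from the question might do along these lines (names reused from the question for illustration only; "oom.log" and the message text are placeholders, and FILE_FLAG_NO_BUFFERING is omitted to keep the example short):

#include <windows.h>
#include <new>

static HANDLE g_logFile = INVALID_HANDLE_VALUE;
static char   g_message[512];                 // pre-allocated, filled at startup

void InitErrorReporter()
{
    // Open the log file up front, while memory and handles are still plentiful.
    g_logFile = CreateFileA("oom.log", GENERIC_WRITE, FILE_SHARE_READ, NULL,
                            CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    lstrcpynA(g_message, "out of memory (bad_alloc)\r\n", sizeof(g_message));
}

void ReportError(const std::bad_alloc&)
{
    // Neither WriteFile nor a system-modal MessageBox should need heap memory here.
    if (g_logFile != INVALID_HANDLE_VALUE) {
        DWORD written = 0;
        WriteFile(g_logFile, g_message, lstrlenA(g_message), &written, NULL);
    }
    MessageBoxA(NULL, g_message, "Fatal error", MB_SYSTEMMODAL | MB_OK | MB_ICONERROR);
}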
Please consider there may be more than one process consuming much memory, so freeing pre-allocated heap space is not a safe solution to avoid a low memory environment for the process which wants to report the error.
Under Windows (and other modern operating systems), each process has its own address space (aka memory) separate from every other running process. And all of that is separate from the literal RAM in the machine. The operating system has virtualized the process address space away from the physical RAM.
This is how Windows is able to push memory used by processes into the page file on the hard disk without those processes having any knowledge of what happened.
This is also how a single process can allocate more memory than the machine has physical RAM and yet still run. For instance, a program running on a machine with 512MB of RAM could still allocate 1GB of memory. Windows just couldn't keep all of it in RAM at the same time, and some of it would be in the page file. But the program wouldn't know.
So consequently, if one process allocates memory, it does not cause another process to have less memory to work with. Each process is separate.
Each process only needs to worry about itself. And so the idea of freeing a pre-allocated chunk of memory is actually very viable.
You can't use CRT or MessageBox functions to handle OOM since they might need memory, as you describe. The only truly safe thing you can do is allocate a chunk of memory at startup that you can write information into, open a handle to a file or a pipe, and then WriteFile to it when you run out of memory.

How can I get the size of a memory block allocated using malloc()? [duplicate]

Possible Duplicates:
How can I get the size of an array from a pointer in C?
Is there any way to determine the size of a C++ array programmatically? And if not, why?
I get a pointer to a chunk of allocated memory out of a C style function.
Now, it would be really interesting for debugging purposes to know how big the allocated memory block that this pointer points to is.
Is there anything more elegant than provoking an exception by blindly running over its boundaries?
Thanks in advance,
Andreas
EDIT:
I use VC++2005 on Windows, and GCC 4.3 on Linux
EDIT2:
I have _msize under VC++2005
Unfortunately it results in an exception in debug mode....
EDIT3:
Well, I have tried the approach I described above with the exception, and it works. At least while I am debugging and make sure that I run over the buffer boundaries immediately after the call to the library returns. It works like a charm. It just isn't elegant and is in no way usable in production code.
It's not standard, but if your library has an msize() function, that will give you the size.
A common solution is to wrap malloc with your own function that logs each request along with the size and resulting memory range; in the release build you can switch back to the 'real' malloc.
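For example, a non-portable sketch (assuming MSVC's _msize or glibc's malloc_usable_size; note that both report the usable size of the block, which may be larger than what you asked for):

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>   /* _msize on MSVC, malloc_usable_size on glibc */

int main()
{
    void *p = malloc(100);

#ifdef _MSC_VER
    printf("allocated block size: %lu\n", (unsigned long)_msize(p));
#else
    printf("allocated block size: %lu\n", (unsigned long)malloc_usable_size(p));
#endif

    free(p);
    return 0;
}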
If you don't mind sleazy violence for the sake of debugging, you can #define macros to hook calls to malloc and free and prefix each block with its size.
To the tune of
void *malloc_hook(size_t size)
{
    void *ptr = malloc(size + sizeof(size_t));
    if (!ptr)
        return NULL;
    *(size_t *)ptr = size;            /* remember the requested size */
    return ((size_t *)ptr) + 1;
}

void free_hook(void *ptr)
{
    ptr = (void *)(((size_t *)ptr) - 1);
    free(ptr);
}

size_t report_size(void *ptr)
{
    return *(((size_t *)ptr) - 1);
}
then
#define malloc(x) malloc_hook(x)
and so on
The C runtime library does not provide such a function. Furthermore, deliberately provoking an exception will not tell you how big the block is either.
Usually the way this problem is solved in C is to maintain a separate variable which keeps track of the size of the allocated block. Of course, this is sometimes inconvenient but there's generally no other way to know.
Your C runtime library may provide some heap debug functions that can query allocated blocks (after all, free() needs to know how big the block is), but any of this sort of thing will be nonportable.
With gcc and the GNU linker, you can easily wrap malloc:
#include <stdlib.h>
#include <stdio.h>

void *__real_malloc(size_t sz);

void *__wrap_malloc(size_t sz)
{
    void *ptr;
    ptr = __real_malloc(sz);
    fprintf(stderr, "malloc of size %zu yields pointer %p\n", sz, ptr);
    /* if you wish to save the pointer and the size to a data structure,
       then remember to add wrap code for calloc, realloc and free */
    return ptr;
}

int main()
{
    char *x;
    x = malloc(103);
    return 0;
}
and compile with
gcc a.c -o a -Wall -Werror -Wl,--wrap=malloc
(Of course, this will also work with C++ code compiled with g++, and with the new operator (through its mangled name) if you wish.)
In effect, the statically/dynamically loaded library will also use your __wrap_malloc.
No, and you can't rely on an exception when overrunning its boundaries, unless it's in your implementation's documentation. It's part of the stuff you really don't need to know about to write programs. Dig into your compiler's documentation or source code if you really want to know.
There is no standard C function to do this. Depending on your platform, there may be a non-portable method - what OS and C library are you using?
Note that provoking an exception is unreliable - there may be other allocations immediately after the chunk you have, and so you might not get an exception until long after you exceed the limits of your current chunk.
Memory checkers like Valgrind's memcheck and Google's TCMalloc (the heap checker part) keep track of this sort of thing.
You can use TCMalloc to dump a heap profile that shows where things got allocated, or you can just have it check to make sure your heap is the same at two points in program execution using SameHeap().
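A rough sketch of the explicit-check style of the gperftools heap checker mentioned above (hedged: the exact header path and linking against tcmalloc are assumptions about your gperftools installation):

#include <gperftools/heap-checker.h>
#include <cassert>

void CodeUnderTest()
{
    int *leaked = new int[100];   // deliberately never deleted
    (void)leaked;
}

int main()
{
    HeapLeakChecker checker("demo");   // snapshot the heap at this point
    CodeUnderTest();
    // SameHeap() compares the heap now against the snapshot taken above.
    assert(checker.SameHeap() && "heap changed between the two points");
    return 0;
}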
Partial solution: on Windows you can use the PageHeap to catch a memory access outside the allocated block.
PageHeap is an alternate memory manager present in the Windows kernel (in the NT varieties, but nobody should be using any other version nowadays). It takes every allocation in a process and returns a memory block whose end is aligned with the end of a memory page, and then makes the following page inaccessible (no read, no write access). If the program tries to read or write past the end of the block, you'll get an access violation you can catch with your favorite debugger.
How to get it: Download and install the package Debugging Tools for Windows from Microsoft: http://www.microsoft.com/whdc/devtools/debugging/default.mspx
then launch the GFlags utility, go to the 3rd tab and enter the name of your executable, then hit the Tab key. Check the PageHeap checkbox, click OK and you're good to go.
The last thing: when you're done with debugging, don't ever forget to launch GFlags again, and disable PageHeap for the application. GFlags enters this setting into the Registry (under HKLM\Software\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\), so it is persistent, even across reboots.
Also, be aware that using PageHeap can increase the memory needs of your application tremendously.
The way to do what you want is to BE the allocator. If you filter all requests, and then record them for debugging purposes, then you can find out what you want when the memory is free'd.
Additionally, you can check at the end of the program to see if all allocated blocks were freed, and if not, list them. An ambitious library of this sort could even take FUNCTION and LINE parameters via a macro to let you know exactly where you are leaking memory.
Finally, Microsoft's MSVCRT provides a debug heap with many useful tools that you can use in your debug build to find memory problems: http://msdn.microsoft.com/en-us/library/bebs9zyz.aspx
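As a small illustration of that CRT debug heap (debug builds only; the _CRTDBG_* flags below are the documented ones, but treat this as a sketch rather than a full recipe):

// Only meaningful in a debug build (_DEBUG); in release these calls do nothing.
#define _CRTDBG_MAP_ALLOC   // let CRT allocations carry file/line info where available
#include <stdlib.h>
#include <crtdbg.h>

int main()
{
    // Ask the debug heap to dump any leaked blocks automatically at program exit.
    _CrtSetDbgFlag(_CrtSetDbgFlag(_CRTDBG_REPORT_FLAG) | _CRTDBG_LEAK_CHECK_DF);

    int *leaked = new int[10];   // deliberately leaked for demonstration
    (void)leaked;

    return 0;                    // leak report appears in the debugger Output window
}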
On Linux, you can use valgrind to find many errors. http://valgrind.org/