I have a C++ application with a very strange phenomenon.
I'm running my application on a large input, and I have many buffers that are allocated and de-allocated during run-time.
For input that is large enough, I get an allocation error, meaning out of memory.
But when I put a breakpoint on each allocation and then run from allocation to allocation, my application doesn't crash.
My assumption is that it has to be something related to the way Windows XP manages memory.
Does anyone have an idea what can cause this phenomenon, and how to overcome it?
Thanks.
Frequent allocation and deallocation can lead to memory fragmentation. My guess is that, when you step through the program in a debugger, it gives the OS idle time to defragment the memory. To avoid the problem when running your program normally, you should consider using a memory/object pool (see here and here).
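For illustration only, here is a minimal sketch of the object-pool idea (BufferPool, acquire and release are made-up names for this example, not from any particular library). All buffers are allocated once up front and are handed out and returned instead of being new'd and delete'd over and over, so the heap never has to satisfy fresh requests while the program runs:

    #include <cstddef>
    #include <stack>
    #include <vector>

    // Minimal fixed-size buffer pool (illustrative sketch only).
    class BufferPool {
    public:
        BufferPool(std::size_t bufferSize, std::size_t count)
            : storage_(count, std::vector<char>(bufferSize)) {
            for (auto& buf : storage_)
                free_.push(buf.data());
        }

        char* acquire() {                  // use instead of new char[bufferSize]
            if (free_.empty())
                return nullptr;            // or grow / throw, as appropriate
            char* p = free_.top();
            free_.pop();
            return p;
        }

        void release(char* p) {            // use instead of delete[]
            free_.push(p);
        }

    private:
        std::vector<std::vector<char>> storage_;  // owns the memory for the pool's lifetime
        std::stack<char*> free_;                  // buffers currently available
    };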
Application behavior can differ between Release and Debug runs. Since, as you say, a normal run gives an out-of-memory error, there is something wrong with your code. It may be that there is no memory left, or no contiguous block large enough.
You can use static or dynamic code analysis tools to find the problem, for example:
IBM Rational Purify (trial version)
In C++ programs, I have sometimes had problems with "weak" memory leaks. By that, I mean that some objects accumulate resources, but, eventually, these objects are destroyed properly and their memory is released, so that these leaks do not show up using the traditional memory debugging tools like valgrind or address sanitizers.
A typical example would be a poorly-written cache that keeps all the cached results from the beginning of the program. It grows forever, but its memory is reclaimed at the end of the program, when the cache is destroyed.
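Purely as an illustration (ResultCache and its members are hypothetical names, not real code from any project), such a cache could look like this: every result is kept until the destructor runs at program exit, so leak checkers report nothing even though memory use only ever grows:

    #include <map>
    #include <string>

    // A cache that only ever grows: nothing is ever evicted, so its memory
    // is released only when the object is destroyed at the end of the program.
    class ResultCache {
    public:
        const std::string& lookup(int key) {
            auto it = results_.find(key);
            if (it == results_.end())
                it = results_.emplace(key, expensiveComputation(key)).first;
            return it->second;
        }

    private:
        std::string expensiveComputation(int /*key*/) {
            return std::string(1024, 'x');    // stand-in for real work
        }

        std::map<int, std::string> results_;  // grows without bound
    };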
How can one debug this? Are there tools available to see where the largest objects allocated by the program are? To dump the current state of allocated memory (including call stacks)? To see which objects are growing? I'm using Linux, but I am interested in other platforms as well.
If other platforms are an option, I would recommend Visual Studio on Windows.
It has powerful profiling options, including one for memory usage.
https://learn.microsoft.com/en-us/visualstudio/profiling/memory-usage
While debugging you can take a snapshot to see where memory is being used.
You can also take memory usage snapshots at different times and compare them.
You can use a profiler like e.g. Intel VTune (this is available for Linux as well) to trace memory consumption of your application. In VTune, you can see the memory consumption over time, and select a time window to see where memory was allocated during that window.
It still will be difficult to detect such problems if your application allocates and deallocates a lot of memory correctly, and only a small fraction is deallocated too late. In that case you need to check a lot of allocations/deallocations before you can find the bad one(s).
I'm trying to understand how objects (variables, functions, structs, etc.) work in C++. In this case I see there are basically two ways of storing them: the stack and the heap. Accordingly, whenever heap storage is used it needs to be deallocated manually, but if the stack is used, then the deallocation is done automatically. So my question is about the kinds of problems that bad practice might cause to the program itself or to the computer. For example:
1.- Let's suppose that I run a program with a recursive solution that ends up in an infinite chain of function calls. Theoretically the program crashes (stack overflow), but does it cause some trouble to the computer itself? (To the RAM maybe, or to the OS?)
2.- What happens if I forget to deallocate memory on the heap? I mean, does it just cause trouble to the program, or is it permanent to the computer in general? I mean, it might be that such memory could never be used again or something.
3.- What are the problems of getting a segmentation fault (on the heap)?
Any other dangers or caveats relevant to this are also welcome.
Accordingly, whenever stack storage is used it needs to be deallocated manually, but if the heap is used, then the deallocation is done automatically.
When you use stack - local variables in the function - they are deallocated automatically when the function ends (returns).
When you allocate from the heap, the memory allocated remains "in use" until it is freed. If you don't do that, your program, if it runs for long enough and keeps allocating "stuff", will use all memory available to it, and eventually fail.
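A tiny sketch of the difference (illustrative only):

    void automaticStorage() {
        int local[64] = {};        // on the stack: released automatically
        (void)local;
    }                              // <- storage is gone here, nothing to do

    void dynamicStorage() {
        int* p = new int[64]();    // on the heap: stays "in use"...
        delete[] p;                // ...until you explicitly release it
        // forget the delete[] and the block stays allocated until the
        // process exits (the OS reclaims it then)
    }

    int main() {
        automaticStorage();
        dynamicStorage();
    }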
Note that "stackfault" is almost impossible to recover from in an application, because the stack is no longer usable when it's full, and most operations to "recover from error" will involve using SOME stack memory. The processor typically has a special trap to recover from stack fault, but that lands insise the operating system, and if the OS determines the application has run out of stack, it often shows no mercy at all - it just "kills" the application immediately.
1.- Let's suppose that I run a program with a recursive solution that ends up in an infinite chain of function calls. Theoretically the program crashes (stack overflow), but does it cause some trouble to the computer itself? (To the RAM maybe, or to the OS?)
No, the computer itself is not harmed by this in and of itself. There may of course be data loss if your program hadn't yet saved something that the user was working on.
Unless the hardware is very badly designed, it's very hard to write code that causes any harm to the computer, beyond loss of stored data (of course, if you write a program that fills the entire hard disk from the first to the last sector, your data will be overwritten with whatever your program fills the disk with - which may well cause the machine to not boot again until you have re-installed an operating system on the disk). But RAM and processors don't get damaged by bad coding (fortunately, as most programmers make mistakes now and again).
2.- What happens if I forget to deallocate memory on the heap? I mean, does it just cause trouble to the program, or is it permanent to the computer in general? I mean, it might be that such memory could never be used again or something.
Once the program finishes (and most programs that use "too much memory" do terminate in some way or another, at some point), the memory is handed back to the OS and becomes available to other applications again, so nothing is permanently lost.
Of course, how well the operating system and other applications handle "there is no memory at all available" varies a little bit. The operating system in itself is generally OK with it, but some drivers that are badly written may well crash, and thus cause your system to reboot if you are unlucky. Applications are more prone to crashing due to there not being enough memory, because allocations end up with NULL (zero) as the "returned address" when there is no memory available. Using address zero in a modern operating system will almost always lead to a "Segmentation fault" or similar problem (see below for more on that).
But these are extreme cases; most systems are set up such that one application gobbling all available memory will in itself fail before the rest of the system is impacted - not always, and it's certainly not guaranteed that the application "causing" the problem is the first one to be killed if the OS kills applications simply because they "eat a lot of memory". Linux does have an "Out of memory killer", which is a pretty drastic method to ensure the system can continue to work [by some definition of "work"].
3.- What are the problems of getting a segmentation fault (on the heap)?
Segmentation faults don't directly have anything to do with the heap. The term segmentation fault comes from older operating systems (Unix-style) that used "segments" of memory for different usages, and a "segmentation fault" was when the program went outside its allocated segment. In modern systems, memory is split into "pages" - typically 4KB each, but some processors have larger pages, and many modern processors support "large pages" of, for example, 2MB or 1GB, which are used for large chunks of memory.
Now, if you use an address that points to a page that isn't there (or isn't "yours"), you get a segmentation fault. This typically ends the application then and there. You can "trap" a segmentation fault, but in all operating systems I'm aware of, it's not valid to try to continue from this "trap" - but you could, for example, store away some files to explain what happened and help troubleshoot the problem later, etc.
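To illustrate that last point only (a POSIX sketch, not a way to actually "recover"): a handler for SIGSEGV can write a short note using async-signal-safe calls and then terminate, but it must not try to carry on:

    #include <csignal>
    #include <unistd.h>    // write(), _exit() - POSIX

    // Trap SIGSEGV, leave a note, and die. Continuing from here is not valid.
    extern "C" void onSegv(int) {
        const char msg[] = "segmentation fault - aborting\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);   // async-signal-safe
        _exit(1);
    }

    int main() {
        struct sigaction sa = {};
        sa.sa_handler = onSegv;
        sigaction(SIGSEGV, &sa, nullptr);

        volatile int* bad = nullptr;
        *bad = 42;                 // deliberately faults to trigger the handler
    }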
Firstly, your understanding of stack/heap allocations is backwards: stack-allocated data is automatically reclaimed when it goes out of scope. Dynamically-allocated data (data allocated with new or malloc), which is generally heap-allocated data, must be manually reclaimed with delete/free. However, you can use C++ destructors (RAII) to automatically reclaim dynamically-allocated resources.
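For illustration, a minimal RAII sketch (Buffer and its members are made-up names): the data lives on the heap, but the destructor releases it, so callers never write delete[] by hand:

    #include <cstddef>

    class Buffer {
    public:
        explicit Buffer(std::size_t n) : data_(new double[n]), size_(n) {}
        ~Buffer() { delete[] data_; }               // reclaimed automatically

        Buffer(const Buffer&) = delete;             // keep the sketch simple:
        Buffer& operator=(const Buffer&) = delete;  // no copying

        double* data() { return data_; }
        std::size_t size() const { return size_; }

    private:
        double* data_;
        std::size_t size_;
    };

    void useBuffer() {
        Buffer b(1024);       // heap allocation happens here
        b.data()[0] = 3.14;
    }                         // ~Buffer() runs here: delete[] is automatic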
Secondly, the 3 questions you ask have nothing to do with the C++ language, but rather they are only answerable with respect to the environment/operating system you run a C++ program in. Modern operating systems generally isolate processes so that a misbehaving process doesn't trample over OS memory or other running programs. In Linux, for example, each process has its own address space which is allocated by the kernel. If a misbehaving process attempts to write to a memory address outside of its allocated address space, the operating system will send a SIGSEGV (segmentation fault) which usually aborts the process. Older operating systems, such as MS-DOS, didn't have this protection, and so writing to an invalid pointer or triggering a stack overflow could potentially crash the whole operating system.
Likewise, with most mainstream modern operating systems (Linux/UNIX/Windows, etc.), memory leaks (data which is allocated dynamically but never reclaimed) only affect the process which allocated them. When the process terminates, all memory allocated by the process is reclaimed by the OS. But again, this is a feature of the operating system, and has nothing to do with the C++ language. There may be some older operating systems where leaked memory is never reclaimed, even by the OS.
1.- Let's suppose that I run a program with a recursive solution that ends up in an infinite chain of function calls. Theoretically the program crashes (stack overflow), but does it cause some trouble to the computer itself? (To the RAM maybe, or to the OS?)
A stack overflow should not cause trouble either to the operating system or to the computer. Any modern OS provides an isolated address space to each process. When a process tries to allocate more stack space than is available, the OS detects it (usually via an exception) and terminates the process. This guarantees that no other processes are affected.
2.- What happens if I forget to deallocate memory on the heap? I mean, does it just cause trouble to the program, or is it permanent to the computer in general? I mean, it might be that such memory could never be used again or something.
It depends on whether your program is a long-running process or not, and on the amount of data that you're failing to deallocate. In a long-running process (e.g. a server) a recurrent memory leak can lead to thrashing: after some time, your process will be using so much memory that it won't fit in physical memory. This is not a problem per se, because the OS provides virtual memory, but the OS will spend more time moving memory pages between physical memory and disk than doing useful work. This can affect other processes, and it might slow down the system significantly (to the point that it might be better to reboot it).
3.- What are the problems of getting a segmentation fault (on the heap)?
A segmentation fault will crash your process. It's not directly related to the usage of the heap, but rather to accessing a memory region that does not belong to your process (because it's not part of its address space, or because it was but has since been freed). Depending on what your process was doing, this can cause other problems: for instance, if the process was writing to a file when the crash happened, it's very likely that the file will end up corrupted.
First, stack means automatic memory and heap means manual memory. There are ways around both, but that's generally a more advanced question.
On modern operating systems, your application will crash but the operating system and machine as a whole will continue to function. There are of course exceptions to this rule, but they're (again) a more advanced topic.
Allocating from the heap and then not deallocating when you're done just means that your program is still considered to be using the memory even though you're not using it. If left unchecked, your program will fail to allocate memory (out of memory errors). How you handle out-of-memory errors could mean anything from a crash (unhandled error resulting in an unhandled exception or a NULL pointer being accessed and generating a segmentation fault) to odd behavior (caught exception or tested for NULL pointer but no special handling case) to nothing at all (properly handled).
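To make those outcomes concrete, here is a small illustrative sketch (not anyone's real code) showing the throwing form of new being caught and the nothrow form being tested for a null pointer; leaving out both checks is what gives you the crash case:

    #include <iostream>
    #include <new>        // std::bad_alloc, std::nothrow

    int main() {
        // Throwing form: plain new throws std::bad_alloc on failure.
        // Unhandled, that terminates the program; caught, you can react.
        try {
            double* big = new double[1000000];
            delete[] big;
        } catch (const std::bad_alloc& e) {
            std::cerr << "allocation failed: " << e.what() << '\n';
        }

        // Non-throwing form: new(std::nothrow) returns nullptr instead,
        // so the caller must test the pointer before using it.
        double* maybe = new (std::nothrow) double[1000000];
        if (maybe == nullptr)
            std::cerr << "allocation failed (nothrow form)\n";
        else
            delete[] maybe;
    }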
On modern operating systems, the memory will be freed when your application exits.
A segmentation fault in the normal sense will simply crash your application. The operating system may immediately close file or socket handles. It may also perform a dump of your application's memory so that you can try to debug it posthumously with tools designed to do that (more advanced subject).
Alternatively, most (I think?) modern operating systems will use a special method of telling the program that it has done something bad. It is then up to the program's code to decide whether or not it can recover from that or perhaps add additional debug information for the operating system, or whatever really.
I suggest you look into auto pointers (also called smart pointers) for making your heap behave a little bit like a stack -- automatically deallocating memory when you're done using it. If you're using a modern compiler, see std::unique_ptr. If that type name can't be found, then look into the boost library (google). It's a little more advanced but highly valuable knowledge.
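A short sketch of what that looks like with std::unique_ptr (the buffer size here is arbitrary, just for illustration):

    #include <memory>

    void process() {
        // The unique_ptr releases the array when it goes out of scope,
        // making the heap allocation behave much like a stack variable.
        std::unique_ptr<double[]> buffer(new double[1024]);
        buffer[0] = 1.0;
        // ... use buffer ...
    }   // delete[] happens here automatically, even if an exception is thrown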
Hope this helps.
I've written some C++ code that runs perfectly fine on my laptop PC (compiled under both a Microsoft compiler and g++ under MinGW). I am in the process of porting it to a Unix machine.
I've compiled with both g++ and with Intel's icpc on the Unix machine and in both cases, my program crashes (segfaults) after running for a while. I can run it for a short time without a crash.
When I debug, I find that the crash is happening when the program tries to copy an STL list - specifically, it happens when the program tries to allocate memory to create a new node in the list. And the error I get in the debugger (TotalView) is that "an allocation call failed or the address returned is null."
The crash does not always happen in the same place in the code each time I run it, but does always happen during an allocation call to create a node in an STL list. I don't think I'm running out of memory. I have a few memory leaks, but they're very small. What else can cause a memory allocation error? And why does it happen on the Unix machine and not on my PC?
UPDATE: I used MemoryScape to help debug. When I used guard blocks, the program ran through without crashing, further suggesting a memory issue. What finally worked to nail down the problem was to "paint" allocated memory. It turns out I was declaring a variable but not setting it to a value before I used it as an array index. The array access was therefore out of bounds because it used whatever garbage was in the variable's memory location - often it was 0 or some other small number, so no problem. But when I ran the program long enough, it was more likely to hold a larger number and corrupt the heap when I wrote outside the bounds of the array. Painting the allocated memory with a large number forced a segfault right at the line of code where I attempted to write a value into the array, and I could see that large painted number being used as the array index.
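For anyone hitting a similar crash, here is a condensed, made-up illustration of this class of bug (fillTable and its arguments are hypothetical, not the code from the question):

    #include <cstddef>

    void fillTable(double* table, std::size_t size) {
        std::size_t index;                  // BUG: declared but never initialized
        for (std::size_t i = 0; i < size; ++i) {
            // On the first pass, 'index' holds whatever garbage was in that
            // stack slot. If it happens to be small, nothing visible goes
            // wrong; if it is large, this write lands outside 'table' and can
            // corrupt the heap, causing a crash much later inside new/malloc.
            table[index] = 0.0;
            index = i;                      // the intended assignment comes too late
        }
    }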
This is likely caused by heap corruption - elsewhere in the code, you're overwriting freed memory, or writing to memory outside the bounds of your memory allocations (buffer overflows, or writing before the start of allocated memory). Heap corruption typically results in crashes at an unrelated location, such as in STL code. Since you're on a unix platform, you should try running your program under valgrind to try to identify the original heap corruption.
This sounds like a corruption of the dynamic memory allocation data structures, which is often caused by other, unrelated code. This type of bug is notorious for being hard to find and reproduce without external tools, because any change in memory layout can mask it. It probably worked through luck in the Windows version.
Memory debuggers are great tools to catch such corruption. valgrind, dmalloc and efence are very good alternatives to check the correctness of your program.
I have a few memory leaks, but they're very small.
Well, if you run it for a while, then it ends up being a lot of memory. That's kind of the thing about leaks. You should log your memory usage at the point of the crash to see if there was any memory available.
My program fails with 'std::bad_alloc' error message. The program is scalable, so I've tested on a smaller version with valgrind and there are no memory leaks.
This is an application of statistical mechanics, so I am basically creating hundreds of objects, changing their internal data (in this case STL vectors of doubles), and writing to a data file. The creation of the objects lies inside a loop, so when each iteration ends their memory is freed. Something like:
    for (cont = 0; cont < MAX; cont++) {
        classSection seccion;
        seccion.GenerateObjects(...);
        while (somecondition) {
            seccion.evolve();
            seccion.writedatatofile();
        }
    }
So there are two variables that set the computing time of the program: the size of the system and the number of runs. It only crashes for big systems with many runs. Any ideas on how to catch this memory problem?
Thanks,
Run the program under a debugger so that it stops once that exception is thrown and you can observe the call stack.
Three most probable problems are:
heap fragmentation
too many objects created on heap (but still pointed to from the program)
a request for an unreasonably large block of memory
valgrind would not show a memory leak because you may well not have one that valgrind would find.
You can actually have memory leaks in garbage-collected languages like Java. Although the memory is cleaned up there, it does not mean a bad programmer cannot hold on indefinitely to data they no longer need (eg building up a hash-map indefinitely). The garbage collector cannot determine that the user does not really need that data anymore.
You may be doing something like that here but we would need to see more of your code.
By the way, if you have a collection that really does have masses of data you are often better off using std::deque rather than std::vector unless you really really need it all to be contiguous.
I have the need to allocate large blocks of memory with new.
I am stuck with using new because I am writing a mock for the producer side of a two part application. The actual producer code is allocating these large blocks and my code has responsibility to delete them (after processing them).
Is there a way I can ensure my application is capable of allocating such a large amount of memory from the heap? Can I set the heap to a larger size?
My case is 64 blocks of 288000 bytes. Sometimes I can get 12 of them to allocate, other times 27, before I get a std::bad_alloc exception.
This is: C++, GCC on Linux (32bit).
With respect to new in C++/GCC/Linux(32bit)...
It's been a while, and it's implementation dependent, but I believe new will, behind the scenes, invoke malloc(). Malloc(), unless you ask for something exceeding the address space of the process, or outside of specified (ulimit/getrusage) limits, won't fail. Even when your system doesn't have enough RAM+SWAP. For example: malloc(1gig) on a system with 256Meg of RAM + 0 SWAP will, I believe, succeed.
However, when you go use that memory, the kernel supplies the pages through a lazy-allocation mechanism. At that point, when you first read or write to that memory, if the kernel cannot allocate memory pages to your process, it kills your process.
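If you want to see this behaviour for yourself, here is a small illustrative program; whether the malloc() succeeds and where the process dies depends on your overcommit settings and on available RAM+swap, so treat it as a sketch:

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    int main() {
        const std::size_t oneGig = 1024UL * 1024UL * 1024UL;

        // On a typical Linux setup with overcommit enabled, this malloc()
        // can succeed even if RAM + swap is smaller than 1 GiB...
        char* p = static_cast<char*>(std::malloc(oneGig));
        if (p == nullptr) {
            std::puts("malloc failed up front");
            return 1;
        }
        std::puts("malloc succeeded - pages not committed yet");

        // ...the pages are only really committed when they are touched.
        // If the kernel cannot back them at this point, the OOM killer
        // may terminate the process instead of malloc() ever failing.
        std::memset(p, 0, oneGig);
        std::puts("all pages touched");

        std::free(p);
        return 0;
    }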
This can be a problem on a shared computer, when your colleague has a slow core leak. Especially when he starts knocking out system processes.
So the fact that you are seeing std::bad_alloc exceptions is "interesting".
Now new will run the constructor on the allocated memory, touching all those memory pages before it returns. Depending on implementation, it might be trapping the out-of-memory signal.
Have you tried this with plain o'l malloc?
Have you tried running the "free" program? Do you have enough memory available?
As others have suggested, have you checked limit/ulimit/getrusage() for hard & soft constraints?
What does your code look like, exactly? I'm guessing new ClassFoo [ N ]. Or perhaps new char [ N ].
What is sizeof(ClassFoo)? What is N?
Allocating 64*288000 (17.58Meg) should be trivial for most modern machines... Are you running on an embedded system or something otherwise special?
Alternatively, are you linking with a custom new allocator? Does your class have its own new allocator?
Does your data structure (class) allocate other objects as part of its constructor?
Has someone tampered with your libraries? Do you have multiple compilers installed? Are you using the wrong include or library paths?
Are you linking against stale object files? Do you simply need to recompile your all your source files?
Can you create a trivial test program? Just a couple lines of code that reproduces the bug? Or is your problem elsewhere, and only showing up here?
--
For what it's worth, I've allocated over 2gig data blocks with new in 32bit linux under g++. Your problem lies elsewhere.
It's possible that you are being limited by the process' ulimit; run ulimit -a and check the virtual memory and data seg size limits. Other than that, can you post your allocation code so we can see what's actually going on?
Update:
I have since fixed an array indexing bug and it is allocating properly now.
If I had to guess... I was walking all over my heap and messing with malloc's internal data structures. (??)
I would suggest allocating all your memory at program startup and using placement new to position your buffers. Why this approach? Well, you can manually keep track of fragmentation and such. There is no portable way of determining how much memory your process is able to allocate. I'm positive there's a Linux-specific system call that will get you that info (I can't think of what it is). Good luck.
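To make the placement-new suggestion concrete, here is a rough sketch using the 64 x 288000-byte figures from the question (the arena and Block names are made up; adjust to taste):

    #include <cstddef>
    #include <new>        // placement new

    struct Block {
        char payload[288000];
    };

    int main() {
        const std::size_t kBlocks = 64;

        // Grab all the memory once, at startup (static storage here, but a
        // single big ::operator new at startup would work the same way).
        static alignas(Block) char arena[kBlocks * sizeof(Block)];

        // Construct each Block in place inside the arena with placement new;
        // no further heap allocation happens after this point.
        Block* blocks[kBlocks];
        for (std::size_t i = 0; i < kBlocks; ++i)
            blocks[i] = new (arena + i * sizeof(Block)) Block;

        // ... use blocks[i]->payload ...

        // Placement new means no delete: call destructors explicitly if needed.
        for (std::size_t i = 0; i < kBlocks; ++i)
            blocks[i]->~Block();
        return 0;
    }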
The fact that you're getting different behavior when you run the program at different times makes me think that the allocation code isn't the real problem. Instead, somebody else is using the memory and you're the canary finding out it's missing.
If that "somebody else" is in your program, you should be able to find it by using Valgrind.
If that somebody else is another program, you should be able to determine that by going to a different runlevel (although you won't necessarily know the culprit).