I have two threads running in a program.
They are created using boost::thread.
The two threads share nothing in terms of memory: no data structures or objects are shared between them.
The second thread uses a class whose private members include many Eigen double matrices.
I make sure that the matrices are aligned, using the Eigen directive EIGEN_MAKE_ALIGNED_OPERATOR_NEW, etc.
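Roughly, the class looks like this (a hypothetical sketch, since the real code is too large to post; the names are illustrative):

    #include <Eigen/Dense>

    class SecondThreadWorker {                 // hypothetical name
    public:
        EIGEN_MAKE_ALIGNED_OPERATOR_NEW        // aligned operator new for heap-allocated instances
    private:
        Eigen::Matrix4d pose_;                 // fixed-size member: needs the macro above
        Eigen::MatrixXd data_;                 // dynamic-size: manages its own aligned storage
    };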
While the first thread is running, the elements of these matrices in the second thread's class get overwritten.
I verified this by inspection: elements that should be decimals suddenly become integers.
When the first thread is not running, the second has no problem and its Eigen members hold correct values.
Again:
1) The two threads share no data structures.
2) There is no segmentation fault message or something similar or some error message while the program is running.
3) Any suggestions on how to protect the second thread's memory, or how to track down how the memory is being violated?
Thank you in advance. I am really sorry I did not post code, but it is huge.
Let me know if you want me to post something specific from the code.
You likely want a debugging tool like Guard Malloc on the Mac or Electric Fence on Linux.
These work by adding "guard pages" adjacent to allocations, marked as inaccessible virtual memory. When the memory is freed, it too is marked inaccessible. If the program attempts to access memory that it shouldn't, the modified allocator ensures that it crashes immediately, so your debugger will hopefully highlight the line of code that caused the corruption. Beware that this can consume large amounts of memory, so you'll potentially need a small data set that reproduces the corruption.
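To illustrate the mechanism, here is a minimal sketch of the guard-page idea these tools implement (POSIX-only; not how you would use the tools themselves, just what they do under the hood):

    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        long page = sysconf(_SC_PAGESIZE);
        // reserve two pages; the second will become an inaccessible guard page
        char* mem = static_cast<char*>(mmap(nullptr, 2 * page,
                                            PROT_READ | PROT_WRITE,
                                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
        if (mem == MAP_FAILED) return 1;
        mprotect(mem + page, page, PROT_NONE);  // mark the guard page inaccessible
        mem[page - 1] = 42;                     // last usable byte: fine
        std::puts("in-bounds write succeeded");
        mem[page] = 42;                         // first byte of the guard page: SIGSEGV here
        std::puts("never reached");
    }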
My problem is that my application has a steadily growing memory leak which goes undetected. Very much simplified, what I do is create an object, run a method on it, and then delete the object. Each time I do this, the memory usage shown in the Task Manager grows by around 50-100 MB, which exhausts my whole memory after a few runs. I do this with multiple threads, but there are no static variables, so there is no collision between the different objects in my threads. They only use static methods of other objects, which don't modify any memory other than what is passed in the parameters, so it's thread-safe.
What I tried in order to find the reason:
Using crtdbg.h (CRT memory-leak detection), but it only reports leaks that have existed since the start of my application; they get deleted at shutdown and they are not that big.
I looked for missing virtual destructors in all the classes I inherit from, but they are all OK.
What else can I try to find out where my application leaks? I can't find any leaks on the heap, and I don't know of any reason other than the destructor problem that could cause leaks on the stack (by which I mean an object failing to destroy a local std::string member that has allocated space on the heap). I don't know whether there are other causes of such "stack leaks", but I do know that in the parts of my method where the memory grows the most, there are no heap allocations.
You probably want to use a nicer, more robust leak detector. You may also need to use a leak detector that can output a heap report at different times while your program is running. Finally, you should consider that your problem might be due to heap fragmentation rather than just a leak.
You can try Visual Leak Detector, which is free.
This question contains a list of other memory check products, from the basic to the quite advanced/expensive. CRTDBG is the lowest-common-denominator solution; I've had good luck with BoundsChecker, although it is not free.
I'm not sure how you have used the CRTDBG library, but it provides lots of goodies:
http://msdn.microsoft.com/en-us/library/x98tx3cf.aspx
You can use _CrtMemCheckpoint in a divide-and-conquer manner. It allows you to measure the difference in memory use between two points in your code. With multithreading this can be difficult.
Another is _CrtDumpMemoryLeaks (which I suppose is executed anyway at application exit) with _CRTDBG_MAP_ALLOC enabled; this should show the exact location of memory allocations.
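For reference, the checkpoint approach looks roughly like this (a minimal, MSVC-only sketch following the MSDN documentation linked above; suspect_code is a hypothetical stand-in for the region you want to measure):

    #define _CRTDBG_MAP_ALLOC       // make dumps report file/line of each allocation
    #include <stdlib.h>
    #include <crtdbg.h>

    // hypothetical stand-in for the code region being measured
    void suspect_code() { new int[10]; }   // deliberate leak for demonstration

    int main() {
        _CrtMemState before, after, diff;
        _CrtMemCheckpoint(&before);
        suspect_code();
        _CrtMemCheckpoint(&after);
        if (_CrtMemDifference(&diff, &before, &after))
            _CrtMemDumpStatistics(&diff);   // what was allocated between the checkpoints
        _CrtDumpMemoryLeaks();              // everything still unfreed at this point
        return 0;
    }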
Another hint: maybe you have CRTDBG overconfigured; with lots of small allocations it can build huge internal bookkeeping structures.
Try switching off parts of your code and check whether the problem persists.
If you build your app on a daily basis, try running previous versions to spot where the problem appeared, then compare the changes in your source code repository.
...
I am trying to understand why the C++ code I am working with gives out-of-memory errors. It is a scientific code with several flag variables to turn a bunch of code functionality on and off. The code works fine when a couple of the functions are turned off. However, when those routines are active, it causes 'Out of memory' situations.
The error file created by qsub says:
Exit status: -4
job terminated due to one or more nodes running out of memory
The function I am talking about used to work fine until I made some additions. I basically created some pointers, initialized them to NULL, allocated a chunk of memory to associate with each one, stored a quantity of interest in it, and later freed it with delete []*p.
I am trying hard to figure out the source of the problem and what is causing it. I believe it's some C++ programming error which I am overlooking due to my inexperience with C++. Is there a way to figure out what the bug is, where it is, or how to resolve it?
Some thoughts that ran through my mind:
- use try { } catch { } (see the sketch just after this list)
- run some memory program to track memory usage on the system in real time
- any other efficient way of debugging MPI/C++ code for such situations
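On the first of those thoughts: operator new signals failure by throwing std::bad_alloc, so a catch block can at least report which allocation failed (a minimal sketch; note that under Linux overcommit, new can appear to succeed and the node may still die later when the pages are touched):

    #include <iostream>
    #include <new>

    int main() {
        double* p = nullptr;
        try {
            // hypothetical: the kind of large allocation described above
            p = new double[50000000];
        } catch (const std::bad_alloc& e) {
            std::cerr << "allocation failed: " << e.what() << '\n';
            return 1;
        }
        // ... store the quantity of interest ...
        delete[] p;
        return 0;
    }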
I read a bit about stacks and heaps and how memory is stored. What is the safest way to declare a 2D array or a 1D array on the fly: pointer-based or array-definition-based?
Please educate me with your thoughts.
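On that last question: a std::vector-based 2D array avoids the manual new/delete bookkeeping entirely, which makes it hard to leak (a sketch, assuming the dimensions are only known at runtime):

    #include <vector>

    int main() {
        int rows = 1000, cols = 2000;   // hypothetical runtime sizes
        // each row is its own vector; everything is freed automatically
        std::vector<std::vector<double>> a(rows, std::vector<double>(cols, 0.0));
        a[5][7] = 3.14;

        // alternative: one contiguous block indexed manually (cache-friendlier)
        std::vector<double> flat(static_cast<size_t>(rows) * cols, 0.0);
        flat[5 * static_cast<size_t>(cols) + 7] = 3.14;
        return 0;
    }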
valgrind should be able to give you an indication of where memory is being leaked.
I wrote a program in C++ to perform Monte Carlo simulations. The thing is that after five iterations (every iteration runs Monte Carlo with a different configuration), the process is killed.
At the beginning I thought it was a memory problem, but after reading this nice post on memory management (http://stackoverflow.com/questions/76796/memory-management-in-c), my scoping seems to be correct.
I do not use a lot of memory, since my results are stored in a relatively small array which is rewritten every iteration. In an iteration I am not using more memory than in the previous one.
I cannot find the leak, if there is one. I have a lot of function calls to perform the calculations, but I do not need to destroy the objects once I am out of the function, right?
Any tip?
EDIT: The program takes all the processing power of my computer; while it is running I cannot even move the mouse.
Thanks in advance.
EDIT, SOLVED: The problem was that I was not deleting the pointers I used, so on every iteration the memory was not deallocated and a whole new set of pointers was created, using more memory. Thanks a lot to those who answered.
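In code terms, the fix amounts to something like this (a hypothetical sketch, since the original code was not posted; Result is an invented stand-in for the per-iteration object):

    #include <memory>

    struct Result { double data[1000]; };   // hypothetical per-iteration object

    void iterate_leaky()  { Result* r = new Result; /* ...use r... */ }            // never deleted: leaks
    void iterate_fixed()  { Result* r = new Result; /* ...use r... */ delete r; }  // the fix described above
    void iterate_modern() { auto r = std::make_unique<Result>(); /* ...use r... */ } // freed automatically

    int main() {
        for (int i = 0; i < 5; ++i)
            iterate_modern();   // memory no longer grows across iterations
        return 0;
    }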
Depending on the platform you are on, you can use a tool like Valgrind or VLD (Visual Leak Detector) to find memory leaks in your program.
My program fails with a 'std::bad_alloc' error message. The program is scalable, so I tested a smaller version with valgrind and there are no memory leaks.
This is an application of statistical mechanics, so I am basically making hundreds of objects, changing their internal data (in this case std::vectors of doubles), and writing to a data file. The creation of the objects lies inside a loop, so the memory is freed when each iteration ends. Something like:
    for (int cont = 0; cont < MAX; cont++) {
        classSection seccion;
        seccion.GenerateObjects(...);
        while (somecondition) {
            seccion.evolve();
            seccion.writedatatofile();
        }
    }
So there are two variables that set the computing time of the program: the size of the system and the number of runs. It only crashes for big systems with many runs. Any ideas on how to catch this memory problem?
Thanks,
Run the program under a debugger so that it stops once that exception is thrown, and you can then observe the call stack (in gdb, "catch throw" will stop at the point of the throw).
The three most probable problems are:
- heap fragmentation
- too many objects created on the heap (but still pointed to from the program)
- a request for an unreasonably large block of memory
valgrind would not show a memory leak because you may well not have the kind of leak that valgrind can find.
You can have memory leaks even in garbage-collected languages like Java. Although the memory is cleaned up there, that does not stop a careless programmer from holding on indefinitely to data they no longer need (e.g. building up a hash map indefinitely). The garbage collector cannot determine that the user does not really need that data anymore.
You may be doing something like that here but we would need to see more of your code.
By the way, if you have a collection that really does hold masses of data, you are often better off using std::deque rather than std::vector, unless you really need it all to be contiguous: a vector requires one contiguous block and, when it grows, transiently demands roughly twice its size, whereas a deque allocates in smaller chunks.
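For what that swap looks like in practice (a sketch; the container choice is the whole point):

    #include <deque>

    int main() {
        // a deque allocates fixed-size chunks on demand, so pushing ten million
        // doubles never requires one ~80 MB contiguous block the way a
        // std::vector reallocation would
        std::deque<double> d;
        for (int i = 0; i < 10000000; ++i)
            d.push_back(i);
        return 0;
    }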
I have the need to allocate large blocks of memory with new.
I am stuck with using new because I am writing a mock for the producer side of a two-part application. The actual producer code allocates these large blocks, and my code is responsible for deleting them (after processing them).
Is there a way I can ensure my application is capable of allocating such a large amount of memory from the heap? Can I set the heap to a larger size?
My case is 64 blocks of 288000 bytes each. Sometimes I can allocate 12 of them, other times 27, before I get a std::bad_alloc exception.
This is: C++, GCC on Linux (32bit).
With respect to new in C++/GCC/Linux(32bit)...
It's been a while, and it's implementation dependent, but I believe new will, behind the scenes, invoke malloc(). malloc(), unless you ask for something exceeding the address space of the process, or exceeding specified (ulimit/getrusage) limits, won't fail, even when your system doesn't have enough RAM+swap. For example: a malloc() of one gigabyte on a system with 256 MB of RAM and no swap will, I believe, succeed.
However, when you go to use that memory, the kernel supplies the pages through a lazy-allocation mechanism. At that point, when you first read or write that memory, if the kernel cannot allocate pages for your process, it kills your process.
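A minimal sketch that demonstrates this lazy-allocation behaviour on a typical Linux configuration (the exact outcome depends on the overcommit settings in /proc/sys/vm/overcommit_memory):

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    int main() {
        size_t n = static_cast<size_t>(1) << 30;        // ask for 1 GiB
        char* p = static_cast<char*>(std::malloc(n));
        if (!p) { std::puts("malloc itself failed"); return 1; }
        std::puts("malloc succeeded; touching pages now...");
        std::memset(p, 1, n);   // faults the pages in; the OOM killer may strike here
        std::puts("all pages are really there");
        std::free(p);
        return 0;
    }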
This can be a problem on a shared computer, for example when a colleague's process has a slow memory leak. Especially when it starts knocking out system processes.
So the fact that you are seeing std::bad_alloc exceptions is "interesting".
Now, new will also run the constructor on the allocated memory, touching all those memory pages before it returns. Depending on the implementation, it might be trapping the out-of-memory signal.
Have you tried this with plain ol' malloc?
Have you tried running the "free" program? Do you have enough memory available?
As others have suggested, have you checked limit/ulimit/getrusage() for hard & soft constraints?
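Checking those constraints from inside the process looks roughly like this (a sketch using getrlimit; ulimit -a shows the same values from the shell):

    #include <sys/resource.h>
    #include <cstdio>

    int main() {
        rlimit rl;
        if (getrlimit(RLIMIT_AS, &rl) == 0)      // total virtual address space
            std::printf("RLIMIT_AS:   soft=%llu hard=%llu\n",
                        static_cast<unsigned long long>(rl.rlim_cur),
                        static_cast<unsigned long long>(rl.rlim_max));
        if (getrlimit(RLIMIT_DATA, &rl) == 0)    // data segment size
            std::printf("RLIMIT_DATA: soft=%llu hard=%llu\n",
                        static_cast<unsigned long long>(rl.rlim_cur),
                        static_cast<unsigned long long>(rl.rlim_max));
        return 0;
    }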
What does your code look like, exactly? I'm guessing new ClassFoo [ N ]. Or perhaps new char [ N ].
What is sizeof(ClassFoo)? What is N?
Allocating 64*288000 bytes (about 17.58 MB) should be trivial for most modern machines... Are you running on an embedded system or something otherwise special?
Alternatively, are you linking with a custom new allocator? Does your class have its own new allocator?
Does your data structure (class) allocate other objects as part of its constructor?
Has someone tampered with your libraries? Do you have multiple compilers installed? Are you using the wrong include or library paths?
Are you linking against stale object files? Do you simply need to recompile all your source files?
Can you create a trivial test program? Just a couple lines of code that reproduces the bug? Or is your problem elsewhere, and only showing up here?
--
For what it's worth, I've allocated data blocks of over 2 GB with new on 32-bit Linux under g++. Your problem lies elsewhere.
It's possible that you are being limited by the process's ulimit; run ulimit -a and check the virtual memory and data seg size limits. Other than that, can you post your allocation code so we can see what's actually going on?
Update:
I have since fixed an array indexing bug and it is allocating properly now.
If I had to guess... I was walking all over my heap and messing with malloc's internal data structures. (??)
I would suggest allocating all your memory at program startup and using placement new to position your buffers. Why this approach? You can manually keep track of fragmentation and such. There is no portable way of determining how much memory your process is able to allocate; I'm positive there's a Linux-specific system call that will get you that info (I can't think of what it is). Good luck.
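A minimal sketch of that suggestion (the names are illustrative; the block size is taken from the question):

    #include <new>

    struct Block { char payload[288000]; };   // the 288000-byte blocks from the question

    int main() {
        // one up-front reservation at startup...
        static char arena[64 * sizeof(Block)];
        Block* blocks[64];
        // ...then position each buffer inside it with placement new
        for (int i = 0; i < 64; ++i)
            blocks[i] = new (arena + i * sizeof(Block)) Block;
        // ... produce and consume the blocks ...
        for (int i = 0; i < 64; ++i)
            blocks[i]->~Block();   // placement new requires manual destruction
        return 0;
    }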
The fact that you're getting different behavior when you run the program at different times makes me think that the allocation code isn't the real problem. Instead, somebody else is using the memory and you're the canary finding out it's missing.
If that "somebody else" is in your program, you should be able to find it by using Valgrind.
If that somebody else is another program, you should be able to determine that by going to a different runlevel (although you won't necessarily know the culprit).