"Memory error" in the title means the type of error that can cause the program to crash or corrupt managed memory.
To make it clearer, also assume memory full is not this type of "memory error".
Thanks
If your leak causes you to run out of memory, then one thing that can happen is that memory allocations will fail. If you are not correctly dealing with these failed allocations, then all sorts of bad things can happen.
But, in general, I would say that if you have memory corruption going on, it's not due directly to the leak. More likely the leak is irrelevant, or the leak and the memory trashing are both symptoms of a different bug.
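For illustration, here is a minimal sketch (not from the original answer) of how a failed allocation shows up in C++ and what handling it correctly might look like:

    #include <cstddef>
    #include <cstdio>
    #include <new>

    int main() {
        // Deliberately oversized request, just to force a failure.
        const std::size_t huge = static_cast<std::size_t>(-1) / 2;

        // Plain new throws std::bad_alloc on failure; if nothing catches it,
        // the program terminates -- one of the "bad things" mentioned above.
        try {
            char *p = new char[huge];
            delete[] p;
        } catch (const std::bad_alloc &) {
            std::fprintf(stderr, "allocation failed and was handled\n");
        }

        // The nothrow form returns nullptr instead; forgetting this check and
        // then dereferencing the pointer is the classic unhandled-failure bug.
        char *q = new (std::nothrow) char[huge];
        if (q == nullptr) {
            std::fprintf(stderr, "nothrow allocation failed\n");
        } else {
            delete[] q;
        }
    }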
valgrind?
If the leak is big enough, yes, it will.
Yes, it can. Memory allocation will just keep allocating, and once you are out of memory, allocations start to fail; a program that ignores those failures can then end up misbehaving or trampling memory it does not own.
If you are able to run your program in a simulator, you can just put your function in an infinite while loop and watch your task manager. If the memory use of the simulated task keeps climbing into tens of MBs, there is certainly a memory leak.
I have a complicated application with lots of third-party libraries and dynamically loaded plugins. Something causes the app to crash (SIGSEGV) after main exits. The call stack points to unknown addresses, so not only can I not debug it, I don't even have an idea where the crash happens.
I tried to run the app with Valgrind - it shows leaks (some kilobytes), but I believe they are false positives and/or I can't do anything about them because they come from the third-party code.
My question: I believe that memory leaks can't cause a segmentation fault; at least I can't come up with a possible scenario. But since I'm not sure, I'd like to hear of cases where a leak can break the program (assuming it's not a crazy leak where I'm simply out of memory).
No, memory leaks by themselves would not cause a segmentation fault. However, memory leaks usually indicate sloppy code, and in sloppy code other issues, which would cause a segmentation fault, are likely to be present.
No, a segmentation fault in itself is not much more than trying to access a piece of memory that you are not allowed to access. A memory leak, on the other hand, is when you allocate some memory and later on 'forget' the location of that piece of memory. The data stored there is still there, but it cannot be accessed anymore from that program instance.
Both errors/faults almost always occur because of sloppy coding practices, so it may well be that the same sloppy coding that causes a memory leak is also responsible for segmentation faults.
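To make the distinction concrete, a minimal hypothetical sketch of the two bugs side by side:

    #include <cstdlib>

    void leak() {
        // Memory leak: the allocation succeeds, then the only pointer to it
        // goes out of scope. Nothing crashes; the memory is simply
        // unreachable for the rest of the program's life.
        char *p = static_cast<char *>(std::malloc(1024));
        (void)p; // no free(p) -- the location is "forgotten"
    }

    void segfault() {
        // Segmentation fault: accessing memory the process is not allowed to
        // touch. A different bug, even if both often come from the same
        // sloppy coding.
        char *q = nullptr;
        *q = 'x'; // invalid write -> SIGSEGV
    }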
I'm trying to allocate some memory but sometimes get the error "out of memory". cudaMemGetInfo says that more memory is available than I need, so the problem is memory fragmentation. Is it possible to fix this problem? Is it possible to place elements in memory not contiguously, but split into a few pieces that I can fit into the available memory?
If you get "out of memory" because of memory fragmentation, then there is some error in the way that you work with memory!! You are responsible for fragmenting that memory, consider a redesign of your program and for example use a pool of memory to avoid too much new/delete to avoid fragmenting memory
I've written some C++ code that runs perfectly fine on my laptop PC (compiled under both a Microsoft compiler and g++ under MinGW). I am in the process of porting it to a Unix machine.
I've compiled with both g++ and with Intel's ipcp on the Unix machine and in both cases, my program crashes (segfaults) after running for a while. I can run it for a short time without a crash.
When I debug, I find that the crash is happening when the program tries to copy an STL list - specifically, it happens when the program tries to allocate memory to create a new node in the list. And the error I get in the debugger (TotalView) is that "an allocation call failed or the address returned is null."
The crash does not always happen in the same place in the code each time I run it, but does always happen during an allocation call to create a node in an STL list. I don't think I'm running out of memory. I have a few memory leaks, but they're very small. What else can cause a memory allocation error? And why does it happen on the Unix machine and not on my PC?
UPDATE: I used MemoryScape to help debug. When I used guard blocks, the program ran through without crashing, further suggesting a memory issue. What finally worked to nail down the problem was to "paint" allocated memory. It turns out I was declaring a variable but never setting it to a value before using it as an array index. The array access was therefore overrunning, because it used whatever garbage was in the variable's memory location - often 0 or some other small number, so no problem. But when I ran the program long enough, the location was more likely to hold a larger number, and writing out of bounds of the array corrupted the heap. Painting the allocated memory with a large number forced a segfault right at the line of code where I attempted to write a value into the array, and I could see that large painted number being used as the array index.
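The bug described in the update boils down to something like this (a hypothetical reconstruction, not the original code):

    #include <vector>

    int main() {
        std::vector<int> table(16);

        int index; // declared but never assigned: holds whatever garbage
                   // happens to be in that stack slot
        table[index] = 42; // often the garbage is 0 or small, so this "works";
                           // when it is large, the write lands outside the
                           // buffer and silently corrupts the heap
    }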
This is likely caused by heap corruption - elsewhere in the code, you're overwriting freed memory, or writing to memory outside the bounds of your allocations (buffer overflows, or writing before the start of allocated memory). Heap corruption typically results in crashes at an unrelated location, such as in STL code. Since you're on a Unix platform, you should try running your program under valgrind to try to identify the original heap corruption.
This sounds like a corruption of the dynamic memory allocation data structures, which is often caused by other, unrelated code. This type of bug is notorious for being hard to find and reproduce without external tools, because any change in memory layout can mask it. It probably worked through luck in the Windows version.
Memory debuggers are great tools to catch such corruption. valgrind, dmalloc and efence are very good options for checking the correctness of your program.
I have a few memory leaks, but they're very small.
Well, if you run it for a while, then it ends up being a lot of memory. That's kind of the thing about leaks. You should log your memory usage at the point of the crash to see if there was any memory available.
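One way to do that logging on a Unix machine (a sketch using getrusage; the helper name is made up for this example):

    #include <sys/resource.h>
    #include <cstdio>

    // Log the peak resident set size, e.g. right before the allocation that
    // crashes, to see whether the "small" leaks have in fact grown large.
    void logMemoryUsage(const char *where) {
        struct rusage ru;
        if (getrusage(RUSAGE_SELF, &ru) == 0) {
            // ru_maxrss is in kilobytes on Linux (bytes on macOS)
            std::fprintf(stderr, "[%s] peak RSS: %ld kB\n", where, ru.ru_maxrss);
        }
    }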
I have a C++ application with a very strange phenomenon.
I'm running my application on a large input, and I have many buffers that are allocated and de-allocated during run-time.
For input that is large enough, I get an allocation error, meaning out of memory.
But when I put a breakpoint on each allocation and then run from allocation to allocation, my application won't crash.
My assumption is that it has to be something related to the way Windows XP manages memory.
Does anyone have an idea what can cause this phenomenon, and how to overcome it?
Thanks.
Frequent allocation and deallocation can lead to memory fragmentation. My guess is that, when you step through the program with a debugger, it gives the OS idle-time to defragment the memory. To avoid the problem when running your program normally, you should consider memory/object-pool (see here and here).
Application behavior can differ between Release and Debug runs. Since, as you say, a normal run gives "out of memory", there is something wrong with your code. It may mean there is no memory left, or no contiguous block of the required size.
You can use static or dynamic code analysis to find the problem, for example:
IBM Rational Purify (trial version)
I'm getting
*** glibc detected *** (/my/program/...): malloc(): memory corruption: 0xf28000fa ***
I've run under valgrind, which reports cases of reading memory that has been freed, but no cases of illegal memory writes.
Could reading freed memory cause memory corruption? If not, any suggestions where else to look beyond the valgrind output?
You can use GDB to watch each write in this memory address, like this:
(gdb) watch *((int*)0xf28000fa)
Then you can debug where the problem is.
Reading freed memory doesn't cause memory corruption by itself, but there are a lot of situations you might not even imagine that could be the cause of this, and Valgrind is not a perfect tool.
See more information about debugging memory issues here.
It won't corrupt the memory you read, but it isn't going to do wonders for the working of your program.
Reading freed memory is also considered memory corruption.
You can also check http://en.wikipedia.org/wiki/Memory_corruption.
No, reading invalid locations can't possibly cause the error you are seeing. If the location is valid in your address space, you'll just be reading junk; if not, you'll get a segmentation fault.
Check valgrind's output to see where the invalid reads are coming from - this will give you a hint towards where the real mistake lies. Once you find this, I'm quite sure the real culprit won't be far away, and it's probably an invalid write.
It shouldn't be that common on current processors, but I've worked on platforms where even a read operation could do magic. Specifically, the 6502 processor has memory-mapped I/O, so a regular "read" instruction at an I/O-mapped address can do surprising things.
About 30 years ago I got bitten by that, because my bad read provoked a memory bank switch (that is, every byte of memory, including the area containing the code, got a new, different value just after that instruction).
The funny part is that it wasn't a truly "unintentional" bad read... I actually did the read even though I knew it would return garbage, because it saved me a few assembler instructions... not a smart move.
What can really happen is that free() may use the madvise(MADV_DONTNEED) syscall to tell the kernel "I don't need this page, drop it" (see the madvise(2) manpage). If that page really gets deallocated and you then read anything from it, the kernel will silently provide a fresh, zeroed-out page - and thus cause your application to encounter completely unexpected data!
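You can see this behavior directly with a small experiment (a Linux-specific sketch):

    #include <cstddef>
    #include <cstdio>
    #include <cstring>
    #include <sys/mman.h>

    int main() {
        // Map one anonymous page and fill it with a recognizable pattern.
        const std::size_t page = 4096;
        void *mem = mmap(nullptr, page, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) return 1;
        char *p = static_cast<char *>(mem);
        std::memset(p, 0xAB, page);
        std::printf("before: 0x%02x\n",
                    static_cast<unsigned>(static_cast<unsigned char>(p[0])));

        // Tell the kernel the page is no longer needed. The mapping stays
        // valid, but its contents may be thrown away.
        madvise(p, page, MADV_DONTNEED);

        // Reading it again is still legal -- but the kernel hands back a
        // fresh zero-filled page, so the data has silently changed.
        std::printf("after:  0x%02x\n",
                    static_cast<unsigned>(static_cast<unsigned char>(p[0])));
        munmap(p, page);
    }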