I am using PageHeap to identify heap corruption. My application has a heap corruption, but it breaks (crashes) when it creates an STL string object for a string passed to a method, and I cannot see any obvious memory problem near the crash location. I have enabled full page heap to detect heap corruption and /RTCs to detect stack corruption.
What should I do to break at the exact location where the heap corruption occurs?
Enabling FULL pageheap can increase the chances of the debugger catching a heap corruption as it's happening:
gflags /p /enable /full <processname>
Also, if you can find out which address is getting overwritten, you can set a breakpoint on memory access (a data breakpoint) in WinDbg. Not sure if the VS debugger has the same feature.
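If you do know the address, a WinDbg hardware write breakpoint looks like this (the address is a placeholder you would fill in from your own investigation):
ba w4 <address>
This breaks whenever any code writes to those 4 bytes, which often lands you right on the corrupting instruction.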
Pageheap does not always detect heap corruption exactly at the moment when it occurs.
Pageheap inserts an invalid page right after allocations. So whenever you overrun an allocated block you get an AV. But there are other possible cases. One example is writing just before an allocated block corrupting heap block header data structure. Heap block header is a valid writable memory (most likely in the same page with the allocated block). Consider the following example:
#include <stdlib.h>

int main()
{
    void* block = malloc(100);
    int* intPtr = (int*)block;
    *(intPtr - 1) = 0x12345; // no crash
    free(block); // crash
    return 0;
}
So writing some garbage just before the allocated block passes just fine. With Pageheap enabled the example breaks inside the free() call. Here is the call stack:
verifier.dll!_VerifierStopMessage@40() + 0x206 bytes
verifier.dll!_AVrfpDphReportCorruptedBlock@16() + 0x239 bytes
verifier.dll!_AVrfpDphCheckNormalHeapBlock@16() + 0x11a bytes
verifier.dll!_AVrfpDphNormalHeapFree@16() + 0x22 bytes
verifier.dll!_AVrfDebugPageHeapFree@12() + 0xe3 bytes
ntdll.dll!_RtlDebugFreeHeap@12() + 0x2f bytes
ntdll.dll!@RtlpFreeHeap@16() + 0x36919 bytes
ntdll.dll!_RtlFreeHeap@12() + 0x722 bytes
heapripper.exe!free(void * pBlock=0x0603bf98) Line 110 C
> heapripper.exe!main() Line 11 + 0x9 bytes C++
heapripper.exe!__tmainCRTStartup() Line 266 + 0x12 bytes C
kernel32.dll!@BaseThreadInitThunk@12() + 0xe bytes
ntdll.dll!___RtlUserThreadStart@8() + 0x27 bytes
ntdll.dll!__RtlUserThreadStart@8() + 0x1b bytes
Pageheap enables rigorous heap consistency checks, but the checks do not kick in until some other heap API is called; you can see the check routines on the stack above. (Without Pageheap the application would probably just AV inside the heap implementation while trying to use an invalid pointer.)
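If you want a consistency check to run at a point of your choosing rather than waiting for the next heap call, one hedged option (not part of the original answer) is to ask the heap to validate itself at suspicious places. A minimal sketch, assuming your allocator uses the default process heap:

#include <windows.h>

/* Hypothetical helper: walks and validates the whole process heap on demand
   and breaks into the debugger if a corrupted block is found. Adjust the heap
   handle if your CRT uses a different heap. */
void CheckHeapNow()
{
    if (!HeapValidate(GetProcessHeap(), 0, NULL))
        DebugBreak();
}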
So Pageheap does not give you a 100% guarantee of catching a corruption exactly at the moment it occurs. For that you need tools like Purify or Valgrind that track every memory access.
Don't get me wrong: I think Pageheap is still very useful. It causes much less performance degradation than Purify or Valgrind, so it lets you run much more complex scenarios.
I am getting an invalid read error when the source string ends with \n; the error disappears when I remove the \n:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main (void)
{
    char *txt = strdup ("this is a not socket terminated message\n");
    printf ("%d: %s\n", strlen (txt), txt);
    free (txt);
    return 0;
}
valgrind output:
==18929== HEAP SUMMARY:
==18929== in use at exit: 0 bytes in 0 blocks
==18929== total heap usage: 2 allocs, 2 frees, 84 bytes allocated
==18929==
==18929== All heap blocks were freed -- no leaks are possible
==18929==
==18929== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
==18929==
==18929== 1 errors in context 1 of 1:
==18929== Invalid read of size 4
==18929== at 0x804847E: main (in /tmp/test)
==18929== Address 0x4204050 is 40 bytes inside a block of size 41 alloc'd
==18929== at 0x402A17C: malloc (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so)
==18929== by 0x8048415: main (in /tmp/test)
==18929==
==18929== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
How can I fix this without sacrificing the newline character?
It's not about the newline character, nor the printf format specifier. You've found what is arguably a bug in strlen(), and I can tell you must be using gcc.
Your program code is perfectly fine. The printf format specifier could be a little better, but it won't cause the valgrind error you are seeing. Let's look at that valgrind error:
==18929== Invalid read of size 4
==18929== at 0x804847E: main (in /tmp/test)
==18929== Address 0x4204050 is 40 bytes inside a block of size 41 alloc'd
==18929== at 0x402A17C: malloc (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so)
==18929== by 0x8048415: main (in /tmp/test)
"Invalid read of size 4" is the first message we must understand. It means that the processor ran an instruction which would load 4 consecutive bytes from memory. The next line indicates that the address attempted to be read was "Address 0x4204050 is 40 bytes inside a block of size 41 alloc'd."
With this information, we can figure it out. First, if you replace that '\n' with a '$', or any other character, the same error will be produced. Try it.
Secondly, we can see that your string has 40 characters in it. Adding the \0 termination character brings the total bytes used to represent the string to 41.
Because we have the message "Address 0x4204050 is 40 bytes inside a block of size 41 alloc'd," we now know everything about what is going wrong.
strdup() allocated the correct amount of memory, 41 bytes.
strlen() attempted to read 4 bytes starting at offset 40, which runs up to 3 bytes past the end of the 41-byte block.
valgrind caught the problem.
This is a glibc bug. Once upon a time, a project called Tiny C Compiler (TCC) was starting to take off. Coincidentally, glibc was completely changed so that the plain string functions such as strlen() no longer existed; they were replaced with optimized versions that read memory in various ways, such as four bytes at a time. gcc was changed at the same time to generate calls to the appropriate implementations depending on the alignment of the input pointer, the hardware being compiled for, and so on. The TCC project was abandoned when this change to the GNU environment made it so difficult to produce a new C compiler, by taking away the ability to use glibc for the standard library.
If you report the bug, the glibc maintainers probably won't fix it, because in practical use this will likely never cause an actual crash. The strlen function reads 4 bytes at a time because it sees that the address is 4-byte aligned, and it is always possible to read 4 bytes from a 4-byte-aligned address without segfaulting, given that reading 1 byte from that address would succeed: the 4-byte read cannot cross a page boundary. So the warning from valgrind doesn't reveal a potential crash, just a mismatch in assumptions about how to program. I consider valgrind technically correct, but I think there is zero chance the glibc maintainers will do anything to squelch the warning.
The error message indicates that it is strlen that read past the malloc'ed buffer allocated by strdup. On a 32-bit platform, an optimized strlen implementation can read 4 bytes at a time into a 32-bit register and do some bit-twiddling to see whether there is a null byte in there. If fewer than 4 bytes remain near the end of the string but 4 bytes are still read to perform the null-byte check, I can see this error being printed. In that case the strlen implementer presumably knows whether it is "safe" to do this on the particular platform, in which case the valgrind error is a false positive.
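To make the mechanism concrete, here is a minimal sketch of the word-at-a-time idea. It is illustrative only and not glibc's actual code; the function name is made up, and the unaligned-prefix handling a real implementation needs is omitted.

#include <stdint.h>
#include <stddef.h>

/* Sketch only: scans the string one aligned 32-bit word at a time using the
   classic "does this word contain a zero byte?" bit trick. Because it assumes
   s is 4-byte aligned, it may read a few bytes past the terminator, but the
   4-byte read can never cross a page boundary. Strict-aliasing rules make this
   formally undefined in portable C; a C library does it below the level where
   those rules apply. */
static size_t strlen_word_at_a_time(const char *s)
{
    const uint32_t *w = (const uint32_t *)s;   /* assumes 4-byte alignment */
    for (;;) {
        uint32_t v = *w++;                     /* read 4 bytes at once */
        if ((v - 0x01010101u) & ~v & 0x80808080u) {   /* word holds a 0 byte */
            const char *p = (const char *)(w - 1);
            while (*p) p++;                    /* locate the exact '\0' */
            return (size_t)(p - s);
        }
    }
}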
In valgrind, we have leak logs like this
==15788== 480 bytes in 20 blocks are definitely lost in loss record 5,016 of 5,501
==20901== 112 (48 direct, 64 indirect) bytes in 2 blocks are definitely lost in loss record 3,501 of 5,122
==20901== 1,375,296 bytes in 78 blocks are possibly lost in loss record 5,109 of 5,122
==20901== Conditional jump or move depends on uninitialised value(s)
==20901== Use of uninitialised value of size 8
In Valgrind's documentation I couldn't find the exact details. Can somebody please explain:
I know "definitely lost" means the allocated memory is not freed at all. But what does "20 blocks" mean, and what does "lost in loss record 5,016 of 5,501" mean? And when it says 480 bytes are lost, is that for one iteration of a loop or the total?
In the second line, "112 (48 direct, 64 indirect) bytes in 2 blocks are definitely lost", what does "48 direct, 64 indirect" mean?
I understand the meaning of "possibly lost", but does that mean valgrind is not sure whether it's a leak?
Regarding the 4th line, I have no idea at all. I checked the call stack provided along with it, and I don't notice any "jump or move".
For the 5th line, it says the uninitialised value is in the last line of this code snippet, but I don't see any uninitialised value here:
char *data = new char[somebigSize];
memset(data, '\0', somebigSize);
int sizeInt = sizeof(int);
int length = 20; //some value obtained
int position = 10;
char *newPtrVar = new char[sizeInt + 1];
memset(newPtrVar, '\0', sizeInt + 1);
memcpy(newPtrVar, &length, sizeInt);
memcpy(&data[position], newPtrVar, sizeInt);
The valgrind manual covers this in detail. It's quite complicated - see the link for full details, but in essence you can have:
"still reachable" (memory that is pointed to by a pointer).
"directly lost" (memory that is not pointed to be any live pointer)
"indirectly lost" (memory that is pointed to by a pointer that is in memory that is "directly lost)
"possibly lost" (memory that is pointed to, but the pointer does not point to the start of the memory).
The last case could be some random pointer, and it could be something like a memory manager that allocates a redzone before the memory returned to the user.
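As a hedged illustration of how these categories come about (the struct, sizes, and names below are made up for the example, not taken from the question's code):

#include <cstdlib>

struct Node { char *payload; };

char *g_interior = nullptr;   /* global, so it is still a live root at exit */

int main()
{
    /* "definitely lost" + "indirectly lost": dropping the only pointer to the
       Node makes the Node directly lost; its payload, reachable only through
       the lost Node, is counted as indirectly lost (compare the question's
       "(48 direct, 64 indirect)" line). */
    Node *n = (Node *)std::malloc(sizeof(Node));
    n->payload = (char *)std::malloc(64);
    n = nullptr;

    /* "possibly lost": the only surviving root points into the middle of the
       block instead of at its start. */
    g_interior = (char *)std::malloc(32) + 8;

    return 0;   /* build with -O0 and run: valgrind --leak-check=full ./a.out */
}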
I'm playing with some basic C++. I'm new to the language, so fair warning that my question may not be correctly formulated. I appreciate any help.
The thing is that after seeing the example at www.cplusplus.com/reference/cstdlib/malloc/ I found myself with this code:
#include <stdio.h>
#include <stdlib.h>

int main (void) {
    char *str;
    str = (char*) malloc(2);
    str[0] = '8';
    str[1] = '8';
    str[2] = '6';
    str[3] = '\0';
    printf ("%s\n", str);
}
And compiling with:
gcc -O0 -pedantic -Wall test2.cpp
(gcc version 4.7.2)
I get no errors and the output is 886. Why do I get no errors? Haven't I gone past the boundary of the allocated space?
If that code is OK, why does the reference example allocate the full size it needs?
In the other (more probable) case, what are the risks?
Thanks!
You don't get any errors because C and C++ don't do bounds checking. You overwrote sections of memory that you weren't using, but you got lucky and it wasn't anything important. Compare it to putting a row of nails into a wall where you know there's a stud. If you miss the stud, most of the time, you just put a hole in the plaster, but it's dangerous to keep doing it because eventually, you're going to hit one of the live wires instead.
You have gone past the boundary of the allocated memory.
However, printf does not care how much memory you allocated. All it cares about is starting at the given address and continuing until it finds a 0 byte.
What you have created is undefined behaviour. There can be other data right after your allocated region (for example another allocation), in which case that data gets corrupted. If the adjacent bytes are part of the heap but not currently in use, you might escape without any visible problem. And if the memory right after your allocation is not mapped in your process at all, you will see the nice and tidy segmentation fault. The consequences can be even worse, so better not to do this anywhere.
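One straightforward fix for the snippet, assuming you really want to store the three characters plus the terminator, is to allocate enough space for everything you write:

#include <stdio.h>
#include <stdlib.h>

int main (void) {
    char *str = (char*) malloc(4);   /* room for '8', '8', '6' and '\0' */
    if (str == NULL)
        return 1;
    str[0] = '8';
    str[1] = '8';
    str[2] = '6';
    str[3] = '\0';
    printf ("%s\n", str);
    free(str);                       /* and free what you allocate */
    return 0;
}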
The following can be found in the comments in glibc's malloc.c:
Minimum overhead per allocated chunk: 4 or 8 bytes
    Each malloced chunk has a hidden word of overhead holding size
    and status information.

Minimum allocated size: 4-byte ptrs: 16 bytes    (including 4 overhead)
                        8-byte ptrs: 24/32 bytes (including 4/8 overhead)

When a chunk is freed, 12 (for 4-byte ptrs) or 20 (for 8-byte ptrs but 4-byte size)
or 24 (for 8/8) additional bytes are needed: 4 (8) for a trailing size field and
8 (16) bytes for free list pointers. Thus, the minimum allocatable size is
16/24/32 bytes.
Since the minimum allocated size is 16/24/32 bytes, which is larger than the 4 bytes you actually wrote, your program happened to run without errors. This is one possible explanation for why your program appeared to execute correctly.
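If you want to see this in practice on a glibc system, malloc_usable_size reports how many bytes the chunk can actually hold. This is glibc-specific and only an illustration; writing past the requested size is still undefined behaviour even when the number is large:

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

int main (void) {
    char *str = (char*) malloc(2);
    /* Typically prints something well above the 2 bytes requested, because of
       the minimum chunk size described in the malloc.c comment above. */
    printf ("requested 2, usable %zu\n", malloc_usable_size(str));
    free(str);
    return 0;
}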
What is wrong if we push strings into a vector like this:
globalstructures->schema.columnnames.push_back("id");
When I run valgrind on my code it shows
27 bytes in 1 blocks are possibly lost in loss record 7 of 19
and the like in many places. Because of these "possibly lost" reports the allocations and frees do not match, which results in a strange error like
malloc.c:No such file or directory
Although I am using calloc to allocate memory everywhere in my code, I am getting warnings like
Syscall param write(buf) points to uninitialised byte(s)
The code causing that warning is:
datapage *dataPage=(datapage *)calloc(1,PAGE_SIZE);
writePage(dataPage,dataPageNumber);
int writePage(void *buffer, long pagenumber)
{
    int fd;

    fd = open(path, O_WRONLY, 0644);
    if (fd < 0)
        return -1;
    lseek(fd, pagenumber * PAGE_SIZE, SEEK_SET);
    if (write(fd, buffer, PAGE_SIZE) == -1) {
        close(fd);   /* don't leak the descriptor on the error path */
        return false;
    }
    close(fd);
    return true;
}
The exact error I get when running it under gdb is:
Breakpoint 1, getInfoFromSysColumns (tid=3, numColumns=@0x7fffffffdf24: 1, typesVector=..., constraintsVector=..., lengthsVector=...,
columnNamesVector=..., offsetsVector=...) at dbheader.cpp:1080
Program received signal SIGSEGV, Segmentation fault.
_int_malloc (av=0x7ffff78bd720, bytes=8) at malloc.c:3498
3498 malloc.c: No such file or directory.
When I run the same code under valgrind it works fine...
Well,
malloc.c:No such file or directory
can occur while you are debugging with gdb if you use the command "s" (step) instead of "n" (next) near a malloc call, which essentially means you are trying to step into malloc, whose source may not be available on your Linux machine.
That is perhaps the same reason why it appears to work fine under valgrind.
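For example (standard gdb commands):
(gdb) n        step over the call instead of stepping into it
(gdb) finish   if you have already stepped into malloc.c, run until the current frame returns
(gdb) up       or just move the focus back up to your own frame to inspect your variables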
Why the error is in malloc: the problem is that you overwrote some memory buffer and corrupted one of the structures used by the memory manager.
Try running valgrind with --track-origins=yes and see where that uninitialised access comes from. If you believe the data should be initialised and it is not, maybe it arrived through a bad pointer; valgrind will show you exactly where the values were created. It may be those uninitialised values that overwrote your buffer, including the memory manager's bookkeeping bytes.
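A typical invocation looks something like this (the program name is a placeholder):
valgrind --track-origins=yes --leak-check=full ./myprogram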
Also, review all valgrind warnings before the crash.
I'm running a 64-bit multi-threaded program on Windows Server 2003 (x64). It has run into a state where some of the threads seem to be blocked in malloc or free forever. The stack traces look like this:
ntdll.dll!NtWaitForSingleObject() + 0xa bytes
ntdll.dll!RtlpWaitOnCriticalSection() - 0x1aa bytes
ntdll.dll!RtlEnterCriticalSection() + 0xb040 bytes
ntdll.dll!RtlpDebugPageHeapAllocate() + 0x2f6 bytes
ntdll.dll!RtlDebugAllocateHeap() + 0x40 bytes
ntdll.dll!RtlAllocateHeapSlowly() + 0x5e898 bytes
ntdll.dll!RtlAllocateHeap() - 0x1711a bytes
MyProg.exe!malloc(unsigned __int64 size=0) Line 168 C
MyProg.exe!operator new(unsigned __int64 size=1) Line 59 + 0x5 bytes C++
ntdll.dll!NtWaitForSingleObject()
ntdll.dll!RtlpWaitOnCriticalSection()
ntdll.dll!RtlEnterCriticalSection()
ntdll.dll!RtlpDebugPageHeapFree()
ntdll.dll!RtlDebugFreeHeap()
ntdll.dll!RtlFreeHeapSlowly()
ntdll.dll!RtlFreeHeap()
MyProg.exe!free(void * pBlock=0x000000007e8e4fe0) C
BTW, the parameter values shown as passed to new and malloc are probably not correct here, maybe due to optimization.
Also, at the same time, I found in Process Explorer that the virtual size of the process is 10 GB, but the private bytes and working set are very small (< 2 GB). We do have some threads using VirtualAlloc, but in a way that commits the memory in the call, and those threads are not blocked:
m_pBuf = VirtualAlloc(NULL, m_size, MEM_COMMIT, PAGE_READWRITE);
......
VirtualFree(m_pBuf, 0, MEM_RELEASE);
This looks strange to me: it seems a lot of virtual address space is reserved but not committed, and malloc/free are blocked on a lock. I suspect there may be some corruption in memory or in a heap structure, so I plan to turn on gflags with PageHeap to troubleshoot this.
Does anyone have similar experience with this? Could you share it so I can get more hints?
Thanks a lot!
Your program is already using PageHeap, which is intended for debugging only and imposes a lot of memory overhead. To see which programs have PageHeap activated, run this at a command line:
% Gflags.exe /p
To disable it for your process, type this (for MyProg.exe):
% Gflags.exe /p /disable MyProg.exe
Pageheap.exe detects most heap-related bugs - try Pageheap
Also, you should look into "the param values passed to the new ..." - does this corruption occur in debug mode? Make sure all optimizations are disabled.
If your system is running out of memory, the OS might be swapping. That means that for a single allocation, in the worst case, the OS could need to find the best candidate page to swap out, write it to disk, free the memory, and only then return it. Are you sure the threads are actually blocked, or might they just be running very slowly? Could another thread be swapping memory to disk while these threads wait for their malloc/free calls to complete?
My preferred approach for debugging leaks in native applications is to use UMDH to take consecutive snapshots of the user-mode heap(s) in the process and then run UMDH again to diff the snapshots. Any pattern of growth across the snapshots is likely a leak.
You get a count and size of memory blocks bucketed by their allocating call stack, so it's reasonably straightforward to see where the biggest hogs are.
The user-mode dump heap (UMDH) utility works with the operating system to analyze Windows heap allocations for a specific process.
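A hedged sketch of that workflow (the PID, image name, and file names are placeholders; check the UMDH documentation for the exact options in your SDK version):
gflags /i MyProg.exe +ust                  (enable user-mode stack trace collection, then restart the process)
umdh -p:<pid> -f:snap1.txt                 (first snapshot)
umdh -p:<pid> -f:snap2.txt                 (second snapshot, after the suspected leak has grown)
umdh snap1.txt snap2.txt > diff.txt        (diff the snapshots; growing call stacks are the likely leaks)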