Apparently this function in SDL_Mixer keeps dying, and I'm not sure why. Does anyone have any ideas? According to Visual Studio, the crash is caused by Windows triggering a breakpoint somewhere in the realloc() line.
The code in question is from the SVN version of SDL_Mixer specifically, if that makes a difference.
static void add_music_decoder(const char *decoder)
{
    void *ptr = realloc(music_decoders, num_decoders * sizeof (const char **));
    if (ptr == NULL) {
        return; /* oh well, go on without it. */
    }
    music_decoders = (const char **) ptr;
    music_decoders[num_decoders++] = decoder;
}
I'm using Visual Studio 2008. Both music_decoders and num_decoders look correct in the debugger (music_decoders contains one pointer, to the string "WAVE"). ptr is 0x00000000, and as best I can tell, the crash happens inside the realloc() call itself. Does anyone have any idea how I could handle this crash? I don't mind doing a bit of refactoring to make this work, if it comes down to that.
For one thing, it's not valid to allocate an array of num_decoders pointers, and then write to index num_decoders in that array. Presumably the first time this function was called, it allocated 0 bytes and wrote a pointer to the result. This could have corrupted the memory allocator's structures, resulting in a crash/breakpoint when realloc is called.
Btw, if you report the bug, note that add_chunk_decoder (in mixer.c) is broken in the same way.
I'd replace
void *ptr = realloc(music_decoders, num_decoders * sizeof (const char **));
with
void *ptr = realloc(music_decoders, (num_decoders + 1) * sizeof(*music_decoders));
Make sure that the SDL_Mixer.DLL file and your program build are using the same C Runtime settings. It's possible that the memory is allocated using one CRT, and realloc'ed using another CRT.
In the project settings, look for C/C++ -> Code Generation. The Runtime Library setting there should be the same for both.
music_decoders[num_decoders++] = decoder;
You are one off here. If num_decoders is the size of the array then the last index is num_decoders - 1. Therefore you should replace the line with:
music_decoders[num_decoders-1] = decoder;
And you may want to increment num_decoders at the beginning of the function, not at the end, since you want to allocate room for the new size, not the old one.
One additional thing: you want to multiply by sizeof(const char *), not sizeof(const char **); the array elements are pointers to char, not pointers to pointers (the two sizes happen to be equal, but the former is what you mean).
Ah, the joys of C programming. A crash in realloc (or malloc or free) can be triggered by writing past the bounds of a memory block -- and this can happen anywhere else in your program. The approach I've used in the past is some flavor of debugging malloc package. Before jumping in with a third party solution, check the docs to see if Visual Studio provides anything along these lines.
Crashes are not generally triggered by breakpoints. Are you crashing, breaking due to a breakpoint or crashing during the handling of the breakpoint?
The debug output window should have some information as to why a CRT breakpoint is being hit. For example, it might notice during the memory operations that guard bytes around the original block have been modified (due to a buffer overrun that occurred before add_music_decoder was even invoked). The CRT will check these guard pages when memory is freed and possibly when realloced too.
This is my first project that I've ever managed to complete, so I'm a bit unsure how to refer to an executable versus a project being worked on and debugged in "debug mode", or whether there are multiple ways to do so, etc.
To be more specific, however, I encountered a heap corruption issue that only occurred when Visual Studio 2019 had been set to Release Mode, spit out the "exe" version of my program, and then went through its first debugging session in that form. It turns out (I'm probably wrong, but this is the last thing I changed before the issue completely disappeared) that the following code:
std::unique_ptr<std::vector<Stat>> getSelStudStats(HWND listboxcharnames) {
    std::unique_ptr<std::vector<Stat>> selStats = std::make_unique<std::vector<Stat>>();
    int pos = ListBox_GetCurSel(listboxcharnames);
    int len = ListBox_GetTextLen(listboxcharnames, pos);
    const wchar_t* buffer = new const wchar_t[++len];
    ListBox_GetText(listboxcharnames, pos, buffer);
    for (int i = 0; i < getSize(); i++) {
        Character character = getCharacterPtr(i);
        std::wstring name = character.getName();
        if (name.compare(buffer) == 0) {
            *selStats = character.getAllStats();
            return selStats;
        }
    }
    return selStats;
    delete[] buffer;
}
was not assigning the correct size to the buffer variable through len. By adding the prefix increment operator to len, the null terminator character that comes along with the list box's text was now being accounted for; consequently, the heap corruption error stopped occurring.
While I'm glad to have figured out the issue, I don't know why VS2019 didn't bring this issue up in Debug Mode. In attempting to debug the issue, I've learned that optimizations in Release Mode can change the structure and order of code execution.
Is there something in this block of code that would create the error I had, but only in Release Mode/executable form?
EDITED: I removed the asterisks that originally surrounded ++len in my attempt to highlight the change I made. Apologies for the confusion it understandably caused.
Docs explain the behavior:
When you request a memory block, the debug heap manager allocates from the base heap a slightly larger block of memory than requested and returns a pointer to your portion of that block. For example, suppose your application contains the call: malloc( 10 ). In a Release build, malloc would call the base heap allocation routine requesting an allocation of 10 bytes. In a Debug build, however, malloc would call _malloc_dbg, which would then call the base heap allocation routine requesting an allocation of 10 bytes plus approximately 36 bytes of additional memory.
So in a debug build you don't overrun your buffer. It may still cause other bugs later (though that is unlikely for a one-byte overrun).
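The same off-by-one shows up with any API that reports a text length excluding the terminator, which is exactly what ListBox_GetTextLen does. A portable sketch of the corrected pattern (using strlen in place of the Windows-only list-box call):

```cpp
#include <cstring>

// Copy a C string into a freshly allocated buffer. strlen() does not
// count the terminating '\0', so we must allocate one extra element.
char *copy_text(const char *src)
{
    std::size_t len = std::strlen(src);   // length, excluding the NUL
    char *buffer = new char[len + 1];     // +1 for the terminator
    std::memcpy(buffer, src, len + 1);    // copies the text and the NUL
    return buffer;                        // caller must delete[] it
}
```

Without the `+ 1`, writing the terminator lands one byte past the end of the allocation, which is precisely the kind of overrun the debug heap's guard bytes exist to catch.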
Recently I ran into a memory-release problem. First, below is the C code:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int *p = (int *) malloc(5 * sizeof(int));
    int i;
    for (i = 0; i < 5; i++)
        p[i] = i;
    p[i] = i;   /* i is now 5: this writes one element past the end */
    for (i = 0; i < 6; i++)
        printf("[%p]:%d\n", p + i, p[i]);
    free(p);
    printf("The memory has been released.\n");
}
Apparently, there is a memory out-of-range problem here. When I use the VS2008 compiler, it gives the following output and some errors about the memory release:
[00453E80]:0
[00453E84]:1
[00453E88]:2
[00453E8C]:3
[00453E90]:4
[00453E94]:5
However, when I use gcc 4.7.3 under Cygwin, I get the following output:
[0x80028258]:0
[0x8002825c]:1
[0x80028260]:2
[0x80028264]:3
[0x80028268]:4
[0x8002826c]:51
The memory has been released.
Apparently, the code runs normally, but 5 is not written to that memory.
So there may be some differences between VS2008 and gcc in how they handle these problems.
Could you give me a professional explanation of this? Thanks in advance.
This is normal, as you never allocated any data into the memory space of p[5]. The program will just print whatever data happens to be stored in that space.
There's no deterministic "explanation on this". Writing data into the uncharted territory past the allocated memory limit causes undefined behavior. The behavior is unpredictable. That's all there is to it.
It is still strange, though, to see 51 printed there. Typically GCC will also print 5 but then fail with a memory corruption message at free. How you managed to make this code print 51 is not exactly clear. I strongly suspect that the code you posted is not the code you ran.
It seems that you have multiple questions, so, let me try to answer them separately:
As pointed out by others above, you write past the end of the array, so once you have done that you are in "undefined behavior" territory, and this means that anything could happen, including printing 5, 6 or 0xdeadbeef, or blowing up your PC.
In the first case (VS2008), free appears to report an error message on standard output. It is not obvious to me what this error message is, so it is hard to explain what is going on, but you ask later in a comment how VS2008 could know the size of the memory you release. Typically, if you allocate memory and store it in pointer p, many memory allocators (the malloc/free implementation) store the size of the allocation at p[-1]. In practice, it is also common to store a special value (say, 0xdeadbeef) at address p[p[-1]]. This "canary" is checked upon free to see if you have written past the end of the array. To summarize, your 5*sizeof(int) array is probably at least 5*sizeof(int) + 2*sizeof(char*) bytes long, and the memory allocator used by code compiled with VS2008 has quite a few built-in checks.
In the case of gcc, I find it surprising that you get 51 printed. If you wanted to investigate why that is, I would recommend getting an asm dump of the generated code as well as running this under a debugger to check whether 5 is actually written past the end of the array (gcc could well have decided not to generate that code because it is "undefined") and, if it is, putting a watchpoint on that memory location to see what overwrites it, when, and why.
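To make the canary scheme concrete, here is a minimal hand-rolled sketch of the idea (this is not VS2008's actual debug-heap layout, which is an implementation detail): a size header in front of the user block, a known value behind it, and a check that fails once the trailing bytes are overwritten.

```cpp
#include <cstdlib>
#include <cstring>

static const unsigned int kCanary = 0xDEADBEEF;

// Allocate `size` usable bytes, with a size header before the user
// block and a trailing canary word after it.
void *debug_malloc(std::size_t size)
{
    char *raw = (char *) std::malloc(sizeof(std::size_t) + size + sizeof(kCanary));
    if (!raw) return NULL;
    std::memcpy(raw, &size, sizeof(std::size_t));                          // header: size
    std::memcpy(raw + sizeof(std::size_t) + size, &kCanary, sizeof(kCanary)); // footer: canary
    return raw + sizeof(std::size_t);
}

// Returns true if the trailing canary is intact (no overrun detected).
bool debug_check(void *p)
{
    char *raw = (char *) p - sizeof(std::size_t);
    std::size_t size;
    std::memcpy(&size, raw, sizeof(std::size_t));
    unsigned int canary;
    std::memcpy(&canary, raw + sizeof(std::size_t) + size, sizeof(canary));
    return canary == kCanary;
}

void debug_free(void *p)
{
    std::free((char *) p - sizeof(std::size_t));
}
```

A real debug heap runs the equivalent of debug_check inside free (and sometimes realloc), which is why the error only surfaces at release time rather than at the moment of the bad write.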
I have a situation where part of my code has been found to be passed uninitialized memory at times. I am looking for a way to assert when this happens while running with the debug heap. This is a function that could be dropped in various places for extra help in tracking down such bugs:
void foo( char* data, int dataBytes )
{
    assert( !hasUninitialisedData(data, dataBytes) ); // This is what we would like
    ...
}
I have seen that there are tools like Valgrind, and as I run on Windows there is Dr. Memory. These, however, run externally to the application, so they don't catch the issue at the moment it occurs for the developer. More importantly, they throw up thousands of reports for Qt and other irrelevant functions, making things impossible.
I think the idea is to have a function that searches for 0xBAADF00D within the array, but there is a whole series of potential hex values, and these change per platform. These hex values may also sometimes be valid integers stored in the buffer, so I'm not sure if there is more information that can be obtained from the debug heap.
I am primarily interested in whether there is a CRT function, library, Visual Studio breakpoint, or other helper function for doing this sort of check. It 'feels' like there should be one somewhere already; I couldn't find it yet, so if anybody has some nice solutions for this sort of situation it would be appreciated.
EDIT: I should explain better: I know the debug heap initializes all allocations with a fill value to help detect uninitialised data. As mentioned, the data being received contains some 0xBAADF00D values; normally memory is initialized with 0xCDCDCDCD, but a third-party library is allocating this data and apparently there are multiple magic numbers in play, hence I am interested in whether there is a generalized check hidden somewhere.
The VC++ runtime, at least in debug builds, initializes all heap allocations with a certain value. It has been the same value for as long as I can remember; I can't, however, recall the actual value. You could do a quick allocation test and check.
Debug builds of VC++ programs often set uninitialized memory to 0xCD at startup. That's not dependable over the life of the session (once the memory's been allocated/used/deallocated the value will change), but it's a place to start.
I have now implemented a function that basically does what was intended, after finding a list of magic numbers on Wikipedia (Magic number):
#include <cassert>
#include <cstring>

/** Performs a check for potentially uninitialised data
\remarks May incorrectly report uninitialised data, since it is always possible that the contained data matches the magic numbers by coincidence, so this function should be used for initial identification of uninitialised data only
*/
bool hasUninitialisedData( const char* data, size_t lenData )
{
    const unsigned int kUninitialisedMagic[] =
    {
        0xABABABAB, // Used by Microsoft's HeapAlloc() to mark "no man's land" guard bytes after allocated heap memory
        0xABADCAFE, // A startup to this value to initialize all free memory to catch errant pointers
        0xBAADF00D, // Used by Microsoft's LocalAlloc(LMEM_FIXED) to mark uninitialised allocated heap memory
        0xBADCAB1E, // Error code returned to the Microsoft eVC debugger when connection is severed to the debugger
        0xBEEFCACE, // Used by Microsoft .NET as a magic number in resource files
        0xCCCCCCCC, // Used by Microsoft's C++ debugging runtime library to mark uninitialised stack memory
        0xCDCDCDCD, // Used by Microsoft's C++ debugging runtime library to mark uninitialised heap memory
        0xDEADDEAD, // A Microsoft Windows STOP error code used when the user manually initiates the crash
        0xFDFDFDFD, // Used by Microsoft's C++ debugging heap to mark "no man's land" guard bytes before and after allocated heap memory
        0xFEEEFEEE, // Used by Microsoft's HeapFree() to mark freed heap memory
    };
    const unsigned int kUninitialisedMagicCount = sizeof(kUninitialisedMagic)/sizeof(kUninitialisedMagic[0]);

    if ( lenData < sizeof(unsigned int) )
        return assert(!"not enough data for checks!"), false;

    for ( size_t i = 0; i + sizeof(unsigned int) <= lenData; ++i ) //< check every 4-byte window; trailing bytes that don't fill a whole window are skipped
    {
        unsigned int ival;
        memcpy( &ival, data + i, sizeof(ival) ); //< memcpy rather than reinterpret_cast avoids unaligned reads
        for ( unsigned int iMagic = 0; iMagic < kUninitialisedMagicCount; ++iMagic )
        {
            if ( ival == kUninitialisedMagic[iMagic] )
                return true;
        }
    }
    return false;
}
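As a sanity check, a buffer fresh from the MSVC debug heap (filled with 0xCD bytes) should trip such a scan, while ordinary text should not. Here is a minimal, self-contained sketch of the same windowed scan for just the 0xCDCDCDCD pattern:

```cpp
#include <cstring>

// Scan `data` for any 4-byte window equal to 0xCDCDCDCD, the pattern
// MSVC's debug heap uses to fill uninitialised heap memory.
bool looksUninitialised(const char *data, std::size_t lenData)
{
    const unsigned int kMagic = 0xCDCDCDCD;
    if (lenData < sizeof(kMagic)) return false;
    for (std::size_t i = 0; i + sizeof(kMagic) <= lenData; ++i) {
        unsigned int v;
        std::memcpy(&v, data + i, sizeof(v));  // memcpy avoids alignment traps
        if (v == kMagic) return true;
    }
    return false;
}
```

Remember this is heuristic either way: four consecutive 0xCD bytes can occur in perfectly valid data, so a positive result is a lead, not proof.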
Having read this interesting article outlining a technique for debugging heap corruption, I started wondering how I could tweak it for my own needs. The basic idea is to provide a custom malloc() that allocates whole pages of memory, then enables memory protection bits for those pages so that the program crashes when they get written to, and the offending write instruction can be caught in the act. The sample code is C under Linux (mprotect() is used to enable the protection), and I'm curious how to apply this to native C++ and Windows. VirtualAlloc() and/or VirtualProtect() look promising, but I'm not sure what a usage scenario would look like.
Fred *p = new Fred[100];
ProtectBuffer(p);
p[10] = Fred(); // like this to crash please
I am aware of the existence of specialized tools for debugging memory corruption in Windows, but I'm still curious if it would be possible to do it "manually" using this approach.
EDIT: Also, is this even a good idea under Windows, or just an entertaining intellectual exercise?
Yes, you can use VirtualAlloc and VirtualProtect to set up sections of memory that are protected from read/write operations.
You would have to re-implement operator new and operator delete (and their [] relatives), such that your memory allocations are controlled by your code.
And bear in mind that it would only be on a per-page basis, and you would be using (at least) three pages worth of virtual memory per allocation - not a huge problem on a 64-bit system, but may cause problems if you have many allocations in a 32-bit system.
Roughly what you need to do (you should actually look up the page size for your build of Windows; I'm too lazy, so I'll use 4096 and 4095 to represent pagesize and pagesize-1. You will also need to do more error checking than this code does!):
void *operator new(size_t size)
{
    // Round size up to a whole number of pages, plus 2 guard pages.
    size_t bigsize = (size + 2*4096 + 4095) & ~4095;

    // Reserve the whole range; the guard pages stay PAGE_NOACCESS.
    void *addr = VirtualAlloc(NULL, bigsize, MEM_RESERVE, PAGE_NOACCESS);

    // Skip the leading guard page and commit only the usable portion.
    addr = reinterpret_cast<void *>(reinterpret_cast<char *>(addr) + 4096);
    void *new_addr = VirtualAlloc(addr, size, MEM_COMMIT, PAGE_READWRITE);
    return new_addr;
}

void operator delete(void *ptr)
{
    char *tmp = reinterpret_cast<char *>(ptr) - 4096;
    VirtualFree(reinterpret_cast<void *>(tmp), 0, MEM_RELEASE);
}
Something along those lines, as I said - I haven't tried compiling this code, as I only have a Windows VM, and I can't be bothered to download a compiler and see if it actually compiles. [I know the principle works, as we did something similar where I worked a few years back].
This is what guard pages are for (see this MSDN tutorial): they raise a special exception when the page is accessed the first time, allowing you to do more than crash on the first invalid page access (and to catch bad reads/writes, as opposed to just NULL pointers, etc.).
I have a huge MMC snap-in written in Visual C++ 9. Every once in a while, when I hit F5 in MMC, mmc.exe crashes. If I attach a debugger to it, I see the following message:
A buffer overrun has occurred in mmc.exe which has corrupted the program's internal state. Press Break to debug the program or Continue to terminate the program.
For more details please see Help topic 'How to debug Buffer Overrun Issues'.
First of all, there's no How to debug Buffer Overrun Issues topic anywhere.
When I inspect the call stack I see that it's likely something with security cookies used to guard against stack-allocated buffer overruns:
MySnapin.dll!__crt_debugger_hook() Unknown
MySnapin.dll!__report_gsfailure() Line 315 + 0x7 bytes C
msvcr90d.dll!ValidateLocalCookies(void (unsigned int)* CookieCheckFunction=0x1014e2e3, _EH4_SCOPETABLE * ScopeTable=0x10493e48, char * FramePointer=0x0007ebf8) + 0x57 bytes C
msvcr90d.dll!_except_handler4_common(unsigned int * CookiePointer=0x104bdcc8, void (unsigned int)* CookieCheckFunction=0x1014e2e3, _EXCEPTION_RECORD * ExceptionRecord=0x0007e764, _EXCEPTION_REGISTRATION_RECORD * EstablisherFrame=0x0007ebe8, _CONTEXT * ContextRecord=0x0007e780, void * DispatcherContext=0x0007e738) + 0x44 bytes C
MySnapin.dll!_except_handler4(_EXCEPTION_RECORD * ExceptionRecord=0x0007e764, _EXCEPTION_REGISTRATION_RECORD * EstablisherFrame=0x0007ebe8, _CONTEXT * ContextRecord=0x0007e780, void * DispatcherContext=0x0007e738) + 0x24 bytes C
ntdll.dll!7c9032a8()
[Frames below may be incorrect and/or missing, no symbols loaded for ntdll.dll]
ntdll.dll!7c90327a()
ntdll.dll!7c92aa0f()
ntdll.dll!7c90e48a()
MySnapin.dll!IComponentImpl<CMySnapin>::GetDisplayInfo(_RESULTDATAITEM * pResultDataItem=0x0007edb0) Line 777 + 0x14 bytes C++
// more Win32 libraries functions follow
I have lots of code and no idea where the buffer overrun might occur or why. I found this forum discussion, and specifically the advice to replace all wcscpy-like functions with more secure versions like wcscpy_s(). I followed the advice, but that didn't get me any closer to solving the problem.
How do I debug my code and find why and where the buffer overrun occurs with Visual Studio 2008?
Add the /RTCs switch to the compiler. This enables detection of buffer overruns and underruns at runtime. When an overrun is detected, the program breaks exactly where it happened, rather than giving you a postmortem message.
If that does not help, then investigate the wcscpy_s() calls that you mentioned. Verify that the 'number of elements' argument has the correct value. I recently fixed a buffer overrun caused by incorrect usage of wcscpy_s(). Here is an example:
const int offset = 10;
wchar_t buff[MAXSIZE];
wcscpy_s(buff + offset, MAXSIZE, buff2);
Notice that buff + offset has MAXSIZE - offset elements, not MAXSIZE.
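The general rule: when the destination is `dest + offset`, the capacity argument must shrink by `offset` as well. A portable sketch of the same bounds check (a hand-rolled helper, since wcscpy_s is MSVC/Annex-K specific):

```cpp
#include <cwchar>
#include <cstddef>

// Copy src into dest, which holds destCount wide characters including
// the terminator. Returns false (copying nothing) if src does not fit.
bool bounded_wcscpy(wchar_t *dest, std::size_t destCount, const wchar_t *src)
{
    std::size_t len = std::wcslen(src);
    if (len + 1 > destCount)
        return false;                 // would overrun the buffer
    std::wmemcpy(dest, src, len + 1); // copy the text plus the terminator
    return true;
}
```

With a helper like this, the call in the example becomes `bounded_wcscpy(buff + offset, MAXSIZE - offset, buff2)`; passing MAXSIZE unchanged is exactly what lets the copy run past the end of buff.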
I just had this problem a minute ago, and I was able to solve it. I searched the net first to no avail, but then I got to this thread.
Anyways, I am running VS2005 and I have a multi-threaded program. I had to 'guess' which thread caused the problem, but luckily I only have a few.
So, what I did was in that thread I ran through the debugger, stepping through the code at a high level function. I noticed that it always occurred at the same place in the function, so now it was a matter of drilling down.
The other thing I would do is step through with the callstack window open making sure that the stack looked okay and just noting when the stack goes haywire.
I finally narrowed down to the line that caused the bug, but it wasn't actually that line. It was the line before it.
So what was the cause for me? Well, in short-speak I tried to memcpy a NULL pointer into a valid area of memory.
I'm surprised that VS2005 can't handle this.
Anyways, hope that helps. Good luck.
I assume you aren't able to reproduce this reliably.
I've successfully used Rational Purify to hunt down a variety of memory problems in the past, but it costs $ and I'm not sure how it would interact with MMC.
Unless there's some sort of built-in memory debugger you may have to try solving this programmatically. Are you able to remove/disable chunks of functionality to see if the problem manifests itself?
If you have "guesses" about where the problem occurs you can try disabling/changing that code as well. Even if you changed the copy functions to _s versions, you still need to be able to reliably handle truncated data.
I got this overrun when I wanted to increment the value a pointer variable points to, like this:
*out_BMask++;
instead of
(*out_BMask)++;
where out_BMask was declared as int *out_BMask
If you did something like me then I hope this will help you ;)
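The root cause is operator precedence: postfix ++ binds more tightly than unary *, so `*out_BMask++` advances the pointer and merely reads (then discards) the old element. A small sketch of the difference:

```cpp
// Shows why *p++ and (*p)++ behave differently.
// After the call, values[0] is untouched and values[1] is incremented.
int *precedence_demo(int values[2])
{
    int *p = values;
    (void)*p++; // ++ applies to p: the pointer advances; values[0] is only read
                // (the (void) cast just silences the unused-result warning)
    (*p)++;     // parentheses force the dereference first: values[1] += 1
    return p;   // p now points at values[1]
}
```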