While researching this issue, I found multiple mentions of the following scenario online, invariably as unanswered questions on programming forums. I hope that posting this here will at least serve to document my findings.
First, the symptom: While running pretty standard code that uses waveOutWrite() to output PCM audio, I sometimes get this when running under the debugger:
ntdll.dll!_DbgBreakPoint@0()
ntdll.dll!_RtlpBreakPointHeap@4() + 0x28 bytes
ntdll.dll!_RtlpValidateHeapEntry@12() + 0x113 bytes
ntdll.dll!_RtlDebugGetUserInfoHeap@20() + 0x96 bytes
ntdll.dll!_RtlGetUserInfoHeap@20() + 0x32743 bytes
kernel32.dll!_GlobalHandle@4() + 0x3a bytes
wdmaud.drv!_waveCompleteHeader@4() + 0x40 bytes
wdmaud.drv!_waveThread@4() + 0x9c bytes
kernel32.dll!_BaseThreadStart@8() + 0x37 bytes
While the obvious suspect would be heap corruption somewhere else in the code, I found out that that's not the case. Furthermore, I was able to reproduce the problem using the following code (this is part of a dialog-based MFC application):
void CwaveoutDlg::OnBnClickedButton1()
{
    WAVEFORMATEX wfx;
    wfx.nSamplesPerSec = 44100; /* sample rate */
    wfx.wBitsPerSample = 16;    /* sample size */
    wfx.nChannels = 2;
    wfx.cbSize = 0;             /* size of _extra_ info */
    wfx.wFormatTag = WAVE_FORMAT_PCM;
    wfx.nBlockAlign = (wfx.wBitsPerSample >> 3) * wfx.nChannels;
    wfx.nAvgBytesPerSec = wfx.nBlockAlign * wfx.nSamplesPerSec;

    waveOutOpen(&hWaveOut,
                WAVE_MAPPER,
                &wfx,
                (DWORD_PTR)m_hWnd,
                0,
                CALLBACK_WINDOW);

    ZeroMemory(&header, sizeof(header));
    header.dwBufferLength = 4608;
    header.lpData = (LPSTR)GlobalLock(
        GlobalAlloc(GMEM_MOVEABLE | GMEM_SHARE | GMEM_ZEROINIT, 4608));
    waveOutPrepareHeader(hWaveOut, &header, sizeof(header));
    waveOutWrite(hWaveOut, &header, sizeof(header));
}
afx_msg LRESULT CwaveoutDlg::OnWOMDone(WPARAM wParam, LPARAM lParam)
{
    HWAVEOUT dev = (HWAVEOUT)wParam;
    WAVEHDR *hdr = (WAVEHDR*)lParam;
    waveOutUnprepareHeader(dev, hdr, sizeof(WAVEHDR));
    GlobalFree(GlobalHandle(hdr->lpData));
    ZeroMemory(hdr, sizeof(*hdr));
    hdr->dwBufferLength = 4608;
    hdr->lpData = (LPSTR)GlobalLock(
        GlobalAlloc(GMEM_MOVEABLE | GMEM_SHARE | GMEM_ZEROINIT, 4608));
    waveOutPrepareHeader(hWaveOut, hdr, sizeof(WAVEHDR));
    waveOutWrite(hWaveOut, hdr, sizeof(WAVEHDR));
    return 0;
}
Before anyone comments on this, yes - the sample code plays back uninitialized memory. Don't try this with your speakers turned all the way up.
Some debugging revealed the following information: waveOutPrepareHeader() populates header.reserved with a pointer to what appears to be a structure containing at least two pointers as its first two members. The first pointer is set to NULL. After calling waveOutWrite(), this pointer is set to a pointer allocated on the global heap. In pseudocode, it looks something like this:
struct Undocumented { void *p1, *p2; }; /* This might have more members */

MMRESULT waveOutPrepareHeader(handle, LPWAVEHDR hdr, ...) {
    hdr->reserved = (Undocumented*)calloc(1, sizeof(Undocumented));
    /* Do more stuff... */
}

MMRESULT waveOutWrite(handle, LPWAVEHDR hdr, ...) {
    /* The following assignment occasionally fails, causing the problem: */
    hdr->reserved->p1 = malloc( /* chunk of private data */ );
    /* Probably more code to initiate playback */
}
Normally, the header is returned to the application by waveCompleteHeader(), a function internal to wdmaud.drv. waveCompleteHeader() tries to deallocate the pointer allocated by waveOutWrite() by calling GlobalHandle()/GlobalUnlock() and friends. Sometimes, GlobalHandle() bombs, as shown above.
Now, the reason that GlobalHandle() bombs is not due to a heap corruption, as I suspected at first - it's because waveOutWrite() returned without setting the first pointer in the internal structure to a valid pointer. I suspect that it frees the memory pointed to by that pointer before returning, but I haven't disassembled it yet.
This only appears to happen when the wave playback system is low on buffers, which is why I'm using a single header to reproduce this.
At this point I have a pretty good case against this being a bug in my application - after all, my application is not even running. Has anyone seen this before?
I'm seeing this on Windows XP SP2. The audio card is from SigmaTel, and the driver version is 5.10.0.4995.
Notes:
To prevent confusion in the future: the answer suggesting that the problem lies with the use of malloc()/free() to manage the buffers being played is simply wrong. You'll note that I changed the code above to reflect that suggestion, to prevent more people from making the same mistake - it makes no difference. The buffer being freed by waveCompleteHeader() is not the one containing the PCM data; responsibility for freeing the PCM buffer lies with the application, and there's no requirement that it be allocated in any particular way.
Also, I make sure that none of the waveOut API calls I use fail.
I'm currently assuming that this is either a bug in Windows, or in the audio driver. Dissenting opinions are always welcome.
I can reproduce this with your code on my system. I see something similar to what Johannes reported. After the call to waveOutWrite(), hdr->reserved normally holds a pointer to allocated memory (which appears to contain the wave out device name in Unicode, among other things).
But occasionally, after returning from waveOutWrite(), the least significant byte of the pointer stored in hdr->reserved has been set to 0. The rest of the bytes in hdr->reserved are OK, and the block of memory that it normally points to is still allocated and uncorrupted.
It is probably being clobbered by another thread - I can catch the change with a conditional breakpoint immediately after the call to waveOutWrite(). And the system debug breakpoint occurs in another thread, not the message handler.
However, I can't make the system debug breakpoint occur if I use a callback function instead of the Windows message pump (fdwOpen = CALLBACK_FUNCTION in waveOutOpen()).
When I do it this way, my OnWOMDone handler is called by a different thread - possibly the one that's otherwise responsible for the corruption.
So I think there is a bug, either in Windows or in the driver, but I think you can work around it by handling WOM_DONE with a callback function instead of the Windows message pump.
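For what it's worth, here is a minimal sketch of that workaround. The event name and worker loop are my own illustration, not code from the question, and note that MSDN forbids calling other waveOut functions from inside the callback itself, so the callback only signals another thread:

static HANDLE g_doneEvent;  // illustrative; create with CreateEvent() at startup

static void CALLBACK WaveOutProc(HWAVEOUT hwo, UINT uMsg, DWORD_PTR dwInstance,
                                 DWORD_PTR dwParam1, DWORD_PTR dwParam2)
{
    if (uMsg == WOM_DONE)
        SetEvent(g_doneEvent);  // just signal; no waveOut* calls in here
}

// Open the device with a function callback instead of a window:
// waveOutOpen(&hWaveOut, WAVE_MAPPER, &wfx,
//             (DWORD_PTR)WaveOutProc, 0, CALLBACK_FUNCTION);
// A worker thread then waits on g_doneEvent and performs the
// unprepare/refill/waveOutWrite cycle from the question's handler.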
You're not alone with this issue:
http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=100589
I'm seeing the same problem and have done some analysis myself:
waveOutWrite() allocates (via GlobalAlloc) a heap area of 354 bytes and correctly stores a pointer to it in the data area pointed to by header.reserved.
But when this heap area is to be freed again (in waveCompleteHeader(), according to your analysis; I don't have the symbols for wdmaud.drv myself), the least significant byte of the pointer has been set to zero, invalidating the pointer (while the heap is not yet corrupted). In other words, what happens is something like:
*(BYTE *)&header.reserved = 0;
So I disagree with your statements on one point: waveOutWrite() stores a valid pointer at first; the pointer only becomes corrupted later, from another thread.
Probably it's the same thread (mxdmessage) that later tries to free this heap area, but I have not yet found the point where the zero byte is stored.
This does not happen very often, and the same heap area (same address) has been successfully allocated and deallocated before.
I'm quite convinced that this is a bug somewhere in the system code.
Not sure about this particular problem, but have you considered using a higher-level, cross-platform audio library? There are a lot of quirks with Windows audio programming, and these libraries can save you a lot of headaches.
Examples include PortAudio, RtAudio, and SDL.
The first thing that I'd do would be to check the return values from the waveOut functions. If any of them fail - which isn't unreasonable given the scenario you describe - and you carry on regardless, then it isn't surprising that things start to go wrong. My guess would be that waveOutWrite() is returning MMSYSERR_NOMEM at some point.
Use Application Verifier to figure out what's going on; if you do something suspicious, it will catch the problem much earlier.
It may be helpful to look at the source code for Wine, although it's possible that Wine has fixed whatever bug there is, and it's also possible Wine has other bugs in it. The relevant files are dlls/winmm/winmm.c, dlls/winmm/lolvldrv.c, and possibly others. Good luck!
What about the fact that you are not allowed to call winmm functions from within the callback?
MSDN does not mention such a restriction for window messages, but usage of window messages is similar to a callback function. Possibly it's implemented internally as a callback from the driver, and that callback does SendMessage.
Internally, waveout has to maintain a linked list of headers that were written using waveOutWrite(), so my guess is that
hdr->reserved = (Undocumented*)calloc(1, sizeof(Undocumented));
sets the previous/next pointers of that linked list, or something like it. If you write more buffers, check the pointers; if any of them point to one another, my guess is most likely correct.
Multiple sources on the web mention that you don't need to unprepare/prepare the same headers repeatedly. If you comment out the prepare/unprepare header calls in the original example, it appears to work fine without any problems (see the sketch below).
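A sketch of that variant, under the assumption that the header from the question is prepared once right after waveOutOpen() and its 4608-byte buffer is simply reused:

// Sketch: reuse the prepared header instead of re-preparing each time.
// Assumes `header` was prepared once after waveOutOpen() and its
// buffer stays allocated for the lifetime of playback.
afx_msg LRESULT CwaveoutDlg::OnWOMDone(WPARAM wParam, LPARAM lParam)
{
    WAVEHDR *hdr = (WAVEHDR *)lParam;
    // refill hdr->lpData with the next block of PCM data here...
    waveOutWrite(hWaveOut, hdr, sizeof(WAVEHDR));  // no unprepare/prepare
    return 0;
}
// waveOutUnprepareHeader() is then called just once, before waveOutClose().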
I solved the problem by polling playback progress with a delay loop:
WAVEHDR header = { buffer, sizeof(buffer), 0, 0, 0, 0, 0, 0 };
waveOutPrepareHeader(hWaveOut, &header, sizeof(WAVEHDR));
waveOutWrite(hWaveOut, &header, sizeof(WAVEHDR));

/*
 * Wait a while for the block to play, then start trying to
 * unprepare the header. This will fail until the block has played.
 */
while (waveOutUnprepareHeader(hWaveOut, &header, sizeof(WAVEHDR)) == WAVERR_STILLPLAYING)
    Sleep(100);

waveOutClose(hWaveOut);
See also: Playing Audio in Windows using waveOut Interface
Related
As the title suggests, I'm doing some low-level volume access (as close to C code as possible) to make a disk clone program. I want to set all the free clusters to zero so I can use simple compression and keep the image small.
I've been beating my head against a wall forever trying to figure out why I can't get the FSCTL_GET_VOLUME_BITMAP function working properly... so if possible please don't link me to any external reading, as I've probably already been there and it's either been C#, invalid links, or missing the explanation I'm looking for.
I want to understand the buffer itself more than I need the actual code.
The simplest way I can ask it: what is the proper way to read from an array declared with a length of [1] in C/C++, like the one used by VOLUME_BITMAP_BUFFER?
The only way I can even assign anything to it is by recreating the structure with my own huge buffer, and I still end up with errors, even after locking the volume in recovery mode. On a side note, I do get all the permissions needed to access the raw disk.
I know I'm likely missing some fundamental basic of C++ that would allow me to read from the memory it's stored in, but I just can't figure it out without getting errors.
In case I happen to be looping through the bytes wrong, which could be causing my error, I've added how I was doing it - although that still leaves the buffer question.
I know you can make the call multiple times, but I have to assume it's not 8 bytes at a time.
Something like this (pardon my code - I typed it on my phone, so it likely has errors). I've tried to flag any relevant cause of failure, but the buffer is the real question.
#define BYTE_MASK 0x80
#define BITS_PER_BYTE 8

void foo() {
    const int BUFFER_SIZE = 268435456;
    // NB: a buffer this size should live on the heap; 256 MB as a
    // local variable will overflow the stack.
    struct {
        LARGE_INTEGER StartingLcn;
        LARGE_INTEGER BitmapSize;
        BYTE Buffer[BUFFER_SIZE];
    } volBuff;
    // I want to use VOLUME_BITMAP_BUFFER
    BYTE Mask = 1;

    /* Part of a larger loop checking for errors and more data:
    BOOL b = DeviceIoControl(vol, FSCTL_GET_VOLUME_BITMAP,
                             &lcnStart, sizeof(STARTING_LCN_INPUT_BUFFER),
                             &volBuff, sizeof(volBuff), &dwRet, NULL);
    */
    for (x = 0; x < (bmpLen / BITS_PER_BYTE);) {
        if ((volBuff.Buffer[x] & Mask) != 0) {
            NotFree++;
        } else {
            FreeSpc++;
        }
        // I did try not dividing the size
        if (Mask == BYTE_MASK) {
            Mask = 1;
            x++;
        } else {
            Mask = (Mask << 1);
        }
    }
    return;
}
I've literally put an entire project on hold for I don't even know how long, just being stubborn at this point... and I can't find any answer that actually explains the buffer, let alone the rest of the process.
If someone wants to be more thorough I won't complain, after all my attempts, but the buffer is what's driving me crazy at this point.
Any help would be greatly appreciated.
I can safely assume that the one answer I was given:
"...array with a length of [1]..." - there is no way in Standard C++ of accessing the additional bytes. You can either: (a) pray that your compiler can do this (via an extension) or (b) write a C module (where this is well defined) that you can call from C++. - Richard Critton
was as correct an answer as I can expect after my extensive attempts to make this work any other way, especially since I was only able to make my own array work using standard C and not C++ directly.
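To make that concrete for future readers, here is a hedged sketch of the usual Win32 idiom for the Buffer[1] member: allocate one oversized raw block, cast it to VOLUME_BITMAP_BUFFER, and index past the declared bound (well defined in C, and relied on as an extension by MSVC and other Windows compilers). The handle name and block size are illustrative, not taken from the question:

#include <windows.h>
#include <winioctl.h>
#include <stdlib.h>

// Sketch only; error handling trimmed. `vol` is an open volume handle.
void CountFreeClusters(HANDLE vol)
{
    STARTING_LCN_INPUT_BUFFER in = {};           // start at LCN 0
    const size_t cb = sizeof(VOLUME_BITMAP_BUFFER) + (1 << 20);
    VOLUME_BITMAP_BUFFER *bmp = (VOLUME_BITMAP_BUFFER *)malloc(cb);
    DWORD dwRet = 0;
    // ERROR_MORE_DATA just means "call again with a later StartingLcn".
    DeviceIoControl(vol, FSCTL_GET_VOLUME_BITMAP, &in, sizeof(in),
                    bmp, (DWORD)cb, &dwRet, NULL);
    LONGLONG freeClusters = 0;
    for (LONGLONG i = 0; i < bmp->BitmapSize.QuadPart; ++i)
        if (!(bmp->Buffer[i / 8] & (1 << (i % 8))))   // clear bit = free LCN
            ++freeClusters;
    free(bmp);
}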
I wanted to put a close on this since my computer is out of use for a bit.
If the problem continues after I dig through some examples of defragmenting in C that I finally came across, I'll ask a more direct question with full code to support it.
That answer was enough to remove the wall I had hit and get me thinking again. I thank you for that.
I have a char pointer and have used malloc like this:
char *message;
message = (char *)malloc(4000 * sizeof(char));
Later I'm receiving data from a socket into message. What happens if the data exceeds 4000 bytes?
I'll assume you are asking what will happen if you do something like this:
recv(socket,message,5000,0);
and the amount of data read is greater than 4000.
This will be undefined behavior, so you need to make sure that it can't happen. Whenever you read from a socket, you should be able to specify the maximum number of characters to read.
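In other words, something like this (a sketch reusing the 4000-byte allocation from the question):

char *message = (char *)malloc(4000);
int n = recv(socket, message, 4000, 0);  /* never ask for more than fits */
if (n > 0) {
    /* only the first n bytes are valid; recv does not NUL-terminate */
}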
Your question leaves out many details about the network protocol; see the answer by @DavidSchwartz.
But focusing on the buffer in which you store it: if you try to write more than 4000 chars into the memory allocated for message, your program could crash.
If you test for the size of the message being received, you could do a realloc:
int buf_len = 4000;
char *message;
message = static_cast<char*>(malloc(buf_len));
/* read message, and after you have read 4000 chars, do: */
buf_len *= 2;
message = static_cast<char*>(realloc(message, buf_len));
/* NB: realloc returns NULL on failure, which would leak the old block
   if you assign directly like this; check the result in real code. */
/* rinse and repeat if the buffer is still too small */
free(message); // don't forget to clean up
But this is very labor-intensive. Just use a std::string:
int buf_len = 4000;
std::string message;
message.reserve(buf_len); // allocate 4K to save on repeated allocations
/* read message, std::string will automatically expand, no worries! */
// destructor will automatically clean-up!
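For illustration, a minimal sketch of such a read loop (assuming a connected TCP socket sock; the names are mine, not part of any API):

char chunk[4096];
int n;
while ((n = recv(sock, chunk, sizeof(chunk), 0)) > 0)
    message.append(chunk, n);  // copies exactly n bytes; the string grows as needed
// n == 0 means the peer closed the connection; n < 0 means an error.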
It depends on a few factors. Assuming there's no bug in your code, it will depend on the protocol you're using.
If TCP, you will never get more bytes than you asked for. You'll get more of the data the next time you call the receive function.
If UDP, you may get truncation, you may get an error (like MSG_TRUNC). This depends on the specifics of your platform and how you're invoking a receive function. I know of no platform that will save part of a datagram for your next invocation of a receive function.
Of course, if there's a bug in your code and you actually overflow the buffer, very bad things can happen. So make sure you pass only sane values to whatever receive function you're using.
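As an illustration of the UDP case on Linux (not Winsock, which instead fails with WSAEMSGSIZE), recvmsg() reports truncation via the MSG_TRUNC flag. A POSIX-specific sketch, with fd being a bound UDP socket:

#include <stdio.h>
#include <sys/socket.h>
#include <sys/uio.h>

void read_one_datagram(int fd)
{
    char buf[4000];
    struct iovec iov = { buf, sizeof(buf) };
    struct msghdr msg = { 0 };
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    ssize_t n = recvmsg(fd, &msg, 0);
    if (n >= 0 && (msg.msg_flags & MSG_TRUNC))
        fprintf(stderr, "datagram truncated; excess bytes were discarded\n");
}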
In the best case, you'll get a segmentation fault error.
See:
What is a segmentation fault?
dangers of heap overflows?
I am having some odd behavior when using VirtualAlloc. I'm in C++, Visual Studio 2010.
I have two things I want to allocate using VirtualAlloc (I have my reasons, irrelevant to the question):
1 - Space to hold a buffer of x86 assembly code
2 - Space to hold the data structure that the x86 code wants
In my code I am doing:
thread_data_t *p_data = (thread_data_t *)VirtualAlloc(
    NULL, sizeof(thread_data_t), MEM_COMMIT, PAGE_READWRITE);
// set up all the values in the structure

unsigned char *p_function = (unsigned char *)VirtualAlloc(
    NULL, sizeof(buffer), MEM_COMMIT, PAGE_EXECUTE_READWRITE);
memcpy(p_function, buffer, sizeof(buffer));

CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)p_function, p_data, 0, NULL);
In DEBUG mode: works fine.
In RELEASE mode: the spun-up thread receives a NULL as its input data. I verified through debugging that the pointer is correct when I call CreateThread.
If I switch the VirtualAllocs around, so that I allocate the function space before the data space, then both DEBUG and RELEASE mode work fine.
Any ideas why? I've verified that all my VS build settings are the same between DEBUG and RELEASE.
After copying assembly code into a memory buffer, you can't just jump straight into that buffer. You need to flush CPU caches and the like or it will not work. You can use FlushInstructionCache to do this.
https://msdn.microsoft.com/en-us/library/windows/desktop/ms679350%28v=vs.85%29.aspx
It's hard to say exactly why reordering the allocations would fix the issue, but if you copied the instructions into their buffer and then did a lot of work before jumping into the buffer, that would likely improve the odds of "getting away with it," as the CPU caches would have more of an opportunity to get flushed out by other means.
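Concretely, based on the snippet in the question, the fix would look something like this:

unsigned char *p_function = (unsigned char *)VirtualAlloc(
    NULL, sizeof(buffer), MEM_COMMIT, PAGE_EXECUTE_READWRITE);
memcpy(p_function, buffer, sizeof(buffer));
// Keep the CPU from executing stale instruction bytes:
FlushInstructionCache(GetCurrentProcess(), p_function, sizeof(buffer));
CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)p_function, p_data, 0, NULL);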
I have a huge MMC snap-in written in Visual C++ 9. Every once in a while when I hit F5 in MMC, mmc.exe crashes. If I attach a debugger to it, I see the following message:
A buffer overrun has occurred in mmc.exe which has corrupted the program's internal state. Press Break to debug the program or Continue to terminate the program.
For more details please see Help topic 'How to debug Buffer Overrun Issues'.
First of all, there's no "How to debug Buffer Overrun Issues" topic anywhere.
When I inspect the call stack I see that it's likely something with security cookies used to guard against stack-allocated buffer overruns:
MySnapin.dll!__crt_debugger_hook() Unknown
MySnapin.dll!__report_gsfailure() Line 315 + 0x7 bytes C
msvcr90d.dll!ValidateLocalCookies(void (unsigned int)* CookieCheckFunction=0x1014e2e3, _EH4_SCOPETABLE * ScopeTable=0x10493e48, char * FramePointer=0x0007ebf8) + 0x57 bytes C
msvcr90d.dll!_except_handler4_common(unsigned int * CookiePointer=0x104bdcc8, void (unsigned int)* CookieCheckFunction=0x1014e2e3, _EXCEPTION_RECORD * ExceptionRecord=0x0007e764, _EXCEPTION_REGISTRATION_RECORD * EstablisherFrame=0x0007ebe8, _CONTEXT * ContextRecord=0x0007e780, void * DispatcherContext=0x0007e738) + 0x44 bytes C
MySnapin.dll!_except_handler4(_EXCEPTION_RECORD * ExceptionRecord=0x0007e764, _EXCEPTION_REGISTRATION_RECORD * EstablisherFrame=0x0007ebe8, _CONTEXT * ContextRecord=0x0007e780, void * DispatcherContext=0x0007e738) + 0x24 bytes C
ntdll.dll!7c9032a8()
[Frames below may be incorrect and/or missing, no symbols loaded for ntdll.dll]
ntdll.dll!7c90327a()
ntdll.dll!7c92aa0f()
ntdll.dll!7c90e48a()
MySnapin.dll!IComponentImpl<CMySnapin>::GetDisplayInfo(_RESULTDATAITEM * pResultDataItem=0x0007edb0) Line 777 + 0x14 bytes C++
// more Win32 library functions follow
I have lots of code and no idea where the buffer overrun might occur or why. I found this forum discussion, and specifically the advice to replace all wcscpy-like functions with more secure versions like wcscpy_s(). I followed the advice, but that didn't get me any closer to solving the problem.
How do I debug my code and find why and where the buffer overrun occurs with Visual Studio 2008?
Add the /RTCs switch to the compiler. This enables detection of buffer overruns and underruns at runtime. When an overrun is detected, the program will break exactly where it happened rather than giving you a postmortem message.
If that does not help, then investigate the wcscpy_s() calls that you mentioned. Verify that the 'number of elements' argument has the correct value. I recently fixed a buffer overrun caused by incorrect usage of wcscpy_s(). Here is an example:
const int offset = 10;
wchar_t buff[MAXSIZE];
wcscpy_s(buff + offset, MAXSIZE, buff2);
Notice that buff + offset has MAXSIZE - offset elements, not MAXSIZE.
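The corrected call would therefore be:

wcscpy_s(buff + offset, MAXSIZE - offset, buff2);  // remaining capacity, not total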
I just had this problem a minute ago, and I was able to solve it. I searched the net first to no avail, but I got to this thread.
Anyway, I am running VS2005 and I have a multi-threaded program. I had to 'guess' which thread caused the problem, but luckily I only have a few.
So, what I did was run that thread through the debugger, stepping through the code in a high-level function. I noticed that it always occurred at the same place in the function, so now it was a matter of drilling down.
The other thing I did was step through with the call stack window open, making sure that the stack looked OK and noting when it went haywire.
I finally narrowed it down to the line that caused the bug, but it wasn't actually that line. It was the line before it.
So what was the cause for me? In short, I tried to memcpy from a NULL pointer into a valid area of memory.
I'm surprised that VS2005 can't handle this.
Anyway, hope that helps. Good luck.
I assume you aren't able to reproduce this reliably.
I've successfully used Rational Purify to hunt down a variety of memory problems in the past, but it costs $ and I'm not sure how it would interact with MMC.
Unless there's some sort of built-in memory debugger, you may have to try solving this programmatically. Are you able to remove/disable chunks of functionality to see if the problem manifests itself?
If you have "guesses" about where the problem occurs, you can try disabling/changing that code as well. Even if you changed the copy functions to the _s versions, you still need to be able to reliably handle truncated data.
I got this overrun when I wanted to increment the value pointed to by a pointer variable, like this:
*out_BMask++;
instead of:
(*out_BMask)++;
where out_BMask was declared as int *out_BMask. (The first form increments the pointer itself, since postfix ++ binds tighter than unary *, so subsequent writes through it run past the buffer.)
If you did something like me, then I hope this helps you ;)
I have this strange call stack and I am stumped as to why.
It seems to me that asio calls OpenSSL's read and then gets a negative return value (-37).
Asio then seems to try to use it inside the memcpy function.
The function that produces this call stack is used hundreds of thousands of times without this error.
It happens only rarely, about once a week.
ulRead = (boost::asio::read(spCon->socket(), boost::asio::buffer(_requestHeader, _requestHeader.size()), boost::asio::transfer_at_least(_requestHeader.size()), error_));
Note that the request header's size is always exactly 3 bytes.
Could anyone shed some light on possible reasons?
Note: I'm using boost asio 1.36
Here is the crashing call stack; the crash happens in memcpy because of the huge "count":
A quick look at evp_lib.c shows that it tries to pull a length from the cipher context, and in your case gets a Very Bad Value(tm). It then uses this value to copy a string (which is where the memcpy happens). My guess is something is trashing your cipher context, be it a thread-safety problem or reading more bytes into a buffer than allowed.
Relevant source:
int EVP_CIPHER_set_asn1_iv(EVP_CIPHER_CTX *c, ASN1_TYPE *type)
{
    int i = 0, j;

    if (type != NULL)
    {
        j = EVP_CIPHER_CTX_iv_length(c);
        OPENSSL_assert(j <= sizeof c->iv);
        i = ASN1_TYPE_set_octetstring(type, c->oiv, j);
    }
    return (i);
}
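On the thread-safety guess above: OpenSSL releases of that era (anything before 1.1.0) are only thread-safe if the application installs locking callbacks, which is easy to forget. A minimal sketch, assuming Windows critical sections (the helper names are mine, not OpenSSL's):

#include <stdlib.h>
#include <windows.h>
#include <openssl/crypto.h>

static CRITICAL_SECTION *g_locks;  /* one lock per OpenSSL lock slot */

static void locking_cb(int mode, int n, const char *file, int line)
{
    if (mode & CRYPTO_LOCK)
        EnterCriticalSection(&g_locks[n]);
    else
        LeaveCriticalSection(&g_locks[n]);
}

void init_openssl_locking(void)
{
    int i, count = CRYPTO_num_locks();
    g_locks = (CRITICAL_SECTION *)malloc(count * sizeof(*g_locks));
    for (i = 0; i < count; ++i)
        InitializeCriticalSection(&g_locks[i]);
    CRYPTO_set_locking_callback(locking_cb);
}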