D3D device invalidated or destroyed prematurely - c++

At least one user of my software has encountered a very strange crash after a Windows 10 update. This crash always happens in the same place, and it appears as if the IDirect3DDevice9 has been destroyed or invalidated in some way during a previous call.
There is nothing else in the program that would release or destroy this device prematurely, and there are no other threads that could possibly interfere. The user has said updating their video drivers did not fix the problem, and their graphics card is an Nvidia GTX 1060 6GB, so a little older but by no means a potato.
IDirect3DSurface9 *s;
HRESULT hr = m_d3dDevice->GetBackBuffer(0,0,D3DBACKBUFFER_TYPE_MONO,&s);
if(FAILED(hr)) {
...
return;
}
// crash happens here, when pushing m_d3dDevice to the stack before the call
m_d3dDevice->SetRenderTarget(0,s);
The above code crashes before calling SetRenderTarget. The m_d3dDevice value is read successfully from this, but when that pointer is dereferenced to load the vftable, the program crashes. Here's the disassembly:
mov eax, [edi+1Ch] ; read m_d3dDevice
push [ebp+var_E0] ; push s
push 0 ; push 0
mov ecx, [eax] ; load vftable; crashes here
push eax ; push m_d3dDevice (this)
call dword ptr [ecx+94h] ; call SetRenderTarget
The call to GetBackBuffer() completes successfully just before this point. Without a successful completion, it would bail out of the function. Nothing else in my code could possibly be destroying the device, or the object this code belongs to, during this time.
Also, I should mention that this code is in a final presentation routine, which is usually only called after other rendering steps have been done. (After SetRenderTarget(), a temporary surface that was used for all the drawing is rendered to the back buffer using a special shader for upscaling, before Present() is called.) Just prior to this code being called, the device has been confirmed to still be active via TestCooperativeLevel(), so this code will not be reached if the device is not ready to do any of this.
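For reference, the overall flow of that presentation routine looks roughly like this (a minimal sketch; class and member names such as Renderer, m_sceneTexture and m_upscaleShader are illustrative, not the real ones):
void Renderer::PresentFrame()
{
    // The caller has already verified the device via TestCooperativeLevel().
    IDirect3DSurface9 *backBuffer = nullptr;
    if (FAILED(m_d3dDevice->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer)))
        return;
    m_d3dDevice->SetRenderTarget(0, backBuffer);     // the call that crashes above
    m_d3dDevice->BeginScene();
    m_d3dDevice->SetPixelShader(m_upscaleShader);    // special upscaling shader
    m_d3dDevice->SetTexture(0, m_sceneTexture);      // temporary surface everything was drawn to
    // ... draw a full-screen quad here ...
    m_d3dDevice->EndScene();
    m_d3dDevice->Present(NULL, NULL, NULL, NULL);
    backBuffer->Release();
}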
As far as I know this crash does not happen to every user, only to some (one confirmed, possibly two). Is it possible, even perhaps likely, that some other program on their system is the issue? I don't know why it would appear out of the blue even if so, but I have no idea why the device is destroyed/invalid when the second call happens yet perfectly valid during the first.

Update to this issue: the user who reported it was using MSI Afterburner, which apparently was the cause. After shutting down Afterburner, the application ran correctly. I was right that an outside program was interfering, although it still isn't clear why the Windows 10 update triggered it. This suggests Afterburner installs some DirectX hooks.

Related

DirectX12 - ExecuteCommandLists and Present function

I found this in the Microsoft sample:
void D3D12HelloTriangle::OnRender()
{
// Record all the commands we need to render the scene into the command list.
PopulateCommandList();
// Execute the command list.
ID3D12CommandList* ppCommandLists[] = { m_commandList.Get() };
m_commandQueue->ExecuteCommandLists(_countof(ppCommandLists), ppCommandLists);
// Present the frame.
ThrowIfFailed(m_swapChain->Present(1, 0));
WaitForPreviousFrame();
}
How does it actually work? ExecuteCommandLists is an asynchronous call, so code execution continues and reaches the Present call.
What happens after the Present call? Say the GPU is still busy drawing when Present is called. Is Present a synchronous call? It cannot present the buffer while the GPU is still drawing to it, correct? Could someone explain what's happening here?
Present is also an asynchronous command that tells the GPU to start scanning out (displaying) from the next buffer in the swap chain. You don't have to worry about the GPU not having finished executing all previously issued work (on the graphics command queue) before the 'Flip' takes place.
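For completeness, the CPU/GPU synchronization in that sample happens in WaitForPreviousFrame, which is (roughly) a fence-based wait. Paraphrased sketch, member names as in the sample, not copied verbatim:
void D3D12HelloTriangle::WaitForPreviousFrame()
{
    // Signal the fence from the GPU queue with the current value.
    const UINT64 fence = m_fenceValue;
    ThrowIfFailed(m_commandQueue->Signal(m_fence.Get(), fence));
    m_fenceValue++;

    // Block the CPU until the GPU has reached that signal.
    if (m_fence->GetCompletedValue() < fence)
    {
        ThrowIfFailed(m_fence->SetEventOnCompletion(fence, m_fenceEvent));
        WaitForSingleObject(m_fenceEvent, INFINITE);
    }

    m_frameIndex = m_swapChain->GetCurrentBackBufferIndex();
}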

waveOutWrite buffers are never returned to application

I have a problem with Microsoft's WaveOut API:
edit1: Added Link to sample project:
edit2: removed link, it's not representative of the issue
After playing some audio, when I want to terminate a given playback stream, I call the function:
waveOutClose(hWaveOut_);
However, even after waveOutClose() is called, sometimes the library will still access memory previously passed to it by waveOutWrite(), causing an invalid memory access.
I then tried to ensure all the buffers are marked as done before freeing the buffer:
PcmPlayback::~PcmPlayback()
{
if(hWaveOut_ == nullptr)
return;
waveOutReset(hWaveOut_); // infinite-loops, never returns
for(auto it = buffers_.begin(); it != buffers_.end(); ++it)
waveOutUnprepareHeader(hWaveOut_, &it->wavehdr_, sizeof(WAVEHDR));
while( buffers_.empty() == false ) // infinite loops
removeCompletedBuffers();
waveOutClose(hWaveOut_);
//Unhandled exception at 0x75629E80 (msvcrt.dll) in app.exe:
// 0xC0000005: Access violation reading location 0xFEEEFEEE.
}
void PcmPlayback::removeCompletedBuffers()
{
for(auto it = buffers_.begin(); it != buffers_.end();)
{
if( it->wavehdr_.dwFlags & WHDR_DONE )
{
waveOutUnprepareHeader(hWaveOut_, &it->wavehdr_, sizeof(WAVEHDR));
it = buffers_.erase(it);
}
else
++it;
}
}
However, this situation never happens - the buffer list never becomes empty. There will be 4-5 blocks remaining with wavehdr_.dwFlags == 18, i.e. WHDR_PREPARED | WHDR_INQUEUE (the blocks are still marked as queued for playback).
How can I resolve this issue?
# Martin Schlott ("Can you provide the loop where you write the buffer to waveOutWrite?")
It's not quite a loop; instead I have a function that is called whenever I receive an audio packet over the network:
void PcmPlayback::addData(const std::vector<short> &rhs)
{
removeCompletedBuffers();
if(rhs.empty())
return;
// add new data
buffers_.push_back(Buffer());
Buffer & buffer = buffers_.back();
buffer.data_ = rhs;
ZeroMemory(&buffers_.back().wavehdr_, sizeof(WAVEHDR));
buffer.wavehdr_.dwBufferLength = buffer.data_.size() * sizeof(short);
buffer.wavehdr_.lpData = (char *)(buffer.data_.data());
waveOutPrepareHeader(hWaveOut_, &buffer.wavehdr_, sizeof(WAVEHDR)); // prepare block for playback
waveOutWrite(hWaveOut_, &buffer.wavehdr_, sizeof(WAVEHDR));
}
The described behavior can happen if you do not call waveOutUnprepareHeader for every buffer you used before calling waveOutClose.
The flag field dwFlags seems to indicate that the buffers are still enqueued (WHDR_INQUEUE | WHDR_PREPARED). Try calling waveOutReset before unpreparing the buffers.
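In other words, the teardown order looks like this (a sketch assuming buffers_ holds the WAVEHDRs that were passed to waveOutWrite):
waveOutReset(hWaveOut_);                      // returns all queued buffers and sets WHDR_DONE
for (auto &b : buffers_)                      // now every header can be unprepared
    waveOutUnprepareHeader(hWaveOut_, &b.wavehdr_, sizeof(WAVEHDR));
buffers_.clear();
waveOutClose(hWaveOut_);                      // close only after nothing is prepared or queued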
After analysing your code, I found two problems/bugs which are not related to waveOut (funny that you use C++11 but the oldest media interface). You use a vector as a buffer. During some call operations the vector is copied! One bug I found is:
typedef std::function<void(std::vector<short>)> CALLBACK_FN;
instead of:
typedef std::function<void(std::vector<short>&)> CALLBACK_FN;
which forces a copy of the vector.
Try to avoid using vectors if you expect to use them mostly as raw buffers. Better to use std::unique_ptr as the buffer pointer.
Your callback in the recorder is neither protected by a mutex nor does it check whether a destructor was already called. The destruction happens (mostly) during the callback, which leads to an exception.
For your test program, go back to raw pointers and static callbacks before blaming waveOut. Your code is not bad, but the first bug already shows that a small bug can lead to unpredictable errors. As you also organize your buffers in a std::array, I would search for bugs there. I guess you make an unintentional copy of your whole buffer array and unprepare the wrong buffers.
I did not have the time to dig deeper, but I guess those are the problems.
I managed to find my problem in the end; it was caused by multiple bugs and a deadlock. I will document what happened here so people can learn from it in the future.
I was clued in to what was happening when I fixed the bugs in the sample:
call waveInStop() before waveInClose() in ~Recorder()
wait for all buffers to have the WHDR_DONE flag before calling waveOutClose() in ~PcmPlayback().
After doing this, the sample worked fine and did not display the behavior of the WHDR_DONE flag never being marked.
In my main program, that behavior was caused by a deadlock that occurs in the following situation:
- I have a vector of objects representing each peer I am streaming audio with
- Each object owns a Playback class
- This vector is protected by a mutex

Recorder callback:
- mutex.lock()
- send an audio packet to each peer

Remove peer:
- mutex.lock()
- ~PcmPlayback, which waits for the WHDR_DONE flags to be marked
A deadlock occurs when I remove a peer, locking the mutex and the recorder callback tries to acquire a lock too.
Note that this will happen often because the playback buffer usually holds ~4 * 20 ms of audio while the recorder delivers a packet every 20 ms.
In ~PcmPlayback, the buffers will never be marked as WHDR_DONE and any calls to the WaveOut API will never return because the WaveOut API is waiting for the Recorder callback to complete, which is in turn waiting on mutex.lock(), causing a deadlock.
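The shape of the fix is to never call into the WaveOut API (or wait for its buffers) while holding the mutex that the recorder callback needs. A minimal sketch, where peers_ (a vector of std::unique_ptr) and mutex_ are illustrative names:
void removePeer(size_t index)
{
    std::unique_ptr<Peer> victim;
    {
        std::lock_guard<std::mutex> lock(mutex_);   // keep the critical section short
        victim = std::move(peers_[index]);
        peers_.erase(peers_.begin() + index);
    }
    // ~PcmPlayback (waveOutReset / waveOutClose and the WHDR_DONE wait)
    // now runs outside the lock, so the recorder callback can complete.
    victim.reset();
}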

What's wrong with this Windows API call WaitForSingleObject?

The process crashes intermittently on Windows 7. I used the !analyze -v command in WinDbg for exception analysis, which reported the information below. The exception is actually thrown by the WaitForSingleObject function, which is called by IrsSim!IrsNet_BlockOutput. WinDbg's exception analysis classified it as an INVALID_POINTER_READ error.
For the calling code, pChannel->hMutex is not NULL; I already dumped it and checked its value.
IRSNETRET IrsNet_BlockOutput( IRSNET *pChannel)
{
// Check channel
IRSNET_CHECK_CHANNEL(pChannel);
// Wait for synchronization mutex
switch(WaitForSingleObject(pChannel->hMutex, INFINITE))
{
...
}
<<<<<==========
FAULTING_IP:
IrsSim!Channel::SendIrsMessage+285 [s:\som5\ics\scsv\isv\test.u\irssim\irsiftransport.cpp # 539]
00520ed5 8b06 mov eax,dword ptr [esi]
EXCEPTION_RECORD: ffffffff -- (.exr 0xffffffffffffffff)
ExceptionAddress: 77db4639 (ntdll!RtlDeactivateActivationContextUnsafeFast+0x00000058)
ExceptionCode: c0150010
ExceptionFlags: 00000001
NumberParameters: 3
Parameter[0]: 00000000
Parameter[1]: 07befc58
Parameter[2]: 00000000
DEFAULT_BUCKET_ID: INVALID_POINTER_READ
PROCESS_NAME: IrsSim.exe
ERROR_CODE: (NTSTATUS) 0xc0150010 - The activation context being deactivated is not active for the current thread of execution.
EXCEPTION_CODE: (NTSTATUS) 0xc0150010 - The activation context being deactivated is not active for the current thread of execution.
EXCEPTION_PARAMETER1: 00000000
EXCEPTION_PARAMETER2: 07befc58
EXCEPTION_PARAMETER3: 00000000
STACK_TEXT:
07d2fce0 00520ed5 irssim!Channel::SendIrsMessage+0x285
07d2fd1c 00521072 irssim!CChannelArray::SendIrsMessage+0x132
07d2fd50 0052208a irssim!CNetLibInterface::SendIrsMessage+0xba
07d2fd78 005c01b6 irssim!CSendActivity::Execute+0x76
07d2fdac 005e0b3f irssim!SimulationThreadState::ExecuteOneActivity+0x11f
07d2fdf8 005cc937 irssim!CSimulationSubThreadState::ExecuteState+0x267
07d2fe8c 005ccf02 irssim!ThreadFctSubSimulation+0xf2
07d2fec4 73b1e3ee mfc90u!_AfxThreadEntry+0xf2
07d2ff4c 739f3433 msvcr90!_endthreadex+0x44
07d2ff84 739f34c7 msvcr90!_endthreadex+0xd8
07d2ff90 767d339a kernel32!BaseThreadInitThunk+0xe
07d2ff9c 77d69ed2 ntdll!__RtlUserThreadStart+0x70
07d2ffdc 77d69ea5 ntdll!_RtlUserThreadStart+0x1b
================================
After that I used the !teb command to try to get more stack information.
0:011> k L=07beec2c 100
ChildEBP RetAddr
07bef54c 76be0bdd ntdll!NtWaitForMultipleObjects+0x15
07bef5e8 767d1a2c KERNELBASE!WaitForMultipleObjectsEx+0x100
07bef630 767d4208 kernel32!WaitForMultipleObjectsExImplementation+0xe0
07bef64c 767f80a4 kernel32!WaitForMultipleObjects+0x18
07bef6b8 767f7f63 kernel32!WerpReportFaultInternal+0x186
07bef6cc 767f7858 kernel32!WerpReportFault+0x70
07bef6dc 767f77d7 kernel32!BasepReportFault+0x20
07bef768 77da21d7 kernel32!UnhandledExceptionFilter+0x1af
07bef770 77da20b4 ntdll!__RtlUserThreadStart+0x62
07bef784 77da1f59 ntdll!_EH4_CallFilterFunc+0x12
07bef7ac 77d76ab9 ntdll!_except_handler4+0x8e
07bef7d0 77d76a8b ntdll!ExecuteHandler2+0x26
07bef7f4 77d76a2d ntdll!ExecuteHandler+0x24
07bef880 77d40143 ntdll!RtlDispatchException+0x127
07bef880 77db4639 ntdll!KiUserExceptionDispatcher+0xf
07befc34 76be0ad7 ntdll!RtlDeactivateActivationContextUnsafeFast+0x58
07befc38 76be0abc KERNELBASE!WaitForSingleObjectEx+0xde
07befc98 767d1194 KERNELBASE!WaitForSingleObjectEx+0xc3
07befcb0 767d1148 kernel32!WaitForSingleObjectExImplementation+0x75
07befcc4 005e3b6e kernel32!WaitForSingleObject+0x12
07befcd4 00520d3b IrsSim!IrsNet_BlockOutput+0x1e
07befd14 00521072 IrsSim!Channel::SendIrsMessage+0xeb
07befd48 0052208a IrsSim!CChannelArray::SendIrsMessage+0x132
07befd70 005c01b6 IrsSim!CNetLibInterface::SendIrsMessage+0xba
07befda4 005e0b3f IrsSim!CSendActivity::Execute+0x76
07befdf0 005cc937 IrsSim!SimulationThreadState::ExecuteOneActivity+0x11f
07befe84 005ccf02 IrsSim!CSimulationSubThreadState::ExecuteState+0x267
07befebc 73b1e3ee IrsSim!ThreadFctSubSimulation+0xf2
07beff44 739f3433 mfc90u!_AfxThreadEntry+0xf2
07beff7c 739f34c7 msvcr90!_endthreadex+0x44
07beff88 767d339a msvcr90!_endthreadex+0xd8
07beff94 77d69ed2 kernel32!BaseThreadInitThunk+0xe
07beffd4 77d69ea5 ntdll!__RtlUserThreadStart+0x70
07beffec 00000000 ntdll!_RtlUserThreadStart+0x1b
====================================>>>>>>
This looks a lot like the 0xC015000f exception encountered in MFC applications ("The activation context being deactivated is not the most recently activated one.")
In all cases where I have encountered this exception, the exception is not the primary issue. It is a side effect of an earlier exception, usually an access violation, where the stack is not unwound properly. Somewhere a call frame that used a macro such as the AFX_MANAGE_STATE macro is missed in the exception handling. The result is that the next time the activation context is manipulated, say by another routine that results in a call to something like AFX_MAINTAIN_STATE2::~AFX_MAINTAIN_STATE2, the system detects a cookie mismatch and throws the exception.
In your case you are probably causing an exception (most likely an AV) in one piece of code, which then manifests as the context exception. To trap the root cause, run the debugger with first-chance exception handling enabled. That way the AV that is being swallowed further up the call frame, perhaps by someone using a try/catch(...), will be exposed. Since you appear to be using multiple threads, you may simply have a race condition on a memory access that causes the primary exception (if that is indeed what is happening).
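In WinDbg that amounts to enabling a break on first-chance access violations before reproducing the crash, for example (illustrative session; the dump file name is arbitrary):
sxe av
g
.dump /ma c:\dumps\first_chance.dmp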
I see in a previous post:
"In fact, this problem comes from porting the program from 64-bit Win XP to 64-bit Win7. The compiler is switched therefore from VC6 to VC9. "
This is not a bug in MFC. MFC 6 did not include the activation-context switching code (which is cookie based) that was added, I think, in Visual Studio 2005, so you would not encounter this exception there. We too thought the newer MFC had issues, but in every case we have encountered, it was our own code that caused the problem.

The original problems are masked by code flows that start with a try/catch (usually ...) and eventually call code using one of the MFC manage-state macros, which then calls more code where eventually the AV occurs. Since the catch is way up the stack, and depending on the corruption, not all frames are unwound properly, so the back side of the MFC macros is missed (some destructor fails to pop its context). To make matters worse (for debugging), the eventual context crash can occur anywhere in your code (we experienced a lot of them in CWnd's base window-message processing routine).

We eventually created another tool for a user to run that would attach itself as a debugger to our (release target) executable, trap first-chance exceptions, and create a dmp file, so we could find the initial point where the exception occurred; a dump of the context exception was almost never useful because the original source of the problem was long past execution by then.
The only way that call can fail in that manner is if
pChannel->hMutex
is invalid. Either pChannel itself is invalid, or hMutex is. Most likely the former.
You should be checking whether the handle is invalid, not simply whether it is non-NULL, like:
if (myHandle != INVALID_HANDLE_VALUE)
{
// do something
}
Handle-creation functions usually return this value if there is an error (though some, such as CreateMutex, return NULL instead, so check the documentation for the function that created the handle).
Looks like a problem in context deactivation (thoughts based on the WinDbg dump). Refer to the article at http://blogs.msdn.com/b/junfeng/archive/2006/03/19/sxs-activation-context-activate-and-deactivate.aspx.

Debugging/bypassing BSOD without source code

Hello and good day to you.
Need a bit of assistance here:
Situation:
I have an obscure DirectX 9 application (name and application details are irrelevant to the question) that causes a blue screen of death on all Nvidia cards (GeForce 8400GS and up) since a certain driver version. I believe the problem is indirectly caused by a DirectX 9 call or flag that triggers a driver bug.
Goal:
I'd like to track down offending flag/function call (for fun, this isn't my job/homework) and bypass error condition by writing proxy dll. I already have a finished proxy dll that provides wrappers for IDirect3D9, IDirect3DDevice9, IDirect3DVertexBuffer9 and IDirect3DIndexBuffer9 and provides basic logging/tracing of Direct3D calls. However, I can't pinpoint function which causes crash.
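For readers unfamiliar with the technique: each wrapper method in such a proxy just logs and forwards to the real interface, roughly like this (a sketch; the class name and the Log helper are hypothetical):
class ProxyDevice : public IDirect3DDevice9
{
    IDirect3DDevice9 *real_;   // device created by the real d3d9.dll
public:
    explicit ProxyDevice(IDirect3DDevice9 *real) : real_(real) {}
    // ... every other IDirect3DDevice9 method is forwarded the same way ...
    HRESULT STDMETHODCALLTYPE SetRenderState(D3DRENDERSTATETYPE state, DWORD value) override
    {
        Log("SetRenderState(%d, %lu)", state, value);   // hypothetical logging helper
        return real_->SetRenderState(state, value);
    }
};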
Problems:
No source code or technical support is available. There will be no assistance, and nobody else will fix the problem.
The memory dump produced by the kernel wasn't helpful - apparently an access violation happens within nv4_disp.dll, but I can't use the stack trace to get back to an IDirect3DDevice9 method call, plus there's a chance the bug happens asynchronously.
(Main problem) Because of the large number of IDirect3DDevice9 method calls, I can't reliably log them to a file or over the network:
Logging to a file causes a significant slowdown even without flushing, and because of the buffering the last contents of the log are lost when the system BSODs.
Logging over the network (using UDP and Winsock's sendto) also causes a significant slowdown and must not be done asynchronously (asynchronous packets are lost on BSOD); plus packets (the ones around the crash) are sometimes lost even when sent synchronously.
When the application is slowed down by the logging routines, the BSOD is less likely to happen, which makes tracking it down harder.
Question:
I normally don't write drivers and don't do this level of debugging, so I have the impression that I'm missing something important and that there's a more trivial way to track down the problem than writing an IDirect3DDevice9 proxy DLL with a custom logging mechanism. What is it? What is the standard way of diagnosing/handling/fixing a problem like this (no source code, a COM interface method triggers a BSOD)?
Minidump analysis(WinDBG):
Loading User Symbols
Loading unloaded module list
...........
Unable to load image nv4_disp.dll, Win32 error 0n2
*** WARNING: Unable to verify timestamp for nv4_disp.dll
*** ERROR: Module load completed but symbols could not be loaded for nv4_disp.dll
*******************************************************************************
* *
* Bugcheck Analysis *
* *
*******************************************************************************
Use !analyze -v to get detailed debugging information.
BugCheck 1000008E, {c0000005, bd0a2fd0, b0562b40, 0}
Probably caused by : nv4_disp.dll ( nv4_disp+90fd0 )
Followup: MachineOwner
---------
0: kd> !analyze -v
*******************************************************************************
* *
* Bugcheck Analysis *
* *
*******************************************************************************
KERNEL_MODE_EXCEPTION_NOT_HANDLED_M (1000008e)
This is a very common bugcheck. Usually the exception address pinpoints
the driver/function that caused the problem. Always note this address
as well as the link date of the driver/image that contains this address.
Some common problems are exception code 0x80000003. This means a hard
coded breakpoint or assertion was hit, but this system was booted
/NODEBUG. This is not supposed to happen as developers should never have
hardcoded breakpoints in retail code, but ...
If this happens, make sure a debugger gets connected, and the
system is booted /DEBUG. This will let us see why this breakpoint is
happening.
Arguments:
Arg1: c0000005, The exception code that was not handled
Arg2: bd0a2fd0, The address that the exception occurred at
Arg3: b0562b40, Trap Frame
Arg4: 00000000
Debugging Details:
------------------
EXCEPTION_CODE: (NTSTATUS) 0xc0000005 - The instruction at "0x%08lx" referenced memory at "0x%08lx". The memory could not be "%s".
FAULTING_IP:
nv4_disp+90fd0
bd0a2fd0 39b8f8000000 cmp dword ptr [eax+0F8h],edi
TRAP_FRAME: b0562b40 -- (.trap 0xffffffffb0562b40)
ErrCode = 00000000
eax=00000808 ebx=e37f8200 ecx=e4ae1c68 edx=e37f8328 esi=e37f8400 edi=00000000
eip=bd0a2fd0 esp=b0562bb4 ebp=e37e09c0 iopl=0 nv up ei pl nz na po nc
cs=0008 ss=0010 ds=0023 es=0023 fs=0030 gs=0000 efl=00010202
nv4_disp+0x90fd0:
bd0a2fd0 39b8f8000000 cmp dword ptr [eax+0F8h],edi ds:0023:00000900=????????
Resetting default scope
CUSTOMER_CRASH_COUNT: 3
DEFAULT_BUCKET_ID: DRIVER_FAULT
BUGCHECK_STR: 0x8E
LAST_CONTROL_TRANSFER: from bd0a2e33 to bd0a2fd0
STACK_TEXT:
WARNING: Stack unwind information not available. Following frames may be wrong.
b0562bc4 bd0a2e33 e37f8200 e37f8200 e4ae1c68 nv4_disp+0x90fd0
b0562c3c bf8edd6b b0562cfc e2601714 e4ae1c58 nv4_disp+0x90e33
b0562c74 bd009530 b0562cfc bf8ede06 e2601714 win32k!WatchdogDdDestroySurface+0x38
b0562d30 bd00b3a4 e2601008 e4ae1c58 b0562d50 dxg!vDdDisableSurfaceObject+0x294
b0562d54 8054161c e2601008 00000001 0012c518 dxg!DxDdDestroySurface+0x42
b0562d54 7c90e4f4 e2601008 00000001 0012c518 nt!KiFastCallEntry+0xfc
0012c518 00000000 00000000 00000000 00000000 0x7c90e4f4
STACK_COMMAND: kb
FOLLOWUP_IP:
nv4_disp+90fd0
bd0a2fd0 39b8f8000000 cmp dword ptr [eax+0F8h],edi
SYMBOL_STACK_INDEX: 0
SYMBOL_NAME: nv4_disp+90fd0
FOLLOWUP_NAME: MachineOwner
MODULE_NAME: nv4_disp
IMAGE_NAME: nv4_disp.dll
DEBUG_FLR_IMAGE_TIMESTAMP: 4e390d56
FAILURE_BUCKET_ID: 0x8E_nv4_disp+90fd0
BUCKET_ID: 0x8E_nv4_disp+90fd0
Followup: MachineOwner
nv4_disp+90fd0
bd0a2fd0 39b8f8000000 cmp dword ptr [eax+0F8h],edi
This is the important part. Looking at it, it is most probable that eax is invalid, hence the attempt to access an invalid memory address.
What you need to do is load nv4_disp.dll into IDA (the free version will do), check the image base IDA loads nv4_disp at, hit 'g' to go to an address, and add 90fd0 to the image base IDA is using; that should take you directly to the offending instruction (depending on section structure).
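To make the rebasing concrete (the image base here is made up; only the 0x90fd0 offset comes from the dump):
// If IDA loads nv4_disp.dll at image base 0x10000000, then
//   0x10000000 + 0x90fd0 = 0x10090fd0
// is the address to enter after pressing 'g'.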
From here you can analyze the control flow, and how eax is set and used. If you have a good kernel level debugger you can set a breakpoint on this address and try and get it to hit.
Analysing the function, you should try to figure out what it does, what eax is meant to be pointing to at that point, what it is actually pointing to, and why. This is the hard part, and a great part of the difficulty and skill of reverse engineering.
Found a solution.
Problem:
Logging is unreliable since messages (when dumped to a file) disappear during the BSOD, packets are sometimes lost when logging over the network, and there's a slowdown due to logging.
Solution:
Instead of logging to a file or over the network, configure the system to produce a full physical memory dump on BSOD and log all messages into a memory buffer; it'll be faster. When the system crashes, it will dump the entire memory to a file, and you'll be able either to view the contents of the log buffer using WinDbg's dt command (if you have debug symbols) or to search for and locate the log stored in memory using the "memory" view.
I used a circular buffer of std::string to store the messages and a separate array of const char* to make things easier to read in WinDbg, but you could simply create a huge array of char and store all messages in it as plain text.
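A minimal sketch of such an in-memory log (plain C++, fixed-size global buffer so it has a stable symbol and address in the dump; all names are illustrative):
#include <cstring>

static const size_t kLogSize = 1 << 20;        // 1 MB of log text
static char g_logBuffer[kLogSize];             // dt target / memory-search target
static size_t g_logPos = 0;

void LogMessage(const char *text)
{
    size_t len = std::strlen(text);
    if (g_logPos + len + 1 >= kLogSize)
        g_logPos = 0;                          // wrap around: circular buffer
    std::memcpy(g_logBuffer + g_logPos, text, len);
    g_logBuffer[g_logPos + len] = '\n';
    g_logPos += len + 1;
}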
Details:
Entire process on winxp:
Ensure that the minimum page file size is equal to or larger than the total amount of RAM + 1 megabyte. (Right-click "My Computer" -> Properties -> Advanced -> Performance -> Advanced -> Change)
Configure the system to produce a complete memory dump on BSOD (Right-click "My Computer" -> Properties -> Advanced -> Startup and Recovery -> Settings -> Write Debugging Information; select "Complete memory dump" and specify the path you want).
Ensure that the disk (where the file will be written) has the required amount of free space (the total amount of RAM on your system).
Build the app/dll (the one that does the logging) with debug symbols, and trigger the BSOD.
Wait till the memory dump is finished, then reboot. Feel free to swear at the driver developer while the system writes the memory dump and reboots.
Copy the MEMORY.DMP the system produced to a safe place, so you won't lose everything if the system crashes again.
Launch WinDbg.
Open the memory dump (File -> Open Crash Dump).
If you want to see what happened, use the !analyze -v command.
Access the memory buffer that stores the logged messages using one of these methods:
To see the contents of a global variable, use dt module!variable, where "module" is the name of your library (without .dll) and "variable" is the name of the variable. You can use wildcards, and you can use an address instead of module!variable.
To see the contents of one field of a global variable (if it is a struct), use dt module!variable field, where "field" is the member name.
To see more detail about a variable (the contents of arrays and substructures), use dt -b module!variable field or dt -b module!variable.
If you don't have symbols, you'll need to search for your "logfile" using the memory window.
At this point you'll be able to see the contents of the log that were stored in memory, plus you'll have a snapshot of the entire system at the moment it crashed.
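For example, with symbols loaded for the hypothetical logging DLL from the sketch above (here called MyLogDll), dt MyLogDll!g_logPos shows the current write position and dt -b MyLogDll!g_logBuffer dumps the raw character array:
dt MyLogDll!g_logPos
dt -b MyLogDll!g_logBuffer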
Also...
To see info about the process that crashed the system, use !process.
To see loaded modules, use lm.
For info about a thread there's !thread id, where id is the hexadecimal id you saw in the !process output.
It looks like the crash may be caused either by a bad pointer or by heap corruption. You can tell this because the crash occurs in a memory-freeing function (DxDdDestroySurface). Destroying surfaces is something that you absolutely need to do - you can't just stub it out: the surface will still get freed when the program exits, and if you disable it inside the kernel you'll run out of on-card memory very quickly and crash that way as well.
You can try to figure out what sequence of events leads up to this heap corruption, but there's no silver bullet here - as fileoffset suggested, you'll need to actually reverse engineer the driver to see why this happens (it may help to compare drivers before and after the offending driver version as well!)

GetThreadContext returns EBP = 0

I'm trying to get the value of another process's EBP register on Windows 7 64-bit.
For this I'm using GetThreadContext like this:
static CONTEXT threadContext;
memset(&threadContext, 0, sizeof(CONTEXT));
threadContext.ContextFlags = CONTEXT_FULL;
bool contextOk = GetThreadContext(threadHandle, &threadContext);
The EIP value seems ok, but EBP = 0.
I also tried Wow64GetThreadContext, but it didn't help...
GetLastError() returns 0, so it's supposed to be OK.
I do suspend this thread with SuspendThread, and it DOESN'T happen every time I sample the thread.
What could cause this?
One possible cause is that the register's value really is zero at the time you inspect it. It's a general-purpose register, so the program can set it to whatever value it wants.
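For reference, a typical sampling sequence looks like this (a minimal sketch for a 32-bit sampler inspecting a 32-bit target; error handling trimmed):
CONTEXT ctx = {};
ctx.ContextFlags = CONTEXT_FULL;               // integer + control registers

if (SuspendThread(threadHandle) != (DWORD)-1)  // the context is only stable while suspended
{
    if (GetThreadContext(threadHandle, &ctx))
    {
        // ctx.Eip / ctx.Ebp hold the sampled values. EBP really can be 0 here:
        // it is a general-purpose register, and code built with frame-pointer
        // omission reuses it freely.
    }
    ResumeThread(threadHandle);
}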