DirectX 9: Reset device after Ctrl+Alt+Del - C++

I'm using a borderless window and copied the device-reset code from a YouTube video. It worked there, but in my program I just get this message from it:
if(FAILED(hr)){
    MessageBox(0, "Failed to reset device!", 0, 0);
    return;
}
What did I do wrong? Did I forget something in InvalidateDeviceObjects()? I can post more code, but not all of it, because it's just too long.
I really need help...
Reset Device:
void Render(){
    if(HandleDeviceLost/*VK_F1*/){
        if(DeviceLost){
            Sleep(100);
            if(FAILED(hr=d3ddev->TestCooperativeLevel())){
                if(hr==D3DERR_DEVICELOST){
                    return;
                }
                if(hr==D3DERR_DEVICENOTRESET){
                    //clean
                    InvalidateDeviceObjects();
                    //reset device
                    hr=d3ddev->Reset(&d3dpp);
                    if(FAILED(hr)){
                        MessageBox(0, "Failed to reset device!", 0, 0);
                        return;
                    }
                    //restore
                    RestoreDeviceObjects();
                }
                return;
            }
        }
    }
    DeviceLost=0;
    /*
    Stuff
    */
    hr=d3ddev->Present(NULL, NULL, NULL, NULL);
    if(hr==D3DERR_DEVICELOST){
        DeviceLost=1;
    }
}
Release Objects:
void InvalidateDeviceObjects(){
    buffShipMaterial->Release();
    Wall_large->Release();
    Wall_small->Release();
    space_text->Release();
    meshWall->Release();
    menuText->Release();
    menuText2->Release();
    menuText3->Release();
    text_cpu->Release();
    text_player->Release();
    text_player2->Release();
    number_0->Release();
    number_1->Release();
    number_2->Release();
    number_3->Release();
    number_4->Release();
    number_5->Release();
    number_6->Release();
    number_7->Release();
    number_8->Release();
    number_9->Release();
    number_10->Release();
}

In the sample you linked that works, an error from Reset results in a return, and the render function simply gets called again. This is normal - there is no reason why Reset must succeed on the first call, so the usual approach is to keep retrying rather than showing an error message the way your code does.
If you're rendering in the message loop, like in that example, just do the same thing - don't stop when you get an error.
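For example, a minimal sketch of that kind of loop (a standard Win32 PeekMessage pump calling your Render(); nothing here is specific to your project):

// Render every pass through the pump, so a failed Reset simply gets
// retried on the next iteration instead of raising an error box.
MSG msg = {0};
while (msg.message != WM_QUIT)
{
    if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    else
    {
        Render();   // returns early while the device is still lost
    }
}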
If you don't render in the message loop, but use the WM_PAINT method instead, then the following is the general pattern I use - although sometimes I set a timer rather than calling InvalidateRect, it depends on the app - and it has been robust enough for many applications. You can see how the Reset gets repeated on failure rather than throwing up an error message on the first failure. It might be an idea to adopt this pattern:
void CMyClass::DrawScene()
{
    // perform all dx9 scene drawing
    HRESULT hr;

    // if device was lost, try to restore it
    if (m_bDeviceLost)
    {
        // is it ok to render again yet?
        if (FAILED(hr = m_pD3DDevice->TestCooperativeLevel()))
        {
            // the device has been lost but cannot be reset at this time
            if (hr == D3DERR_DEVICELOST)
            {
                // request repaint and exit
                InvalidateRect(NULL);
                return;
            }
            // the device has been lost and can be reset
            if (hr == D3DERR_DEVICENOTRESET)
            {
                // do lost/reset/restore cycle
                OnLostDevice();
                hr = m_pD3DDevice->Reset(&m_pD3Dpp);
                if (FAILED(hr))
                {
                    // reset failed, try again later
                    InvalidateRect(NULL);
                    return;
                }
                OnResetDevice();
            }
        }
        // flag device status ok now
        m_bDeviceLost = false;
    }

    // ... clear to background and do the drawing ...

    // display scene
    hr = m_pD3DDevice->Present(NULL, NULL, GetSafeHwnd(), NULL);
    m_bDeviceLost = (hr == D3DERR_DEVICELOST);

    // request repaint if device has been lost
    if (m_bDeviceLost)
    {
        InvalidateRect(NULL);
    }
}
Also, you must ensure that TestCooperativeLevel and Reset are called from the same thread that was used to create the device.
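One more thing worth checking if Reset keeps failing even when retried: Reset typically fails with D3DERR_INVALIDCALL while any D3DPOOL_DEFAULT resource is still alive, so InvalidateDeviceObjects() has to release every default-pool object (D3DX helpers such as ID3DXFont or ID3DXSprite have OnLostDevice()/OnResetDevice() for this instead of a full release). A sketch of the usual release helper - which of your objects actually live in the default pool only your creation code can tell, so treat the names as placeholders:

// Release and null the pointer so a later lost-device cycle doesn't
// call Release() on an interface that is already gone.
template <typename T>
void SafeRelease(T*& p)
{
    if (p) { p->Release(); p = NULL; }
}

void InvalidateDeviceObjects()
{
    SafeRelease(Wall_large);       // ...and so on for every D3DPOOL_DEFAULT resource
    SafeRelease(meshWall);
    // myFont->OnLostDevice();     // hypothetical D3DX object: OnLostDevice(), not Release()
}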

Related

COM+ app not continuing until some unknown condition

I have a COM+ application which I instantiate with
CoCreateInstance(CLSID_TheComponent, NULL, CLSCTX_ALL, IID_ITheComponent, &m_TheComponent);
This is followed by event initialization
CoCreateInstance(CLSID_TransientSubscription, NULL, CLSCTX_ALL, IID_ITransientSubscription, &Trans);
...some more code that eventually registers some CLSID_Events, IID__IEvents.
I have an MFC application with following:
OnBtn1Clicked()
{
m_TheComponent->DoSomething();
}
also in the Dialog class there is
class CMFCMyDialog : public CDialogEx, _IEvents
{
...
virtual HRESULT STDMETHODCALLTYPE OnSomething(); // abstract in _IEvents
When running, after clicking Btn1 two things happen: 1. OnSomething() is fired, and 2. the COM+ component does a bunch of other stuff it should do. So far so good.
The interesting thing is that 1 & 2 happen only after OnBtn1Clicked() has exited. Even if I put a Sleep() after DoSomething(), or if I attempt to call DoSomething() from a different thread, 1 and 2 still don't happen until OnBtn1Clicked() has returned.
From the COM component's log I see that it reaches and enters its OnSomething() call but does not exit it (and of course does not reach the client-side sink) until OnBtn1Clicked() has returned. Once that happens, the sink is reached and the COM component continues execution.
All this would not be a problem, since I can wait until after the button click, but I need to implement this in a console-application client. In the console application I was not able to make 1 and/or 2 happen at all. Only after I kill the client process (!) does 2 happen (the COM+ component continues processing), but of course the client-side OnSomething() does not, since the process is dead.
Any idea what happens when OnBtn1Clicked() returns that affects the COM+ component?
MyConsoleClass::MyConsoleClass()
{
    new thread(&MyConsoleClass::Run, this);
}

void MyConsoleClass::Run()
{
    m_ThreadId = GetCurrentThreadId();
    m_IsActive = true;
    MSG msg;
    BOOL bRet;
    while (m_IsActive)
    {
        if ((bRet = GetMessage(&msg, NULL, 0, 0)) != 0)   // note the parentheses around the assignment
        {
            if (bRet == -1)
            {
                // error
                break;
            }
            else if (msg.message == WM_QUIT)
                m_IsActive = false;
            else if (msg.message == DO_SOMETHING)
                DoSomething(msg.wParam);
            else
            {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
        }
    }
}

void MyConsoleClass::Invoke(const actionEnum action, const void *params)
{
    PostThreadMessage(m_ThreadId, action, (WPARAM)params, NULL);
}
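As background for the threading here: if the sink object lives in a single-threaded apartment, incoming COM calls (including event callbacks) are delivered through that thread's message queue, so the thread has to keep pumping messages and must not be blocked inside an outgoing call. A minimal sketch of such a pumping client thread - the CoInitializeEx call and the loop are standard Win32/COM, while the comment about the sink is an assumption about this setup rather than something the question states:

#include <windows.h>
#include <objbase.h>

int main()
{
    // An STA thread receives incoming COM calls through its message queue,
    // so it must pump messages for the _IEvents sink to be reached.
    CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);

    // ... create the component and register the _IEvents sink here ...

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);   // lets OnSomething() be dispatched to the sink
    }

    CoUninitialize();
    return 0;
}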

SDL_Delay() while getting events (kinda nooby)

C++, SDL 2
I'm making a disco simulator that just loops through pictures and plays music.
(Epilepsy warning) http://imgur.com/9ePOIAw
Basically I want to run this code
while (isPartying){
    currentImage = image1;
    SDL_BlitSurface(currentImage, NULL, windowSurface, NULL);
    SDL_Delay(25);
    SDL_UpdateWindowSurface(window);
    currentImage = image2;
    SDL_BlitSurface(currentImage, NULL, windowSurface, NULL);
    SDL_UpdateWindowSurface(window);
    SDL_Delay(25);
    currentImage = image3;
    SDL_BlitSurface(currentImage, NULL, windowSurface, NULL);
    SDL_UpdateWindowSurface(window);
    SDL_Delay(25);
    //image 3, 4, 5 and so on
}
while getting events all the time.
while (SDL_PollEvent(&ev) != 0){
    if (ev.type == SDL_QUIT){
        isPartying = false;
    }
}
I want it to keep getting events while I'm in the middle of the isPartying loop. Right now it only checks for events at the beginning (or the end, depending on where I put the event loop, of course). Does anyone know a better way to wait for the next picture than SDL_Delay()? Or maybe another solution altogether?
Basically, what you want is to do two things at a time.
You have two options:
Using SDL_Thread, but I prefer not to use threads when another solution is possible (because you may have to deal with semaphores or mutexes).
Using an SDL_TimerCallback (some would say that too many timers kill the flow of the code, but it's what we're going to use).
Here is a code sample (NB: you must pass SDL_INIT_TIMER to SDL_Init()).
bool IsPartying() {
    SDL_Event ev;
    while (SDL_PollEvent(&ev)){
        if (ev.type == SDL_QUIT){
            return false;
        }
    }
    return true;
}

Uint32 ChangeImage(Uint32 interval, void* param) {
    int *imageNb = (int*)param;   // the callback runs on SDL's timer thread
    (*imageNb)++;
    return interval;              // keep the timer running with the same interval
}

void Processing() {
    // Store your images in SDL_Surface **imageArray
    int imageNb = 0;
    SDL_TimerID t = SDL_AddTimer(25, ChangeImage, &imageNb);
    while (IsPartying()) {
        SDL_BlitSurface(imageArray[imageNb], NULL, windowSurface, NULL);
        SDL_UpdateWindowSurface(window);
        SDL_Delay(1); // release some CPU
    }
    SDL_RemoveTimer(t);
}
Of course you still need to check whether you have iterated over all the images, otherwise you'll get a nice segfault when trying to access an unallocated cell of the array.
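If you would rather avoid the timer callback entirely (it runs on a separate SDL timer thread), you can get the same effect single-threaded by polling events every frame and switching images once enough time has passed. A sketch, assuming window, windowSurface and imageArray exist as above, and that a hypothetical imageCount holds the number of images:

// Poll events every iteration and advance the image whenever 25 ms have
// elapsed, instead of sleeping between frames.
int imageNb = 0;
Uint32 lastSwitch = SDL_GetTicks();
bool isPartying = true;

while (isPartying)
{
    SDL_Event ev;
    while (SDL_PollEvent(&ev))                  // events are handled every frame
    {
        if (ev.type == SDL_QUIT)
            isPartying = false;
    }

    if (SDL_GetTicks() - lastSwitch >= 25)
    {
        imageNb = (imageNb + 1) % imageCount;   // wrap around instead of overrunning
        lastSwitch = SDL_GetTicks();
    }

    SDL_BlitSurface(imageArray[imageNb], NULL, windowSurface, NULL);
    SDL_UpdateWindowSurface(window);
    SDL_Delay(1);                               // yield a little CPU
}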

Creating parallel offscreen OpenGL contexts on Windows

I am trying to set up parallel multi-GPU offscreen rendering contexts. I am using the "OpenGL Insights" book, chapter 27, "Multi-GPU Rendering on NVIDIA Quadro". I also looked into the wglCreateAffinityDCNV docs but still can't pin it down.
My machine has 2 NVIDIA Quadro 4000 cards (no SLI). Running on Windows 7 64-bit.
My workflow goes like this:
Create default window context using GLFW.
Map the GPU devices.
Destroy the default GLFW context.
Create new GL context for each one of the devices (currently trying only one)
Setup boost thread for each context and make it current in that thread.
Run rendering procedures on each thread separately.(No resources share)
Everything is created without errors and runs, but once I try to read pixels from an offscreen FBO I get a null pointer here:
GLubyte* ptr = (GLubyte*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
Also, glGetError() returns "UNKNOWN ERROR".
I thought the multi-threading might be the problem, but the same setup gives an identical result when running on a single thread.
So I believe it is related to the context creation.
Here is how I do it:
////Creating default window with GLFW here .
.....
.....
Creating offscreen contexts:
PIXELFORMATDESCRIPTOR pfd =
{
    sizeof(PIXELFORMATDESCRIPTOR),
    1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER, //Flags
    PFD_TYPE_RGBA,     //The kind of framebuffer. RGBA or palette.
    24,                //Colordepth of the framebuffer.
    0, 0, 0, 0, 0, 0,
    0,
    0,
    0,
    0, 0, 0, 0,
    24,                //Number of bits for the depthbuffer
    8,                 //Number of bits for the stencilbuffer
    0,                 //Number of Aux buffers in the framebuffer.
    PFD_MAIN_PLANE,
    0,
    0, 0, 0
};
void glMultiContext::renderingContext::createGPUContext(GPUEnum gpuIndex){
    int pf;
    HGPUNV hGPU[MAX_GPU];
    HGPUNV GpuMask[MAX_GPU];
    UINT displayDeviceIdx;
    GPU_DEVICE gpuDevice;
    bool bDisplay, bPrimary;

    // Get a list of the first MAX_GPU GPUs in the system
    if ((gpuIndex < MAX_GPU) && wglEnumGpusNV(gpuIndex, &hGPU[gpuIndex])) {
        printf("Device# %d:\n", gpuIndex);

        // Now get the detailed information about this device:
        // how many displays it's attached to
        displayDeviceIdx = 0;
        if(wglEnumGpuDevicesNV(hGPU[gpuIndex], displayDeviceIdx, &gpuDevice))
        {
            bPrimary |= (gpuDevice.Flags & DISPLAY_DEVICE_PRIMARY_DEVICE) != 0;
            printf(" Display# %d:\n", displayDeviceIdx);
            printf(" Name: %s\n", gpuDevice.DeviceName);
            printf(" String: %s\n", gpuDevice.DeviceString);
            if(gpuDevice.Flags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP)
            {
                printf(" Attached to the desktop: LEFT=%d, RIGHT=%d, TOP=%d, BOTTOM=%d\n",
                    gpuDevice.rcVirtualScreen.left, gpuDevice.rcVirtualScreen.right, gpuDevice.rcVirtualScreen.top, gpuDevice.rcVirtualScreen.bottom);
            }
            else
            {
                printf(" Not attached to the desktop\n");
            }

            // See if it's the primary GPU
            if(gpuDevice.Flags & DISPLAY_DEVICE_PRIMARY_DEVICE)
            {
                printf(" This is the PRIMARY Display Device\n");
            }
        }

        ///======================= CREATE a CONTEXT HERE
        GpuMask[0] = hGPU[gpuIndex];
        GpuMask[1] = NULL;
        _affDC = wglCreateAffinityDCNV(GpuMask);

        if(!_affDC)
        {
            printf( "wglCreateAffinityDCNV failed");
        }
    }
    printf("GPU context created");
}
glMultiContext::renderingContext *
glMultiContext::createRenderingContext(GPUEnum gpuIndex)
{
    glMultiContext::renderingContext *rc;
    rc = new renderingContext(gpuIndex);

    _pixelFormat = ChoosePixelFormat(rc->_affDC, &pfd);
    if(_pixelFormat == 0)
    {
        printf("failed to choose pixel format");
        return false;
    }

    DescribePixelFormat(rc->_affDC, _pixelFormat, sizeof(pfd), &pfd);
    if(SetPixelFormat(rc->_affDC, _pixelFormat, &pfd) == FALSE)
    {
        printf("failed to set pixel format");
        return false;
    }

    rc->_affRC = wglCreateContext(rc->_affDC);
    if(rc->_affRC == 0)
    {
        printf("failed to create gl render context");
        return false;
    }
    return rc;
}
//Call at the end to make it current :
bool glMultiContext::makeCurrent(renderingContext *rc)
{
    if(!wglMakeCurrent(rc->_affDC, rc->_affRC))
    {
        printf("failed to make context current");
        return false;
    }
    return true;
}
//// init OpenGL objects and rendering here :
..........
............
As I said, I am getting no errors at any stage of device and context creation.
What am I doing wrong ?
UPDATE:
Well, it seems I have figured out the bug. I call glfwTerminate() after calling wglMakeCurrent(), so it seems that glfwTerminate() also makes the new context "un-current". Though it is weird, as OpenGL commands keep getting executed. So it works in a single thread.
But now, if I spawn another thread using boost threads, I get the initial error. Here is my thread class:
GPUThread::GPUThread(void)
{
    _thread = NULL;
    _mustStop = false;
    _frame = 0;

    _rc = glMultiContext::getInstance().createRenderingContext(GPU1);
    assert(_rc);

    glfwTerminate(); //terminate the initial window and context
    if(!glMultiContext::getInstance().makeCurrent(_rc)){
        printf("failed to make current!!!");
    }
    // init engine here (GLEW was already initiated)
    engine = new Engine(800, 600, 1);
}

void GPUThread::Start(){
    printf("threaded view setup ok");
    ///init thread here :
    _thread = new boost::thread(boost::ref(*this));
    _thread->join();
}

void GPUThread::Stop(){
    // Signal the thread to stop (thread-safe)
    _mustStopMutex.lock();
    _mustStop = true;
    _mustStopMutex.unlock();
    // Wait for the thread to finish.
    if (_thread != NULL) _thread->join();
}

// Thread function
void GPUThread::operator () ()
{
    bool mustStop;
    do
    {
        // Display the next animation frame
        DisplayNextFrame();

        _mustStopMutex.lock();
        mustStop = _mustStop;
        _mustStopMutex.unlock();
    } while (mustStop == false);
}

void GPUThread::DisplayNextFrame()
{
    engine->Render(); //renders frame
    if(_frame == 101){
        _mustStop = true;
    }
}

GPUThread::~GPUThread(void)
{
    delete _view;
    if(_rc != 0)
    {
        glMultiContext::getInstance().deleteRenderingContext(_rc);
        _rc = 0;
    }
    if(_thread != NULL) delete _thread;
}
Finally I solved the issues by myself. The first problem was that I called glfwTerminate() after I had made another device context current. That probably made the new context non-current too.
The second problem was my "noobiness" with boost threads. I failed to init all the rendering-related objects in the custom thread, because I called the rendering-context init procedures before the thread was started, as can be seen in the example above.
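Roughly, the order that works is: create the affinity DC/RC up front, but make it current and create every GL-dependent object from inside the thread that will render. A sketch using the classes above (Engine, DisplayNextFrame and the mutex are the same as in the code above; this is just the corrected order, not a complete listing):

// The worker thread owns the context for its whole lifetime.
void GPUThread::operator () ()
{
    // 1. bind the affinity context to THIS thread
    if (!glMultiContext::getInstance().makeCurrent(_rc))
    {
        printf("failed to make context current!");
        return;
    }

    // 2. only now create the engine and its GL objects - they belong to this context
    engine = new Engine(800, 600, 1);

    bool mustStop = false;
    while (!mustStop)
    {
        DisplayNextFrame();

        _mustStopMutex.lock();
        mustStop = _mustStop;
        _mustStopMutex.unlock();
    }

    // 3. unbind before the thread exits
    wglMakeCurrent(NULL, NULL);
}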

Opengl Game Loop Multithreading

I have been messing around with OpenGL lately, and I noticed that the Windows message pump blocks whenever I attempt to resize my window; as a result, rendering is halted whenever I click on the menu bar or resize the window.
To fix this, I am looking into multithreading.
I have the following:
_beginthread(RenderEntryPoint, 0, 0);
while (!done)
{
    if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))   // only handle msg if one was actually retrieved
    {
        if (msg.message == WM_QUIT)
        {
            done = true;
        }
        else
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
}

void RenderEntryPoint(void *args)
{
    while (1)
    {
        //render code
    }
}
However, my scene isn't being rendered, and I'm not sure why.
You need to make the OpenGL rendering context current in the rendering thread, and make sure it's not current in the windowing thread. This also means that you can't call any OpenGL functions from the windowing thread.
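A minimal sketch of that split, assuming the device context hDC and the GL context hRC were created along with the window (the names are illustrative, not taken from the question):

HDC   hDC;   // obtained from the window (GetDC)
HGLRC hRC;   // created with wglCreateContext(hDC)

void RenderEntryPoint(void *args)
{
    // Claim the context on the rendering thread; from now on, every GL
    // call happens on this thread only.
    wglMakeCurrent(hDC, hRC);
    while (1)
    {
        // ...render code...
        SwapBuffers(hDC);
    }
}

// In the windowing thread, before starting the render thread:
//     wglMakeCurrent(NULL, NULL);            // give the context up here first
//     _beginthread(RenderEntryPoint, 0, 0);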

IMovieControl::Run fails on Windows XP?

Actually, it only fails the second time it's called. I'm using a windowless control to play video content, where the video being played could change while the control is still on screen. Once the graph is built the first time, we switch media by stopping playback, replacing the SOURCE filter, and running the graph again. This works fine under Vista, but when running on XP, the second call to Run() returns E_UNEXPECTED.
The initialization goes something like this:
// Get the interface for DirectShow's GraphBuilder
mGB.CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER);
// Create the Video Mixing Renderer and add it to the graph
ATL::CComPtr<IBaseFilter> pVmr;
pVmr.CoCreateInstance(CLSID_VideoMixingRenderer9, NULL, CLSCTX_INPROC);
mGB->AddFilter(pVmr, L"Video Mixing Renderer 9");
// Set the rendering mode and number of streams
ATL::CComPtr<IVMRFilterConfig9> pConfig;
pVmr->QueryInterface(IID_IVMRFilterConfig9, (void**)&pConfig);
pConfig->SetRenderingMode(VMR9Mode_Windowless);
pVmr->QueryInterface(IID_IVMRWindowlessControl9, (void**)&mWC);
And here's what we do when we decide to play a movie. RenderFileToVideoRenderer is borrowed from dshowutil.h in the DirectShow samples area.
// Release the source filter, if it exists, so we can replace it.
IBaseFilter *pSource = NULL;
if (SUCCEEDED(mpGB->FindFilterByName(L"SOURCE", &pSource)) && pSource)
{
    mpGB->RemoveFilter(pSource);
    pSource->Release();
    pSource = NULL;
}

// Render the file.
hr = RenderFileToVideoRenderer(mpGB, mPlayPath.c_str(), FALSE);

// QueryInterface for DirectShow interfaces
hr = mpGB->QueryInterface(&mMC);
hr = mpGB->QueryInterface(&mME);
hr = mpGB->QueryInterface(&mMS);

// Read the default video size
hr = mpWC->GetNativeVideoSize(&lWidth, &lHeight, NULL, NULL);
if (hr != E_NOINTERFACE)
{
    if (FAILED(hr))
    {
        return hr;
    }
    // Play video at native resolution, anchored at top-left corner.
    RECT r;
    r.left = 0;
    r.top = 0;
    r.right = lWidth;
    r.bottom = lHeight;
    hr = mpWC->SetVideoPosition(NULL, &r);
}

// Run the graph to play the media file
if (mMC)
{
    hr = mMC->Run();
    if (FAILED(hr))
    {
        // We get here the second time this code is executed.
        return hr;
    }
    mState = Running;
}
if (mME)
{
    mME->SetNotifyWindow((OAHWND)m_hWnd, WM_GRAPHNOTIFY, 0);
}
Anybody know what's going on here?
Try calling IMediaControl::StopWhenReady before removing the source filter.
Where you are calling QueryInterface directly, you can use CComQIPtr<> to wrap the QI for you. That way you won't have to call Release, as it is called automatically.
The syntax looks like this: CComQIPtr<IMediaControl> mediaControl = pGraph;
In FindFilterByName(), instead of passing a raw pointer, pass a CComPtr, again so you won't have to call Release explicitly.
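For example, a sketch with the member names from the question (assuming mME and mMS are IMediaEvent and IMediaSeeking, as their names suggest):

// CComQIPtr does the QueryInterface and the final Release automatically.
CComQIPtr<IMediaControl> mMC(mpGB);
CComQIPtr<IMediaEvent>   mME(mpGB);
CComQIPtr<IMediaSeeking> mMS(mpGB);

// FindFilterByName into a smart pointer, so no explicit Release either:
CComPtr<IBaseFilter> pSource;
if (SUCCEEDED(mpGB->FindFilterByName(L"SOURCE", &pSource)) && pSource)
{
    mpGB->RemoveFilter(pSource);
}   // pSource releases the filter when it goes out of scope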
Never got a resolution on this. The production solution was to just call IGraphBuilder::Release and rebuild the entire graph from scratch. There's a CPU spike and a slight redraw delay when switching videos, but it's less pronounced than we'd feared.