I am trying to set up parallel multi-GPU offscreen rendering contexts. I am following the "OpenGL Insights" book, chapter 27, "Multi-GPU Rendering on NVIDIA Quadro". I also looked into the wglCreateAffinityDCNV docs but still can't pin it down.
My machine has two NVIDIA Quadro 4000 cards (no SLI), running Windows 7 64-bit.
My workflow goes like this:
Create default window context using GLFW.
Map the GPU devices.
Destroy the default GLFW context.
Create a new GL context for each of the devices (currently trying only one).
Set up a boost thread for each context and make the context current in that thread.
Run the rendering procedures on each thread separately (no resource sharing).
Everything is created without errors and runs, but once I try to read pixels from an offscreen FBO I get a null pointer here:
GLubyte* ptr = (GLubyte*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
Also, glGetError() reports "UNKNOWN ERROR".
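For context, the readback itself is a plain pixel-pack-buffer path, roughly like this (a simplified sketch with placeholder names and sizes, not my exact engine code):
// Sketch of the PBO readback that ends in the failing glMapBuffer call.
GLuint pbo = 0;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, 800 * 600 * 4, NULL, GL_STREAM_READ);

glBindFramebuffer(GL_READ_FRAMEBUFFER, offscreenFBO); // the FBO rendered into earlier
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, 800, 600, GL_RGBA, GL_UNSIGNED_BYTE, 0); // packs into the bound PBO

GLubyte* ptr = (GLubyte*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (ptr)
{
    // ... consume the pixel data ...
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);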
I thought multi-threading might be the problem, but the same setup gives an identical result when running on a single thread.
So I believe it is related to the context creation.
Here is how I do it:
//// Creating the default window with GLFW here.
.....
.....
Creating offscreen contexts:
PIXELFORMATDESCRIPTOR pfd =
{
sizeof(PIXELFORMATDESCRIPTOR),
1,
PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER, //Flags
PFD_TYPE_RGBA, //The kind of framebuffer. RGBA or palette.
24, //Colordepth of the framebuffer.
0, 0, 0, 0, 0, 0,
0,
0,
0,
0, 0, 0, 0,
24, //Number of bits for the depthbuffer
8, //Number of bits for the stencilbuffer
0, //Number of Aux buffers in the framebuffer.
PFD_MAIN_PLANE,
0,
0, 0, 0
};
void glMultiContext::renderingContext::createGPUContext(GPUEnum gpuIndex){
int pf;
HGPUNV hGPU[MAX_GPU];
HGPUNV GpuMask[MAX_GPU];
UINT displayDeviceIdx;
GPU_DEVICE gpuDevice;
bool bDisplay = false, bPrimary = false; // initialize before the |= below
// Get a list of the first MAX_GPU GPUs in the system
if ((gpuIndex < MAX_GPU) && wglEnumGpusNV(gpuIndex, &hGPU[gpuIndex])) {
printf("Device# %d:\n", gpuIndex);
// Now get the detailed information about this device:
// how many displays it's attached to
displayDeviceIdx = 0;
if(wglEnumGpuDevicesNV(hGPU[gpuIndex], displayDeviceIdx, &gpuDevice))
{
bPrimary |= (gpuDevice.Flags & DISPLAY_DEVICE_PRIMARY_DEVICE) != 0;
printf(" Display# %d:\n", displayDeviceIdx);
printf(" Name: %s\n", gpuDevice.DeviceName);
printf(" String: %s\n", gpuDevice.DeviceString);
if(gpuDevice.Flags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP)
{
printf(" Attached to the desktop: LEFT=%d, RIGHT=%d, TOP=%d, BOTTOM=%d\n",
gpuDevice.rcVirtualScreen.left, gpuDevice.rcVirtualScreen.right, gpuDevice.rcVirtualScreen.top, gpuDevice.rcVirtualScreen.bottom);
}
else
{
printf(" Not attached to the desktop\n");
}
// See if it's the primary GPU
if(gpuDevice.Flags & DISPLAY_DEVICE_PRIMARY_DEVICE)
{
printf(" This is the PRIMARY Display Device\n");
}
}
///======================= CREATE a CONTEXT HERE
GpuMask[0] = hGPU[gpuIndex];
GpuMask[1] = NULL;
_affDC = wglCreateAffinityDCNV(GpuMask);
if(!_affDC)
{
printf( "wglCreateAffinityDCNV failed");
}
}
printf("GPU context created");
}
glMultiContext::renderingContext *
glMultiContext::createRenderingContext(GPUEnum gpuIndex)
{
glMultiContext::renderingContext *rc;
rc = new renderingContext(gpuIndex);
_pixelFormat = ChoosePixelFormat(rc->_affDC, &pfd);
if(_pixelFormat == 0)
{
printf("failed to choose pixel format");
return NULL;
}
DescribePixelFormat(rc->_affDC, _pixelFormat, sizeof(pfd), &pfd);
if(SetPixelFormat(rc->_affDC, _pixelFormat, &pfd) == FALSE)
{
printf("failed to set pixel format");
return NULL;
}
rc->_affRC = wglCreateContext(rc->_affDC);
if(rc->_affRC == 0)
{
printf("failed to create gl render context");
return NULL;
}
return rc;
}
//Call at the end to make it current :
bool glMultiContext::makeCurrent(renderingContext *rc)
{
if(!wglMakeCurrent(rc->_affDC, rc->_affRC))
{
printf("failed to make context current");
return false;
}
return true;
}
//// init OpenGL objects and rendering here :
..........
............
As I said, I am not getting any errors at any stage of device and context creation.
What am I doing wrong?
UPDATE:
Well, it seems I figured out the bug. I call glfwTerminate() after calling wglMakeCurrent(), and that call apparently also makes the new context no longer current. It is weird, though, as OpenGL commands keep getting executed, so it works in a single thread.
But now, if I spawn another thread using boost threads, I get the initial error. Here is my thread class:
GPUThread::GPUThread(void)
{
_thread =NULL;
_mustStop=false;
_frame=0;
_rc =glMultiContext::getInstance().createRenderingContext(GPU1);
assert(_rc);
glfwTerminate(); //terminate the initial window and context
if(!glMultiContext::getInstance().makeCurrent(_rc)){
printf("failed to make current!!!");
}
// init engine here (GLEW was already initiated)
engine = new Engine(800,600,1);
}
void GPUThread::Start(){
printf("threaded view setup ok");
///init thread here :
_thread=new boost::thread(boost::ref(*this));
_thread->join();
}
void GPUThread::Stop(){
// Signal the thread to stop (thread-safe)
_mustStopMutex.lock();
_mustStop=true;
_mustStopMutex.unlock();
// Wait for the thread to finish.
if (_thread!=NULL) _thread->join();
}
// Thread function
void GPUThread::operator () ()
{
bool mustStop;
do
{
// Display the next animation frame
DisplayNextFrame();
_mustStopMutex.lock();
mustStop=_mustStop;
_mustStopMutex.unlock();
} while (mustStop==false);
}
void GPUThread::DisplayNextFrame()
{
engine->Render(); //renders frame
if(_frame == 101){
_mustStop=true;
}
}
GPUThread::~GPUThread(void)
{
delete _view;
if(_rc != 0)
{
glMultiContext::getInstance().deleteRenderingContext(_rc);
_rc = 0;
}
if(_thread!=NULL)delete _thread;
}
Finally I solved the issues myself. The first problem was that I called glfwTerminate after I had made the new device context current; that probably made the new context not current anymore as well.
The second problem was my inexperience with boost threads. I failed to initialize all the rendering-related objects in the custom thread, because I called the rendering-context init procedures in the constructor, before the thread is actually started, as can be seen in the example above.
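For anyone hitting the same thing, the shape of the fix is to move all the context and GL-object work into the thread that actually renders. A rough sketch of what the thread function becomes (simplified from the snippets above, not the exact final code):
// The affinity context is created and made current inside the rendering
// thread, not in the constructor (which runs on the spawning thread).
// glfwTerminate() stays on the main thread, before this thread is started.
void GPUThread::operator () ()
{
    _rc = glMultiContext::getInstance().createRenderingContext(GPU1);
    assert(_rc);
    if (!glMultiContext::getInstance().makeCurrent(_rc))
    {
        printf("failed to make current!!!");
        return;
    }
    // GL objects must also be created on this thread, after the context is current.
    engine = new Engine(800, 600, 1);
    bool mustStop;
    do
    {
        DisplayNextFrame();
        _mustStopMutex.lock();
        mustStop = _mustStop;
        _mustStopMutex.unlock();
    } while (mustStop == false);
    wglMakeCurrent(NULL, NULL); // release the context before the thread exits
}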
I'm building an application that is used for taking and sharing screenshots in real time between multiple clients over a network.
I'm using the MS Desktop Duplication API to get the image data and it's working smoothly except in some edge cases.
I have been using four games as test applications to see how the screen capture behaves in fullscreen: Heroes of the Storm, Rainbow Six Siege, Counter-Strike and PlayerUnknown's Battlegrounds.
On my own machine, which has a GeForce GTX 1070 graphics card, everything works fine both in and out of fullscreen for all test applications. On two other machines that run a GeForce GTX 980, however, all test applications except PUBG work. When PUBG is running in fullscreen, my desktop duplication instead produces an all-black image and I can't figure out why, as the Desktop Duplication Sample works fine for all test machines and test applications.
What I'm doing is basically the same as the sample, except that I'm extracting the pixel data and creating my own SDL (OpenGL) texture from that data instead of using the acquired ID3D11Texture2D directly.
Why is PUBG in fullscreen on GTX 980 the only test case that fails?
Is there something wrong with the way I'm getting the frame, handling the "DXGI_ERROR_ACCESS_LOST" error or how I'm copying the data from the GPU?
Declarations:
IDXGIOutputDuplication* m_OutputDup = nullptr;
Microsoft::WRL::ComPtr<ID3D11Device> m_Device = nullptr;
ID3D11DeviceContext* m_DeviceContext = nullptr;
D3D11_TEXTURE2D_DESC m_TextureDesc;
Initialization:
bool InitializeScreenCapture()
{
HRESULT result = E_FAIL;
if (!m_Device)
{
D3D_FEATURE_LEVEL featureLevels = D3D_FEATURE_LEVEL_11_0;
D3D_FEATURE_LEVEL featureLevel;
result = D3D11CreateDevice(
nullptr,
D3D_DRIVER_TYPE_HARDWARE,
nullptr,
0,
&featureLevels,
1,
D3D11_SDK_VERSION,
&m_Device,
&featureLevel,
&m_DeviceContext);
if (FAILED(result) || !m_Device)
{
Log("Failed to create D3DDevice);
return false;
}
}
// Get DXGI device
ComPtr<IDXGIDevice> DxgiDevice;
result = m_Device.As(&DxgiDevice);
if (FAILED(result))
{
Log("Failed to get DXGI device);
return false;
}
// Get DXGI adapter
ComPtr<IDXGIAdapter> DxgiAdapter;
result = DxgiDevice->GetParent(__uuidof(IDXGIAdapter), &DxgiAdapter);
if (FAILED(result))
{
Log("Failed to get DXGI adapter);
return false;
}
DxgiDevice.Reset();
// Get output
UINT Output = 0;
ComPtr<IDXGIOutput> DxgiOutput;
result = DxgiAdapter->EnumOutputs(Output, &DxgiOutput);
if (FAILED(result))
{
Log("Failed to get DXGI output);
return false;
}
DxgiAdapter.Reset();
ComPtr<IDXGIOutput1> DxgiOutput1;
result = DxgiOutput.As(&DxgiOutput1);
if (FAILED(result))
{
Log("Failed to get DXGI output1);
return false;
}
DxgiOutput.Reset();
// Create desktop duplication
result = DxgiOutput1->DuplicateOutput(m_Device.Get(), &m_OutputDup);
if (FAILED(result))
{
Log("Failed to create output duplication);
return false;
}
DxgiOutput1.Reset();
DXGI_OUTDUPL_DESC outputDupDesc;
m_OutputDup->GetDesc(&outputDupDesc);
// Create CPU access texture description
m_TextureDesc.Width = outputDupDesc.ModeDesc.Width;
m_TextureDesc.Height = outputDupDesc.ModeDesc.Height;
m_TextureDesc.Format = outputDupDesc.ModeDesc.Format;
m_TextureDesc.ArraySize = 1;
m_TextureDesc.BindFlags = 0;
m_TextureDesc.MiscFlags = 0;
m_TextureDesc.SampleDesc.Count = 1;
m_TextureDesc.SampleDesc.Quality = 0;
m_TextureDesc.MipLevels = 1;
m_TextureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_FLAG::D3D11_CPU_ACCESS_READ;
m_TextureDesc.Usage = D3D11_USAGE::D3D11_USAGE_STAGING;
return true;
}
Screen capture:
bool TeamSystem::CaptureScreen()
{
if (!m_ScreenCaptureInitialized)
{
Log("Attempted to capture screen without ScreenCapture being initialized");
return false;
}
HRESULT result = E_FAIL;
DXGI_OUTDUPL_FRAME_INFO frameInfo;
ComPtr<IDXGIResource> desktopResource = nullptr;
ID3D11Texture2D* copyTexture = nullptr;
ComPtr<ID3D11Resource> image;
int32_t attemptCounter = 0;
DWORD startTicks = GetTickCount();
do // Loop until we get a non empty frame
{
m_OutputDup->ReleaseFrame();
result = m_OutputDup->AcquireNextFrame(1000, &frameInfo, &desktopResource);
if (FAILED(result))
{
if (result == DXGI_ERROR_ACCESS_LOST) // Access may be lost when switching to/from fullscreen mode (any application); when this happens we need to reacquire the output duplication
{
m_OutputDup->ReleaseFrame();
m_OutputDup->Release();
m_OutputDup = nullptr;
m_ScreenCaptureInitialized = InitializeScreenCapture();
if (m_ScreenCaptureInitialized)
{
result = m_OutputDup->AcquireNextFrame(1000, &frameInfo, &desktopResource);
}
else
{
Log("Failed to reinitialize screen capture after access was lost");
return false;
}
}
if (FAILED(result))
{
Log("Failed to acquire next frame);
return false;
}
}
attemptCounter++;
if (GetTickCount() - startTicks > 3000)
{
Log("Screencapture timed out after " << attemptCounter << " attempts");
return false;
}
} while(frameInfo.TotalMetadataBufferSize <= 0 || frameInfo.LastPresentTime.QuadPart <= 0); // This is how you wait for a frame that actually contains image data, according to SO (https://stackoverflow.com/questions/49481467/acquirenextframe-not-working-desktop-duplication-api-d3d11)
Log("ScreenCapture succeeded after " << attemptCounter << " attempt(s)");
// Query for IDXGIResource interface
result = desktopResource->QueryInterface(__uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&copyTexture));
desktopResource->Release();
desktopResource = nullptr;
if (FAILED(result))
{
Log("Failed to acquire texture from resource);
m_OutputDup->ReleaseFrame();
return false;
}
// Copy image into a CPU access texture
ID3D11Texture2D* stagingTexture = nullptr;
result = m_Device->CreateTexture2D(&m_TextureDesc, nullptr, &stagingTexture);
if (FAILED(result) || stagingTexture == nullptr)
{
Log("Failed to copy image data to access texture);
m_OutputDup->ReleaseFrame();
return false;
}
D3D11_MAPPED_SUBRESOURCE mappedResource;
m_DeviceContext->CopyResource(stagingTexture, copyTexture);
m_DeviceContext->Map(stagingTexture, 0, D3D11_MAP_READ, 0, &mappedResource);
void* copy = malloc(m_TextureDesc.Width * m_TextureDesc.Height * 4);
memcpy(copy, mappedResource.pData, m_TextureDesc.Width * m_TextureDesc.Height * 4);
m_DeviceContext->Unmap(stagingTexture, 0);
stagingTexture->Release();
m_OutputDup->ReleaseFrame();
// Create a new SDL texture from the data in the copy variable
free(copy);
return true;
}
Some notes:
I have modified my original code to make it more readable so some cleanup and logging in the error handling is missing.
None of the error or timeout cases (except DXGI_ERROR_ACCESS_LOST) triggers in any testing scenario.
The "attemptCounter" never goes above 2 in any testing scenario.
The test cases are limited since I don't have access to a computer which produces the black image case.
The culprit was CopyResource() and how I created the CPU access texture.
CopyResource() returns void, which is why I didn't look into it before; I didn't think it could fail in any significant way, since I expected it to return bool or HRESULT if that were the case.
The documentation of CopyResource() does, however, disclose a couple of failure cases.
This method is unusual in that it causes the GPU to perform the copy operation (similar to a memcpy by the CPU). As a result, it has a few restrictions designed for improving performance. For instance, the source and destination resources:
Must be different resources.
Must be the same type.
Must have identical dimensions (including width, height, depth, and size as appropriate).
Must have compatible DXGI formats, which means the formats must be identical or at least from the same type group.
Can't be currently mapped.
Since the initialization code runs before the test application enters fullscreen, the CPU access texture description is set up using the desktop resolution, format, etc. This caused CopyResource() to fail silently and simply not write anything to stagingTexture in the test cases where a non-native resolution was used for the test application.
In conclusion, I just moved the texture description setup into CaptureScreen() and used the description of copyTexture for the fields that have to match between the two textures.
// Create CPU access texture
D3D11_TEXTURE2D_DESC copyTextureDesc;
copyTexture->GetDesc(&copyTextureDesc);
D3D11_TEXTURE2D_DESC textureDesc;
textureDesc.Width = copyTextureDesc.Width;
textureDesc.Height = copyTextureDesc.Height;
textureDesc.Format = copyTextureDesc.Format;
textureDesc.ArraySize = copyTextureDesc.ArraySize;
textureDesc.BindFlags = 0;
textureDesc.MiscFlags = 0;
textureDesc.SampleDesc = copyTextureDesc.SampleDesc;
textureDesc.MipLevels = copyTextureDesc.MipLevels;
textureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_FLAG::D3D11_CPU_ACCESS_READ;
textureDesc.Usage = D3D11_USAGE::D3D11_USAGE_STAGING;
ID3D11Texture2D* stagingTexture = nullptr;
result = m_Device->CreateTexture2D(&textureDesc, nullptr, &stagingTexture);
While this solved the issue I was having, I still don't know why the reinitialization in the DXGI_ERROR_ACCESS_LOST handling didn't resolve it anyway. Does the desktop duplication description not use the same dimensions and format as copyTexture?
I also don't know why this didn't fail in the same way on computers with newer graphics cards. I did, however, notice that those machines were able to capture fullscreen applications using a simple BitBlt() of the desktop surface.
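For completeness, by "a simple BitBlt()" I mean a plain GDI grab along these lines (a minimal sketch; the hard-coded width/height are placeholders and error handling is omitted):
// GDI fallback: copy the desktop surface into a compatible bitmap.
HDC screenDC = GetDC(NULL);                        // DC for the whole desktop
HDC memDC = CreateCompatibleDC(screenDC);
HBITMAP bitmap = CreateCompatibleBitmap(screenDC, 1920, 1080);
HGDIOBJ oldBitmap = SelectObject(memDC, bitmap);

BitBlt(memDC, 0, 0, 1920, 1080, screenDC, 0, 0, SRCCOPY);

// ... read the pixels out with GetDIBits() and hand them to SDL ...

SelectObject(memDC, oldBitmap);
DeleteObject(bitmap);
DeleteDC(memDC);
ReleaseDC(NULL, screenDC);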
I am working on a game engine, with C++ and OpenGL, as a learning project. On my home PC, which has an ATI graphics card, everything is OK. Up to now, all the engine can do is display a sprite, and it does that with no problem. At work there are two Windows PCs with NVIDIA cards, and when I run the program there, it acts as if OpenGL is not there at all.
I checked glGetIntegerv(GL_MAJOR_VERSION); glGetIntegerv(GL_MINOR_VERSION); and it yielded 4.5.
Here is my init function:
namespace windows
{
HGLRC OpenGLWindows::hglrc;
bool OpenGLWindows::initialize(HWND& hwnd, HDC& hdc)
{
PIXELFORMATDESCRIPTOR pfd;
const int pixelFormatAttribList[] =
{
WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
WGL_COLOR_BITS_ARB, 32,
WGL_DEPTH_BITS_ARB, 24,
WGL_STENCIL_BITS_ARB, 8,
0, //End
};
//choose pixel format
int pixelFormat;
UINT numFormats;
//dummy context
hglrc = wglCreateContext(hdc);
wglMakeCurrent(hdc, hglrc);
//initialize glew
glewExperimental = TRUE;
GLenum err = glewInit();
std::cout << glewGetErrorString(err) << std::endl;
if (err != GLEW_OK)
{
std::cout << "Error: GLEW couldn't be initialized = " << glewGetErrorString(err) << std::endl;
return false;
}
assert(hwnd != NULL);
//set pixel format
if (!WGL_ARB_pixel_format) errorQuit("WGL_ARB_pixel_format not supported");
wglChoosePixelFormatARB = (PFNWGLCHOOSEPIXELFORMATARBPROC)wglGetProcAddress("wglChoosePixelFormatARB");
if (!wglChoosePixelFormatARB(hdc, pixelFormatAttribList, NULL, 1, &pixelFormat, &numFormats))
{
std::cout << "Error: pixel format could not be created" << std::endl;
}
else
{
SetPixelFormat(hdc, pixelFormat, &pfd);
}
//create real context
int contextAttribList[5] =
{
WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
WGL_CONTEXT_MINOR_VERSION_ARB, 0, 0
};
if (!WGL_ARB_create_context) errorQuit("WGL_ARB_create_context not supported");
wglCreateContextAttribsARB = (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
hglrc = wglCreateContextAttribsARB(hdc, 0, contextAttribList);
if (!wglMakeCurrent(hdc, hglrc))
{
errorQuit("OpenGL context creation failed");
}
GLint major;
GLint minor;
glGetIntegerv(GL_MAJOR_VERSION, &major);
glGetIntegerv(GL_MINOR_VERSION, &minor);
std::cout << "GL version = " << major << "." << minor << std::endl;
std::cout << glGetString(GL_VENDOR) << std::endl;
std::cout << glGetString(GL_RENDERER) << std::endl;
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
return true;
}
void OpenGLWindows::shutDown(HWND& hwnd, HDC& hdc)
{
wglMakeCurrent(hdc, NULL);
wglDeleteContext(hglrc);
}
}
I have been looking at forums for hours, and the topics I found close to this one always involved crashes, and their solutions didn't work. I don't get any crashes, just a window with the color I specified while creating the window context. I wonder what I am doing wrong.
EDIT: I just created a whole dummy window, and now it works with NVIDIA too.
You are trying to use the WGL_ARB_pixel_format extension and are running into a chicken-and-egg problem: to get the extension function pointers, you have to create a GL context first. But to create a GL context, you need to set a pixel format for your window first. The reference page for wglCreateContext explicitly states:
A rendering context is not the same as a device context. Set the pixel format of the device context before creating a rendering context.
So you must use the standard ChoosePixelFormat() & SetPixelFormat() approach to reliably create your helper context to query the WGL extensions.
However, SetPixelFormat must not be used more than once on the same window:
Setting the pixel format of a window more than once can lead to significant complications for the Window Manager and for multithread applications, so it is not allowed. An application can only set the pixel format of a window one time. Once a window's pixel format is set, it cannot be changed.
This means that when the second pixel format for your "real" context differs from the pixel format of the dummy, your approach will fail. This is why one typically uses a separate window for the dummy context (which is possibly never mapped to the screen) and immediately destroys it again.
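A rough sketch of that approach (window class registration and error checks omitted; the names here are placeholders, not a drop-in implementation):
// Throwaway window + context, used only to obtain the WGL extension pointers.
HWND dummyWnd = CreateWindow(TEXT("STATIC"), TEXT("dummy"), WS_OVERLAPPEDWINDOW,
                             0, 0, 1, 1, NULL, NULL, GetModuleHandle(NULL), NULL);
HDC dummyDC = GetDC(dummyWnd);

PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
    PFD_TYPE_RGBA, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 24, 8, 0,
    PFD_MAIN_PLANE, 0, 0, 0, 0 };
SetPixelFormat(dummyDC, ChoosePixelFormat(dummyDC, &pfd), &pfd);

HGLRC dummyRC = wglCreateContext(dummyDC);
wglMakeCurrent(dummyDC, dummyRC);

// wglGetProcAddress works now: grab wglChoosePixelFormatARB and
// wglCreateContextAttribsARB here (or call glewInit()).

// Tear the dummy down again.
wglMakeCurrent(NULL, NULL);
wglDeleteContext(dummyRC);
ReleaseDC(dummyWnd, dummyDC);
DestroyWindow(dummyWnd);

// Then, on the *real* window's DC: wglChoosePixelFormatARB + SetPixelFormat
// (its first and only SetPixelFormat) and wglCreateContextAttribsARB.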
I'm trying to learn SDL 2.0 and I've been following the Lazy Foo tutorials. The problem is that those tutorials keep everything in one file, so I tried to split some things up when starting a new game project. My problem is that since I separated my texture class from the rest, it sometimes crashes the program: not always, but pretty often.
I have a global window and a global renderer which I'm using in my class. I get a segfault at different places in the program, but always in one of the following functions.
bool myTexture::loadFromFile(string path) {
printf("In loadFromFile\n");
//Free preexisting texture
free();
//The final texture
SDL_Texture *l_newTexture = NULL;
//Load image at specified path
SDL_Surface *l_loadedSurface = IMG_Load(path.c_str());
if(l_loadedSurface == NULL) {
printf("Unable to load image %s! SDL_image Error: %s\n", path.c_str(), IMG_GetError());
} else {
//Color key image for transparency
//SDL_SetColorKey(l_loadedSurface, SDL_TRUE, SDL_MapRGB(l_loadedSurface->format, 0, 0xFF, 0xFF));
//Create texture from surface pixels
l_newTexture = SDL_CreateTextureFromSurface(g_renderer, l_loadedSurface);
if(l_newTexture == NULL) {
printf("Unable to create texture from %s! SDL Error: %s\n", path.c_str(), SDL_GetError());
} else {
//Get image dimensions
m_width = l_loadedSurface->w;
m_height = l_loadedSurface->h;
}
//Get rid of old loaded surface
SDL_FreeSurface(l_loadedSurface);
}
m_texture = l_newTexture;
//return success
printf("end from file \n");
return m_texture != NULL;
}
void myTexture::free() {
printf("In myTexture::free\n");
//Free texture if it exist
if(m_texture != NULL) {
cout << (m_texture != NULL) << endl;
SDL_DestroyTexture(m_texture);
printf("Destroyed m_texture\n");
m_texture = NULL;
m_width = 0;
m_height = 0;
}
printf("end free\n");
}
After reading up on SDL and other things, I understood that there might be some thread trying to deallocate something it isn't allowed to deallocate. However, I haven't threaded anything yet.
I managed to solve this on my own. It turned out that I never created a new myTexture object, which was really careless. But I still don't understand how it managed to render sometimes; it doesn't make any sense to me. I never created the object, yet I could still call its render function sometimes...
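In hindsight, the "sometimes works" part is just undefined behaviour: a non-virtual member call is resolved at compile time, so the call itself goes through even when the pointer never pointed at a real object, and whether it crashes depends on what the function then reads through that pointer. Roughly (illustrative only; the render signature here is made up):
myTexture* tex;     // never set to a valid object -- points at garbage
tex->render(0, 0);  // undefined behaviour: the call still happens, and may appear
                    // to work or segfault depending on what m_texture, m_width and
                    // m_height happen to contain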
I'm using a borderless window and copied the device reset code from a YouTube video. It worked there, but all I get is this message:
if(FAILED(hr)){
MessageBox(0, "Failed to reset device!", 0, 0);
return;
}
Where did I go wrong? Did I forget something in InvalidateDeviceObjects()? I can give you more code, but not everything, because it's just too long.
I really need help...
Reset Device:
void Render(){
if(HandleDeviceLost/*VK_F1*/){
if(DeviceLost){
Sleep(100);
if(FAILED(hr=d3ddev->TestCooperativeLevel())){
if(hr==D3DERR_DEVICELOST){
return;
}
if(hr==D3DERR_DEVICENOTRESET){
//clean
InvalidateDeviceObjects();
//reset device
hr=d3ddev->Reset(&d3dpp);
if(FAILED(hr)){
MessageBox(0, "Failed to reset device!", 0, 0);
return;
}
//restore
RestoreDeviceObjects();
}
return;
}
}
}
DeviceLost=0;
/*
Stuff
*/
hr=d3ddev->Present(NULL, NULL, NULL, NULL);
if(hr==D3DERR_DEVICELOST){
DeviceLost=1;
}
}
Release Objects:
void InvalidateDeviceObjects(){
buffShipMaterial->Release();
Wall_large->Release();
Wall_small->Release();
space_text->Release();
meshWall->Release();
menuText->Release();
menuText2->Release();
menuText3->Release();
text_cpu->Release();
text_player->Release();
text_player2->Release();
number_0->Release();
number_1->Release();
number_2->Release();
number_3->Release();
number_4->Release();
number_5->Release();
number_6->Release();
number_7->Release();
number_8->Release();
number_9->Release();
number_10->Release();
}
In the sample you linked that works, an error from Reset results in a return, and the render function simply gets called again. This is normal: there is no reason why Reset must succeed on the first call, so it's usual to keep retrying rather than showing an error message like your code does.
If you're rendering in the message loop, like in that example, just do the same thing - don't stop when you get an error.
If you don't render in the message loop, but use the WM_PAINT method, then this is the general pattern that I use - although sometimes I set a timer rather than calling InvalidateRect, depending on the app - and it has been robust enough for many applications. You can see how the Reset gets repeated on failure rather than throwing up an error message on the first fail. It might be an idea to adopt this pattern:
void CMyClass::DrawScene()
{
// perform all dx9 scene drawing
HRESULT hr;
// if device was lost, try to restore it
if (m_bDeviceLost)
{
// is it ok to render again yet?
if (FAILED(hr = m_pD3DDevice->TestCooperativeLevel()))
{
// the device has been lost but cannot be reset at this time
if (hr == D3DERR_DEVICELOST)
{
// request repaint and exit
InvalidateRect(NULL);
return;
}
// the device has been lost and can be reset
if (hr == D3DERR_DEVICENOTRESET)
{
// do lost/reset/restore cycle
OnLostDevice();
hr = m_pD3DDevice->Reset(&m_pD3Dpp);
if (FAILED(hr))
{
// reset failed, try again later
InvalidateRect(NULL);
return;
}
OnResetDevice();
}
}
// flag device status ok now
m_bDeviceLost = false;
}
// ... clear to background and do the drawing ...
// display scene
hr = m_pD3DDevice->Present(NULL, NULL, GetSafeHwnd(), NULL);
m_bDeviceLost = (hr == D3DERR_DEVICELOST);
// request repaint if device has been lost
if (m_bDeviceLost)
{
InvalidateRect(NULL);
}
}
Also, you must ensure that TestCooperativeLevel and Reset are called from the same thread that was used to create the device.
I am using SDL 1.2 in a minimal fashion to create a cross-platform OpenGL context (this is on Win7 64-bit) in C++. I also use GLEW so that my context supports OpenGL 4.2 (which my driver supports).
Things work correctly at run time, but lately I have been noticing a random crash on shutdown, in the call to SDL_Quit.
What is the proper sequence for SDL (1.2) with OpenGL start up and shutdown?
Here is what I do currently:
int MyObj::Initialize(int width, int height, bool vsync, bool fullscreen)
{
if(SDL_Init( SDL_INIT_EVERYTHING ) < 0)
{
printf("SDL_Init failed: %s\n", SDL_GetError());
return 0;
}
SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE, 24);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 0);
SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, vsync ? 1 : 0);
if((m_SurfDisplay = SDL_SetVideoMode(width, height, 24,
SDL_HWSURFACE |
SDL_GL_DOUBLEBUFFER |
(fullscreen ? SDL_FULLSCREEN : 0) |
SDL_OPENGL)) == NULL)
{
printf("SDL_SetVideoMode failed: %s\n", SDL_GetError());
return 0;
}
GLenum err = glewInit();
if (GLEW_OK != err)
return 0;
m_Running = true;
return 1;
}
int MyObj::Shutdown()
{
SDL_FreeSurface(m_SurfDisplay);
SDL_Quit();
return 1;
}
In between the init and shutdown calls I create a number of GL resources (e.g. textures, VBOs, VAOs, shaders) and render my scene each frame, with SDL_GL_SwapBuffers() at the end of each frame (pretty typical). Like so:
int MyObject::Run()
{
SDL_Event Event;
while(m_Running)
{
while(SDL_PollEvent(&Event))
{ OnEvent(&Event); } //this eventually causes m_Running to be set to false on "esc"
ProcessFrame();
SDL_GL_SwapBuffers();
}
return 1;
}
MyObject::Shutdown() is called from ~MyObject, and just recently SDL_Quit started crashing the app there. I have also tried calling Shutdown outside of the destructor, after my render loop returns, with the same result.
One thing that I do not do (and didn't think I needed to do) is call the glDelete* functions for all my allocated GL resources before calling Shutdown. I thought they would automatically be cleaned up by the destruction of the context, which I assumed was happening during SDL_FreeSurface or SDL_Quit(). I do of course call the glDelete* functions in the destructors of their wrapper objects, which eventually get called at the tail end of ~MyObject, since the wrapper objects are members of other objects that are members of MyObject.
As an experiment I tried forcing all the appropriate glDelete* calls to occur before Shutdown(), and the crash never seems to occur. The funny thing is that I did not need to do this a week ago, and really nothing has changed according to Git (I may be wrong though).
Is it really necessary to make sure all GL resources are freed before calling MyObject::Shutdown with SDL? Does it look like I might be doing something else wrong?
m_SurfDisplay = SDL_SetVideoMode(...)
...
SDL_FreeSurface(m_SurfDisplay);
^^^^^^^^^^^^^ naughty naughty!
SDL_SetVideoMode():
The returned surface is freed by SDL_Quit and must not be freed by the caller.
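So don't free m_SurfDisplay yourself. A minimal sketch of the corrected shutdown (assuming your GL wrapper objects can release their resources explicitly before this runs):
int MyObj::Shutdown()
{
    // Release GL objects (textures, VBOs, shaders, ...) here, while the
    // context created by SDL_SetVideoMode() still exists and is current.

    // Do NOT call SDL_FreeSurface(m_SurfDisplay): SDL owns that surface
    // and SDL_Quit() frees it together with the window and GL context.
    m_SurfDisplay = NULL;

    SDL_Quit();
    return 1;
}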