Proper shutdown for SDL with OpenGL

I am using SDL 1.2 in a minimal fashion to create a cross-platform OpenGL context (this is on Win7 64-bit) in C++. I also use GLEW to have my context support OpenGL 4.2 (which my driver supports).
Things work correctly at run time, but lately I have been noticing a random crash on shutdown, when SDL_Quit is called.
What is the proper sequence for SDL (1.2) with OpenGL start-up and shutdown?
Here is what I do currently:
int MyObj::Initialize(int width, int height, bool vsync, bool fullscreen)
{
    if(SDL_Init( SDL_INIT_EVERYTHING ) < 0)
    {
        printf("SDL_Init failed: %s\n", SDL_GetError());
        return 0;
    }

    SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
    SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 8);
    SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE, 24);
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 0);
    SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, vsync ? 1 : 0);

    if((m_SurfDisplay = SDL_SetVideoMode(width, height, 24,
                            SDL_HWSURFACE |
                            SDL_GL_DOUBLEBUFFER |
                            (fullscreen ? SDL_FULLSCREEN : 0) |
                            SDL_OPENGL)) == NULL)
    {
        printf("SDL_SetVideoMode failed: %s\n", SDL_GetError());
        return 0;
    }

    GLenum err = glewInit();
    if (GLEW_OK != err)
        return 0;

    m_Running = true;
    return 1;
}

int MyObj::Shutdown()
{
    SDL_FreeSurface(m_SurfDisplay);
    SDL_Quit();
    return 1;
}
In between the init and shutdown calls I create a number of GL resources (e.g. textures, VBOs, VAOs, shaders, etc.) and render my scene each frame, with an SDL_GL_SwapBuffers() call at the end of each frame (pretty typical). Like so:
int MyObject::Run()
{
    SDL_Event Event;

    while(m_Running)
    {
        while(SDL_PollEvent(&Event))
        { OnEvent(&Event); } // this eventually causes m_Running to be set to false on "esc"

        ProcessFrame();
        SDL_GL_SwapBuffers();
    }
    return 1;
}
Within ~MyObject, MyObject::Shutdown() is called, and that is where SDL_Quit has recently started crashing the app. I have also tried calling Shutdown outside of the destructor, after my render loop returns, to the same effect.
One thing that I do not do (and didn't think I needed to do) is call the glDelete* functions for all my allocated GL resources before calling Shutdown; I thought they would automatically be cleaned up by the destruction of the context, which I assumed was happening during SDL_FreeSurface or SDL_Quit(). I do, of course, call the glDelete* functions in the destructors of their wrapping objects, which eventually get called at the tail end of ~MyObject, since the wrapper objects belong to other objects that are members of MyObject.
As an experiment I tried forcing all the appropriate glDelete* calls to occur before Shutdown(), and the crash never seems to occur. The funny thing is that I did not need to do this a week ago, and according to Git nothing has really changed (though I may be wrong about that).
Is it really necessary to make sure all GL resources are freed before calling MyObject::Shutdown with SDL? Does it look like I might be doing something else wrong?

m_SurfDisplay = SDL_SetVideoMode(...)
...
SDL_FreeSurface(m_SurfDisplay);
^^^^^^^^^^^^^ naughty naughty!
From the documentation for SDL_SetVideoMode():
The returned surface is freed by SDL_Quit and must not be freed by the caller.
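Putting that together, a minimal sketch of a corrected shutdown, reusing the member names from the question (and assuming your wrapper objects have already issued their glDelete* calls while the context is still alive): drop the SDL_FreeSurface call entirely and let SDL_Quit tear down both the surface and the GL context.
int MyObj::Shutdown()
{
    // GL resources (textures, VBOs, VAOs, shaders, ...) should already have
    // been released by their owning wrappers at this point, while the GL
    // context still exists.

    // Do NOT free the surface returned by SDL_SetVideoMode;
    // SDL_Quit frees it and destroys the GL context.
    m_SurfDisplay = NULL;
    SDL_Quit();
    return 1;
}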

Related

SDL_CreateRGBSurfaceFrom / SDL_BlitSurface - I see old frames on my emulator

I'm working on a Space Invaders emulator and for display output I'm using SDL2.
The problem is that in the output window I see all the frames drawn since the emulation started!
Basically the important piece of code is this:
Intel8080 mainObject; // My Intel 8080 CPU emulator
mainObject.loadROM();

//Main loop flag
bool quit = false;

//Event handler
SDL_Event e;

//While application is running
while (!quit)
{
    //Handle events on queue
    while (SDL_PollEvent(&e) != 0)
    {
        //User requests quit
        if (e.type == SDL_QUIT)
        {
            quit = true;
        }
    }

    if (mainObject.frameReady)
    {
        mainObject.frameReady = false;

        gHelloWorld = SDL_CreateRGBSurfaceFrom(&mainObject.frameBuffer32, 256, 224, 32, 4 * 256,
                                               0xff000000, 0x00ff0000, 0x0000ff00, 0x000000ff);

        //Apply the image
        SDL_BlitSurface(gHelloWorld, NULL, gScreenSurface, NULL);

        //Update the surface
        SDL_UpdateWindowSurface(gWindow);
    }

    mainObject.executeROM();
}
where Intel8080 is my CPU emulator code and mainObject.frameBuffer32 is the Space Invaders video RAM, which I converted from 1bpp to 32bpp in order to use the SDL_CreateRGBSurfaceFrom function.
The emulation is working fine, but I see all the frames generated since the emulator started!
I tried changing the alpha value in the 4 bytes of each RGBA pixel, but nothing changes.
This is happening because it looks like you're rendering the game without first clearing the window. Basically, you should fill the entire window with a color and then render on top of it constantly. The idea is that filling the window with a specific color before rendering over it is the equivalent of erasing the previous frame (most modern computers are powerful enough to handle this).
You might want to read up on SDL's SDL_FillRect function; it allows you to fill the entire screen with a specific color.
Rendering pseudocode:
while(someCondition)
{
    [...]

    // Note, I'm not sure if "gScreenSurface" is the proper variable to use here.
    // I got it from reading your code.
    SDL_FillRect(gScreenSurface, NULL, SDL_MapRGB(gScreenSurface->format, 0, 0, 0));
    SDL_BlitSurface(gHelloWorld, NULL, gScreenSurface, NULL);
    SDL_UpdateWindowSurface(gWindow);

    [...]
}

Where is the OpenGL context stored?

I am fairly new to OpenGL and have been using GLFW combined with GLEW to create and display OpenGL contexts. The following code snippet shows how I create a window and use it for OpenGL.
GLFWwindow* window;

if (!glfwInit())
{
    return -1;
}

window = glfwCreateWindow(1280, 720, "Hello OpenGL", NULL, NULL);
if (!window)
{
    glfwTerminate();
    return -1;
}

glfwMakeContextCurrent(window);

GLenum err = glewInit();
if (err != GLEW_OK)
{
    glfwTerminate();
    return -1;
}
How is glewInit able to fetch the window/context and use it to initialize itself without my having to pass any additional arguments to it?
I can only imagine that when we call the glfwMakeContextCurrent function it somehow stores the context somewhere within my process for later use, but no documentation shows this.
The current OpenGL context is a global (or more to the point, thread_local) "variable" of sorts. All OpenGL functions act on whatever context is active in the current thread at the moment.
This includes the OpenGL calls that GLEW makes.
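A small sketch of the idea, using a hypothetical two-window setup purely for illustration (the GLFW, GLEW, and GL calls themselves are real): every GL call, including the ones glewInit makes internally, is answered by whichever context is current on the calling thread at that moment.
#include <GL/glew.h>
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit())
        return -1;

    // Two windows, each with its own GL context, on the same thread.
    GLFWwindow* a = glfwCreateWindow(640, 480, "A", NULL, NULL);
    GLFWwindow* b = glfwCreateWindow(640, 480, "B", NULL, NULL);
    if (!a || !b)
        return -1;

    glfwMakeContextCurrent(a);                      // context A is now current on this thread
    if (glewInit() != GLEW_OK)                      // GLEW queries whatever context is current: A
        return -1;
    const GLubyte* versionA = glGetString(GL_VERSION);

    glfwMakeContextCurrent(b);                      // switch: context B is now current
    const GLubyte* versionB = glGetString(GL_VERSION);  // this one is answered by B

    (void)versionA; (void)versionB;
    glfwTerminate();
    return 0;
}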

The function CreateWICTextureFromFile() will not actually load a texture (Direct3D11, C++)

I am trying to load a grass texture onto my game with the function DirectX::CreateWICTextureFromFile, but every time I do, the function doesn't seem to actually load anything; it just produces a black texture. The function successfully returns S_OK, and I've also called CoInitialize(NULL) before I actually call the function. But it still doesn't work.
Below is my usage of the function:
// This is where I load the texture
void Load_Texture_for_Ground()
{
    HRESULT status;
    ID3D11ShaderResourceView * Texture;

    CoInitialize(NULL);
    status = DirectX::CreateWICTextureFromFile(device, L"AmazingGrass.jpg", NULL, &Texture);

    if (Texture != NULL) // This returns true
    {
        MessageBox(MainWindow, L"The pointer points to the texture", L"MessageBox", MB_OK);
    }
    if (status == S_OK) // This returns true
    {
        MessageBox(MainWindow, L"The function succeeded", L"MessageBox", MB_OK);
    }

    CoUninitialize();
}

// This is where I actually apply the texture to an object, assuming I already declared all the variables in this function
void DrawTheGround ()
{
    DevContext->VSSetShader(VS, 0, 0);
    DevContext->PSSetShader(PS, 0, 0);

    DevContext->IASetVertexBuffers(
        0,
        1,
        &GroundVertexBuffer,
        &strides,
        &offset
        );

    DevContext->IASetIndexBuffer(
        IndexBuffer,
        DXGI_FORMAT_R32_UINT,
        0
        );

    /* Transforming the matrices */
    TransformedMatrix = GroundWorld * CameraView * CameraProjection;
    Data.WORLDSPACE = XMMatrixTranspose(GroundWorld);
    Data.TRANSFORMEDMATRIX = XMMatrixTranspose(TransformedMatrix);

    /* Updating the matrix in the application's constant buffer */
    DevContext->UpdateSubresource(
        ConstantBuffer,
        0,
        NULL,
        &Data,
        0,
        0
        );

    DevContext->VSSetConstantBuffers(0, 1, &ConstantBuffer);
    DevContext->PSSetShaderResources(0, 1, &Texture);
    DevContext->PSSetSamplers(0, 1, &TextureSamplerState);

    DevContext->DrawIndexed(6, 0, 0);
}
What could be wrong here? Why won't the function load the texture?
A quick way to test if you have loaded the texture data correctly is to use SaveWICTextureToFile in the ScreenGrab module right after loading it. You'd only do this for debugging of course.
#include <wincodec.h>
#include <wrl/client.h>

#include "ScreenGrab.h"        // DirectX Tool Kit
#include "WICTextureLoader.h"  // DirectX Tool Kit

using Microsoft::WRL::ComPtr;

ComPtr<ID3D11Resource> Res;
ComPtr<ID3D11ShaderResourceView> Texture;

HRESULT status = DirectX::CreateWICTextureFromFile(device, L"AmazingGrass.jpg", &Res, &Texture);
if (FAILED(status))
{
    // Error handling
}

#ifdef _DEBUG
status = DirectX::SaveWICTextureToFile(DevContext, Res.Get(),
    GUID_ContainerFormatBmp, L"SCREENSHOT.BMP");
#endif
Then you can run the code and check that SCREENSHOT.BMP is not all black.
I strongly suggest you adopt the ComPtr smart pointer and the FAILED / SUCCEEDED macros in your coding style. Raw pointers and direct comparisons of HRESULT to S_OK set you up for a lot of bugs.
You should not call CoInitialize every frame. You should call it once as part of your application's initialization.
You should not be creating a new instance of SpriteBatch and SpriteFont every frame. Just create them after you create your device and hold on to them.
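A minimal sketch of that one-time initialization, assuming a Win32-style entry point (the entry point and the threading-model flag are placeholders; use whatever matches your application):
#include <windows.h>
#include <objbase.h>

int APIENTRY wWinMain(HINSTANCE, HINSTANCE, LPWSTR, int)
{
    // Initialize COM exactly once, before any WIC-based texture loading.
    HRESULT hr = CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    if (FAILED(hr))
        return -1;

    // ... create the device, load textures, create SpriteBatch/SpriteFont once,
    //     run the message/render loop ...

    CoUninitialize();   // balance the single CoInitializeEx at shutdown
    return 0;
}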

Should I make sure to destroy SDL 2.0 objects (renderer, window, textures, etc.) before exiting the program?

This tutorial on SDL 2.0 uses code that returns from main without first destroying any of the resource pointers:
int main(int argc, char** argv){
    if (SDL_Init(SDL_INIT_EVERYTHING) == -1){
        std::cout << SDL_GetError() << std::endl;
        return 1;
    }

    window = SDL_CreateWindow("Lesson 2", SDL_WINDOWPOS_CENTERED,
        SDL_WINDOWPOS_CENTERED, SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_SHOWN);
    if (window == nullptr){
        std::cout << SDL_GetError() << std::endl;
        return 2; //this
    }

    renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED
        | SDL_RENDERER_PRESENTVSYNC);
    if (renderer == nullptr){
        std::cout << SDL_GetError() << std::endl;
        return 3; //and this too
    }
Should I tell my terminate function to DestroyRenderer, DestroyWindow, DestroyTexture, etc. before exiting?
It's the same question as "should I free memory that I've allocated before quitting a program?". Yes, if there are no bugs in the SDL/X11/GL/etc. finalization code, everything will be freed anyway. But I see no reason why you wouldn't want to do it yourself.
Of course, if you crash rather than exit cleanly, there is a good chance some of that cleanup won't happen and, for example, the display won't be returned to the native desktop resolution.
I've personally had problems with an SDL_Texture that caused a memory leak while the program was running; the display of pictures simply stopped after the program had leaked about 2 GB of RAM, when it normally uses 37 MB.
SDL_DestroyTexture(texture);
Just calling this every time after I displayed a different picture with the renderer made the memory leak go away.
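Putting the advice from both answers together, a minimal cleanup sketch (assuming the window and renderer pointers from the tutorial code above plus a texture pointer; the function name is just an example): destroy objects in roughly the reverse order of creation, then call SDL_Quit.
void cleanup()
{
    // Destroy in reverse order of creation.
    if (texture)  SDL_DestroyTexture(texture);
    if (renderer) SDL_DestroyRenderer(renderer);
    if (window)   SDL_DestroyWindow(window);
    SDL_Quit();
}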

Creating parallel offscreen OpenGL contexts on Windows

I am trying to set up parallel multi-GPU offscreen rendering contexts. I am following the "OpenGL Insights" book, chapter 27, "Multi-GPU Rendering on NVIDIA Quadro". I also looked into the wglCreateAffinityDCNV docs but still can't pin it down.
My machine has 2 NVIDIA Quadro 4000 cards (no SLI), running on Windows 7 64-bit.
My workflow goes like this:
1. Create a default window context using GLFW.
2. Map the GPU devices.
3. Destroy the default GLFW context.
4. Create a new GL context for each one of the devices (currently trying only one).
5. Set up a boost thread for each context and make the context current in that thread.
6. Run the rendering procedures on each thread separately (no resource sharing).
Everything is created without errors and runs, but once I try to read pixels from an offscreen FBO I get a null pointer here:
GLubyte* ptr = (GLubyte*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
Also, glGetError returns "UNKNOWN ERROR".
I thought the multi-threading might be the problem, but the same setup gives an identical result when running on a single thread.
So I believe it is related to the context creation.
Here is how I do it:
////Creating default window with GLFW here .
.....
.....
Creating offscreen contexts:
PIXELFORMATDESCRIPTOR pfd =
{
    sizeof(PIXELFORMATDESCRIPTOR),
    1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER, // Flags
    PFD_TYPE_RGBA,      // The kind of framebuffer. RGBA or palette.
    24,                 // Colordepth of the framebuffer.
    0, 0, 0, 0, 0, 0,
    0,
    0,
    0,
    0, 0, 0, 0,
    24,                 // Number of bits for the depthbuffer
    8,                  // Number of bits for the stencilbuffer
    0,                  // Number of Aux buffers in the framebuffer.
    PFD_MAIN_PLANE,
    0,
    0, 0, 0
};
void glMultiContext::renderingContext::createGPUContext(GPUEnum gpuIndex)
{
    int pf;
    HGPUNV hGPU[MAX_GPU];
    HGPUNV GpuMask[MAX_GPU];
    UINT displayDeviceIdx;
    GPU_DEVICE gpuDevice;
    bool bDisplay, bPrimary;

    // Get a list of the first MAX_GPU GPUs in the system
    if ((gpuIndex < MAX_GPU) && wglEnumGpusNV(gpuIndex, &hGPU[gpuIndex]))
    {
        printf("Device# %d:\n", gpuIndex);

        // Now get the detailed information about this device:
        // how many displays it's attached to
        displayDeviceIdx = 0;
        if(wglEnumGpuDevicesNV(hGPU[gpuIndex], displayDeviceIdx, &gpuDevice))
        {
            bPrimary |= (gpuDevice.Flags & DISPLAY_DEVICE_PRIMARY_DEVICE) != 0;

            printf(" Display# %d:\n", displayDeviceIdx);
            printf(" Name: %s\n", gpuDevice.DeviceName);
            printf(" String: %s\n", gpuDevice.DeviceString);

            if(gpuDevice.Flags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP)
            {
                printf(" Attached to the desktop: LEFT=%d, RIGHT=%d, TOP=%d, BOTTOM=%d\n",
                    gpuDevice.rcVirtualScreen.left, gpuDevice.rcVirtualScreen.right,
                    gpuDevice.rcVirtualScreen.top, gpuDevice.rcVirtualScreen.bottom);
            }
            else
            {
                printf(" Not attached to the desktop\n");
            }

            // See if it's the primary GPU
            if(gpuDevice.Flags & DISPLAY_DEVICE_PRIMARY_DEVICE)
            {
                printf(" This is the PRIMARY Display Device\n");
            }
        }

        ///======================= CREATE a CONTEXT HERE
        GpuMask[0] = hGPU[gpuIndex];
        GpuMask[1] = NULL;

        _affDC = wglCreateAffinityDCNV(GpuMask);
        if(!_affDC)
        {
            printf("wglCreateAffinityDCNV failed");
        }
    }

    printf("GPU context created");
}
glMultiContext::renderingContext *
glMultiContext::createRenderingContext(GPUEnum gpuIndex)
{
    glMultiContext::renderingContext *rc;
    rc = new renderingContext(gpuIndex);

    _pixelFormat = ChoosePixelFormat(rc->_affDC, &pfd);
    if(_pixelFormat == 0)
    {
        printf("failed to choose pixel format");
        return false;
    }

    DescribePixelFormat(rc->_affDC, _pixelFormat, sizeof(pfd), &pfd);

    if(SetPixelFormat(rc->_affDC, _pixelFormat, &pfd) == FALSE)
    {
        printf("failed to set pixel format");
        return false;
    }

    rc->_affRC = wglCreateContext(rc->_affDC);
    if(rc->_affRC == 0)
    {
        printf("failed to create gl render context");
        return false;
    }

    return rc;
}
// Call at the end to make it current:
bool glMultiContext::makeCurrent(renderingContext *rc)
{
    if(!wglMakeCurrent(rc->_affDC, rc->_affRC))
    {
        printf("failed to make context current");
        return false;
    }
    return true;
}
//// init OpenGL objects and rendering here :
..........
............
As I said, I am getting no errors at any stage of device and context creation.
What am I doing wrong?
UPDATE:
Well, it seems I have figured out the bug. I call glfwTerminate() after I call wglMakeCurrent(), so it appears that glfwTerminate() also makes the new context no longer current. It is weird, though, as OpenGL commands keep getting executed. So it works in a single thread.
But now, if I spawn another thread using boost threads, I am getting the initial error. Here is my thread class:
GPUThread::GPUThread(void)
{
    _thread = NULL;
    _mustStop = false;
    _frame = 0;

    _rc = glMultiContext::getInstance().createRenderingContext(GPU1);
    assert(_rc);

    glfwTerminate(); // terminate the initial window and context

    if(!glMultiContext::getInstance().makeCurrent(_rc))
    {
        printf("failed to make current!!!");
    }

    // init engine here (GLEW was already initiated)
    engine = new Engine(800, 600, 1);
}

void GPUThread::Start()
{
    printf("threaded view setup ok");

    /// init thread here:
    _thread = new boost::thread(boost::ref(*this));
    _thread->join();
}

void GPUThread::Stop()
{
    // Signal the thread to stop (thread-safe)
    _mustStopMutex.lock();
    _mustStop = true;
    _mustStopMutex.unlock();

    // Wait for the thread to finish.
    if (_thread != NULL) _thread->join();
}

// Thread function
void GPUThread::operator () ()
{
    bool mustStop;

    do
    {
        // Display the next animation frame
        DisplayNextFrame();

        _mustStopMutex.lock();
        mustStop = _mustStop;
        _mustStopMutex.unlock();
    } while (mustStop == false);
}

void GPUThread::DisplayNextFrame()
{
    engine->Render(); // renders frame

    if(_frame == 101)
    {
        _mustStop = true;
    }
}

GPUThread::~GPUThread(void)
{
    delete _view;

    if(_rc != 0)
    {
        glMultiContext::getInstance().deleteRenderingContext(_rc);
        _rc = 0;
    }

    if(_thread != NULL) delete _thread;
}
Finally, I solved the issues myself. The first problem was that I called glfwTerminate after I had made the new device context current; that apparently made the new context no longer current as well.
The second problem was my "noobiness" with boost threads. I failed to initialize all the rendering-related objects in the custom thread, because I called the context and engine init procedures before starting the thread, as can be seen in the example above.
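A minimal sketch of the corrected order, reusing the class and member names from the code above (the details are assumptions, not a drop-in fix): the constructor only creates the affinity context, while everything GL-related (making the context current, creating the engine and its GL objects, rendering) happens inside the worker thread's own operator().
GPUThread::GPUThread(void)
    : _thread(NULL), _mustStop(false), _frame(0), _rc(NULL), engine(NULL)
{
    // Only create the affinity context here; do NOT make it current or
    // create any GL objects yet. That must happen on the worker thread.
    _rc = glMultiContext::getInstance().createRenderingContext(GPU1);
    assert(_rc);

    glfwTerminate(); // safe here: nothing has been made current yet
}

void GPUThread::Start()
{
    _thread = new boost::thread(boost::ref(*this));
    // Note: no join() here, otherwise Start() would block until rendering ends.
}

// Thread function: all GL work happens on this thread.
void GPUThread::operator () ()
{
    if(!glMultiContext::getInstance().makeCurrent(_rc))
    {
        printf("failed to make context current!");
        return;
    }

    engine = new Engine(800, 600, 1); // GL objects created on this thread

    bool mustStop = false;
    do
    {
        DisplayNextFrame();

        _mustStopMutex.lock();
        mustStop = _mustStop;
        _mustStopMutex.unlock();
    } while (!mustStop);
}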