I'm trying to learn SDL 2.0 and I've been following the Lazy Foo tutorials. The problem is that those tutorials keep everything in one file, so I tried to split some things up when starting a new game project. My problem is that when I separated my texture class from the rest, the program sometimes crashes, not always but pretty often.
I have a global window and a global renderer which I'm using in my class. I get a segfault at different places in the program, but always in one of the following functions.
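(For context, the window and renderer are shared across files roughly like this; a minimal sketch, where the file names and g_window are just placeholders for however the project is actually laid out:)
// globals.h -- declarations visible to every file that needs the window/renderer
#pragma once
#include <SDL.h>
extern SDL_Window* g_window;
extern SDL_Renderer* g_renderer;
// globals.cpp -- the single definition of those globals
#include "globals.h"
SDL_Window* g_window = NULL;
SDL_Renderer* g_renderer = NULL;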
bool myTexture::loadFromFile(string path) {
printf("In loadFromFile\n");
//Free preexisting texture
free();
//The final texture
SDL_Texture *l_newTexture = NULL;
//Load image at specified path
SDL_Surface *l_loadedSurface = IMG_Load(path.c_str());
if(l_loadedSurface == NULL) {
printf("Unable to load image %s! SDL_image Error: %s\n", path.c_str(), IMG_GetError());
} else {
//Color key image for transparency
//SDL_SetColorKey(l_loadedSurface, SDL_TRUE, SDL_MapRGB(l_loadedSurface->format, 0, 0xFF, 0xFF));
//Create texture from surface pixels
l_newTexture = SDL_CreateTextureFromSurface(g_renderer, l_loadedSurface);
if(l_newTexture == NULL) {
printf("Unable to create texture from %s! SDL Error: %s\n", path.c_str(), SDL_GetError());
} else {
//Get image dimensions
m_width = l_loadedSurface->w;
m_height = l_loadedSurface->h;
}
//Get rid of old loaded surface
SDL_FreeSurface(l_loadedSurface);
}
m_texture = l_newTexture;
//return success
printf("end from file \n");
return m_texture != NULL;
}
void myTexture::free() {
printf("In myTexture::free\n");
//Free texture if it exist
if(m_texture != NULL) {
cout << (m_texture != NULL) << endl;
SDL_DestroyTexture(m_texture);
printf("Destroyed m_texture\n");
m_texture = NULL;
m_width = 0;
m_height = 0;
}
printf("end free\n");
}
After reading up on SDL and other things, I understood that there might be some thread trying to deallocate something it isn't allowed to deallocate. However, I haven't threaded anything yet.
I managed to solve this on my own. It turned out that I never created a myTexture object, which was really idiotic. But I still don't understand how it managed to render sometimes... to me it doesn't make any sense at all. I never created the object, but I could still call its render function sometimes...
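For completeness, a minimal sketch of what the fix boils down to (the playerTexture name and image path are placeholders, not my actual code): give the class a constructor that zeroes its members, and actually construct the object before using it, since calling loadFromFile() or render() through an object that was never created is undefined behaviour and can appear to work by pure luck.
myTexture::myTexture()
: m_texture(NULL), m_width(0), m_height(0) { //start from a known, empty state
}
myTexture::~myTexture() {
    free(); //release the SDL texture, if any
}
//somewhere in the game setup -- the step that was missing:
myTexture *playerTexture = new myTexture();
if (!playerTexture->loadFromFile("player.png")) {
    printf("Failed to load player texture!\n");
}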
Related
I'm running this on macOS 10.12 using Xcode 8.3.3 with SDL2 installed via Homebrew as dylibs.
Below is some slightly modified sample code from Lazy Foo.
I just added a second texture gTexture2 and the function loadMedia2 to be able to reproduce the issue. The second time IMG_Load is executed, it crashes with the following message:
EXC_BAD_ACCESS (code=EXC_I386_GPFLT)
Searching for how to solve a "General Protection Fault" did not get me any further either; the crash seems to happen inside SDL. I probably misunderstand something here that leads to this issue and would really welcome any help.
The really confusing thing is that it does not always crash, only about 2 out of 3 times.
The crash seems to happen inside SDL_AllocFormat_REAL():
Here is the code sample.
/*This source code copyrighted by Lazy Foo' Productions (2004-2015)
and may not be redistributed without written permission.*/
//Using SDL, SDL_image, standard IO, and strings
#include <SDL.h>
#include <SDL_image.h>
#include <stdio.h>
#include <string>
//Screen dimension constants
const int SCREEN_WIDTH = 640;
const int SCREEN_HEIGHT = 480;
//Starts up SDL and creates window
bool init();
//Loads media
bool loadMedia();
//Frees media and shuts down SDL
void close();
//Loads individual image as texture
SDL_Texture* loadTexture( std::string path );
//The window we'll be rendering to
SDL_Window* gWindow = NULL;
//The window renderer
SDL_Renderer* gRenderer = NULL;
//Current displayed texture
SDL_Texture* gTexture = NULL;
SDL_Texture* gTexture2 = NULL;
bool init()
{
//Initialization flag
bool success = true;
//Initialize SDL
if( SDL_Init( SDL_INIT_VIDEO ) < 0 )
{
printf( "SDL could not initialize! SDL Error: %s\n", SDL_GetError() );
success = false;
}
else
{
//Set texture filtering to linear
if( !SDL_SetHint( SDL_HINT_RENDER_SCALE_QUALITY, "1" ) )
{
printf( "Warning: Linear texture filtering not enabled!" );
}
//Create window
gWindow = SDL_CreateWindow( "SDL Tutorial", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_SHOWN );
if( gWindow == NULL )
{
printf( "Window could not be created! SDL Error: %s\n", SDL_GetError() );
success = false;
}
else
{
//Create renderer for window
gRenderer = SDL_CreateRenderer( gWindow, -1, SDL_RENDERER_ACCELERATED );
if( gRenderer == NULL )
{
printf( "Renderer could not be created! SDL Error: %s\n", SDL_GetError() );
success = false;
}
else
{
//Initialize renderer color
SDL_SetRenderDrawColor( gRenderer, 0xFF, 0xFF, 0xFF, 0xFF );
//Initialize PNG loading
int imgFlags = IMG_INIT_PNG;
if( !( IMG_Init( imgFlags ) & imgFlags ) )
{
printf( "SDL_image could not initialize! SDL_image Error: %s\n", IMG_GetError() );
success = false;
}
}
}
}
return success;
}
bool loadMedia()
{
//Loading success flag
bool success = true;
//Load PNG texture
gTexture = loadTexture( "../assets/player.png" );
if( gTexture == NULL )
{
printf( "Failed to load texture image!\n" );
success = false;
}
return success;
}
bool loadMedia2()
{
//Loading success flag
bool success = true;
//Load PNG texture
gTexture2 = loadTexture( "../assets/scene_main/background.png" );
if( gTexture2 == NULL )
{
printf( "Failed to load texture image!\n" );
success = false;
}
return success;
}
void close()
{
//Free loaded image
SDL_DestroyTexture( gTexture );
SDL_DestroyTexture( gTexture2 );
gTexture = NULL;
gTexture2 = NULL;
//Destroy window
SDL_DestroyRenderer( gRenderer );
SDL_DestroyWindow( gWindow );
gWindow = NULL;
gRenderer = NULL;
//Quit SDL subsystems
IMG_Quit();
SDL_Quit();
}
SDL_Texture* loadTexture( std::string path )
{
//The final texture
SDL_Texture* newTexture = NULL;
//Load image at specified path
SDL_Surface* loadedSurface = IMG_Load( path.c_str() );
if( loadedSurface == NULL )
{
printf( "Unable to load image %s! SDL_image Error: %s\n", path.c_str(), IMG_GetError() );
}
else
{
//Create texture from surface pixels
newTexture = SDL_CreateTextureFromSurface( gRenderer, loadedSurface );
if( newTexture == NULL )
{
printf( "Unable to create texture from %s! SDL Error: %s\n", path.c_str(), SDL_GetError() );
}
//Get rid of old loaded surface
SDL_FreeSurface( loadedSurface );
}
return newTexture;
}
int main( int argc, char* args[] )
{
//Start up SDL and create window
if( !init() )
{
printf( "Failed to initialize!\n" );
}
else
{
//Load media
if( !loadMedia() || !loadMedia2() )
{
printf( "Failed to load media!\n" );
}
else
{
//Main loop flag
bool quit = false;
//Event handler
SDL_Event e;
//While application is running
while( !quit )
{
//Handle events on queue
while( SDL_PollEvent( &e ) != 0 )
{
//User requests quit
if( e.type == SDL_QUIT )
{
quit = true;
}
}
//Clear screen
SDL_RenderClear( gRenderer );
//Render texture to screen
SDL_RenderCopy( gRenderer, gTexture, NULL, NULL );
//Update screen
SDL_RenderPresent( gRenderer );
}
}
}
//Free resources and close SDL
close();
return 0;
}
Little Update:
I've tried it on Windows, where it runs completely fine, so I guess the issue is related to macOS.
I already tried to reinstall all libraries.
I'm using C++14.
The solution
Well, it's only half a solution; it's more of a workaround.
Thanks to @Sahib Yar, who pointed out that putting the images in the same directory resolves the issue.
But I think this is really weird; you should be able to load resources from different directories, or at least from a subdirectory.
The final question
Now I would really love an explanation of why we can't load images from multiple directories using SDL on macOS. Is it just a bug, a known issue, or did I make a big mistake?
It seems that you are not destroying gTexture2 once it is no longer needed:
SDL_DestroyTexture( gTexture );
SDL_DestroyTexture( gTexture2 );
gTexture = NULL;
gTexture2 = NULL;
In this Lazy Foo tutorial, it is mentioned that
In our clean up function, we have to remember to deallocate our
textures using SDL_DestroyTexture.
Edit 1:
Try to put all your images in the same directory.
Edit 2:
It is not related to the directory on macOS. From this tutorial, it seems the compiler is doing some optimization with std::string path, since std::string is mutable.
Try clearing the std::string path object at the end of the function to release the memory it reserves.
Add this line:
std::string().swap(path);
Your issue is a dangling pointer. EXC_BAD_ACCESS is the CPU complaining that you are addressing non-existent memory, or memory outside of your access rights. The cause is a lack of retention of an object, which leads to early deallocation; the memory is then overwritten. At that point (which may be delayed), the pointer points to garbage whose dereference causes an EXC_BAD_ACCESS to be thrown.
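A minimal illustration of that failure mode (not taken from your project; it assumes an already-created renderer and a loaded surface):
SDL_Texture *tex = SDL_CreateTextureFromSurface(renderer, surface);
SDL_Texture *stale = tex;                       //a second copy of the pointer is kept somewhere
SDL_DestroyTexture(tex);                        //the texture is deallocated early
tex = NULL;
SDL_RenderCopy(renderer, stale, NULL, NULL);    //dereferences freed memory: may "work", may EXC_BAD_ACCESS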
Edit 3:
It is not related to SDL2. After googling, I found that in Xcode everything is eventually packed into a single directory. I have found multiple questions regarding this. It may be related to folder references and groups; my guess is that it could be something to do with blue folders. If this is the case, you can consult this answer and adapt it for SDL.
We want to create an SDL surface by loading an image with SDL_image and, if the dimensions exceed a limit, resize the surface.
The reason we need to do this is that on Raspbian, SDL throws an error when creating a texture from the surface ('Texture dimensions are limited to 2048x2048'). While that's a very large image, we don't want users to be concerned about image size; we want to resize it for them. Although we haven't encountered this limit on Windows, we're trying to develop the solution on Windows and are having issues resizing the texture.
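(As an aside, rather than hard-coding a limit such as 2048, the renderer's actual maximum can be queried at runtime with SDL_GetRendererInfo; this helper is illustrative and not part of our program:)
#include <SDL.h>
#include <stdio.h>
// Returns the renderer's maximum texture width (0 on failure); max_texture_height is available too.
static int max_texture_width(SDL_Renderer * pRenderer) {
    SDL_RendererInfo info;
    if (SDL_GetRendererInfo(pRenderer, &info) != 0) {
        printf("SDL_GetRendererInfo failed: %s\n", SDL_GetError());
        return 0;
    }
    return info.max_texture_width;
}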
Looking for a solution, there have been similar questions:
2008 not SDL2 custom blitting
2010 use SDL_gfx
2008 can't be done use SDL_gfx, 2015 use SDL_BlitScaled, 2015 use SDL_RenderCopyEx
Is a custom blitter or SDL_gfx necessary with current SDL2 (those answers pre-date SDL2's 2013 release)? SDL_RenderCopyEx doesn't help, as you still need to generate the texture, which is where our problem occurs.
So we tried some of the available blitting functions, like SDL_BlitScaled. Below is a simple program that renders a 2500x2500 PNG with no scaling:
#include <SDL.h>
#include <SDL_image.h>
#include <sstream>
#include <string>
SDL_Texture * get_texture(
SDL_Renderer * pRenderer,
std::string image_filename) {
SDL_Texture * result = NULL;
SDL_Surface * pSurface = IMG_Load(image_filename.c_str());
if (pSurface == NULL) {
printf("Error image load: %s\n", IMG_GetError());
}
else {
SDL_Texture * pTexture = SDL_CreateTextureFromSurface(pRenderer, pSurface);
if (pTexture == NULL) {
printf("Error image load: %s\n", SDL_GetError());
}
else {
SDL_SetTextureBlendMode(
pTexture,
SDL_BLENDMODE_BLEND);
result = pTexture;
}
SDL_FreeSurface(pSurface);
pSurface = NULL;
}
return result;
}
int main(int argc, char* args[]) {
SDL_Window * pWindow = NULL;
SDL_Renderer * pRenderer = NULL;
// set up
SDL_Init(SDL_INIT_VIDEO);
SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "1");
SDL_Rect screenDimensions;
screenDimensions.x = 0;
screenDimensions.y = 0;
screenDimensions.w = 640;
screenDimensions.h = 480;
pWindow = SDL_CreateWindow("Resize Test",
SDL_WINDOWPOS_UNDEFINED,
SDL_WINDOWPOS_UNDEFINED,
screenDimensions.w,
screenDimensions.h,
SDL_WINDOW_SHOWN);
pRenderer = SDL_CreateRenderer(pWindow,
-1,
SDL_RENDERER_ACCELERATED);
IMG_Init(IMG_INIT_PNG);
// render
SDL_SetRenderDrawColor(
pRenderer,
0,
0,
0,
0);
SDL_RenderClear(pRenderer);
SDL_Texture * pTexture = get_texture(
pRenderer,
"2500x2500.png");
if (pTexture != NULL) {
SDL_RenderCopy(
pRenderer,
pTexture,
NULL,
&screenDimensions);
SDL_DestroyTexture(pTexture);
pTexture = NULL;
}
SDL_RenderPresent(pRenderer);
// wait for quit
bool quit = false;
while (!quit)
{
// poll input for quit
SDL_Event inputEvent;
while (SDL_PollEvent(&inputEvent) != 0) {
if ((inputEvent.type == SDL_KEYDOWN) &&
(inputEvent.key.keysym.sym == SDLK_q)) { // quit on the 'q' key (keycode 113)
quit = true;
}
}
}
IMG_Quit();
SDL_DestroyRenderer(pRenderer);
pRenderer = NULL;
SDL_DestroyWindow(pWindow);
pWindow = NULL;
return 0;
}
Changing the get_texture function so it identifies a limit and tries to create a new surface:
SDL_Texture * get_texture(
SDL_Renderer * pRenderer,
std::string image_filename) {
SDL_Texture * result = NULL;
SDL_Surface * pSurface = IMG_Load(image_filename.c_str());
if (pSurface == NULL) {
printf("Error image load: %s\n", IMG_GetError());
}
else {
const int limit = 1024;
int width = pSurface->w;
int height = pSurface->h;
if ((width > limit) ||
(height > limit)) {
SDL_Rect sourceDimensions;
sourceDimensions.x = 0;
sourceDimensions.y = 0;
sourceDimensions.w = width;
sourceDimensions.h = height;
float scale = (float)limit / (float)width;
float scaleH = (float)limit / (float)height;
if (scaleH < scale) {
scale = scaleH;
}
SDL_Rect targetDimensions;
targetDimensions.x = 0;
targetDimensions.y = 0;
targetDimensions.w = (int)(width * scale);
targetDimensions.h = (int)(height * scale);
SDL_Surface *pScaleSurface = SDL_CreateRGBSurface(
pSurface->flags,
targetDimensions.w,
targetDimensions.h,
pSurface->format->BitsPerPixel,
pSurface->format->Rmask,
pSurface->format->Gmask,
pSurface->format->Bmask,
pSurface->format->Amask);
if (SDL_BlitScaled(pSurface, NULL, pScaleSurface, &targetDimensions) < 0) {
printf("Error did not scale surface: %s\n", SDL_GetError());
SDL_FreeSurface(pScaleSurface);
pScaleSurface = NULL;
}
else {
SDL_FreeSurface(pSurface);
pSurface = pScaleSurface;
width = pSurface->w;
height = pSurface->h;
}
}
SDL_Texture * pTexture = SDL_CreateTextureFromSurface(pRenderer, pSurface);
if (pTexture == NULL) {
printf("Error image load: %s\n", SDL_GetError());
}
else {
SDL_SetTextureBlendMode(
pTexture,
SDL_BLENDMODE_BLEND);
result = pTexture;
}
SDL_FreeSurface(pSurface);
pSurface = NULL;
}
return result;
}
SDL_BlitScaled fails with the error 'Blit combination not supported'; other variations produce a similar error:
SDL_BlitScaled(pSurface, NULL, pScaleSurface, NULL)
SDL_BlitScaled(pSurface, &sourceDimensions, pScaleSurface, &targetDimensions)
SDL_LowerBlitScaled(pSurface, &sourceDimensions, pScaleSurface, &targetDimensions) // from the wiki this is the call SDL_BlitScaled makes internally
Then we tried a non-scaled blit... which didn't throw an error but just showed white (not the clear colour or a colour in the image).
SDL_BlitSurface(pSurface, &targetDimensions, pScaleSurface, &targetDimensions)
With that blitting function not working, we then tried the same image as a bitmap (just exporting the .png as .bmp), still loading the file with SDL_image, and both functions work, with SDL_BlitScaled scaling as expected 😐
Not sure what's going wrong here (we expect and need support for major image formats like .png) or whether this is the recommended approach; any help appreciated!
TL;DR The comment from @kelter pointed me in the right direction and another Stack Overflow question gave me a solution: it works if you first blit to a 32bpp surface and then BlitScaled to another 32bpp surface. That worked for 8 and 24 bit depth PNGs; 32 bit PNGs were invisible, and yet another Stack Overflow question suggested first filling the surface before blitting.
An updated get_texture function:
SDL_Texture * get_texture(
SDL_Renderer * pRenderer,
std::string image_filename) {
SDL_Texture * result = NULL;
SDL_Surface * pSurface = IMG_Load(image_filename.c_str());
if (pSurface == NULL) {
printf("Error image load: %s\n", IMG_GetError());
}
else {
const int limit = 1024;
int width = pSurface->w;
int height = pSurface->h;
if ((width > limit) ||
(height > limit)) {
SDL_Rect sourceDimensions;
sourceDimensions.x = 0;
sourceDimensions.y = 0;
sourceDimensions.w = width;
sourceDimensions.h = height;
float scale = (float)limit / (float)width;
float scaleH = (float)limit / (float)height;
if (scaleH < scale) {
scale = scaleH;
}
SDL_Rect targetDimensions;
targetDimensions.x = 0;
targetDimensions.y = 0;
targetDimensions.w = (int)(width * scale);
targetDimensions.h = (int)(height * scale);
// create a 32 bits per pixel surface to Blit the image to first, before BlitScaled
// https://stackoverflow.com/questions/33850453/sdl2-blit-scaled-from-a-palettized-8bpp-surface-gives-error-blit-combination/33944312
SDL_Surface *p32BPPSurface = SDL_CreateRGBSurface(
pSurface->flags,
sourceDimensions.w,
sourceDimensions.h,
32,
pSurface->format->Rmask,
pSurface->format->Gmask,
pSurface->format->Bmask,
pSurface->format->Amask);
if (SDL_BlitSurface(pSurface, NULL, p32BPPSurface, NULL) < 0) {
printf("Error did not blit surface: %s\n", SDL_GetError());
}
else {
// create another 32 bits per pixel surface at the desired scale
SDL_Surface *pScaleSurface = SDL_CreateRGBSurface(
p32BPPSurface->flags,
targetDimensions.w,
targetDimensions.h,
p32BPPSurface->format->BitsPerPixel,
p32BPPSurface->format->Rmask,
p32BPPSurface->format->Gmask,
p32BPPSurface->format->Bmask,
p32BPPSurface->format->Amask);
// 32 bit per pixel surfaces (loaded from the original file) won't scale down with BlitScaled, suggestion to first fill the surface
// 8 and 24 bit depth pngs did not require this
// https://stackoverflow.com/questions/20587999/sdl-blitscaled-doesnt-work
SDL_FillRect(pScaleSurface, &targetDimensions, SDL_MapRGBA(pScaleSurface->format, 255, 0, 0, 255));
if (SDL_BlitScaled(p32BPPSurface, NULL, pScaleSurface, NULL) < 0) {
printf("Error did not scale surface: %s\n", SDL_GetError());
SDL_FreeSurface(pScaleSurface);
pScaleSurface = NULL;
}
else {
SDL_FreeSurface(pSurface);
pSurface = pScaleSurface;
width = pSurface->w;
height = pSurface->h;
}
}
SDL_FreeSurface(p32BPPSurface);
p32BPPSurface = NULL;
}
SDL_Texture * pTexture = SDL_CreateTextureFromSurface(pRenderer, pSurface);
if (pTexture == NULL) {
printf("Error image load: %s\n", SDL_GetError());
}
else {
SDL_SetTextureBlendMode(
pTexture,
SDL_BLENDMODE_BLEND);
result = pTexture;
}
SDL_FreeSurface(pSurface);
pSurface = NULL;
}
return result;
}
The comment from @kelter had me look more closely at the surface pixel formats: bitmaps were working at 24bpp, while PNGs were being loaded at 8bpp and not working. I tried changing the target surface to 24 or 32 bpp, but that didn't help. We had generated the PNG with auto-detected bit depth; setting it to 8 or 24 and performing the BlitScaled on a surface with the same bits-per-pixel worked, although it didn't work for 32. Googling the blit conversion error led to the question and answer from @Petruza.
Update: I was a bit quick writing up this answer; the original solution handled BMPs and 8 and 24 bit PNGs, but 32 bit PNGs weren't rendering. @Retired Ninja's answer to another question about BlitScaled suggested filling the surface before calling the function, and that sorts it. There's another question related to setting alpha on new surfaces that may be relevant (particularly if you need transparency), but filling with a solid colour is enough for me... for now.
Alrighty, from what I have researched, it appears that the Invalid Renderer error applies to a variety of cases, and I'm lost as to why my code is producing it.
I have narrowed it down to a specific area of code:
//If an existing texture is there, frees it and sets it to NULL, along with iWidth & iHeight = 0
freetexture();
//final image texture
SDL_Texture* niTexture = NULL;
//Loads image at specified path
SDL_Surface* loadedSurface = IMG_Load(path.c_str());
if (loadedSurface == NULL)
{
printf("Unable to load image %s! SDL_image Error: %s\n", path.c_str(), IMG_GetError());
}
else
{
printf("SpriteSheet :: Loaded\n");
Init mkey;
//Color key DOUBLE CHECK IF ERROR CHANGE TO ORIGINAL 0, 0xFF, 0xFF
SDL_SetColorKey(loadedSurface, SDL_TRUE, SDL_MapRGB(loadedSurface->format, 50, 96, 166));
//create texture from surface pixels
niTexture = SDL_CreateTextureFromSurface(mkey.Renderer, loadedSurface);
if (niTexture == NULL)
{
printf("Unable to create texture from %s! SDL Error: %s\n", path.c_str(), SDL_GetError());
}
Specifically, the line
niTexture = SDL_CreateTextureFromSurface(mkey.Renderer, loadedSurface);
is causing SDL to return an Invalid Renderer error. In my Init class, the renderer initializes perfectly; only when I attempt to use it to load an image do I get the Invalid Renderer error. Any help on how to fix this error is appreciated.
Edit:
Here's some code from the Init class relating to the renderer:
printf("Linear Texture Filtering :: Enabled\n");
//Create Window
Window = SDL_CreateWindow("Test", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, sw, sh, SDL_WINDOW_SHOWN);
if (Window == NULL)
{
printf("Window failed to be created\n");
SDLSuccess = false;
}
else
{
printf("Window :: Created\n");
//Create VSYNC'd renderer for the window
Renderer = SDL_CreateRenderer(Window, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
if (Renderer == NULL)
{
printf("Renderer failed to be created\n");
SDLSuccess = false;
}
Hope this helps with finding the issue.
It looks like your Renderer isn't initialized, unless the code you posted is in the constructor of your Init class.
Do you already have an instance of Init somewhere in your code that you mean to be referencing in your texture method? Check the value of your Renderer before you try to use it, something like:
if (mkey.Renderer) {
niTexture = SDL_CreateTextureFromSurface(mkey.Renderer, loadedSurface);
if (niTexture == NULL)
{
printf("Unable to create texture from %s! SDL Error: %s\n", path.c_str(), SDL_GetError());
}
} else {
printf("Renderer is not initialized");
}
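If that code is not inside Init's constructor, the usual fix is to stop constructing a fresh Init inside the loading routine (its Renderer member will never have been created there) and instead pass in the renderer that was already created. A rough sketch; the function name and signature are illustrative, not from your code:
#include <SDL.h>
#include <SDL_image.h>
#include <stdio.h>
#include <string>
//Load a sprite sheet using the renderer that was created in the Init class
SDL_Texture* loadSpriteSheet(SDL_Renderer* renderer, const std::string& path)
{
    SDL_Surface* loadedSurface = IMG_Load(path.c_str());
    if (loadedSurface == NULL)
    {
        printf("Unable to load image %s! SDL_image Error: %s\n", path.c_str(), IMG_GetError());
        return NULL;
    }
    //Same color key as in the question
    SDL_SetColorKey(loadedSurface, SDL_TRUE, SDL_MapRGB(loadedSurface->format, 50, 96, 166));
    SDL_Texture* texture = SDL_CreateTextureFromSurface(renderer, loadedSurface);
    if (texture == NULL)
    {
        printf("Unable to create texture from %s! SDL Error: %s\n", path.c_str(), SDL_GetError());
    }
    SDL_FreeSurface(loadedSurface);
    return texture;
}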
I have the following piece of code where, among a lot of other stuff (which I didn't include in this topic), I'm trying to start up SDL, create a renderer and load some sprites.
Everything compiles just fine, but when I run my application a break is triggered saying: Unhandled exception at 0x681252D5 (SDL.dll) in Carribean World SDL.exe: 0xC0000005: Access violation reading location 0x16161804
The break occurs at the point where I use the SDL_ConvertSurface() function.
Can anyone help me out? I can't see what's wrong.
Declarations:
SDL_Texture* background = NULL;
SDL_Surface* tmp = NULL;
SDL_Surface* surface = NULL;
SDL_Window *window = SDL_CreateWindow("Carribean World",
SDL_WINDOWPOS_UNDEFINED,
SDL_WINDOWPOS_UNDEFINED,
1360, 768,
SDL_WINDOW_RESIZABLE);
SDL_Surface* screen = SDL_GetWindowSurface(window);
SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, 0);
SDL_PixelFormat* fmt = screen->format;
IN MAIN:
Initialize all SDL subsystems
if (SDL_Init(SDL_INIT_EVERYTHING) == -1)
{
return 0;
}
Load images to surfaces
if ((tmp = IMG_Load("images/water.jpg")) == NULL)
{
cout << "SDL_SetVideoMode() Failed: " << SDL_GetError() << endl;
return 0;
}
Right here, a break is caused:
if ((surface = SDL_ConvertSurface(tmp, fmt, 0)) == NULL)
{
cout << "SDL_ConvertSurface() Failed: " << SDL_GetError() << endl;
}
background = SDL_CreateTextureFromSurface(renderer, tmp);
You haven't checked the return value of SDL_GetWindowSurface. But anyway, the SDL documentation for this function says 'You may not combine this with 3D or the rendering API on this window.' So either you use the SDL_Renderer API exclusively, or you use SDL_BlitSurface and the like and afterwards call SDL_UpdateWindowSurface, but you can't use both.
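If you go the renderer-only route, one option is to drop SDL_GetWindowSurface entirely and convert the loaded surface to a concrete format before creating the texture. A rough sketch reusing the variables from the question (the pixel format choice is just an example):
//Renderer-only variant: no SDL_GetWindowSurface, no dependence on screen->format
if ((tmp = IMG_Load("images/water.jpg")) == NULL)
{
    cout << "IMG_Load() Failed: " << IMG_GetError() << endl;
    return 0;
}
if ((surface = SDL_ConvertSurfaceFormat(tmp, SDL_PIXELFORMAT_RGBA8888, 0)) == NULL)
{
    cout << "SDL_ConvertSurfaceFormat() Failed: " << SDL_GetError() << endl;
    return 0;
}
background = SDL_CreateTextureFromSurface(renderer, surface);
SDL_FreeSurface(tmp);
SDL_FreeSurface(surface);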
I am trying to set up parallel multi-GPU offscreen rendering contexts. I am following the "OpenGL Insights" book, chapter 27, "Multi-GPU Rendering on NVIDIA Quadro". I also looked into the wglCreateAffinityDCNV docs but still can't pin it down.
My machine has 2 NVIDIA Quadro 4000 cards (no SLI), running on Windows 7 64-bit.
My workflow goes like this:
Create default window context using GLFW.
Map the GPU devices.
Destroy the default GLFW context.
Create a new GL context for each of the devices (currently trying only one).
Set up a boost thread for each context and make the context current in that thread.
Run rendering procedures on each thread separately (no resource sharing).
Everything is created without errors and runs, but once I try to read pixels from an offscreen FBO I get a null pointer here:
GLubyte* ptr = (GLubyte*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
Also, the GL error check returns "UNKNOWN ERROR".
I thought maybe the multi-threading was the problem, but the same setup gives an identical result when running on a single thread.
So I believe it is related to context creation.
Here is how I do it:
////Creating default window with GLFW here .
.....
.....
Creating offscreen contexts:
PIXELFORMATDESCRIPTOR pfd =
{
sizeof(PIXELFORMATDESCRIPTOR),
1,
PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER, //Flags
PFD_TYPE_RGBA, //The kind of framebuffer. RGBA or palette.
24, //Colordepth of the framebuffer.
0, 0, 0, 0, 0, 0,
0,
0,
0,
0, 0, 0, 0,
24, //Number of bits for the depthbuffer
8, //Number of bits for the stencilbuffer
0, //Number of Aux buffers in the framebuffer.
PFD_MAIN_PLANE,
0,
0, 0, 0
};
void glMultiContext::renderingContext::createGPUContext(GPUEnum gpuIndex){
int pf;
HGPUNV hGPU[MAX_GPU];
HGPUNV GpuMask[MAX_GPU];
UINT displayDeviceIdx;
GPU_DEVICE gpuDevice;
bool bDisplay, bPrimary = false; // initialized so the |= below starts from a known value
// Get a list of the first MAX_GPU GPUs in the system
if ((gpuIndex < MAX_GPU) && wglEnumGpusNV(gpuIndex, &hGPU[gpuIndex])) {
printf("Device# %d:\n", gpuIndex);
// Now get the detailed information about this device:
// how many displays it's attached to
displayDeviceIdx = 0;
if(wglEnumGpuDevicesNV(hGPU[gpuIndex], displayDeviceIdx, &gpuDevice))
{
bPrimary |= (gpuDevice.Flags & DISPLAY_DEVICE_PRIMARY_DEVICE) != 0;
printf(" Display# %d:\n", displayDeviceIdx);
printf(" Name: %s\n", gpuDevice.DeviceName);
printf(" String: %s\n", gpuDevice.DeviceString);
if(gpuDevice.Flags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP)
{
printf(" Attached to the desktop: LEFT=%d, RIGHT=%d, TOP=%d, BOTTOM=%d\n",
gpuDevice.rcVirtualScreen.left, gpuDevice.rcVirtualScreen.right, gpuDevice.rcVirtualScreen.top, gpuDevice.rcVirtualScreen.bottom);
}
else
{
printf(" Not attached to the desktop\n");
}
// See if it's the primary GPU
if(gpuDevice.Flags & DISPLAY_DEVICE_PRIMARY_DEVICE)
{
printf(" This is the PRIMARY Display Device\n");
}
}
///======================= CREATE a CONTEXT HERE
GpuMask[0] = hGPU[gpuIndex];
GpuMask[1] = NULL;
_affDC = wglCreateAffinityDCNV(GpuMask);
if(!_affDC)
{
printf( "wglCreateAffinityDCNV failed");
}
}
printf("GPU context created");
}
glMultiContext::renderingContext *
glMultiContext::createRenderingContext(GPUEnum gpuIndex)
{
glMultiContext::renderingContext *rc;
rc = new renderingContext(gpuIndex);
_pixelFormat = ChoosePixelFormat(rc->_affDC, &pfd);
if(_pixelFormat == 0)
{
printf("failed to choose pixel format");
return false;
}
DescribePixelFormat(rc->_affDC, _pixelFormat, sizeof(pfd), &pfd);
if(SetPixelFormat(rc->_affDC, _pixelFormat, &pfd) == FALSE)
{
printf("failed to set pixel format");
return false;
}
rc->_affRC = wglCreateContext(rc->_affDC);
if(rc->_affRC == 0)
{
printf("failed to create gl render context");
return false;
}
return rc;
}
//Call at the end to make it current :
bool glMultiContext::makeCurrent(renderingContext *rc)
{
if(!wglMakeCurrent(rc->_affDC, rc->_affRC))
{
printf("failed to make context current");
return false;
}
return true;
}
//// init OpenGL objects and rendering here :
..........
............
As I said, I am getting no errors at any stage of device and context creation.
What am I doing wrong?
UPDATE:
Well, it seems like I figured out the bug. I call glfwTerminate() after calling wglMakeCurrent(), so it seems that the glfwTerminate() call also makes the new context "uncurrent". Though it is weird, as OpenGL commands keep getting executed. So it works in a single thread.
But now, if I spawn another thread using boost threads, I get the initial error. Here is my thread class:
GPUThread::GPUThread(void)
{
_thread =NULL;
_mustStop=false;
_frame=0;
_rc =glMultiContext::getInstance().createRenderingContext(GPU1);
assert(_rc);
glfwTerminate(); //terminate the initial window and context
if(!glMultiContext::getInstance().makeCurrent(_rc)){
printf("failed to make current!!!");
}
// init engine here (GLEW was already initiated)
engine = new Engine(800,600,1);
}
void GPUThread::Start(){
printf("threaded view setup ok");
///init thread here :
_thread=new boost::thread(boost::ref(*this));
_thread->join();
}
void GPUThread::Stop(){
// Signal the thread to stop (thread-safe)
_mustStopMutex.lock();
_mustStop=true;
_mustStopMutex.unlock();
// Wait for the thread to finish.
if (_thread!=NULL) _thread->join();
}
// Thread function
void GPUThread::operator () ()
{
bool mustStop;
do
{
// Display the next animation frame
DisplayNextFrame();
_mustStopMutex.lock();
mustStop=_mustStop;
_mustStopMutex.unlock();
} while (mustStop==false);
}
void GPUThread::DisplayNextFrame()
{
engine->Render(); //renders frame
if(_frame == 101){
_mustStop=true;
}
}
GPUThread::~GPUThread(void)
{
delete _view;
if(_rc != 0)
{
glMultiContext::getInstance().deleteRenderingContext(_rc);
_rc = 0;
}
if(_thread!=NULL)delete _thread;
}
Finally, I solved the issues myself. The first problem was that I called glfwTerminate after I made another device context current; that probably made the new context non-current too.
The second problem was my "noobiness" with boost threads. I failed to init all the rendering-related objects in the custom thread because I called the rendering-context init procedures before starting the thread, as can be seen in the example above.
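Roughly, the corrected thread function ends up looking like this (a sketch based on the classes above; the Engine arguments are just the values used earlier): the affinity context is made current, and the GL-dependent objects are created, inside the thread that will actually use them.
void GPUThread::operator () ()
{
    //Make this thread's affinity context current before touching any GL state
    if(!glMultiContext::getInstance().makeCurrent(_rc)) {
        printf("failed to make context current in worker thread");
        return;
    }
    //Create rendering objects here, not in the constructor that ran on the main thread
    engine = new Engine(800,600,1);
    bool mustStop;
    do
    {
        DisplayNextFrame();
        _mustStopMutex.lock();
        mustStop=_mustStop;
        _mustStopMutex.unlock();
    } while (mustStop==false);
}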