Reliably creating an OpenGL context? - c++

Summary:
An OpenGL context is created successfully on the development computer, but when trying to distribute the application, the screen only shows black. What kind of issues need to be considered when distributing an OpenGL application?
Details:
I am using SDL2 to create an OpenGL 3.1 context. The context has to be at least version 3.1 for the application to work.
I have not thoroughly tested the issue, so I do not have information such as the graphics cards in use. However, I am more interested in the general question asked in the summary about what needs to be considered when distributing an OpenGL application.
Here is the context creation code.
// CREATE SDL
U32 flags = 0;
flags |= SDL_INIT_VIDEO;
flags |= SDL_INIT_EVENTS;
if(!SDL_WasInit(0)) // Make sure SDL is initialized.
    SDL_Init(0);
CHECK(!SDL_InitSubSystem(flags));
// SET OPENGL ATTRIBUTES
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, config.glVersionMajor);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, config.glVersionMinor);
if(config.glCoreProfile)
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
else
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_COMPATIBILITY);
//SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, config.glDepthBuffer);
SDL_GL_SetSwapInterval(0); // Note: has no effect until a GL context is current.
// CREATE WINDOW
flags = SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN;
if(config.fullscreen)
    flags = flags | SDL_WINDOW_FULLSCREEN_DESKTOP;
else if(config.maximized)
    flags = flags | SDL_WINDOW_MAXIMIZED;
if(config.resizable)
    flags = flags | SDL_WINDOW_RESIZABLE;
mainWindow = SDL_CreateWindow(config.programName, SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                              config.windowWidth, config.windowHeight, flags);
CHECK(mainWindow != NULL);
SDL_GetWindowSize(mainWindow, (int*)&windowWidth, (int*)&windowHeight);
// CREATE OPENGL CONTEXT
mainContext = SDL_GL_CreateContext(mainWindow);
CHECK(mainContext != NULL);
// INIT GLEW
#ifdef _WIN32
CHECK(GLEW_OK == glewInit());
#endif
glEnable(GL_DEPTH_TEST);
glViewport(0, 0, windowWidth, windowHeight);
glClearColor(0, 0, 0, 1);
//glEnable(GL_PRIMITIVE_RESTART);
glEnable(GL_CULL_FACE);
//glPrimitiveRestartIndex(0xFFFFFFFF);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
TTF_Init();

Make sure you know what your application depends on, and demand it from the platform. Saying "core profile" means little in my experience. It is better to query each and every extension your application needs and shut the application down (gracefully and kindly, in the eyes of the user) if something is missing. And extensions are not everything: check the maximum sizes of all buffers, too. Of that I have real-life experience.
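A minimal sketch of that kind of start-up check, assuming GLEW and a current context (the version, extensions and limits below are placeholders; substitute whatever your renderer actually requires):
#include <GL/glew.h>
#include <string>

// Hypothetical start-up check, run right after context creation and glewInit().
// Returns false and fills 'missing' if a requirement is not met, so the caller
// can show a friendly message and shut down instead of rendering garbage.
bool checkRequirements(std::string& missing)
{
    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);   // only answerable on 3.0+ contexts
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    if (major < 3 || (major == 3 && minor < 1))
        missing += "OpenGL 3.1\n";

    if (!GLEW_ARB_uniform_buffer_object)       // example extension
        missing += "ARB_uniform_buffer_object\n";
    if (!GLEW_ARB_vertex_array_object)         // example extension
        missing += "ARB_vertex_array_object\n";

    GLint maxTexSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);
    if (maxTexSize < 4096)                     // example: largest texture you ship
        missing += "GL_MAX_TEXTURE_SIZE >= 4096\n";

    GLint maxUboSize = 0;
    glGetIntegerv(GL_MAX_UNIFORM_BLOCK_SIZE, &maxUboSize);
    if (maxUboSize < 16384)                    // example: size of your largest uniform block
        missing += "GL_MAX_UNIFORM_BLOCK_SIZE >= 16384\n";

    return missing.empty();
}
If it returns false, report the missing items (for example via SDL_ShowSimpleMessageBox in SDL2) and exit cleanly instead of showing a black screen.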
Never rely on standard compliance. Yes, the standard says that GL_DEPTH_TEST is initially disabled. And no, thou shalt never rely on the driver being compliant with that rule. And yes, that was a real-life scenario.
Run proper tests on a variety of hardware. Vendors implement drivers differently. Some may think a negative vertex index is perfectly fine. Some may not. Real-life experience there as well...
Never accept a silent OpenGL error. "Something went wrong and it is probably nothing to worry about. For this hardware. For this driver. For this version. For this OS. Maybe."
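For instance, a tiny helper along these lines (a sketch, not tied to any particular engine) makes silent errors loud:
#include <GL/glew.h>
#include <cstdio>

// Drain the GL error queue and log where the check happened.
static void checkGlErrors(const char* file, int line)
{
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        std::fprintf(stderr, "GL error 0x%04X at %s:%d\n", err, file, line);
}
#define CHECK_GL() checkGlErrors(__FILE__, __LINE__)
Sprinkle CHECK_GL() after suspicious calls; on GL 4.3+ or with KHR_debug, a debug callback via glDebugMessageCallback is an even more convenient option.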
Do the math. Floating-point precision requirements in the OpenGL standard are not that strict, and the behavior of operations with undefined results, such as division by zero or anything involving NaN, is never something you want to rely on.
Read the standard. Yes, it is a pain, but trust me, it will pay off. You won't feel it, of course, since you will never experience the problems you will never have.
As a side note, nobody really follows this practice. Everybody codes first and then debugs forever.

Related

What does glGenBuffers indicate by returning zero?

My program creates many vertex buffers just after startup, as soon as vertex data is loaded over a network, and then occasionally deletes or creates vertex buffers during the hot loop. It works as expected almost always, but sometimes, on some machines, buffer creation in the hot loop produces zero names.
It doesn't look like an invalid state, because that would fire much earlier. Also, the documentation and the spec are not clear enough about this type of error. Does it mean that the implementation ran out of buffer names?
I also found this thread. The topic starter says that initializing the names before passing them to glGenBuffers fixed his problem. Is it necessary to initialize those values?
Since it seems to work on some machines, glGenBuffers returning 0 could be due to an improperly set up context. Here, davek20 had the same problem with glGenBuffers; he solved it by fixing his incorrect context setup.
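A quick way to tell a broken context apart from a genuine GL error is to check both the generated name and the error queue at the call site; a minimal sketch, assuming GLEW and a current context:
#include <GL/glew.h>
#include <cstdio>

GLuint createVertexBuffer()
{
    GLuint vbo = 0;            // initialize to 0: if glGenBuffers errors out,
    glGenBuffers(1, &vbo);     // the output parameter is not written
    if (vbo == 0)
        std::fprintf(stderr, "glGenBuffers failed, glGetError() = 0x%04X\n",
                     glGetError());
    return vbo;
}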
As stated on the GLFW 'Getting started' page, under 'Creating a window and context', they state:
"If the required minimum version is not supported on the machine, context (and window) creation fails."
These machines of yours might have correct drivers but probably don't support the OpenGL version you request, as the documentation states.
If you are using GLFW_CONTEXT_VERSION_MAJOR and GLFW_CONTEXT_VERSION_MINOR, consider changing these. I also recommend checking whether context creation returns NULL (0).
Example from GLFW's documentation page:
GLFWwindow* window;
if (!glfwInit())
    return -1;
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
window = glfwCreateWindow(960, 540, "OpenGL", NULL, NULL);
if (!window)
{
    glfwTerminate();
    return -1;
}

glfwCreateWindow() does not work after setting window hints

I'm following a tutorial which uses glfwWindowHint() to set the version of the OpenGL context he is using. He's on OSX and I'm on Windows. I have the exact same code as his. When I do this:
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
And then this:
GLFWwindow* window = glfwCreateWindow(640, 360, "Modern OpenGL", NULL, NULL);
It always returns NULL. But in the tutorial he said that setting the window hints was necessary to use the code that the program uses. When I take out the window hints the window is created successfully, but then it crashes (because of the other code that probably required the window hint changes).
I'm on Windows XP. How do I fix this?
Interestingly, this code should always return NULL on OSX, as OSX supports OpenGL >=3 only in core profiles, while this code requests a compatibility profile.
On Windows, a compatibility profile of that version might be supported, but this will depend on the GPU and the drivers you have installed. It might very well be the case that your system simply does not support GL 3.2. Make sure you are using recent drivers, and check which GL version your GPU actually supports. One thing you could try, though, is setting the GLFW_OPENGL_FORWARD_COMPAT hint to GL_FALSE.
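For comparison, a hint set that is generally accepted on both OSX and Windows (assuming the driver supports 3.2 at all) is a 3.2 core profile; a sketch using the same GLFW calls, after glfwInit() has succeeded:
#include <GLFW/glfw3.h>

glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // required on OSX for 3.x contexts

GLFWwindow* window = glfwCreateWindow(640, 360, "Modern OpenGL", NULL, NULL);
if (!window)
{
    // the machine does not offer a 3.2 core context; report and fall back
    glfwTerminate();
}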

Trouble with vsync using glut in OpenGL

I'm struggling desperately to get vsync to work in my OpenGL application. Here are the vital stats:
I'm on Windows, coding in C++ with OpenGL, and I'm using freeGLUT for my OpenGL context (double buffered). I'm aware that for the buffer swap to wait for vertical sync on Windows you are required to call wglSwapIntervalEXT().
My code does call this (as you'll see below), yet I am still getting vertical tearing. The only way I've managed to stop it is by calling glFinish(), which of course has a significant performance penalty associated with it.
The relevant parts of my main() function look like this:
//Initiating glut window
glutInit(&argc, argv);
glutInitDisplayMode (GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
glutInitWindowSize (initial_window_width, initial_window_height);
glutInitWindowPosition (100, 100);
int glut_window_hWnd = glutCreateWindow(window_title.c_str());
//Setting up swap intervals
PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT = NULL;
PFNWGLGETSWAPINTERVALEXTPROC wglGetSwapIntervalEXT = NULL;
if (WGLExtensionSupported("WGL_EXT_swap_control"))
{
    // Extension is supported, init pointers.
    wglSwapIntervalEXT = (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    // this is another function from WGL_EXT_swap_control extension
    wglGetSwapIntervalEXT = (PFNWGLGETSWAPINTERVALEXTPROC)wglGetProcAddress("wglGetSwapIntervalEXT");
}
wglSwapIntervalEXT(1);
init ();
glutMainLoop(); ///Starting the glut loop
return(0);
I should point out that the return from the wglInitSwapControlARB function is true, so the extension is supported.
OK, between the great help I've received here and my own hours of research and messing around with it, I've discovered a few things, including a solution that works for me (in case others come across this problem).
Firstly, I was using freeGLUT; I converted my code to GLFW and the result was the same, so this was NOT an API issue. Don't waste your time like I did!
In my program at least, using wglSwapIntervalEXT(1) DOES NOT stop vertical tearing, and this was what led to it being such a headache to solve.
With my NVIDIA driver set to VSYNC = ON I was still getting tearing (because this is equivalent to SwapInterval(1), which doesn't help), but the setting was correct: the driver was doing what it should have been, I just didn't know it because I was still getting tearing.
So I set my NVIDIA driver to VSYNC = 'Application preference' and used wglSwapIntervalEXT(60) instead of the 1 I had always been using, and found that this was actually working, because it was giving me a refresh rate of about 1 Hz.
I don't know why wglSwapIntervalEXT(1) doesn't vsync my screen, but wglSwapIntervalEXT(2) has the desired effect, though obviously I'm now rendering every other frame, which is inefficient.
I found that with VSYNC disabled glFinish DOES NOT help with tearing, but with it enabled it does (if anyone can explain why, that would be great).
So, in summary: with wglSwapIntervalEXT(1) set and glFinish() enabled I no longer have tearing, but I still don't understand why.
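In code, the combination that ended up working for me amounts to roughly this (a sketch; renderScene() is a placeholder for whatever your app draws, and placing glFinish() right after the swap is one common choice):
// once, after context creation and after loading the WGL extension pointers
wglSwapIntervalEXT(1);        // request vsync

// per frame
renderScene();                // placeholder: your drawing code
glutSwapBuffers();
glFinish();                   // block until everything, including the swap, has completed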
Here's some performance stats in my app (deliberately loaded to have FPS's below 60):
wglSwapIntervalEXT(0) = Tearing = 58 FPS
wglSwapIntervalEXT(1) = Tearing = 58 FPS
wglSwapIntervalEXT(2) = No tearing = 30 FPS
wglSwapIntervalEXT(1) + glFinish = No Tearing = 52 FPS
I hope this helps someone in the future. Thanks for all your help.
I am also having problems with vsync and freeglut.
Previously I used GLUT, and I was able to selectively enable vsync for multiple GLUT windows.
Now, with freeglut, it seems that wglSwapIntervalEXT() has no effect.
What does have an effect is the global vsync option in the NVIDIA control panel. If I enable vsync there, I have vsync in both of my freeglut windows, and if I disable it, I don't. It does not matter what I set specifically for my application (in the NVIDIA control panel).
Also, this confirms what I observe:
if (wglSwapIntervalEXT(0))
    printf("VSYNC set\n");
int swap = wglGetSwapIntervalEXT();
printf("Control window vsync: %i\n", swap);
The value of swap is always the value that is set in the NVIDIA control panel.
It does not matter what I want to set with wglSwapIntervalEXT().
Otherwise, here you can read what glFinish() is good for:
http://www.opengl.org/wiki/Swap_Interval
I use it because I need to know when my monitor is updated, so I can synchronously execute tasks after that (capture something with a camera).
As described in this question here, no "true" vsync exists. Using glFinish() is the correct approach: this will cause your card to finish processing everything it has been sent before continuing and rendering the next frame.
You should keep track of your FPS and the time it takes to render a frame; you might find glFinish() is simply exposing another bottleneck in your code.
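For example, a rough way to separate rendering time from time spent waiting in the swap (renderScene and swapBuffers are placeholders for your own calls):
#include <chrono>
#include <cstdio>

void renderScene();   // placeholder: your GL drawing code
void swapBuffers();   // placeholder: glutSwapBuffers() / glfwSwapBuffers(window)

void timedFrame()
{
    using clock = std::chrono::steady_clock;

    auto t0 = clock::now();
    renderScene();
    auto t1 = clock::now();
    swapBuffers();    // with vsync on, most of the waiting happens here (or in glFinish)
    auto t2 = clock::now();

    std::printf("render %.2f ms, swap %.2f ms\n",
                std::chrono::duration<double, std::milli>(t1 - t0).count(),
                std::chrono::duration<double, std::milli>(t2 - t1).count());
}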

Opengl Context Loss

Interestingly enough, I have never had an OpenGL context lost (where all buffer resources are wiped) until now. I am currently using OpenGL 4.2 via SDL 1.2 and GLEW on Win7 64-bit, and my application is windowed, without the ability to switch to fullscreen while running (that is only allowed on start-up).
On my dev machine the context never seems to be lost on resize, but on other machines my application can lose the OpenGL context (it seems rare). Due to memory constraints (a lot of memory is used by other parts of the application) I do not back up my GL buffer contents (VBOs, FBOs, textures, etc.) in system memory; oddly, this hasn't been a problem for me in the past because the context never got wiped.
It's hard to discern from googling under what circumstances an OpenGL context will be lost (where all GPU memory buffers are wiped), other than maybe toggling between fullscreen and windowed.
Back in my DX days, a lost context could happen for many reasons, and I would be notified when it happened and reload my buffers from system-memory backups. I was under the assumption (perhaps wrongly) that OpenGL (or a managing library like SDL) would handle this buffer reload for me. Is this in any way even partially true?
One of the issues I have is that losing the context on a resize is pretty darn inconvenient: I am using A LOT of GPU memory, and having to reload everything could pause the app for a while (well longer than I would like).
Is this a device dependent thing or driver dependent? Is it some combination of device, driver, and SDL version? How can a context loss like this be detected so that I can react to it?
Is it standard practice to keep system memory contents of all gl buffer contents, so that they may be reloaded on context loss? Or is a context loss rare enough that it isn't standard practice?
Context resets (loss) in OpenGL are ordinarily handled behind the scenes, completely transparently. Literally nobody keeps GL resources around in application memory in order to handle a lost context, because unless you are using a very new extension (robust contexts, ARB_robustness), there is no way to know when a context reset occurs in OpenGL, and therefore no way to handle the lost state yourself. The driver ordinarily does all of that for you, but with that extension you can receive notifications and define behavior related to context resets, as described under heading 2.6, "Graphics Reset Recovery".
But be aware that a lost context in OpenGL is very different from a lost context in D3D. In GL, a lost context occurs because some catastrophic error happened (e.g. a shader taking too long or a memory access violation), and reset notifications are most useful in something like WebGL, which has stricter security/reliability constraints than regular GL. In D3D you could lose your context simply by Alt+Tabbing or switching from windowed mode to fullscreen mode. In any event, I believe this is an SDL issue and not at all related to GL's notion of a context reset.
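If you do want to be told about resets, the robustness extension mentioned above can be polled; a sketch assuming GLEW exposes GL_ARB_robustness and the context was created with a reset-notification strategy (which SDL 1.2 cannot request):
#include <GL/glew.h>
#include <cstdio>

// Poll once per frame; anything other than GL_NO_ERROR means the context was
// reset and GPU-side resources need to be recreated.
bool contextWasReset()
{
    if (!GLEW_ARB_robustness)
        return false;                       // extension not available, nothing to poll

    switch (glGetGraphicsResetStatusARB())
    {
    case GL_NO_ERROR:                   return false;
    case GL_GUILTY_CONTEXT_RESET_ARB:   std::fprintf(stderr, "reset caused by this context\n");   return true;
    case GL_INNOCENT_CONTEXT_RESET_ARB: std::fprintf(stderr, "reset caused by something else\n");  return true;
    default:                            std::fprintf(stderr, "reset, cause unknown\n");            return true;
    }
}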
You're using SDL-1.2. With SDL-1.2 it's perfectly possible that the OpenGL context gets recreated (i.e. properly shut down and reinitialized) when the window gets resized. This is a known limitation of SDL and has been addressed in SDL-2.
So either use SDL-2 or use a different framework that's been tailored specifically for OpenGL, like GLFW.
Or is a context loss rare enough that it isn't standard practice?
OpenGL contexts are not "lost". They're deallocated, and that's what SDL-1.2 does under certain conditions.

Creating an OpenGL 3.2/3.x context in SDL 1.3

I'm facing a problem where SDL says it does not support OpenGL 3.x contexts. I am trying to follow this tutorial: Creating a Cross Platform OpenGL 3.2 Context in SDL (C / SDL). I am using GLEW in this case, but I couldn't get gl3.h to work with this either. This is the code I ended up with:
#include <glew.h>
#include <SDL.h>

int Testing::init()
{
    if(SDL_Init(SDL_INIT_EVERYTHING) < 0)
    {
        DEBUGLINE("Error initializing SDL.");
        printSDLError();
        system("pause");
        return 1; // Error
    }

    //Request OpenGL 3.2 context.
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);

    //set double buffer
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

    //Create window
    window = SDL_CreateWindow("OpenGL 3.2 test",
                              SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                              600, 400, SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
    if(window == NULL) return 3; // Error

    //Print errors to console if there are any
    printSDLError(__LINE__);

    //Set up OpenGL context.
    glContext = SDL_GL_CreateContext(window);
    printSDLError(__LINE__);
    if(glContext == NULL)
    {
        DEBUGLINE("OpenGL context could not be created.");
        system("pause");
        return 4;
    }

    //Initialize glew
    GLenum err = glewInit();
    if(err != GLEW_OK)
    {
        DEBUGLINE("GLEW unable to be initialized: " << glewGetErrorString(err));
        system("pause");
        return 2;
    }

    return 0; // OK code, no error.
}
The only problem that is reported is after trying to call SDL_GL_CreateContext(window), where SDL reports "GL 3.x is not supported". However, both the tutorial and this sample pack (which I have not bothered to test with) report success in combining SDL 1.3 and OpenGL 3.2. I am aware that SDL 1.3 is in the middle of development, but I somewhat doubt that even unintentional support would be removed.
A context is still created, and GLEW is able to initialize just fine. (I can't figure out for the life of me how to see the version of the context that was created, since it's supposed to be the core profile, and I don't know how to find that either. According to the tutorial, SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3) doesn't actually do anything, in which case I have no clue how to get the appropriate context created or change the default context.)
EDIT: After some testing thanks to the helpful function Nicol gave me, I have found that, regardless of the parameters I pass to SDL_GL_SetAttribute, the context is always version 1.1. However, putting in any version below 3.0 doesn't spit out an error saying it is not supported. So the problem is that the "core" version SDL sees is only 1.1.
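(For reference, a version query with a current context can be as simple as the following sketch; the GL_MAJOR/MINOR_VERSION queries are only answerable on 3.0+ contexts, while the GL_VERSION string works on anything:)
#include <glew.h>
#include <cstdio>

void printContextVersion()
{
    std::printf("GL_VERSION  : %s\n", (const char*)glGetString(GL_VERSION));
    std::printf("GL_RENDERER : %s\n", (const char*)glGetString(GL_RENDERER));

    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);   // generates GL_INVALID_ENUM pre-3.0
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    std::printf("Context version: %d.%d\n", major, minor);
}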
For the record, I am using Visual C++ 2010 express, GLEW 1.7.0, and the latest SDL 1.3 revision. I am fairly new to using all three of these, and I had to manually build the SDL libraries for both 32 and 64 bit versions, so there's a lot that could go wrong. So far however, the 32 and 64 bit versions are doing the exact same thing.
EDIT: I am using an nVidia 360M GPU with the latest driver, which OpenGL Extension Viewer 4.04 reports to have full compatibility up to OpenGL 3.3.
Any help is appreciated.
UPDATE: I have managed to get SDL to stop yelling at me that it doesn't support 3.x contexts. The problem was that SDL_GL_SetAttribute must be called BEFORE SDL_Init:
//Request OpenGL 3.2 context.
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
//Initialize SDL
if(SDL_Init(SDL_INIT_EVERYTHING) < 0)
{
DEBUGLINE("Error initializing SDL.");
return 1; // Error
}
Unfortunately, GLEW still refuses to acknowledge anything higher than OpenGL 1.1 (only GLEW_VERSION_1_1 returns true), which still has me puzzled. glGetString(GL_VERSION) also reports 1.1.0. It seems that my program simply doesn't know of any higher versions, as if I don't have them installed at all.
Since I don't know whether you have already found a solution, here is mine:
I struggled a lot with this today and yesterday. Advanced GL functions couldn't be used, so I even debugged into opengl32.dll just to see that it really works and forwards the calls to the hardware-specific OpenGL DLL (nvoglnt.dll). So there must have been another cause. There were even tips on the internet to link opengl32.lib before all other libraries, because ChoosePixelFormat and some other functions can otherwise be overridden.
But that wasn't the cause either. My solution was to enable accelerated visuals:
// init SDL
if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_HAPTIC | SDL_INIT_TIMER) < 0) {
    fprintf(stderr, "Could not init SDL");
    return 1;
}
// request the OpenGL version we want
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
because the current SDL revision (Dec 15, 2011) checks for it in SDL_windowsopengl.c:
if (_this->gl_config.accelerated >= 0) {
    *iAttr++ = WGL_ACCELERATION_ARB;
    *iAttr++ = (_this->gl_config.accelerated ? WGL_FULL_ACCELERATION_ARB :
                                               WGL_NO_ACCELERATION_ARB);
}
and this attribute is initialized to -1 if you did not set it yourself.
And: never set the version attributes before initializing SDL, because setting attributes requires the video backend to be initialized properly!
I hope this helps.
I followed this tutorial. Everything works fine on Windows and Linux.
http://people.cs.uct.ac.za/~aflower/tutorials.html