Interestingly enough, I had never had an OpenGL context lost (where all buffer resources are wiped) until now. I am currently using OpenGL 4.2 via SDL 1.2 and GLEW on Windows 7 64-bit, and my application runs windowed without the ability to switch to fullscreen while running (that is only allowed at start-up).
On my dev machine the context never seems to be lost on resize, but on other machines my application can lose the OpenGL context (it seems rare). Due to memory constraints (a lot of memory is used by other parts of the application) I do not back up my GL buffer contents (VBOs, FBOs, textures, etc.) in system memory; oddly, this hasn't been a problem for me in the past because the context never got wiped.
It's hard to discern from googling under what circumstances an OpenGL context will be lost (where all GPU memory buffers are wiped), other than maybe toggling between fullscreen and windowed.
Back in my DX days, context loss could happen for many reasons; I would be notified when it happened and reload my buffers from system memory backups. I was under the assumption (perhaps wrongly) that OpenGL, or a managing library like SDL, would handle this buffer reload for me. Is this in any way even partially true?
One of the issues I have is that losing the context on a resize is pretty darn inconvenient. I am using a lot of GPU memory, and having to reload everything could pause the app for longer than I would like.
Is this device dependent, driver dependent, or some combination of device, driver, and SDL version? How can a context loss like this be detected so that I can react to it?
Is it standard practice to keep a system memory copy of all GL buffer contents so that they can be reloaded on context loss? Or is a context loss rare enough that it isn't standard practice?
Context resets (losses) in OpenGL are ordinarily handled behind the scenes, completely transparently. Literally nobody keeps GL resources around in application memory to handle a lost context, because unless you are using a fairly new extension (ARB_robustness, i.e. a robust context) there is no way to ever know that a context reset occurred in order to handle the lost state. The driver ordinarily does all of that for you, but you can receive notifications and define behavior related to context resets as described under heading 2.6, "Graphics Reset Recovery".
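For completeness, here is a minimal sketch of how that notification could be polled through ARB_robustness (core as glGetGraphicsResetStatus in GL 4.5). The context has to be created with reset notification enabled, and the reload hook below is a hypothetical application function:

```cpp
// Sketch: poll for a context reset via GL_ARB_robustness.
// Assumes the context was created with a reset-notification strategy
// (e.g. WGL_CONTEXT_RESET_NOTIFICATION_STRATEGY_ARB at creation time).
#include <GL/glew.h>

void recreateContextAndReloadResources();   // hypothetical application hook

void checkForContextReset()
{
    GLenum status = glGetGraphicsResetStatusARB();   // GL_NO_ERROR while healthy
    if (status != GL_NO_ERROR)
    {
        // GL_GUILTY_CONTEXT_RESET_ARB, GL_INNOCENT_CONTEXT_RESET_ARB or
        // GL_UNKNOWN_CONTEXT_RESET_ARB: the context is gone; recreate it
        // and upload VBOs, textures and FBOs again from application data.
        recreateContextAndReloadResources();
    }
}
```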
But be aware that a lost context in OpenGL is very different from a lost context in D3D. In GL, a lost context occurs because some catastrophic error happened (e.g. a shader running too long or a memory access violation), and reset notification is most useful in something like WebGL, which has stricter security/reliability constraints than regular GL. In D3D you can lose your context simply by Alt+Tabbing or switching from windowed mode to fullscreen. In any event, I believe this is an SDL issue and not at all related to GL's notion of a context reset.
You're using SDL-1.2. With SDL-1.2 it's perfectly possible that the OpenGL context gets recreated (i.e. properly shut down and reinitialized) when the window gets resized. This is a known limitation of SDL and has been addressed in SDL-2.
So either use SDL-2 or use a different framework that's been tailored specifically for OpenGL, like GLFW.
Or is a context loss rare enough that it isn't standard practice?
OpenGL contexts are not "lost". They're deallocated, and that's what SDL-1.2 does under certain conditions.
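For comparison, here is a minimal SDL2 sketch in which the context is created once and survives window resizes (standard SDL2 API names):

```cpp
// Sketch: SDL2 creates the GL context explicitly and does not recreate it
// when the window is resized, unlike SDL-1.2's SDL_SetVideoMode.
#include <SDL.h>
#include <SDL_opengl.h>

int main(int, char**)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* window = SDL_CreateWindow("app",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 1280, 720,
        SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE);
    SDL_GLContext context = SDL_GL_CreateContext(window);   // lives until deleted

    bool running = true;
    while (running)
    {
        SDL_Event e;
        while (SDL_PollEvent(&e))
        {
            if (e.type == SDL_QUIT)
                running = false;
            else if (e.type == SDL_WINDOWEVENT &&
                     e.window.event == SDL_WINDOWEVENT_SIZE_CHANGED)
                glViewport(0, 0, e.window.data1, e.window.data2);   // no resource reload needed
        }
        SDL_GL_SwapWindow(window);
    }
    SDL_GL_DeleteContext(context);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}
```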
Related
For a few years now, indirect GLX (IGLX) has been disabled by default in xorg and other X Servers. I'm writing an application that will use OpenGL if available, but can fall back to other graphics if it is not. Is there a standard way to detect that it is going to fail, other than trying it and responding to the errors?
My current test (written about 20 years ago) just checks if XOpenDisplay and glXQueryExtension work, but that's not sufficient: things fail later when calling glXCreateContext and other functions.
I'd prefer not to try to open a window and check for success, because at the time I want to do the test I don't know if the user is going to need one. My preference is to do an invisible test at startup so I can warn the user that they're going to be using the backup graphics methods.
Creating an OpenGL context with GLX doesn't require a window: neither glXCreateContext nor glXCreateNewContext takes a drawable parameter. And even if they did, you can create a window without ever mapping it (i.e. making it visible) or triggering any action from the window manager.
In X11, creating windows is a rather cheap operation, especially if the initial size of the window is tiny (e.g. 1×1) and the window is never mapped. You can still perform the whole range of X11 and GLX operations on it.
The upshot of all of this is that, to test whether the OpenGL capabilities are available, the usual approach is to actually attempt to create a window and OpenGL context with the desired attributes and see if this succeeds.
Since the X11 resources used for probing never have to be mapped, this will not create any visible output; and unless it is constantly polling the X server for the window tree, not even a window manager will take notice (window-manager involvement depends on mapping the window).
Of course, to keep things cheap and fast, such tests should be programmed directly against X11/Xlib, without any toolkits in between. Since GLX is specified against Xlib, you'll have to use Xlib for at least that part even if Xcb is used elsewhere, but you'd have to do that anyway.
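A hedged sketch of such a probe against plain Xlib/GLX follows (no window is ever mapped; the glXIsDirect check is an assumption about what "usable" means when indirect GLX is disabled):

```cpp
// Sketch: probe whether a working (direct) GLX context can be created,
// without mapping any window, so nothing ever becomes visible.
#include <X11/Xlib.h>
#include <GL/glx.h>

bool probeOpenGL()
{
    Display* dpy = XOpenDisplay(nullptr);
    if (!dpy) return false;

    int errBase, evtBase;
    if (!glXQueryExtension(dpy, &errBase, &evtBase)) { XCloseDisplay(dpy); return false; }

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo* vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) { XCloseDisplay(dpy); return false; }

    // A tiny window that is never mapped, just to have a drawable.
    XSetWindowAttributes swa = {};
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen), vi->visual, AllocNone);
    Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, 1, 1, 0,
                               vi->depth, InputOutput, vi->visual, CWColormap, &swa);

    GLXContext ctx = glXCreateContext(dpy, vi, nullptr, True /*direct*/);
    bool ok = ctx && glXMakeCurrent(dpy, win, ctx) && glXIsDirect(dpy, ctx);

    if (ctx) { glXMakeCurrent(dpy, None, nullptr); glXDestroyContext(dpy, ctx); }
    XDestroyWindow(dpy, win);
    XFree(vi);
    XCloseDisplay(dpy);
    return ok;
}
```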
There is an interesting browser framework called Awesomium, which is basically a wrapper around the Chromium browser engine.
I'm interested in using it to redistribute WebGL-based games for the desktop. However, Awesomium only supports rendering via a pixel buffer copied to the CPU, even though the WebGL context itself is backed by a real hardware-accelerated OpenGL context. This is inefficient for real-time, high-performance games and can kill the frame rate on low-end machines.
Awesomium may eventually fix this, but it got me thinking: is it possible to search a process for an offscreen OpenGL context and render it directly to a window? That would avoid the inefficient rendering path, keeping rendering entirely on the GPU. I'm writing a native C++ app on Windows, so presumably this will involve WGL specifics. Also, since Chromium is a multithreaded, multi-process browser engine, it may involve finding an OpenGL context on a different thread or even in a different process. Is it possible?
is it possible to search a process for an offscreen OpenGL context and render it directly to a window?
No, it is not possible. If the OpenGL context was created for one OS buffer (window), it cannot be redirected to another buffer or another OpenGL context.
Maybe you can use shared OpenGL resources, if both OpenGL contexts are created with such an option (sharing only works between contexts in the same process, though).
I am writing a game in C++ using SDL 1.2.14 and the OpenGL bindings included with it.
However, if the game is in fullscreen and I Alt-Tab out and then back into the game, the results are unpredictable. The game logic still runs, but rendering stops; I only see the last frame of the game that was drawn before the Alt-Tab.
I've made sure to re-initialize the OpenGL context and reload all textures when I get an SDL_APPACTIVE = 1 event, and that seems to work for only one Alt-Tab; all subsequent Alt-Tabs stop rendering (I've made sure SDL_APPACTIVE is properly handled each time and the context is set accordingly).
I'd hazard a guess that SDL does something under the hood when minimizing the application that I'm not aware of.
Any ideas?
It's good practice to "slow down" your fullscreen application when it loses focus. Two reasons:
The user may need to Alt-Tab and do something important (like closing a heavy application that's hogging resources). When they switch, the new application takes control, and the OS must release resources from your app as needed.
A modern OS uses the GPU a lot itself, which means it may need to reclaim some graphics memory to keep working.
Try shutting down every GL resource you use when APPACTIVE=0 and allocating them again on APPACTIVE=1. If this solves the problem, it was "your fault". If it doesn't, it's an SDL (or GL, or OS) bug.
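A rough sketch of that pattern with SDL-1.2 events; releaseGLResources and reloadGLResources stand in for whatever your engine actually does:

```cpp
// Sketch: tear down GL resources when the app loses APPACTIVE status and
// recreate them when it regains it (SDL-1.2 event names).
#include <SDL.h>

void releaseGLResources();   // hypothetical: delete textures, VBOs, FBOs
void reloadGLResources();    // hypothetical: recreate everything

void handleEvents(bool& hasGLResources)
{
    SDL_Event event;
    while (SDL_PollEvent(&event))
    {
        if (event.type == SDL_ACTIVEEVENT && (event.active.state & SDL_APPACTIVE))
        {
            if (event.active.gain == 0 && hasGLResources)
            {
                releaseGLResources();
                hasGLResources = false;
            }
            else if (event.active.gain == 1 && !hasGLResources)
            {
                reloadGLResources();
                hasGLResources = true;
            }
        }
    }
}
```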
When trying to implement a simple OpenGL application, I was surprised that while it is easy to find plenty of examples and documentation on advanced rendering topics, the Win32 framework side is poorly documented, and even most samples and tutorials do not implement it properly for basic cases, let alone advanced ones like multiple monitors. Despite several hours of searching, I was unable to find a way to handle Alt-Tab reliably.
How should OpenGL fullscreen application respond to Alt-Tab? Which messages should the app react to (WM_ACTIVATE, WM_ACTIVATEAPP)? What should the reaction be? (Change the display resolution, destroy / create the rendering context, or destroy / create some OpenGL resources?)
If the application uses an animation loop, suspend the loop and just minimize the window. If it changed the display resolution or gamma, revert to the settings that were in effect before changing them.
There's no need to destroy the OpenGL context or resources. OpenGL uses an abstract resource model: if another program requires RAM on the GPU or other resources, your program's resources will be swapped out transparently.
EDIT:
Changing the window's visibility status may require resetting the OpenGL viewport, so it's a good idea to either call glViewport appropriately in the display/rendering function, or at least set it in the resize handler, followed by a complete redraw.
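As an illustration, here is a minimal window-procedure sketch along those lines; g_fullscreenMode (the DEVMODE chosen at startup) and g_paused are assumptions about the surrounding application:

```cpp
// Sketch: handle Alt-Tab for a fullscreen OpenGL app. On deactivation,
// restore the desktop display mode and minimize; on activation, switch
// back. The GL context and its resources are left untouched.
#include <windows.h>
#include <GL/gl.h>

extern DEVMODE g_fullscreenMode;   // display mode selected at startup
extern bool    g_paused;           // gates the animation loop

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_ACTIVATE:
        if (LOWORD(wParam) == WA_INACTIVE)
        {
            ChangeDisplaySettings(nullptr, 0);    // revert to desktop resolution
            ShowWindow(hWnd, SW_MINIMIZE);
            g_paused = true;                      // suspend the animation loop
        }
        else
        {
            ChangeDisplaySettings(&g_fullscreenMode, CDS_FULLSCREEN);
            g_paused = false;
        }
        return 0;

    case WM_SIZE:
        // Keep the viewport in sync; no GL resources are destroyed or recreated.
        glViewport(0, 0, LOWORD(lParam), HIWORD(lParam));
        return 0;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}
```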
We have an OpenGL application (using Ogre3D and SDL, not calling OpenGL directly) and we are trying to change the resolution at runtime. It seems that we need to re-initialize our OpenGL context with the new resolution, but a number of things break along the way. On Linux it seems to work for a while, then we get graphical corruption on screen. On Windows it simply crashes the next time we try to render a frame. We have forced the reloading of textures in Ogre, and if we render nothing but textures (no 3D models) this works fine, but any 3D model causes a crash, and reloading models before rendering them has no effect.
Here is a link to an in depth explanation of Ogre3d calls we are doing: http://www.ogre3d.org/forums/viewtopic.php?f=2&t=62825
All we really need to know is: when re-initializing an OpenGL context, what resources need to be restored?
Why does adjusting an OpenGL context affect other resources? Is this just how OpenGL works, or did one of the libraries we use introduce this issue? Could we have introduced it ourselves without knowing?
Did you have a look at this forum thread?
SDL seems to destroy the OpenGL context when changing resolution. In that case, all your GL resources are destroyed with the context.
One possible solution would be to create another, "dummy" GL context that shares resources with your "real" GL context, and to keep it alive while SDL destroys the "main" context. This way most of your resources should survive.
Note that some resources can't be shared: textures and VBOs are fine, but VAOs can't be shared.
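On Windows, the dummy-context idea could look roughly like the following raw-WGL sketch. This is an assumption-heavy outline: it relies on SDL-1.2 sitting on top of WGL and on the recreated context holding no objects yet when sharing is re-established:

```cpp
// Sketch: park shared GL objects in a dummy WGL context so they survive
// SDL-1.2 destroying and recreating its "main" context on a mode change.
#include <windows.h>

static HGLRC g_dummyContext = nullptr;

void beforeResolutionChange()
{
    HDC   dc   = wglGetCurrentDC();
    HGLRC main = wglGetCurrentContext();          // SDL's "real" context

    g_dummyContext = wglCreateContext(dc);
    wglShareLists(main, g_dummyContext);          // dummy joins the share group
}

void afterResolutionChange()                      // call right after SDL_SetVideoMode
{
    HGLRC fresh = wglGetCurrentContext();         // SDL's recreated, still empty context
    wglShareLists(g_dummyContext, fresh);         // hand the shared objects back
    wglDeleteContext(g_dummyContext);             // objects live on in the share group
    g_dummyContext = nullptr;
}
```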
OpenGL support was added to SDL after its surface code had been established; that's why changing the size of an SDL window is destructive. You were pointed to OpenGL context sharing and its caveats. However, I'd avoid the problem altogether by not using SDL to create the OpenGL window. You can use all the other facilities SDL provides without a window managed by SDL, so the only things that would change are input event processing and how the window is created. Instead of SDL I'd use GLFW, which, like SDL, requires you to implement your own event processing loop, so using GLFW as a drop-in replacement for OpenGL window and context creation is straightforward.
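For illustration, a minimal GLFW window/context setup (GLFW 3 API shown here; treat it as a sketch of the idea rather than the exact version this answer had in mind):

```cpp
// Sketch: GLFW creates the window and GL context once and keeps them for
// the window's lifetime; event processing stays in your own loop.
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit())
        return 1;

    GLFWwindow* window = glfwCreateWindow(1280, 720, "app", nullptr, nullptr);
    if (!window) { glfwTerminate(); return 1; }

    glfwMakeContextCurrent(window);

    while (!glfwWindowShouldClose(window))
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(window);
        glfwPollEvents();            // your own event processing, as with SDL
    }
    glfwTerminate();
    return 0;
}
```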