Do I need to call SDL_GL_DeleteContext before SDL_DestroyWindow? - c++

In SDL, if I destroy a window, do I need to delete the OpenGL context beforehand, or is it deleted automatically? I don't want a memory leak.
Also, when do I need to call SDL_GL_MakeCurrent? Is this only required if I have multiple windows, each with its own GL context?
I couldn't find anything in the documentation.

Well, I call
SDL_GL_DeleteContext
SDL_DestroyWindow
SDL_QuitSubSystem
in that order. I once read the documentation very carefully, and I vaguely remember that this was mentioned somewhere. I have to warn you that I read the SDL2 documentation; however, this should be the same in SDL1.
Because all those SDL internals are easy to forget, I wrote a nice C++ wrapper:
https://github.com/Superlokkus/CG1/blob/master/src/sdl2_opengl_helper.cpp#L68

SDL doesn't delete the contexts automatically; you have to do it manually.
Usually, my shutdown sequence goes like this:
SDL_GL_DeleteContext(m_context);
SDL_DestroyWindow(m_window);
SDL_Quit();
Keeping track of the pointers shouldn't be much of an issue either, since you can wrap the window system in a simple class/struct and just pass that around, like so:
class Window
{
public:
    SDL_Window* window;
    SDL_GLContext context;
};
As for your second question: each context you create is tied to the SDL window you specify when making the context current. Making another context current and rendering will draw to the window and context you made current.
You need to call SDL_GL_MakeCurrent once after you create the window, before you can use it. With multiple windows, make the context you want to render to current. You should also use SDL_GL_MakeCurrent if you want to access OpenGL resources from another thread, but keep in mind that a context can only be current on ONE thread at a time, and you will have to call the function again on your main thread before its next use.

Related

Capture OpenGL output of child process?

Is there any possibility of capturing the OpenGL output of a child process?
The child should not have its own window; the output should be captured and displayed by the parent instead.
I know that I can create a layer that my child could use to issue OpenGL callbacks in my parent application, and send the data over a socket or pipe.
Edit:
I write main and child applications.
OK, here's a rundown of how I think this can be done. This is totally untested, so YMMV.
Create your window in the parent process. According to this page, you need to create it with the CS_OWNDC style, which means it has the same HDC permanently associated with it.
Launch your child process. You can pass the HWND to it as a command line parameter, converted to hex-ascii, say, or you can devise some other method.
In the child process, call GetDC to retrieve the HDC of the parent's window and pass it to wglCreateContext (I imagine you know all about doing that sort of thing).
In the child process, draw, draw, draw.
Before exiting the child process, make sure you call ReleaseDC to free up any resources allocated by GetDC.
This ought to work. I know that Chrome uses a separate process for each browser tab, for example (so that if the code rendering into any particular tab should crash, it affects that tab only).
Also, if you're thinking of jumping through all these hoops just because you want to reload some (different) DLLs, maybe you're looking for LoadLibrary and GetProcAddress instead. Hmmm, maybe I didn't need to write all that :)

Why am I unable to use CreateWICTextureFromFileEx after shutting down SDL

I am trying to shutdown my DX12 renderer, and restart it within the same process.
Said application is heavily based on the Microsoft MiniEngine example code, now with some modifications to allow re-initialisation of global variables. I am using SDL for window and event management.
The last stumbling block for a clean shutdown and re-initialisation, it appears, is the loading of textures in a texture manager class, which in turn uses the DirectXTK12 code to load textures, via CreateWICTextureFromFileEx for .png files.
To summarise what I'm trying to do:
start up application
initialise rendering into SDL window
render in rendering loop
shut down all rendering and window handling (remove all resources, release device handle) - calls SDL_Quit
re-initialise rendering into new SDL window (get new device handle, etc)
render in rendering loop
The texture management class is shut down as part of the rendering shutdown, removing all traces of textures and their handles to resources etc.
As part of the rendering re-initialisation, the default textures are created via CreateWICTextureFromFileEx (see here), and the application crashes when trying to do this.
EDIT: since first posting, I can say that this crashing starts directly after calling SDL_Quit() (and persists after the call to restart with SDL_Init(SDL_INIT_VIDEO))
I am now fairly confident it's an issue with this area specifically rather than some other part of the renderer that isn't being re-initialised correctly, since I can force it to use .DDS textures instead of .pngs, and the system fires up again nicely. In this case it uses DDSTextureLoader without any (obvious) problems.
I've added the source to my project and can see the crash is occurring when trying to do this:
ComPtr<IWICBitmapDecoder> decoder;
HRESULT hr = pWIC->CreateDecoderFromFilename(fileName,
                                             nullptr,
                                             GENERIC_READ,
                                             WICDecodeMetadataCacheOnDemand,
                                             decoder.GetAddressOf());
The failure reported is
Unhandled exception thrown: read access violation.
pWIC->**** was 0x7FFAC0B0C610.
pWIC here is obtained via _GetWIC() which is using a singleton initialisation:
IWICImagingFactory2* _GetWIC() noexcept
{
    static INIT_ONCE s_initOnce = INIT_ONCE_STATIC_INIT;

    IWICImagingFactory2* factory = nullptr;
    if (!InitOnceExecuteOnce(
            &s_initOnce,
            InitializeWICFactory,
            nullptr,
            reinterpret_cast<LPVOID*>(&factory)))
    {
        return nullptr;
    }
    return factory;
}
Since first posting, I can now say that this crashing starts directly after calling SDL_Quit(). I have test code that starts the graphics up enough to get a device and get a texture, which will complete successfully at any point until SDL_Quit.
Re-initialising SDL with SDL_Init(SDL_INIT_VIDEO) doesn't help.
I also note that the WICTextureLoader comments state "Assumes application has already called CoInitializeEx". Is this an area that SDL_Quit could be messing with?
I'll post an answer here but will remove it if @ChuckWalbourn wants to post his own.
This was due to letting SDL call CoInitialize for me. When it cleaned up in SDL_Quit, it called CoUninitialize, which then (presumably) invalidated the IWICImagingFactory2 set up by WICTextureLoader. By adding my own call to CoInitialize in my rendering startup, COM's internal reference count keeps the IWICImagingFactory2 alive and all is well.

QT Managing OpenGL context in a separate thread

I have learned about setting up a separate rendering thread for a Qt QGLWidget here, here and here.
I also managed to get a kind of "working" setup: clearing the color in the viewport. That seems to be OK, but I am getting the following warning:
QOpenGLContext::swapBuffers() called with non-exposed window, behavior
is undefined
I first create a widget that inherits from QGLWidget, where I also set up the OpenGL format in the widget's constructor:
QGLFormat format;
format.setProfile(QGLFormat::CompatibilityProfile);
format.setVersion(4,3);
format.setDoubleBuffer(true);
format.setSwapInterval(1);
setFormat(format);
setAutoBufferSwap(false);
Then I init the rendering thread in the same Widget:
void GLThreadedWidget::initRenderThread(void){
    doneCurrent();
    context()->moveToThread(&m_renderThread);
    m_renderThread.start();
}
and from that point the whole rendering is done inside that thread:
RenderThread constructor:
RenderThread::RenderThread(GLThreadedWidget *parent)
    : QThread(), glWidget(parent)
{
    doRendering = true;
}
RenderThread run() method:
void RenderThread::run(){
    glWidget->makeCurrent();
    GLenum err = glewInit();
    if (GLEW_OK != err) {
        printf("GLEW error: %s\n", glewGetErrorString(err));
    } else {
        printf("Glew loaded; using version %s\n", glewGetString(GLEW_VERSION));
    }
    glInit();
    while (doRendering){
        glWidget->makeCurrent();
        glClear(GL_COLOR_BUFFER_BIT);
        paintGL(); // render actual frame
        glWidget->swapBuffers();
        glWidget->doneCurrent();
        msleep(16);
    }
}
Can anyone point out what the issue is, and whether that message can be discarded? Also, a straightforward and concise explanation of render thread setup in Qt would be extremely helpful. I'm using Qt 5.2 (Desktop OpenGL build).
With what you've shown, it looks like that message handler warning you were getting was because you started triggering buffer swaps "too soon" in the window setup sequence, either directly through QGLContext::/QOpenGLContext::swapBuffers() or indirectly through a number of possible ways, none of which are really detectable outside of manual debugging. What I mean by too soon is before the widget's parent window was marked "exposed" (before it was being displayed by the windowing system).
As far as whether the message can be discarded: it can, but it's not safe to do, as in it's possible to get undefined behavior for the first few frames where you do it and the window's not ready (especially if you're immediately resizing to different extents at startup than your .ui file specifies). The Qt documentation says that before your window is exposed, Qt basically has to tell OpenGL to paint according to what are effectively non-trustworthy extents. I'm not sure that's all that can happen, though, personally.
With the code you showed, there's an easy fix--avoid even starting your render logic until your window says it's exposed. Detecting exposure using QGLWidget isn't obvious though. Here's an example roughly like what I use, assuming your subclass from QGLWidget was something like 'OGLRocksWidget', it was a child of a central widget, and that central widget was a child of your implementation of QMainWindow (so that your widget would have to call parentWidget()->parentWidget() to get at its QMainWindow):
void OGLRocksWidget::paintGL()
{
    QMainWindow *window_ptr =
        dynamic_cast<QMainWindow *>(parentWidget() ? parentWidget()->parentWidget() : 0);
    QWindow *qwindow_ptr = (window_ptr ? window_ptr->windowHandle() : 0);
    if (qwindow_ptr && qwindow_ptr->isExposed())
    {
        // don't start rendering until you can get in here; just return otherwise
        // probably even better to make sure QGLWidget::isVisible() too
    }
}
Of course you don't have to do this in your implementation of QGLWidget::paintGL(), but in your particular setup you're better off not even starting your render thread until your window tells you it's exposed.
It looks like you might have slightly bigger problems than that, though. You weren't hooking the right GL activity into the right places in your code versus QGLWidget's intent. I feel for the position you were in, because the documentation on this is a little spotty and scattered. For that part, QGLWidget's detailed description, down where it says "Here is a rough outline of how a QGLWidget subclass might look", is a good place to start getting the idea. You'll want to override any of the key virtuals in there that you have related code for, and move that code into those calls.
So for example, your widget's constructor is doing setup work that is probably safer to put in an initializeGL() override, since QGLWidget's intent is to signal you when it's safely time to do that through that call. What I mean by safer whenever I say that here is that you won't get seemingly random debug exceptions (that in release builds can silently wreak havok on your runtime stability).
Side advice: install Qt source, point your debugger at it, and watch your code run, including into Qt. Your setFormat() call, last time I watched it, actually deletes the current underlying QOpenGLContext. That's probably good to know because you'll want to create a new one soon after or at least test out your options.
The risk of instability is why I'm trying to put together at least some kind of answer here a year later. I just learned this through a lot (too much) debugging. I love what the Qt team's done with it, but Qt will be much better off when they finish migrating everything over to QOpenGL* calls (or wherever they see a final proper place for their OpenGL support including permanent considerations for it and windowing support together).
A QOpenGLWidget comes with its own context. If you want a background thread to do the rendering, you have to pass a shared context to the thread and get a few steps right.
Details in: https://stackoverflow.com/a/50368372/3082081

Get OpenGL (WGL) context from QOpenGLContext

I'm trying to get the OpenGL context (HGLRC) from the QQuickView window.
I need to pass it to a non-Qt library. I can get a QOpenGLContext easily enough:
m_qtContext = QOpenGLContext::currentContext();
How do you obtain the OpenGL context from the Qt class? (QOpenGLContext)
There's not exactly a public API for this, as far as I know. Note that Qt 5 removed most of the native handles from the APIs. This should do the trick:
QPlatformNativeInterface *iface = QGuiApplication::platformNativeInterface();
HGLRC ctx = (HGLRC)iface->nativeResourceForContext("renderingContext", context);
(not sure about the last cast, but that looks correct according to the relevant source).
You can get the current OpenGL context from WGL in any framework if you call wglGetCurrentContext (...) while your thread has the context bound. Keep in mind that frameworks will usually change the current context whenever they invoke a window's draw callback / event handler, and may even set it to NULL after completing the callback.
WGL has a strict one-to-one mapping for contexts and threads, so in a single-threaded application that renders to multiple windows you will probably have to call this function in a window's draw callback / event handler to get the proper handle.
In simplest terms, any time you have a valid context in which to issue GL commands under Win32, you can get a handle to that particular context by calling wglGetCurrentContext (...).
If your framework has a portable way of acquiring a native handle, then by all means use it. But that is definitely not your only option on Microsoft Windows.

Windowless OpenGL

I would like to have a windowless OpenGL context (on both GNU/linux with Xorg and Windows). I'm not going to render anything but only call functions like glGetString, glCompileShader and similar.
I've done some googling but haven't come up with anything useful, except creating a hidden window, which seems like a hack to me.
So does anyone have a better idea (for any platform)?
EDIT: With Xorg I was able to create and attach an OpenGL context to the root-window:
#include <stdio.h>
#include <stdlib.h>
#include <X11/X.h>
#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>

int main(int argc, const char* argv[]){
    Display *dpy;
    Window root;
    GLint att[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi;
    GLXContext glc;

    dpy = XOpenDisplay(NULL);
    if (!dpy) {
        printf("\n\tcannot connect to X server\n\n");
        exit(0);
    }

    root = DefaultRootWindow(dpy);
    vi = glXChooseVisual(dpy, 0, att);
    if (!vi) {
        printf("\n\tno appropriate visual found\n\n");
        exit(0);
    }

    glc = glXCreateContext(dpy, vi, NULL, GL_TRUE);
    glXMakeCurrent(dpy, root, glc);

    printf("vendor: %s\n", (const char*)glGetString(GL_VENDOR));
    return 0;
}
EDIT2: I've written a short article about windowless opengl (with sample code) based on the accepted answer.
Actually, it is necessary to have a window handle to create a "traditional" rendering context (the root window on X11 or the desktop window on Windows are good for this). It is used to fetch OpenGL information and extension availability.
Once you've got that information, you can destroy the render context and release the "dummy" window!
You should test for the ARB_extensions_string and ARB_create_context_profile extensions (described on this page: ARB_create_context). Then you can create a render context by calling CreateContextAttribs in a platform-independent way, without having a system window associated and requiring only the system device context:
int[] mContextAttrib = new int[] {
    Wgl.CONTEXT_MAJOR_VERSION, REQUIRED_OGL_VERSION_MAJOR,
    Wgl.CONTEXT_MINOR_VERSION, REQUIRED_OGL_VERSION_MINOR,
    Wgl.CONTEXT_PROFILE_MASK,  (int)(Wgl.CONTEXT_CORE_PROFILE_BIT),
    Wgl.CONTEXT_FLAGS,         (int)(Wgl.CONTEXT_FORWARD_COMPATIBLE_BIT),
    0
};

if ((mRenderContext = Wgl.CreateContextAttribs(mDeviceContext, pSharedContext, mContextAttrib)) == IntPtr.Zero)
    throw new Exception("unable to create context");
Then, you could associate a frame buffer object or a system window to the created render context, if you wish to render (but as I understand, you want to compile only shaders).
Using CreateContextAttribs has many advantages:
It is platform independent
It's possible to request specific OpenGL implementation
It's possible to request a > 3.2 OpenGL implementation
It's possible to force the forward compatibility option (shader only rendering, that's the future way)
It's possible to select (in a forward-compatible context only) a specific OpenGL implementation profile (actually there is only the CORE profile, but there could be more in the future)
It's possible to enable a debugging option, even if it isn't defined how this option could be used by the actual driver implementation
However, older hardware/drivers might not implement this extension, so I suggest writing fallback code that creates a backward-compatible context.
Until you create a window, OpenGL has no idea which implementation you'll get. For example, there's a very different driver (and different hardware acceleration) for OpenGL in a remote X-Windows session vs. OpenGL in a DRI X-Windows session. Shader language support might differ between these cases, and the output of the shader compiler is definitely going to be implementation-dependent, as are any errors generated by resource exhaustion.
So while actually creating a window may not be 100% necessary, you have to associate your context with the graphics hardware (or lack thereof) somehow, and since this can be done with a window no one bothered implementing an alternate method.
You need a window to host the context and you need a context to be able to do anything.
Source
If you don't want to display anything make the window invisible.
If there was another way to do this, it would be documented somewhere and easily found as it's not an uncommon problem.
One of the things I have done - which is admittedly a bit of a hack - to avoid the overhead of creating my own GL window is to leverage the windows of other open processes.
The key to understanding OpenGL is this: all you need to create a GL context with the call to wglCreateContext is a valid DC.
There's NOTHING in the documentation which says it has to be one you own.
For testing this out, I popped up World of Warcraft and, leveraging Spy++ to obtain a window handle, I manually plugged that handle into a call to GetDC, which returned a valid device context, and from there I ran the rest of my GL code like normal.
No GL window creation of my own.
Here's what happened when I did this with both World of Warcraft and Star Trek Online: https://universalbri.wordpress.com/2015/06/05/experiment-results
So to answer your question: YES, you do need a window, but there's nothing in the documentation which states that the window needs to be owned by you.
Now be advised: I couldn't get this method to produce valid visual output using the desktop window, but I was able to successfully create a DC by using the GetDesktopWindow API for the HWND and then calling GetDC. So if there's non-visual processing you want to use OpenGL for, let me know what you're doing, I am curious; and if you DO happen to get the GetDesktopWindow method working with visuals, PLEASE repost on this thread what you did.
Good luck.
And don't let anyone tell you it can't be done.
When there's a will there's a way.
With GLFW, you can do this by setting a single window hint before creating the window: glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);