I am trying to shut down my DX12 renderer and restart it within the same process.
The application is heavily based on the Microsoft MiniEngine example code, with some modifications to allow re-initialisation of global variables. I am using SDL for window and event management.
The last stumbling block for a clean shutdown and re-initialisation, it appears, is the loading of textures in a texture manager class, which in turn uses the DirectXTK12 code to load textures, via CreateWICTextureFromFileEx for .png files.
To summarise what I'm trying to do:
start up application
initialise rendering into SDL window
render in rendering loop
shut down all rendering and window handling (remove all resources, release device handle) - calls SDL_Quit
re-initialise rendering into new SDL window (get new device handle, etc)
render in rendering loop
The texture management class is shut down as part of the rendering shutdown, removing all traces of textures and their handles to resources etc.
As part of the rendering re-initialisation, the default textures are created via CreateWICTextureFromFileEx, and this is where the crash occurs.
EDIT: since first posting, I can say that the crashing starts directly after calling SDL_Quit(), and persists after restarting with SDL_Init(SDL_INIT_VIDEO).
I am now fairly confident it's an issue with this area specifically rather than some other part of the renderer that isn't being re-initialised correctly, since I can force it to use .DDS textures instead of .pngs, and the system fires up again nicely. In this case it uses DDSTextureLoader without any (obvious) problems.
I've added the source to my project and can see the crash is occurring when trying to do this:
ComPtr<IWICBitmapDecoder> decoder;
HRESULT hr = pWIC->CreateDecoderFromFilename(fileName,
                                             nullptr,
                                             GENERIC_READ,
                                             WICDecodeMetadataCacheOnDemand,
                                             decoder.GetAddressOf());
The failure reported is
Unhandled exception thrown: read access violation.
pWIC->**** was 0x7FFAC0B0C610.
pWIC here is obtained via _GetWIC() which is using a singleton initialisation:
IWICImagingFactory2* _GetWIC() noexcept
{
    static INIT_ONCE s_initOnce = INIT_ONCE_STATIC_INIT;

    IWICImagingFactory2* factory = nullptr;
    if (!InitOnceExecuteOnce(
            &s_initOnce,
            InitializeWICFactory,
            nullptr,
            reinterpret_cast<LPVOID*>(&factory)))
    {
        return nullptr;
    }

    return factory;
}
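For reference, the InitializeWICFactory callback creates the factory through COM, which is why COM teardown invalidates it (see the answer below). A simplified sketch of what the callback boils down to; this is paraphrased, not the verbatim DirectXTK source, which also falls back to plain IWICImagingFactory:

#include <wincodec.h>   // IWICImagingFactory2, CLSID_WICImagingFactory2

// Sketch of the INIT_ONCE callback: the WIC factory is a COM object from
// CoCreateInstance, so it is only valid while COM stays initialised.
BOOL WINAPI InitializeWICFactory(PINIT_ONCE, PVOID, PVOID* ifactory) noexcept
{
    HRESULT hr = CoCreateInstance(
        CLSID_WICImagingFactory2,
        nullptr,
        CLSCTX_INPROC_SERVER,
        __uuidof(IWICImagingFactory2),
        ifactory);
    return SUCCEEDED(hr) ? TRUE : FALSE;
}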
As noted in the edit above, the crashing starts directly after calling SDL_Quit(). I have test code that starts the graphics up just enough to get a device and load a texture, and it completes successfully at any point before SDL_Quit.
Re-initialising SDL with SDL_Init(SDL_INIT_VIDEO) doesn't help.
I also note that the WICTextureLoader comments state "Assumes application has already called CoInitializeEx". Is this an area that SDL_Quit could be messing with?
I'll post an answer here, but will remove it if @ChuckWalbourn wants to post his own.
This was due to letting SDL call CoInitialize for me. When it cleaned up in SDL_Quit, it called CoUninitialize, which (presumably) invalidated the IWICImagingFactory2 set up by WICTextureLoader. By adding my own call to CoInitialize in the rendering startup, COM's per-thread reference count stays above zero, so the IWICImagingFactory2 stays alive and all is well.
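For illustration, a minimal sketch of the fix, with hypothetical startup/shutdown hooks standing in for the real renderer code:

#include <objbase.h>  // CoInitializeEx / CoUninitialize

void RendererStartup()   // hypothetical startup hook
{
    // Take our own reference on COM for this thread. Even if SDL calls
    // CoUninitialize inside SDL_Quit, the per-thread count stays above
    // zero and the WIC factory held by WICTextureLoader remains valid.
    HRESULT hr = CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    // S_FALSE just means COM was already initialised on this thread.
    if (FAILED(hr)) { /* handle error */ }

    // ... create device, texture manager, etc. ...
}

void RendererShutdown()  // hypothetical shutdown hook
{
    // ... release textures, device, etc. ...
    CoUninitialize();    // balance our CoInitializeEx
}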
I'm trying to build an application that can spawn a window on a separate thread. Let me explain a simple version: my main creates an object that has a window; let's call this object a menu. From the menu you can select what to do, for example open an image in a new window. This whole object, or rather its "game loop", needs to be on a separate thread so that I can keep interacting with the menu. I also need to interact with the image viewer.
My question is, what is the proper way of doing this?
I haven't really used threads much before, but from what I understand I need to detach the thread to create a daemon thread.
I tried to play around with the thread to create this but I kept getting these errors:
Failed to activate the window's context
Failed to activate OpenGL context: The requested resource is in use.
I'm not certain what causes this; all my objects, including the windows, are separate instances. The application still runs fine even with these errors.
My application is quite big so here's an extremely simplified version of the code I've tried.
int main()
{
    Menu menu;  // this spawns a window
    menu.run(); // let's say for simplicity this doesn't do anything else other
                // than create a new window (the image viewer)
}
...
void caller(Image_view *img_view)
{
    img_view->run();
}

void Menu::run()
{
    Image_view *img_view = new Image_view(); // This creates the window
    this->thread = new std::thread(caller, img_view);
    this->thread->detach();

    while (1); // This is here to keep the application running.
               // In a real application this method would look different:
               // the whole thread call would be in an event handler instead,
               // but for this example I tried to make it as simple as possible.
}
...
void Image_view::run()
{
    while (running)
    {
        update(); // Event handler and whatever
        render(); // Renders the image and whatever
    }
    this->window->close();
}
I mostly want to know if I'm using the thread correctly or not in an application like this. Also if you have any insight as to what the error message means, explaining it would be greatly appreciated. I should also mention that I'm using SFML for rendering and creating the window instance.
The tutorials I found about threads are always extremely simple and don't involve any windows, or anything else that could cause that error message. So I figured someone smarter here might know the proper use of a thread in a case like mine.
Thanks in advance!
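For reference, SFML documents that a window's OpenGL context can be active in at most one thread at a time; the usual pattern is to deactivate the context on the creating thread before another thread starts rendering with it. A minimal sketch of that pattern, assuming SFML 2.x; the names here are illustrative, not taken from the code above:

#include <SFML/Graphics.hpp>
#include <atomic>
#include <thread>

std::atomic<bool> running{true};

// Render loop for the second thread: it takes over the window's OpenGL
// context, which must not be active anywhere else.
void renderLoop(sf::RenderWindow* window)
{
    window->setActive(true);   // activate the context in this thread
    while (running)
    {
        window->clear();
        // ... draw the image ...
        window->display();
    }
    window->setActive(false);
}

int main()
{
    sf::RenderWindow window(sf::VideoMode(800, 600), "Image view");
    window.setActive(false);   // release the context on the creating thread

    std::thread render(renderLoop, &window);

    // Event handling stays on the thread that created the window.
    while (running)
    {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                running = false;   // tell the render thread to stop
    }
    render.join();   // join instead of detach, so shutdown stays orderly
    window.close();
    return 0;
}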
I have a native target application that renders with Direct3D11. I want to extend the target's functionality by injecting a DLL and hooking some APIs (specifically XInputGetState, though that's not important here). When the DLL is injected, it also creates a window and displays some useful information. To render the information in the window I use Direct2D, but after injecting the DLL into the other process's address space, calling ID2D1Factory::CreateHwndRenderTarget fails with the error code D2DERR_NO_HARDWARE_DEVICE and doesn't create the ID2D1HwndRenderTarget object. The factory object is created successfully and is not NULL.
When I change the project type from Dynamic Link Library (.dll) to Application (.exe) and the entry point from DllMain to main, and run it as a separate console application, ID2D1Factory::CreateHwndRenderTarget succeeds.
I think the problem is caused by the existence of an already-created Direct3D11 device, but I am not sure.
Is there documentation about that? How can I solve the issue?
Put the D2D creation code into a new thread, started with CreateThread from DllMain.
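Roughly, that suggestion looks like the sketch below. This is illustrative only: CreateOverlayWindow is a hypothetical placeholder for however you create or find your window, and the render-target size is arbitrary. Creating a thread in DllMain is generally workable provided you don't wait on it inside DllMain; the thread body itself only runs after the loader lock is released:

#include <windows.h>
#include <d2d1.h>
#include <d2d1helper.h>
#pragma comment(lib, "d2d1.lib")

// Hypothetical helper, not part of any library: create/find the overlay window.
HWND CreateOverlayWindow();

// Runs outside the loader lock, after DllMain has returned.
DWORD WINAPI InitD2DThread(LPVOID)
{
    ID2D1Factory* factory = nullptr;
    HRESULT hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &factory);
    if (FAILED(hr))
        return 1;

    HWND hwnd = CreateOverlayWindow();   // placeholder
    ID2D1HwndRenderTarget* target = nullptr;
    hr = factory->CreateHwndRenderTarget(
        D2D1::RenderTargetProperties(),
        D2D1::HwndRenderTargetProperties(hwnd, D2D1::SizeU(640, 480)),
        &target);

    // ... message loop / drawing ...
    return SUCCEEDED(hr) ? 0 : 1;
}

BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID)
{
    if (reason == DLL_PROCESS_ATTACH)
        CreateThread(nullptr, 0, InitD2DThread, nullptr, 0, nullptr);
    return TRUE;
}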
In SDL, if I destroy a window, do I need to delete the OpenGL context beforehand, or is it deleted automatically? I don't want a memory leak.
Also, when do I need to call SDL_GL_MakeCurrent? Is this only required if I have multiple windows, each with a GL context?
Couldn't find anything in the documentation.
Well, I call
SDL_GL_DeleteContext
SDL_DestroyWindow
SDL_QuitSubSystem
in that order. I once read the documentation very carefully, and I vaguely remember this being mentioned somewhere in it. I should warn that I read the SDL2 documentation, but this should be the same in SDL1.
Because all these SDL internals are easy to forget, I wrote a nice C++ wrapper:
https://github.com/Superlokkus/CG1/blob/master/src/sdl2_opengl_helper.cpp#L68
SDL doesn't delete the contexts automatically; you should do it manually.
Usually, my shutdown sequence goes like:
SDL_GL_DeleteContext(m_context);
SDL_DestroyWindow(m_window);
SDL_Quit();
Keeping track of the pointer shouldn't be that much of an issue either, since you could wrap the window system in a simple class/struct and just pass that around, like so:
class Window
{
public:
    SDL_Window* window;
    SDL_GLContext context;
};
As for your second question, each context is tied to the SDL window you specify when you make that context current. Making another context current and rendering will draw to whichever window/context pair is current.
You need to call SDL_GL_MakeCurrent once you create the window in order to use it. With multiple windows, make the context for the window you want to render to current. You should also use MakeCurrent if you want to access OpenGL resources from another thread, but keep in mind that a context can only be active in ONE thread at a time, and you will have to call the function again in your main thread before the next use there.
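To illustrate, a minimal sketch of one context shared between two windows (assuming both were created with SDL_WINDOW_OPENGL and compatible GL attributes; titles, positions and sizes are arbitrary):

#include <SDL.h>

int main(int, char**)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* winA = SDL_CreateWindow("A", SDL_WINDOWPOS_UNDEFINED,
        SDL_WINDOWPOS_UNDEFINED, 640, 480, SDL_WINDOW_OPENGL);
    SDL_Window* winB = SDL_CreateWindow("B", SDL_WINDOWPOS_UNDEFINED,
        SDL_WINDOWPOS_UNDEFINED, 640, 480, SDL_WINDOW_OPENGL);

    // One context shared between both windows; it can be current for only
    // one window (and one thread) at a time.
    SDL_GLContext ctx = SDL_GL_CreateContext(winA);

    SDL_GL_MakeCurrent(winA, ctx);
    // ... GL calls now target winA ...
    SDL_GL_SwapWindow(winA);

    SDL_GL_MakeCurrent(winB, ctx);
    // ... GL calls now target winB ...
    SDL_GL_SwapWindow(winB);

    // Cleanup in the order discussed above.
    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(winB);
    SDL_DestroyWindow(winA);
    SDL_Quit();
    return 0;
}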
I have learned about setting up a separate rendering thread for a Qt QGLWidget here, here and here.
I also managed to get a kind of "working" setup: clearing the color in the viewport. That seems to be OK, but I am getting the following warning:
QOpenGLContext::swapBuffers() called with non-exposed window, behavior is undefined
I first create a widget that inherits from QGLWidget, where I also set up the OpenGL format:
In the Widget constructor:
QGLFormat format;
format.setProfile(QGLFormat::CompatibilityProfile);
format.setVersion(4,3);
format.setDoubleBuffer(true);
format.setSwapInterval(1);
setFormat(format);
setAutoBufferSwap(false);
Then I initialise the rendering thread in the same widget:
void GLThreadedWidget::initRenderThread(void)
{
    doneCurrent();
    context()->moveToThread(&m_renderThread);
    m_renderThread.start();
}
and from that point the whole rendering is done inside that thread:
RenderThread constructor:
RenderThread::RenderThread(GLThreadedWidget *parent)
    : QThread(), glWidget(parent)
{
    doRendering = true;
}
RenderThread run() method:
void RenderThread::run()
{
    glWidget->makeCurrent();

    GLenum err = glewInit();
    if (GLEW_OK != err) {
        printf("GLEW error: %s\n", glewGetErrorString(err));
    } else {
        printf("Glew loaded; using version %s\n", glewGetString(GLEW_VERSION));
    }

    glInit();
    while (doRendering) {
        glWidget->makeCurrent();
        glClear(GL_COLOR_BUFFER_BIT);
        paintGL(); // render actual frame
        glWidget->swapBuffers();
        glWidget->doneCurrent();
        msleep(16);
    }
}
Can anyone point out where the issue is, and whether that message can be discarded? A straightforward and concise explanation of render-thread setup in Qt would also be extremely helpful. I'm using Qt 5.2 (Desktop OpenGL build).
With what you've shown, it looks like that message handler warning you were getting was because you started triggering buffer swaps "too soon" in the window setup sequence, either directly through QGLContext::/QOpenGLContext::swapBuffers() or indirectly through a number of possible ways, none of which are really detectable outside of manual debugging. What I mean by too soon is before the widget's parent window was marked "exposed" (before it was being displayed by the windowing system).
As far as whether the message can be discarded, it can... but it's not safe, in that you can get undefined behavior for the first few frames while the window isn't ready (especially if you're immediately resizing to different extents at startup than your .ui file specifies). The Qt documentation says that before your window is exposed, Qt basically has to tell OpenGL to paint according to what are effectively non-trustworthy extents. Personally, I'm not sure that's all that can happen, though.
With the code you showed, there's an easy fix: avoid even starting your render logic until your window says it's exposed. Detecting exposure using QGLWidget isn't obvious, though. Here's an example roughly like what I use, assuming your QGLWidget subclass was something like 'OGLRocksWidget', it was a child of a central widget, and that central widget was a child of your QMainWindow implementation (so that your widget has to call parentWidget()->parentWidget() to get at its QMainWindow):
void OGLRocksWidget::paintGL()
{
    QMainWindow *window_ptr =
        dynamic_cast<QMainWindow *>(parentWidget() ? parentWidget()->parentWidget() : 0);
    QWindow *qwindow_ptr = (window_ptr ? window_ptr->windowHandle() : 0);
    if (qwindow_ptr && qwindow_ptr->isExposed())
    {
        // don't start rendering until you can get in here, just return...
        // probably even better to make sure QGLWidget::isVisible() too
    }
}
Of course you don't have to do this in your implementation of QGLWidget::paintGL(), but in your particular setup you're better off not even starting your render thread until your window tells you it's exposed.
It looks like you might have slightly bigger problems than that, though. You weren't hooking the right GL activity into the places QGLWidget intends. I feel for the position you were in, because the documentation on this is a little spotty and scattered. For that part, QGLWidget's detailed description, down where it says "Here is a rough outline of how a QGLWidget subclass might look", is a good place to start getting the idea. You'll want to override any of the key virtuals in there that you have related code for, and move that code into those calls.
So, for example, your widget's constructor is doing setup work that is probably safer to put in an initializeGL() override, since QGLWidget's intent is to signal you through that call when it's safely time to do that. What I mean by "safer" here is that you won't get seemingly random debug exceptions (which in release builds can silently wreak havoc on your runtime stability).
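For instance, a minimal sketch of that shape, using the widget name from the question. This is the plain single-threaded QGLWidget pattern; the threaded variant still needs the exposure check described above:

#include <GL/glew.h>
#include <QtOpenGL/QGLWidget>

// Sketch only: one-time setup moves out of the constructor and into the
// hooks QGLWidget provides.
void GLThreadedWidget::initializeGL()
{
    // The context is current here, so GL/GLEW setup is safe.
    GLenum err = glewInit();
    if (err != GLEW_OK)
        qWarning("GLEW error: %s",
                 reinterpret_cast<const char*>(glewGetErrorString(err)));
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
}

void GLThreadedWidget::resizeGL(int w, int h)
{
    glViewport(0, 0, w, h);   // called with valid extents once exposed
}

void GLThreadedWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the actual frame ...
}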
Side advice: install Qt source, point your debugger at it, and watch your code run, including into Qt. Your setFormat() call, last time I watched it, actually deletes the current underlying QOpenGLContext. That's probably good to know because you'll want to create a new one soon after or at least test out your options.
The risk of instability is why I'm trying to put together at least some kind of answer here a year later. I just learned this through a lot (too much) debugging. I love what the Qt team's done with it, but Qt will be much better off when they finish migrating everything over to QOpenGL* calls (or wherever they see a final proper place for their OpenGL support including permanent considerations for it and windowing support together).
A QOpenGLWidget comes with its own context. If you want a background thread to do the rendering, you have to pass a shared context to the thread and get a few steps right.
Details in: https://stackoverflow.com/a/50368372/3082081
I would like to have a windowless OpenGL context (on both GNU/Linux with Xorg, and on Windows). I'm not going to render anything, only call functions like glGetString, glCompileShader and similar.
I've done some googling but haven't come up with anything useful, except creating a hidden window; which seems like a hack to me.
So does anyone have a better idea (for any platform)?
EDIT: With Xorg I was able to create and attach an OpenGL context to the root-window:
#include <stdio.h>
#include <stdlib.h>
#include <X11/X.h>
#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>

int main(int argc, const char* argv[])
{
    Display *dpy;
    Window root;
    GLint att[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi;
    GLXContext glc;

    dpy = XOpenDisplay(NULL);
    if (!dpy) {
        printf("\n\tcannot connect to X server\n\n");
        exit(0);
    }

    root = DefaultRootWindow(dpy);

    vi = glXChooseVisual(dpy, 0, att);
    if (!vi) {
        printf("\n\tno appropriate visual found\n\n");
        exit(0);
    }

    glc = glXCreateContext(dpy, vi, NULL, GL_TRUE);
    glXMakeCurrent(dpy, root, glc);

    printf("vendor: %s\n", (const char*)glGetString(GL_VENDOR));
    return 0;
}
EDIT2: I've written a short article about windowless OpenGL (with sample code) based on the accepted answer.
Actually, it is necessary to have a window handle to create a "traditional" rendering context (the root window on X11 or the desktop window on Windows are good for this). It is used to fetch OpenGL information and extension availability.
Once you have that information, you can destroy the render context and release the "dummy" window!
You should test for the ARB_extensions_string and ARB_create_context_profile extensions (described on this page: ARB_create_context). Then you can create a render context by calling CreateContextAttribs, in a platform-independent way, without a system window associated, requiring only the system device context:
int[] mContextAttrib = new int[] {
    Wgl.CONTEXT_MAJOR_VERSION, REQUIRED_OGL_VERSION_MAJOR,
    Wgl.CONTEXT_MINOR_VERSION, REQUIRED_OGL_VERSION_MINOR,
    Wgl.CONTEXT_PROFILE_MASK,  (int)(Wgl.CONTEXT_CORE_PROFILE_BIT),
    Wgl.CONTEXT_FLAGS,         (int)(Wgl.CONTEXT_FORWARD_COMPATIBLE_BIT),
    0
};

if ((mRenderContext = Wgl.CreateContextAttribs(mDeviceContext, pSharedContext, mContextAttrib)) == IntPtr.Zero)
    throw new Exception("unable to create context");
Then you could associate a framebuffer object or a system window with the created render context, if you wish to render (but as I understand it, you want to compile shaders only).
Using CreateContextAttribs has many advantages:
It is platform independent
It's possible to request specific OpenGL implementation
It's possible to request a > 3.2 OpenGL implementation
It's possible to force the forward compatibility option (shader only rendering, that's the future way)
It's possible to select (in a forward compatible context only) a specific OpenGL implementation profile (currently there is only the CORE profile, but there could be more in the future)
It's possible to enable a debugging option, even if it isn't defined how this option could be used by the actual driver implementation
However, older hardware/drivers may not implement this extension, so I suggest writing fallback code in order to create a backward-compatible context.
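On Windows the entry point is wglCreateContextAttribsARB. A C++ sketch of the fallback logic, assuming the Khronos wglext.h header for the ARB tokens and that a temporary legacy context is already current (wglGetProcAddress needs one):

#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   // PFNWGLCREATECONTEXTATTRIBSARBPROC, WGL_* tokens

// Try the modern entry point; fall back to the legacy one on older drivers.
// Assumes hdc already has a pixel format set.
HGLRC CreateBestContext(HDC hdc)
{
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        reinterpret_cast<PFNWGLCREATECONTEXTATTRIBSARBPROC>(
            wglGetProcAddress("wglCreateContextAttribsARB"));

    if (wglCreateContextAttribsARB)
    {
        const int attribs[] = {
            WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
            WGL_CONTEXT_MINOR_VERSION_ARB, 2,
            WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
            0
        };
        HGLRC ctx = wglCreateContextAttribsARB(hdc, nullptr, attribs);
        if (ctx)
            return ctx;
    }

    // Extension missing or creation failed: backward-compatible context.
    return wglCreateContext(hdc);
}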
Until you create a window, OpenGL has no idea what implementation you use. For example, there's a very different driver (and different hardware acceleration) for OpenGL in a remote X-Windows session vs OpenGL in a DRI X-Windows session. Shader language support might differ between these cases, and the output of the shader compiler is definitely going to be implementation-dependent, as well as any errors generated based on resource exhaustion.
So while actually creating a window may not be 100% necessary, you have to associate your context with the graphics hardware (or lack thereof) somehow, and since this can be done with a window no one bothered implementing an alternate method.
You need a window to host the context and you need a context to be able to do anything.
Source
If you don't want to display anything make the window invisible.
If there was another way to do this, it would be documented somewhere and easily found as it's not an uncommon problem.
One of the things I have done, which is admittedly a bit of a hack, to avoid the overhead of creating my own GL window is to leverage other processes' open windows.
The key to understanding OpenGL is this: all you need to create a GL context with a call to wglCreateContext is a valid DC.
There's NOTHING in the documentation which says it has to be one you own.
For testing this out, I popped up World of Warcraft, and leveraging Spy++ to obtain a window handle, I manually plugged that handle into a call to GetDC, which returns a valid device context. From there, I ran the rest of my GL code like normal.
No GL window creation of my own.
Here's what happened when I did this with both World of Warcraft and Star Trek Online: https://universalbri.wordpress.com/2015/06/05/experiment-results
So to answer your question, YES you do need a window, but there's nothing in the documentation which states that window needs to be owned by you.
Now be advised: I couldn't get this method to provide valid visual output using the desktop window, but I was able to successfully create a DC by using the GetDesktopWindow API for the HWND and then calling GetDC. So if there's non-visual processing you want to use OpenGL for, let me know what you're doing, I am curious; and if you DO happen to get the GetDesktopWindow method working with visuals, PLEASE repost on this thread what you did.
Good luck.
And don't let anyone tell you it can't be done.
When there's a will there's a way.
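For the curious, the borrowed-DC experiment boils down to something like the following sketch. This is very much unsupported territory: SetPixelFormat on a DC you don't own (the desktop window's included) may fail or misbehave, which matches the caveats above:

#include <windows.h>
#include <GL/gl.h>
#include <cstdio>
#pragma comment(lib, "opengl32.lib")

int main()
{
    // Borrow the desktop window's DC instead of creating a window.
    HDC hdc = GetDC(GetDesktopWindow());

    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1 };
    pfd.dwFlags = PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;

    // This is the fragile step on a DC you don't own.
    int pf = ChoosePixelFormat(hdc, &pfd);
    if (!pf || !SetPixelFormat(hdc, pf, &pfd))
        return 1;

    HGLRC ctx = wglCreateContext(hdc);
    wglMakeCurrent(hdc, ctx);
    std::printf("vendor: %s\n", (const char*)glGetString(GL_VENDOR));

    wglMakeCurrent(nullptr, nullptr);
    wglDeleteContext(ctx);
    ReleaseDC(GetDesktopWindow(), hdc);
    return 0;
}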
With GLFW, you can do this by setting the GLFW_VISIBLE window hint to GLFW_FALSE before creating the window.
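In code, that looks roughly like this (GLFW 3.x):

#include <GLFW/glfw3.h>
#include <cstdio>

int main()
{
    if (!glfwInit())
        return 1;

    // Ask for a window that is never shown; the context is still real.
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    GLFWwindow* window = glfwCreateWindow(640, 480, "hidden", nullptr, nullptr);
    if (!window)
        return 1;

    glfwMakeContextCurrent(window);
    std::printf("vendor: %s\n", (const char*)glGetString(GL_VENDOR));

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}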