Qt: Managing an OpenGL context in a separate thread - C++

I have learned about setting up a separate rendering thread for a Qt QGLWidget here, here and here.
I also managed to get a kind of "working" setup: clearing the color in the viewport. Seems to be OK. But I am getting the following warning:
QOpenGLContext::swapBuffers() called with non-exposed window, behavior is undefined
I first create a widget that inherits from QGLWidget, where I also set up the OpenGL format:
In the Widget constructor:
QGLFormat format;
format.setProfile(QGLFormat::CompatibilityProfile);
format.setVersion(4,3);
format.setDoubleBuffer(true);
format.setSwapInterval(1);   // sync buffer swaps to the vertical refresh
setFormat(format);
setAutoBufferSwap(false);    // we will swap manually from the render thread
Then I init the rendering thread in the same Widget:
void GLThreadedWidget::initRenderThread(void){
    doneCurrent();                              // release the context in the GUI thread
    context()->moveToThread(&m_renderThread);   // hand it over to the render thread
    m_renderThread.start();
}
and from that point the whole rendering is done inside that thread:
RenderThread constructor:
RenderThread::RenderThread(GLThreadedWidget *parent)
    : QThread(), glWidget(parent)
{
    doRendering = true;
}
RenderThread run() method:
void RenderThread::run(){
    glWidget->makeCurrent();
    GLenum err = glewInit();
    if (GLEW_OK != err) {
        printf("GLEW error: %s\n", glewGetErrorString(err));
    } else {
        printf("GLEW loaded; using version %s\n", glewGetString(GLEW_VERSION));
    }
    glInit();
    while (doRendering){
        glWidget->makeCurrent();
        glClear(GL_COLOR_BUFFER_BIT);
        paintGL();               // render actual frame
        glWidget->swapBuffers();
        glWidget->doneCurrent();
        msleep(16);
    }
}
Can anyone point out where the issue is, and whether that message can be safely ignored? A straightforward, concise explanation of render-thread setup in Qt would also be extremely helpful. I'm using Qt 5.2 (Desktop OpenGL build).

With what you've shown, it looks like the message-handler warning you were getting appeared because you started triggering buffer swaps "too soon" in the window setup sequence, either directly through QGLContext::/QOpenGLContext::swapBuffers() or indirectly through a number of possible paths, none of which are really detectable outside of manual debugging. By "too soon" I mean before the widget's parent window was marked "exposed" (before it was being displayed by the windowing system).
As far as whether the message can be discarded: it can, but it's not safe to do. You can get undefined behavior for the first few frames while the window isn't ready (especially if you're immediately resizing to different extents at startup than your .ui file specifies). The Qt documentation says that before your window is exposed, Qt basically has to tell OpenGL to paint according to extents that are effectively untrustworthy. Personally, I'm not sure that's all that can happen.
With the code you showed, there's an easy fix: avoid even starting your render logic until your window says it's exposed. Detecting exposure from QGLWidget isn't obvious, though. Here's an example roughly like what I use. It assumes your QGLWidget subclass is called 'OGLRocksWidget', that it is a child of a central widget, and that the central widget is a child of your QMainWindow implementation (so the widget has to call parentWidget()->parentWidget() to get at its QMainWindow):
void OGLRocksWidget::paintGL()
{
    QMainWindow *window_ptr =
        dynamic_cast<QMainWindow *>(parentWidget() ? parentWidget()->parentWidget() : 0);
    QWindow *qwindow_ptr = (window_ptr ? window_ptr->windowHandle() : 0);
    if (qwindow_ptr && qwindow_ptr->isExposed())
    {
        // don't start rendering until you can get in here, just return...
        // probably even better to make sure QGLWidget::isVisible() too
    }
}
Of course you don't have to do this in your implementation of QGLWidget::paintGL(), but in your particular setup you're better off not even starting your render thread until your window tells you it's exposed.
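For instance, folding that check into the asker's setup might look roughly like this (a sketch only: the m_renderStarted flag is illustrative and not in the original code, and QWidget::window()->windowHandle() is just a shorter route to the QWindow than walking parentWidget() twice):
void GLThreadedWidget::paintGL()
{
    // Wait until the windowing system reports the top-level window as
    // exposed, then hand rendering off to the thread exactly once.
    QWindow *native = window()->windowHandle();
    if (!m_renderStarted && native && native->isExposed() && isVisible())
    {
        m_renderStarted = true;   // illustrative guard flag
        initRenderThread();       // the asker's existing setup function
    }
}
After the handoff you would also want Qt's own paint path to back off, e.g. with an empty paintEvent() override, which is the usual trick in the threaded-QGLWidget examples linked in the question.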
It looks like you might have slightly bigger problems than that, though. You weren't hooking the right GL activity into the right places relative to QGLWidget's intent. I feel for the position you were in, because the documentation on this is a little spotty and scattered. For that part, the spot in QGLWidget's detailed description where it says "Here is a rough outline of how a QGLWidget subclass might look" is a good place to start getting the idea. You'll want to override any of the key virtuals in there that you have related code for, and move that code into those calls.
So, for example, your widget's constructor is doing setup work that is probably safer to put in an initializeGL() override, since QGLWidget's intent is to signal you through that call when it's safely time to do that setup. By "safer" I mean that you won't get seemingly random debug exceptions (which, in release builds, can silently wreak havoc on your runtime stability).
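For example, the GLEW and GL-state setup could move into something like this (a sketch for plain, single-threaded QGLWidget usage; in the asker's threaded design this would still run on the GUI thread before the context handoff, and the clear color is arbitrary):
void GLThreadedWidget::initializeGL()
{
    // QGLWidget calls this exactly once, with the context already
    // current, so GL and GLEW calls are safe here.
    GLenum err = glewInit();
    if (err != GLEW_OK)
        printf("GLEW error: %s\n", glewGetErrorString(err));
    glClearColor(0.2f, 0.2f, 0.2f, 1.0f);   // arbitrary clear color
}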
Side advice: install the Qt source, point your debugger at it, and watch your code run, stepping into Qt itself. Your setFormat() call, last time I watched it, actually deletes the current underlying QOpenGLContext. That's probably good to know, because you'll want to create a new one soon after, or at least test out your options.
The risk of instability is why I'm trying to put together at least some kind of answer here a year later. I learned this through a lot of (too much) debugging. I love what the Qt team has done, but Qt will be much better off when they finish migrating everything over to the QOpenGL* classes (or wherever they see a final, proper home for their OpenGL and windowing support together).

A QOpenGLWidget comes with its own context. If you want a background thread to do the rendering, you have to pass a shared context to the thread and get a few steps right.
Details in: https://stackoverflow.com/a/50368372/3082081
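The gist of it, as a rough sketch (names like m_workerThread are illustrative; the linked answer covers the full set of steps, including rendering into a framebuffer object and compositing the result back into the widget):
// In the QOpenGLWidget subclass, once the widget's own context exists
// (e.g. in or after initializeGL()):
QOpenGLContext *shared = new QOpenGLContext;
shared->setFormat(context()->format());
shared->setShareContext(context());     // share resources with the widget
shared->create();

QOffscreenSurface *surface = new QOffscreenSurface;
surface->setFormat(shared->format());
surface->create();                      // must be done in the GUI thread

shared->moveToThread(&m_workerThread);  // use it only from that thread

// In the worker thread:
//   shared->makeCurrent(surface);
//   ... render into a QOpenGLFramebufferObject ...
//   the resulting texture is visible to the widget's context via sharing.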

Related

How to use threads with multiple windows in C++?

I'm trying to build an application that can spawn a window on a separate thread. Let me explain a simple version. My main creates an object that has a window; let's call this object a menu. From the menu you can select what to do, for example open an image in a new window. This whole object, or the object's "game loop", needs to be on a separate thread so that I can keep interacting with the menu. I also need to interact with the image viewer.
My question is, what is the proper way of doing this?
I haven't really used threads a lot before. But from what I understand I need to detach the thread to create a daemon thread.
I tried to play around with the thread to create this but I kept getting these errors:
Failed to activate the window's context
Failed to activate OpenGL context: The requested resource is in use.
I'm not certain what causes this; all objects, like my windows, are different instances. The application still runs fine even with these errors.
My application is quite big so here's an extremely simplified version of the code I've tried.
int main()
{
    Menu menu;  // this spawns a window
    menu.run(); // let's say for simplicity this doesn't do anything else other than
                // create a new window (the image viewer)
}
...
void caller(Image_view *img_view)
{
    img_view->run();
}

void Menu::run()
{
    Image_view *img_view = new Image_view(); // This creates the window
    this->thread = new std::thread(caller, img_view);
    this->thread->detach();
    while (1); // This is here to keep the application running;
               // in a real application this method would look different.
               // This whole thread call would be in an event handler instead,
               // but for this example I tried to make it as simple as possible.
}
...
void Image_view::run()
{
    while (running)
    {
        update(); // Event handler and whatever
        render(); // Renders the image and whatever
    }
    this->window->close();
}
I mostly want to know whether I'm using the thread correctly in an application like this. If you have any insight as to what the error message means, an explanation would be greatly appreciated. I should also mention that I'm using SFML for rendering and for creating the window instance.
The tutorials I found about threads are always extremely simple and never involve a window or anything else that could cause that error message. So I figured someone smarter here might know the proper use of threads in my case.
Thanks in advance!
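For what it's worth (an educated guess rather than a verified fix): in SFML, a window's OpenGL context is active in the thread that created it, so the error above typically means two threads are fighting over the same context. The usual pattern is to deactivate it in the creating thread before the rendering thread activates it. sf::Window::setActive() is real SFML API; the surrounding structure just mirrors the question's code, including the assumed public window member:
void Menu::run()
{
    Image_view *img_view = new Image_view();  // creates the window here
    img_view->window->setActive(false);       // release the context in this thread
    this->thread = new std::thread(caller, img_view);
    this->thread->detach();
    // ...
}

void Image_view::run()
{
    this->window->setActive(true);            // acquire the context in this thread
    while (running)
    {
        update();
        render();
    }
    this->window->close();
}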

Do I need to call SDL_GL_DeleteContext before SDL_DestroyWindow?

In SDL, if I destroy a window, do I need to delete the OpenGL context beforehand, or is it deleted automatically? I don't want a memory leak.
Also, when do I need to call SDL_GL_MakeCurrent? Is this only required if I have multiple windows, each with its own GL context?
I couldn't find anything in the documentation.
Well, I call
SDL_GL_DeleteContext
SDL_DestroyWindow
SDL_QuitSubSystem
in that order. I once read the documentation very carefully, and I vaguely remember that this was mentioned somewhere in it. I should warn that I read the SDL2 documentation, although this should be the same in SDL1.
Because all those SDL internals are easy to forget, I wrote a nice C++ wrapper:
https://github.com/Superlokkus/CG1/blob/master/src/sdl2_opengl_helper.cpp#L68
SDL doesn't delete contexts automatically; you have to do it manually.
Usually, my teardown sequence looks like:
SDL_GL_DeleteContext(m_context);
SDL_DestroyWindow(m_window);
SDL_Quit();
Keeping track of the pointers shouldn't be much of an issue either, since you can wrap the window system in a simple class/struct and pass that around, like so:
class Window
{
public:
    SDL_Window*   window;
    SDL_GLContext context;
};
As for your second question: each context you make is tied to the SDL window you specify when you make it current. Making another context current means subsequent rendering goes to the window/context pair you just made current.
You need to call SDL_GL_MakeCurrent once you create the window in order to use it. With multiple windows, make the context you want to render to current. You should also use SDL_GL_MakeCurrent if you want to access OpenGL resources from another thread, but keep in mind that a context can only be active in ONE thread at a time, and you will have to call the function again in your main thread before its next use there.
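A minimal sketch of that thread handover (my illustration, not from the answer; as far as I know SDL2 accepts a NULL context in SDL_GL_MakeCurrent to detach the context from the current thread):
// Main thread: detach the context before the worker takes over.
SDL_GL_MakeCurrent(window, NULL);

// Worker thread: attach the same window/context pair here and render.
SDL_GL_MakeCurrent(window, context);
// ... GL calls ...
SDL_GL_SwapWindow(window);

// Main thread, once the worker is done: take the context back.
SDL_GL_MakeCurrent(window, context);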

QPainter and paintEvent : what is the use of QPaintEvent *event?

I have a school project involving creating a simple GUI and coloring graphs using a minimal number of colors. I am working with a classmate, and so far we have laid out different ideas regarding how we will store the graphs in memory and how to implement the different coloring algorithms.
To create the GUI we are using Qt, as I used it for another project before, it is free, and I find the documentation generally well detailed. Besides, I knew it had a drawing module, although I had never used it.
After reading the documentation and some examples, I was able to draw some basic shapes where I wished inside a set area of a widget, and get them to respond correctly to resizing the widget.
To draw what I wish, I can write the paintEvent method this way, and just never use *event:
void DrawArea::paintEvent(QPaintEvent *event)
{
    //method body
}
Or I can write it this way, and it works too
void DrawArea::paintEvent(QPaintEvent *)
{
    //method body
}
So, I have two questions:
How does the widget know when to call the paintEvent method? If I'm not mistaken, every widget has a paint event, and I am overriding it? If that's wrong, please correct me; maybe that is the reason why I don't really understand how this pointer works.
What is the QPaintEvent pointer? (I mean, what does it represent?)
Thanks for any insight you may give me.
So much text and so few questions...
You should learn about event handling in window systems (keywords are event loop, event queue and so on; in Windows terms, events are named "messages"). It is a simple and useful thing to know.
In short, your program repeatedly asks the OS for new tasks. If any exist, some information about them is provided and you should handle them. Otherwise the OS suspends the program until such tasks appear.
It means that the OS notifies you to handle paint events when you are ready to do it.
QPaintEvent provides additional information about the event. At present it can give you the region to redraw, which may be used to optimize painting in some cases. In simple cases it is not used.
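As a small sketch of that optimization, reusing the asker's DrawArea class (the drawing itself is illustrative):
void DrawArea::paintEvent(QPaintEvent *event)
{
    QPainter painter(this);

    // event->rect() is the area the window system asked us to repaint.
    const QRect dirty = event->rect();
    painter.fillRect(dirty, Qt::white);

    // For the graph-coloring project: redraw only the nodes and edges
    // that intersect 'dirty' instead of repainting the whole widget.
}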

QWidget will not close in full screen mode on OS X (Yosemite)

I have a child class of QWidget, and I'm trying to fix a bug where the window it is in cannot be programmatically hidden/closed using the QWidget::hide() or close() methods.
Here are some of the things that I tried:
if(widget->isFullScreen())
{
    widget->showNormal(); // Makes the window normal-sized before closing it
    widget->hide();
}
Here's another way I have tried:
if(widget->isFullScreen())
{
    widget->setWindowState(Qt::WindowMinimized);
    widget->hide();
}
I also tried setting up a slot/signal system:
if(netcam->isFullScreen())
{
    connect(this, SIGNAL(fullScreenExited()),
            this, SLOT(onFullScreenExited()));
    widget->showNormal();
    this->fullScreenExited(); // just hides the widget (or closes it)
}
else
{
    widget->hide();
}
The result every time is that the window freezes and must be closed by hand. My suspicion is that showNormal() happens asynchronously, and the subsequent close()/hide() never successfully executes.
I also tried this, in hopes that it would complete showNormal() before going on to hide()/close():
if(widget->isFullScreen())
{
    widget->showNormal();
    QApplication::processEvents();
    widget->hide();
}
THE MAIN QUESTION:
Does anybody have any suggestions for how to deal with closing a full screen QWidget from Qt code?
A question that could also help:
Is there a way to ensure that things run synchronously?
Thanks!
EDIT:
The only way I got this to work was to call showNormal() earlier in the process, which prevents overlap between the execution of showNormal() and hide(). I'll try to remember to come back later and give a good, basic example with a regular QWidget.
I should also add that the window is put into the fullscreen state with the + (full screen) button, which is located at the top of each window in OS X.
This is a known bug.
The workarounds of showNormal() or showMinimized() are not working because the window state change is not synchronous, and a single processEvents() call is not enough. You need to wait for the corresponding QEvent::WindowStateChange event to know when the window has fully moved out of fullscreen and can receive a new window state change.
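A sketch of that approach (the m_pendingHide flag is my own illustrative bookkeeping, not from the bug report):
void MyWidget::changeEvent(QEvent *event)
{
    if (event->type() == QEvent::WindowStateChange
        && m_pendingHide && !isFullScreen())
    {
        m_pendingHide = false;   // the transition out of fullscreen finished
        hide();                  // now safe to hide
    }
    QWidget::changeEvent(event);
}

// Elsewhere in MyWidget, instead of calling hide() right after showNormal():
m_pendingHide = true;            // illustrative flag
showNormal();                    // hide() then happens once the event arrives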

Windowless OpenGL

I would like to have a windowless OpenGL context (on both GNU/Linux with Xorg, and on Windows). I'm not going to render anything, only call functions like glGetString, glCompileShader and similar.
I've done some googling, but haven't come up with anything useful except creating a hidden window, which seems like a hack to me.
So does anyone have a better idea (for any platform)?
EDIT: With Xorg I was able to create an OpenGL context and attach it to the root window:
#include <stdio.h>
#include <stdlib.h>
#include <X11/X.h>
#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>

int main(int argc, const char* argv[]){
    Display *dpy;
    Window root;
    GLint att[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi;
    GLXContext glc;

    dpy = XOpenDisplay(NULL);
    if ( !dpy ) {
        printf("\n\tcannot connect to X server\n\n");
        exit(0);
    }

    root = DefaultRootWindow(dpy);

    vi = glXChooseVisual(dpy, 0, att);
    if (!vi) {
        printf("\n\tno appropriate visual found\n\n");
        exit(0);
    }

    glc = glXCreateContext(dpy, vi, NULL, GL_TRUE);
    glXMakeCurrent(dpy, root, glc);

    printf("vendor: %s\n", (const char*)glGetString(GL_VENDOR));
    return 0;
}
EDIT2: I've written a short article about windowless opengl (with sample code) based on the accepted answer.
Actually, it is necessary to have a window handle to create a "traditional" rendering context (the root window on X11 or the desktop window on Windows is good for this). It is used to fetch OpenGL information and extension availability.
Once you've got that information, you can destroy the rendering context and release the "dummy" window!
You should test for the extensions ARB_extensions_string and ARB_create_context_profile (described on this page: ARB_create_context). Then you can create a rendering context by calling CreateContextAttribs, in a platform-independent way, without needing an associated system window and requiring only the system device context:
int[] mContextAttrib = new int[] {
    Wgl.CONTEXT_MAJOR_VERSION, REQUIRED_OGL_VERSION_MAJOR,
    Wgl.CONTEXT_MINOR_VERSION, REQUIRED_OGL_VERSION_MINOR,
    Wgl.CONTEXT_PROFILE_MASK, (int)(Wgl.CONTEXT_CORE_PROFILE_BIT),
    Wgl.CONTEXT_FLAGS, (int)(Wgl.CONTEXT_FORWARD_COMPATIBLE_BIT),
    0
};

if ((mRenderContext = Wgl.CreateContextAttribs(mDeviceContext, pSharedContext, mContextAttrib)) == IntPtr.Zero)
    throw new Exception("unable to create context");
Then, you could associate a frame buffer object or a system window to the created render context, if you wish to render (but as I understand, you want to compile only shaders).
Using CreateContextAttribs has many advantages:
It is platform independent
It's possible to request a specific OpenGL implementation
It's possible to request a > 3.2 OpenGL implementation
It's possible to force the forward-compatibility option (shader-only rendering, which is the way forward)
It's possible to select (in a forward-compatible context only) a specific OpenGL implementation profile (actually there is only the CORE profile, but there could be more in the future)
It's possible to enable a debugging option, even if it isn't defined how this option could be used by actual driver implementations
However, older hardware/drivers might not implement this extension, so I suggest writing fallback code to create a backward-compatible context.
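Here's a hedged C++/GLX version of the same idea (my sketch, not the answerer's code, which targets a C# binding): create the context with glXCreateContextAttribsARB and make it current against a tiny pbuffer instead of a window. Error handling is minimal, the version attributes are placeholders, and on older systems the GLX_CONTEXT_* tokens may require including GL/glxext.h explicitly:
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>

typedef GLXContext (*glXCreateContextAttribsARBProc)(
    Display*, GLXFBConfig, GLXContext, Bool, const int*);

int main()
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    // Any framebuffer config will do for a query/compile-only context.
    int fbcount = 0;
    GLXFBConfig *fbc = glXChooseFBConfig(dpy, DefaultScreen(dpy), NULL, &fbcount);
    if (!fbc || fbcount == 0) return 1;

    glXCreateContextAttribsARBProc glXCreateContextAttribsARB =
        (glXCreateContextAttribsARBProc)glXGetProcAddressARB(
            (const GLubyte*)"glXCreateContextAttribsARB");
    if (!glXCreateContextAttribsARB) return 1; // fall back to glXCreateContext

    const int ctx_attribs[] = {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,   // placeholder version
        GLX_CONTEXT_MINOR_VERSION_ARB, 2,
        None
    };
    GLXContext ctx = glXCreateContextAttribsARB(dpy, fbc[0], NULL, True, ctx_attribs);

    // A 1x1 pbuffer stands in for the window as the current drawable.
    const int pb_attribs[] = { GLX_PBUFFER_WIDTH, 1, GLX_PBUFFER_HEIGHT, 1, None };
    GLXPbuffer pbuf = glXCreatePbuffer(dpy, fbc[0], pb_attribs);
    glXMakeCurrent(dpy, pbuf, ctx);

    printf("vendor: %s\n", (const char*)glGetString(GL_VENDOR));

    glXMakeCurrent(dpy, None, NULL);
    glXDestroyPbuffer(dpy, pbuf);
    glXDestroyContext(dpy, ctx);
    XFree(fbc);
    XCloseDisplay(dpy);
    return 0;
}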
Until you create a window, OpenGL has no idea which implementation you will use. For example, there's a very different driver (and different hardware acceleration) for OpenGL in a remote X-Windows session vs. OpenGL in a DRI X-Windows session. Shader language support might differ between these cases, and the output of the shader compiler is definitely going to be implementation-dependent, as are any errors generated from resource exhaustion.
So while actually creating a window may not be 100% necessary, you have to associate your context with the graphics hardware (or lack thereof) somehow, and since this can be done with a window, no one bothered implementing an alternative method.
You need a window to host the context and you need a context to be able to do anything.
Source
If you don't want to display anything, make the window invisible.
If there was another way to do this, it would be documented somewhere and easily found as it's not an uncommon problem.
One of the things I have done, which is admittedly a bit of a hack, to avoid the overhead of creating my own GL window is to leverage open process windows.
The key to understanding OpenGL is this: all you need to create a GL context with a call to wglCreateContext is a valid DC.
There's NOTHING in the documentation which says it has to be one you own.
To test this out, I popped up World of Warcraft, used Spy++ to obtain a window handle, manually plugged that handle into a call to GetDC (which returns a valid device context), and from there ran the rest of my GL code as normal.
No GL window creation of my own.
Here's what happened when I did this with both World of Warcraft and Star Trek Online: https://universalbri.wordpress.com/2015/06/05/experiment-results
So to answer your question, YES you do need a window, but there's nothing in the documentation which states that window needs to be owned by you.
Now be advised: I couldn't get this method to produce valid visual output using the desktop window, but I was able to successfully create a DC using the GetDesktopWindow API for the HWND and then calling GetDC. So if there's non-visual processing you want to use OpenGL for, let me know what you're doing, I am curious. And if you DO happen to get the GetDesktopWindow method working with visuals, PLEASE post on this thread what you did.
Good luck.
And don't let anyone tell you it can't be done.
When there's a will there's a way.
With GLFW, you can do this by setting the GLFW_VISIBLE window hint to GLFW_FALSE before creating the window.
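For completeness, a minimal sketch of that (GLFW 3 API; the 1x1 size is arbitrary since the window is never shown):
#include <GLFW/glfw3.h>
#include <stdio.h>

int main()
{
    if (!glfwInit()) return 1;

    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);  // create the window hidden
    GLFWwindow *win = glfwCreateWindow(1, 1, "", NULL, NULL);
    if (!win) { glfwTerminate(); return 1; }

    glfwMakeContextCurrent(win);
    printf("version: %s\n", glGetString(GL_VERSION));
    // ... glCompileShader and friends work here ...

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}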