Share OpenGL context between multiple threads - C++

I'm working on an OpenGL project with many scenes. I have successfully implemented switching scenes at runtime, so the user can change to another scene by choosing a scene name in the ImGui menu. When that happens, the current scene is deleted to clean up dirty OpenGL internal state, and the new scene is then created through a factory. Everything works fine, except that the window freezes for a few seconds during the transition because unloading/loading scenes takes quite a while to finish.
What I want to do now is show a loading screen in between. The task of unloading/loading scenes is scheduled asynchronously using std::async and std::future, so the call is non-blocking and my loading screen can show up. However, since I'm creating the new scene in the background on another thread, that thread cannot see the OpenGL context that is current on the main thread; as a result, any gl*() calls cause an access violation and the new scene cannot be created.
I know that OpenGL is a global state machine and does not support multithreading very well. I've also read somewhere that the behavior is driver-dependent. Most threads on this topic are old, so I wonder if it is still difficult to use multithreading in OpenGL as of 2021. From what I can see, loading screens and scene switching are very basic features, many applications do this, and I believe a good number of them use OpenGL, so why is this problem still not commonly addressed today?
Does anyone know of any external libraries that help with this? Or is there another workaround that avoids multiple threads?
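One common workaround avoids touching OpenGL from the worker thread altogether: do only the CPU-heavy work (file I/O, parsing, decoding) via std::async, keep every gl*() call on the main thread, and let the main loop poll the future each frame while it draws the loading screen. Below is a rough sketch under those assumptions; Scene, SceneData and the helper functions (loadSceneData, createSceneFromData, drawLoadingScreen, pollEventsAndSwap, setActiveScene) are hypothetical placeholders, not part of any library.

#include <chrono>
#include <future>
#include <memory>
#include <string>

// Hypothetical types: SceneData holds only parsed CPU-side data (no GL objects),
// Scene owns the VAOs/VBOs/textures created on the main thread.
struct SceneData { /* parsed meshes, decoded textures, ... */ };
struct Scene     { /* GL objects built from a SceneData */ };

// Placeholders for the application's own functions.
SceneData loadSceneData(const std::string& name);              // pure CPU work, safe off-thread
std::unique_ptr<Scene> createSceneFromData(const SceneData&);  // makes gl*() calls
void drawLoadingScreen();                                      // renders spinner/progress bar
void pollEventsAndSwap();                                      // e.g. glfwPollEvents + glfwSwapBuffers
void setActiveScene(std::unique_ptr<Scene>);

void switchScene(const std::string& name)
{
    // Kick off the CPU-heavy part asynchronously; the caller returns immediately.
    std::future<SceneData> pending = std::async(std::launch::async, loadSceneData, name);

    std::unique_ptr<Scene> next;
    while (!next) {
        drawLoadingScreen();   // main thread keeps rendering, so the window stays responsive
        pollEventsAndSwap();

        // Non-blocking check: has the worker finished parsing yet?
        if (pending.wait_for(std::chrono::milliseconds(0)) == std::future_status::ready) {
            SceneData data = pending.get();
            next = createSceneFromData(data);  // GL objects created where the context is current
        }
    }
    setActiveScene(std::move(next));
}

The GL object creation still happens on the main thread, so there is a short hitch proportional to the upload cost, but the multi-second disk/parse work no longer blocks the window.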

Related

How to load mesh in new thread

I am making my first game and I am stuck on one problem. I have a world where you can walk freely, but when you meet an enemy you switch to a battle, and during that switch I need to load all the models that will be rendered in the battle scene. The loading takes about 5 seconds, and I want to show a loading screen. So I render the loading screen on the main thread, but how can I load the 3D models and build the various VAOs and VBOs at the same time? I made a new thread for this loading, but I read online "don't use threads for generating VAOs". What is the best way to do this loading? Should I just preload all the models on the main thread before the game starts? Personally, it doesn't seem right to me to load all the 3D models at the beginning of the game.
Assuming you have two windows, you can bind each window's context to a separate thread. Problems arise only if you share data between the threads (proper locking is mandatory).
See glfwMakeContextCurrent:
This function makes the OpenGL or OpenGL ES context of the specified window current on the calling thread. A context must only be made current on a single thread at a time and each thread can have only a single current context at a time.
Thread safety: This function may be called from any thread.
See glfwSwapBuffers:
This function swaps the front and back buffers of the specified window when rendering with OpenGL or OpenGL ES.
Thread safety: This function may be called from any thread.
Some GLFW functions, e.g. glfwPollEvents, may only be called from the main thread (and not from callbacks), but other than that: bind the context to a thread, perform your OpenGL calls, and swap the buffers. As said before, as long as you don't share any buffers, there should be no problem.
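To make that concrete for the loading case: with GLFW you can create a second, invisible window whose context shares objects with the main one (the last parameter of glfwCreateWindow), make that context current on the loader thread, and create buffers/textures there. A rough sketch, assuming GLFW 3 plus a function loader such as GLAD; error handling and the handoff queue back to the main thread are omitted.

#include <glad/glad.h>     // assumed function loader; initialized below
#include <GLFW/glfw3.h>
#include <thread>

void loaderThread(GLFWwindow* loaderContext)
{
    glfwMakeContextCurrent(loaderContext);   // bind the shared context to this thread

    // Buffer and texture objects created here are visible to the main context.
    // Container objects (VAOs, FBOs) are NOT shared, so build those on the main thread.
    float vertices[] = { 0.f, 0.f,  1.f, 0.f,  0.f, 1.f };   // placeholder data
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    glFinish();                              // ensure the upload is complete before handing off

    glfwMakeContextCurrent(nullptr);
    // hand 'vbo' back to the main thread (e.g. via a mutex-protected queue) and build the VAO there
}

int main()
{
    if (!glfwInit()) return -1;
    GLFWwindow* mainWindow = glfwCreateWindow(800, 600, "Main", nullptr, nullptr);

    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);                  // helper window stays invisible
    GLFWwindow* loaderContext =
        glfwCreateWindow(1, 1, "loader", nullptr, mainWindow); // share objects with mainWindow

    glfwMakeContextCurrent(mainWindow);
    gladLoadGLLoader((GLADloadproc)glfwGetProcAddress);        // load GL function pointers

    std::thread loader(loaderThread, loaderContext);
    // ... main thread keeps mainWindow current and renders the loading screen here ...
    loader.join();

    glfwDestroyWindow(loaderContext);
    glfwDestroyWindow(mainWindow);
    glfwTerminate();
}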

Render to same target from multiple threads

I am working on an old C++ project on Windows, where I am upgrading the app's DirectX rendering engine (using DX11). The new engine that I made currently coexists with the old one, but it runs in a different thread and has its own DX device and context. Until now, I've been using two separate windows to display the results of the old and new rendering processes at the same time, which allows me to verify that the new engine correctly replicates the behavior of the old one.
Now I want to merge these windows and swap between them at runtime, i.e. I would hook a button press, or some other control, to pause one render thread and (re)start the other. Both would draw their results to the same window (i.e. using the same HWND to create the render target and swap chain), but only one thread would be active at any one time. A good analogy is remastered games that let you swap between the old and new graphics.
What is the best way to achieve this? I was thinking of doing a "hot swap", i.e. letting one thread start rendering without even waiting for the other thread to finish, since we know it will stop within the next frame, and I don't need them to work together. Does this carry a risk of crashing, or any other problems? If all it causes is a few incorrect frames during the swap, I don't mind.
I tested it: no ill effects as far as I can tell, besides some flickering if both threads happen to draw at the same time.
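For what it's worth, the "only one thread active at a time" part can be kept simple with a couple of atomics. The D3D11 specifics are abstracted behind a hypothetical per-engine renderFrame() here, so treat this as a sketch of the control flow rather than of the DirectX code.

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> useNewEngine{false};   // toggled by the UI (button press)
std::atomic<bool> running{true};

// Each engine runs this loop on its own thread; renderFrame is a hypothetical
// stand-in for that engine's per-frame work against the shared HWND/swap chain.
void renderLoop(bool isNewEngine, void (*renderFrame)())
{
    while (running) {
        if (useNewEngine == isNewEngine) {
            renderFrame();                                              // draw + Present()
        } else {
            std::this_thread::sleep_for(std::chrono::milliseconds(5));  // idle while inactive
        }
    }
}

// On the button press (UI thread): useNewEngine = !useNewEngine;
// The previously active thread stops presenting after at most one more frame,
// so the worst case is a frame or two of overlap/flicker during the swap.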

How do you control a SFML window from a separate thread?

I'm currently working on a game where I wanted to create a loading screen that shows the progress of loading all the resources. To do this, I decided to create a separate thread that handles the window. I'm aware that there could be more efficient solutions, but I wanted to create a custom mouse cursor, and that was the only approach that gave me a non-buggy cursor while the application is loading a big file.
I read up on threads on the SFML tutorial page and learned that I have to call window.setActive(false) in the main thread and then window.setActive(true) in the separate thread in order to access the window from the separate thread without problems. This works fine: it doesn't throw any errors and it displays the loading screen nicely. However, I can't move the window around or interact with it in any way. The mouse cursor is covered by the blue "busy" ring while it's loading, and I can neither close nor move nor resize the window, even though I used sf::Style::Default, so that should be possible.
Can anyone help me out here?
You have it backwards. You blocked the main thread by loading your resources on it and created a new thread to keep the UI responsive. Not only is that not going to go well in the long term, but in the short term your operating system still thinks your app is blocked, because the main thread is unresponsive. The OS does not know you created a second thread to keep the user entertained.
You should instead keep the responsive UI on the main thread and create an extra thread for the heavy lifting and blocking work. This way you don't have to fight your graphics library (and it does not matter that it's SFML; they all work this way), and your operating system will not behave as if you had blocked your application.
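In SFML terms that means the window, the event loop, and the loading animation stay on the main thread, while the heavy (non-window) work runs on a worker. A rough sketch against the SFML 2 API, with loadAllResources() and drawLoadingAnimation() as hypothetical placeholders:

#include <SFML/Graphics.hpp>
#include <chrono>
#include <future>

struct Resources { /* textures, sounds, level data, ... */ };
Resources loadAllResources();                          // disk I/O and decoding only, no window access
void drawLoadingAnimation(sf::RenderWindow& window);   // progress bar, custom cursor sprite, ...

int main()
{
    sf::RenderWindow window(sf::VideoMode(800, 600), "Loading...", sf::Style::Default);

    // Heavy lifting goes to a worker thread; the main thread never blocks on it.
    auto pending = std::async(std::launch::async, loadAllResources);

    while (window.isOpen()) {
        sf::Event event;
        while (window.pollEvent(event)) {              // the OS sees a responsive main thread
            if (event.type == sf::Event::Closed)
                window.close();
        }

        window.clear();
        drawLoadingAnimation(window);
        window.display();

        // Non-blocking check whether the worker is done.
        if (pending.valid() &&
            pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            Resources res = pending.get();
            // ... hand 'res' to the game state and leave the loading loop ...
            break;
        }
    }
}

Because the main thread keeps pumping events, closing, moving, and resizing the window all work, and the busy cursor never appears.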

Is it feasible to split Qt GUI into multiple threads for GUI, simulation, and OpenGL?

I am experimenting with Qt for a new layout for an instrument simulation program at work. Our current sim runs everything in a single window (we've used both GLUT (old) and FLTK); it uses glViewport(...) and glScissor(...) to split instrument readouts into their own views, and then it uses some form of "ortho2D" call to give each its own virtual pixel space. The simulator currently updates the instruments and then draws each in its own viewport, one by one, all in the same thread.
We want to find a better approach, and we settled on Qt. I am working under a few big constraints:
Each of the instrument panels still needs to be in its own OpenGL viewport. There are a lot of buttons and a lot of instruments. My tentative solution is to use a QOpenGLWidget for each. I have made progress on this.
The sim is not just a pretty readout; it also simulates many of the instruments as feedback for the instrument designers, so it sometimes has a hefty CPU load. It isn't a full hardware emulator, but it does simulate the logic. I don't think it's feasible to tell the instruments to update themselves at the beginning of their associated widget's paintEvent(...) method, so I want the simulation updates to run in a separate thread.
Our customers may have old computers, so more recent versions of OpenGL have been ruled out. We are still using glBegin() and glEnd() and everything in between, and the instruments draw a crap ton of variable symbols, so drawing takes a lot of time and I want to split it off into its own thread. I don't yet know if OpenGL 3 is on the table, which will (I think) be necessary for rendering to off-screen buffers.
Problem: QOpenGLWidget does not have an overridable "update" method, and it only draws during the widget's paintEvent(...) and paintGL(...) calls.
Tentative Solution: Split the simulator into three threads:
GUI: Runs user input, paintEvent(...), and paintGL(...).
Simulator: Runs all instrument logic and updates values for symbology.
Drawing: Renders latest symbology to an offscreen buffer (will use a frame buffer object (FBO)).
In this design, cross-thread communication is cyclic and one-way: the GUI thread provides input, the simulator thread takes that input into account on its next loop, the drawing thread reads the latest symbology, renders it to the FBO and sets a "next frame available" flag to true (or maybe emits a signal), and then the paintGL(...) method takes that FBO and blits it to the widget, keeping event processing down and GUI responsiveness up. The cycle then continues.
Bottom line question: I've read here that GUI operations cannot be done in a separate thread, so is my approach even feasible?
If feasible, any other caution or suggestions would be appreciated.
Each OpenGL widget has its own OpenGL context, and these contexts are QObjects and thus can be moved to other threads. As with any otherwise non-threadsafe object, you should only access them from their thread().
Additionally - and this is also portable to QML - you could use worker functors to compute display lists that are then submitted to the render thread to be converted into draw calls. The render thread doesn't do any logic and doesn't compute anything: it takes data (vertex arrays, etc.) and submits it for drawing. The worker functors would be submitted for execution on a thread pool using QtConcurrent::run.
You can thus have a main thread, a render thread (perhaps one per widget, but not necessarily), and functors that run your simulation steps.
In any case, conflating logic and rendering is a very bad idea. Whether you're drawing with QPainter on a raster widget, with QPainter on a QOpenGLWidget, or with direct OpenGL calls, the thread that does the drawing should not have to compute what is to be drawn.
If you don't want to mess with OpenGL calls, and you can represent most of your work as array-based QPainter calls (e.g. drawRects, drawPolygons), these translate almost directly into OpenGL draw calls and the OpenGL backend will render them just as quickly as if you hand-coded the draw calls. QPainter does all this for you if you use it on a QOpenGLWidget!
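A small sketch of that split, assuming Qt 5.15+/Qt 6 with the QtConcurrent module available; SymbolFrame and computeSymbology() are hypothetical placeholders for the instrument symbology, and the QFutureWatcher delivers each result back on the GUI thread:

#include <QtConcurrent/QtConcurrent>
#include <QFutureWatcher>
#include <QOpenGLWidget>

struct SymbolFrame { /* vertex arrays, colors, text, ... */ };
SymbolFrame computeSymbology();   // pure computation: no GL calls, no widget access

class InstrumentWidget : public QOpenGLWidget {
public:
    InstrumentWidget() {
        // finished() is delivered on the GUI thread, so touching members here is safe.
        connect(&m_watcher, &QFutureWatcher<SymbolFrame>::finished, this, [this] {
            m_frame = m_watcher.result();
            update();          // schedules paintGL() on the GUI thread
            startNextStep();   // keep the simulate -> deliver -> repaint cycle going
        });
        startNextStep();
    }

private:
    void startNextStep() {
        // Simulation step runs on the global thread pool.
        m_watcher.setFuture(QtConcurrent::run(computeSymbology));
    }

    void paintGL() override {
        // Draw m_frame: the GUI thread only turns precomputed data into draw calls.
    }

    QFutureWatcher<SymbolFrame> m_watcher;
    SymbolFrame m_frame;
};

The same shape works with a dedicated QThread and moveToThread() instead of QtConcurrent if the simulation needs to own long-lived state between steps.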

Handle Alt Tab in fullscreen OpenGL application properly

When trying to implement a simple OpenGL application, I was surprised that while it is easy to find plenty of examples and documentation on advanced rendering topics, the Win32 framework side is poorly documented, and even most samples and tutorials do not handle it properly for basic cases, let alone advanced scenarios like multiple monitors. Despite several hours of searching I was unable to find a way to handle Alt-Tab reliably.
How should an OpenGL fullscreen application respond to Alt-Tab? Which messages should the app react to (WM_ACTIVATE, WM_ACTIVATEAPP)? What should the reaction be? (Change the display resolution, destroy/create the rendering context, or destroy/create some OpenGL resources?)
If the application uses an animation loop, suspend the loop, then just minimize the window. If it changed the display resolution and gamma ramp, revert to the settings that were in place before the change.
There's no need to destroy the OpenGL context or its resources; OpenGL uses an abstract resource model: if another program needs GPU memory or other resources, your program's resources will be swapped out transparently.
EDIT:
Changing the window's visibility status may require resetting the OpenGL viewport, so it's a good idea to either call glViewport appropriately in the display/rendering function, or at least set it in the resize handler, followed by a complete redraw.
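For reference, the deactivate/reactivate handling boils down to a few lines in the window procedure. A sketch for the exclusive-fullscreen (ChangeDisplaySettings) case; g_paused and g_fullscreenMode are hypothetical application globals, and the render loop is expected to skip frames while g_paused is set:

#include <windows.h>
#include <atomic>

std::atomic<bool> g_paused{false};
DEVMODE g_fullscreenMode{};          // the mode originally passed to ChangeDisplaySettings

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_ACTIVATE:
        if (LOWORD(wParam) == WA_INACTIVE) {
            g_paused = true;                             // suspend the animation loop
            ChangeDisplaySettings(nullptr, 0);           // restore the desktop display mode
            ShowWindow(hWnd, SW_MINIMIZE);
        } else {
            ChangeDisplaySettings(&g_fullscreenMode, CDS_FULLSCREEN);
            ShowWindow(hWnd, SW_RESTORE);
            g_paused = false;                            // resume; redraw with a fresh glViewport
        }
        return 0;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}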