How to load a mesh in a new thread - C++

I am making my first game and I am stuck on one problem. I have a world where you can walk freely, but when you meet an enemy you switch to a battle, and at that point I need to load all the models that will be rendered in the battle scene. The loading takes about 5 seconds, so I want to show a loading screen. I render the loading screen in the main thread, but how can I load the 3D models and build the VAOs and VBOs at the same time? I created a new thread for this loading, but I read online "don't use threads for generating VAOs". What is the best way to do this loading? Should I just preload all the models in the main thread before the game starts? Personally, it doesn't seem right to me to load all the 3D models at the start of the game.

Assuming you have two windows, you can bind each window's context to a separate thread. Problems will only arise if you share data between them (proper locking is then mandatory).
See glfwMakeContextCurrent:
This function makes the OpenGL or OpenGL ES context of the specified window current on the calling thread. A context must only be made current on a single thread at a time and each thread can have only a single current context at a time.
Thread safety: This function may be called from any thread.
See glfwSwapBuffers:
This function swaps the front and back buffers of the specified window when rendering with OpenGL or OpenGL ES.
Thread safety: This function may be called from any thread.
Some functions in GLFW may only be called from the main thread (and not from callbacks), e.g. glfwPollEvents, but other than that: bind the context to a thread, perform your OpenGL calls, and swap the buffers. As said before, as long as you don't share any buffers between the threads, there should be no problem.
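For the original question, a minimal sketch of that setup with GLFW might look like the following. The hidden loader window, the hint values and the mesh-upload calls are assumptions for illustration, and it presumes an OpenGL function loader (glad/GLEW/...) is already initialized. Note that VAOs are container objects and are not shared between contexts, so create them on the main thread from the buffers uploaded here.

```cpp
#include <GLFW/glfw3.h>
// assumes an OpenGL function loader (glad/GLEW/...) has been set up elsewhere

GLFWwindow* mainWindow   = nullptr;
GLFWwindow* loaderWindow = nullptr;

void createWindows()
{
    mainWindow = glfwCreateWindow(1280, 720, "Game", nullptr, nullptr);

    // Invisible helper window; the last argument makes its context share
    // buffers/textures with the main window's context.
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    loaderWindow = glfwCreateWindow(1, 1, "loader", nullptr, mainWindow);
}

// Runs on a worker thread while the main thread draws the loading screen.
void loaderThread()
{
    glfwMakeContextCurrent(loaderWindow);   // bind the shared context to this thread

    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // glBufferData(GL_ARRAY_BUFFER, ...);  // upload the battle-scene meshes here

    glFinish();                             // make sure the uploads are complete before
                                            // the main thread uses the shared objects
    // VAOs are NOT shared between contexts: create them on the main thread
    // from the buffer names produced here.
}
```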

Related

How to share frame rendering between two processes

I'm designing a game menu for an arcade machine as a personal project and to get more familiar with IPC. The project runs on a Raspberry Pi and renders to an LED matrix using hzeller's LED matrix library. The menu presents the user with a list of games rendered on the matrix. When the user selects a game, the main process forks and spawns the game in a new process, and the game starts and renders to the matrix. At any point the user can exit the game, which returns them to the game menu. Presumably there will be communication between the processes so that they aren't both rendering to the matrix at the same time.
Where I am uncertain is how to actually share the resources needed to render to the matrix. The library has a background updater thread that cannot be running in both processes. I'm not certain how this is typically done, but here are the solutions I came up with:
Resources needed for rendering are cleaned up and reinitialized before switching contexts between parent and child processes
The child serializes the underlying framebuffer data and sends it to the main process for rendering.
The first solution seems a little hacky and requires me to make small changes to the library. I also want to create an interface that decouples the menu/game processes from where the graphics are actually rendered; that way I can write a separate implementation of the interface that renders to a GUI on a computer screen. Choosing this option would be synonymous with creating an entirely new window for the child process (not sharing the same window).
The second solution would require copying the framebuffer data twice per frame: once into shared memory, and again when the parent process copies it out of shared memory and renders it. I worry about latency and about how to manage the frame rate in this solution. I've looked into initializing the framebuffer in shared memory, but the library handles the creation of this buffer. In addition, I am using the library's vsync capability and am unsure how the child would use it if rendering were done in the parent process.
My questions are:
Are there any other solutions? If not, which of the two is better design-wise?
If I go with option two, how would the child take advantage of the vsync functionality, and how would I manage the frame rate in this scenario?
A game will typically involve real-time rendering to the LED matrix, and it will have other user-interaction requirements that are specific to the game and a lot more demanding than the requirements of your simple menu. It would not be good design practice to delegate these things to the menu process via some IPC mechanism. That mechanism could only get in the way, and you'd be forced to implement many of the game's requirements inside the menu process. The game should be able to control the display directly, while the menu should only have to do its easy menu things.
This should be very simple to arrange. Your menu process should shut down its display loop before it launches the game process, and should then resume its display loop when the game process ends.
The best way to detect the completion of the child process is usually to pipe the child's stdout into a file handle in the parent process. The parent should read from this handle until it is automatically closed by the child process exiting. This is a universally reliable signal that works on all Linux/Unix/Windows platforms.
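A rough POSIX sketch of that hand-off; the "./game" path and the commented display-loop calls are placeholders for your own code:

```cpp
#include <unistd.h>
#include <sys/wait.h>

void runGame()
{
    int fds[2];
    pipe(fds);                       // fds[0] = read end, fds[1] = write end

    pid_t pid = fork();
    if (pid == 0) {                  // child: route stdout into the pipe, exec the game
        dup2(fds[1], STDOUT_FILENO);
        close(fds[0]);
        close(fds[1]);
        execl("./game", "game", (char*)nullptr);
        _exit(127);                  // only reached if exec fails
    }

    close(fds[1]);                   // parent keeps only the read end

    // stopDisplayLoop();            // hand the matrix over to the game here

    char buf[256];
    while (read(fds[0], buf, sizeof(buf)) > 0) {
        // optional: log or ignore the child's output
    }
    close(fds[0]);                   // read() returned 0 => child closed stdout (exited)
    waitpid(pid, nullptr, 0);        // reap the child

    // resumeDisplayLoop();          // take the matrix back for the menu
}
```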

share OpenGL context between multiple threads

I'm working on an OpenGL project with many scenes. I have implemented switching scenes at runtime, so the user can change to another scene by choosing a scene name in the ImGui menu. When that happens, the current scene is deleted to clean up dirty OpenGL internal state, then the new scene is built by a factory. Everything works fine, except that the window freezes for a few seconds during the transition because unloading/loading scenes takes quite a while to finish.
What I want now is a loading screen that is displayed in between. The task of unloading/loading scenes is scheduled asynchronously using std::async and std::future, so the call is non-blocking and my loading screen can show up. However, since I'm creating the new scene in the background on another thread, that thread cannot see the OpenGL context owned by the main thread; as a result, any glXxx() call causes an access violation, so the new scene cannot be created.
I know that OpenGL is a global state machine and does not support multithreading very well; I've also read that the behaviour is driver-dependent. Most threads on this topic are old, so I wonder whether multithreading in OpenGL is still this difficult as of 2021. From what I can see, a loading screen and scene switching are very basic features; many applications have them, and plenty of those surely use OpenGL, so why is this problem still not commonly addressed?
Does anyone know of any external libraries in this regard? Or is there another workaround without using multiple threads?
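One common workaround, not part of the original thread, is to keep the std::async task purely CPU-side and defer every glXxx() call to the main thread once the future is ready. In the sketch below, SceneData, LoadSceneData(), CreateGLObjects() and DrawLoadingScreen() are hypothetical names for your own code:

```cpp
#include <future>
#include <memory>
#include <chrono>
#include <string>

struct SceneData { /* vertices, indices, decoded images ... no GL handles */ };

// You provide these; only LoadSceneData runs off the main thread.
std::unique_ptr<SceneData> LoadSceneData(const std::string& name);
void CreateGLObjects(const SceneData& data);
void DrawLoadingScreen();

std::future<std::unique_ptr<SceneData>> pending;

void StartSceneLoad(const std::string& name)
{
    pending = std::async(std::launch::async, [name] {
        return LoadSceneData(name);          // pure CPU work: file I/O, parsing, decoding
    });
}

void MainLoopTick()
{
    if (pending.valid() &&
        pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
    {
        std::unique_ptr<SceneData> data = pending.get();
        CreateGLObjects(*data);              // glGenBuffers/glBufferData/... on the GL thread
    }
    else
    {
        DrawLoadingScreen();                 // context stays current on the main thread
    }
}
```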

OpenGL asynchronous render of scene that takes several seconds

I've implemented a Mandelbrot fractal generator using wxWidgets and OpenGL. The computation is performed inside a fragment shader, and I'm using a wxGLCanvas widget to display the result. It's working well, but when I want to do a high-res export, the thread locks up for a few seconds which freezes the UI.
Initially I tried moving all rendering code (and context creation) into a separate render thread, but what I found was that it wasn't just the render thread that would lock up, but ALL threads. This could be easily demonstrated by spawning a new thread prior to doing the render that just prints a message to stdout in a loop. It would get as far as printing 1 message before freezing, then resuming once the render was complete.
To perform the file export, I first render to a texture, then I read the pixels into main memory with glGetTexImage. The render occurs asynchronously as you would expect, but the glGetTexImage function will block (again, that's expected). I therefore tried using glFenceSync in combination with glGetSynciv to only call glGetTexImage once the fence had been reached indicating a completed render.
I could confirm the draw call was returning immediately, but the moment I returned to the wxWidgets event loop to wait for the render to finish, all threads in the application would freeze. I figure wxWidgets is probably making an OpenGL call that forces a sync (perhaps something in wxGLCanvas) - I'm fairly sure it wasn't something in my code.
I'm not sure why all threads were blocking on the glGetTexImage call, rather than just the render thread. I thought it might be a quirk of my setup (hardware, driver, OS, etc.), but got the same result on a completely different platform.
The only remaining option I can think of is to do the export render in another process with its own OpenGL context. If I'm still to use wxGLCanvas to set up the context, I would probably need a visible window on the screen, which isn't ideal. Also, if the user tried to close the window during the render, it would be unresponsive.
Any ideas?
Not so much a solution as a workaround (suggested by derhass in the comments): split the render into several smaller renders. I'm drawing the image in horizontal strips, each 10 pixels high, and calling wxSafeYield() between each one so that the UI remains somewhat responsive (and the user can abort the render if it's taking too long).
An additional advantage is that there's less danger of overloading the GPU. Prior to implementing this I once kicked off a long render that caused my displays to shut off and I had to do a hard reboot.
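The strip loop ends up looking roughly like the following sketch; MandelbrotCanvas, exportWidth, exportHeight, m_abortRequested and drawFractalStrip() are placeholders for the existing export code, and GL headers are assumed to come in via wxGLCanvas:

```cpp
#include <wx/utils.h>   // wxSafeYield
#include <algorithm>

void MandelbrotCanvas::renderExport(int exportWidth, int exportHeight)
{
    const int stripHeight = 10;

    for (int y = 0; y < exportHeight && !m_abortRequested; y += stripHeight)
    {
        int h = std::min(stripHeight, exportHeight - y);

        glEnable(GL_SCISSOR_TEST);
        glScissor(0, y, exportWidth, h);   // limit this draw call to one strip
        drawFractalStrip();                // full-viewport quad; the scissor clips it
        glFinish();                        // force the GPU to finish the strip now
        glDisable(GL_SCISSOR_TEST);

        wxSafeYield();                     // keep the UI responsive between strips
    }
}
```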

Is it feasible to split Qt GUI into multiple threads for GUI, simulation, and OpenGL?

I am experimenting with Qt for a new layout for an instrument simulation program at work. Our current sim runs everything in a single window (we've used both GLUT (old) and FLTK); it uses glViewport(...) and glScissor(...) to split the instrument readouts into their own views, and then uses some form of "ortho2D" call to give each its own virtual pixel space. The simulator currently updates the instruments and then draws each in its own viewport, one by one, all in the same thread.
We want to find a better approach, and we settled on Qt. I am working under a few big constraints:
Each of the instrument panels still needs to be in its own OpenGL viewport. There are a lot of buttons and a lot of instruments. My tentative solution is to use a QOpenGLWidget for each; I have made progress on this.
The sim is not just a pretty readout; it also simulates many of the instruments as feedback for the instrument designers, so it sometimes has a hefty CPU load. It isn't a full hardware emulator, but it does simulate the logic. I don't think it's feasible to update the instruments at the beginning of each widget's paintEvent(...) method, so I want simulation updates to run in a separate thread.
Our customers may have old computers, so more recent versions of OpenGL have been ruled out. We are still using glBegin() and glEnd() and everything in between, and the instruments draw a huge number of variable symbols, so drawing takes a lot of time and I want to split it off into its own thread. I don't yet know if OpenGL 3 is on the table, which will be necessary (I think) for rendering to off-screen buffers.
Problem: QOpenGLWidget does not have an overridable "update" method; it only draws during the widget's paintEvent(...) and paintGL(...) calls.
Tentative Solution: Split the simulator into three threads:
GUI: Runs user input, paintEvent(...), and paintGL(...).
Simulator: Runs all instrument logic and updates values for symbology.
Drawing: Renders latest symbology to an offscreen buffer (will use a frame buffer object (FBO)).
In this design, cross-thread communication is cyclic and one-way: the GUI thread provides input, the simulator thread takes that input into account on its next loop, the drawing thread reads the latest symbology, renders it to the FBO and sets a "next frame available" flag to true (or maybe emits a signal), and then the paintGL(...) method takes that FBO and blits it to the widget, keeping event processing down and GUI responsiveness up. Continue this cycle.
Bottom line question: I've read here that GUI operations cannot be done in a separate thread, so is my approach even feasible?
If feasible, any other caution or suggestions would be appreciated.
Each OpenGL widget has its own OpenGL context, and these contexts are QObjects and thus can be moved to other threads. As with any otherwise non-threadsafe object, you should only access them from their thread().
Additionally - and this is also portable to QML - you could use worker functors to compute display lists that are then submitted to the render thread to be converted into draw calls. The render thread doesn't do any logic and doesn't compute anything: it takes data (vertex arrays, etc.) and submits it for drawing. The worker functors would be submitted for execution on a thread pool using QtConcurrent::run.
You can thus have a main thread, a render thread (perhaps one per widget, but not necessarily), and functors that run your simulation steps.
In any case, conflating logic and rendering is a very bad idea. Whether you're drawing using QPainter on a raster widget, using QPainter on a QOpenGLWidget, or using direct OpenGL calls, the thread that does the drawing should not have to compute what's to be drawn.
If you don't want to mess with OpenGL calls, and you can represent most of your work as array-based QPainter calls (e.g. drawRects, drawPolygons), these translate almost directly into OpenGL draw calls and the OpenGL backend will render them just as quickly as if you hand-coded the draw calls. QPainter does all this for you if you use it on a QOpenGLWidget!
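A rough sketch of that split using QtConcurrent::run; InstrumentState, SymbolBatch, computeSymbology() and the receiver plumbing are made-up names for illustration, not part of the answer:

```cpp
#include <QtConcurrent/QtConcurrent>
#include <QFutureWatcher>
#include <QVector>

struct InstrumentState { /* inputs gathered on the GUI thread */ };
struct SymbolBatch     { QVector<float> vertices; /* colors, widths, ... */ };

// Pure CPU work: run one simulation step and tessellate symbols into vertex arrays.
SymbolBatch computeSymbology(const InstrumentState& state)
{
    SymbolBatch batch;
    // ... fill batch.vertices from the simulated instrument values in `state` ...
    return batch;
}

// Schedule the next frame's symbology on the global thread pool; when it finishes,
// hand the result to `receiver` (e.g. the widget/render side) on its own thread.
void scheduleNextFrame(const InstrumentState& state, QObject* receiver)
{
    auto* watcher = new QFutureWatcher<SymbolBatch>(receiver);
    QObject::connect(watcher, &QFutureWatcher<SymbolBatch>::finished,
                     receiver, [watcher] {
        SymbolBatch batch = watcher->result();
        // store `batch` where the render side can see it, then trigger an update()
        watcher->deleteLater();
    });
    watcher->setFuture(QtConcurrent::run(computeSymbology, state));
}
```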

using qglcontext from other threads

Is there a way to use the QGLContext of a QGLWidget from other threads? I need to do some texture uploading from other threads, but after the texture upload, or even during it, the context must also remain in service of my rendering QGLWidget. Is there documentation or a solid (assumption-free) answer for this?
OpenGL does not support multithreaded rendering as such: all OpenGL calls must be performed from the thread where the context was created. But if you just want to load textures, you can read and decode them on other threads and then post the results (the image data) to the thread where the OpenGL context was created, which then makes the actual call, e.g. glTexImage2D. To do this you have to add some thread management (signals, etc.).
For more information, see Concurrency and OpenGL, and also "QGLWidget multithreaded example?".
To work from other threads you must either create separate contexts for them or set up some form of context sharing.
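As a sketch of that "decode off-thread, upload on the GL thread" pattern: TextureLoader and MyGLWidget::onTextureLoaded() below are made-up names, and the loader's loaded() signal is assumed to be connected to the widget's slot with a queued connection so the slot runs on the GL thread:

```cpp
#include <QObject>
#include <QImage>
#include <QGLWidget>

// Lives in a worker thread: does only the slow file I/O and decoding.
class TextureLoader : public QObject
{
    Q_OBJECT
public slots:
    void load(const QString& path)
    {
        QImage img(path);                                    // no GL calls here
        emit loaded(QGLWidget::convertToGLFormat(img));      // flip/convert for GL
    }
signals:
    void loaded(const QImage& glImage);
};

// Slot on the GL widget, reached via a queued connection, so it runs on the
// thread that owns the context and may call glTexImage2D safely.
void MyGLWidget::onTextureLoaded(const QImage& glImage)
{
    makeCurrent();
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, glImage.width(), glImage.height(),
                 0, GL_RGBA, GL_UNSIGNED_BYTE, glImage.bits());
    doneCurrent();
}
```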
From official Qt documentation:
As of Qt version 4.8, support for doing threaded GL rendering has been improved. There are three scenarios that we currently support:
Buffer swapping in a thread.
Swapping buffers in a double buffered context may be a synchronous, locking call that may be a costly operation in some GL implementations. Especially so on embedded devices. It's not optimal to have the CPU idling while the GPU is doing a buffer swap. In those cases it is possible to do the rendering in the main thread and do the actual buffer swap in a separate thread. This can be done with the following steps:
Call doneCurrent() in the main thread when the rendering is finished.
Call QGLContext::moveToThread(swapThread) to transfer ownership of the context to the swapping thread.
Notify the swapping thread that it can grab the context.
Make the rendering context current in the swapping thread with makeCurrent() and then call swapBuffers().
Call doneCurrent() in the swapping thread.
Call QGLContext::moveToThread(qApp->thread()) and notify the main thread that swapping is done.
Doing this will free up the main thread so that it can continue with, for example, handling UI events or network requests. Even if there is a context swap involved, it may be preferable compared to having the main thread wait while the GPU finishes the swap operation. Note that this is highly implementation dependent.
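The steps listed above map onto code roughly as follows; SwapWorker is a made-up helper object living in the swapping thread, the "notify" steps become queued signal/slot calls, and the const_cast is only there because QGLWidget::context() returns a const pointer:

```cpp
#include <QGLWidget>
#include <QThread>
#include <QApplication>

class SwapWorker : public QObject
{
    Q_OBJECT
public:
    explicit SwapWorker(QGLWidget* w) : m_widget(w) {}

public slots:
    void swap()
    {
        m_widget->makeCurrent();        // grab the context in the swapping thread
        m_widget->swapBuffers();
        m_widget->doneCurrent();
        // give the context back to the main thread and tell it we're done
        const_cast<QGLContext*>(m_widget->context())->moveToThread(qApp->thread());
        emit swapped();
    }

signals:
    void swapped();

private:
    QGLWidget* m_widget;
};

// Main (rendering) thread, once the frame has been drawn:
void handOffSwap(QGLWidget* w, QThread* swapThread, SwapWorker* worker)
{
    w->doneCurrent();                                                 // release the context
    const_cast<QGLContext*>(w->context())->moveToThread(swapThread);  // transfer ownership
    QMetaObject::invokeMethod(worker, "swap", Qt::QueuedConnection);  // notify the swap thread
}
```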
Texture uploading in a thread.
Doing texture uploads in a thread may be very useful for applications handling large amounts of images that need to be displayed, like for instance a photo gallery application. This is supported in Qt through the existing bindTexture() API. A simple way of doing this is to create two sharing QGLWidgets. One is made current in the main GUI thread, while the other is made current in the texture upload thread. The widget in the uploading thread is never shown; it is only used for sharing textures with the main thread. For each texture that is bound via bindTexture(), notify the main thread so that it can start using the texture.
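A sketch of that two-sharing-widgets idea follows; UploadWorker, uploadThread and textureReady() are made-up names, while bindTexture(), makeCurrent() and the share-widget constructor argument are the Qt API pieces referred to above:

```cpp
#include <QGLWidget>
#include <QThread>
#include <QImage>

QGLWidget* mainWidget   = nullptr;   // shown, current in the main GUI thread
QGLWidget* uploadWidget = nullptr;   // never shown, only shares textures

void setupSharedWidgets(QThread* uploadThread)
{
    mainWidget   = new QGLWidget;
    uploadWidget = new QGLWidget(nullptr, mainWidget);  // 2nd argument = share widget
    uploadWidget->doneCurrent();
    // Qt 4.8: hand the hidden widget's context over to the upload thread
    const_cast<QGLContext*>(uploadWidget->context())->moveToThread(uploadThread);
}

// Lives in the upload thread.
class UploadWorker : public QObject
{
    Q_OBJECT
public slots:
    void upload(const QString& path)
    {
        uploadWidget->makeCurrent();
        GLuint tex = uploadWidget->bindTexture(QImage(path)); // lands in the shared group
        glFinish();                                           // make sure the upload is done
        emit textureReady(tex);                               // notify the main thread
    }
signals:
    void textureReady(GLuint id);
};
```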
Using QPainter to draw into a QGLWidget in a thread.
In Qt 4.8, it is possible to draw into a QGLWidget using a QPainter in a separate thread. Note that this is also possible for QGLPixelBuffers and QGLFramebufferObjects. Since this is only supported in the GL 2 paint engine, OpenGL 2.0 or OpenGL ES 2.0 is required.
QGLWidgets can only be created in the main GUI thread. This means a call to doneCurrent() is necessary to release the GL context from the main thread, before the widget can be drawn into by another thread. You then need to call QGLContext::moveToThread() to transfer ownership of the context to the thread in which you want to make it current. Also, the main GUI thread will dispatch resize and paint events to a QGLWidget when the widget is resized, or parts of it becomes exposed or needs redrawing. It is therefore necessary to handle those events because the default implementations inside QGLWidget will try to make the QGLWidget's context current, which again will interfere with any threads rendering into the widget. Reimplement QGLWidget::paintEvent() and QGLWidget::resizeEvent() to notify the rendering thread that a resize or update is necessary, and be careful not to call the base class implementation. If you are rendering an animation, it might not be necessary to handle the paint event at all since the rendering thread is doing regular updates. Then it would be enough to reimplement QGLWidget::paintEvent() to do nothing.
As a general rule when doing threaded rendering: be aware that binding and releasing contexts in different threads have to be synchronized by the user. A GL rendering context can only be current in one thread at any time. If you try to open a QPainter on a QGLWidget and the widget's rendering context is current in another thread, it will fail.
In addition to this, rendering using raw GL calls in a separate thread is supported.
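The paintEvent()/resizeEvent() overrides described above might look like this; ThreadedGLWidget and renderRequested() are made-up names, and the commented block shows what the render thread would do once the context has been moved to it:

```cpp
#include <QGLWidget>
#include <QPaintEvent>
#include <QResizeEvent>

class ThreadedGLWidget : public QGLWidget
{
    Q_OBJECT
signals:
    void renderRequested();                 // picked up by the render thread

protected:
    void paintEvent(QPaintEvent*) override
    {
        emit renderRequested();             // do NOT call QGLWidget::paintEvent()
    }
    void resizeEvent(QResizeEvent* event) override
    {
        // record event->size() somewhere the render thread can read it (with locking),
        // then ask for a redraw; again, skip the base class implementation
        emit renderRequested();
    }
};

// In the render thread, after doneCurrent() + QGLContext::moveToThread() in the GUI thread:
//     widget->makeCurrent();
//     QPainter p(widget);        // uses the GL 2 paint engine
//     p.drawText(10, 20, "hello");
//     p.end();
//     widget->swapBuffers();
//     widget->doneCurrent();
```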