I've implemented a Mandelbrot fractal generator using wxWidgets and OpenGL. The computation is performed inside a fragment shader, and I'm using a wxGLCanvas widget to display the result. It's working well, but when I do a high-res export, the main thread locks up for a few seconds, which freezes the UI.
Initially I tried moving all rendering code (and context creation) into a separate render thread, but what I found was that it wasn't just the render thread that would lock up, but ALL threads. This could be easily demonstrated by spawning a new thread prior to doing the render that just prints a message to stdout in a loop. It would get as far as printing 1 message before freezing, then resuming once the render was complete.
To perform the file export, I first render to a texture, then I read the pixels into main memory with glGetTexImage. The render occurs asynchronously as you would expect, but the glGetTexImage call blocks (again, that's expected). I therefore tried using glFenceSync in combination with glGetSynciv to only call glGetTexImage once the fence had been signalled, indicating the render was complete.
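Roughly, the fence polling looks like this (a simplified sketch; exportTex and pixels stand in for my texture handle and destination buffer):

GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glFlush(); // make sure the fence actually reaches the GPU

GLint signaled = GL_UNSIGNALED;
while (signaled != GL_SIGNALED) {
    glGetSynciv(fence, GL_SYNC_STATUS, sizeof(signaled), nullptr, &signaled);
    // ideally return to the event loop here instead of spinning
}

glBindTexture(GL_TEXTURE_2D, exportTex);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels); // should no longer stall
glDeleteSync(fence);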
I could confirm the draw call was returning immediately, but the moment I returned to the wxWidgets event loop to wait for the render to finish, all threads in the application would freeze. I figure maybe wxWidgets is making an OpenGL call that's forcing a sync (probably something in wxGLCanvas) - I'm fairly sure it wasn't something in my code.
I'm not sure why all threads were blocking on the glGetTexImage call, rather than just the render thread. I thought it might be a quirk of my setup (hardware, driver, OS, etc.), but got the same result on a completely different platform.
The only remaining option I can think of is to do the export render in another process with its own OpenGL context. If I'm still to use wxGLCanvas to set up the context I would probably need a visible window on the screen, which isn't ideal. Also, if the user tries to close the window during the render, it would be unresponsive.
Any ideas?
Not so much a solution, but a workaround (suggested by derhass in the comments) is to split the render into several smaller renders. I'm drawing the image in horizontal strips, each 10 pixels high, and calling wxSafeYield() between each one so the UI remains somewhat responsive (and the user can abort the render if it's taking too long).
An additional advantage is that there's less danger of overloading the GPU. Prior to implementing this I once kicked off a long render that caused my displays to shut off and I had to do a hard reboot.
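For reference, the strip loop is roughly this (a simplified sketch; DrawFullscreenQuad(), imageWidth/imageHeight and the userAborted flag are illustrative names, not my exact code):

const int stripHeight = 10;
glEnable(GL_SCISSOR_TEST);
for (int y = 0; y < imageHeight; y += stripHeight) {
    int h = std::min(stripHeight, imageHeight - y);
    glScissor(0, y, imageWidth, h);   // restrict this pass to one horizontal strip
    DrawFullscreenQuad();             // the fragment-shader-heavy draw call
    glFinish();                       // keep each chunk of GPU work short
    wxSafeYield();                    // let the UI process pending events
    if (userAborted)                  // set by a Cancel button handler
        break;
}
glDisable(GL_SCISSOR_TEST);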
Related
I am making my first game and I am stuck on one problem. I have a world where you can walk around freely, but when you meet an enemy you switch to a battle scene, and at that point I need to load all the models that will be rendered in the battle scene. The loading takes about 5 seconds, so I want to show a loading screen. I render the loading screen in the main thread, but how can I load the 3D models and build the various VAOs and VBOs at the same time? I created a new thread for this loading, but I read online "don't use threads for generating VAOs". What is the best way to handle this loading? Should I just preload all the models in the main thread before the game starts? Personally, it doesn't seem right to me to load all the 3D models at the beginning of the game.
Assuming you have two windows, you can bind each window's context to a separate thread. Problems will arise if you share data between them (proper locking is mandatory).
See glfwMakeContextCurrent:
This function makes the OpenGL or OpenGL ES context of the specified window current on the calling thread. A context must only be made current on a single thread at a time and each thread can have only a single current context at a time.
Thread safety: This function may be called from any thread.
See glfwSwapBuffers:
This function swaps the front and back buffers of the specified window when rendering with OpenGL or OpenGL ES.
Thread safety: This function may be called from any thread.
Some functions in GLFW may only be called from the main thread (and not from callbacks), e.g. glfwPollEvents, but other than that: bind the context to a thread, perform your OpenGL calls, and swap the buffers. As said before, as long as you don't share any buffers between the contexts, there should be no problem.
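A minimal sketch of that setup (assuming glfwInit() and any window hints have already been done; error handling omitted):

#include <thread>
#include <GLFW/glfw3.h>

GLFWwindow* worker = glfwCreateWindow(640, 480, "worker", nullptr, nullptr); // window creation stays on the main thread

std::thread renderThread([worker] {
    glfwMakeContextCurrent(worker);        // bind this context to the worker thread
    while (!glfwWindowShouldClose(worker)) {
        // ... OpenGL calls for this context only ...
        glfwSwapBuffers(worker);           // allowed from any thread
    }
});

while (!glfwWindowShouldClose(worker))
    glfwPollEvents();                      // must stay on the main thread
renderThread.join();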
I have an application which renders a 3d object using OpenGL, allowing the user to rotate and zoom and inspect the object. Currently, this is driven directly by received mouse messages (it's a Windows MFC MDI application). When a mouse movement is received, the viewing matrix is updated, and the scene re-rendered into the back buffer, and then SwapBuffers is called. For a spinning view, I start a 20ms timer and render the scene on the timer, with small updates to the viewing matrix each frame. This is OK, but is not perfectly smooth. It sometimes pauses or skips frames, and is not linked to vsync. I would love to make it smoother and smarter with the rendering.
It's not like a game where it needs to be rendered every frame though. There are long periods where the object is not moved, and does not need to be re-rendered.
I have come across GLFW library and the glfwSwapInterval function. Is this a commonly used solution?
Should I create a separate thread for the render loop, rather than being message/timer driven?
Are there other solutions I should investigate?
Are there any good references for how to structure a suitable render loop? I'm OK with all the rendering code - just looking for a better structure around the rendering code.
So, I assume you are using GLFW to create and manage your window.
If you don't have to update your window every frame, I suggest using glfwWaitEvents() or glfwWaitEventsTimeout(). The first tells the system to put the calling thread (not the window) to sleep until any event happens (mouse press, resize event, etc.). The second is similar, but you can specify a timeout for the sleep: the function waits until any event happens OR until the specified time runs out.
As for glfwSwapInterval(), this is probably not the solution you are looking for. This function sets how many screen refreshes the video card waits for when glfwSwapBuffers() is called.
If you, for example, use glfwSwapInterval(1) (assuming you have a valid current OpenGL context), buffer swaps will be synchronized to your monitor's refresh rate (aka v-sync, though I'm not sure it is strictly valid to call it that).
If you use glfwSwapInterval(0), synchronization with the monitor is turned off, and the video card swaps the buffers in glfwSwapBuffers() instantly, without waiting.
If you use glfwSwapInterval(2), glfwSwapBuffers() waits for two screen refreshes before presenting the frame. So if your display runs at 60 Hz, glfwSwapInterval(2) will result in 30 fps in your program (assuming you present every frame with glfwSwapBuffers()).
glfwSwapInterval(3) will give you 20 fps, glfwSwapInterval(4) 15 fps, and so on.
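For example (assuming window is a valid GLFW window with an OpenGL context):

glfwMakeContextCurrent(window); // the swap interval applies to the current context
glfwSwapInterval(1);            // wait one screen refresh per swap (v-sync on)
// glfwSwapInterval(0);         // swap immediately, no waiting
// glfwSwapInterval(2);         // wait two refreshes: ~30 fps on a 60 Hz display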
As for a separate render thread, this is good if you want to separate your "thinking" and rendering work, but it comes with its own advantages, disadvantages and difficulties. Tip: some window events can't be handled "properly" without a separate thread (see this question).
The usual render loop looks like this (as far as I've learned from the learnopengl lessons):
// Setup process before...
while(!window_has_to_close) // <-- Run game loop until window is marked "has to
// close". In GLFW this is done using glfwWindowShouldClose()
// https://www.glfw.org/docs/latest/group__window.html#ga24e02fbfefbb81fc45320989f8140ab5
{
// Prepare for handling input events (e.g. callbacks in GLFW)
prepare();
// Handle events (if there are none, this is just skipped)
glfwPollEvents(); // <-- You can also use glfwWaitEvents()
// "Thinknig step" of your program
tick();
// Clear the window framebuffer (better to also put this in a separate function)
glClearColor(0.f, 0.f, 0.f, 1.f);
glClear(GL_COLOR_BUFFER_BIT);
// Render everything
render();
// Swap buffers (you can also put this in separate function)
glfwSwapBuffers(window); // <-- Flush framebuffer to screen
}
// Exiting operations after...
See this ("Ready your engines" part) for additional info. Wish you luck!
I am experimenting with Qt for a new layout for an instrument simulation program at work. Our current sim is running everything in a single window (we've used both glut (old) and fltk), it uses glViewport(...) and glScissor(...) to split instrument readouts into their own views, and then it uses some form of "ortho2D" calls to create their own virtual pixel space. The simulator currently updates the instruments and then draws each in their own viewport one by one, all in the same thread.
We want to find a better approach, and we settled on Qt. I am working under a few big constraints:
Each instrument panel still needs to be in its own OpenGL viewport. There are a lot of buttons and a lot of instruments. My tentative solution is to use a QOpenGLWidget for each. I have made progress on this.
The sim is not just a pretty readout, but also simulates many of the instruments as feedback for the instrument designers, so it sometimes has a hefty CPU load. It isn't a full hardware emulator, but it does simulate the logic. I don't think it's feasible to tell the instruments to update themselves at the beginning of their associated widget's paintEvent(...) method, so I want simulation updates to run in a separate thread.
Our customers may have old computers, so more recent versions of OpenGL have been ruled out. We are still using glBegin() and glEnd() and everything in between, and the instruments draw a crap ton of variable symbols, so drawing takes a lot of time and I want to split drawing off into its own thread. I don't yet know if OpenGL 3 is on the table, which will be necessary (I think) for rendering to off-screen buffers.
Problem: QOpenGLWidget does not have an overridable "update" method, and it only draws during the widgets' paintEvent(...) and paintGL(...) calls.
Tentative Solution: Split the simulator into three threads:
GUI: Runs user input, paintEvent(...), and paintGL(...).
Simulator: Runs all instrument logic and updates values for symbology.
Drawing: Renders latest symbology to an offscreen buffer (will use a frame buffer object (FBO)).
In this design, cross-thread communication is cyclic and one-way: the GUI thread provides input, the simulator thread takes that input into account on its next loop, the drawing thread reads the latest symbology and renders it to the FBO, setting a "next frame available" flag to true (or maybe emitting a signal), and then the paintGL(...) method takes that FBO and blits it to the widget, thus keeping event processing down and GUI responsiveness up. Continue this cycle.
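To make the hand-off concrete, the "next frame available" step might look roughly like this (frameReady and glWidget are placeholder names, not code from the actual sim):

#include <atomic>

std::atomic<bool> frameReady{false};

// Drawing thread, after rendering the latest symbology into the FBO:
frameReady.store(true, std::memory_order_release);
QMetaObject::invokeMethod(glWidget, "update", Qt::QueuedConnection); // schedule a repaint on the GUI thread

// GUI thread, inside paintGL():
// if (frameReady.exchange(false)) { /* blit the FBO's texture to the widget */ }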
Bottom line question: I've read here that GUI operations cannot be done in a separate thread, so is my approach even feasible?
If feasible, any other caution or suggestions would be appreciated.
Each OpenGL widget has its own OpenGL context, and these contexts are QObjects and thus can be moved to other threads. As with any otherwise non-threadsafe object, you should only access them from their thread().
Additionally - and this is also portable to QML - you could use worker functors to compute display lists that are then submitted to the render thread to be converted into draw calls. The render thread doesn't do any logic and doesn't compute anything: it takes data (vertex arrays, etc.) and submits it for drawing. The worker functors would be submitted for execution on a thread pool using QtConcurrent::run.
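As a hypothetical sketch (DisplayList, computeDisplayList(), simState and renderer are placeholder names, not Qt API; renderer is assumed to be a QObject living on the render thread):

#include <QtConcurrent>

QtConcurrent::run([&] {
    DisplayList list = computeDisplayList(simState);   // heavy work on the thread pool, no OpenGL calls here
    QMetaObject::invokeMethod(renderer,                 // hand the result to the object on the render thread
                              [renderer, list] { renderer->submit(list); },
                              Qt::QueuedConnection);
});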
You can thus have a main thread, a render thread (perhaps one per widget, but not necessarily), and functors that run your simulation steps.
In any case, intertwining logic and rendering is a very bad idea. Whether you're drawing using QPainter on a raster widget, using QPainter on a QOpenGLWidget, or using direct OpenGL calls, the thread that does the drawing should not have to compute what's to be drawn.
If you don't want to mess with OpenGL calls, and you can represent most of your work as array-based QPainter calls (e.g. drawRects, drawPolygons), these translate almost directly into OpenGL draw calls and the OpenGL backend will render them just as quickly as if you hand-coded the draw calls. QPainter does all this for you if you use it on a QOpenGLWidget!
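For instance, a paintGL() along these lines (a minimal sketch; InstrumentWidget and the rects container are assumptions, not taken from the answer above):

void InstrumentWidget::paintGL()
{
    QPainter p(this);                               // uses the OpenGL paint engine on a QOpenGLWidget
    p.fillRect(rect(), Qt::black);
    p.setPen(Qt::green);
    p.drawRects(rects.constData(), rects.size());   // array-based call, translated into GL draw calls
}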
I have a strange problem rendering OpenGL to a QGLWidget from a different thread than the main thread.
There are a lot of official statements from Qt Developers that it is "perfectly possible" to do rendering from a different thread. I followed the explanation in:
http://doc.qt.nokia.com/qq/qq06-glimpsing.html#writingmultithreadedglapplications
I implemented it in nearly the same way. The only difference is that I don't use a QWorkspace with several GLWidgets; instead I just create a MainWindow with a GLWidget as the central widget.
When I start the application, the rendering thread starts rendering frames with a triangle at a random position. After a while (sometimes 2 seconds, sometimes 10 seconds) the thread starts to block on the swapBuffers() call for a very long time. Sometimes swapBuffers() returns spontaneously after several seconds. When I move the mouse pointer over the widget or the main window, swapBuffers() returns immediately, and as long as I keep moving the mouse pointer, swapBuffers() does not block. After moving the mouse out of the widget or simply stopping, rendering continues for a few seconds and then swapBuffers() starts blocking again.
I have absolutely no explanation for this behaviour. I am aware that swapBuffers() regularly blocks until a frame is completed, and it's also clear to me that a wait for vsync might happen during the OpenGL buffer swap call. But that should take some milliseconds, not block for several seconds. The environment is X11 with GLX.
Does anybody have an idea what is going on here?
I don't even know how to find out what the problem might be.
Has anyone else tried to implement rendering from a different thread as explained in the document I linked above?
I have developed an application in wxWidgets in which I am using a bitmap for drawing. When my application launches, it reads coordinates from a file and draws lines accordingly. The application also receives UDP packets from the network; these packets contain x/y coordinate information that has to be drawn on screen, so whenever a packet is received I redraw the bitmap and display it. I also need to refresh the bitmap on mouse-move events, because moving the mouse produces new drawing that I have to show on screen.
All this increases the cost of drawing and slows down my GUI. So kindly suggest an alternative drawing approach that might be more efficient in this situation.
I have searched on Google and OpenGL came up as an option, but due to time constraints I don't want to use OpenGL, because I have no experience with it.
It sounds as if your problem is that your GUI is unresponsive to user input because the application is busy redrawing the display. There are a couple of general solutions to this kind of problem.
Draw the bitmap in memory using a worker thread. While this is going on, the main thread can continue to interact with the user. Once the bitmap has been redrawn, the worker thread signals the main thread, and the main thread then copies the completed bitmap to the screen - which is extremely fast.
Use the main thread to draw the bitmap directly to the screen, but sprinkle the drawing code with calls to wxApp::Yield(). This will allow the GUI to remain responsive to the user during a lengthy drawing process.
Option 1 is the 'best', especially when running on multicore machines, but it is a challenge to keep the two threads synchronized and prevent contention between them, unless you have significant experience with multithreading design. Option 2 is much simpler, though you still have to be careful that the user interaction doesn't start another drawing process before the first is finished.
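A rough sketch of option 1 using wxWidgets' CallAfter to hop back to the GUI thread (MyCanvas, RenderToBitmap() and m_bitmap are placeholder names, and this assumes drawing into an off-screen bitmap from a worker thread is safe on your platform):

#include <thread>

void MyCanvas::StartRedraw()
{
    std::thread([this] {
        wxBitmap result = RenderToBitmap();   // the slow drawing, done off the GUI thread
        CallAfter([this, result] {            // queued back onto the main (GUI) thread
            m_bitmap = result;                // swap in the finished bitmap
            Refresh();                        // fast blit in the next paint event
        });
    }).detach();
}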
Save off the data to draw instead of refreshing the bitmap immediately, and have the main loop refresh the bitmap from time to time.
This way the program never bogs down. The downside, of course, is that responsiveness is lower (i.e. when data arrives, it may not be seen on screen for another 20 milliseconds or so instead of right away).
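A small sketch of that idea with a wxTimer (m_timer, m_dataDirty and RebuildBitmap() are illustrative names):

// In the canvas constructor:
m_timer.Bind(wxEVT_TIMER, [this](wxTimerEvent&) {
    if (m_dataDirty) {          // set by the UDP / mouse handlers instead of redrawing immediately
        RebuildBitmap();        // redraw the bitmap once for all pending data
        m_dataDirty = false;
        Refresh();
    }
});
m_timer.Start(20);              // check roughly every 20 ms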