OpenGL render loop - C++

I have an application which renders a 3D object using OpenGL, allowing the user to rotate, zoom, and inspect the object. Currently, this is driven directly by received mouse messages (it's a Windows MFC MDI application). When a mouse movement is received, the viewing matrix is updated, the scene is re-rendered into the back buffer, and then SwapBuffers is called. For a spinning view, I start a 20ms timer and render the scene on the timer, with small updates to the viewing matrix each frame. This is OK, but not perfectly smooth: it sometimes pauses or skips frames, and is not linked to vsync. I would love to make the rendering smoother and smarter.
It's not like a game where it needs to be rendered every frame though. There are long periods where the object is not moved, and does not need to be re-rendered.
I have come across GLFW library and the glfwSwapInterval function. Is this a commonly used solution?
Should I create a separate thread for the render loop, rather than being message/timer driven?
Are there other solutions I should investigate?
Are there any good references for how to structure a suitable render loop? I'm OK with all the rendering code - just looking for a better structure around the rendering code.

I'll assume you are using GLFW to create and manage your window.
If you don't have to update your window on each frame, I suggest using glfwWaitEvents() or glfwWaitEventsTimeout(). The first one puts the calling thread (not the window) to sleep until any event happens (a mouse press, a resize event, etc.). The second one is similar, but you can specify a timeout for the sleep state: the function waits until any event happens OR until the specified time runs out.
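For example, a minimal event-driven loop could look like this (a sketch; window creation is omitted, and needs_redraw is a flag you would set from your input callbacks):

    while (!glfwWindowShouldClose(window))
    {
        // Sleep until an event arrives, or for at most half a second;
        // no CPU is burned while the scene is idle.
        glfwWaitEventsTimeout(0.5);
        if (needs_redraw)
        {
            render();                // your existing rendering code
            glfwSwapBuffers(window);
            needs_redraw = false;
        }
    }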
As for glfwSwapInterval(), that is probably not the solution you are looking for. This function sets the number of screen refreshes the driver waits for before glfwSwapBuffers() actually swaps the buffers.
If you use glfwSwapInterval(1) (assuming you have a valid OpenGL context), buffer swaps are synchronized to your monitor's refresh rate (commonly called v-sync).
If you use glfwSwapInterval(0), synchronization with the monitor is disabled and glfwSwapBuffers() swaps instantly, without waiting.
If you use glfwSwapInterval(2), glfwSwapBuffers() waits for two refresh intervals before presenting the framebuffer to the screen. So if your display runs at 60 Hz, glfwSwapInterval(2) results in 30 fps in your program (assuming you use glfwSwapBuffers() to present each frame).
glfwSwapInterval(3) gives you 20 fps, glfwSwapInterval(4) gives 15 fps, and so on.
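For completeness, setting the interval is a one-liner right after you make the context current (a sketch; window creation omitted):

    glfwMakeContextCurrent(window); // the interval applies to the current context
    glfwSwapInterval(1);            // wait one screen refresh per buffer swap (v-sync)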
As for a separate render thread: it is a good option if you want to decouple your "thinking" and rendering work, but it comes with its own difficulties and trade-offs. Tip: some window events can't be handled properly without a separate thread (see this question).
The usual render loop looks like this (as far as I've learned from the learnopengl lessons):
// Setup before the loop...
// Run the loop until the window is marked "should close". In GLFW this is
// queried with glfwWindowShouldClose():
// https://www.glfw.org/docs/latest/group__window.html#ga24e02fbfefbb81fc45320989f8140ab5
while (!glfwWindowShouldClose(window))
{
    // Prepare for handling input events (e.g. callbacks in GLFW)
    prepare();
    // Handle events (if there are none, this is just skipped)
    glfwPollEvents(); // <-- You can also use glfwWaitEvents()
    // "Thinking" step of your program
    tick();
    // Clear the window framebuffer (better to put this in a separate function too)
    glClearColor(0.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT);
    // Render everything
    render();
    // Swap buffers (you can also put this in a separate function)
    glfwSwapBuffers(window); // <-- Present the framebuffer to the screen
}
// Cleanup after the loop...
See this ("Ready your engines" part) for additional info. Wish you luck!

Related

OpenGL asynchronous render of scene that takes several seconds

I've implemented a Mandelbrot fractal generator using wxWidgets and OpenGL. The computation is performed inside a fragment shader, and I'm using a wxGLCanvas widget to display the result. It's working well, but when I want to do a high-res export, the thread locks up for a few seconds, which freezes the UI.
Initially I tried moving all rendering code (and context creation) into a separate render thread, but what I found was that it wasn't just the render thread that would lock up, but ALL threads. This could be easily demonstrated by spawning a new thread prior to doing the render that just prints a message to stdout in a loop. It would get as far as printing 1 message before freezing, then resuming once the render was complete.
To perform the file export, I first render to a texture, then I read the pixels into main memory with glGetTexImage. The render occurs asynchronously as you would expect, but the glGetTexImage function will block (again, that's expected). I therefore tried using glFenceSync in combination with glGetSynciv to only call glGetTexImage once the fence had been reached indicating a completed render.
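(For reference, the fence-polling code I tried looked roughly like this; exportTexture and pixels stand in for my actual objects:)

    // After issuing the render-to-texture draw call:
    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    glFlush(); // make sure the fence is actually submitted to the GPU

    // Later, polled from the event loop instead of blocking:
    GLint status = GL_UNSIGNALED;
    glGetSynciv(fence, GL_SYNC_STATUS, sizeof(status), nullptr, &status);
    if (status == GL_SIGNALED) {
        glDeleteSync(fence);
        glBindTexture(GL_TEXTURE_2D, exportTexture);
        glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    }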
I could confirm the draw call was returning immediately, but the moment I returned to the wxWidgets event loop to wait for the render to finish, all threads in the application would freeze. I figure maybe wxWidgets is making an OpenGL call that's forcing a sync (probably something in wxGLCanvas); I'm fairly sure it wasn't something in my code.
I'm not sure why all threads were blocking on the glGetTexImage call, rather than just the render thread. I thought it might be a quirk of my setup (hardware, driver, OS, etc.), but got the same result on a completely different platform.
The only remaining option I can think of is to do the export render in another process with its own OpenGL context. If I'm still to use wxGLCanvas to set up the context I would probably need a visible window on the screen, which isn't ideal. Also, if the user tries to close the window during the render, it would be unresponsive.
Any ideas?
Not so much a solution as a workaround (suggested by derhass in the comments): split the render into several smaller renders. I'm drawing the image in horizontal strips, each 10 pixels high, and calling wxSafeYield() between each one so the UI remains somewhat responsive (and the user can abort the render if it's taking too long).
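Roughly, the loop looks like this (a sketch; renderStrip and aborted are placeholders for my actual drawing code and cancel flag):

    const int stripHeight = 10;
    glEnable(GL_SCISSOR_TEST);
    for (int y = 0; y < imageHeight && !aborted; y += stripHeight) {
        glScissor(0, y, imageWidth, stripHeight); // confine drawing to one strip
        renderStrip();                            // the expensive fragment-shader pass
        glFinish();                               // wait for this strip to complete
        wxSafeYield();                            // keep the UI responsive
    }
    glDisable(GL_SCISSOR_TEST);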
An additional advantage is that there's less danger of overloading the GPU. Prior to implementing this I once kicked off a long render that caused my displays to shut off and I had to do a hard reboot.

OpenGL: How to minimize drawing?

My OpenGL screen consists of 2 triangles and 1 texture, nothing else. I'd like to update the screen as little as possible, to save power and limit CPU/GPU usage. Unfortunately, when my draw_scene routine returns early without drawing anything, the OpenGL screen goes black, even if I never call glutSwapBuffers. My background color is not black, by the way. It seems that if I do not draw, the OpenGL window loses its contents. How can I minimize the amount of drawing that is done?
Modern graphics systems assume that when a redraw is initiated, the whole contents are redrawn. Furthermore, if you get a redraw event from the graphics system, that's usually because the contents of the window have become undefined and need to be recreated, so you must redraw in that situation.
To save power, you have to disable the idle loop (or have it skip all work and yield back to the OS scheduler immediately) and avoid timers that generate events.
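With GLUT, for example, an on-demand setup registers no idle function and no timers, and redraws only when input actually changes the scene. A sketch (drawing details elided):

    #include <GL/glut.h>

    void display() {
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw the two textured triangles here ...
        glutSwapBuffers();
    }

    void onMouseMove(int x, int y) {
        // Update whatever state depends on the mouse,
        // then request exactly one redraw.
        glutPostRedisplay();
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutCreateWindow("scene");
        glutDisplayFunc(display);
        glutMotionFunc(onMouseMove);
        // Deliberately no glutIdleFunc and no glutTimerFunc:
        // the process sleeps in glutMainLoop() until an event arrives.
        glutMainLoop();
    }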

3D object won't update in a for loop

I am trying to rotate a 3D object, but it doesn't update when applying transforms in a for loop.
The object jumps to the last position.
How does one update a 3D object's position in a sequence of updates if it won't update in a for loop?
Just calling glTranslate, glRotate or such won't change things on the screen. Why? Because OpenGL is a plain drawing API, not a scene graph. All it knows about are points, lines and triangles that it draws to a pixel framebuffer. That's it. If you want to change something on the screen, you must redraw it, i.e. clear the picture and draw it again, with the changes.
BTW: you should not use a dedicated loop to implement animations (neither for, nor while, nor do-while). Instead, perform the animation in the idle handler and issue a redraw event.
I reckon you have a wrong understanding of what OpenGL does for you.
I'll try to outline:
- Send vertex data to the GPU (once). This only specifies the (base) shape of the object.
- Create matrices to rotate, translate or transform the object (per update).
- Send the matrices to the shader (per update). The shader then calculates the screen position using the original vertex position and the transformation matrix.
- Tell OpenGL to draw the bound vertices (per update).
Imagine programming with OpenGL like being a web client: only specifying the request (changing the matrix and binding stuff) is not enough; you need to explicitly send the request (send the transformation data and tell OpenGL to draw) to receive the answer (objects on the screen).
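A minimal sketch of that once-vs-per-update split, assuming a compiled shader program with a mat4 uniform named "transform" and GLM for the matrix math (vbo, program, vertices, angle and vertexCount are placeholders):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/type_ptr.hpp>

    // Once, at startup: upload the (base) shape of the object.
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // Per update: build a new matrix, send it, and explicitly ask for a draw.
    glm::mat4 transform = glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0.f, 1.f, 0.f));
    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "transform"),
                       1, GL_FALSE, glm::value_ptr(transform));
    glDrawArrays(GL_TRIANGLES, 0, vertexCount); // the explicit "send the request"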
It is possible to draw an animation from a loop.
for (...) {
    edit_transformation();
    draw();
    glFlush();   // maybe glutSwapBuffers() if you use GLUT
    usleep(100); // not standard C, bad
}
You draw, you flush/swap to make sure that what you just drew is sent to the screen, and you sleep.
However, it is not recommended to do this in an interactive application. The main reason is that while you are in this loop, nothing else can run. Your application will be unresponsive.
That's why window systems are event-based. Every few milliseconds, the window system pings your app so you can update your state, for example to advance an animation. This is the idle function. When the state of your program changes, you tell the window system that you would like to draw again. It is then up to the window system to call your display function. You do your OpenGL calls when the system tells you to.
If you use GLUT for communicating with the window system, this looks like the code below. Other libraries like GLFW have equivalent functions.
void update();
void display();

int main() {
    ... // Create window, set everything up.
    glutIdleFunc(update);     // Register idle function
    glutDisplayFunc(display); // Register display function
    glutMainLoop();           // The window system is in charge from here on.
}

void update() {
    edit_transformation(); // Update your models
    glutPostRedisplay();   // Tell the window system that something changed.
}

void display() {
    draw();    // Your OpenGL code here.
    glFlush(); // or glutSwapBuffers();
}

Qt QGLWidget OpenGL rendering from thread blocks on swapBuffers()

I have a strange problem rendering OpenGL to QGLWidget from a different thread than the main thread.
There are a lot of official statements from Qt Developers that it is "perfectly possible" to do rendering from a different thread. I followed the explanation in:
http://doc.qt.nokia.com/qq/qq06-glimpsing.html#writingmultithreadedglapplications
I implemented it nearly the same way. The only difference is that I don't use a QWorkspace with several GLWidgets; instead I just create a MainWindow with a GLWidget as the central widget.
When I start the application, the rendering thread starts rendering frames with a triangle at a random position. After a while (sometimes 2 seconds, sometimes 10 seconds) the thread starts to block on the swapBuffers() call for a very long time. Sometimes swapBuffers() returns spontaneously after several seconds. When I move the mouse pointer over the widget or the main window, swapBuffers() returns immediately, and as long as I keep moving the mouse pointer, swapBuffers() does not block. After moving the mouse out of the widget, or simply stopping the mouse, rendering continues for some seconds and then swapBuffers() starts blocking again.
I have absolutely no explanation for this behaviour. I am aware that swapBuffers() regularly blocks until a frame is completed, and it's also clear to me that a wait for vsync might happen during an OpenGL buffer swap call. But that should take some milliseconds, not block for several seconds. The environment is X11 with GLX.
Does anybody have an idea what is going on here?
I don't even have an idea of how to find out what the problem might be.
Has anyone tried to implement rendering from a different thread as explained in the document I linked above?

non-blocking SwapBuffers() with VSync=on

I am looking for a portable way to make a non-blocking SwapBuffers() even if VSync is activated.
In other words, is it possible to be notified by an event, or to know the delay until the next VSync?
IIRC this extension helps: http://www.opengl.org/registry/specs/SGI/video_sync.txt, but it is very poorly supported with current drivers.
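A rough sketch of how it would be used on X11/GLX, assuming the extension is actually advertised in the GLX extension string (the function pointers are loaded at runtime; this is an illustration, not guaranteed to work on current drivers):

    #include <GL/glx.h>

    typedef int (*PFNGLXGETVIDEOSYNCSGIPROC)(unsigned int*);
    typedef int (*PFNGLXWAITVIDEOSYNCSGIPROC)(int, int, unsigned int*);

    auto pglXGetVideoSyncSGI = (PFNGLXGETVIDEOSYNCSGIPROC)
        glXGetProcAddress((const GLubyte*)"glXGetVideoSyncSGI");
    auto pglXWaitVideoSyncSGI = (PFNGLXWAITVIDEOSYNCSGIPROC)
        glXGetProcAddress((const GLubyte*)"glXWaitVideoSyncSGI");

    unsigned int count = 0;
    pglXGetVideoSyncSGI(&count);                      // current vertical retrace counter
    pglXWaitVideoSyncSGI(2, (count + 1) % 2, &count); // returns at the next retrace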
Firstly, why don't you just call SwapBuffers() at the start of the frame? Or somehow change the pipeline to
Render();
Update();      // Update before swapping buffers
SwapBuffers();
While OpenGL is working away at all of the commands you just threw at it, you can do all of your update logic.
Otherwise, there are a few ways to solve this problem.
I know that XNA has a ScanLine Property, which tells you which scanline the screen is currently up to. I don't know if OpenGL exposes this too, but I'm pretty sure it must. (Right?)
Use multithreaded rendering. Many modern engines dedicate a whole thread just to rendering. If it blocks, that's fine; it doesn't disturb the main thread. Alternatively, an easier way is to just handle input etc. on a new thread, which avoids complications with graphics contexts.
Use triple buffering. With triple buffering you have two back buffers: after you call SwapBuffers(), the screen can continue scanning out the front buffer while your newly finished buffer waits, and the third buffer is free for you to render the next frame into. Of course, if you have already pre-rendered two frames, SwapBuffers() will still block.