non-blocking SwapBuffers() with VSync=on - opengl

I am looking for a portable way to make a non-blocking SwapBuffers() even if VSync is activated.
In other words, is it possible to be notified by an event, or to know the delay until the next VSync?

IIRC this extension helps: http://www.opengl.org/registry/specs/SGI/video_sync.txt, but it is very poorly supported with current drivers.

Firstly, why don't you just call SwapBuffers() at the start of the frame? Or somehow change the pipeline to
Render();
Update(); //Update before swapping buffers
SwapBuffers();
While OpenGL is working away at all of the commands you just threw at it, you can do all of your update logic.
Otherwise there's a few ways to solve this problem.
I know that XNA has a ScanLine property, which tells you which scanline the screen is currently drawing. Core OpenGL doesn't expose an equivalent; the closest things are platform extensions such as the SGI video-sync extension mentioned in the question.
Use multithreaded rendering. Many modern engines dedicate a whole thread just to rendering; if it blocks, that's fine, since it doesn't disturb the main thread. Alternatively, an easier route is to handle input etc. on a separate thread, which avoids complications with graphics contexts. A minimal sketch of the render-thread approach follows this list.
Use triple buffering. With triple buffering you have two back buffers. After you call SwapBuffers(), the screen can keep scanning out the front buffer while your newly finished buffer waits its turn, and you render the next frame into the third buffer. Of course, if you have already pre-rendered two frames, SwapBuffers() will still block.
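For the multithreaded option, here is a minimal sketch using GLFW and std::thread. This is only an outline: renderFrame() is a placeholder for your actual drawing code. GLFW event processing must stay on the main thread, but the context can be made current on the render thread, so a vsync'd swap only ever blocks there:

#include <GLFW/glfw3.h>
#include <atomic>
#include <thread>

std::atomic<bool> running{true};

void renderLoop(GLFWwindow* window)
{
    glfwMakeContextCurrent(window); // the context now belongs to this thread
    glfwSwapInterval(1);            // vsync on: blocking here is harmless
    while (running) {
        // renderFrame();           // placeholder for your drawing code
        glfwSwapBuffers(window);    // may block on vsync, but only this thread
    }
}

int main()
{
    glfwInit();
    GLFWwindow* window = glfwCreateWindow(800, 600, "demo", nullptr, nullptr);
    std::thread renderer(renderLoop, window);
    while (!glfwWindowShouldClose(window))
        glfwWaitEvents();           // the main thread only pumps input/events
    running = false;
    renderer.join();
    glfwTerminate();
    return 0;
}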

Related

OpenGL render loop

I have an application which renders a 3d object using OpenGL, allowing the user to rotate and zoom and inspect the object. Currently, this is driven directly by received mouse messages (it's a Windows MFC MDI application). When a mouse movement is received, the viewing matrix is updated, and the scene re-rendered into the back buffer, and then SwapBuffers is called. For a spinning view, I start a 20ms timer and render the scene on the timer, with small updates to the viewing matrix each frame. This is OK, but is not perfectly smooth. It sometimes pauses or skips frames, and is not linked to vsync. I would love to make it smoother and smarter with the rendering.
It's not like a game where it needs to be rendered every frame though. There are long periods where the object is not moved, and does not need to be re-rendered.
I have come across GLFW library and the glfwSwapInterval function. Is this a commonly used solution?
Should I create a separate thread for the render loop, rather than being message/timer driven?
Are there other solutions I should investigate?
Are there any good references for how to structure a suitable render loop? I'm OK with all the rendering code - just looking for a better structure around the rendering code.
So, I'll assume you are using GLFW to create and manage your window.
If you don't have to update your window every frame, I suggest using glfwWaitEvents() or glfwWaitEventsTimeout(). The first tells the system to put the thread (not the window) to sleep until any event happens (a mouse press, a resize event, etc.). The second is similar, but lets you specify a timeout for the sleep: it waits until an event happens OR the specified time runs out.
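For a viewer-style app like yours, the loop can look roughly like this (a sketch; needs_redraw and draw_scene() are placeholder names for your own state and rendering code):

while (!glfwWindowShouldClose(window))
{
    glfwWaitEvents();           // sleep until an event arrives (mouse, resize, ...)
    if (needs_redraw) {         // set from your input callbacks
        draw_scene();
        glfwSwapBuffers(window);
        needs_redraw = false;
    }
}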
As for glfwSwapInterval(), it is probably not the solution you are looking for. This function sets the number of vertical retraces the driver waits for before glfwSwapBuffers() actually performs the swap.
If you use glfwSwapInterval(1), for example (assuming you have a valid OpenGL context), your swaps are synchronized to the refresh rate of your monitor (commonly known as v-sync).
If you use glfwSwapInterval(0), that synchronization is turned off, and the video card swaps buffers instantly when glfwSwapBuffers() is called, without waiting.
If you use glfwSwapInterval(2), glfwSwapBuffers() waits for two retraces before presenting the framebuffer to the screen. So if your display refreshes at 60 Hz, glfwSwapInterval(2) gives you 30 fps in your program (assuming you present with glfwSwapBuffers()).
glfwSwapInterval(3) gives you 20 fps, glfwSwapInterval(4) gives 15 fps, and so on.
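In code this is a single call, made once the context is current:

glfwMakeContextCurrent(window);
glfwSwapInterval(1); // wait one vertical retrace per swap: ~60 fps on a 60 Hz display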
As for a separate render thread: it is a good way to separate your "thinking" and rendering work, but it comes with its own trade-offs and difficulties. Tip: some window events can't be handled properly without a separate thread (see this question).
The usual render loop looks like this (as far as I've learned from the learnopengl lessons):
// Setup process before...

// Run the loop until the window is marked "should close". In GLFW this is
// queried with glfwWindowShouldClose():
// https://www.glfw.org/docs/latest/group__window.html#ga24e02fbfefbb81fc45320989f8140ab5
while (!glfwWindowShouldClose(window))
{
    // Prepare for handling input events (e.g. callbacks in GLFW)
    prepare();
    // Handle events (if there are none, this just returns)
    glfwPollEvents(); // <-- You can also use glfwWaitEvents()
    // "Thinking" step of your program
    tick();
    // Clear the window framebuffer (better put this in a separate function too)
    glClearColor(0.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT);
    // Render everything
    render();
    // Swap buffers (you can also put this in a separate function)
    glfwSwapBuffers(window); // <-- Present the finished frame to the screen
}
// Exiting operations after...
See this ("Ready your engines" part) for additional info. Wish you luck!

OpenGL asynchronous render of scene that takes several seconds

I've implemented a Mandelbrot fractal generator using wxWidgets and OpenGL. The computation is performed inside a fragment shader, and I'm using a wxGLCanvas widget to display the result. It's working well, but when I want to do a high-res export, the thread locks up for a few seconds which freezes the UI.
Initially I tried moving all rendering code (and context creation) into a separate render thread, but what I found was that it wasn't just the render thread that would lock up, but ALL threads. This could be easily demonstrated by spawning a new thread prior to doing the render that just prints a message to stdout in a loop. It would get as far as printing 1 message before freezing, then resuming once the render was complete.
To perform the file export, I first render to a texture, then I read the pixels into main memory with glGetTexImage. The render occurs asynchronously as you would expect, but the glGetTexImage function will block (again, that's expected). I therefore tried using glFenceSync in combination with glGetSynciv to only call glGetTexImage once the fence had been reached indicating a completed render.
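The pattern I tried looks roughly like this (renderHighResToTexture() and pixels stand in for my actual FBO draw and output buffer):

GLsync fence;

void startExport()
{
    renderHighResToTexture(); // issue the heavy draw; returns immediately
    fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    glFlush();                // make sure the fence reaches the GPU
}

bool exportFinished()         // polled from an idle/timer event
{
    GLint status = 0;
    glGetSynciv(fence, GL_SYNC_STATUS, sizeof(status), nullptr, &status);
    if (status != GL_SIGNALED)
        return false;
    glDeleteSync(fence);
    // texture assumed bound; the read should now be cheap
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return true;
}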
I could confirm the draw call was returning immediately, but the moment I returned to the wxWidgets event loop to wait for the render to finish, all threads in the application would freeze. I figure maybe wxWidgets is making an OpenGL call that forces a sync (probably something in wxGLCanvas); I'm fairly sure it wasn't something in my code.
I'm not sure why all threads were blocking on the glGetTexImage call, rather than just the render thread. I thought it might be a quirk of my setup (hardware, driver, OS, etc.), but got the same result on a completely different platform.
The only remaining option I can think of is to do the export render in another process with its own OpenGL context. If I'm still to use wxGLCanvas to set up the context I would probably need a visible window on the screen, which isn't ideal. Also, if the user tries to close the window during the render, it would be unresponsive.
Any ideas?
Not so much a solution as a workaround (suggested by derhass in the comments): split the render into several smaller renders. I'm drawing the image in horizontal strips, each 10 pixels high, and calling wxSafeYield() between strips so the UI remains somewhat responsive (and the user can abort the render if it's taking too long).
An additional advantage is that there's less danger of overloading the GPU. Prior to implementing this I once kicked off a long render that caused my displays to shut off and I had to do a hard reboot.
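In outline the strip loop looks like this (a sketch; drawFullscreenQuad() and m_abort are placeholders for my shader draw and the user's cancel flag):

const int stripHeight = 10;
glEnable(GL_SCISSOR_TEST);
for (int y = 0; y < imageHeight && !m_abort; y += stripHeight) {
    glScissor(0, y, imageWidth, stripHeight); // confine this pass to one strip
    drawFullscreenQuad();                     // the fragment shader does the work
    glFinish();                               // bound the GPU work per strip
    wxSafeYield();                            // keep the UI responsive, allow abort
}
glDisable(GL_SCISSOR_TEST);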

Qt QGLWidget OpenGL rendering from thread blocks on swapBuffers()

I have a strange problem rendering OpenGL to QGLWidget from a different thread than the main thread.
There are a lot of official statements from Qt Developers that it is "perfectly possible" to do rendering from a different thread. I followed the explanation in:
http://doc.qt.nokia.com/qq/qq06-glimpsing.html#writingmultithreadedglapplications
I implemented it in nearly the same way. The only difference is that I don't use a QWorkspace with several GLWidgets; instead I just create a MainWindow with a GLWidget as the central widget.
When I start the application, the rendering thread starts rendering frames with a triangle at a random position. After a while (sometimes 2 seconds, sometimes 10) the thread starts to block in the swapBuffers() call for a very long time. Sometimes swapBuffers() returns spontaneously after several seconds. When I move the mouse pointer over the widget or the main window, swapBuffers() returns immediately, and as long as I keep moving the mouse it does not block. After moving the mouse off the widget, or simply stopping it, rendering continues for a few seconds and then swapBuffers() starts blocking again.
I have absolutely no explanation for this behaviour. I am aware that swapBuffers() regularly blocks until a frame is completed, and it's also clear to me that a wait for vsync may happen during the OpenGL buffer swap. But that should take a few milliseconds, not block for several seconds. The environment is X11 with GLX.
Does anybody have an idea what is going on here?
I don't even know how to go about finding out what the problem might be.
Has anyone tried to implement rendering from a different thread as explained in the document linked above?

Best Drawing approach

I have developed a wxWidgets application that uses a bitmap for drawing. When the application launches, it reads coordinates from a file and draws lines accordingly. It also receives UDP packets from the network containing x/y coordinate information that has to be drawn on the screen, so whenever a packet arrives I redraw the bitmap image and display it. I also need to refresh the bitmap on mouse-move events, because mouse movement adds new drawing that I have to put on the screen.
All this raises the cost of every update and slows down my GUI, so please suggest an alternative drawing approach that might be more efficient in this situation.
I searched on Google and OpenGL came up as an option, but I'm short on time and have no OpenGL experience, so I'd rather not use it.
It sounds as if your problem is that your GUI is unresponsive to user input because the application is busy redrawing the display. There are a couple of general solutions to this kind of problem.
Draw the bitmap in memory using a worker thread. While this is going on, the main thread can continue to interact with the user. Once the bitmap has been redrawn, the worker thread signals the main thread, and the main thread then copies the completed bitmap to the screen, an operation that is extremely fast.
Use the main thread to draw the bitmap directly to the screen, but sprinkle the drawing code with calls to wxApp::Yield(). This will allow the GUI to remain responsive to the user during a lengthy drawing process.
Option 1 is the 'best', especially on multicore machines, but keeping the two threads synchronized and free of contention is a challenge unless you have significant experience with multithreaded design. Option 2 is much simpler, though you still have to be careful that user interaction doesn't start another drawing pass before the first has finished. A sketch of option 1 follows.
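A rough sketch of option 1 with wxWidgets 3.x and std::thread (MyCanvas, RenderInto(), and the m_* members are hypothetical names): the worker fills a plain RGB buffer, which is safe to do off the GUI thread, and CallAfter() hands the result back so only the main thread touches wxBitmap:

void MyCanvas::StartRedraw()
{
    const int w = m_width, h = m_height;
    std::thread([this, w, h] {
        // heavy line drawing into a plain pixel buffer, off the GUI thread
        unsigned char* rgb = (unsigned char*)malloc(w * h * 3);
        RenderInto(rgb, w, h);           // placeholder for the drawing code
        CallAfter([this, rgb, w, h] {    // runs later on the main thread
            wxImage img(w, h, rgb);      // takes ownership of the malloc'd buffer
            m_bitmap = wxBitmap(img);    // build the bitmap on the GUI thread
            Refresh();                   // schedule a repaint that blits m_bitmap
        });
    }).detach();
}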
Save off the data to draw instead of refreshing the bitmap on every input, and have the main loop refresh the bitmap from time to time.
This way the program never bogs down. The downside, of course, is lower reactivity: when data arrives, it may not be seen on screen for another 20 milliseconds or so instead of right away.
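A sketch of this approach (OnUdpPacket(), RebuildBitmap(), and m_pending are hypothetical names; m_timer.Start(20) would fire OnTimer roughly 50 times a second):

void MyCanvas::OnUdpPacket(const Point& p)
{
    m_pending.push_back(p);        // cheap: just remember the new data
}

void MyCanvas::OnTimer(wxTimerEvent&)
{
    if (m_pending.empty())
        return;
    RebuildBitmap(m_pending);      // one redraw covers every queued point
    m_pending.clear();
    Refresh();
}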

Many OpenGL drawing areas swapping buffers slowdown problem

I'm having a slowdown problem when using openGL with gtk (though gtkglext) and doing animations.
Essentially I have a GTK application that renders certain displays using OpenGL. Many windows can be open at once (and some windows contain multiple drawing areas), so it's possible to have, say, 20-30 OpenGL drawing areas on the screen at once. None of the drawing is too heavy, and OpenGL handles it very fast.
My problem comes when all these displays are animating: it really slows down the application. After much research I determined that the swap-buffer call is what is causing my problems. When drawing in GTK you must do all your drawing in the widget's expose event. So when you want to draw, you call gtk_widget_queue_draw on the drawing-area widget, and when GTK processes its events it calls the expose events serially on all the widgets that need redrawing. The problem is that after the drawing is done, I need to call swap buffers to get the OpenGL output on the screen (because of double buffering). This call appears to block (because vsync is on) until the monitor refreshes. That isn't a problem with, say, 3 drawing areas on screen, but with a ton of them there is a ton of swap-buffer calls, each blocking in its own expose event, none of them in sync, and that really slows down the app.
My question, then, is whether there is some way to synchronize all the swap-buffer calls so there isn't so much blocking. Turning vsync off (ugly in itself, because it's OS/OpenGL-implementation specific) fixes the speed problem, but then there are tearing issues. I'm not sure how multiple threads would help, because I have to call swap buffers in the GTK expose event so the drawing stays in sync with GTK, unless there's something I'm not thinking of.
Any help would be appreciated!
If you have 20+ windows, what do you expect? Each one is vsynced to the same timing: the screen refresh. Each one will have to do a bunch of memory operations during that time. All at the same time. Of course there's going to be slowdown. Unless you have 20+ processors, they're going to have to get in line one behind the other.
Really, there's not much you can do besides limit the number of GL windows shown to the user.
The typical approach to this problem is to use a separate thread for each OpenGL context that has to swap.
However, OpenGL implementors could (and should, I'd say) introduce an extension providing "coordinated swaps" or something similar. Some synchronization extensions already exist, most notably http://www.opengl.org/registry/specs/OML/glx_sync_control.txt
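A very rough sketch of the thread-per-swap idea under GLX (SwapQueue is a hypothetical thread-safe queue; glXSwapBuffers may legally be called from a thread with no current context, so the expose handlers only flush and enqueue, and the blocking vsync wait moves off the GTK main loop):

struct SwapJob { Display* dpy; GLXDrawable drawable; };
SwapQueue<SwapJob> g_swaps;        // assumed blocking producer/consumer queue

void swapWorker()
{
    for (;;) {
        SwapJob job = g_swaps.pop();            // blocks until work is enqueued
        glXSwapBuffers(job.dpy, job.drawable);  // the vsync wait happens here
    }
}

// in each expose handler, after drawing:
//     glFlush();                      // submit the GL commands
//     g_swaps.push({dpy, drawable});  // defer the blocking swap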