In an OpenGL application using EGL on X11 or Wayland, is there a way to wait for the next v-blank without swapping buffers?
I'm writing an OpenGL app that should only redraw when the content actually changes, but I still want a way to run animations at the screen refresh rate even when I'm not drawing anything.
Related
Say my application has a 3D window rendering a model of millions of triangles using OpenGL.
Goal: for some user operations there is no need to update the 3D window. The 3D view can just stay idle with the previously rendered content, without repeatedly recalculating the rotation/translation/scaling/texturing. I assume this will save a lot of CPU and GPU time.
Current design: I have a rendering while loop running all the time. If I stop the while loop, the content is rendered once and then disappears.
Question: is there a way to achieve this goal? Can anyone give a direction?
Instead of having a continuous rendering loop, you can use OpenGL to render your window only when the system sends you an event to repaint it. Additionally, you invalidate your own window when you know that its contents have changed (e.g. in reaction to a mouse click). In fact this is the proper way to draw your window for anything other than latency-sensitive applications.
The exact specifics highly depend on the API you use to create your window. For example,
With WinAPI you render on WM_PAINT and invalidate with InvalidateRect.
With Xlib you render on Expose and invalidate with XClearArea (passing exposures = True), which generates an Expose event for the window.
With Qt you render in QOpenGLWindow::paintGL and invalidate with update().
With GLUT you render in glutDisplayFunc and invalidate with glutPostRedisplay.
... and so on. This is not OpenGL specific in any way.
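For instance, a minimal event-driven setup with GLUT could look like the sketch below; the scene state and the mouse reaction are made up for illustration:

    // Event-driven redraw with GLUT: render only in the display
    // callback, invalidate with glutPostRedisplay() after input.
    #include <GL/glut.h>

    static float angle = 0.0f;  // hypothetical scene state

    static void display(void) {
        glClear(GL_COLOR_BUFFER_BIT);
        glLoadIdentity();
        glRotatef(angle, 0.0f, 0.0f, 1.0f);
        glBegin(GL_TRIANGLES);
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.0f,  0.5f);
        glEnd();
        glutSwapBuffers();
    }

    static void mouse(int button, int state, int x, int y) {
        if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN) {
            angle += 15.0f;       // the contents changed...
            glutPostRedisplay();  // ...so request a repaint
        }
    }

    int main(int argc, char **argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutCreateWindow("event-driven redraw");
        glutDisplayFunc(display);
        glutMouseFunc(mouse);
        glutMainLoop();  // no idle callback: the app sleeps between events
        return 0;
    }

Between events the process simply blocks in the event loop, so it consumes essentially no CPU or GPU time.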
I am working on a graphical application that supports multiple operating systems and graphics back ends. The window is created with GLFW and the graphics API is chosen at runtime. When running the program on Windows with OpenGL, VSync seems to be broken: the frame rate is locked at 60 fps, yet screen tearing artifacts appear. Following the GLFW documentation, glfwSwapInterval(0); should unlock the frame rate from the default of using VSync. That works as expected. glfwSwapInterval(1); should lock the frame rate to the monitor's refresh rate, and not calling glfwSwapInterval() at all should default to using VSync. While the frame rate is correctly locked/unlocked by these calls, I ran into some very interesting behaviour.
When glfwSwapInterval() isn't called at all, VSync is enabled by default, but the wait for the next frame happens at the first draw call! One would expect the delay for the next frame to happen at glfwSwapBuffers(). No screen artifacts are visible whatsoever.
When calling glfwSwapInterval(1);, VSync is enabled and the delay for the next frame happens at glfwSwapBuffers()! That's great; however, when explicitly setting VSync this way, screen tearing artifacts appear.
Right now, not calling glfwSwapInterval() at all and relying on the VSync default seems to be a hacky solution, but:
The user wouldn't be able to disable VSync without reconstructing the window,
The profiler shows the first draw call taking far too long, because the VSync wait somehow happens there.
I have tried fiddling with GPU driver settings and testing the code on multiple machines. The problem persists across machines when using Windows and OpenGL.
If anyone can make any sense of this, please share, or if I am misunderstanding something, I would greatly appreciate some pointers in the right direction.
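For reference, my setup boils down to the standard GLFW pattern; this is a trimmed sketch rather than my actual code (the window title, size, and loop body are placeholders):

    #include <GLFW/glfw3.h>

    int main(void) {
        if (!glfwInit())
            return -1;
        GLFWwindow *window = glfwCreateWindow(640, 480, "vsync test", NULL, NULL);
        if (!window) {
            glfwTerminate();
            return -1;
        }
        glfwMakeContextCurrent(window);
        glfwSwapInterval(1);  // request VSync: one vblank per buffer swap

        while (!glfwWindowShouldClose(window)) {
            glClear(GL_COLOR_BUFFER_BIT);
            // ... draw calls ...
            glfwSwapBuffers(window);  // expected to block here until vblank
            glfwPollEvents();
        }
        glfwTerminate();
        return 0;
    }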
EDIT:
One other detail: the tearing happens at a specific horizontal line; the rest of the frame seems to render properly.
After doing some more tests, it seems that everything is working as intended on integrated graphics. Correct me if I am wrong, but it looks like it is a graphics driver issue.
My OpenGL screen consists of 2 triangles and 1 texture, nothing else. I'd like to update the screen as little as possible to save power and limit CPU/GPU usage. Unfortunately, when my draw_scene routine returns early without drawing anything, the OpenGL screen goes black, even if I never call glutSwapBuffers. My background color is not black, by the way. It seems that if I do not draw, the OpenGL window loses its contents. How can I minimize the amount of drawing that is done?
Modern graphics systems assume that when a redraw is initiated, the whole contents are redrawn. Furthermore, if you get a redraw event from the graphics system, that's usually because the contents of the window have become undefined and need to be recreated, so you must redraw the full scene in that situation.
To save power, disable the idle loop (or have it skip all work and immediately yield back to the OS scheduler) and don't let timers generate events.
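With GLUT, which you appear to be using, that amounts to unregistering the idle callback whenever nothing animates, and always drawing the full scene in the display callback. A hedged sketch; everything except the GLUT calls is invented for illustration:

    #include <GL/glut.h>
    #include <stddef.h>

    static void draw_scene(void) {
        glClear(GL_COLOR_BUFFER_BIT);
        // draw the 2 triangles + texture every time: after a redraw
        // event the window contents are undefined
        glutSwapBuffers();
    }

    static void idle(void) {
        // per-frame animation updates would go here
        glutPostRedisplay();
    }

    static void set_animating(int on) {
        // with no idle callback registered, glutMainLoop() sleeps
        // until the next event instead of spinning at 100% CPU
        glutIdleFunc(on ? idle : NULL);
    }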
I've written an OpenGL application in Linux through GLX. It uses double buffering with glXSwapBuffers and Sync to VBlank set via NVIDIA X Server Settings. I'm using Compiz and have smooth moving of windows and no tearing (Sync to VBlank enabled in Compiz settings).
But when I
Try to move or resize the OpenGL window or
Move other windows through the area occupied by the OpenGL window
the system stutters and freezes for 3-4 seconds. Moving other windows outside the area occupied by the OpenGL window is smooth as always.
Moreover, the problem only arises if the OpenGL application is in its loop producing frames of animation and therefore swapping the buffers. If the content is static and the application is not swapping buffers, there are no problems; moving the various windows is smooth.
Could this be a synchronization issue between my application and Compiz?
I don't know if it's still in the same shape as a few years ago, but…
Your description matches a Compiz SNAFU very well. Every window resize triggers the recreation of the texture that receives the window contents. Texture creation is a costly operation and hence should be avoided. Unfortunately the Compiz developers don't seem to have realized there's an obvious solution to this problem: windows in X11 can be reparented without much cost (every window manager does this several times per window), and Compiz is a window manager.
So why doesn't Compiz keep a desktop-sized window around, reparent windows that are about to be resized into it, take its constant-sized window texture from there, and, after the resize operation finishes, reparent the window back into its decoration frame?
I don't know why this is the case. Anyway, some things Compiz does are not very smart.
If you want to fix this, well: Compiz is open source and I just described what to do.
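For reference, the reparenting itself is a single Xlib request; this fragment is purely illustrative and all names are placeholders:

    #include <X11/Xlib.h>

    // Move `client` under `holder`: the server merely relinks the
    // window tree; no pixel data is copied and no texture is rebuilt.
    void reparent_into_holder(Display *dpy, Window client, Window holder) {
        XReparentWindow(dpy, client, holder, 0, 0);
        XFlush(dpy);
    }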
I have developed an application in wxWidgets in which I am using a bitmap for drawing. When my application launches, it reads coordinates from a file and draws lines accordingly. The application also receives UDP packets from the network that contain x/y coordinate information to be drawn on screen, so whenever a packet is received I redraw the bitmap image and display it. I also need to refresh the bitmap on every mouse-move event, because mouse movement produces new drawing that has to appear on screen.
All this adds up and slows down my GUI, so please suggest an alternative drawing approach that might be more efficient in this situation.
I have searched on Google and found OpenGL as an option, but due to time constraints I don't want to use OpenGL, because I have no experience with it.
It sounds as if your problem is that your GUI is unresponsive to user input because the application is busy redrawing the display. There are a couple of general solutions to this kind of problem.
Draw the bitmap in memory using a worker thread. While this is going on, the main thread can continue to interact with the user. Once the bitmap has been redrawn, the worker thread signals the main thread, and the main thread then copies the completed bitmap to the screen, which is extremely fast.
Use the main thread to draw the bitmap directly to the screen, but sprinkle the drawing code with calls to wxApp::Yield(). This will allow the GUI to remain responsive to the user during a lengthy drawing process.
Option 1 is the 'best', especially on multicore machines, but keeping the two threads synchronized and preventing contention between them is a challenge unless you have significant experience with multithreaded design. Option 2 is much simpler, though you still have to be careful that user interaction doesn't start another drawing pass before the first has finished.
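A rough, untested sketch of option 1 follows; the class and member names are hypothetical, and the worker thread only touches a plain byte buffer so that no wxWidgets GUI object is used off the main thread:

    #include <wx/wx.h>
    #include <memory>
    #include <thread>
    #include <vector>

    class Canvas : public wxPanel {
    public:
        using wxPanel::wxPanel;

        // kick off a redraw of the backing bitmap on a worker thread;
        // the panel must outlive the thread (a simplification here)
        void RebuildAsync(int w, int h) {
            std::thread([this, w, h] {
                // expensive rasterisation into a plain RGB buffer
                auto pixels = std::make_shared<std::vector<unsigned char>>(
                    size_t(w) * h * 3, 255);
                // ... plot the file/UDP coordinates into *pixels here ...

                CallAfter([this, w, h, pixels] {
                    // back on the GUI thread: wrap the buffer in a bitmap
                    wxImage img(w, h, pixels->data(), true /*static data*/);
                    m_bitmap = wxBitmap(img);  // deep copy; buffer may die
                    Refresh();                 // schedule a repaint
                });
            }).detach();
        }

    private:
        wxBitmap m_bitmap;

        void OnPaint(wxPaintEvent &) {
            wxPaintDC dc(this);
            if (m_bitmap.IsOk())
                dc.DrawBitmap(m_bitmap, 0, 0);  // blitting is the fast part
        }

        wxDECLARE_EVENT_TABLE();
    };

    wxBEGIN_EVENT_TABLE(Canvas, wxPanel)
        EVT_PAINT(Canvas::OnPaint)
    wxEND_EVENT_TABLE()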
Alternatively, save off the incoming data instead of refreshing the bitmap on every event, and have the main loop refresh the bitmap periodically.
This way the program never bogs down. The downside, of course, is lower reactivity: when data arrives it won't be seen on screen for up to another 20 milliseconds or so instead of right away.
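A hedged sketch of that approach using a wxTimer (the 50 ms interval and all identifiers are placeholders):

    #include <wx/wx.h>
    #include <vector>

    struct Pt { int x, y; };

    class ThrottledCanvas : public wxPanel {
    public:
        explicit ThrottledCanvas(wxWindow *parent)
            : wxPanel(parent), m_timer(this) {
            Bind(wxEVT_TIMER, &ThrottledCanvas::OnTimer, this);
            m_timer.Start(50);  // refresh at most every 50 ms
        }

        // called from the UDP/mouse handlers: just record, don't draw
        void AddPoint(Pt p) {
            m_points.push_back(p);
            m_dirty = true;
        }

    private:
        void OnTimer(wxTimerEvent &) {
            if (!m_dirty)
                return;       // nothing new since the last tick
            m_dirty = false;
            // ... redraw the backing bitmap from m_points here ...
            Refresh();        // then schedule a repaint
        }

        std::vector<Pt> m_points;
        bool m_dirty = false;
        wxTimer m_timer;
    };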