I am working on a graphical application that supports multiple operating systems and graphics back ends. The window is created with GLFW, and the graphics API is chosen at runtime. When running the program on Windows with OpenGL, VSync seems to be broken: the frame rate is locked at 60 FPS, yet screen tearing artifacts appear. According to the GLFW documentation, glfwSwapInterval(0); should unlock the frame rate from the default of using VSync, and that works as expected. glfwSwapInterval(1); should lock the frame rate to the monitor's refresh rate, and not calling glfwSwapInterval() at all should default to using VSync. While the frame rate is correctly locked and unlocked by these calls, I ran into some very strange behaviour.
When glfwSwapInterval() isn't called at all, VSync is enabled by default, but the wait for the next frame happens at the first draw call! One would expect the delay to happen at glfwSwapBuffers(). No screen artifacts are visible whatsoever.
When calling glfwSwapInterval(1);, VSync is set and the delay for the next frame happens at glfwSwapBuffers(), as one would expect. However, when VSync is set explicitly like this, screen tearing artifacts appear.
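For reference, a minimal sketch of my setup; the window size, title and the single clear call are placeholders, not my actual code:

#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit())
        return -1;

    GLFWwindow* window = glfwCreateWindow(1280, 720, "VSync test", NULL, NULL);
    if (!window) { glfwTerminate(); return -1; }

    glfwMakeContextCurrent(window); // the swap interval applies to the current context
    glfwSwapInterval(1);            // 1 = VSync on, 0 = uncapped; the call in question

    while (!glfwWindowShouldClose(window))
    {
        glClear(GL_COLOR_BUFFER_BIT); // first GL call of the frame; with the default
                                      // swap interval, the VSync wait shows up here
        // ... draw calls ...
        glfwSwapBuffers(window);      // where the VSync wait is expected to block
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}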
Right now, not calling glfwSwapInterval() at all and relying on the default VSync seems like a hacky solution, but:
- the user wouldn't be able to disable VSync without reconstructing the window, and
- the profiler shows the first draw call taking far too long, since the VSync wait somehow happens there.
I have tried fiddling with GPU driver settings and testing the code on multiple machines. The problem persists across machines as long as they run Windows and use OpenGL.
If anyone can make sense of this, please share; and if I am misunderstanding something, I would greatly appreciate a pointer in the right direction.
EDIT:
One more detail: the tearing always happens at one specific horizontal line; the rest of the frame looks correct.
After doing some more tests, everything seems to work as intended on integrated graphics. Correct me if I am wrong, but this looks like a graphics driver issue.
Related
How do I enable something like v-sync in a freeglut program? Is there a way to reduce screen tearing, for example by properly syncing the monitor's refresh rate with the frames the program pushes out? Is there a library that already takes care of this? I have found tutorials mentioning wglSwapIntervalEXT, but no simple tutorial showing it in its bare form. My freeglut program runs in fullscreen, and there is screen tearing that I can't figure out how to fix.
I'm struggling desperately to get Vsync to work in my OpenGL application. Here's the vital stats:
I'm using Windows, coding in C++ with OpenGL, and using freeGLUT for my OpenGL context (double buffered). I'm aware that for the buffer swap to wait for vertical sync on Windows, you are required to call wglSwapIntervalEXT().
My code does call this (as you'll see below), yet I am still getting vertical tearing. The only way I've managed to stop it is by calling glFinish(), which of course has a significant performance penalty.
The relevant parts of my main() function look like this:
//Initiating glut window
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
glutInitWindowSize(initial_window_width, initial_window_height);
glutInitWindowPosition(100, 100);
int glut_window_hWnd = glutCreateWindow(window_title.c_str());

//Setting up swap intervals
PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT = NULL;
PFNWGLGETSWAPINTERVALEXTPROC wglGetSwapIntervalEXT = NULL;

if (WGLExtensionSupported("WGL_EXT_swap_control"))
{
    // Extension is supported; get the function pointers.
    wglSwapIntervalEXT = (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

    // This is another function from the WGL_EXT_swap_control extension.
    wglGetSwapIntervalEXT = (PFNWGLGETSWAPINTERVALEXTPROC)wglGetProcAddress("wglGetSwapIntervalEXT");
}

if (wglSwapIntervalEXT)     // guard against a null pointer if the extension is missing
    wglSwapIntervalEXT(1);  // request one vertical retrace per buffer swap

init();
glutMainLoop(); // Starting the glut loop
return(0);
I should point out that WGLExtensionSupported returns true for WGL_EXT_swap_control, so the extension is supported.
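For debugging, one can also set the interval and then read it back, since driver settings can silently override the request (a sketch using the pointers obtained above):

if (wglSwapIntervalEXT && wglGetSwapIntervalEXT)
{
    wglSwapIntervalEXT(1);                  // request VSync
    int interval = wglGetSwapIntervalEXT(); // read back what the driver actually applied
    printf("Requested swap interval 1, driver reports %d\n", interval);
}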
OK - between the great help I've received here and my own hours of research and messing around with it, I've discovered a few things, including a solution that works for me (in case others come across this problem).
Firstly, I was using freeGLUT. I converted my code to GLFW and the result was the same, so this was NOT an API issue - don't waste your time like I did!
In my program at least, using wglSwapIntervalEXT(1) DOES NOT stop vertical tearing, and this was what led to it being such a headache to solve.
With my NVIDIA driver set to VSYNC = ON I was still getting tearing (because this is equivalent to SwapInterval(1), which doesn't help) - but the setting was applied correctly and the driver was doing what it should have been. I just didn't know it, because I was still getting tearing.
So I set my NVIDIA driver to VSYNC = 'Application preference' and used wglSwapIntervalEXT(60) instead of the 1 I had always been using, and found that the call was actually working, because it gave me a refresh rate of about 1 Hz (one swap every 60 retraces on a 60 Hz monitor).
I don't know why wglSwapIntervalEXT(1) doesn't VSync my screen, but wglSwapIntervalEXT(2) has the desired effect, though obviously I'm now rendering only every other frame, which is inefficient.
I found that with VSYNC disabled glFinish() does NOT help with tearing, but with it enabled it does (if anyone can explain why, that would be great).
So in summary, with wglSwapIntervalEXT(1) set and a glFinish() call added, I no longer have tearing, but I still don't understand why.
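To show the concrete shape of that fix, here is a sketch of the end-of-frame sequence it implies; the callback name is a placeholder, and placing glFinish() after the swap is an assumption on my part:

void display()
{
    // ... draw the scene ...
    glutSwapBuffers(); // with wglSwapIntervalEXT(1) set, the swap targets the vertical retrace
    glFinish();        // block until the GPU has actually finished presenting the frame;
                       // this is the call that eliminated the remaining tearing
}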
Here are some performance stats from my app (deliberately loaded so FPS stays below 60):
wglSwapIntervalEXT(0) = tearing, 58 FPS
wglSwapIntervalEXT(1) = tearing, 58 FPS
wglSwapIntervalEXT(2) = no tearing, 30 FPS
wglSwapIntervalEXT(1) + glFinish() = no tearing, 52 FPS
I hope this helps someone in the future. Thanks for all your help.
I am also having problems with VSync and freeglut.
Previously I used GLUT, and I was able to selectively enable VSync for multiple GLUT windows.
Now with freeglut, it seems that wglSwapIntervalEXT() has no effect.
What does have an effect is the global VSync option in the NVIDIA control panel: if I enable VSync there, I have VSync in both of my freeglut windows, and if I disable it, I don't. It does not matter what I set specifically for my application (in the NVIDIA control panel).
The following also confirms what I observe:
if (wglSwapIntervalEXT(0))          // the return value signals success of the call,
    printf("VSYNC set\n");          // not the actual VSync state
int swap = wglGetSwapIntervalEXT(); // query the interval currently in effect
printf("Control window vsync: %i\n", swap);
The value of swap is always whatever is set in the NVIDIA control panel, no matter what I try to set with wglSwapIntervalEXT().
Otherwise, here you can read what glFinish() is good for:
http://www.opengl.org/wiki/Swap_Interval
I use it because I need to know when my monitor has been updated, so that I can synchronously execute tasks after that (capture something with a camera).
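A sketch of that pattern (the capture function is a placeholder for whatever task needs to follow the screen update):

glutSwapBuffers();
glFinish();               // returns once the queued swap has completed
trigger_camera_capture(); // placeholder: runs synchronously after the monitor update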
As described in this question here, no "true" VSync exists. Using glFinish() is the correct approach: it causes your card to finish processing everything it has been sent before continuing and rendering the next frame.
You should keep track of your FPS and the time it takes to render a frame; you might find glFinish() is simply exposing another bottleneck in your code.
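A minimal frame-timing sketch, assuming a C++ render loop; the loop condition and render call are placeholders:

#include <chrono>
#include <cstdio>

void run_loop()
{
    auto last = std::chrono::steady_clock::now();
    while (app_running()) // placeholder loop condition
    {
        render_frame();   // placeholder for draw + swap (+ glFinish if enabled)
        auto now = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(now - last).count();
        last = now;
        printf("frame: %.2f ms (%.1f FPS)\n", ms, 1000.0 / ms);
    }
}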
I've written an OpenGL application on Linux using GLX. It uses double buffering with glXSwapBuffers, and Sync to VBlank is set via NVIDIA X Server Settings. I'm using Compiz and have smooth moving of windows and no tearing (Sync to VBlank is enabled in the Compiz settings as well).
But when I
- try to move or resize the OpenGL window, or
- move other windows through the area occupied by the OpenGL window,
the system stutters and freezes for 3-4 seconds. Moving other windows outside the area occupied by the OpenGL window is as smooth as always.
Moreover, the problem only arises while the OpenGL application is in its loop producing frames of animation, and therefore swapping the buffers. If the content is static and the application is not swapping the buffers, there are no problems; moving the various windows is smooth.
Could this be a synchronization issue between my application and Compiz?
I don't know if it's still in the same shape as a few years ago, but…
Your description matches a Compiz SNAFU very well. Every window resize triggers the recreation of the texture that receives the window contents. Texture creation is a costly operation and hence should be avoided. Unfortunately the Compiz developers don't seem to be the brightest, because they did not realize there's an obvious solution to this problem: windows in X11 can be reparented without much cost (every reparenting window manager does this several times per window), and Compiz is a window manager.
So why doesn't Compiz keep a desktop-sized window around, reparent windows that are about to be resized into it, source its constant-sized window texture from there, and, after the resize operation finishes, reparent the window back into its decoration frame?
I don't know why this is the case. Anyway, some things Compiz does are not very smart.
If you want to fix this, well: Compiz is open source and I just described what to do.
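For the curious, reparenting in Xlib is a single call (a sketch; the display and window handles are placeholders):

#include <X11/Xlib.h>

void reparent(Display* dpy, Window child, Window new_parent)
{
    // Moves 'child' into 'new_parent' at offset (0, 0) without destroying
    // or recreating the window's backing resources.
    XReparentWindow(dpy, child, new_parent, 0, 0);
    XFlush(dpy);
}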
I am working on an Ogre application in which I set real-time views as a background in my window. However, I have a question: when I try to get my application's frame rate using RenderTarget::getAverageFPS(), I get 19.7433. Is this the right frame rate?
And how can I change this frame rate, for example to 30 or 40 FPS?
Unless your application is locked to the screen's vsync, you can't just change your framerate. You have to optimize your rendering so that you can render within whatever frame time your desired framerate allows. Or, alternatively, render less stuff.
So if you want to render a frame 30 times a second, your rendering (and everything else) must happen within 1/30th of a second.
In short: Ogre is probably not directly the cause of your framerate. What you're telling Ogre to do is.
Note that you should be checking this in an optimized, release build, not debug. Debug builds are slow (because they're for debugging).
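To make the 1/30th-of-a-second budget concrete, a simple frame cap looks like this (a sketch, not Ogre-specific; the loop condition and render call are placeholders):

#include <chrono>
#include <thread>

void run_capped_loop()
{
    const auto frame_budget = std::chrono::duration<double>(1.0 / 30.0); // ~33 ms per frame
    while (app_running()) // placeholder loop condition
    {
        auto start = std::chrono::steady_clock::now();
        render_frame();   // placeholder: must fit inside the budget on average
        auto elapsed = std::chrono::steady_clock::now() - start;
        if (elapsed < frame_budget)
            std::this_thread::sleep_for(frame_budget - elapsed); // sleep off the remainder
    }
}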
I'm currently writing a game of immense sophistication and cunning, that will fill you with awe and won- oh, OK, it's the 15 puzzle, and I'm just familiarising myself with SDL.
I'm running in windowed mode, and using SDL_Flip as the general-case page update, since it maps automatically to an SDL_UpdateRect of the full window in windowed mode. Not the optimum approach, but given that this is just the 15 puzzle...
Anyway, the tile moves are happening at ludicrous speed. IOW, SDL_Flip in windowed mode doesn't include any synchronisation with vertical retraces. I'm working in Windows XP ATM, but I assume this is correct behaviour for SDL and will occur on other platforms too.
Switching to using SDL_UpdateRect obviously won't change anything. Presumably, I need to implement the delay logic in my own code. But a simple clock-based timer could result in updates occurring when the window is half-drawn, causing visible distortions (I forget the technical name).
EDIT This problem is known as "tearing".
So - in a windowed mode game in SDL, how do I synchronise my page-flips with the vertical retrace?
EDIT I have seen several claims, while searching for a solution, that it is impossible to synchronise page-flips to the vertical retrace in a windowed application. On Windows, at least, this is simply false - I have written games (by which I mean things on a similar level to the 15 puzzle) that do this. I once wasted some time playing with Dark Basic and the Dark GDK - both DirectX-based, and both synchronising page-flips to the vertical retrace in windowed mode.
Major Edit
It turns out I should have spent more time looking before asking. From the SDL FAQ...
http://sdl.beuc.net/sdl.wiki/FAQ_Double_Buffering_is_Tearing
That seems to imply quite strongly that synchronising with the vertical retrace isn't supported in SDL windowed-mode apps.
But...
The basic technique is possible on Windows, and I'm beginning to think SDL does it, in a sense. I'm just not quite certain yet.
On Windows, I said before, synchronising page-flips to vertical syncs in windowed mode has been possible all the way back to the 16-bit days using WinG. It turns out that that's not exactly wrong, but it is misleading. I dug out some old source code using WinG, and there was a timer triggering the page-blits. WinG will run at ludicrous speed, just as I was surprised to see SDL doing - the blit-to-screen page-flip operations don't wait for a vertical retrace.
On further investigation - when you do a blit to the screen in WinG, the blit is queued for later and the call exits. The blit is executed at the next vertical retrace, so hopefully no tearing. If you do further blits to the screen (dirty rectangles) before that retrace, they are combined. If you do loads of full-screen blits before the vertical retrace, you are rendering frames that are never displayed.
This blit-to-screen in WinG is obviously similar to SDL_UpdateRect. SDL_UpdateRects is just an optimised way to manually combine some dirty rectangles (and to be sure, perhaps, that they are applied to the same frame). So maybe, on platforms where vertical retrace handling is possible, SDL is doing it similarly to WinG - no waiting, but no tearing either.
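For reference, the SDL 1.2 dirty-rectangle call looks like this (a sketch; the rectangle values are placeholders, and screen is the surface returned by SDL_SetVideoMode):

SDL_Rect dirty[2] = {
    { 32, 32, 64, 64 },  // old tile position
    { 96, 32, 64, 64 },  // new tile position
};
SDL_UpdateRects(screen, 2, dirty); // push only the changed regions to the window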
Well, I tested using a timer to trigger the frame updates, and the result (on Windows XP) is uncertain. I could get very slight and occasional tearing on my ancient laptop, but that may be no fault of SDL's - it could be that the "raster" is outrunning the blit. This is probably my fault for using SDL_Flip instead of a direct call to SDL_UpdateRect with a minimal dirty rectangle - though I was trying to get tearing in this case, to see if I could.
So I'm still uncertain, but it may be that windowed-mode SDL is as immune to tearing as it can be on those platforms that allow it. Results don't seem as bad as I imagined, even on my ancient laptop.
But - can anyone offer a definitive answer?
You can use the framerate control of SDL_gfx.
Looking at the library's docs, the flow of your application will be like this:
// initialization code
FPSmanager fpsManager;  // allocate the manager itself, not an uninitialized pointer
SDL_initFramerate(&fpsManager);
SDL_setFramerate(&fpsManager, 60 /* desired FPS */);

// in the render loop
SDL_framerateDelay(&fpsManager);
Also, you may look at the library's source code to create your own framerate control.
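If you do roll your own, the core is just a delay that tops each frame up to the target period (a sketch; the loop condition and render call are placeholders):

const Uint32 target_ms = 1000 / 60;        // ~16 ms per frame at 60 FPS
while (running)                            // placeholder loop condition
{
    Uint32 frame_start = SDL_GetTicks();
    render_frame();                        // placeholder for drawing + SDL_Flip
    Uint32 elapsed = SDL_GetTicks() - frame_start;
    if (elapsed < target_ms)
        SDL_Delay(target_ms - elapsed);    // sleep off the remainder of the frame
}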