Trouble with vsync using glut in OpenGL - c++

I'm struggling desperately to get Vsync to work in my OpenGL application. Here's the vital stats:
I'm using Windows, coding in C++ OpenGL and I'm using FreeGLUT for my OpenGL context (double buffering). I'm aware that for the swap buffer to wait for vertical sync in Windows you are required to call wglSwapIntervalEXT().
My code does call this (as you'll see below), yet I am still getting vertical tearing. The only way I've managed to stop it is by calling glFinish() which of course has a significant performance penalty associated with it.
The relevant parts of my main() function look like this:
//Initiating glut window
glutInit(&argc, argv);
glutInitDisplayMode (GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
glutInitWindowSize (initial_window_width, initial_window_height);
glutInitWindowPosition (100, 100);
int glut_window_hWnd = glutCreateWindow(window_title.c_str());
//Setting up swap intervals
PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT = NULL;
PFNWGLGETSWAPINTERVALEXTPROC wglGetSwapIntervalEXT = NULL;
if (WGLExtensionSupported("WGL_EXT_swap_control"))
{
// Extension is supported, init pointers.
wglSwapIntervalEXT = (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
// this is another function from WGL_EXT_swap_control extension
wglGetSwapIntervalEXT = (PFNWGLGETSWAPINTERVALEXTPROC)wglGetProcAddress("wglGetSwapIntervalEXT");
}
wglSwapIntervalEXT(1);
init ();
glutMainLoop(); ///Starting the glut loop
return(0);
I should point out that the extension check returns true, so WGL_EXT_swap_control is supported.
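For reference, the WGLExtensionSupported helper isn't shown above; a minimal sketch of what it might look like (an assumption on my part, relying on a current GL context and the WGL_EXT_extensions_string extension):

#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   // for the PFNWGL... typedefs
#include <cstring>

// Rough sketch: look for the extension name in the WGL extension string.
bool WGLExtensionSupported(const char *extension_name)
{
    // wglGetExtensionsStringEXT is itself an extension function.
    PFNWGLGETEXTENSIONSSTRINGEXTPROC wglGetExtensionsStringEXT =
        (PFNWGLGETEXTENSIONSSTRINGEXTPROC)wglGetProcAddress("wglGetExtensionsStringEXT");
    if (wglGetExtensionsStringEXT == NULL)
        return false;

    // Simple substring search; good enough for a check like this.
    return strstr(wglGetExtensionsStringEXT(), extension_name) != NULL;
}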

OK - between the great help I've received here and my own hours of research and messing around with it, I've discovered a few things, including a solution that works for me (in case others come across this problem).
Firstly, I was using FreeGLUT; I converted my code to GLFW and the result was the same, so this was NOT an API issue. Don't waste your time like I did!
In my program at least, using wglSwapIntervalEXT(1) DOES NOT stop vertical tearing, and this was what led to it being such a headache to solve.
With my NVIDIA driver set to VSYNC = ON I was still getting tearing (because this is equivalent to SwapInterval(1), which doesn't help) - but the setting was being applied correctly and the driver was doing what it should have been; I just didn't know it because I was still getting tearing.
So I set my NVIDIA driver to VSYNC = 'Application preference' and used wglSwapIntervalEXT(60) instead of the 1 I had always been using, and found that the call was in fact taking effect, because it gave me a refresh rate of about 1 Hz (one swap every 60 retraces of a 60 Hz display).
I don't know why wglSwapIntervalEXT(1) doesn't vsync my screen, but wglSwapIntervalEXT(2) has the desired effect, though obviously I'm now rendering every other frame, which is inefficient.
I found that with VSYNC disabled glFinish() DOES NOT help with tearing, but with it enabled it does (if anyone can explain why, that would be great).
So in summary, with wglSwapIntervalEXT(1) set and glFinish() in place I no longer have tearing, but I still don't understand why.
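As an illustration, a minimal GLUT display callback using that combination might look like this (a sketch; draw_scene() is a hypothetical placeholder for the actual rendering code):

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_scene();          // hypothetical: issue all the GL draw calls for this frame
    glutSwapBuffers();     // queue the vsync'd swap (wglSwapIntervalEXT(1) was set earlier)
    glFinish();            // block until the GPU has actually processed everything
    glutPostRedisplay();   // keep the animation loop going
}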
Here are some performance stats from my app (deliberately loaded so the FPS is below 60):
wglSwapIntervalEXT(0) = Tearing = 58 FPS
wglSwapIntervalEXT(1) = Tearing = 58 FPS
wglSwapIntervalEXT(2) = No tearing = 30 FPS
wglSwapIntervalEXT(1) + glFinish = No Tearing = 52 FPS
I hope this helps someone in the future. Thanks for all your help.

I am also having a problem with vsync and FreeGLUT.
Previously I used GLUT, and I was able to selectively enable vsync for multiple GLUT windows.
Now with FreeGLUT, it seems that wglSwapIntervalEXT() has no effect.
What does have an effect is the global vsync option in the NVIDIA control panel. If I enable vsync there, I have vsync in both of my FreeGLUT windows, and if I disable it, I don't. It does not matter what I set specifically for my application (in the NVIDIA control panel).
This also confirms what I observe:
if(wglSwapIntervalEXT(0))
printf("VSYNC set\n");
int swap=wglGetSwapIntervalEXT();
printf("Control window vsync: %i\n",swap);
The value of swap is always the value that is set in the NVIDIA control panel; it does not matter what I try to set with wglSwapIntervalEXT().
Otherwise, here you can read what glFinish() is good for:
http://www.opengl.org/wiki/Swap_Interval
I use it because I need to know when my monitor is updated, so I can synchronously execute tasks after that (for example, capturing something with a camera).
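The pattern for that is roughly the following (a sketch; draw_frame() and trigger_camera_capture() are hypothetical placeholders):

draw_frame();               // hypothetical: issue the GL commands for this frame
glutSwapBuffers();          // queue the buffer swap
glFinish();                 // block until the commands (and, on most drivers, the swap) are done;
                            // with vsync enabled this returns roughly at the vertical retrace
trigger_camera_capture();   // hypothetical: start the task that must follow the screen update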

As described in the question linked here, no "true" vsync exists. Using glFinish() is the correct approach. This will cause your card to finish processing everything it has been sent before continuing and rendering the next frame.
You should keep track of your FPS and the time it takes to render a frame; you might find glFinish() is simply exposing another bottleneck in your code.
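A minimal way to measure both, as a sketch using std::chrono (render_frame() is a hypothetical placeholder for the actual drawing code):

#include <chrono>
#include <cstdio>

// Inside the render/display callback:
auto t0 = std::chrono::steady_clock::now();
render_frame();                                   // hypothetical: draw calls only
auto t1 = std::chrono::steady_clock::now();       // time spent issuing the frame
glutSwapBuffers();
glFinish();
auto t2 = std::chrono::steady_clock::now();       // time including the swap / vsync wait

double render_ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
double frame_ms  = std::chrono::duration<double, std::milli>(t2 - t0).count();
std::printf("render %.2f ms, frame %.2f ms (%.1f FPS)\n",
            render_ms, frame_ms, 1000.0 / frame_ms);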

Related

How to prevent screen tearing with OpenGL + GLFW?

I am working on a graphical application that supports multiple operating systems and graphical back ends. The window is created with GLFW and the graphics API is chosen at runtime. When running the program on Windows and using OpenGL, VSync seems to be broken. The frame rate is locked at 60 FPS, however, screen tearing artifacts appear. Following the GLFW documentation, glfwSwapInterval(0); should unlock the frame rate from the default of using VSync. That works as expected. Using glfwSwapInterval(1); should lock the frame rate to match the monitor's refresh rate. Not calling glfwSwapInterval() at all should default to using VSync. While the frame rate is correctly locked/unlocked using these calls, I experienced extremely interesting behaviours.
When glfwSwapInterval() isn't called at all, VSync is set as default. But the wait for the next frame happens at the first draw call! One would think that the delay for the next frame would happen at glfwSwapBuffers(). No screen artifacts are visible whatsoever.
When calling glfwSwapInterval(1);, Vsync is set and the delay for the next frame happens at glfwSwapBuffers()! That's great, however, when explicitly setting VSync, screen tearing artifacts appear.
Right now, not calling glfwSwapInterval() for using VSync seems to be a hacky solution, but:
The user wouldn't be able to disable VSync without window reconstruction,
The profiler identifies the first draw call taking way too long, as VSync wait time is somehow happening there.
I have tried fiddling with GPU driver settings and testing the code on multiple machines. The problem is persistent across machines when using Windows and OpenGL.
If anyone can make any sense of this, please share, or if I am misunderstanding something, I would greatly appreciate some pointers in the right direction.
EDIT:
Some other detail: the tearing happens at a specific horizontal line. The rest of the frame seems to work properly.
After doing some more tests, it seems that everything is working as intended on integrated graphics. Correct me if I am wrong, but it looks like it is a graphics driver issue.
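For reference, the call order that matters here (a sketch, not the project's actual code; the clear stands in for the real rendering): glfwSwapInterval() only affects the OpenGL context that is current on the calling thread, so it has to come after glfwMakeContextCurrent().

#include <GLFW/glfw3.h>

int main()
{
    glfwInit();
    GLFWwindow *window = glfwCreateWindow(1280, 720, "demo", NULL, NULL);
    glfwMakeContextCurrent(window);   // the swap interval applies to the current context
    glfwSwapInterval(1);              // explicitly request VSync for that context

    while (!glfwWindowShouldClose(window))
    {
        glClear(GL_COLOR_BUFFER_BIT); // stand-in for the real rendering
        glfwSwapBuffers(window);      // should block until the next vertical retrace
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}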

Qt: vsync - missing rendered frames

For a scientific task, flickering areas with a stable frequency (max. 60 Hz) shall be displayed on the screen. I tried to achieve a stable stimulus visualization using Qt 5.6.
According to this blog entry and many other online recommendations, I implemented three different approaches: inheriting from the QWindow class, the QOpenGLWindow class and the QRasterWindow class. I wanted to get the advantage of vsync and avoid the use of QTimer.
The flickering area can be displayed. A stable time period between frames of 16 up to 17 ms has also been measured.
But every few seconds some missed frames are spotted. It can be seen very clearly that there is no stable visualization of the stimulus. The same effect occurs with all three approaches.
Have I implemented my code properly, or do better solutions exist? If the code is adequate for its purpose, do I have to assume that it is a hardware problem? Can it really be that difficult to display a simple flickering area?
Thank you very much for helping me!
As an example, you can see my code for the QWindow class here:
Window::Window(QWindow *parent)
    : m_context(0)
    , m_paintDevice(0)
    , m_bFlickerState(true)
{
    setSurfaceType(QSurface::OpenGLSurface);

    QSurfaceFormat format;
    format.setDepthBufferSize(24);
    format.setStencilBufferSize(8);
    format.setSwapInterval(1);

    this->setFormat(format);
    m_context.setFormat(format);
    m_context.create();
}
The render() function, which is called by the overridden event functions, is:
void Window::render()
{
    // calculating the exposed time between frames
    m_t1 = QTime::currentTime();
    int curDelta = m_t0.msecsTo(m_t1);
    m_t0 = m_t1;
    qDebug() << curDelta;

    m_context.makeCurrent(this);

    if (!m_paintDevice)
        m_paintDevice = new QOpenGLPaintDevice;
    if (m_paintDevice->size() != size())
        m_paintDevice->setSize(size());

    QPainter p(m_paintDevice);
    // draw using QPainter
    if (m_bFlickerState) {
        p.setBrush(Qt::white);
        p.drawRect(0, 0, this->width(), this->height());
    }
    p.end();

    m_bFlickerState = !m_bFlickerState;
    m_context.swapBuffers(this);

    // animate continuously: schedule an update
    QCoreApplication::postEvent(this, new QEvent(QEvent::UpdateRequest));
}
I got help from some experts on the Qt forum. You can follow the whole discussion here. In the end, this was the result:
"
V-sync is hard ;) Basically it's fighting with the inherent noisiness of the system. If the output shows 16-17 ms then that's the problem. 17 ms is too much. That's the skipping you see.
Couple of things to reduce that noise:
Don't do I/O in the render loop! qDebug() is I/O and it can block on all kinds of buffering shenanigans.
Testing V-sync under a debugger is useless. Debugging introduces all kinds of noise into your app. You should be testing v-sync in Release mode without debugger attached.
Try not to use signals/slots/events if you can help it. They can be noisy, i.e. call update() manually at the end of paintGL(). You skip some overhead this way (not much, but every bit counts).
If all you need is a flickering screen, avoid QPainter. It's not exactly slow, but drop into its begin() method and see how much it actually does. OpenGL has fast, dedicated facilities to fill the buffer with a color. You might as well use it (see the sketch after this quote).
Not directly related, but it will make your code cleaner:
Use QElapsedTimer instead of manually calculating time intervals. Why re-invent the wheel.
Applying these bits I was able to remove the skipping from your example. Note that the skipping will still occur in some circumstances, e.g. when you move/resize the window or when the OS/other apps are busy doing something. You have no control over that.
"

Is it possible to vsync SDL_SetVideoMode?

I am trying to change the size of the window of my app with:
mysurface = SDL_SetVideoMode(width, height, 32, SDL_OPENGL);
Although I am using vsync swapbuffers (in driver xorg-video-ati), I can see flickering when the window size changes (I guess one or more black frames):
void Video::draw()
{
    if (videoChanged) {
        mysurface = SDL_SetVideoMode(width, height, 32, SDL_OPENGL);
        scene->init(); // Update glFrustum & glViewport
    }
    scene->draw();
    SDL_GL_SwapBuffers();
}
So please, does someone know if...
SDL_SetVideoMode is not vsync'ed the way SDL_GL_SwapBuffers() is?
Or is it destroying the window and creating another, and the buffer is black in the meantime?
Does someone know working code to do this? Maybe in FreeGLUT?
In SDL-1, when you're using a windowed video mode, the window is completely torn down and a new one created when changing the video mode. Of course there's some undefined data in between, which is perceived as flicker. This issue has been addressed in SDL-2. Either use that or use a different OpenGL framework that resizes windows without going through a full window recreation.
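For comparison, a rough sketch of the SDL2 equivalent, where the window survives a resize and the swap interval is set once on the GL context (width, height, newWidth and newHeight are placeholders):

#include <SDL.h>
#include <SDL_opengl.h>

SDL_Init(SDL_INIT_VIDEO);
SDL_Window *window = SDL_CreateWindow("app",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        width, height, SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE);
SDL_GLContext context = SDL_GL_CreateContext(window);
SDL_GL_SetSwapInterval(1);                  // request vsync'd swaps

// Later, when the size needs to change: no window recreation, no black frames.
SDL_SetWindowSize(window, newWidth, newHeight);
glViewport(0, 0, newWidth, newHeight);      // just update the viewport/projection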
If you're using a FULLSCREEN video mode then something different happens additionally:
A change of the video mode actually changes the video signal timings going from the graphics card to the display. After such a change the display has to find synchronization with the new settings and that takes some time. This of course comes with some flickering as the display may try to display a frame of different timings with the old settings until it detects that those no longer match. It's a physical effect and there's nothing you can do in software to fix this, other than not changing the video mode at all.

Compiz and OpenGL window

I've written an OpenGL application in Linux through GLX. It uses double buffering with glXSwapBuffers and Sync to VBlank set via NVIDIA X Server Settings. I'm using Compiz and have smooth moving of windows and no tearing (Sync to VBlank enabled in Compiz settings).
But when I
Try to move or resize the OpenGL window or
Move other windows through the area occupied by the OpenGL window
the system stutters and freezes for 3-4 seconds. Moving other windows outside the area occupied by the OpenGL window is as smooth as always.
Moreover, the problem only arises if the OpenGL application is in the loop of producing frames of animation and therefore swapping the buffers. If the content is static and the application is not swapping the buffers, there are no problems; moving the various windows is smooth.
Could it be a synchronization issue between my application and Compiz?
I don't know if it's still in the same shape as a few years ago, but…
Your description matches a Compiz SNAFU very well. Every window resize triggers the recreation of a texture that will receive the window contents. Texture creation is a costly operation and hence should be avoided. Unfortunately the Compiz developers don't seem to be the brightest, because they did not realize there's an obvious solution to this problem: windows in X11 can be reparented without much cost (every window manager does this several times; it's called stacking), and Compiz is a window manager.
So why doesn't Compiz keep a desktop-sized window around, reparent the windows that are about to be resized into it, take its constant-sized window texture from there, and reparent the window back into its decoration frame after the resize operation finishes?
I don't know why this is the case. Anyway, some things Compiz does are not very smart.
If you want to fix this, well: Compiz is open source and I just described what to do.

How to sync page-flips with vertical retrace in a windowed SDL application?

I'm currently writing a game of immense sophistication and cunning, that will fill you with awe and won- oh, OK, it's the 15 puzzle, and I'm just familiarising myself with SDL.
I'm running in windowed mode, and using SDL_Flip as the general-case page update, since it maps automatically to an SDL_UpdateRect of the full window in windowed mode. Not the optimum approach, but given that this is just the 15 puzzle...
Anyway, the tile moves are happening at ludicrous speed. IOW, SDL_Flip in windowed mode doesn't include any synchronisation with vertical retraces. I'm working in Windows XP ATM, but I assume this is correct behaviour for SDL and will occur on other platforms too.
Switching to using SDL_UpdateRect obviously won't change anything. Presumably, I need to implement the delay logic in my own code. But a simple clock-based timer could result in updates occurring when the window is half-drawn, causing visible distortions (I forget the technical name).
EDIT This problem is known as "tearing".
So - in a windowed mode game in SDL, how do I synchronise my page-flips with the vertical retrace?
EDIT I have seen several claims, while searching for a solution, that it is impossible to synchronise page-flips to the vertical retrace in a windowed application. On Windows, at least, this is simply false - I have written games (by which I mean things on a similar level to the 15-puzzle) that do this. I once wasted some time playing with Dark Basic and the Dark GDK - both DirectX-based and both synchronising page-flips to the vertical retrace in windowed mode.
Major Edit
It turns out I should have spent more time looking before asking. From the SDL FAQ...
http://sdl.beuc.net/sdl.wiki/FAQ_Double_Buffering_is_Tearing
That seems to imply quite strongly that synchronising with the vertical retrace isn't supported in SDL windowed-mode apps.
But...
The basic technique is possible on Windows, and I'm beginning to think SDL does it, in a sense. Just not quite certain yet.
On Windows, as I said before, synchronising page-flips to vertical syncs in windowed mode has been possible all the way back to the 16-bit days using WinG. It turns out that that's not exactly wrong, but misleading. I dug out some old source code using WinG, and there was a timer triggering the page-blits. WinG will run at ludicrous speed, just as SDL surprised me by doing - the blit-to-screen page-flip operations don't wait for a vertical retrace.
On further investigation - when you do a blit to the screen in WinG, the blit is queued for later and the call exits. The blit is executed at the next vertical retrace, so hopefully no tearing. If you do further blits to the screen (dirty rectangles) before that retrace, they are combined. If you do loads of full-screen blits before the vertical retrace, you are rendering frames that are never displayed.
This blit-to-screen in WinG is obviously similar to the SDL_UpdateRect. SDL_UpdateRects is just an optimised way to manually combine some dirty rectangles (and be sure, perhaps, they are applied to the same frame). So maybe (on platforms where vertical retrace stuff is possible) it is being done in SDL, similarly to in WinG - no waiting, but no tearing either.
Well, I tested using a timer to trigger the frame updates, and the result (on Windows XP) is uncertain. I could get very slight and occasional tearing on my ancient laptop, but that may be no fault of SDL's - it could be that the "raster" is outrunning the blit. This is probably my fault for using SDL_Flip instead of a direct call to SDL_UpdateRect with a minimal dirty rectangle - though I was trying to get tearing in this case, to see if I could.
So I'm still uncertain, but it may be that windowed-mode SDL is as immune to tearing as it can be on those platforms that allow it. Results don't seem as bad as I imagined, even on my ancient laptop.
But - can anyone offer a definitive answer?
You can use the framerate control of SDL_gfx.
Looking at the docs of the library, the flow of your application will be like this:
// initialization code
FPSmanager fpsManager;                       // a real object, not an uninitialized pointer
SDL_initFramerate(&fpsManager);
SDL_setFramerate(&fpsManager, 60 /* desired FPS */);
// in the render loop
SDL_framerateDelay(&fpsManager);
Also, you may look at the source code to create your own framerate control.
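Putting it together with the windowed SDL-1 setup from the question might look roughly like this (a sketch; running, screen, handle_events() and draw_board() are hypothetical placeholders):

#include <SDL.h>
#include <SDL_framerate.h>   // from SDL_gfx

FPSmanager fps;
SDL_initFramerate(&fps);
SDL_setFramerate(&fps, 60);          // cap the loop near the display refresh rate

while (running)
{
    handle_events();                 // hypothetical: poll SDL events, update the puzzle state
    draw_board(screen);              // hypothetical: blit the 15-puzzle tiles
    SDL_Flip(screen);                // push the frame (full-window update in windowed mode)
    SDL_framerateDelay(&fps);        // sleep so tile moves no longer run at ludicrous speed
}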