I'm writing a game with OpenGL / GLFW in C++.
My game always runs at 60 FPS without any screen tearing. After doing some research, it seems that the glfwSwapInterval() function should be able to enable/disable V-sync or the 60 FPS cap.
However, no matter the value I pass to the function, the framerate stays locked at 60 and there is no tearing whatsoever. I have also checked the compositor settings on Linux and the NVIDIA panel, and they have no effect.
I assume this is a common thing; is there a way to get around this FPS cap?
Is there a way to remove the 60 FPS cap in GLFW?
The easiest way is to use single buffering instead of double buffering. Since single buffering always uses the same buffer, there is no buffer swap and no "vsync".
Use glfwWindowHint to disable double buffering:
glfwWindowHint(GLFW_DOUBLEBUFFER, GLFW_FALSE);
GLFWwindow *wnd = glfwCreateWindow(w, h, "OGL window", nullptr, nullptr);
Note, when you use single buffering, you have to explicitly force execution of the GL commands with glFlush, instead of the buffer swap (glfwSwapBuffers).
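For illustration, a minimal single-buffered render loop (a sketch, assuming the window was created as above) could look like this:

glfwMakeContextCurrent(wnd);
while (!glfwWindowShouldClose(wnd))
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // ... draw the scene ...

    glFlush();          // force execution of the GL commands instead of glfwSwapBuffers(wnd)
    glfwPollEvents();
}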
Another possibility is to set the number of screen updates to wait, from the time glfwSwapBuffers is called until the buffers are actually swapped, to 0. This can be done with glfwSwapInterval, after making the OpenGL context current (glfwMakeContextCurrent):
glfwMakeContextCurrent(wnd);
glfwSwapInterval(0);
But note, whether this solution works or not, may depend on the hardware and the driver.
Related
I am using an FBO+RBO, and instead of regular double buffering on the default framebuffer, I am drawing to the RBO and then blitting directly to the GL_FRONT buffer of the default framebuffer (0) in a single-buffered OpenGL context.
It works fine and I don't get any flickering, but if the scene gets a bit complex, I experience a HUGE drop in FPS, so weird that I knew something had to be wrong. And I don't mean from 1/60 to 1/30 because of a skipped sync, I mean a sudden 90% FPS drop.
I tried a glFlush() after the blit - no difference, then I tried a glFinish() after the blit, and I had a 10x fps boost.
So I went back to regular double buffering on the default framebuffer with swapbuffers(), and the FPS got a boost as well, just as when using glFinish().
I cannot figure out what is happening. Why does glFinish() make so much of a difference when it shouldn't? And is it OK to use an RBO and blit directly to the front buffer, instead of using a swapbuffers call in a double-buffered context? I know I'm missing vsync, but the composite manager will sync anyway (in fact I'm not seeing any tearing); it is just as if the monitor is missing 9 out of 10 frames.
And just out of curiosity, does a native swapbuffers() use glFinish() on either windows or linux?
I believe it is a sync-related issue.
When rendering directly to the RBO and blitting to the front buffer, there is simply no sync whatsoever. Thus, on complex scenes, the GPU command queue fills quite quickly, then the CPU-side driver queue fills quickly as well, until the driver forces a CPU sync during some OpenGL command. At that point the CPU thread is halted.
What I mean is that, without any form of sync, complex renderings (renderings for which one or more OpenGL commands get queued) will always cause the CPU thread to be halted at some point, since as the queues fill the CPU keeps issuing more and more commands.
In order to get smooth (more constant) user interaction, a sync is needed (either with a platform-specific swapbuffers() or a glFinish()), so as to stop the CPU from making things worse by issuing more and more commands (which in turn would bring the CPU thread to a stop later).
reference: OpenGL Synchronization
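For illustration only (this is a sketch, not part of the original answer; w and h stand for the framebuffer size), the sync could be placed right after the blit, either with a blunt glFinish or with a fence object:

glBlitFramebuffer(0, 0, w, h, 0, 0, w, h, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glFlush();

// blunt approach: block until the GPU has finished all submitted work
glFinish();

// finer-grained alternative: wait on a fence covering this frame's commands
GLsync frameFence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glClientWaitSync(frameFence, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000000);  // wait up to 1 s
glDeleteSync(frameFence);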
There are several separate issues here, which are also a little bit connected.
1) Re-implementing double buffering yourself, while on spec the same thing, is not the same thing to the driver. Drivers are highly optimized for the common case. For example, many chips have distinct 2d and 3d units. The swap in swapBuffers is often handled by the 2d unit. Blitting a buffer is probably still done with the 3d unit.
2) glFlush (and Finish) are ignored by many drivers. Flush is a relic of client server rendering. Finish was intended for profiling. But it got abused to work around driver bugs. So now drivers often ignore it to improve the performance of legacy code that used Finish as a workaround.
3) Just don't do single buffered. There is no performance benefit and you are working off the "good" path of the driver. Window managers are super optimized for double buffered opengl.
4) What you are seeing looks a lot like you are simply leaking resources. Do you allocate buffers without freeing them? A quick and dirty way to check is if any glGen* functions return ever increasing ids.
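As a rough, hypothetical illustration of that check, log the IDs you get back each frame; steadily increasing values suggest objects are generated without being deleted:

GLuint vbo = 0;
glGenBuffers(1, &vbo);
printf("new buffer id: %u\n", vbo);   // ids that keep growing every frame indicate a leak
// ... fill and use the buffer ...
glDeleteBuffers(1, &vbo);             // pair every glGen* call with the matching glDelete*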
I'm trying to do vertical synced renders so that exactly one render is done per vertical sync, without skipping or repeating any frames. I would need this to work under Windows 7 and (in the future) Windows 8.
It would basically consist of drawing a sequence of QUADS that would fit the screen so that a pixel from the original images matches 1:1 a pixel on the screen. The rendering part is not a problem, either with OpenGL or DirectX. The problem is the correct syncing.
I previously tried using OpenGL, with the WGL_EXT_swap_control extension, by drawing and then calling
SwapBuffers(g_hDC);
glFinish();
I tried all combinations and permutations of these two instructions, along with glFlush(), and it was not reliable.
I then tried with Direct3D 10, by drawing and then calling
g_pSwapChain->Present(1, 0);
pOutput->WaitForVBlank();
where g_pSwapChain is a IDXGISwapChain* and pOutput is the IDXGIOutput* associated to that SwapChain.
Both versions, OpenGL and Direct3D, result in the same behavior: the first sequence of, say, 60 frames doesn't last as long as it should (instead of about 1000 ms at 60 Hz, it lasts something like 1030 or 1050 ms); the following ones seem to work fine (about 1000.40 ms), but every now and then it seems to skip a frame. I do the measuring with QueryPerformanceCounter.
On Direct3D, trying a loop of just the WaitForVBlank, the duration of 1000 iterations is consistently 1000.40 with little variation.
So the trouble here is not knowing exactly when each of the called functions returns, and whether the swap is done during the vertical sync (not earlier, to avoid tearing).
Ideally (if I'm not mistaken), what I want is to perform one render, wait until the sync starts, swap during the sync, then wait until the sync is done. How can that be done with OpenGL or DirectX?
Edit:
A test loop of just WaitForVBlank, run 60 times, consistently takes from 1000.30 ms to 1000.50 ms.
The same loop with Present(1, 0) before WaitForVBlank, with nothing else and no rendering, takes the same time, but sometimes it fails and takes 1017 ms, as if a frame had been repeated. There's no rendering, so there's something wrong here.
I have the same problem in DX11. I want to guarantee that my frame rendering code takes an exact multiple of the monitor's refresh rate, to avoid multi-buffering latency.
Just calling pSwapChain->Present(1, 0) is not sufficient. That will prevent tearing in fullscreen mode, but it does not wait for the vblank to happen. The Present call is asynchronous and returns right away if there are frame buffers remaining to be filled. So if your render code produces a new frame very quickly (say 10 ms to render everything) and the user has set the driver's "Maximum pre-rendered frames" to 4, then you will be rendering four frames ahead of what the user sees. This means 4 x 16.7 = roughly 67 ms of latency between mouse action and screen response, which is unacceptable. Note that the driver's setting wins: even if your app asked for pOutput->SetMaximumFrameLatency(1), you'll get 4 frames regardless. So the only way to guarantee no mouse lag regardless of the driver setting is for your render loop to voluntarily wait until the next vertical refresh interval, so that you never use those extra frame buffers.
IDXGIOutput::WaitForVBlank() is intended for this purpose. But it does not work! When I call the following
<render something in ~10ms>
pSwapChain->Present(1, 0);
pOutput->WaitForVBlank();
and I measure the time it takes for the WaitForVBlank() call to return, I am seeing it alternate between roughly 6 ms and 22 ms.
How can that happen? How could WaitForVBlank() ever take longer than 16.7 ms to complete? In DX9 we solved this problem using getRasterState() to implement our own, much more accurate version of WaitForVBlank. But that call was deprecated in DX11.
Is there any other way to guarantee that my frame is exactly aligned with the monitor's refresh rate? Is there another way to spy on the current scanline the way getRasterState used to allow?
I previously tried using OpenGL, with the WGL_EXT_swap_control extension, by drawing and then calling
SwapBuffers(g_hDC);
glFinish();
That glFinish() or glFlush() is superfluous. SwapBuffers implies a glFinish.
Could it be that in your graphics driver settings you have set "force V-Blank / V-Sync off"?
We currently use DX9 and want to switch to DX11. We use GetRasterState() to manually sync to the screen; that goes away in DX11, but I've found that creating a DirectDraw7 device doesn't seem to disrupt DX11. So just add this to your code and you should be able to get the scanline position.
#include <ddraw.h>   // link against ddraw.lib and dxguid.lib
IDirectDraw7 *ddraw = nullptr;
DirectDrawCreateEx(NULL, reinterpret_cast<LPVOID*>(&ddraw), IID_IDirectDraw7, NULL);
DWORD scanline = 0;
ddraw->GetScanLine(&scanline);   // current scanline of the primary display
On Windows 8.1 and Windows 10, you can make use of the DXGI 1.3 flag DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT. See MSDN. The sample there is for Windows 8 Store apps, but it should be adaptable to classic Win32 window swap chains as well.
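A rough sketch of that pattern, assuming an existing D3D11 device, window handle and DXGI 1.3 factory (factory2, device and hwnd are placeholders; error handling omitted):

DXGI_SWAP_CHAIN_DESC1 desc = {};
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount = 2;
desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;
desc.Flags = DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT;

IDXGISwapChain1 *swapChain1 = nullptr;
factory2->CreateSwapChainForHwnd(device, hwnd, &desc, nullptr, nullptr, &swapChain1);

IDXGISwapChain2 *swapChain2 = nullptr;
swapChain1->QueryInterface(IID_PPV_ARGS(&swapChain2));
swapChain2->SetMaximumFrameLatency(1);
HANDLE frameWaitable = swapChain2->GetFrameLatencyWaitableObject();

// Per frame: block until the swap chain is ready, so the CPU never queues extra frames.
WaitForSingleObjectEx(frameWaitable, 1000, TRUE);
// ... render ...
swapChain2->Present(1, 0);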
You may find this video useful as well.
When creating a Direct3D device, set the PresentationInterval member of the D3DPRESENT_PARAMETERS structure to D3DPRESENT_INTERVAL_DEFAULT.
If you run in kernel mode or ring 0, you can attempt to read bit 3 from the VGA input status register (03BAh, 03DAh). The information is quite old, and although it was hinted here that the bit might have changed location or been obsoleted in later versions of Windows 2000 and up, I actually doubt this. The second link has some very old source code that attempts to expose the vblank signal for old Windows versions. It no longer runs, but in theory rebuilding it with the latest Windows SDK should fix this.
The difficult part is building and registering a device driver that exposes this information reliably and then fetching it from your application.
As today's cards seem to keep a list of render commands and flush only on a call to glFlush or glFinish, is double buffering really needed any more? An OpenGL game I am developing on Linux (ATI Mobility Radeon card) with SDL/OpenGL actually flickers less when SDL_GL_SwapBuffers() is replaced by glFinish() and SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 0) is added to the init code. Is this particular to my card, or are such things likely on all cards?
EDIT: I've discovered that the cause of this is KWin. It appears that, as datenwolf said, compositing without sync was the cause. When I switched off KWin compositing, the game works fine without ANY source code patches.
Double buffering and glFinish are two very different things.
glFinish blocks the program, until all drawing operations are completed.
Double buffering is used to hide the rendering process from the user. Without double buffering, each and every single drawing operation would become visible immediately, assuming that the display refresh frequency is infinitely high. In practice you will get some display artifacts, like parts of the scene visible in one state, the rest not visible or in some other state, the picture could be incomplete, etc. Double buffering avoids this by first rendering into a back buffer, and only after the rendering has been finished swapping this back with the front buffer, that gets sent to the display device.
Nowadays compositing window management has become prevalent: Windows has Aero, MacOS X has Quartz Extreme, and on Linux at least Unity and the GNOME 3 shell use compositing if available. The point is: compositing technically creates double buffering: windows draw to offscreen buffers, from which the final screen is composited. So if you're running on a machine with compositing, double buffering performed in your program is somewhat redundant, and all it would take is some kind of synchronization mechanism to tell the compositor when the next frame is ready. MacOS X has this. X11 still lacks a proper synchronization scheme; see this post on the mailing list: http://lists.freedesktop.org/archives/xorg/2004-May/000607.html
TL;DR: Double buffering and glFinish are different things, and you need double buffering (of some sort) to make things look good.
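For reference, a minimal sketch with SDL 1.2 as used in the question (window size and bit depth are placeholders): request double buffering before creating the GL window, then swap once per frame.

SDL_Init(SDL_INIT_VIDEO);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);   // ask for a double-buffered GL context
SDL_SetVideoMode(800, 600, 32, SDL_OPENGL);

// per frame:
// ... draw the scene with OpenGL ...
SDL_GL_SwapBuffers();                          // present the finished back buffer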
I would expect that it has more to do with what you're rendering or your hardware than anything that could be generalized to something not on your machine. So no: don't try to do this.
Oh, and don't forget multisampling. Many implementations only multisample the back buffer; the front buffer is not multisampled. Doing a swap will downsample from the multisampled buffer.
How can I use the graphics card instead of the CPU when I call SDL functions?
You cannot; SDL is only a software renderer. However, you can use SDL to create a window, catch events, and then do your drawing with OpenGL.
If your program is eating 100 % CPU, make sure that you limit the FPS correctly (by adding SDL_Delay to the main loop).
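A rough sketch of such FPS limiting (the 60 FPS target is an assumption):

const Uint32 frameTime = 1000 / 60;            // ~16 ms per frame
Uint32 start = SDL_GetTicks();

// ... handle events and draw ...

Uint32 elapsed = SDL_GetTicks() - start;
if (elapsed < frameTime)
    SDL_Delay(frameTime - elapsed);            // sleep away the rest of the frame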
If possible, you can use SDL 2.0.
It supports full 3D hardware acceleration through functions similar to SDL 1.2's.
http://wiki.libsdl.org/MigrationGuide
In SDL 2.0, create your renderer with the SDL_RENDERER_ACCELERATED and SDL_RENDERER_PRESENTVSYNC flags. The first one requests hardware acceleration when possible, and the second limits your program to at most 60 FPS (depending on the monitor's refresh rate), thus freeing the CPU from continuous work.
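A minimal sketch of that setup (window title and size are placeholders):

SDL_Init(SDL_INIT_VIDEO);
SDL_Window *window = SDL_CreateWindow("Game", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 800, 600, SDL_WINDOW_SHOWN);
SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);

// per frame:
SDL_RenderClear(renderer);
// ... SDL_RenderCopy / drawing calls ...
SDL_RenderPresent(renderer);   // blocks on vsync because of SDL_RENDERER_PRESENTVSYNC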
I usually create a pixel format using wglChoosePixelFormatARB() with these arguments (among others):
WGL_DOUBLE_BUFFER_ARB = GL_TRUE
WGL_SAMPLE_BUFFERS_ARB = GL_TRUE
WGL_SAMPLES_ARB = 4
i.e. double buffering on and 4x multisampling. This works just fine.
But when I try to turn off the double buffering:
WGL_DOUBLE_BUFFER_ARB = GL_FALSE
WGL_SAMPLE_BUFFERS_ARB = GL_TRUE
WGL_SAMPLES_ARB = 4
The call to wglChoosePixelFormatARB() fails (or rather indicates it didn't create anything)
When I effectively turn multisampling off:
WGL_DOUBLE_BUFFER_ARB = GL_FALSE
WGL_SAMPLE_BUFFERS_ARB = GL_TRUE
WGL_SAMPLES_ARB = 1
It works fine again.
Is there something inherent that prevents a non-double-buffered pixel format from working with multisampling?
The reason I'm turning double buffering off is to achieve an unconstrained frame rate. With double buffering, the frame rate I get is only up to 60 FPS (this laptop's LCD runs at 60 Hz), but with double buffering off I can get up to 1500 FPS. Is there a way to achieve this with double buffering on?
In theory, drawing in a single-buffer mode means that you're directly modifying what is being presented to the screen (aka the front buffer).
Since that memory is in a specific format already, you don't get to choose another one.
(I say "in theory" because in practice the platform does as it pleases. Aero, for example, does not allow access to the front buffer.)
Moreover, when doing multisampling, the step that converts the X samples/pixel to 1 pixel for drawing is when the back-buffer is copied to the front buffer (what is called the resolve step). In single buffer mode, there is no such step.
As to your 60 FPS locking, you might want to look at WGL_EXT_swap_control. The issue here is that you don't generally want to update what is being shown on screen while the screen is refreshing the data; it creates tearing. So by default, the swap only happens during the screen's vertical sync (aka vsync), and you end up locked to the refresh rate of the screen.
If you don't mind your display showing parts of different frames, you can turn it off.
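A sketch of turning it off with WGL_EXT_swap_control (assuming the extension is available and a context is current):

// Query the extension entry point and set the swap interval to 0 (no vsync).
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);
PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
    (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
if (wglSwapIntervalEXT)
    wglSwapIntervalEXT(0);   // 0 = no vsync, 1 = wait for one vertical retrace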
For completeness, there is an alternative mode called triple buffering, which essentially has the GPU ping-pong rendering between two back buffers while the front buffer is shown. It is up to the GPU to pick the last finished back buffer when it comes time to change what shows on screen (vsync). Sadly, I am not aware of a WGL method to ask for triple buffering.