OpenGL unable to sync "player" FPS to monitor refresh rate

I am trying to write a video player that will play at EXACTLY the same FPS as the monitor refresh rate (let's say it is 60Hz).
I am writing C++ (VS2010) on Windows and using OpenGL.
I have a very strong PC; with no sync set I can reach 500 FPS.
This is the relevant code:
glfwSwapInterval(1);
while (1)
{
    glBindTexture(GL_TEXTURE_2D, Texture[frameIndexInArray]);
    glBegin(GL_POLYGON);
    glNormal3f(0.0f, 0.0f, 1.0f);
    ...
    glVertex3f(-1.0f, 1.0f, 0.0f);
    glEnd();
    glFinish();
    glfwSwapBuffers(window);
    glfwPollEvents();
    ...
}
The vertical sync option in the graphics driver is set to "on",
and I have a grabber that records my output via DP cable. (I know for a fact that it works fine.)
My problem is that my player gets out of sync once every few hundred frames;
the output is: frame(n-1), frame(n), frame(n), frame(n+1) ... (double frame)
or it can also be: frame(n-1), frame(n), frame(n), frame(n+2) ... (double and skip frame)
I tried glfwSwapInterval(0) with vsync in the graphics driver set to "application settings", I tried dropping glFinish(), and I even tried giving the thread high priority with SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
Is it possible to get exactly 60 FPS? And if so, how? I could use any advice you have, because I have literally tried everything I know.

If for some reason you are ever late on a buffer swap, you will wind up drawing 1 frame for at least two monitor refreshes while VSYNC is enabled.
If your driver supports adaptive VSYNC (most NV and AMD drivers do), I would suggest trying that first. That way it will never draw faster than 60 FPS, but if you are slightly late it is not going to penalize you. Check for this WGL extension: WGL_EXT_swap_control_tear and if it is supported, you can pass -1 to glfwSwapInterval (...). The extension does not add any new functions, it just gives meaning to negative values for the swap interval.
GLFW may be too stupid to accept a negative value (I have never tried it), and you might have to interface directly with the platform-specific function: wglSwapIntervalEXT (...).
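A minimal sketch of that check (untested; glfwExtensionSupported also reports platform WGL/GLX extensions for the current context, and the manual wglSwapIntervalEXT fallback is Windows-only and needs windows.h):
if (glfwExtensionSupported("WGL_EXT_swap_control_tear") ||
    glfwExtensionSupported("GLX_EXT_swap_control_tear"))
{
    glfwSwapInterval(-1);   // adaptive vsync: tear instead of stalling when a swap is late
}
else
{
    glfwSwapInterval(1);    // plain vsync fallback
}
// If GLFW rejects the negative value, call the WGL function directly:
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);
PFNWGLSWAPINTERVALEXTPROC pwglSwapIntervalEXT =
    (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
if (pwglSwapIntervalEXT)
    pwglSwapIntervalEXT(-1);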

Related

Is there a way to remove 60 fps cap in GLFW?

I'm writing a game with OGL / GLFW in C++.
My game always runs at 60 fps and without any screen tearing. After doing some research, it seems that the glfwSwapInterval() function should be able to enable/disable V-sync or the 60 fps cap.
However, no matter the value I pass to the function, the framerate stays locked at 60 and there is no tearing whatsoever. I have also checked the compositor settings on Linux and the NVIDIA panel, and they have no effect.
I assume this is a common thing; is there a way to get around this fps cap?
The easiest way is to use single buffering instead of double buffering. Since single buffering always uses the same buffer, there is no buffer swap and no "vsync".
Use the glfwWindowHint to disable double buffering:
glfwWindowHint(GLFW_DOUBLEBUFFER, GLFW_FALSE);
GLFWwindow *wnd = glfwCreateWindow(w, h, "OGL window", nullptr, nullptr);
Note that when you use single buffering, you have to explicitly force execution of the GL commands with glFlush, instead of the buffer swap (glfwSwapBuffers).
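For example, a single-buffered render loop could look roughly like this (a sketch; wnd is the window created above and the drawing itself is elided):
while (!glfwWindowShouldClose(wnd))
{
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the scene ...
    glFlush();          // force execution of the GL commands; no glfwSwapBuffers here
    glfwPollEvents();
}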
Another possibility is to set the swap interval (the number of screen updates to wait before glfwSwapBuffers actually swaps the buffers) to 0. This can be done with glfwSwapInterval, after making the OpenGL context current (glfwMakeContextCurrent):
glfwMakeContextCurrent(wnd);
glfwSwapInterval(0);
But note, whether this solution works or not, may depend on the hardware and the driver.

How to show the result of opengl?

I'm very new to OpenGL. I want to learn it from the Blue Book (OpenGL superbible 6th edition).
When I compile the first program with Visual Studio 2013, everything goes well, except that a white window appears and then the program quits with code 0.
The program is:
// Include the "sb6.h" header file
#include "sb6.h"
// Derive my_application from sb6::application
class my_application : public sb6::application
{
public:
    // Our rendering function
    void render(double currentTime)
    {
        // Simply clear the window with red
        static const GLfloat red[] = { 1.0f, 0.0f, 0.0f, 1.0f };
        glClearBufferfv(GL_COLOR, 0, red);
    }
};
// Our one and only instance of DECLARE_MAIN
DECLARE_MAIN(my_application);
I think both the compiling and building processes are working fine because the result code is 0. But I cannot figure out why I don't see a red output window. Please help me.
You are basically dealing with a real-time application, which means it takes very little time to render a frame. Rendering a frame means OpenGL will take all the commands you used to define the scene (the geometry, the settings of the buffers, the shaders, etc.) and render 1 frame as quickly as it can. Generally, if your application needs to be real-time, it will have to be able to render this scene at more than 30 frames per second.
So what your program basically does is render this one frame and then quit. You don't say in your post which framework you use to create your application (GLUT, GLFW? You say you use the code from the blue book, but it's obviously a wrapper around something else), but a typical OpenGL app does this (in pseudo C/C++ code, assuming some arbitrary framework to deal with keyboard/mouse events, etc.):
bool run = true;
main() {
    ...
    while (run) {
        event e = get_event();   // this is a key or mouse event
        process_event(e);
        render_frame();
    }
    ...
}
void process_event(event e)
{
    if (e.type == KEYBOARD_EVENT && e.value == ESC) { run = false; return; }
    ...
}
The idea is that you run the render function within an infinite loop. So each time the program iterates over the loop, it renders the content of your OpenGL scene to the screen. Of course, since it's an infinite loop, the window stays on the screen until you decide to kill the program (or implement some mechanism in which you escape the loop when a specific key is used, typically the escape key).
The most basic way of getting this to work is to use an infinite loop:
while (1) {
    render_frame();
}
and then interrupt/kill your program with Ctrl+C. That way you don't have to deal with key handling yet, you can at least see what your program does, and you can move on to learning how to use keys afterwards.
Also, I am not sure your code will do anything. First, if you use a double buffer (generally the default these days), you will need to swap buffers to see even the clear function doing something. Second, you will need to add some geometry to your scene. However, note that if you use OpenGL 4 for example, you will need to declare and use shaders to see anything, and this is not easy to get working the first time.
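To make that concrete, here is a minimal sketch of such a loop written directly against GLFW (an assumption, since the sb6 framework hides the real loop; plain glClearColor/glClear are used so no extension loader is needed):
#include <GLFW/glfw3.h>
int main()
{
    if (!glfwInit()) return -1;
    GLFWwindow *win = glfwCreateWindow(640, 480, "red window", nullptr, nullptr);
    glfwMakeContextCurrent(win);
    while (!glfwWindowShouldClose(win))         // keeps the window up until it is closed
    {
        glClearColor(1.0f, 0.0f, 0.0f, 1.0f);   // red
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(win);                   // show the back buffer (double buffering)
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}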
Note that the infinite loop may well be embedded within the macro DECLARE_MAIN; this is the problem with using a framework like the one in your example: you don't know what's happening elsewhere in the code or how things are coded. For example, maybe the buffer swap is happening inside DECLARE_MAIN. I understand why they use a macro like that for teaching (it hides all the complexity of getting an OpenGL app working), but the downside is that it stops you from truly understanding how things work. I personally think this is not the best way to teach graphics, especially OpenGL.
The blue book is great, but I would also recommend that you find an example on the web that shows how to render a simple triangle in GL, for example:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-2-the-first-triangle/
There are quite a few on the web which are simple, well explained, etc.
You will need to choose your framework first though. I recommend GLFW if you can.
Also, while its lessons on OpenGL haven't been written yet, I would recommend you check www.scratchapixel.com in the future. It will explain how OpenGL works and guide you step by step to get a simple app running.
If you have more questions please add them in your comments.

Segfault with glPopMatrix

I'm porting/testing my code on a Raspberry Pi running Pidora, all updated.
The test is a very simple OpenGL program, which works fine on two other computers. I narrowed down a problem to a glPopMatrix call, which causes a segfault. A trivial reproduction of the problem (the entire draw function) is:
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glViewport(0,0,200,200);
float d1=0.0f, d2=0.1f;
glPushMatrix(); //gratuitous push/pop pair to demonstrate
glBegin(GL_TRIANGLES);
glColor4f(1.0f,0.0f,0.0f,0.5f); glVertex3f( 0.00f, 1.0f,d1);
glColor4f(0.0f,1.0f,0.0f,0.5f); glVertex3f( 0.87f,-0.5f,d1);
glColor4f(0.0f,0.0f,1.0f,0.5f); glVertex3f(-0.87f,-0.5f,d1);
glColor4f(1,0,0,1);
glVertex3f(0,0,d2);
glVertex3f(1,0,d2);
glVertex3f(0,1,d2);
glColor4f(1,0,1,1);
glVertex3f( 0, 0,-d2);
glVertex3f(-1, 0,-d2);
glVertex3f( 0,-1,-d2);
glEnd();
glPopMatrix(); //segfault here!
The only reference I could find was this 2011 bug report, which describes effectively the same problem. So far, it seems they only have a workaround:
export GALLIUM_DRIVER=softpipe
export DRAW_USE_LLVM=no
I found only the first line was necessary. However, as suggested by the above, it looks like it might be forcing the OS to use a software fallback. It shows. The program (which as above draws three triangles) runs at about 10Hz.
It's common knowledge that the OpenGL matrix stack is deprecated, but simple usage cases like the above are useful for testing. I would expect glPopMatrix to at least not crash if it's present. Is there a way I can get hardware acceleration but still use this?
The Raspberry Pi has hardware support for OpenGL ES 1.x/2.x via vendor-specific libraries - but none for desktop GL. Immediate mode (glBegin/glEnd) was never in GLES. You have to use vertex arrays. However, the matrix stack is available in GLES 1.x. You have to use EGL to get a hw-accelerated context. On the upside, GL on the RPi does not require X11, so you can have a GL overlay directly on the console, which is very cool. The official RPi firmware comes with the hello_triangle demo, which shows you how to get a valid context; the source can be found in /opt/vc/src/hello_pi/hello_triangle. There are also Ben O. Steen's RPi ports of the examples from the OpenGL ES programming guide.
You are currently using the mesa software renderer, which will be extremely slow on that platform. The crash seems to be a mesa bug, but as mesa doesn't have any hw acceleration support for the GPU of the RPi, this should not really matter. The code you have pasted is valid desktop GL 1.x/2.x and should not crash on a conforming implementation.
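For reference, here is a sketch of the first triangle above expressed with client-side vertex arrays, which work on both GLES 1.x and desktop GL (it assumes a suitable context is already current):
static const GLfloat verts[]  = {  0.00f,  1.0f, 0.0f,
                                   0.87f, -0.5f, 0.0f,
                                  -0.87f, -0.5f, 0.0f };
static const GLfloat colors[] = { 1.0f, 0.0f, 0.0f, 0.5f,
                                  0.0f, 1.0f, 0.0f, 0.5f,
                                  0.0f, 0.0f, 1.0f, 0.5f };
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLES, 0, 3);   // replaces the glBegin/glEnd block
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);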

glScissor behaving strangely in my code

A laptop user running an ATI Radeon HD 5800 Series just showed me a video of my 2D game running on his machine, and it would appear my glScissor code was not working as intended on it (despite it working on soo many other computers).
Whenever I want to restrict rendering to a certain rectangle on the screen, I call my SetClip function to either the particular rectangle, or Rectangle.Empty to reset the scissor.
public void SetClip(Rectangle theRect)
{
    if (theRect.IsEmpty)
        OpenGL.glDisable(OpenGL.EnableCapp.ScissorTest);
    else
    {
        if (!OpenGL.glIsEnabled(OpenGL.EnableCapp.ScissorTest))
            OpenGL.glEnable(OpenGL.EnableCapp.ScissorTest);
        OpenGL.glScissor(theRect.Left, myWindow.clientSizeHeight - theRect.Bottom,
                         theRect.Width, theRect.Height);
        CheckError();
    }
}
Is this approach wrong? For instance, I have a feeling that glEnable / glDisable might require a glFlush or glFinish to guarantee it's executed in the order in which I call them.
I'd put my money on a driver bug. Your code looks fine.
The only thing I suggest is changing this:
if (!OpenGL.glIsEnabled(OpenGL.EnableCapp.ScissorTest))
    OpenGL.glEnable(OpenGL.EnableCapp.ScissorTest);
into a mere
OpenGL.glEnable(OpenGL.EnableCapp.ScissorTest);
Changing glEnable state comes practically for free, so doing that test is a micro-optimization. In fact, first testing with glIsEnabled probably costs as much as the overhead caused by redundantly setting it. It also happens that some drivers are buggy in what they report through glIsEnabled, so I'd remove the test to cut out another potential error source.
I have a feeling that glEnable / glDisable might require a glFlush or glFinish to guarantee it's executed in the order in which I call them.
No they don't. In fact glFlush and glFinish are not required for most programs. If you're using a double buffer you don't need them at all.
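As an aside, the height flip in the original SetClip is correct, since glScissor takes its rectangle from the window's lower-left corner; in plain C/OpenGL terms the call amounts to something like this (names here are illustrative):
void SetClip(int left, int top, int width, int height, int windowHeight)
{
    glEnable(GL_SCISSOR_TEST);
    // glScissor wants the lower-left corner, hence windowHeight - (top + height)
    glScissor(left, windowHeight - (top + height), width, height);
}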

How can I efficiently draw thousands of vertices?

I'm currently writing an interactive simulator which displays the evolution of a system of particles. I'm developing on Windows 7 32-bit, using Visual Studio.
Currently, I have a function to draw all the particles on screen that looks something like this:
void Simulator::draw()
{
    glColor4f(255, 255, 255, 0);
    glBegin(GL_POINTS);   // glBegin needs a primitive type; points for particles
    for (size_t i = 0; i < numParticles; i++)
        glVertex3dv(p[i].pos);
    glEnd();
}
This works great and all for testing, but it's absurdly slow. If I have 200 particles on screen, without doing any other computations (just repeatedly calling draw()), I get about 60 fps. But if I use 1000 particles, this runs at only about 15 - 20 fps.
My question is: How can I draw the particles more quickly? My simulation runs at a fairly decent speed, and at a certain point it's actually being held back (!) by the drawing.
The very first optimization you should do is to drop glBegin/glEnd (Immediate mode) and move to Vertex Arrays (VAs) and Vertex Buffer Objects (VBOs).
You might also want to look into Point Sprites to draw the particles.
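A sketch of what that could look like for the draw() above (this assumes an extension loader such as GLEW for the buffer entry points; particleVbo is a hypothetical GLuint created once with glGenBuffers, and the positions are copied into a temporary array because the layout of p[i].pos is not known here):
void Simulator::draw()
{
    // Pack the positions into one contiguous array (needs <vector>).
    std::vector<GLdouble> verts;
    verts.reserve(numParticles * 3);
    for (size_t i = 0; i < numParticles; i++)
    {
        verts.push_back(p[i].pos[0]);
        verts.push_back(p[i].pos[1]);
        verts.push_back(p[i].pos[2]);
    }
    // Upload once per frame and draw every particle with a single call.
    glBindBuffer(GL_ARRAY_BUFFER, particleVbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(GLdouble), &verts[0], GL_STREAM_DRAW);
    glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_DOUBLE, 0, 0);   // read positions from the bound VBO
    glDrawArrays(GL_POINTS, 0, (GLsizei)numParticles);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
This collapses thousands of glVertex3dv calls into a single draw call; point sprites can then be layered on top for nicer-looking particles.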
This will be unpopular; I know, I hate it:
use DirectX instead.
Microsoft has been trying to kill off OpenGL since Vista,
so chances are they are going to make life difficult for you.
So if ALL ELSE fails, you could try using DirectX.
But yeah, what others have said:
- use glDrawArrays / glDrawElements
- use vertex buffers
- use point sprites
- you could write a vertex shader so that you don't have to update the whole vertex buffer every frame (you just store initial settings like initial position, initial velocity, start time, and constant gravity, and the shader can calculate the current position each frame from a single updated global constant 'current time') - see the sketch after this list
- if there is a setting for it in OpenGL (like there is in DirectX), make sure you are using hardware vertex processing
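A rough sketch of that vertex-shader idea (compatibility-profile GLSL kept in a C++ string; the attribute and uniform names are made up for illustration, and the shader still has to be compiled and linked with the usual glCreateShader/glCompileShader/glLinkProgram calls):
const char *particleVS =
    "uniform float currentTime;                                       \n"
    "attribute vec3  initPos;                                         \n"
    "attribute vec3  initVel;                                         \n"
    "attribute float startTime;                                       \n"
    "const vec3 gravity = vec3(0.0, -9.81, 0.0);                      \n"
    "void main() {                                                    \n"
    "    float t = currentTime - startTime;                           \n"
    "    vec3 pos = initPos + initVel * t + 0.5 * gravity * t * t;    \n"
    "    gl_Position = gl_ModelViewProjectionMatrix * vec4(pos, 1.0); \n"
    "}                                                                \n";
With this, the per-particle attributes are uploaded once and only the currentTime uniform changes each frame.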
First, if you haven't done so, disable the desktop window manager -- it can make a huge difference in speed (you're basically getting largely software rendering with it turned on).
Second, though I doubt you'll need to, you could switch to using glDrawArrays or glDrawElements (for only a couple among many possibilities).