Understanding FPS and the methods used to achieve it - C++

Just looking for resources that break down how frames per second work. I know it has something to do with keeping track of ticks and figuring out how many ticks occurred between each frame. But I've never run into any resources on why exactly you have to use the methods you use in order to get a smooth frame rate. I am trying to get a thorough understanding of this. Can anyone explain or provide any good resources? Thanks

There are basically two approaches.
In ActionScript (and many other engines), you request the player to call a certain function at a certain framerate. For Flash games, you'll set the framerate to be 30 FPS, and then you'll implement a function that listens for ENTER_FRAME events to do what you need to do. This means you get roughly 33 ms per frame (1000ms/30FPS=33.33ms/frame). If your code that responds to ENTER_FRAME takes more than 33 ms, you'll get some stuttering.
In home-rolled main loops (like you'd generally do in C++/SDL, for example), you run the main loop as fast as possible. This means the time between each frame will be variable. You still need to keep the "guts" of your frame code less than 33 ms to make sure you'll get at least 30 FPS, but your game will run faster than 30 FPS if not a lot's going on. To account for this, you need to program all your logic in terms of elapsed time since last frame, and abandon using frames themselves as a unit of time.
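Purely as an illustration of that second, home-rolled style, here is a minimal self-contained sketch using std::chrono; the object, its speed and the frame cap are made-up stand-ins, not from any particular engine:

#include <chrono>
#include <cstdio>

int main()
{
    using clock = std::chrono::steady_clock;
    auto previous = clock::now();
    double x = 0.0;               // position of some object, in world units
    const double speed = 100.0;   // units per second, independent of frame rate

    for (int frame = 0; frame < 1000; ++frame) {   // stand-in for "while (running)"
        auto now = clock::now();
        double dt = std::chrono::duration<double>(now - previous).count();
        previous = now;

        x += speed * dt;          // logic expressed in elapsed time, not in frames
        std::printf("frame %d: x = %f\n", frame, x);   // stand-in for rendering
    }
}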

http://forums.xna.com/forums/t/42624.aspx
How do you separate game logic from display?

For a continuously variable frame rate you can measure the time the last frame took and assume this frame will take the same length of time. This has the benefit that time runs more or less constantly. Your biggest issue with this approach is that it is entirely possible for a VSync'd game to change from 60 fps to 30 fps and back again on subsequent frames. From experience, a good way to solve this is to average the last few frame times; this smooths the result out. In the 60-to-30 fps switch, each frame progresses assuming 1/45 seconds, so the 60 fps frames run slow and the 30 fps frames run fast, and the perceived speed remains at 45 fps.
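A minimal sketch of that averaging trick, assuming a window of the last four frame times (the window size is an arbitrary choice, not something the answer prescribes):

#include <cstddef>
#include <deque>
#include <numeric>

// Feed in the last frame's duration, get back a smoothed timestep to use this frame.
class FrameTimeSmoother {
public:
    double smooth(double lastFrameSeconds)
    {
        history.push_back(lastFrameSeconds);
        if (history.size() > windowSize)
            history.pop_front();
        double sum = std::accumulate(history.begin(), history.end(), 0.0);
        return sum / history.size();   // alternating 1/60 s and 1/30 s frames smooth out to a steady in-between step
    }
private:
    static constexpr std::size_t windowSize = 4;   // arbitrary smoothing window
    std::deque<double> history;
};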
Better still is to not use this sort of time step in your calculations. Set yourself a minimum fps ... say 10 fps. You then calculate all your game logic at some multiple of these 1/10 second intervals. The render engine then knows where the object is and where it is heading, and so can inter/extrapolate the object's position until its next "decision" frame shows up, as in the sketch below. This has numerous advantages. It decouples your logic entirely from rendering. It allows you to spread the logic calculations over a number of frames. For example, at 60 Hz, you can test every 6th frame at what point a logic object will intersect with the world if it maintains its path. This gives the bonus of allowing you to process some logic objects on different frames to spread the calculation load across time. Its biggest disadvantage is that if the frame rate drops below your target rate, everything slows down.
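To make the decoupling concrete, here is a rough sketch of fixed-rate logic with render-time extrapolation; the 10 Hz step and all names are assumptions for illustration only:

// Logic decides where an object is heading only LOGIC_DT apart; the renderer
// extrapolates from the last decision every frame.
struct Object {
    double x, y;     // position at the last logic step
    double vx, vy;   // velocity chosen at that step
};

const double LOGIC_DT = 1.0 / 10.0;   // the "minimum fps" decision interval

void logicStep(Object& o)
{
    // Collision tests, path decisions, etc. run here, only 10 times per second,
    // and can be spread across render frames for different objects.
    o.x += o.vx * LOGIC_DT;
    o.y += o.vy * LOGIC_DT;
}

// Called every render frame; t is the time since the last logic step (0..LOGIC_DT).
void renderObject(const Object& o, double t)
{
    double drawX = o.x + o.vx * t;    // extrapolate towards where the object is heading
    double drawY = o.y + o.vy * t;
    (void)drawX; (void)drawY;         // real code would draw at (drawX, drawY)
}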

Related

OpenGL framerate: connection with the size of the window

I was in the process of tracking down and eliminating those parts of my C++/OpenGL/GLUT code that were inefficient and slow, and in doing so, I watched my frames per second counter to know if I was actually making progress. I noticed that my frame rate dropped from about 120 to 60 if I maximized the window.
Further experimentation revealed that this was a linear thing, I could change the frame rate by changing the size of the window.
Does this mean that my bottleneck is in the GPU rendering? Surely GPUs these days are more than powerful enough not to notice the difference between 300x300 and 1920x1080? Or am I asking too much from my graphics card?
The alternative is that there is some bug in my code that is causing the system to slow down on larger renders.
What I am asking is this: is it reasonable to expect a halving of framerate when changing the window size, or is there something very wrong?
Further experimentation revealed that this was a linear thing, I could change the frame rate by changing the size of the window.
Congratulations: You discovered fill rate
Does this mean that my bottleneck is in the GPU rendering?
Yes, pretty much. To be specific the bottleneck is either the bandwidth from/to the graphics memory, or the complexity of the fragment shader, or a combination of both.
Surely GPUs these days are more than powerful enough not to notice the difference between a 300x300 and 1920x1080?
300×300 = 90000
1920×1080 = 2073600
Or in other words: you ask the GPU to fill about 23 times as many pixels, which means about 23 times as much data must be flung around and also be processed.
That drop from 120Hz to 60Hz comes from V-Sync. If you disabled V-Sync you'd find that your program would probably reach way higher rates than 60Hz for 1920×1080, but for 300×300 it will be something below 180Hz.
The reason for that is simple: when synced to the display's vertical retrace, your GPU can "put out" the next frame only at the moment the display is v-syncing. If your display can do 120Hz (as yours evidently can) and your rendering takes less than 1/120 s to complete, it will make the deadline and your framerate synchronizes to the display. If, however, drawing a frame takes more than 1/120 s, it will sync with every 2nd frame displayed; if rendering takes more than 1/60 s, with every 3rd; more than 1/40 s, with every 4th; and so on.
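A small worked example of that quantization, assuming a 120 Hz display as above (the frame times are made up):

#include <cmath>
#include <cstdio>

int main()
{
    const double refreshHz = 120.0;
    const double frameTimes[] = { 0.006, 0.012, 0.020, 0.030 };   // seconds to render one frame

    for (double t : frameTimes) {
        // The buffer swap can only happen on a retrace, so each frame occupies
        // a whole number of refresh intervals.
        double intervals = std::ceil(t * refreshHz);
        std::printf("render %.3f s -> synced to %.0f fps\n", t, refreshHz / intervals);
    }
    // Prints 120, 60, 40 and 30 fps respectively.
}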

How to fix game loop running at different framerate than VSync?

For example, I have 60 fps for game logic but 50 fps for VSync. Should I move the game loop into a different thread and send drawing requests to a drawing thread, or can I still fix this with one thread?
Does your game logic take 1s/60 to compute, or is this just the simulation interval you use? If it is just an interval, I suggest you adjust the simulation interval by the time each frame actually took to display, i.e. measure the time between frames and feed your simulation with that.
You should rather use a timer than hard-coded timing. (Rather use the difference between the previous update and the current update than 1/60 seconds.) That avoids situations where the computations cannot keep up at 60 fps. Then VSync won't be an issue either.
Correct me if I am wrong, but you can render at any fps and the screen can draw at any other fps without any issues (except maybe a little bit of jitter).
With multi-core CPUs, I recommend using threads where practical. For example, physics in one thread and rendering in another (render from snapshot data to prevent side effects).
Take a look at this article by Glenn Fiedler, "Fix your timestep":
http://gafferongames.com/game-physics/fix-your-timestep/

How can I ensure proper speed of an animation in Qt using OpenGL?

I have 200 frames to be displayed per second. The frames are very very simple, black and white, just a couple of lines. A timer is driving the animation. The goal is to play back the frames at approx. 200 fps.
Under Linux I set the timer to 5 ms and I let it display every frame (that is 200fps). Works just fine but it fails under Win 7.
Under Win 7 (the same machine) I had to set the timer to 20 ms and let it display every 4th frame (50 fps × 4 = 200). I found these magic numbers by trial and error.
What should I do to guarantee (within reasonable limits) that the animation will be played back at a proper speed on the user's machine?
For example, what if the user's machine can only do 30 fps or 60 fps?
The short answer is, you can't (in general).
For best aesthetics, most windowing systems have "vsync" on by default, meaning that screen redraws happen at the refresh rate of the monitor. In the old CRT days, you might be able to get 75-90 Hz with a high-end monitor, but with today's LCDs you're likely stuck at 60 fps.
That said, there are OpenGL extensions that can programmatically disable VSync (I don't remember the extension name offhand), and you can frequently disable it at the driver level. However, no matter what you do (barring custom hardware), you're not going to be able to display complete frames at 200 fps.
Now, it's not clear if you've got pre-rendered images that you need to display at 200 fps, or if you're rendering from scratch and hoping to achieve 200 fps. If it's the former, a good option might be to use a timer to determine which frame you should display (at each 60 Hz. update), and use that value to linearly interpolate between two of the pre-rendered frames. If it's the latter, I'd just use the timer to control motion (or whatever is dynamic in your scene) and render the appropriate scene given the time. Faster hardware or disabled VSYNC will give you more frames (hence smoother animation, modulo the tearing) in the same amount of time, etc. But the scene will unfold at the right pace either way.
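For the pre-rendered case, a rough sketch of that timer-driven interpolation (the Frame type and the drawing call are placeholders, not from the question):

#include <cstddef>
#include <vector>

struct Frame { /* pixel data or line endpoints for one pre-rendered image */ };

// t is the elapsed animation time in seconds; the frames are sampled at 200 Hz.
void presentAt(const std::vector<Frame>& frames, double t)
{
    double pos = t * 200.0;                      // position in source-frame units
    std::size_t i = static_cast<std::size_t>(pos);
    if (i + 1 >= frames.size())
        return;                                  // animation finished
    double blend = pos - i;                      // 0..1 between frame i and frame i+1
    (void)blend;                                 // silence unused warning in this sketch
    // drawInterpolated(frames[i], frames[i + 1], blend);  // e.g. lerp the line endpoints
}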
Hope this is helpful. We might be able to give you better advice if you give a little more info on your application and where the 200 fps requirement originates.
I've already read that you have data sampled at 200 Hz which you want to play back at natural speed, i.e. one second of sampled data should be rendered over one second of time.
First: Forget about using timers to coordinate your rendering, this is unlikely to work properly. Instead you should measure the time a full rendering cycle (including v-sync) takes and advance the animation time-counter by this. Now 200Hz is already some very good time resolution, so if the data is smooth enough, then there should be no need to interpolate at all. So something like this (Pseudocode):
objects[]    # the objects, animated by the animation
animation[]  # steps of the animation, sampled at 200Hz
ANIMATION_RATE = 200.  # animation steps per second; of course this shouldn't be
                       # hardcoded, but loaded with the animation data
animationStep = 0
timeLastFrame = None

drawGL():
    timeNow = now()  # time in seconds with (at least) ms-accuracy
    if timeLastFrame:
        stepTime = timeNow - timeLastFrame
    else:
        stepTime = 0
    animationStep = round(animationStep + stepTime * ANIMATION_RATE)
    drawObjects(objects, animation[animationStep])
    timeLastFrame = timeNow
It may be that your rendering is much faster than the time between screen refreshes. In that case you may want to render some of the intermediate steps, too, to get some kind of motion blur effect (you can also use the animation data to obtain motion vectors, which can be used in a shader to create a vector blur effect). The render loop would then look like this:
drawGL():
    timeNow = now()  # time in seconds with (at least) ms-accuracy
    if timeLastFrame:
        stepTime = timeNow - timeLastFrame
    else:
        stepTime = 0
    timeRenderStart = now()
    animationStep = round(animationStep + stepTime * ANIMATION_RATE)
    drawObjects(objects, animation[animationStep])
    glFinish()  # don't call SwapBuffers
    timeRender = now() - timeRenderStart
    setup_GL_for_motion_blur()
    intermediates = floor(stepTime / timeRender) - 1  # subtract one to get some margin
    if intermediates > 0:
        backstep = ANIMATION_RATE * (stepTime / intermediates)  # animation steps between blur samples
        for i in 0 to intermediates:
            drawObjects(objects, animation[animationStep - i * backstep])
    timeLastFrame = timeNow
One way is to sleep for 1ms at each iteration of your loop and check how much time has passed.
If more than the target amount of time has passed (for 200fps that is 1000/200 = 5ms), then draw a frame. Else, continue to the next iteration of the loop.
E.g. some pseudo-code:
target_time = 1000/200;  // 200fps => 5ms target time.
timer = new timer();     // Define a timer by whatever method is permitted in your
                         // implementation.
while (true) {
    if (timer.elapsed_time < target_time) {
        sleep(1);
        continue;
    }
    timer.reset();                    // Reset your timer to begin counting again.
    do_your_draw_operations_here();   // Do some drawing.
}
This method has the advantage that if the user's machine is not capable of 200fps, you will still draw as fast as possible, and sleep will never be called.
There are probably two totally independent factors to consider here:
How fast is the users machine? It could be that you are not achieving your target frame rate due to the fact that the machine is still processing the last frame by the time it is ready to start drawing the next frame.
What is the resolution of the timers you are using? My impression (although I have no evidence to back this up) is that timers under Windows operating systems provide far poorer resolution than those under Linux. So you might be requesting a sleep of (for example) 5 ms, and getting a sleep of 15 ms instead.
Further testing should help you figure out which of these two scenarios is more pertinent to your situation.
If the problem is a lack of processing power, you can choose to display intermediate frames (as you are doing now), or degrade the visuals (lower quality, lower resolution, or anything else that might help speed things up).
If the problem is timer resolution, you can look at alternative timer APIs (the Windows API provides two different timer function calls, each with different resolutions; perhaps you are using the wrong one), or try to compensate by asking for smaller time slices (as in Kdoto's suggestion). However, doing this may actually degrade performance, since you're now doing a lot more processing than you were before - you may notice your CPU usage spike under this method.
Edit:
As Drew Hall mentions in his answer, there's another whole side to this: the refresh rate you get in code may be very different from the actual refresh rate appearing on screen. However, that's output-device dependent, and it sounds from your question like the issue is in code rather than in output hardware.

setting max frames per second in openGL

Is there any way to calculate how many updates should be made to reach the desired frame rate, NOT system specific? I found something for Windows, but I would like to know if something like this exists in OpenGL itself. It should be some sort of timer.
Or how else can I prevent FPS from dropping or rising dramatically? For now I'm testing it by drawing a big number of vertices in a line, and using Fraps I can see the frame rate go from 400 to 200 fps with an evident slowing down of the drawing.
You have two different ways to solve this problem:
Suppose that you have a variable called maximum_fps, which contains the maximum number of frames per second you want to display.
Then you measure the amount of time spent on the last frame (a timer will do).
Now suppose that you said you wanted a maximum of 60 FPS for your application. Then you want the measured time to be no lower than 1/60 of a second. If the measured time is lower, you call sleep() for the amount of time left in the frame.
Or you can have a variable called tick that contains the current "game time" of the application. With the same timer, you increment it at each iteration of your main loop. Then, in your drawing routines, you calculate positions based on the tick variable, since it contains the current time of the application.
The big advantage of option 2 is that your application will be much easier to debug, since you can play around with the tick variable, go forward and back in time whenever you want. This is a big plus.
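A rough sketch of the tick idea, assuming the tick is kept in seconds and a time scale is added for the debugging trick mentioned above (all names are made up):

#include <chrono>

double gameTime = 0.0;    // current "game time" in seconds; drawing derives positions from this
double timeScale = 1.0;   // 1 = normal, 0.5 = slow motion, 0 = paused, etc.

void tick()               // call once per main-loop iteration
{
    using clock = std::chrono::steady_clock;
    static auto previous = clock::now();
    auto now = clock::now();
    gameTime += timeScale * std::chrono::duration<double>(now - previous).count();
    previous = now;
}

// In a drawing routine, positions come from gameTime, e.g.:
// double x = startX + speedX * gameTime;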
Rule #1. Do not make update() or loop() kind of functions rely on how often it gets called.
You can't really get exactly the FPS you want. You could try to boost it by skipping some expensive operations, or slow it down by calling sleep()-like functions. However, even with those techniques, FPS will almost always be different from the exact FPS you want.
The common way to deal with this problem is using elapsed time from previous update. For example,
// Bad
void enemy::update()
{
    position.x += 10; // this enemy's movement speed depends entirely on FPS, and you can't control it.
}

// Good
void enemy::update(float elapsedTime)
{
    position.x += speedX * elapsedTime; // now you control speedX, and it doesn't matter how often update() is called.
}
Is there any way to calculate how many updates should be made to reach the desired frame rate, NOT system specific?
No.
There is no way to precisely calculate how many updates should be called to reach desired framerate.
However, you can measure how much time has passed since last frame, calculate current framerate according to it, compare it with desired framerate, then introduce a bit of Sleeping to reduce current framerate to the desired value. Not a precise solution, but it will work.
I found something for Windows, but I would like to know if something like this exists in OpenGL itself. It should be some sort of timer.
OpenGL is concerned only with rendering and has nothing to do with timers. Also, using Windows timers isn't a good idea. Use QueryPerformanceCounter, GetTickCount or SDL_GetTicks to measure how much time has passed, and sleep to reach the desired framerate.
Or how else can I prevent FPS from dropping or rising dramatically?
You prevent FPS from rising by sleeping.
As for preventing FPS from dropping...
It is an insanely broad topic. Let's see. It goes something like this: use vertex buffer objects or display lists, profile your application, do not use insanely big textures, do not use too much alpha blending, avoid "raw" OpenGL (glVertex3f), do not render invisible objects (even if no polygons are drawn, processing them takes time), consider learning about BSPs or octrees for rendering complex scenes, do not needlessly use too many primitives in parametric surfaces and curves (if you render a circle using one million polygons, nobody will notice the difference), and disable vsync. In short: reduce the number of rendering calls, rendered polygons, rendered pixels and texels read to the absolute possible minimum, read every available performance document from NVidia, and you should get a performance boost.
You're asking the wrong question. Your monitor will only ever display at 60 fps (50 fps in Europe, or possibly 75 fps if you're a pro-gamer).
Instead you should be seeking to lock your fps at 60 or 30. There are OpenGL extensions that allow you to do that. However the extensions are not cross platform (luckily they are not video card specific or it'd get really scary).
windows: wglSwapIntervalEXT
x11 (linux): glXSwapIntervalSGI
Mac OS X: ?
These extensions are closely tied to your monitor's v-sync. Once enabled, calls to swap the OpenGL back-buffer will block until the monitor is ready for it. This is like putting a sleep in your code to enforce 60 fps (or 30, or 15, or some other number if you're not using a monitor that displays at 60 Hz). The difference is that the "sleep" is always perfectly timed instead of an educated guess based on how long the last frame took.
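As an illustration of the Windows path, here is a sketch of enabling v-sync through the WGL_EXT_swap_control extension (error handling omitted; it assumes a current OpenGL context already exists):

#include <windows.h>
#include <GL/gl.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void enableVSync()
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(1);   // 1 = one vertical retrace per buffer swap, 0 = uncapped
}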
You absolutely do want to throttle your frame rate. It all depends on what you have going on in that rendering loop and what your application does, especially where physics or networking is involved, or if you're doing any type of graphics processing with an outside toolkit (Cairo, QPainter, Skia, AGG, ...), unless you want out-of-sync results or 100% CPU usage.
This code may do the job, roughly.
static int redisplay_interval;

void timer(int) {
    glutPostRedisplay();
    glutTimerFunc(redisplay_interval, timer, 0);
}

void setFPS(int fps)
{
    redisplay_interval = 1000 / fps;
    glutTimerFunc(redisplay_interval, timer, 0);
}
Here is a similar question, with my answer and worked example
I also like deft_code's answer, and will be looking into adding what he suggests to my solution.
The crucial part of my answer is:
If you're thinking about slowing down AND speeding up frames, you have to think carefully about whether you mean rendering or animation frames in each case. In this example, render throttling for simple animations is combined with animation acceleration, for any cases when frames might be dropped in a potentially slow animation.
The example is for animation code that renders at the same speed regardless of whether benchmarking mode, or fixed FPS mode, is active. An animation triggered before the change even keeps a constant speed after the change.

Question about running a program at same speed in any computer

I made a program (in C++, using gl/glut) for study purposes where you can basically run around a screen (in first person), and it has several solids around the scene. I tried to run it on a different computer and the speed was completely different, so I searched on the subject and I'm currently doing something like this:
Idle function:
start = glutGet (GLUT_ELAPSED_TIME);
double dt = (start-end)*30/1000;
<all the movement*dt>
glutPostRedisplay ();
end = glutGet (GLUT_ELAPSED_TIME);
Display function:
<rendering for all objects>
glutSwapBuffers ();
My question is: is this the proper way to do it? The scene is being displayed after the idle function right?
I tried placing end = glutGet (GLUT_ELAPSED_TIME) before glutSwapBuffers () and didn't notice any change, but when I put it after glutSwapBuffers () it slows down a lot and even stops sometimes.
EDIT: I just noticed that, the way I'm thinking about it, end-start should end up being the time that passed since all the drawing was done and before the movement update, as idle () would be called as soon as display () ends. So is it true that the only time that's not being accounted for here is the time the computer takes to do all of the movement? (Which should be barely anything?)
Sorry if this is too confusing..
Thanks in advance.
I don't know what "Glut" is, but as a general rule of game development, I would never base movement speed off of how fast the computer can process the directives. That's what they did in the late 80's and that's why when you play an old game, things move at light speed.
I would set up a timer, and base all of my movements off of clear and specific timed events.
Set up a high-resolution timer (eg. QueryPerformanceCounter on Windows) and measure the time between every frame. This time, called delta-time (dt), should be used in all movement calculations, eg. every frame, set an object's position to:
obj.x += 100.0f * dt; // to move 100 units every second
Since the sum of dt should always be 1 over 1 second, the above code increments x by 100 every second, no matter what the framerate is. You should do this for all values which change over time. This way your game proceeds at the same rate on all machines (framerate independent), rather than depending on the rate the computer processes the logic (framerate dependent). This is also useful if the framerate starts to drop - the game doesn't suddenly start running in slow-motion, it keeps going at the same speed, just rendering less frequently.
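A sketch of that measurement with QueryPerformanceCounter on Windows; the function name and the 100-units-per-second usage line are just illustrations, not part of the original answer:

#include <windows.h>

double getDeltaSeconds()   // seconds elapsed since the previous call
{
    static LARGE_INTEGER frequency = {};
    static LARGE_INTEGER previous = {};
    if (frequency.QuadPart == 0) {
        QueryPerformanceFrequency(&frequency);   // ticks per second
        QueryPerformanceCounter(&previous);
    }
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    double dt = double(now.QuadPart - previous.QuadPart) / double(frequency.QuadPart);
    previous = now;
    return dt;
}

// Per frame:  obj.x += 100.0f * (float)getDeltaSeconds();   // 100 units per second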
I wouldn't use a timer. Things can go wrong, and events can stack up if the PC is too slow or too busy to run at the required rate. I'd let the loop run as fast as it's allowed, and each time calculate how much time has passed and put this into your movement/logic calculations.
Internally, you might actually implement small fixed-time sub-steps, because trying to make everything work right on variable time-steps is not as simple as x+=v*dt.
Try gamedev.net for stuff like this. Lots of articles and a busy forum.
There is a perfect article about game loops that should give you all the information you need.
You have plenty of answers on how to do it the "right" way, but you're using GLUT, and GLUT sometimes sacrifices the "right" way for simplicity and maintaining platform independence. The GLUT way is to register a timer callback function with glutTimerFunc().
static void timerCallback (int value)
{
    // Calculate the deltas
    glutPostRedisplay();   // Have GLUT call your display function
    glutTimerFunc(elapsedMilliseconds, timerCallback, value);
}
If you set elapsedMilliseconds to 40, this function will be called slightly less than 25 times a second. That slightly less will depend upon how long the computer takes to process your delta calculation code. If you keep that code simple, your animation will run the same speed on all systems, as long as each system can process the display function in less than 40 milliseconds. For more flexibility, you can adjust the frame rate at runtime with a command line option or by adding a control to your interface.
You start the timer loop by calling glutTimerFunc(elapsedMilliseconds, timerCallback, value); in your initialization process.
I'm a games programmer and have done this many times.
Most games run the AI in fixed time increments, like 60 Hz for example. Also, most are synced to the monitor refresh to avoid screen tearing, so the max rate would be 60 even if the machine was really fast and could do 1000 fps. So if the machine was slow and was running at 20 fps, it would call the AI update function 3 times per render. Doing it this way solves rounding error problems with small values and also makes the AI deterministic across multiple machines, since the AI update rate is decoupled from the machine speed (necessary for online multiplayer games).
This is a very hard question.
The first thing you need to answer for yourself is: do you really want your application to run at the same speed, or just appear to run at the same speed? 99% of the time you only want it to appear to run at the same speed.
Now there are two problems: speeding up your application or slowing it down.
Speeding up your application is really hard, since that requires things like dynamic LOD that adjusts to the current speed. This means LOD in everything, not only graphics.
Slowing your application down is fairly easy. You have two options: sleeping or "busy waiting". It basically depends on the target frame time of your simulation. If your simulation step is well above something like 50 ms, you can sleep. The problem is that when sleeping you are dependent on the process scheduler, which on an average system works at a granularity of about 10 ms.
In games, busy waiting is not such a bad idea. What you do is update your simulation and render your frame, then you use a time accumulator for the next frame. When rendering frames without simulation steps, you interpolate the state to get a smooth animation. A really great article on the subject can be found at http://gafferongames.com/game-physics/fix-your-timestep/.
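A compact sketch of that accumulator pattern, in the spirit of the article; the 60 Hz step and the toy position/velocity state are assumptions for illustration, not something prescribed there:

#include <chrono>

const double SIM_DT = 1.0 / 60.0;   // fixed simulation step

double position = 0.0, previousPosition = 0.0, velocity = 50.0;   // toy state

void simulate(double dt)            // advance the simulation by exactly dt
{
    previousPosition = position;
    position += velocity * dt;
}

void render(double alpha)           // draw a state blended between the last two steps
{
    double drawPos = previousPosition + (position - previousPosition) * alpha;
    (void)drawPos;                  // real code would draw at drawPos
}

int main()
{
    using clock = std::chrono::steady_clock;
    auto previous = clock::now();
    double accumulator = 0.0;

    for (;;) {                      // the main loop, running as fast as it is allowed to
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        while (accumulator >= SIM_DT) {    // catch up with as many fixed steps as needed
            simulate(SIM_DT);
            accumulator -= SIM_DT;
        }
        render(accumulator / SIM_DT);      // interpolation factor between the two states
    }
}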