Issue controlling game frame rate in C++

I'm programming a game in C++ and I'm having trouble creating a way so that the game only updates 60 times per second. The code I have written looks like it should work but the frame rate actually ends up as 44 frames per second instead of 60.
const int FRAMES_PER_SECOND = 60;
const int FRAME_CONTROL = (1000 / FRAMES_PER_SECOND);

double lastFrameTime;
double currentFrameTime;

void GameLoop()
{
    currentFrameTime = GetTickCount();

    if ((currentFrameTime - lastFrameTime) >= FRAME_CONTROL)
    {
        lastFrameTime = currentFrameTime;
        // Update Game.
    }
}
So yeah, it should be 60 frames but it actually runs at 44. And the class I'm using to count the frame rate works perfectly in other programs which already have a capped frame rate.
Any ideas what the problem is?

It's due to the resolution of GetTickCount(). That function only has 10-16 ms resolution; see Microsoft's GetTickCount() documentation. (Note also that 1000 / 60 truncates to 16 ms under integer division, which by itself would cap you at 62.5 fps.)
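A common fix is to switch to a finer-grained clock. As a sketch (not the asker's code), std::chrono::steady_clock typically resolves to a microsecond or better, and keeping the period fractional avoids the integer truncation in 1000 / 60:

```cpp
#include <chrono>

// Hypothetical drop-in for GetTickCount(): milliseconds as a double,
// read from steady_clock, which is far finer-grained than 10-16 ms.
double NowMs()
{
    using namespace std::chrono;
    return duration<double, std::milli>(
        steady_clock::now().time_since_epoch()).count();
}

// Same gate as the question's GameLoop, but with a fractional period.
// Returns true when at least one 1000/60 ms frame period has elapsed.
bool FrameDue(double& lastFrameTime)
{
    const double FRAME_CONTROL = 1000.0 / 60.0; // ~16.67 ms, not truncated to 16
    double now = NowMs();
    if (now - lastFrameTime >= FRAME_CONTROL)
    {
        lastFrameTime = now;
        return true;
    }
    return false;
}
```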

Related

Measured fps is higher than theoretical one

I'm measuring execution time for several functions in an image processing program in C++. In particular, I want to have the actual execution time for capturing a frame with my USB camera.
The problem is that the results don't seem consistent with the camera parameters: the camera is supposed to run at 30 fps at most, yet I often measure less than the 33 ms per frame I'd expect. For instance, I get a lot of 12 ms intervals, and that really seems too little.
Here is the code:
#include <time.h>
#include <sys/time.h>

double get_wall_time(){
    struct timeval time;
    if (gettimeofday(&time,NULL)){
        // Handle error
        return 0;
    }
    return (double)time.tv_sec + (double)time.tv_usec * .000001;
}

int main(){
    while (true) {
        double previoustime = get_wall_time();
        this->camera.readFrame();
        double currenttime = get_wall_time();
        std::cout << currenttime-previoustime << std::endl;
        // Other stuff
        // ...
        // ...
        usleep(3000);
    }
}
As @Revolver_Ocelot remarked, you are measuring the time spent from the end of one get_wall_time call to the end of another similar call. To fix your code, do this:
double currenttime = get_wall_time();
while (true) {
    double previoustime = currenttime;
    this->camera.readFrame();
    ...
    currenttime = get_wall_time();
}
Can you spot the difference? This code measures the interval between each pass, which is what you want to get frames per second.
The speed at which you can read your camera will not be the same as the rate at which it will complete a new frame. Your camera could be recording at 30 FPS and you could be reading it at 15 FPS or 90 FPS, thus subsampling or oversampling the frame stream.
The limit on oversampling is 1 / (the time it takes to read an image and store it).
That's what @Jacob Hull meant by blocking; if readFrame just returns the last frame, it isn't blocking until a new frame arrives, and you will get results like the ones you measured.
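To measure the effective frame rate rather than just the cost of the read, time whole loop iterations. A minimal sketch (grabFrame is a stand-in for this->camera.readFrame(), which isn't available here):

```cpp
#include <chrono>
#include <vector>

// Measures whole loop iterations (read + processing + sleep), which is
// what determines the effective frame rate, rather than just the read call.
template <typename Grab>
std::vector<double> MeasureIterationsMs(Grab grabFrame, int iterations)
{
    using clock = std::chrono::steady_clock;
    std::vector<double> intervals;
    auto previous = clock::now();
    for (int i = 0; i < iterations; ++i)
    {
        grabFrame();
        // ... other per-frame work would go here ...
        auto now = clock::now();
        intervals.push_back(
            std::chrono::duration<double, std::milli>(now - previous).count());
        previous = now; // the next interval starts where this one ended
    }
    return intervals;
}
```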

Limiting FPS in C++

I'm currently making a game in which I would like to limit the frames per second but I'm having problems with that. Here's what I'm doing:
I'm getting the deltaTime through this method that is executed each frame:
void Time::calc_deltaTime() {
    double currentFrame = glfwGetTime();
    deltaTime = currentFrame - lastFrame;
    lastFrame = currentFrame;
}
deltaTime has the value I would expect (around 0.012 to 0.016).
And then I'm using deltaTime to delay the frame through the Windows Sleep function, like this:
void Time::limitToMAXFPS() {
    if(1.0 / MAXFPS > deltaTime)
        Sleep((1.0 / MAXFPS - deltaTime) * 1000.0);
}
MAXFPS is equal to 60, and I'm multiplying by 1000 to convert seconds to milliseconds. Though everything seems correct, I'm still getting more than 60 fps (around 72 fps).
I also tried this method using while loop:
void Time::limitToMAXFPS() {
    double diff = 1.0 / MAXFPS - deltaTime;
    if(diff > 0) {
        double t = glfwGetTime( );
        while(glfwGetTime( ) - t < diff) { }
    }
}
But I'm still getting more than 60 fps, around 72... Am I doing something wrong, or is there a better way of doing this?
How important is it that you return cycles back to the CPU? To me, it seems like a bad idea to use sleep at all. Someone please correct me if I am wrong, but I think sleep functions should be avoided.
Why not simply use an infinite loop whose body executes once more than a certain time interval has passed? Try:
const double maxFPS = 60.0;
const double maxPeriod = 1.0 / maxFPS; // approx ~16.666 ms

bool running = true;
double lastTime = 0.0;

while( running ) {
    double time = glfwGetTime();
    double deltaTime = time - lastTime;

    if( deltaTime >= maxPeriod ) {
        lastTime = time;
        // code here gets called with max FPS
    }
}
The last time I used GLFW, it seemed to self-limit to 60 fps anyway. If you are doing anything high-performance oriented (a game or 3D graphics), avoid anything that sleeps, unless you want to use multithreading.
Sleep can be very inaccurate. A common phenomenon seen is that the actual time slept has a resolution of 14-15 milliseconds, which gives you a frame rate of ~70.
Is Sleep() inaccurate?
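One common workaround for coarse Sleep granularity, sketched here rather than taken from any answer above, is to sleep for slightly less than the remaining time and busy-wait the last couple of milliseconds, trading a little CPU for precision:

```cpp
#include <chrono>
#include <thread>

// Sleep coarsely until just before the deadline, then spin for precision.
// spinMargin is how much of the wait we are willing to burn busy-waiting.
void WaitUntil(std::chrono::steady_clock::time_point deadline,
               std::chrono::steady_clock::duration spinMargin =
                   std::chrono::milliseconds(2))
{
    using clock = std::chrono::steady_clock;
    if (clock::now() + spinMargin < deadline)
        std::this_thread::sleep_until(deadline - spinMargin); // coarse, cheap
    while (clock::now() < deadline) {}                        // precise, hot
}
```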
I've given up on trying to limit the fps like this... As you said, Windows is very inconsistent with Sleep. My average fps is always 64, not 60. The problem is that Sleep takes an integer (or long integer) argument, so I was casting with static_cast, but what I need to pass is a double. Sleeping 16 milliseconds each frame is different from 16.666... That's probably the cause of the extra 4 fps (I think).
I also tried :
std::this_thread::sleep_for(std::chrono::milliseconds(static_cast<long>((1.0 / MAXFPS - deltaTime) * 1000.0)));
and the same thing happens with sleep_for. Then I tried passing the decimal remainder of the milliseconds on to chrono::microseconds and chrono::nanoseconds, using the three of them together for better precision, but guess what: I still get the freaking 64 fps.
Another weird thing: in the expression (1.0 / MAXFPS - deltaTime) * 1000.0, sometimes (yes, this is completely random) when I change 1000.0 to a const integer, making it (1.0 / MAXFPS - deltaTime) * 1000, my fps simply jumps to 74 for some reason, even though the two expressions should be completely equivalent. Both are double expressions, so I don't think any type promotion is happening here.
So I decided to force V-sync through wglSwapIntervalEXT(1); in order to avoid screen tearing, and then multiply deltaTime into every value that might vary depending on the speed of the computer executing my game. It's going to be a pain, because I might forget to multiply some value and not notice it on my own computer, creating inconsistency, but I see no other way... Thank you all for the help though.
I've recently started using glfw for a small side project I'm working on, and I've used std::chrono alongside std::this_thread::sleep_until to achieve 60 fps:
auto start = std::chrono::steady_clock::now();

while(!glfwWindowShouldClose(window))
{
    ++frames;
    auto now = std::chrono::steady_clock::now();
    auto diff = now - start;
    auto end = now + std::chrono::milliseconds(16);

    if(diff >= std::chrono::seconds(1))
    {
        start = now;
        std::cout << "FPS: " << frames << std::endl;
        frames = 0;
    }

    glfwPollEvents();
    processTransition(countit);
    render.TickTok();
    render.RenderBackground();
    render.RenderCovers(countit);

    std::this_thread::sleep_until(end);
    glfwSwapBuffers(window);
}
To add to that, you can easily adjust the FPS target by adjusting end.
With that said, I know glfw limits itself to 60 fps, but I had to disable that limit with glfwSwapInterval(0); just before the while loop.
Are you sure your Sleep function accepts floating point values? If it only accepts an int, your call becomes Sleep(0), which would explain your issue.

Time Step in PhysX

I'm trying to define a time step for the physics simulation in a PhysX application, such that the physics will run at the same speed on all machines. I wish for the physics to update at 60FPS, so each update should have a delta time of 1/60th of a second.
My application must use GLUT. Currently, my loop is set up as follows.
Idle Function:
void GLUTGame::Idle()
{
    newElapsedTime = glutGet(GLUT_ELAPSED_TIME);
    deltaTime = newElapsedTime - lastElapsedTime;
    lastElapsedTime = newElapsedTime;
    glutPostRedisplay();
}
The frame rate does not really matter in this case - it's only the speed at which my physics update that actually matters.
My render function contains the following:
void GLUTGame::Render()
{
    // Rendering Code
    simTimer += deltaTime;

    if (simTimer > m_fps)
    {
        m_scene->UpdatePhys(m_fps);
        simTimer = 0;
    }
}
Where:
Fl32 m_fps = 1.f / 60.f;
However, this results in some very slow updates, due to the fact that deltaTime appears to equal 0 on most loops (which shouldn't actually be possible...) I've tried moving my deltaTime calculations to the bottom of my rendering function, as I thought that maybe the idle callback was called too often, but this did not solve the issue. Any ideas what I'm doing wrong here?
From the OpenGL website, we find that glutGet(GLUT_ELAPSED_TIME) returns the number of passed milliseconds as an int. So, if you call your void GLUTGame::Idle() method about 2000 times per second, the time passed between two such calls is about 1000 * 1/2000 = 0.5 ms. At more than 2000 calls per second, consecutive calls therefore often see the same integer millisecond value, and your computed delta is 0 due to integer truncation.
Likely you're adding very small numbers to larger ones and you get rounding errors.
Try this:
void GLUTGame::Idle()
{
    newElapsedTime = glutGet(GLUT_ELAPSED_TIME);
    timeDelta = newElapsedTime - lastElapsedTime;
    if (timeDelta < m_fps) return;
    lastElapsedTime = newElapsedTime;
    glutPostRedisplay();
}
You can do something similar in the other method if you want to.
I don't know anything about GLUT or PhysX, but here's how to have something execute at the same rate (using integers) no matter how fast the game runs:
if (currentTime - lastUpdateTime > msPerUpdate)
{
    DWORD msPassed = currentTime - lastUpdateTime;
    int updatesPassed = msPassed / msPerUpdate;

    for (int i = 0; i < updatesPassed; i++)
        UpdatePhysX(); //or whatever function you use

    lastUpdateTime = currentTime - msPassed + (updatesPassed * msPerUpdate);
}
Where currentTime is updated with timeGetTime() every run through the game loop, lastUpdateTime is the last time PhysX updated, and msPerUpdate is the number of milliseconds you assign to each update: 16 or 17 ms for 60 fps.
If you want to support floating-point update factors (which is recommended for a physics application), then define float timeMultiplier and update it every frame like so: timeMultiplier = (float)frameRate / desiredFrameRate; where frameRate is self-explanatory and desiredFrameRate is 60.0f if you want the physics updating at 60 fps. To do this, change UpdatePhysX to take a float parameter that it multiplies all update factors by.
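The integer version above generalizes to the usual fixed-timestep accumulator. Here is a sketch under the assumption of a fixed 1/60 s step; stepPhysics is a stand-in for UpdatePhysX, not code from the answer:

```cpp
// Fixed-timestep accumulator: carry leftover frame time forward and run
// as many fixed 1/60 s physics steps as fit, so simulation speed is
// identical on every machine regardless of the render frame rate.
template <typename StepFn>
int RunAccumulated(double& accumulatorSec, double frameSec, StepFn stepPhysics)
{
    const double dt = 1.0 / 60.0;
    accumulatorSec += frameSec;
    int steps = 0;
    while (accumulatorSec >= dt)
    {
        stepPhysics(dt);        // always advance by exactly 1/60 s
        accumulatorSec -= dt;   // keep the remainder for the next frame
        ++steps;
    }
    return steps;
}
```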

FPS stutter - game running at double FPS specified?

I have managed to create a system that allows game objects to move according to the change in time (rather than the change in number of frames). I am now, for practise's sake, trying to impose a FPS limiter.
The problem I'm having is that double the amount of game loops are occurring than what I expect. For example, if I try to limit the number of frames to 1 FPS, 2 frames will pass in that one second - one very long frame (~1 second) and 1 extremely short frame (~15 milliseconds). This can be shown by outputting the difference in time (in milliseconds) between each game loop - an example output might be 993, 17, 993, 16...
I have tried changing the code such that delta time is calculated before the FPS is limited, but to no avail.
Timer class:
#include "Timer.h"
#include <SDL.h>

Timer::Timer(void)
{
    _currentTicks = 0;
    _previousTicks = 0;
    _deltaTicks = 0;
    _numberOfLoops = 0;
}

void Timer::LoopStart()
{
    //Store the previous and current number of ticks and calculate the
    //difference. If this is the first loop, then no time has passed (thus
    //previous ticks == current ticks).
    if (_numberOfLoops == 0)
        _previousTicks = SDL_GetTicks();
    else
        _previousTicks = _currentTicks;

    _currentTicks = SDL_GetTicks();

    //Calculate the difference in time.
    _deltaTicks = _currentTicks - _previousTicks;

    //Increment the number of loops.
    _numberOfLoops++;
}

void Timer::LimitFPS(int targetFps)
{
    //If the framerate is too high, compute the amount of delay needed and
    //delay.
    if (_deltaTicks < (unsigned)(1000 / targetFps))
        SDL_Delay((unsigned)(1000 / targetFps) - _deltaTicks);
}
(Part of the) game loop:
//The timer object.
Timer* _timer = new Timer();

//Start the game loop and continue.
while (true)
{
    //Restart the timer.
    _timer->LoopStart();

    //Game logic omitted

    //Limit the frames per second.
    _timer->LimitFPS(1);
}
Note: I am calculating the FPS using a SMA; this code has been omitted.
The problem is part of how your timer structure is set up.

    _deltaTicks = _currentTicks - _previousTicks;

I looked through the code a few times, and based on how you have the timer set up, this seems to be the problem line.
Walking through it: at the start of your code, _currentTicks will equal 0 and _previousTicks will equal 0, so for the first loop your _deltaTicks will be 0. This means that regardless of your game logic, LimitFPS will misbehave.
void Timer::LimitFPS(int targetFps)
{
    if (_deltaTicks < (unsigned)(1000 / targetFps))
        SDL_Delay((unsigned)(1000 / targetFps) - _deltaTicks);
}
We just calculated that _deltaTicks was 0 at the start of the loop, thus we wait the full 1000ms regardless of what happens in your game logic.
On the next frame we have this, _previousTicks will equal 0 and _currentTicks will now equal approximately 1000ms. This means the _deltaTicks will be equal to 1000.
Again we hit the LimitFPS function but this time with a _deltaTicks of 1000 which makes it so we wait for 0ms.
This behavior will constantly flip like this.
wait 1000ms
wait 0ms
wait 1000ms
wait 0ms
...
Those 0 ms waits essentially double your FPS.
You need to test for time at the start and end of your game logic.
//The timer object.
Timer* _timer = new Timer();

//Start the game loop and continue.
while (true)
{
    //Restart the timer.
    _timer->LoopStart();

    //Game logic omitted

    //Obtain the end time.
    _timer->LoopEnd();

    //Now you can calculate the delta based on your start and end times.
    _timer->CalculateDelta();

    //Limit the frames per second.
    _timer->LimitFPS(1);
}
Hope that helps.
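A sketch of what that LoopStart / LoopEnd / CalculateDelta split might look like, with the tick source injected so it runs without SDL. This is an illustration, not the answerer's exact class: in real code GetTicks would be SDL_GetTicks(), and DelayNeeded's result would be handed to SDL_Delay.

```cpp
#include <cstdint>

// Timer that measures only the game-logic portion of each loop, so the
// delay computed for frame N is not polluted by frame N-1's delay.
class Timer
{
public:
    explicit Timer(std::uint32_t (*getTicks)()) : _getTicks(getTicks) {}

    void LoopStart() { _startTicks = _getTicks(); }
    void LoopEnd()   { _endTicks = _getTicks(); }

    // Delta now covers only the game logic, not the previous frame's delay.
    void CalculateDelta() { _deltaTicks = _endTicks - _startTicks; }

    // Milliseconds still left in this frame's budget; pass to SDL_Delay.
    std::uint32_t DelayNeeded(int targetFps) const
    {
        std::uint32_t target = 1000u / targetFps;
        return (_deltaTicks < target) ? target - _deltaTicks : 0u;
    }

private:
    std::uint32_t (*_getTicks)();
    std::uint32_t _startTicks = 0, _endTicks = 0, _deltaTicks = 0;
};
```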
Assuming your game loop is triggered only when SDL_Delay() expires, wouldn't it be enough to write:
void Timer::LimitFPS(int targetFps)
{
    SDL_Delay((unsigned)(1000 / targetFps));
}
to delay your gameloop by the right amount of milliseconds depending on the requested target FPS?

C++: Calculating Moving FPS

I would like to calculate the FPS of the last 2-4 seconds of a game. What would be the best way to do this?
Thanks.
Edit: To be more specific, I only have access to a timer with one second increments.
Near miss of a very recent posting. See my response there on using exponential weighted moving averages.
C++: Counting total frames in a game
Here's sample code.
Initially:

    avgFps = 1.0; // Initial value should be an estimate, but doesn't matter much.

Every second (assuming the total number of frames in the last second is in framesThisSecond):

    // Choose alpha depending on how fast or slow you want old averages to decay.
    // 0.9 is usually a good choice.
    avgFps = alpha * avgFps + (1.0 - alpha) * framesThisSecond;
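As a compilable version of the snippet above (the same formula, just wrapped in a function with my own name for it):

```cpp
// Exponential weighted moving average of frames per second.
// alpha close to 1.0 makes old averages decay slowly; 0.9 is a good default.
double UpdateAvgFps(double avgFps, int framesThisSecond, double alpha = 0.9)
{
    return alpha * avgFps + (1.0 - alpha) * framesThisSecond;
}
```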
Here's a solution that might work for you. I'll write this in pseudo/C, but you can adapt the idea to your game engine.
const int trackedTime = 3000; // 3 seconds

int frameStartTime; // in milliseconds
int queueAggregate = 0;
queue<int> frameLengths;

void onFrameStart()
{
    frameStartTime = getCurrentTime();
}

void onFrameEnd()
{
    int frameLength = getCurrentTime() - frameStartTime;
    frameLengths.enqueue(frameLength);
    queueAggregate += frameLength;

    while (queueAggregate > trackedTime)
    {
        int oldFrame = frameLengths.dequeue();
        queueAggregate -= oldFrame;
    }

    setAverageFps(frameLengths.count() / 3); // 3 seconds
}
You could keep a circular buffer of the frame times for the last 100 frames and average them? That'll be "FPS for the last 100 frames". (Or, rather, 99 frames' worth, since 100 timestamps only give you 99 intervals between the newest time and the oldest.)
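A sketch of that circular-buffer idea (the names are mine, not from the comment): store the last N frame timestamps; the FPS over the window is the number of intervals divided by the time they span.

```cpp
#include <array>
#include <cstddef>

// Ring buffer of frame timestamps (in seconds). Fps() averages over the
// stored window: (samples - 1) intervals divided by newest - oldest.
template <std::size_t N = 100>
class FrameTimeRing
{
public:
    void Push(double timeSec)
    {
        _times[_next] = timeSec;
        _next = (_next + 1) % N;
        if (_count < N) ++_count;
    }

    // FPS over the stored window; needs at least two samples.
    double Fps() const
    {
        if (_count < 2) return 0.0;
        std::size_t newest = (_next + N - 1) % N;
        std::size_t oldest = (_count < N) ? 0 : _next; // _next holds the oldest once full
        double span = _times[newest] - _times[oldest];
        return span > 0.0 ? (_count - 1) / span : 0.0;
    }

private:
    std::array<double, N> _times{};
    std::size_t _next = 0, _count = 0;
};
```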
Call some accurate system time, milliseconds or better.
What you actually want is something like this (in your main loop):

    frames++;
    if(time < secondsTimer()){
        time = secondsTimer();
        printf("Average FPS from the last 2 seconds: %d", (frames + lastFrames) / 2);
        lastFrames = frames;
        frames = 0;
    }
If you know how to deal with structures/arrays, it should be easy to extend this example to, say, 4 seconds instead of 2. But if you want more detailed help, you should really mention WHY you don't have access to a precise timer (which architecture, which language); otherwise everything is just guessing...