Why does this elapsed time (frame time) calculation lock up my game? - C++

Currently I am trying to implement a fixed-step game loop, but somehow my code seems to lock up my game.
Uint32 SDL_GetTicks(void): Returns an unsigned 32-bit value representing the number of milliseconds since the SDL library initialized.
This works:
1.cc)
Uint32 FPS = 60;
Uint32 MS_PER_SEC = 1000 / FPS;
Uint32 current_time, last_time, elapsed_time;
current_time = last_time = elapsed_time = 0;
while(Platform.Poll())
{
current_time = SDL_GetTicks(); // Get time of frame begin
// Clear Window
Renderer.Clear();
// Update Input
//...
// Draw
Renderer.Draw();
// Update Window
Renderer.Update();
last_time = SDL_GetTicks(); // Get time of frame end
elapsed_time = last_time - current_time; // calculate frametime
SDL_Delay(MS_PER_SEC - elapsed_time);
}
However this does not:
2.cc)
Uint32 FPS = 60;
Uint32 MS_PER_SEC = 1000 / FPS;
Uint32 current_time, last_time, elapsed_time;
current_time = elapsed_time = 0;
last_time = SDL_GetTicks();
// Poll for Input
while(Platform.Poll())
{
current_time = SDL_GetTicks();
elapsed_time = current_time - last_time;
// Clear Window
Renderer.Clear();
// Update Input
//...
// Draw
Renderer.Draw();
// Update Window
Renderer.Update();
last_time = current_time;
SDL_Delay(MS_PER_SEC - elapsed_time);
}
I expect the results of 1.cc and 2.cc to be the same, meaning that SDL_Delay(MS_PER_SEC - elapsed_time) delays by a fixed time minus the frame time (here 16 ms minus the frame time).
But 2.cc locks up my game.
Isn't the elapsed_time (frame time) calculation in 2.cc equivalent to the one in 1.cc?

Let's unroll the loop a little...
// First iteration
last_time = SDL_GetTicks();
current_time = SDL_GetTicks();
elapsed_time = current_time - last_time; // Probably zero
...
SDL_Delay(MS_PER_SEC - elapsed_time); // Delays by roughly the full MS_PER_SEC (about 16 ms here)
// Second iteration
current_time = SDL_GetTicks();
elapsed_time = current_time - last_time; // Probably slightly more than MS_PER_SEC, since last_time was taken before the previous frame's work and delay
...
SDL_Delay(MS_PER_SEC - elapsed_time); // Would be negative, but wraps to ~4 billion milliseconds
Because the operands are unsigned, MS_PER_SEC - elapsed_time wraps around instead of going negative, and SDL_Delay is asked to wait roughly 49 days, which looks like a lockup.
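A minimal sketch of one way to avoid that wrap-around, written against the variables in 2.cc (it only guards the subtraction, it does not restructure the loop):
// Only delay when the frame finished inside its budget, so the unsigned
// subtraction MS_PER_SEC - elapsed_time can never wrap around.
if (elapsed_time < MS_PER_SEC)
SDL_Delay(MS_PER_SEC - elapsed_time);
// else: the frame already took longer than MS_PER_SEC, so skip the delay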

That function is based on the RTC, has very low accuracy, and can be relatively expensive to call, depending on the platform. An accuracy of 10-30 milliseconds is an optimistic guess.
This might be related: SDL_GetTicks() accuracy below the millisecond level
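If finer granularity than SDL_GetTicks() is needed, SDL2 also exposes a high-resolution counter; a small sketch (assuming SDL2, with illustrative variable names):
Uint64 freq = SDL_GetPerformanceFrequency(); // counter ticks per second
Uint64 start = SDL_GetPerformanceCounter();
// ... do the frame's work ...
Uint64 end = SDL_GetPerformanceCounter();
double frame_ms = (double)(end - start) * 1000.0 / (double)freq;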

Related

Not sure my delta time function in C++ has the right logic

I've been working on making a basic game loop in C++ and I'm unsure my logic is completely right, as my delta time isn't what I expect. I expect to get 1/60 of a second per frame since my cap is 60 fps, but on average I get a delta time of ~0.03 seconds per frame, giving me about 33 fps. I know the program can run faster than this, because if I raise the frame cap the delta time gets smaller, though it is still inaccurate. Any help? (I've removed unimportant bits of code to focus on the logic.)
#include <chrono>
#include <thread>
using namespace std::chrono;
int main(void)
{
//Init time - start and end, with delta time being the time between each run of the loop
system_clock::time_point startTime = system_clock::now();
system_clock::time_point endTime = system_clock::now();
float deltaTime = 0.0f;
int deltaFrameCount = 0; // frame counter (its other uses were removed for brevity)
//*Game made here*
/* Loop until the user closes the window */
while (window is not closed)
{
//Takes time at start of loop
startTime = system_clock::now();
deltaFrameCount++;
//Handle game processes
//Get time at end of game processes
endTime = system_clock::now();
//Take time during work period
duration<double, std::milli> workTime = endTime - startTime;
//Check if program took less time to work than the cap
if (workTime.count() < (milliseconds per frame cap))
{
//Works out time to sleep for by casting to double
duration<double, std::milli> sleepDurationMS((milliseconds per frame cap) - workTime.count());
//Casts back to chrono type to get sleep time
auto sleepDuration = duration_cast<milliseconds>(sleepDurationMS);
//Sleeps this thread for calculated duration
std::this_thread::sleep_for(milliseconds(sleepDuration.count()));
}
//get time at end of all processes - time for one whole cycle
endTime = system_clock::now();
duration<double, std::milli> totalTime = endTime - startTime;
deltaTime = (totalTime / 1000.0f).count();
}
//cleans game
return 0;
}
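Not the asker's code, but for comparison, a minimal sketch of the same 60 fps cap built on std::chrono::steady_clock (a monotonic clock) and sleep_until, so rounding from the sleep call doesn't leak into the next frame's delta:
#include <chrono>
#include <iostream>
#include <thread>
int main()
{
using Clock = std::chrono::steady_clock;
const auto frameBudget = std::chrono::duration_cast<Clock::duration>(std::chrono::duration<double>(1.0 / 60.0)); // ~16.67 ms per frame
auto lastFrame = Clock::now();
auto nextFrame = lastFrame + frameBudget;
for (int frame = 0; frame < 60; ++frame) // stand-in for "while the window is open"
{
// ... handle game processes here ...
std::this_thread::sleep_until(nextFrame); // wait out the rest of the frame budget
nextFrame += frameBudget; // schedule relative to the previous deadline
auto now = Clock::now();
double deltaTime = std::chrono::duration<double>(now - lastFrame).count(); // seconds per frame
lastFrame = now;
std::cout << "deltaTime: " << deltaTime << " s\n";
}
return 0;
}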

Calculating glut framerate using clocks_per_sec much too slow

I'm trying to calculate the framerate of a GLUT window by calling a custom CalculateFrameRate method I made at the beginning of my Display() callback function. I call glutPostRedisplay() after calculations I perform every frame so Display() gets called for every frame.
I also have an int numFrames that increments every frame (every time glutPostRedisplay gets called) and I print that out as well. My CalculateFrameRate method calculates a rate of about 7 fps but if I look at a stopwatch and compare it to how quickly my numFrames incrementor increases, the framerate is easily 25-30 fps.
I can't seem to figure out why there is such a discrepancy. I've posted my CalculateFrameRate method below.
clock_t lastTime;
int numFrames;
//GLUT Setup callback
void Renderer::Setup()
{
numFrames = 0;
lastTime = clock();
}
//Called in Display() callback every time I call glutPostRedisplay()
void CalculateFrameRate()
{
clock_t currentTime = clock();
double diff = currentTime - lastTime;
double seconds = diff / CLOCKS_PER_SEC;
double frameRate = 1.0 / seconds;
std::cout<<"FRAMERATE: "<<frameRate<<endl;
numFrames ++;
std::cout<<"NUM FRAMES: "<<numFrames<<endl;
lastTime = currentTime;
}
The function clock (except on Windows) gives you the CPU time used, so if you are not spinning the CPU for the entire frame time, it will report a lower time than expected. Conversely, if you have 16 cores running 16 of your threads flat out, the time reported by clock will be 16 times the actual time.
You can use std::chrono::steady_clock, std::chrono::high_resolution_clock, or, if you are using Linux/Unix, gettimeofday (which gives you microsecond resolution).
Here are a couple of snippets showing how to use gettimeofday to measure milliseconds:
#include <sys/time.h>
// Converts a timeval to milliseconds
double time_to_double(timeval *t)
{
return (t->tv_sec + (t->tv_usec/1000000.0)) * 1000.0;
}
double time_diff(timeval *t1, timeval *t2)
{
return time_to_double(t2) - time_to_double(t1);
}
timeval t1, t2;
gettimeofday(&t1, NULL);
... do stuff ...
gettimeofday(&t2, NULL);
cout << "Time taken: " << time_diff(&t1, &t2) << "ms" << endl;
Here's a piece of code to show how to use std::chrono::high_resolution_clock:
auto start = std::chrono::high_resolution_clock::now();
... stuff goes here ...
auto diff = std::chrono::high_resolution_clock::now() - start;
auto t1 = std::chrono::duration_cast<std::chrono::nanoseconds>(diff);
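The answer mentions std::chrono::steady_clock but only shows high_resolution_clock; the steady_clock version of the same measurement is essentially a one-line change (a snippet in the same spirit, with the frame's work as a placeholder):
auto frameStart = std::chrono::steady_clock::now();
... render the frame ...
auto frameEnd = std::chrono::steady_clock::now();
double frame_ms = std::chrono::duration<double, std::milli>(frameEnd - frameStart).count();
std::cout << "Frame time: " << frame_ms << " ms (" << 1000.0 / frame_ms << " fps)" << std::endl;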

C++ - using glfwGetTime() for a fixed time step

After doing some research and debugging in my C++ project with glfwGetTime(), I'm having trouble making a game loop. As far as timing goes, I have really only worked with nanoseconds in Java, and the GLFW website states that this function returns the time in seconds. How would I make a fixed time step loop with glfwGetTime()?
What I have now -
while(!glfwWindowShouldClose(window))
{
double now = glfwGetTime();
double delta = now - lastTime;
lastTime = now;
accumulator += delta;
while(accumulator >= OPTIMAL_TIME) // OPTIMAL_TIME = 1.0 / 60.0
{
//tick
accumulator -= OPTIMAL_TIME;
}
}
All you need is this code to limit updates while keeping the rendering at the highest possible frame rate. The code is based on this tutorial, which explains it very well. All I did was implement the same principle with GLFW and C++.
static double limitFPS = 1.0 / 60.0;
double lastTime = glfwGetTime(), timer = lastTime;
double deltaTime = 0, nowTime = 0;
int frames = 0 , updates = 0;
// - While window is alive
while (!window.closed()) {
// - Measure time
nowTime = glfwGetTime();
deltaTime += (nowTime - lastTime) / limitFPS;
lastTime = nowTime;
// - Only update at 60 frames / s
while (deltaTime >= 1.0){
update(); // - Update function
updates++;
deltaTime--;
}
// - Render at maximum possible frames
render(); // - Render function
frames++;
// - Reset after one second
if (glfwGetTime() - timer > 1.0) {
timer ++;
std::cout << "FPS: " << frames << " Updates:" << updates << std::endl;
updates = 0, frames = 0;
}
}
You should have a function update() for updating game logic and a render() for rendering. Hope this helps.

Limiting Update Rate in C++. Why does this code update once a second not 60 times a second?

I am making a small game with C++ and OpenGL. update() is normally called once every pass through the main loop. I am trying to limit this to 60 times per second (I want the game to update at the same speed on different-speed computers).
The code included below runs a timer and should call update() once the timer is >= 0.0166666666666667 seconds (60 times per second). However, the statement if((seconds - lastTime) >= 0.0166666666666667) only seems to be tripped once per second. Does anyone know why?
Thanks in advance for your help.
//Global Timer variables
double secondsS;
double lastTime;
time_t timer;
struct tm y2k;
double seconds;
void init()
{
glClearColor(0,0,0,0.0); // Sets the clear colour to black.
// glClear(GL_COLOR_BUFFER_BIT) in the display function
//Init viewport
viewportX = 0;
viewportY = 0;
initShips();
//Time
lastTime = 0;
time_t timerS;
struct tm y2k;
y2k.tm_hour = 0; y2k.tm_min = 0; y2k.tm_sec = 0;
y2k.tm_year = 100; y2k.tm_mon = 0; y2k.tm_mday = 1;
time(&timerS); /* get current time; same as: timer = time(NULL) */
secondsS = difftime(timerS,mktime(&y2k));
printf ("%.f seconds since January 1, 2000 in the current timezone \n", secondsS);
loadTextures();
ShowCursor(true);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
}
void timeKeeper()
{
y2k.tm_hour = 0; y2k.tm_min = 0; y2k.tm_sec = 0;
y2k.tm_year = 100; y2k.tm_mon = 0; y2k.tm_mday = 1;
time(&timer); /* get current time; same as: timer = time(NULL) */
seconds = difftime(timer,mktime(&y2k));
seconds -= secondsS;
//Run 60 times a second. This limits updates to a constant standard.
if((seconds - lastTime) >= 0.0166666666666667)
{
lastTime = seconds;
update();
//printf ("%.f seconds since beginning program \n", seconds);
}
}
timeKeeper() is called from int WINAPI WinMain, in a loop that runs while the program is !done.
EDIT:
Thanks to those who helped; you pointed me in the right direction. As mentioned in the answer below, <ctime> does not have millisecond accuracy. I have therefore implemented the following code, which has the correct accuracy:
double GetSystemTimeSample()
{
FILETIME ft1, ft2;
// assume little endian and that ULONGLONG has same alignment as FILETIME
ULONGLONG &t1 = *reinterpret_cast<ULONGLONG*>(&ft1),
&t2 = *reinterpret_cast<ULONGLONG*>(&ft2);
GetSystemTimeAsFileTime(&ft1);
do
{
GetSystemTimeAsFileTime(&ft2);
} while (t1 == t2);
return (t2 - t1) / 10000.0;
}//GetSystemTimeSample
void timeKeeper()
{
thisTime += GetSystemTimeSample();
cout << thisTime << endl;
//Run 60 times a second. This limits updates to a constant standard.
if(thisTime >= 16.666666666666699825) //Compare to a value in milliseconds
{
thisTime = 0; // reset the accumulated milliseconds for the next update
update();
}
}
http://www.cplusplus.com/reference/ctime/difftime/
Calculates the difference in seconds between beginning and end
So you get a value in whole seconds: even though difftime returns a double, time_t itself only has one-second resolution, so the result only changes in steps of one second.
That is why the difference between one reading and the previous one only becomes non-zero once they are at least one second apart, and update() fires roughly once per second.
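A portable alternative to the Windows-specific EDIT above is std::chrono::steady_clock, which has well below millisecond resolution on common platforms; a sketch of timeKeeper() along those lines (update() is assumed to be declared elsewhere, as in the question):
#include <chrono>
void update(); // defined elsewhere in the game
void timeKeeper()
{
using Clock = std::chrono::steady_clock;
static Clock::time_point lastUpdate = Clock::now();
const auto updatePeriod = std::chrono::duration_cast<Clock::duration>(std::chrono::duration<double>(1.0 / 60.0)); // 60 updates per second
Clock::time_point now = Clock::now();
if (now - lastUpdate >= updatePeriod)
{
lastUpdate += updatePeriod; // advance by a fixed step so the cadence stays at 60 Hz
update();
}
}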

Design fps limiter

I'm trying to cap the animation at 30 fps, so I designed the functions below to achieve that goal. Unfortunately, when I set the limit to 60 fps the animation doesn't run as fast as it does with no condition check in setFPSLimit() at all (DirectX caps the game application at 60 fps by default). How should I fix it to make it work?
The getGameTime() function counts elapsed time like a stopwatch from the moment the game application starts (the division by ticksPerSecond makes it seconds).
//Called every time you need the current game time
float getGameTime()
{
UINT64 ticks;
float time;
// This is the number of clock ticks since start
if( !QueryPerformanceCounter((LARGE_INTEGER *)&ticks) )
ticks = (UINT64)timeGetTime();
// Divide by frequency to get the time in seconds
time = (float)(__int64)ticks/(float)(__int64)ticksPerSecond;
// Subtract the time at game start to get
// the time since the game started
time -= timeAtGameStart;
return time;
}
With fps limit
http://www.youtube.com/watch?v=i3VDOMqI6ic
void update()
{
if ( setFPSLimit(60) )
updateAnimation();
}
With No fps limit http://www.youtube.com/watch?v=Rg_iKk78ews
void update()
{
updateAnimation();
}
bool setFPSLimit(float fpsLimit)
{
// Convert fps to time
static float timeDelay = 1 / fpsLimit;
// Measure time elapsed
static float timeElapsed = 0;
float currentTime = getGameTime();
static float totalTimeDelay = timeDelay + getGameTime();
if( currentTime > totalTimeDelay)
{
totalTimeDelay = timeDelay + getGameTime();
return true;
}
else
return false;
}