How to make a timer for a game loop? - c++

I want to count FPS and set its limit to 60. I've been looking through some code via Google, but I completely don't get it.

If you want 60 FPS, you need to figure out how much time you have on each frame. In this case, 16.67 milliseconds. So you want a loop that completes every 16.67 milliseconds.
Usually it goes (simply put): Get input, do physics stuff, render, pause until 16.67ms has passed.
It's usually done by saving the time at the top of the loop, then calculating the difference at the end and sleeping (or looping doing nothing) for that duration.
This article describes a few different ways of doing game loops, including the one you want, although I'd use one of the more advanced alternatives it describes.
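A minimal sketch of the basic pattern described above, in portable C++ (the game-work calls are placeholder comments, not part of any real API):

#include <chrono>
#include <thread>

int main() {
    using namespace std::chrono;
    const auto frameBudget = microseconds(16667); // ~1/60 of a second
    while (true) {
        auto frameStart = steady_clock::now();
        // getInput(); stepPhysics(); render();   // game work goes here
        // Sleep away whatever is left of this frame's time budget.
        std::this_thread::sleep_until(frameStart + frameBudget);
    }
}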

Delta time is the final time minus the original time:

dt = t - t0
This delta time, though, is simply the amount of time that passes while the velocity is changing.
The derivative of a function represents an infinitesimal change in the function with respect to one of its variables. The derivative of a function with respect to the variable is defined as

            f(x + h) - f(x)
f'(x) = lim ---------------
        h->0        h
http://mathworld.wolfram.com/Derivative.html
#include <time.h>
#include <stdlib.h>
#include <stdio.h>
#include <windows.h>
#pragma comment(lib, "winmm.lib")

void gotoxy(int x, int y);
void StepSimulation(float dt);

int main(){
    int NewTime = 0;
    int OldTime = 0;
    float dt = 0;
    float TotalTime = 0;
    int FrameCounter = 0;
    int RENDER_FRAME_COUNT = 60;

    while(true){
        NewTime = timeGetTime();
        dt = (float)(NewTime - OldTime) / 1000; // delta time in seconds
        OldTime = NewTime;
        if (dt > 0.016f) dt = 0.016f; // clamp the timestep
        if (dt < 0.001f) dt = 0.001f;
        TotalTime += dt;
        if(TotalTime > 1.1f){
            TotalTime = 0;
            StepSimulation(dt);
        }
        if(FrameCounter >= RENDER_FRAME_COUNT){
            // draw stuff
            //Render();
            gotoxy(1,2);
            printf(" \n");
            printf("OldTime      = %d \n", OldTime);
            printf("NewTime      = %d \n", NewTime);
            printf("dt           = %f \n", dt);
            printf("TotalTime    = %f \n", TotalTime);
            printf("FrameCounter = %d fps\n", FrameCounter);
            printf(" \n");
            FrameCounter = 0;
        }
        else{
            gotoxy(22,7);
            printf("%d ", FrameCounter);
            FrameCounter++;
        }
    }
    return 0;
}

void gotoxy(int x, int y){
    COORD coord;
    coord.X = x; coord.Y = y;
    SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), coord);
}

void StepSimulation(float dt){
    // calculate stuff
    //vVelocity += Ae * dt;
}

You shouldn't try to limit the fps. The only reason to do so is if you are not using delta time and you expect each frame to be the same length. Even the simplest game cannot guarantee that.
You can, however, take your delta time, slice it into fixed-size steps, and hold onto the remainder.
Here's some code I wrote recently. It's not thoroughly tested.
void GameLoop::Run()
{
    m_Timer.Reset();
    while(!m_Finished())
    {
        Time delta = m_Timer.GetDelta();
        Time frameTime(0);
        unsigned int loopCount = 0;
        while (delta > m_TickTime && loopCount < m_MaxLoops)
        {
            m_SingTick();
            delta -= m_TickTime;
            frameTime += m_TickTime;
            ++loopCount;
        }
        m_Independent(frameTime);
        // Add an exception flag later.
        // This is for when the game hangs.
        if(loopCount >= m_MaxLoops)
        {
            delta %= m_TickTime;
        }
        m_Render(delta);
        m_Timer.Unused(delta);
    }
}
The member objects are Boost slots, so different code can register with different timing methods. The Independent slot is for things like key mapping or changing music: things that don't need to be so precise. SingTick is good for physics, where it is easier if you know every tick will be the same length, but you don't want to run through a wall. Render takes the delta so animations run smoothly, but it must remember to account for that delta on the next SingTick.
Hope that helps.
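The answer doesn't show the Timer and Time types it uses; below is a minimal sketch of one possible Timer supporting the operations the loop calls, assuming Time is an integral std::chrono duration (so that delta %= m_TickTime works):

#include <chrono>

using Time = std::chrono::microseconds;

class Timer {
public:
    void Reset() { m_Last = Clock::now(); m_Carry = Time(0); }
    // Time elapsed since the last GetDelta(), plus any leftover handed back via Unused().
    Time GetDelta() {
        auto now = Clock::now();
        Time delta = std::chrono::duration_cast<Time>(now - m_Last) + m_Carry;
        m_Last = now;
        m_Carry = Time(0);
        return delta;
    }
    // The loop hands back the unsimulated remainder so it counts toward the next frame.
    void Unused(Time leftover) { m_Carry = leftover; }
private:
    using Clock = std::chrono::steady_clock;
    Clock::time_point m_Last = Clock::now();
    Time m_Carry = Time(0);
};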

There are many good reasons why you should not limit your frame rate in such a way. One reason, as stijn pointed out, is that not every monitor runs at exactly 60fps. Another is that the resolution of timers is not sufficient. Yet another is that even given sufficient resolution, two separate timers (the monitor refresh and yours) running in parallel will always drift out of sync over time (they must!) due to random inaccuracies. And the most important reason is that it is not necessary at all.
Note that the default timer resolution under Windows is 15ms, and the best possible resolution you can get (by using timeBeginPeriod) is 1ms. Thus, you can (at best) wait 16ms or 17ms. But one frame at 60fps is 16.6666ms. How do you wait 16.6666ms?
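For illustration, timeBeginPeriod is used roughly like this (a sketch; every timeBeginPeriod call must be paired with a matching timeEndPeriod):

#include <windows.h>
#pragma comment(lib, "winmm.lib")

int main() {
    timeBeginPeriod(1); // request 1 ms scheduler/timer resolution
    Sleep(16);          // now sleeps ~16 ms instead of being rounded up to ~15-30 ms
    timeEndPeriod(1);   // restore the previous resolution
    return 0;
}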
If you want to limit your game's speed to the monitor's refresh rate, enable vertical sync. This will do what you want, precisely, and without sync issues. Vertical sync has its peculiarities too (such as the funny surprise you get when a frame takes just over 16.67ms), but it is by far the best available solution.
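How you enable vertical sync depends on your windowing/graphics stack; as one example (assuming SDL2 with an OpenGL context, which is not something from the original answer):

#include <SDL.h> // assumes SDL2 and an already-created OpenGL context

void enableVSync() {
    // Request vsync; returns 0 on success, -1 if the driver refuses.
    if (SDL_GL_SetSwapInterval(1) != 0) {
        // Fall back to a manual frame limiter here.
    }
}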
If the reason why you wanted to do this was to fit your simulation into the render loop, this is a must read for you.

check this one out:
//Creating a digital watch in C++
#include <iostream>
#include <Windows.h>
using namespace std;

struct time{
    int hr, min, sec;
};

int main()
{
    time a;
    a.hr = 0;
    a.min = 0;
    a.sec = 0;
    for(int i = 0; i < 24; i++)
    {
        for(int j = 0; j < 60; j++)
        {
            for(int k = 0; k < 60; k++)
            {
                cout << a.hr << " : " << a.min << " : " << a.sec << endl;
                Sleep(1000);
                system("cls");
                a.sec++;
            }
            a.sec = 0; // wrap seconds after 59, not before printing
            a.min++;
        }
        a.min = 0; // wrap minutes after 59
        a.hr++;
    }
}

Related

time based movement sliding object

At the moment I have a function that moves my object based on FPS; if enough frames have not passed, it won't do anything.
It works fine if the computer can run it at that speed.
How would I use time-based movement instead, moving the object based on elapsed time?
Here is my code:
typedef unsigned __int64 u64;

auto toolbarGL::Slide() -> void
{
    LARGE_INTEGER li = {};
    QueryPerformanceFrequency(&li);
    u64 freq = static_cast<u64>(li.QuadPart); // clock ticks per second
    u64 period = 60;                          // fps
    u64 delay = freq / period;                // clock ticks between frame paints
    u64 start = 0, now = 0;
    QueryPerformanceCounter(&li);
    start = static_cast<u64>(li.QuadPart);
    while (true)
    {
        // Waits to be ready to slide
        // Keeps looping till stopped, then starts to wait again
        SlideEvent.wait();
        QueryPerformanceCounter(&li);
        now = static_cast<u64>(li.QuadPart);
        if (now - start >= delay)
        {
            if (slideDir == SlideFlag::Right)
            {
                if (this->x < 0)
                {
                    this->x += 5;
                    this->controller->Paint();
                }
                else
                    SlideEvent.stop();
            }
            else if (slideDir == SlideFlag::Left)
            {
                if (this->x > -90)
                {
                    this->x -= 5;
                    this->controller->Paint();
                }
                else
                    SlideEvent.stop();
            }
            else
                SlideEvent.stop();
            start = now;
        }
    }
}
You can update your objects by time difference. You need a start timestamp, and then you compute the difference on each iteration of the global loop, so the global loop itself is very important: it has to run all the time. My example simply calls an update method on your objects. All your objects should depend on time, not FPS. FPS varies between computers, and even the same computer can show different FPS because of other processes running in the background.
#include <iostream>
#include <chrono>
#include <cstdint>
#include <unistd.h>

//Function to update all objects
void Update( float dt )
{
    //For example:
    //for( auto Object : VectorObjects )
    //{
    //    Object->Update(dt);
    //}
}

int main()
{
    typedef std::chrono::duration<float> FloatSeconds;
    auto OldMs = std::chrono::system_clock::now().time_since_epoch();
    const uint32_t SleepMicroseconds = 100;

    //Global loop
    while (true)
    {
        auto CurMs = std::chrono::system_clock::now().time_since_epoch();
        auto DeltaMs = CurMs - OldMs;
        OldMs = CurMs;

        //Cast delta time to float seconds
        auto DeltaFloat = std::chrono::duration_cast<FloatSeconds>(DeltaMs);
        std::cout << "Seconds passed since last update: " << DeltaFloat.count() << " seconds" << std::endl;

        //Update all objects with the elapsed time as a float value
        Update( DeltaFloat.count() );

        // Sleep to give the system time for interaction
        usleep(SleepMicroseconds);

        // Any other calculations can go here
        //...
    }
    return 0;
}
For this example in console you can see something like this:
Seconds passed since last update: 0.002685 seconds
Seconds passed since last update: 0.002711 seconds
Seconds passed since last update: 0.002619 seconds
Seconds passed since last update: 0.00253 seconds
Seconds passed since last update: 0.002509 seconds
Seconds passed since last update: 0.002757 seconds
Your time-based logic seems to be incorrect; here's a sample code snippet. The speed of the object should be the same irrespective of the speed of the system. Instead of QueryPerformanceFrequency, which is platform dependent, use std::chrono.
#include <chrono>
#include <thread>

void animate(bool& stop)
{
    static float speed = 1080/5; // 1080 px / 5 s: 5 seconds to cross the screen
    static std::chrono::system_clock::time_point start = std::chrono::system_clock::now();
    float fps;
    int object_x = 1080;
    while(!stop)
    {
        //calculate fractional time
        auto now = std::chrono::system_clock::now();
        auto diff = now - start;
        auto lapse_milli = std::chrono::duration_cast<std::chrono::milliseconds>(diff);
        auto lapse_sec = lapse_milli.count() / 1000.0f; // float division, not integer
        //apply to object
        int incr_x = speed * lapse_sec;
        object_x -= incr_x;
        if( object_x < 0 ) object_x = 1080;
        // render object here
        fps = lapse_milli.count() > 0 ? 1000.0f / lapse_milli.count() : 0.0f; // frames per second
        //print fps
        std::this_thread::sleep_for(std::chrono::milliseconds(100)); // change to achieve a desired fps rate
        start = now;
    }
}

Real-time based increment / decrement in C++ console

Is there a way to increment / decrement the value of a variable in correspondence with real time, in a simple C++ console application, without using third-party libraries?
For example, in the Unity3D game engine there is something called Time.DeltaTime, which gives the time it took to complete the last frame. Now, I understand there are no draw/update functions or frames in a console app; however, what I'm trying to do is increment / decrement the value of a variable so that it changes with respect to time, something like:
variable = 0
variable = variable + Time.DeltaTime
so that the variable's value increments each second. Is something like this possible in C++11? So that, say, after 5 seconds the variable has the value 5.
Basically, I am creating a simple particle system and I want the particles to die after their MAX_AGE is reached. I am not sure how to implement that in a simple C++ console app.
For simple timing you can use std::chrono::system_clock. tbb::tick_count is a better delta timer, but you said no third-party libraries.
As you already know, delta time is the final time minus the original time:

dt = t - t0

This delta time, though, is simply the amount of time that passes while the velocity is changing.
The derivative of a function represents an infinitesimal change in the function with respect to one of its variables. The derivative of a function with respect to the variable is defined as

            f(x + h) - f(x)
f'(x) = lim ---------------
        h->0        h
First you get a time, i.e. NewTime = timeGetTime().
Then you subtract the old time from the new time you just got and call it delta time, dt. After that, store the new time in OldTime for the next iteration:
dt = (float)(NewTime - OldTime) / 1000;
OldTime = NewTime;
Now you sum dt into TotalTime:
TotalTime += dt;
So that when TotalTime reaches 5, you end the life of the particles:
if(TotalTime >= 5.0f){
    //particles die after their MAX_AGE is reached
    TotalTime = 0;
}
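Since the question asked for plain C++11 with no third-party libraries, the same TotalTime pattern can be sketched portably with std::chrono (MAX_AGE here is the hypothetical particle lifetime, not a name from the original code):

#include <chrono>
#include <thread>

int main() {
    using namespace std::chrono;
    const float MAX_AGE = 5.0f;   // particle lifetime in seconds
    float totalTime = 0.0f;
    auto last = steady_clock::now();
    while (true) {
        auto now = steady_clock::now();
        float dt = duration_cast<duration<float>>(now - last).count();
        last = now;
        totalTime += dt;
        if (totalTime >= MAX_AGE) {
            totalTime = 0.0f;     // the particle dies / respawns here
        }
        std::this_thread::sleep_for(milliseconds(1)); // don't spin at 100% CPU
    }
}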
...
More interesting reading:
http://gafferongames.com/game-physics/
http://gafferongames.com/game-physics/integration-basics/
...
example code for Windows:
#include <time.h>
#include <stdlib.h>
#include <stdio.h>
#include <windows.h>
#pragma comment(lib, "winmm.lib")

void gotoxy(int x, int y);
void StepSimulation(float dt);

int main(){
    int NewTime = 0;
    int OldTime = 0;
    int StartTime = 0;
    int TimeToLive = 5000; // TimeToLive: 5 seconds, in milliseconds
    float dt = 0;
    float TotalTime = 0;
    int FrameCounter = 0;
    int RENDER_FRAME_COUNT = 60;

    // reset TimeToLive
    StartTime = timeGetTime();

    while(true){
        NewTime = timeGetTime();
        dt = (float)(NewTime - OldTime) / 1000; // delta time in seconds
        OldTime = NewTime;

        // print TimeToLive to screen
        gotoxy(1,1);
        printf("NewTime - StartTime = %d ", NewTime - StartTime);
        if ((NewTime - StartTime) >= TimeToLive){
            // reset TimeToLive
            StartTime = timeGetTime();
        }

        // The rest of the timestep and the 60fps lock
        if (dt > 0.016f) dt = 0.016f; // clamp the timestep
        if (dt < 0.001f) dt = 0.001f;
        TotalTime += dt;
        if(TotalTime >= 5.0f){
            TotalTime = 0;
            StepSimulation(dt);
        }
        if(FrameCounter >= RENDER_FRAME_COUNT){
            // draw stuff
            //Render();
            gotoxy(1,3);
            printf("OldTime      = %d \n", OldTime);
            printf("NewTime      = %d \n", NewTime);
            printf("dt           = %f \n", dt);
            printf("TotalTime    = %f \n", TotalTime);
            printf("FrameCounter = %d fps\n", FrameCounter);
            printf(" \n");
            FrameCounter = 0;
        }
        else{
            gotoxy(22,7);
            printf("%d ", FrameCounter);
            FrameCounter++;
        }
    }
    return 0;
}

void gotoxy(int x, int y){
    COORD coord;
    coord.X = x; coord.Y = y;
    SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), coord);
}

void StepSimulation(float dt){
    // calculate stuff
    //vVelocity += Ae * dt;
}

Limit while loop to run at 30 "FPS" using a delta variable C++

I basically need a while loop to only run at 30 "FPS".
I was told to do this:
"Inside your while loop, make a deltaT , and if that deltaT is lesser than 33 miliseconds use sleep(33-deltaT) ."
But I really wasn't quite sure how to initialize the delta/what to set this variable to. I also couldn't get a reply back from the person that suggested this.
I'm also not sure why the value in sleep is 33 instead of 30.
Does anyone know what I can do about this?
This is mainly for a game server to update players at 30FPS, but because I'm not doing any rendering on the server, I need a way to just have the code sleep to limit how many times it can run per second or else it will process the players too fast.
You basically need to do something like this:
int now = GetTimeInMilliseconds();
int lastFrame = GetTimeInMilliseconds();
while(running)
{
    now = GetTimeInMilliseconds();
    int delta = now - lastFrame;
    lastFrame = now;
    if(delta < 33)
    {
        Sleep(33 - delta);
    }
    //...
    Update();
    Draw();
}
That way you calculate the number of milliseconds that passed between the current frame and the last frame, and if it's smaller than 33 milliseconds (1000/30: 1000 milliseconds in a second divided by 30 FPS = 33.333...), you sleep until 33 milliseconds have passed. As for the GetTimeInMilliseconds() and Sleep() functions, they depend on the library and/or platform you're using.
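For reference, the two helpers could be sketched portably with std::chrono (these names just mirror the pseudocode above; note that on Windows a function named Sleep would collide with the WinAPI one):

#include <chrono>
#include <thread>

// Milliseconds from an arbitrary fixed origin; only differences are meaningful.
long long GetTimeInMilliseconds() {
    using namespace std::chrono;
    return duration_cast<milliseconds>(
               steady_clock::now().time_since_epoch()).count();
}

void Sleep(long long ms) {
    std::this_thread::sleep_for(std::chrono::milliseconds(ms));
}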
C++11 provides a simple mechanism for that:

#include <chrono>
#include <thread>
#include <iostream>
using namespace std;
using namespace std::chrono;

void doStuff(){
    std::cout << "Loop executed" << std::endl;
}

int main() {
    time_point<system_clock> t = system_clock::now();
    while (1) {
        doStuff();
        t += milliseconds(33);
        this_thread::sleep_until(t);
    }
}
The only thing you have to be aware of, though, is that if one loop iteration takes longer than 33ms, the following iterations will be executed without a pause in between (until t has caught up with the real time), which may or may not be what you want.
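If that catch-up burst is not what you want, one common tweak (my own sketch, not part of the answer above) is to re-anchor t in the loop body whenever it has fallen behind, dropping the missed frames instead of replaying them:

        t += milliseconds(33);
        if (t < system_clock::now())
            t = system_clock::now(); // fell behind: skip missed frames instead of bursting
        this_thread::sleep_until(t);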
Glenn Fiedler wrote a nice article on this topic a few years ago. Hacking with sleep() is not very precise; instead, you want to run your physics a fixed number of times per second, let your graphics run freely, and, between frames, run as many fixed timesteps as the elapsed time requires.
The code that follows looks intimidating at first, but once you get the idea it becomes simple; it's best to read the article completely.
Fix Your Timestep
double t = 0.0;
double dt = 0.01;

double currentTime = hires_time_in_seconds();
double accumulator = 0.0;

State previousState;
State currentState;

while ( !quit )
{
    double newTime = hires_time_in_seconds();
    double frameTime = newTime - currentTime;
    if ( frameTime > 0.25 )
        frameTime = 0.25;
    currentTime = newTime;

    accumulator += frameTime;

    while ( accumulator >= dt )
    {
        previousState = currentState;
        integrate( currentState, t, dt );
        t += dt;
        accumulator -= dt;
    }

    const double alpha = accumulator / dt;

    State state = currentState * alpha +
                  previousState * ( 1.0 - alpha );

    render( state );
}
If the article ever goes offline there should be several backups available; in any case, Gaffer on Games has been around for many years already.

no performance improvement with std::thread

I am working on an audio "real time" application and I would like to improve its performance. I actually already posted a topic, but this one is specifically about std::thread.
The audio processing is mostly done by two separate objects (leftProcessor and rightProcessor). Since these objects don't rely on each other, using two threads to process them should really improve performance on multi-core CPUs. However, I currently get the opposite result.
Before I activated the compiler's performance optimization (O2), using two threads got me about 50% more performance, but after I switched the optimization on I got ~10-20% less performance, while performance improved drastically for both versions.
I measure the performance by taking the time at two points within the function and brute-force printing the result to the screen, so there could be some problems with this. ;)
My guess would be that creating a std::thread takes more time than I actually gain from running the processing on the second thread.
In this case, is it possible to improve the performance by using the same "thread" for every function call and just passing it the new arguments? I don't really know if this is possible.
The function currently takes about 0.0005ms to 0.002ms to process.
Here is the code:
void AudioController::processAudio(int frameCount, float *output) {
#ifdef AUDIOCONTROLLER_MULTITHREADING
    // CALCULATE RIGHT TRACK
    std::thread rightProcessorThread;
    if(rightLoaded) {
        rightProcessorThread = std::thread(&AudioProcessor::tick, //function
                                           rightProcessor,        //object
                                           rightFrameBuffer,      //arg1
                                           frameCount);           //arg2
    } else {
        for(int i = 0; i < frameCount; i++) {
            rightFrameBuffer[i].leftSample = 0.0f;
            rightFrameBuffer[i].rightSample = 0.0f;
        }
    }
#else
    // CALCULATE RIGHT TRACK
    if(rightLoaded) {
        rightProcessor->tick(rightFrameBuffer, frameCount);
    } else {
        for(int i = 0; i < frameCount; i++) {
            rightFrameBuffer[i].leftSample = 0.0f;
            rightFrameBuffer[i].rightSample = 0.0f;
        }
    }
#endif

    // CALCULATE LEFT TRACK
    Frame* leftFrameBuffer = (Frame*) output;
    if(leftLoaded) {
        leftProcessor->tick(leftFrameBuffer, frameCount);
    } else {
        for(int i = 0; i < frameCount; i++) {
            leftFrameBuffer[i].leftSample = 0.0f;
            leftFrameBuffer[i].rightSample = 0.0f;
        }
    }

#ifdef AUDIOCONTROLLER_MULTITHREADING
    // WAIT FOR RIGHT TRACK
    if(rightLoaded) {
        rightProcessorThread.join();
    }
#endif

    // MIX
    for(int i = 0; i < frameCount; i++ ) {
        leftFrameBuffer[i] = volume * (leftRightMix * leftFrameBuffer[i] + (1.0 - leftRightMix) * rightFrameBuffer[i]);
    }
}
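To the question of reusing one thread instead of constructing a std::thread per call: yes, a persistent worker that sleeps on a condition variable avoids the per-call thread-creation cost. A rough, untested sketch of that idea (all names are hypothetical):

#include <condition_variable>
#include <mutex>
#include <thread>

struct Frame { float leftSample, rightSample; }; // matching the buffers above

class RightTrackWorker {
public:
    RightTrackWorker() { m_Thread = std::thread([this] { Run(); }); }
    ~RightTrackWorker() {
        { std::lock_guard<std::mutex> lock(m_Mutex); m_Quit = true; }
        m_Cond.notify_one();
        m_Thread.join();
    }
    // Called from processAudio() instead of constructing a std::thread.
    void Start(Frame* buffer, int frameCount) {
        { std::lock_guard<std::mutex> lock(m_Mutex);
          m_Buffer = buffer; m_FrameCount = frameCount; m_HasWork = true; }
        m_Cond.notify_one();
    }
    // Called where rightProcessorThread.join() was.
    void Wait() {
        std::unique_lock<std::mutex> lock(m_Mutex);
        m_Cond.wait(lock, [this] { return !m_HasWork; });
    }
private:
    void Run() {
        std::unique_lock<std::mutex> lock(m_Mutex);
        for (;;) {
            m_Cond.wait(lock, [this] { return m_HasWork || m_Quit; });
            if (m_Quit) return;
            // rightProcessor->tick(m_Buffer, m_FrameCount); // the real work
            m_HasWork = false;
            m_Cond.notify_one(); // wake Wait()
        }
    }
    std::mutex m_Mutex;
    std::condition_variable m_Cond;
    Frame* m_Buffer = nullptr;
    int m_FrameCount = 0;
    bool m_HasWork = false;
    bool m_Quit = false;
    std::thread m_Thread; // started last, after the members it uses exist
};

Whether this wins anything still depends on frameCount: at 0.0005ms to 0.002ms of work per call, even the wake-up latency of a condition variable can exceed the work itself, so measuring remains essential.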

Limiting Update Rate in C++. Why does this code update once a second not 60 times a second?

I am making a small game with C++ and OpenGL. update() is normally called once every time the program runs through the main loop. I am trying to limit this to 60 times per second (I want the game to update at the same speed on different-speed computers).
The code included below runs a timer and should call update() once the timer is >= 0.0166666666666667 seconds (60 times per second). However, the statement if((seconds - lastTime) >= 0.0166666666666667) seems to be tripped only once per second. Does anyone know why?
Thanks in advance for your help.
//Global timer variables
double secondsS;
double lastTime;
time_t timer;
struct tm y2k;
double seconds;

void init()
{
    glClearColor(0,0,0,0.0); // Sets the clear colour to black.
    // glClear(GL_COLOR_BUFFER_BIT) is in the display function

    //Init viewport
    viewportX = 0;
    viewportY = 0;

    initShips();

    //Time
    lastTime = 0;
    time_t timerS;
    struct tm y2k;
    y2k.tm_hour = 0; y2k.tm_min = 0; y2k.tm_sec = 0;
    y2k.tm_year = 100; y2k.tm_mon = 0; y2k.tm_mday = 1;
    time(&timerS); /* get current time; same as: timer = time(NULL) */
    secondsS = difftime(timerS, mktime(&y2k));
    printf("%.f seconds since January 1, 2000 in the current timezone \n", secondsS);

    loadTextures();
    ShowCursor(true);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
}

void timeKeeper()
{
    y2k.tm_hour = 0; y2k.tm_min = 0; y2k.tm_sec = 0;
    y2k.tm_year = 100; y2k.tm_mon = 0; y2k.tm_mday = 1;
    time(&timer); /* get current time; same as: timer = time(NULL) */
    seconds = difftime(timer, mktime(&y2k));
    seconds -= secondsS;

    //Run 60 times a second. This limits updates to a constant standard.
    if((seconds - lastTime) >= 0.0166666666666667)
    {
        lastTime = seconds;
        update();
        //printf("%.f seconds since beginning program \n", seconds);
    }
}
timeKeeper() is called from int WINAPI WinMain, inside the loop that runs while the program is !done.
EDIT:
Thanks to those who helped; you pointed me down the right track. As mentioned in the answer below, <ctime> does not have millisecond accuracy. I have therefore implemented the following code, which does:
double GetSystemTimeSample()
{
    FILETIME ft1, ft2;
    // assume little endian and that ULONGLONG has same alignment as FILETIME
    ULONGLONG &t1 = *reinterpret_cast<ULONGLONG*>(&ft1),
              &t2 = *reinterpret_cast<ULONGLONG*>(&ft2);
    GetSystemTimeAsFileTime(&ft1);
    do
    {
        GetSystemTimeAsFileTime(&ft2);
    } while (t1 == t2);
    return (t2 - t1) / 10000.0;
}//GetSystemTimeSample

void timeKeeper()
{
    thisTime += GetSystemTimeSample();
    cout << thisTime << endl;
    //Run 60 times a second. This limits updates to a constant standard.
    if(thisTime >= 16.6666666666667) //Compare to a value in milliseconds
    {
        thisTime = 0; // reset the accumulator (not "= seconds", a leftover from the old code)
        update();
    }
}
http://www.cplusplus.com/reference/ctime/difftime/
Calculates the difference in seconds between beginning and end
So you get a value in seconds, and because time_t only has one-second resolution, the result is a whole number even though difftime returns a double.
So you only see a difference between one value and the previous one when that difference is at least 1 second, which is why update() fires roughly once per second.
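An alternative sketch of timeKeeper() on top of std::chrono::steady_clock, which does have sub-second resolution (C++11, no Windows dependency):

#include <chrono>

void timeKeeper()
{
    using namespace std::chrono;
    static auto last = steady_clock::now();
    auto now = steady_clock::now();
    //Run 60 times a second, as in the original code.
    if (now - last >= duration<double>(1.0 / 60.0))
    {
        last = now;
        update();
    }
}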