I am running the same OpenGL executable from CLion and from cmd, but they show different output: CLion prints the right number, while cmd prints inf and only sometimes the number. Using cout instead of printf behaves the same way.
(I've removed all OpenGL calls from the snippet since they're not relevant here.)
The code in question is:
int i = 0;
auto lastTime = std::chrono::high_resolution_clock::now();
while ... {
    auto currentTime = std::chrono::high_resolution_clock::now();
    if (i % 10000 == 0) {
        auto deltaTime = std::chrono::duration_cast<std::chrono::duration<float>>(currentTime - lastTime).count();
        printf("FPS: %10.0f\n", 1.0f / deltaTime);
    }
    i++;
    lastTime = currentTime;
}
(Screenshot: CLion output on the left, cmd output on the right.)
As Peter pointed out, the problem is that some iterations take less time than the resolution of the clock being used, and that resolution in turn depends on the environment in which the program runs.
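(As a side note, you can print the nominal tick period of the clock you are using with a small sketch like the one below; keep in mind this is the period declared by the library, and the resolution the OS actually delivers may be coarser:)

#include <chrono>
#include <iostream>

int main()
{
    // Nominal tick period of high_resolution_clock, as declared by the library.
    // The actual resolution delivered by the OS may be coarser than this.
    using period = std::chrono::high_resolution_clock::period;
    std::cout << "Tick period: "
              << 1e9 * period::num / period::den << " ns\n";
}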
A solution to such problems is to measure the time over a batch of X iterations (10000 in the following example) and then calculate the average time per iteration:
int i = 0;
auto lastTime = std::chrono::high_resolution_clock::now();
while ... {
    if (++i == 10000) {
        auto currentTime = std::chrono::high_resolution_clock::now();
        auto deltaTime = std::chrono::duration_cast<std::chrono::duration<float>>(currentTime - lastTime).count();
        deltaTime /= 10000.0f; // average time for one iteration
        printf("FPS: %10.0f\n", 1.0f / deltaTime);
        lastTime = currentTime;
        i = 0; // reset counter to prevent overflow in long runs
    }
}
Related
At the moment I have a function that moves my object based on FPS: if enough frames have not passed, it doesn't do anything. It works fine as long as the computer can actually run at that frame rate.
How would I make the movement time-based instead, so the object moves according to elapsed time?
Here is my code:
typedef unsigned __int64 u64;

auto toolbarGL::Slide() -> void
{
    LARGE_INTEGER li = {};
    QueryPerformanceFrequency(&li);
    u64 freq = static_cast<u64>(li.QuadPart); // clock ticks per second
    u64 period = 60;                          // fps
    u64 delay = freq / period;                // clock ticks between frame paints
    u64 start = 0, now = 0;

    QueryPerformanceCounter(&li);
    start = static_cast<u64>(li.QuadPart);

    while (true)
    {
        // Waits to be ready to slide
        // Keeps looping till stopped, then starts to wait again
        SlideEvent.wait();

        QueryPerformanceCounter(&li);
        now = static_cast<u64>(li.QuadPart);
        if (now - start >= delay)
        {
            if (slideDir == SlideFlag::Right)
            {
                if (this->x < 0)
                {
                    this->x += 5;
                    this->controller->Paint();
                }
                else
                    SlideEvent.stop();
            }
            else if (slideDir == SlideFlag::Left)
            {
                if (this->x > -90)
                {
                    this->x -= 5;
                    this->controller->Paint();
                }
                else
                    SlideEvent.stop();
            }
            else
                SlideEvent.stop();

            start = now;
        }
    }
}
You can update your objects based on the time difference. Keep a start timestamp and compute the difference on every iteration of the global loop, so the global loop itself has to keep running the whole time. My example simply calls an Update method for your objects. All of your objects should depend on time, not on FPS: FPS behaves differently on different computers, and even the same computer can show different FPS depending on what other processes are running in the background.
#include <iostream>
#include <chrono>
#include <cstdint>
#include <unistd.h>

// Function to update all objects
void Update(float dt)
{
    // For example:
    // for (auto Object : VectorObjects)
    // {
    //     Object->Update(dt);
    // }
}

int main()
{
    typedef std::chrono::duration<float> FloatSeconds;

    auto OldMs = std::chrono::system_clock::now().time_since_epoch();
    const uint32_t SleepMicroseconds = 100;

    // Global loop
    while (true)
    {
        auto CurMs = std::chrono::system_clock::now().time_since_epoch();
        auto DeltaMs = CurMs - OldMs;
        OldMs = CurMs;

        // Cast delta time to float seconds
        auto DeltaFloat = std::chrono::duration_cast<FloatSeconds>(DeltaMs);
        std::cout << "Seconds passed since last update: " << DeltaFloat.count() << " seconds" << std::endl;

        // Update all objects with the delta time as a float value
        Update(DeltaFloat.count());

        // Sleep to give time for system interaction
        usleep(SleepMicroseconds);

        // Any other calculations can go here
        // ...
    }

    return 0;
}
For this example, the console output looks something like this:
Seconds passed since last update: 0.002685 seconds
Seconds passed since last update: 0.002711 seconds
Seconds passed since last update: 0.002619 seconds
Seconds passed since last update: 0.00253 seconds
Seconds passed since last update: 0.002509 seconds
Seconds passed since last update: 0.002757 seconds
Your time-based logic seems to be incorrect; here's a sample code snippet. The speed of the object should be the same regardless of the speed of the system. Instead of QueryPerformanceFrequency, which is platform dependent, use std::chrono.
#include <chrono>
#include <thread>

void animate(bool& stop)
{
    static float speed = 1080.0f / 5.0f; // 1080 px / 5 s = 5 seconds to cross the screen
    static std::chrono::system_clock::time_point start = std::chrono::system_clock::now();
    float fps;
    int object_x = 1080;

    while (!stop)
    {
        // calculate fractional time
        auto now = std::chrono::system_clock::now();
        auto diff = now - start;
        auto lapse_milli = std::chrono::duration_cast<std::chrono::milliseconds>(diff);
        auto lapse_sec = lapse_milli.count() / 1000.0f; // divide by 1000.0f so the result is not truncated to 0

        // apply to object
        int incr_x = static_cast<int>(speed * lapse_sec);
        object_x -= incr_x;
        if (object_x < 0) object_x = 1080;

        // render object here

        fps = (lapse_milli.count() > 0) ? 1000.0f / lapse_milli.count() : 0.0f; // frames per second
        // print fps

        std::this_thread::sleep_for(std::chrono::milliseconds(100)); // change to achieve a desired fps rate
        start = now;
    }
}
Is there a way to increment / decrement the value of a variable in step with real time in a simple C++ console application, without using third-party libraries?
For example, in the Unity3D game engine there is something called Time.DeltaTime, which gives the time it took to complete the last frame. Now, I understand there are no draw/update functions or frames in a console app; what I'm trying to do is increment / decrement the value of a variable so that it changes with respect to time, something like:
variable = 0
variable = variable + Time.DeltaTime
so that the variable's value increments each second. Is something like this possible in C++11? So, say, if speed is 5, then after 5 seconds the variable has the value 5.
Basically, I am creating a simple particle system where I want the particles to die after their MAX_AGE is reached. I am not sure how to implement that in a simple C++ console app.
For simple timing you can use std::chrono::system_clock. tbb::tick_count is a better delta timer, but you said no third-party libraries.
As you already know, delta time is the final time minus the original time:
dt = t - t0
This delta time is simply the amount of time that passes while the velocity is changing.
The derivative of a function represents an infinitesimal change in the function with respect to one of its variables. The derivative of a function with respect to the variable is defined as
            f(x + h) - f(x)
f'(x) = lim ---------------
        h->0       h
First you get a time, i.e. NewTime = timeGetTime().
Then you subtract the old time from the new time you just got and call the result delta time, dt. Remember to update the old time afterwards:
dt = (float) (NewTime - OldTime)/1000; // delta time in seconds
OldTime = NewTime;
Now you accumulate dt into TotalTime:
TotalTime += dt;
so that when TotalTime reaches 5, you end the life of the particles:
if (TotalTime >= 5.0f){
    // particles die after their MAX_AGE is reached
    TotalTime = 0;
}
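If you want to stay with the standard library instead of timeGetTime, here is a minimal sketch of the same accumulate-and-reset idea using std::chrono::steady_clock (the 5-second MAX_AGE and the 16 ms sleep are just placeholder values):

#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    using clock = std::chrono::steady_clock;

    const float MAX_AGE = 5.0f;   // particle lifetime in seconds
    float totalTime = 0.0f;       // age accumulated so far
    auto oldTime = clock::now();

    while (true)
    {
        auto newTime = clock::now();
        std::chrono::duration<float> dt = newTime - oldTime; // delta time in seconds
        oldTime = newTime;

        totalTime += dt.count();
        if (totalTime >= MAX_AGE)
        {
            std::cout << "particle died after " << totalTime << " s\n";
            totalTime = 0.0f; // respawn / reset the age
        }

        std::this_thread::sleep_for(std::chrono::milliseconds(16)); // roughly 60 iterations per second
    }
}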
...
More interesting reading:
http://gafferongames.com/game-physics/
http://gafferongames.com/game-physics/integration-basics/
...
Example code for Windows:
#include <time.h>
#include <stdlib.h>
#include <stdio.h>
#include <windows.h>
#pragma comment(lib, "winmm.lib")

void gotoxy(int x, int y);
void StepSimulation(float dt);

int main(){
    int NewTime = 0;
    int OldTime = 0;
    int StartTime = 0;
    int TimeToLive = 5000; // TimeToLive 5 seconds
    float dt = 0;
    float TotalTime = 0;
    int FrameCounter = 0;
    int RENDER_FRAME_COUNT = 60;

    // reset TimeToLive
    StartTime = timeGetTime();

    while(true){
        NewTime = timeGetTime();
        dt = (float)(NewTime - OldTime)/1000; //delta time
        OldTime = NewTime;

        // print TimeToLive to screen
        gotoxy(1,1);
        printf("NewTime - StartTime = %d ", NewTime - StartTime);
        if ((NewTime - StartTime) >= TimeToLive){
            // reset TimeToLive
            StartTime = timeGetTime();
        }

        // The rest of the timestep and 60fps lock
        if (dt > 0.016f) dt = 0.016f; //delta time
        if (dt < 0.001f) dt = 0.001f;
        TotalTime += dt;
        if (TotalTime >= 5.0f){
            TotalTime = 0;
            StepSimulation(dt);
        }

        if (FrameCounter >= RENDER_FRAME_COUNT){
            // draw stuff
            //Render();
            gotoxy(1,3);
            printf("OldTime = %d \n", OldTime);
            printf("NewTime = %d \n", NewTime);
            printf("dt = %f \n", dt);
            printf("TotalTime = %f \n", TotalTime);
            printf("FrameCounter = %d fps\n", FrameCounter);
            printf(" \n");
            FrameCounter = 0;
        }
        else{
            gotoxy(22,7);
            printf("%d ", FrameCounter);
            FrameCounter++;
        }
    }

    return 0;
}

void gotoxy(int x, int y){
    COORD coord;
    coord.X = x;
    coord.Y = y;
    SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), coord);
    return;
}

void StepSimulation(float dt){
    // calculate stuff
    //vVelocity += Ae * dt;
}
I'm trying to calculate the framerate of a GLUT window by calling a custom CalculateFrameRate method at the beginning of my Display() callback function. I call glutPostRedisplay() after the calculations I perform every frame, so Display() gets called for every frame.
I also have an int numFrames that increments every frame (every time glutPostRedisplay() gets called), and I print that out as well. My CalculateFrameRate method calculates a rate of about 7 fps, but if I look at a stopwatch and compare it to how quickly my numFrames counter increases, the framerate is easily 25-30 fps.
I can't seem to figure out why there is such a discrepancy. I've posted my CalculateFrameRate method below.
clock_t lastTime;
int numFrames;

//GLUT Setup callback
void Renderer::Setup()
{
    numFrames = 0;
    lastTime = clock();
}

//Called in Display() callback every time I call glutPostRedisplay()
void CalculateFrameRate()
{
    clock_t currentTime = clock();
    double diff = currentTime - lastTime;
    double seconds = diff / CLOCKS_PER_SEC;
    double frameRate = 1.0 / seconds;
    std::cout << "FRAMERATE: " << frameRate << std::endl;
    numFrames++;
    std::cout << "NUM FRAMES: " << numFrames << std::endl;
    lastTime = currentTime;
}
The clock function (except on Windows) gives you the CPU time used, so if you are not spinning the CPU for the entire frame time, it will report less time than you expect. Conversely, if you have 16 cores running 16 of your threads flat out, the time reported by clock will be 16 times the actual wall-clock time.
You can use std::chrono::steady_clock or std::chrono::high_resolution_clock, or, if you are using Linux/Unix, gettimeofday (which gives you microsecond resolution).
Here are a couple of snippets showing how to use gettimeofday to measure milliseconds:
#include <sys/time.h>

double time_to_double(timeval *t)
{
    return (t->tv_sec + (t->tv_usec / 1000000.0)) * 1000.0;
}

double time_diff(timeval *t1, timeval *t2)
{
    return time_to_double(t2) - time_to_double(t1);
}

timeval t1, t2;
gettimeofday(&t1, NULL);
... do stuff ...
gettimeofday(&t2, NULL);
cout << "Time taken: " << time_diff(&t1, &t2) << "ms" << endl;
Here's a piece of code to show how to use std::chrono::high_resolution_clock:
auto start = std::chrono::high_resolution_clock::now();
... stuff goes here ...
auto diff = std::chrono::high_resolution_clock::now() - start;
auto t1 = std::chrono::duration_cast<std::chrono::nanoseconds>(diff);
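Applied to your CalculateFrameRate, a minimal sketch using std::chrono::steady_clock instead of clock could look like this (it keeps your numFrames counter; initialize lastTime once in your Setup callback before the first frame):

#include <chrono>
#include <iostream>

std::chrono::steady_clock::time_point lastTime; // set once in Setup(): lastTime = std::chrono::steady_clock::now();
int numFrames = 0;

void CalculateFrameRate()
{
    auto currentTime = std::chrono::steady_clock::now();
    std::chrono::duration<double> seconds = currentTime - lastTime; // wall-clock time since the last frame
    double frameRate = 1.0 / seconds.count();
    std::cout << "FRAMERATE: " << frameRate << std::endl;
    numFrames++;
    std::cout << "NUM FRAMES: " << numFrames << std::endl;
    lastTime = currentTime;
}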
After doing some research and debugging in my C++ project with glfwGetTime(), I'm having trouble making a game loop. As far as timing goes, I have really only worked with nanoseconds in Java, and the GLFW website states that the function returns the time in seconds. How would I make a fixed time step loop with glfwGetTime()?
What I have now:
while (!glfwWindowShouldClose(window))
{
    double now = glfwGetTime();
    double delta = now - lastTime;
    lastTime = now;
    accumulator += delta;
    while (accumulator >= OPTIMAL_TIME) // OPTIMAL_TIME = 1.0 / 60.0
    {
        //tick
        accumulator -= OPTIMAL_TIME;
    }
}
All you need is this code to limit the updates while keeping the rendering at the highest possible frame rate. The code is based on this tutorial, which explains it very well; all I did was implement the same principle with GLFW and C++.
static double limitFPS = 1.0 / 60.0;

double lastTime = glfwGetTime(), timer = lastTime;
double deltaTime = 0, nowTime = 0;
int frames = 0, updates = 0;

// - While window is alive
while (!window.closed()) {

    // - Measure time
    nowTime = glfwGetTime();
    deltaTime += (nowTime - lastTime) / limitFPS;
    lastTime = nowTime;

    // - Only update at 60 frames / s
    while (deltaTime >= 1.0){
        update();   // - Update function
        updates++;
        deltaTime--;
    }

    // - Render at maximum possible frames
    render();       // - Render function
    frames++;

    // - Reset after one second
    if (glfwGetTime() - timer > 1.0) {
        timer++;
        std::cout << "FPS: " << frames << " Updates:" << updates << std::endl;
        updates = 0, frames = 0;
    }
}
You should have a function update() for updating game logic and a render() for rendering. Hope this helps.
I am making a small game with C++ and OpenGL. update() is normally called once every time the program runs through the main loop. I am trying to limit this to 60 times per second (I want the game to update at the same speed on computers of different speeds).
The code included below runs a timer and should call update() once the timer is >= 0.0166666666666667 seconds (60 times per second). However, the statement if((seconds - lastTime) >= 0.0166666666666667) seems to be tripped only once per second. Does anyone know why?
Thanks in advance for your help.
//Global Timer variables
double secondsS;
double lastTime;
time_t timer;
struct tm y2k;
double seconds;

void init()
{
    glClearColor(0, 0, 0, 0.0); // Sets the clear colour to black.
                                // glClear(GL_COLOR_BUFFER_BIT) in the display function

    //Init viewport
    viewportX = 0;
    viewportY = 0;

    initShips();

    //Time
    lastTime = 0;
    time_t timerS;
    struct tm y2k;
    y2k.tm_hour = 0; y2k.tm_min = 0; y2k.tm_sec = 0;
    y2k.tm_year = 100; y2k.tm_mon = 0; y2k.tm_mday = 1;

    time(&timerS); /* get current time; same as: timer = time(NULL) */
    secondsS = difftime(timerS, mktime(&y2k));
    printf("%.f seconds since January 1, 2000 in the current timezone \n", secondsS);

    loadTextures();
    ShowCursor(true);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
}
void timeKeeper()
{
    y2k.tm_hour = 0; y2k.tm_min = 0; y2k.tm_sec = 0;
    y2k.tm_year = 100; y2k.tm_mon = 0; y2k.tm_mday = 1;

    time(&timer); /* get current time; same as: timer = time(NULL) */
    seconds = difftime(timer, mktime(&y2k));
    seconds -= secondsS;

    //Run 60 times a second. This limits updates to a constant standard.
    if ((seconds - lastTime) >= 0.0166666666666667)
    {
        lastTime = seconds;
        update();
        //printf ("%.f seconds since beginning program \n", seconds);
    }
}
timeKeeper() is called from int WINAPI WinMain while the program is !done.
EDIT:
Thanks to those who helped; you pointed me in the right direction. As mentioned in the answer below, <ctime> does not have millisecond accuracy. I have therefore implemented the following code, which has the correct accuracy:
double GetSystemTimeSample()
{
    FILETIME ft1, ft2;
    // assume little endian and that ULONGLONG has same alignment as FILETIME
    ULONGLONG &t1 = *reinterpret_cast<ULONGLONG*>(&ft1),
              &t2 = *reinterpret_cast<ULONGLONG*>(&ft2);

    GetSystemTimeAsFileTime(&ft1);
    do
    {
        GetSystemTimeAsFileTime(&ft2);
    } while (t1 == t2);

    return (t2 - t1) / 10000.0;
}//GetSystemTimeSample

void timeKeeper()
{
    thisTime += GetSystemTimeSample();
    cout << thisTime << endl;

    //Run 60 times a second. This limits updates to a constant standard.
    if (thisTime >= 16.666666666666699825) //Compare to a value in milliseconds
    {
        thisTime = 0; //reset the accumulator for the next interval
        update();
    }
}
http://www.cplusplus.com/reference/ctime/difftime/
difftime "calculates the difference in seconds between beginning and end".
So you get a value in whole seconds: time() only has one-second resolution, so even though difftime returns a double, the result is always an integral number of seconds.
That means the difference between one reading and the previous one only changes once at least a full second has passed, which is why your check is tripped only about once per second.
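For comparison, here is a minimal sketch of the same 60-updates-per-second check using std::chrono::steady_clock, which does have sub-second resolution (update() here is just a stand-in for your own function):

#include <chrono>

void update() { /* game logic goes here */ }

void timeKeeper()
{
    using clock = std::chrono::steady_clock;
    static auto lastTime = clock::now();

    auto now = clock::now();
    std::chrono::duration<double> elapsed = now - lastTime;

    //Run update() at most 60 times a second.
    if (elapsed.count() >= 1.0 / 60.0)
    {
        lastTime = now;
        update();
    }
}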