C++ How to correctly cap FPS (Using GLFW)

So I have been trying to limit my fps to 60:
//Those are members inside the Display class
double tracker = glfwGetTime();
const float frameCap = 1 / 60.0f;

void Display::present() {
    glfwSwapBuffers( _window );
    //Getting the time between each call
    double now = glfwGetTime( );
    double frameTime = now - tracker;
    tracker = now;
    //delaying if required
    if ( frameTime < frameCap )
        delay( frameCap - frameTime );
}
void game() {
    //Creating window and opengl context
    .....
    //Disabling "vsync" so I can cap the fps on my own
    glfwSwapInterval(0);
    while(running) {
        //Rendering and updating
        .........
        //Swap buffers and delay if required
        display.present();
    }
}
My delay/sleep function
#ifdef _WIN32
#include <Windows.h>
#else
#include <unistd.h>
#endif

void delay( uint32_t ms )
{
#ifdef _WIN32
    Sleep( ms );
#else
    usleep( ms * 1000 );
#endif
}
Basically, the idea is to cap the framerate on each Display::present() call.
But it looks like nothing is being capped at all; in fact, the fps is 4000+.

For the first call of present, your double frameTime = glfwGetTime( ) - tracker; sets frameTime to the difference between the current time (glfwGetTime()) and the initial value of tracker, which you also set with glfwGetTime().
In the next line, you set tracker = frameTime; (frameTime is the difference here, not the time). (Note: this refers to an earlier revision of the code; the code as posted now assigns tracker = now, which fixes this part.)
For the next call of present, the value of tracker is really small (as it is the difference and not the time), so your double frameTime = glfwGetTime( ) - tracker; becomes really large (larger than frameCap), and the sleep won't happen.
For the call after that, the condition ( frameTime < frameCap ) might be true again, but for the follow-up call it definitely won't be anymore.
So for at least every second invocation of present, the delay won't be called.
Besides that, glfwGetTime returns seconds, and frameCap also represents how many seconds a frame should last, but your void delay( uint32_t ms ) expects milliseconds. Since frameCap - frameTime is a fraction of a second (always less than 1), it truncates to 0 when implicitly converted to uint32_t, so the delay is effectively a no-op.
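With the code as posted, a minimal fix is to convert the remaining time to milliseconds explicitly before calling delay(), and to re-sample tracker after the sleep so the delay itself isn't counted as work for the next frame. A sketch keeping your delay() and members (std::this_thread::sleep_for with a std::chrono::duration<double> would avoid the conversion entirely):

void Display::present() {
    glfwSwapBuffers( _window );
    double now = glfwGetTime();          // seconds since GLFW init
    double frameTime = now - tracker;    // seconds of work this frame
    if ( frameTime < frameCap )
        delay( static_cast<uint32_t>( ( frameCap - frameTime ) * 1000.0 ) ); // s -> ms
    tracker = glfwGetTime();             // sample after the sleep so the delay
                                         // is not counted as next frame's work
}

Note that sleep granularity is coarse (a millisecond or more, depending on the OS scheduler), so this caps near 60 fps rather than exactly at it.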

Related

time based movement sliding object

At the moment I have a function that moves my object based on FPS: if enough frames have not passed, it won't do anything.
It works fine if the computer can run it at that speed.
How would I make it time-based instead and move the object based on elapsed time?
Here is my code:
typedef unsigned __int64 u64;

auto toolbarGL::Slide() -> void
{
    LARGE_INTEGER li = {};
    QueryPerformanceFrequency(&li);
    u64 freq = static_cast<u64>(li.QuadPart); // clock ticks per second
    u64 period = 60;                          // fps
    u64 delay = freq / period;                // clock ticks between frame paints
    u64 start = 0, now = 0;
    QueryPerformanceCounter(&li);
    start = static_cast<u64>(li.QuadPart);
    while (true)
    {
        // Waits to be ready to slide
        // Keeps looping till stopped then starts to wait again
        SlideEvent.wait();
        QueryPerformanceCounter(&li);
        now = static_cast<u64>(li.QuadPart);
        if (now - start >= delay)
        {
            if (slideDir == SlideFlag::Right)
            {
                if (this->x < 0)
                {
                    this->x += 5;
                    this->controller->Paint();
                }
                else
                    SlideEvent.stop();
            }
            else if (slideDir == SlideFlag::Left)
            {
                if (this->x > -90)
                {
                    this->x -= 5;
                    this->controller->Paint();
                }
                else
                    SlideEvent.stop();
            }
            else
                SlideEvent.stop();
            start = now;
        }
    }
}
You can update your objects by time difference. Keep a start timestamp and compute the difference on each iteration of the global loop; the global loop itself is very important, it has to run all the time. My example just shows calling an update method on your objects. All your objects should depend on time, not on FPS: FPS differs between computers, and even the same computer can show different fps because of other processes running in the background.
#include <iostream>
#include <chrono>
#include <cstdint>
#include <unistd.h>

//Function to update all objects
void Update( float dt )
{
    //For example:
    //for( auto Object : VectorObjects )
    //{
    //    Object->Update(dt);
    //}
}

int main()
{
    typedef std::chrono::duration<float> FloatSeconds;
    auto OldMs = std::chrono::system_clock::now().time_since_epoch();
    const uint32_t SleepMicroseconds = 100;

    //Global loop
    while (true)
    {
        auto CurMs = std::chrono::system_clock::now().time_since_epoch();
        auto DeltaMs = CurMs - OldMs;
        OldMs = CurMs;

        //Cast delta time to float seconds
        auto DeltaFloat = std::chrono::duration_cast<FloatSeconds>(DeltaMs);
        std::cout << "Seconds passed since last update: " << DeltaFloat.count() << " seconds" << std::endl;

        //Update all objects with the delta time as a float value
        Update( DeltaFloat.count() );

        // Sleep to give time for system interaction
        usleep(SleepMicroseconds);

        // Any other actions to calculate can be here
        //...
    }
    return 0;
}
For this example, you can see something like this in the console:
Seconds passed since last update: 0.002685 seconds
Seconds passed since last update: 0.002711 seconds
Seconds passed since last update: 0.002619 seconds
Seconds passed since last update: 0.00253 seconds
Seconds passed since last update: 0.002509 seconds
Seconds passed since last update: 0.002757 seconds
Your time-based logic seems to be incorrect; here's a sample code snippet. The speed of the object should be the same irrespective of the speed of the system. Instead of QueryPerformanceFrequency, which is platform dependent, use std::chrono.
#include <chrono>
#include <thread>

void animate(bool& stop)
{
    static float speed = 1080/5.0f; // = 1080px / 5sec = 5 sec to cross the screen
    static std::chrono::system_clock::time_point start = std::chrono::system_clock::now();
    float fps;
    int object_x = 1080;
    while(!stop)
    {
        //calculate fractional time
        auto now = std::chrono::system_clock::now();
        auto diff = now - start;
        auto lapse_milli = std::chrono::duration_cast<std::chrono::milliseconds>(diff);
        float lapse_sec = lapse_milli.count()/1000.0f; // float division, or the sub-second part truncates to 0
        //apply to object
        int incr_x = speed * lapse_sec;
        object_x -= incr_x;
        if( object_x < 0 ) object_x = 1080;
        // render object here
        fps = 1000.0f/lapse_milli.count();
        //print fps
        std::this_thread::sleep_for(std::chrono::milliseconds(100)); // change to achieve a desired fps rate
        start = now;
    }
}

Limit while loop to run at 30 "FPS" using a delta variable C++

I basically need a while loop to only run at 30 "FPS".
I was told to do this:
"Inside your while loop, make a deltaT , and if that deltaT is lesser than 33 miliseconds use sleep(33-deltaT) ."
But I really wasn't quite sure how to initialize the delta/what to set this variable to. I also couldn't get a reply back from the person that suggested this.
I'm also not sure why the value in sleep is 33 instead of 30.
Does anyone know what I can do about this?
This is mainly for a game server to update players at 30FPS, but because I'm not doing any rendering on the server, I need a way to just have the code sleep to limit how many times it can run per second or else it will process the players too fast.
You basically need to do something like this:
int now = GetTimeInMilliseconds();
int lastFrame = GetTimeInMilliseconds();
while(running)
{
    now = GetTimeInMilliseconds();
    int delta = now - lastFrame;
    lastFrame = now;
    if(delta < 33)
    {
        Sleep(33 - delta);
    }
    //...
    Update();
    Draw();
}
That way you calculate the number of milliseconds that passed between the current frame and the last frame, and if it's smaller than 33 milliseconds (1000/30: 1000 milliseconds in a second divided by 30 FPS = 33.333...), you sleep until 33 milliseconds have passed. As for the GetTimeInMilliseconds() and Sleep() functions, they depend on the library and/or platform you're using.
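If you're on C++11, here's a sketch of those two helpers using std::chrono; the names match the pseudocode above, they're not from any real library, and on Windows the name Sleep collides with the one from Windows.h, so rename it there:

#include <chrono>
#include <thread>

// Milliseconds since the first call; steady_clock is monotonic,
// so system clock adjustments don't affect it.
int GetTimeInMilliseconds()
{
    static const auto start = std::chrono::steady_clock::now();
    auto elapsed = std::chrono::steady_clock::now() - start;
    return static_cast<int>(
        std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count());
}

void Sleep(int ms)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(ms));
}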
C++11 provides a simple mechanism for that:
#include <chrono>
#include <thread>
#include <iostream>

using namespace std;
using namespace std::chrono;

void doStuff(){
    std::cout << "Loop executed" << std::endl;
}

int main() {
    time_point<system_clock> t = system_clock::now();
    while (1) {
        doStuff();
        t += milliseconds(33);
        this_thread::sleep_until(t);
    }
}
The only thing you have to be aware of though is that if one loop iteration takes longer than the 33ms, the next two iterations will be executed without a pause in between (until t has caught up with the real time), which may or may not be what you want.
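If you don't want that catch-up behavior, one small variation (a sketch of the loop body above) is to re-snap t to the current time whenever the loop falls behind:

t += milliseconds(33);
if (t < system_clock::now())     // fell behind: drop the missed ticks
    t = system_clock::now();     // instead of bursting to catch up
this_thread::sleep_until(t);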
Glenn Fiedler wrote a nice article on this topic a few years ago. Hacking with sleep() is not very precise; instead, you want to run your physics a fixed number of times per second, let your graphics run freely, and, between frames, run as many fixed timesteps as time has passed.
The code that follows looks intimidating at first, but once you get the idea it becomes simple; it's best to read the article completely.
Fix Your Timestep
double t = 0.0;
double dt = 0.01;

double currentTime = hires_time_in_seconds();
double accumulator = 0.0;

State previousState;
State currentState;

while ( !quit )
{
    double newTime = hires_time_in_seconds();
    double frameTime = newTime - currentTime;
    if ( frameTime > 0.25 )
        frameTime = 0.25;
    currentTime = newTime;

    accumulator += frameTime;

    while ( accumulator >= dt )
    {
        previousState = currentState;
        integrate( currentState, t, dt );
        t += dt;
        accumulator -= dt;
    }

    const double alpha = accumulator / dt;

    State state = currentState * alpha +
                  previousState * ( 1.0 - alpha );

    render( state );
}
If the article ever goes offline, there should be several backups available; in any case, Gaffer on Games has been around for many years already.
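The snippet leaves hires_time_in_seconds() up to the platform; a minimal C++11 sketch (an assumption of mine, not from the article) using std::chrono::steady_clock:

#include <chrono>

// Seconds since the first call, as a double; monotonic, high resolution.
double hires_time_in_seconds()
{
    using namespace std::chrono;
    static const auto start = steady_clock::now();
    return duration<double>( steady_clock::now() - start ).count();
}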

Calculating glut framerate using clocks_per_sec much too slow

I'm trying to calculate the framerate of a GLUT window by calling a custom CalculateFrameRate method I made at the beginning of my Display() callback function. I call glutPostRedisplay() after the calculations I perform every frame, so Display() gets called for every frame.
I also have an int numFrames that increments every frame (every time glutPostRedisplay gets called), and I print that out as well. My CalculateFrameRate method calculates a rate of about 7 fps, but if I look at a stopwatch and compare it to how quickly my numFrames counter increases, the framerate is easily 25-30 fps.
I can't seem to figure out why there is such a discrepancy. I've posted my CalculateFrameRate method below:
clock_t lastTime;
int numFrames;

//GLUT Setup callback
void Renderer::Setup()
{
    numFrames = 0;
    lastTime = clock();
}

//Called in Display() callback every time I call glutPostRedisplay()
void CalculateFrameRate()
{
    clock_t currentTime = clock();
    double diff = currentTime - lastTime;
    double seconds = diff / CLOCKS_PER_SEC;
    double frameRate = 1.0 / seconds;
    std::cout << "FRAMERATE: " << frameRate << endl;
    numFrames++;
    std::cout << "NUM FRAMES: " << numFrames << endl;
    lastTime = currentTime;
}
The clock function (except on Windows) gives you the CPU time used, so if you are not spinning the CPU for the entire frame time, it will report less time than has actually elapsed. Conversely, if you have 16 cores running 16 of your threads flat out, the time reported by clock will be 16 times the actual wall-clock time.
You can use std::chrono::steady_clock, std::chrono::high_resolution_clock, or, if you are using Linux/Unix, gettimeofday (which gives you microsecond resolution).
Here are a couple of snippets showing how to use gettimeofday to measure milliseconds:
#include <sys/time.h>

double time_to_double(timeval *t)
{
    return (t->tv_sec + (t->tv_usec/1000000.0)) * 1000.0;
}

double time_diff(timeval *t1, timeval *t2)
{
    return time_to_double(t2) - time_to_double(t1);
}

timeval t1, t2;
gettimeofday(&t1, NULL);
... do stuff ...
gettimeofday(&t2, NULL);
cout << "Time taken: " << time_diff(&t1, &t2) << "ms" << endl;
Here's a piece of code to show how to use std::chrono::high_resolution_clock:
auto start = std::chrono::high_resolution_clock::now();
... stuff goes here ...
auto diff = std::chrono::high_resolution_clock::now() - start;
auto t1 = std::chrono::duration_cast<std::chrono::nanoseconds>(diff);
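Applied to the question's code, a sketch of CalculateFrameRate using std::chrono::steady_clock instead of clock(), so it measures wall-clock time rather than CPU time (static locals replace the globals for brevity):

#include <chrono>
#include <iostream>

void CalculateFrameRate()
{
    using Clock = std::chrono::steady_clock;
    static Clock::time_point lastTime = Clock::now();
    static int numFrames = 0;

    Clock::time_point currentTime = Clock::now();
    double seconds = std::chrono::duration<double>( currentTime - lastTime ).count();
    if ( seconds > 0.0 )
        std::cout << "FRAMERATE: " << 1.0 / seconds << std::endl;
    numFrames++;
    std::cout << "NUM FRAMES: " << numFrames << std::endl;
    lastTime = currentTime;
}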

Design fps limiter

I'm trying to cap the animation at 30 fps, so I designed the functions below to achieve that goal. Unfortunately, when I set 60 fps, the animation doesn't behave as fast as it does with no condition checking in setFPSLimit() (DirectX caps game applications at 60 fps by default). How should I fix it to make it work?
getGameTime() counts the time, like a stopwatch in milliseconds, from when the game application starts.
//Called every time you need the current game time
float getGameTime()
{
    UINT64 ticks;
    float time;
    // This is the number of clock ticks since start
    if( !QueryPerformanceCounter((LARGE_INTEGER *)&ticks) )
        ticks = (UINT64)timeGetTime();
    // Divide by frequency to get the time in seconds
    time = (float)(__int64)ticks/(float)(__int64)ticksPerSecond;
    // Subtract the time at game start to get
    // the time since the game started
    time -= timeAtGameStart;
    return time;
}
With fps limit
http://www.youtube.com/watch?v=i3VDOMqI6ic
void update()
{
    if ( setFPSLimit(60) )
        updateAnimation();
}
With No fps limit http://www.youtube.com/watch?v=Rg_iKk78ews
void update()
{
    updateAnimation();
}
bool setFPSLimit(float fpsLimit)
{
    // Convert fps to time
    static float timeDelay = 1 / fpsLimit;
    // Measure time elapsed
    static float timeElapsed = 0;
    float currentTime = getGameTime();
    static float totalTimeDelay = timeDelay + getGameTime();
    if( currentTime > totalTimeDelay )
    {
        totalTimeDelay = timeDelay + getGameTime();
        return true;
    }
    else
        return false;
}

How to capture accurate framerate in OpenGL

What is a good way to get an accurate framerate (frames per second) in native windows opengl c++?
Here's a timer class I used to use back in the day, in an ATL project. I haven't done C++ or OpenGL for a while, but maybe this will give you some ideas:
Usage
// Put this in your class somewhere
CTimer m_timer;

// Initialize the timer using
m_timer.Init();

// Call this every time you draw your scene
m_timer.Update();

// Call this to get the frames/sec
m_timer.GetFPS();
Timer Class
// Timer.h: Timer class used for determining elapsed time and
// frames per second.
//
//////////////////////////////////////////////////////////////////////
#ifndef _E_TIMER_H
#define _E_TIMER_H

#pragma once

//////////////////////////////////////////////////////////////////////
// INCLUDES
//////////////////////////////////////////////////////////////////////
#include <windows.h>
#include <stdio.h>
#include <math.h>

//////////////////////////////////////////////////////////////////////
// CLASSES
//////////////////////////////////////////////////////////////////////
class CTimer
{
private:
    //performance timer variables
    __int64 m_i64PerformanceTimerStart;
    __int64 m_i64PerformanceTimerElapsed;

    //multimedia timer variables
    unsigned long m_ulMMTimerElapsed;
    unsigned long m_ulMMTimerStart;

    //general timer variables
    __int64 m_i64Frequency;
    float m_fResolution;
    bool m_bPerformanceTimer;

    //FPS variables
    float m_fTime1;
    float m_fTime2;
    float m_fDiffTime;
    float m_fFPS;
    int m_iFramesElapsed;

public:
    //----------------------------------------------------------
    // Name: CTimer::CTimer
    // Desc: Default constructor
    // Args: None
    // Rets: None
    //----------------------------------------------------------
    CTimer( void )
        : m_fFPS(0.0f), m_fTime1(0.0f), m_fTime2(0.0f), m_fDiffTime(0.0f), m_iFramesElapsed(0)
    { }

    //----------------------------------------------------------
    // Name: CTimer::~CTimer
    // Desc: Default destructor
    // Args: None
    // Rets: None
    //----------------------------------------------------------
    virtual ~CTimer( void )
    { }

    //----------------------------------------------------------
    // Name: CTimer::Init - public
    // Desc: Initiate the timer for the program
    // Args: None
    // Rets: bool: -true: using performance timer
    //             -false: using multimedia timer
    //----------------------------------------------------------
    bool Init( void )
    {
        //check to see if we are going to be using the performance counter
        if( QueryPerformanceFrequency( ( LARGE_INTEGER* )&m_i64Frequency ) )
        {
            //we are able to use the performance timer
            m_bPerformanceTimer = true;
            //get the current time and store it in m_i64PerformanceTimerStart
            QueryPerformanceCounter( ( LARGE_INTEGER* )&m_i64PerformanceTimerStart );
            //calculate the timer resolution
            m_fResolution = ( float )( ( ( double )1.0f )/( ( double )m_i64Frequency ) );
            //initialize the elapsed time variable
            m_i64PerformanceTimerElapsed = m_i64PerformanceTimerStart;
        }
        //we cannot use the performance counter, so we'll use the multimedia counter
        else
        {
            //we're using the multimedia counter
            m_bPerformanceTimer = false;
            m_ulMMTimerStart   = timeGetTime( );   //record the time the program started
            m_ulMMTimerElapsed = m_ulMMTimerStart; //initialize the elapsed time variable
            m_fResolution = 1.0f/1000.0f;
            m_i64Frequency = 1000;
        }
        return m_bPerformanceTimer;
    }

    //----------------------------------------------------------
    // Name: CTimer::Update - public
    // Desc: Update the timer (perform FPS counter calculations)
    // Args: None
    // Rets: None
    //----------------------------------------------------------
    void Update( void )
    {
        //increase the number of frames that have passed
        m_iFramesElapsed++;
        if ( m_iFramesElapsed % 5 == 1 )
            m_fTime1 = GetTime( )/1000;
        else if ( m_iFramesElapsed % 5 == 0 )
        {
            m_fTime1 = m_fTime2;
            m_fTime2 = GetTime( )/1000;
            m_fDiffTime = ( float )fabs( m_fTime2 - m_fTime1 );
        }
        m_fFPS = 5/( m_fDiffTime );
        /*m_fTime2 = GetTime( )/1000;
        m_fDiffTime = ( float )fabs( m_fTime2 - m_fTime1 );
        if (m_fDiffTime > 1.0f)
        {
            m_fTime1 = m_fTime2;
            m_fFPS = m_iFramesElapsed / ( m_fDiffTime );
            m_iFramesElapsed = 0;
        }
        */
    }

    //----------------------------------------------------------
    // Name: CTimer::GetTime - public
    // Desc: Get the current time since the program started
    // Args: None
    // Rets: float: The time elapsed since the program started.
    //----------------------------------------------------------
    float GetTime( void )
    {
        __int64 i64Time;
        //check to see if we are using the performance counter
        if( m_bPerformanceTimer )
        {
            //get the current performance time
            QueryPerformanceCounter( ( LARGE_INTEGER* )&i64Time );
            //return the time since the program started
            return ( ( float )( i64Time - m_i64PerformanceTimerStart )*m_fResolution )*1000.0f;
        }
        //we are using the multimedia counter
        else
        {
            //return the time since the program started
            return ( ( float )( timeGetTime( ) - m_ulMMTimerStart )*m_fResolution )*1000.0f;
        }
    }

    //----------------------------------------------------------
    // Name: CTimer::GetElapsedSeconds - public
    // Desc: Get the elapsed seconds since the last frame was drawn.
    // Args: elapsedFrames:
    // Rets: float: The seconds elapsed since the last frame was drawn.
    //----------------------------------------------------------
    float GetElapsedSeconds(unsigned long elapsedFrames = 1)
    { return m_fDiffTime; }

    //----------------------------------------------------------
    // Name: CTimer::GetFPS - public
    // Desc: Get the current number of frames per second
    // Args: None
    // Rets: float: the number of frames per second
    //----------------------------------------------------------
    inline float GetFPS( void )
    { return m_fFPS; }
};

#endif // _E_TIMER_H
In C++, my favorite timer is the same one Steve suggests.
There may also be the issue of disabling vsync in your OpenGL app; for me it has always been on by default, and you have to load some function to disable it.
As for a maybe more platform-independent solution, use time.h.
I can't remember the function :( but it returns how long your app has been running in seconds; in that case, just count the number of frames that pass between whole seconds and that's your fps (hypothetical function GetTime()):
// in your loop:
//////////
static int lastTime = GetTime();
static int framesDone = 0;
int currentTime = GetTime();
if(currentTime > lastTime)
{
    int fps = framesDone;
    framesDone = 0;
    lastTime = currentTime;
}
framesDone++;
/////////
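A concrete stand-in for that hypothetical GetTime() using time.h (a sketch; time() returns whole seconds, so the counter above updates once per second):

#include <ctime>

// Whole seconds since the epoch; only differences matter here.
int GetTime()
{
    return static_cast<int>( std::time(nullptr) );
}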
But yeah, for Windows the first answer is the best.
If you need help disabling vsync, let us know.
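For reference, disabling vsync on Windows OpenGL usually means loading the WGL_EXT_swap_control extension; a sketch (it assumes a current OpenGL context and a driver that exposes the extension):

#include <windows.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

// Returns true if vsync could be turned off; call with a current GL context.
bool disableVsync()
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if( !wglSwapIntervalEXT )
        return false;                      // extension not available on this driver
    return wglSwapIntervalEXT(0) == TRUE;  // 0 = uncapped, 1 = sync to refresh
}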