How to Make a Basic FPS Counter? - c++

I'm trying to display my frames-per-second in my cube-rendering program. I would like to see its performance. So, how can I do it? I have done research on this already, but the examples I've seen use either multiple classes and still don't work, or they use libraries that I don't have. Is there a way to get the FPS by using pre-installed libs like ctime? I am using OpenGL with C++.
Here is my (empty) function:
void GetFPS()
{
}
and then I display my FPS in my render function with:
std::cout << xRot << " " << yRot << " " << zRot << " " << FPS << "\n"; //xRot, yRot, and zRot are my cube's rotation.
My program is set to 60FPS, but I would like to see the actual FPS, not what it's set to.

You have to sample two different points in time using clock(). Note, however, that there are several problems:
The resolution of clock() is several milliseconds. (You may work around this with std::chrono, but even chrono's resolution depends on the implementation; on my PC with GCC 4.9.1 I never get better than 16 milliseconds, even with std::chrono.)
Typically, using clock() you will read 0 many times in a row, and at some point you will suddenly measure a real value (in my case it jumps by 15-16 milliseconds at once).
Unless you are using vertical synchronization (vsync), you will not measure the real frame time but only the CPU time spent in your render loop. (To activate vsync you have to call the swap-interval function of your OS/windowing API, such as a SetSwapInterval(1)-style call, or use a library like SDL that provides a portable cross-platform implementation.)
To measure the real rendering time you may use a GL timer query (only one timer can be bound at a time, so while you are measuring the frame rate you can't also measure how long it takes to render something specific); a sketch is shown at the end of this answer.
Do not measure FPS (unless you only want to show it to users); instead measure the frame time in milliseconds, which gives a much more intuitive picture of performance. (Going from 100 to 80 FPS is a 2.5 ms difference; going from 40 to 20 FPS is a 25 ms difference!)
Do something like this:
#include <ctime>    // clock(), clock_t, CLOCKS_PER_SEC
#include <iostream>

double clockToMilliseconds(clock_t ticks){
    // units/(units/time) => time (seconds) * 1000 = milliseconds
    return (ticks/(double)CLOCKS_PER_SEC)*1000.0;
}
//...

clock_t deltaTime = 0;
unsigned int frames = 0;
double frameRate = 30;
double averageFrameTimeMilliseconds = 33.333;

while(rendering){

    clock_t beginFrame = clock();
    render();
    clock_t endFrame = clock();

    deltaTime += endFrame - beginFrame;
    frames++;

    //if you really want FPS
    if(clockToMilliseconds(deltaTime) > 1000.0){ //every second
        frameRate = (double)frames*0.5 + frameRate*0.5; //more stable
        frames = 0;
        deltaTime -= CLOCKS_PER_SEC;
        averageFrameTimeMilliseconds = 1000.0/(frameRate==0?0.001:frameRate);

        if(vsync)
            std::cout << "FrameTime was:" << averageFrameTimeMilliseconds << std::endl;
        else
            std::cout << "CPU time was:" << averageFrameTimeMilliseconds << std::endl;
    }
}
The above code also works when a frame takes several seconds. I do a computation that is updated every second; you could just as well update it more often. (Note: I use exactly this code in most of my projects that need FPS.)
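As mentioned in the list above, GPU-side render time can be measured with a GL timer query. A minimal sketch, assuming an OpenGL 3.3+ context with the function loader already set up; render() stands in for the draw calls you want to time:

GLuint timerQuery = 0;
glGenQueries(1, &timerQuery);

glBeginQuery(GL_TIME_ELAPSED, timerQuery);
render();                                    // the draw calls you want to time
glEndQuery(GL_TIME_ELAPSED);

GLuint64 elapsedNanoseconds = 0;
glGetQueryObjectui64v(timerQuery, GL_QUERY_RESULT, &elapsedNanoseconds); // blocks until the GPU result is ready
double gpuMilliseconds = elapsedNanoseconds / 1.0e6;

In practice you would use several queries in round-robin fashion so that reading the result never stalls the pipeline.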

Simply save the time "ticks" before and after you render your scene, then do a simple calculation.
Here's an example that uses <ctime>'s clock() function. (Note that clock() behaves differently on different platforms.)
#include <ctime>    // clock(), clock_t, CLOCKS_PER_SEC
#include <iostream>

clock_t current_ticks, delta_ticks;
clock_t fps = 0;
while(true)// your main loop. could also be the idle() function in glut or whatever
{
    current_ticks = clock();

    render();

    delta_ticks = clock() - current_ticks; //the number of clock ticks it took to render the scene
    if(delta_ticks > 0)
        fps = CLOCKS_PER_SEC / delta_ticks;
    std::cout << fps << std::endl;
}

Just call this in any loop to measure the number of calls per second.
#include <chrono>
#include <iostream>

void printFPS() {
    // use steady_clock throughout; mixing it with high_resolution_clock is not portable
    static std::chrono::steady_clock::time_point oldTime = std::chrono::steady_clock::now();
    static int fps = 0;
    fps++;

    if (std::chrono::duration_cast<std::chrono::seconds>(std::chrono::steady_clock::now() - oldTime) >= std::chrono::seconds{ 1 }) {
        oldTime = std::chrono::steady_clock::now();
        std::cout << "FPS: " << fps << std::endl;
        fps = 0;
    }
}
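For context, a hypothetical render loop that calls it once per frame, appended to the snippet above (render() and the 16 ms sleep stand in for your actual drawing and vsync):

#include <thread>

void render() { /* draw the scene */ }

int main() {
    while (true) {
        render();
        printFPS();                                                   // counts this call; prints once per second
        std::this_thread::sleep_for(std::chrono::milliseconds(16));   // stand-in for vsync at ~60 Hz
    }
}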

If you want to measure FPS only for the sake of printing it, you may use std::chrono, as it measures wall-clock time. std::clock() instead measures processing (CPU) time, which gives a finer-grained but very different value; you probably don't want to print that as your FPS (compare the two outputs below).
The solution below uses std::chrono to calculate a program's uptime and increments a frame counter after each frame update. Dividing the frame counter by the program's uptime gives you the FPS.
#include <chrono>
#include <iostream>
#include <thread>

using namespace std::chrono;

steady_clock::time_point first_tp;
unsigned long frame_count = 0;

duration<double> uptime()
{
    if (first_tp == steady_clock::time_point{})
        return duration<double>{ 0 };

    return steady_clock::now() - first_tp;
}

double fps()
{
    const double uptime_sec = uptime().count();

    if (uptime_sec == 0)
        return 0;

    return frame_count / uptime_sec;
}

void time_consuming_function()
{
    std::this_thread::sleep_for(milliseconds{ 100 });
}

void run_forever()
{
    std::cout << "fps at first: " << fps() << '\n';
    first_tp = std::chrono::steady_clock::now();

    while (true)
    {
        std::cout << "fps: " << fps() << '\n';
        time_consuming_function();
        frame_count++;
    }
}

int main()
{
    run_forever();
}
Running it on my machine produces:
$ ./measure_fps
fps at first: 0
fps: 0
fps: 9.99108
fps: 9.99025
fps: 9.98997
fps: 9.98984
Whereas adapting it to std::clock() gives
$ ./measure_fps
fps at first: 0
fps: 0
fps: 37037
fps: 25316.5
fps: 23622
fps: 22346.4
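For reference, the std::clock() adaptation that produced those numbers could look roughly like this (a sketch; fps_cpu and first_clock are names I made up, and frame_count is the counter from the code above). Because std::clock() ignores the time spent sleeping, the reported value is far higher than the visible frame rate:

#include <ctime>

std::clock_t first_clock = 0;   // set to std::clock() at the same point where first_tp is set in run_forever()

double fps_cpu()
{
    const double cpu_sec = double(std::clock() - first_clock) / CLOCKS_PER_SEC;

    if (cpu_sec == 0)
        return 0;

    return frame_count / cpu_sec;
}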

Fast and complete
#include <time.h>  // clock, CLOCKS_PER_SEC
#include <stdio.h> // snprintf

unsigned int iMemClock = 0, iCurClock = 0, iLoops = 0;
char aFPS[12];

/* main loop */
{
    if (iMemClock > (iCurClock = clock()))
        iLoops++;                                       /* still inside the current second: count the frame */
    else
    {
        snprintf(aFPS, sizeof(aFPS), "FPS: %u", iLoops); /* one second has passed: format the count */
        iMemClock = iCurClock + CLOCKS_PER_SEC;          /* schedule the next update one second from now */
        iLoops = 0;
    }

    /*
        someRoutines();
    */
}
/* main loop */
/* main loop */

Related

Writing OpenCV frames to disk in C++: is mono-thread write speed limited by anything other than disk throughput?

I'm facing what I consider fairly odd behaviour when writing OpenCV frames to disk: I can't write to disk faster than ~20 fps, regardless of whether I write to my SSD or my HDD. But, and here's the thing: if I use one thread to write the first half of the data and another to write the second half, I can write at double the speed (~40 fps).
I'm testing with the code below: two std::vectors are filled with 1920x1080 frames from my webcam and then handed to two different threads to be written to disk. If, for example, I write two vectors of size 50 to disk, I can do it at an overall speed of ~40 fps. But if I use only one vector of size 100, that drops to half. How can that be? I thought I would be limited by disk throughput, which is sufficient to write at least 30 fps, but I'm missing something and I don't know what. Is there another limit (apart from the CPU) that I'm not taking into account?
#include "opencv2/opencv.hpp"
#include "iostream"
#include "thread"
#include <unistd.h>
#include <chrono>
#include <ctime>
cv::VideoCapture camera(0);
void writeFrames(std::vector<cv::Mat> &frames, std::vector<int> &compression_params, std::string dir)
{
for(size_t i=0; i<frames.size(); i++)
{
cv::imwrite(dir + std::to_string(i) + ".jpg",
frames[i], compression_params);
}
}
int main(int argc, char* argv[])
{
camera.set(cv::CAP_PROP_FRAME_WIDTH, 1920);
camera.set(cv::CAP_PROP_FRAME_HEIGHT, 1080);
camera.set(cv::CAP_PROP_FPS, 30);
std::vector<int> compression_params;
compression_params.push_back(cv::IMWRITE_JPEG_QUALITY);
compression_params.push_back(95); // [0 - 100] (100 better), default 95
size_t vecSizeA = 50;
size_t vecSizeB = 50;
std::vector<cv::Mat> framesA, framesB;
cv::Mat frame;
std::chrono::system_clock::time_point t0 = std::chrono::system_clock::now();
for(unsigned int i=0; i<vecSizeA; i++)
{
camera >> frame;
framesA.push_back(frame);
}
for(unsigned int i=0; i<vecSizeB; i++)
{
camera >> frame;
framesB.push_back(frame);
}
std::chrono::system_clock::time_point t1 = std::chrono::system_clock::now();
std::thread trA(writeFrames, std::ref(framesA), std::ref(compression_params), "/tmp/frames/A/");
std::thread trB(writeFrames, std::ref(framesB), std::ref(compression_params), "/tmp/frames/B/");
trA.join();
trB.join();
std::chrono::system_clock::time_point t2 = std::chrono::system_clock::now();
double tr = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count() / 1000.0;
double tw = std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count() / 1000.0;
std::cout << "Read fps: " << (vecSizeA + vecSizeB) / tr << std::endl;
std::cout << "Write fps: " << (vecSizeA + vecSizeB) / tw << std::endl;
return 0;
}
Edit: just in case it is not clear, I'm looking for a way to achieve at least 30 fps write speed. Disks can handle that (we wouldn't be able to record video at 30 fps otherwise), so my limitation comes from my code or from something I'm missing.
Because two threads are running the same function at the same time, it appears faster than one thread. Your threads are joined at the same place. If you use them like this instead, you will get the same fps as with one thread:
std::thread trA(writeFrames, std::ref(framesA), std::ref(compression_params), "/tmp/frames/A/");
trA.join();
std::thread trB(writeFrames, std::ref(framesB), std::ref(compression_params), "/tmp/frames/B/");
trB.join();
You can also check here to get a better idea.
If, for example, I write 2 vectors of size 50 to disk, I can do it at an overall speed of ~40 fps. But if I only use one vector of size 100, that drops to [~20 fps]. How can it be?
In imwrite you are encoding/compressing the frames as well, so more work is being done than simply writing to disk. That could explain the speedup from using multiple threads.
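One way to check that (a sketch, not a drop-in fix): split the work with cv::imencode so the JPEG compression and the raw disk write are timed separately; writeFramesTimed below is a hypothetical variant of the question's writeFrames.

#include <opencv2/opencv.hpp>
#include <chrono>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

void writeFramesTimed(std::vector<cv::Mat> &frames, std::vector<int> &compression_params, std::string dir)
{
    std::chrono::duration<double> encodeTime{0}, writeTime{0};

    for(size_t i=0; i<frames.size(); i++)
    {
        std::vector<uchar> buf;

        auto t0 = std::chrono::steady_clock::now();
        cv::imencode(".jpg", frames[i], buf, compression_params);             // CPU-bound JPEG compression
        auto t1 = std::chrono::steady_clock::now();

        std::ofstream out(dir + std::to_string(i) + ".jpg", std::ios::binary);
        out.write(reinterpret_cast<const char*>(buf.data()), buf.size());    // the actual disk I/O
        auto t2 = std::chrono::steady_clock::now();

        encodeTime += t1 - t0;
        writeTime  += t2 - t1;
    }

    std::cout << "encode: " << encodeTime.count() << " s, write: " << writeTime.count() << " s" << std::endl;
}

If encoding dominates, more threads (or a lower JPEG quality) will help; if the raw write dominates, you really are disk-bound.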

time based movement sliding object

At the moment I have a function that moves my object based on FPS; if enough frames have not passed, it won't do anything.
It works fine if the computer can run it at that speed.
How would I make it time based instead, and move the object based on the elapsed time?
Here is my code:
typedef unsigned __int64 u64;

auto toolbarGL::Slide() -> void
{
    LARGE_INTEGER li = {};
    QueryPerformanceFrequency(&li);
    u64 freq = static_cast<u64>(li.QuadPart); // clock ticks per second
    u64 period = 60;                          // fps
    u64 delay = freq / period;                // clock ticks between frame paints
    u64 start = 0, now = 0;

    QueryPerformanceCounter(&li);
    start = static_cast<u64>(li.QuadPart);

    while (true)
    {
        // Waits to be ready to slide
        // Keeps looping till stopped then starts to wait again
        SlideEvent.wait();

        QueryPerformanceCounter(&li);
        now = static_cast<u64>(li.QuadPart);
        if (now - start >= delay)
        {
            if (slideDir == SlideFlag::Right)
            {
                if (this->x < 0)
                {
                    this->x += 5;
                    this->controller->Paint();
                }
                else
                    SlideEvent.stop();
            }
            else if (slideDir == SlideFlag::Left)
            {
                if (this->x > -90)
                {
                    this->x -= 5;
                    this->controller->Paint();
                }
                else
                    SlideEvent.stop();
            }
            else
                SlideEvent.stop();

            start = now;
        }
    }
}
You can update your objects by time difference. You need a start timestamp and then compute the difference on each iteration of the global loop, so the global loop is very important: it has to run all the time. My example just shows a call to an Update method for your objects. All your objects should depend on time, not on FPS. FPS behaves differently on different computers, and even the same computer can show a different FPS because of other processes running in the background.
#include <iostream>
#include <chrono>
#include <cstdint>
#include <unistd.h>

//Function to update all objects
void Update( float dt )
{
    //For example
    //for( auto Object : VectorObjects )
    //{
    //    Object->Update(dt);
    //}
}

int main()
{
    typedef std::chrono::duration<float> FloatSeconds;

    auto OldMs = std::chrono::system_clock::now().time_since_epoch();

    const uint32_t SleepMicroseconds = 100;

    //Global loop
    while (true)
    {
        auto CurMs = std::chrono::system_clock::now().time_since_epoch();
        auto DeltaMs = CurMs - OldMs;
        OldMs = CurMs;

        //Cast delta time to float seconds
        auto DeltaFloat = std::chrono::duration_cast<FloatSeconds>(DeltaMs);
        std::cout << "Seconds passed since last update: " << DeltaFloat.count() << " seconds" << std::endl;

        //Update all objects by time as a float value
        Update( DeltaFloat.count() );

        // Sleep to give time for system interaction
        usleep(SleepMicroseconds);

        // Any other actions to calculate can be here
        //...
    }

    return 0;
}
For this example you will see something like this in the console:
Seconds passed since last update: 0.002685 seconds
Seconds passed since last update: 0.002711 seconds
Seconds passed since last update: 0.002619 seconds
Seconds passed since last update: 0.00253 seconds
Seconds passed since last update: 0.002509 seconds
Seconds passed since last update: 0.002757 seconds
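To connect this to the original sliding-toolbar question, an object's Update might look like the hypothetical Slider below: it moves by its speed multiplied by the elapsed time, so it covers the same distance per second on any machine.

// A hypothetical object moved by time, not by frame count
struct Slider
{
    float x = 0.0f;
    float speedPxPerSec = 90.0f;     // pixels per second

    void Update( float dt )          // dt is the DeltaFloat.count() value from above
    {
        x += speedPxPerSec * dt;     // same distance per second regardless of FPS
    }
};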
Your time-based logic seems to be incorrect; here's a sample code snippet. The speed of the object should be the same regardless of the speed of the system. Instead of QueryPerformanceFrequency, which is platform dependent, use std::chrono.
#include <chrono>
#include <thread>

void animate(bool& stop)
{
    static float speed = 1080/5; // = 1080px / 5sec = 5 sec to cross the screen
    static std::chrono::system_clock::time_point start = std::chrono::system_clock::now();
    float fps;
    int object_x = 1080;

    while(!stop)
    {
        //calculate fractional time
        auto now = std::chrono::system_clock::now();
        auto diff = now - start;
        auto lapse_milli = std::chrono::duration_cast<std::chrono::milliseconds>(diff);
        auto lapse_sec = lapse_milli.count()/1000.0f;   // float division; integer division would truncate to 0

        //apply to object
        int incr_x = speed * lapse_sec;
        object_x -= incr_x;
        if( object_x < 0 ) object_x = 1080;

        // render object here

        fps = 1000.0f/lapse_milli.count();   // frames per second = 1 / frame time
        //print fps

        std::this_thread::sleep_for(std::chrono::milliseconds(100)); // change to achieve a desired fps rate
        start = now;
    }
}

Limit while loop to run at 30 "FPS" using a delta variable C++

I basically need a while loop to only run at 30 "FPS".
I was told to do this:
"Inside your while loop, make a deltaT , and if that deltaT is lesser than 33 miliseconds use sleep(33-deltaT) ."
But I really wasn't sure how to initialize the delta or what to set this variable to, and I couldn't get a reply from the person who suggested it.
I'm also not sure why the value in sleep is 33 instead of 30.
Does anyone know what I can do about this?
This is mainly for a game server to update players at 30FPS, but because I'm not doing any rendering on the server, I need a way to just have the code sleep to limit how many times it can run per second or else it will process the players too fast.
You basically need to do something like this:
int now = GetTimeInMilliseconds();
int lastFrame = GetTimeInMilliseconds();
while(running)
{
    now = GetTimeInMilliseconds();
    int delta = now - lastFrame;
    lastFrame = now;

    if(delta < 33)
    {
        Sleep(33 - delta);
    }

    //...
    Update();
    Draw();
}
That way you calculate the number of milliseconds that passed between the current frame and the last frame, and if it's smaller than 33 milliseconds (1000/30: 1000 milliseconds in a second divided by 30 FPS = 33.333...) you sleep until 33 milliseconds have passed. As for the GetTimeInMilliseconds() and Sleep() functions, they depend on the library and/or platform you're using.
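A portable sketch of the same loop using std::chrono and std::this_thread (the GetTimeInMilliseconds()/Sleep() placeholders replaced by standard facilities; Update() and Draw() are still whatever your server does each tick):

#include <chrono>
#include <thread>

void Update() { /* game logic */ }
void Draw()   { /* or send state to players */ }

int main()
{
    using namespace std::chrono;

    bool running = true;
    auto lastFrame = steady_clock::now();

    while(running)
    {
        auto now = steady_clock::now();
        auto delta = duration_cast<milliseconds>(now - lastFrame);
        lastFrame = now;

        if(delta < milliseconds(33))
            std::this_thread::sleep_for(milliseconds(33) - delta);

        Update();
        Draw();
    }
}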
C++11 provides a simple mechanism for that:
#include <chrono>
#include <thread>
#include <iostream>

using namespace std;
using namespace std::chrono;

void doStuff(){
    std::cout << "Loop executed" << std::endl;
}

int main() {
    time_point<system_clock> t = system_clock::now();

    while (1) {
        doStuff();
        t += milliseconds(33);
        this_thread::sleep_until(t);
    }
}
The only thing you have to be aware of though is that if one loop iteration takes longer than the 33ms, the next two iterations will be executed without a pause in between (until t has caught up with the real time), which may or may not be what you want.
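If you would rather drop the missed ticks than run those catch-up iterations, a small tweak to the loop above (same includes and using-directives) is to resynchronize t whenever it falls behind:

while (1) {
    doStuff();
    t += milliseconds(33);
    if (t < system_clock::now())   // an iteration overran its 33 ms budget
        t = system_clock::now();   // resynchronize instead of catching up
    this_thread::sleep_until(t);
}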
Glenn Fiedler wrote a nice article on this topic a few years ago. Hacking with sleep() is not very precise; instead you want to run your physics a fixed number of times per second, let your graphics run freely, and, between frames, do as many fixed timesteps as time has passed.
The code that follows looks intimidating at first, but once you get the idea, it becomes simple; it's best to read the article completely.
Fix Your Timestep
double t = 0.0;
double dt = 0.01;

double currentTime = hires_time_in_seconds();
double accumulator = 0.0;

State previousState;
State currentState;

while ( !quit )
{
    double newTime = hires_time_in_seconds();
    double frameTime = newTime - currentTime;
    if ( frameTime > 0.25 )
        frameTime = 0.25;
    currentTime = newTime;

    accumulator += frameTime;

    while ( accumulator >= dt )
    {
        previousState = currentState;
        integrate( currentState, t, dt );
        t += dt;
        accumulator -= dt;
    }

    const double alpha = accumulator / dt;

    State state = currentState * alpha +
                  previousState * ( 1.0 - alpha );

    render( state );
}
If the article ever goes offline, there should be several backups available; in any case, Gaffer on Games has been around for many years already.
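The State, integrate, render, and hires_time_in_seconds names in the snippet above are placeholders from the article; a minimal sketch of what they might stand for (my own filler, not the article's code):

#include <chrono>

struct State
{
    double position = 0.0;
    double velocity = 0.0;
};

// Blending operators used by the interpolation step
State operator*(const State &s, double k) { return { s.position * k, s.velocity * k }; }
State operator+(const State &a, const State &b) { return { a.position + b.position, a.velocity + b.velocity }; }

void integrate(State &s, double /*t*/, double dt)
{
    const double acceleration = -9.8;   // e.g. gravity
    s.velocity += acceleration * dt;
    s.position += s.velocity * dt;
}

void render(const State &s)
{
    // draw the object at s.position
}

double hires_time_in_seconds()
{
    using namespace std::chrono;
    return duration<double>(steady_clock::now().time_since_epoch()).count();
}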

OpenCV calculate time detection features

I'm trying to calculate the time my program takes to detect the keypoints in an image.
If I do it twice in my C++ program (with the same image), there is a huge difference between the two runs. The first time it takes around 600-800 ms and the second time just 100-200 ms.
Does anyone know what is happening?
Here is the code where I get the times:
struct timeval t1, t2;
Ptr<SURF> detector = SURF::create(400);
gettimeofday(&t1, 0x0);
detector->detect( imagen1, keypoints_1 );
gettimeofday(&t2, 0x0);
int milliSeconds = Utils::calculateDiff(t1, t2);
Here is the code where I calculate the diff:
static int calculateDiff(timeval t1, timeval t2)
{
    return (t2.tv_sec - t1.tv_sec) * 1000 + (t2.tv_usec - t1.tv_usec)/1000;
}
Here is a sample (screenshot of the measured times omitted).
Note that gettimeofday measures wall time, while problems like this usually call for CPU/clock time.
For profiling, try something (even more portable) like this:
int64 t0 = cv::getTickCount();
//
// some lengthy op.
//
int64 t1 = cv::getTickCount();
double secs = (t1-t0)/cv::getTickFrequency();
You can use getTickCount() and getTickFrequency() to measure time. However, there is a truncation problem when using these functions. After some tries, this code worked for me:
long double execTime, prevCount, time;
execTime = prevCount = time = 0;

for (;;)
{
    prevCount = getTickCount() * 1.0000;

    /* do image processing */

    execTime = (getTickCount() * 1.0000 - prevCount) / (getTickFrequency() * 1.0000);
    time += execTime;
    cout << "execTime = " << execTime << "; time = " << time << endl;
}

Calculating glut framerate using clocks_per_sec much too slow

I'm trying to calculate the framerate of a GLUT window by calling a custom CalculateFrameRate method I made at the beginning of my Display() callback function. I call glutPostRedisplay() after calculations I perform every frame so Display() gets called for every frame.
I also have an int numFrames that increments every frame (every time glutPostRedisplay gets called), and I print that out as well. My CalculateFrameRate method reports about 7 fps, but if I look at a stopwatch and compare it to how quickly my numFrames counter increases, the framerate is easily 25-30 fps.
I can't figure out why there is such a discrepancy. I've posted my CalculateFrameRate method below.
clock_t lastTime;
int numFrames;

//GLUT Setup callback
void Renderer::Setup()
{
    numFrames = 0;
    lastTime = clock();
}

//Called in Display() callback every time I call glutPostRedisplay()
void CalculateFrameRate()
{
    clock_t currentTime = clock();
    double diff = currentTime - lastTime;
    double seconds = diff / CLOCKS_PER_SEC;
    double frameRate = 1.0 / seconds;
    std::cout << "FRAMERATE: " << frameRate << endl;

    numFrames++;
    std::cout << "NUM FRAMES: " << numFrames << endl;

    lastTime = currentTime;
}
The function clock (except on Windows) gives you the CPU time used, so if you are not spinning the CPU for the entire frame time, it will report a lower time than expected. Conversely, if you have 16 cores running 16 of your threads flat out, the time reported by clock will be 16 times the actual time.
You can use std::chrono::steady_clock, std::chrono::high_resolution_clock, or, if you are using Linux/Unix, gettimeofday (which gives you microsecond resolution).
Here are a couple of snippets showing how to use gettimeofday to measure milliseconds:
double time_to_double(timeval *t)
{
    return (t->tv_sec + (t->tv_usec/1000000.0)) * 1000.0;
}

double time_diff(timeval *t1, timeval *t2)
{
    return time_to_double(t2) - time_to_double(t1);
}

timeval t1, t2;   // from <sys/time.h>

gettimeofday(&t1, NULL);
... do stuff ...
gettimeofday(&t2, NULL);
cout << "Time taken: " << time_diff(&t1, &t2) << "ms" << endl;
Here's a piece of code to show how to use std::chrono::high_resolution_clock:
auto start = std::chrono::high_resolution_clock::now();
... stuff goes here ...
auto diff = std::chrono::high_resolution_clock::now() - start;
auto t1 = std::chrono::duration_cast<std::chrono::nanoseconds>(diff);
std::cout << "Time taken: " << t1.count() << " ns" << std::endl;
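Applied to the question's CalculateFrameRate, a hedged rewrite using std::chrono::steady_clock instead of clock() (the Renderer class is dropped here for brevity):

#include <chrono>
#include <iostream>

std::chrono::steady_clock::time_point lastTime;
int numFrames = 0;

//GLUT Setup callback
void Setup()
{
    numFrames = 0;
    lastTime = std::chrono::steady_clock::now();
}

//Called in Display() every time glutPostRedisplay() triggers a frame
void CalculateFrameRate()
{
    auto currentTime = std::chrono::steady_clock::now();
    double seconds = std::chrono::duration<double>(currentTime - lastTime).count(); // wall-clock frame time

    if (seconds > 0.0)
        std::cout << "FRAMERATE: " << 1.0 / seconds << std::endl;

    numFrames++;
    std::cout << "NUM FRAMES: " << numFrames << std::endl;

    lastTime = currentTime;
}

Measured against the wall clock, the printed framerate should match the 25-30 fps you observe with a stopwatch.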