How can the calculated FPS be greater than the camera's declared FPS? - c++

I am trying to measure frames per second while processing frames from the camera. The calculations are nothing special and can be found in this question: How to write function with parameter which type is deduced with 'auto' word?
My camera is pretty old, and the manufacturer-declared FPS is no more than 30 at 640x480. However, when I run those calculations, they show me 40-50 FPS on live streams. How can that be?
Update: Code:
#include <chrono>
#include <iostream>
using std::cerr;
using std::cout;
using std::endl;
#include <string>
#include <sstream> // needed for std::stringstream used in showFPS()
#include <numeric>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/imgproc.hpp>
using cv::waitKey;
using cv::Mat;
using time_type = decltype(std::chrono::high_resolution_clock::now());
void showFPS(Mat* frame, const time_type &startTime);
int main(int argc, char** argv) {
    cv::VideoCapture capture;
    std::string videoDevicePath = "/dev/video0";
    if (!capture.open(videoDevicePath)) {
        std::cerr << "Unable to open video capture.";
        return 1;
    }
    //TODO normally through cmd or from cameraParameters.xml
    bool result;
    result = capture.set(CV_CAP_PROP_FOURCC, CV_FOURCC('M', 'J', 'P', 'G'));
    if (result) {
        std::cout << "Camera: PROP_FOURCC: MJPG option set.";
    } else {
        std::cerr << "Camera: PROP_FOURCC: MJPG option was not set.";
    }
    result = capture.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    if (result) {
        std::cout << "Camera: PROP_FRAME_WIDTH option set.";
    } else {
        std::cerr << "Camera: PROP_FRAME_WIDTH option was not set.";
    }
    result = capture.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
    if (result) {
        std::cout << "Camera: PROP_FRAME_HEIGHT option set.";
    } else {
        std::cerr << "Camera: PROP_FRAME_HEIGHT option was not set.";
    }
    result = capture.set(CV_CAP_PROP_FPS, 30);
    if (result) {
        std::cout << "Camera: PROP_FPS option set.";
    } else {
        std::cerr << "Camera: PROP_FPS option was not set.";
    }
    Mat frame, raw;
    while (cv::waitKey(5) != 'q') {
        auto start = std::chrono::high_resolution_clock::now();
        capture >> raw;
        if (raw.empty()) {
            return 1;
        }
        if (raw.channels() > 1) {
            cv::cvtColor(raw, frame, CV_BGR2GRAY);
        } else {
            frame = raw;
        }
        showFPS(&frame, start); // was showFPS(&raw1, start): raw1 is not declared anywhere
    }
    return 0;
}
void showFPS(Mat* frame, const time_type &startTime) {
    typedef std::chrono::duration<float> fsec_t;
    auto stopTime = std::chrono::high_resolution_clock::now();
    fsec_t duration = stopTime - startTime;
    double sec = duration.count();
    double fps = (1.0 / sec);
    std::stringstream s;
    s << "FPS: " << fps;
    // the constants:: values come from the asker's own code and are not shown here
    cv::putText(*frame, s.str(), cv::Point2f(20, 20), constants::font,
                constants::fontScale, constants::color::green);
}

A camera's FPS is the number of frames the camera can provide per second.
It means the camera provides a new frame every 33 ms.
On the other hand, what you are measuring is not FPS.
You are measuring the inverse of the time taken to retrieve a new frame plus the color conversion.
And this time is 20-25 ms, based on your results.
This is not a correct way of measuring FPS, if only because you can't guarantee the synchronization of these two processes.
If you want to measure FPS correctly, you can measure the time taken to show the last N frames.
Pseudocode:
counter = 0;
start = getTime();
N = 100;
while (true) {
    captureFrame();
    convertColor();
    counter++;
    if (counter == N) {
        fps = N / (getTime() - start);
        printFPS(fps);
        counter = 0;
        start = getTime();
    }
}
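For reference, here is a minimal compilable C++ sketch of that pseudocode, assuming OpenCV's C++ API with a default camera at index 0 (the choice of N = 100 and cv::COLOR_BGR2GRAY are illustrative, not part of the original answer):
#include <chrono>
#include <iostream>
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture capture(0); // assumed default camera
    if (!capture.isOpened()) return 1;

    const int N = 100; // number of frames to average over
    int counter = 0;
    cv::Mat raw, gray;
    auto start = std::chrono::steady_clock::now();

    while (cv::waitKey(5) != 'q') {
        capture >> raw; // captureFrame()
        if (raw.empty()) break;
        cv::cvtColor(raw, gray, cv::COLOR_BGR2GRAY); // convertColor()
        if (++counter == N) {
            auto now = std::chrono::steady_clock::now();
            std::chrono::duration<double> elapsed = now - start;
            std::cout << "FPS: " << N / elapsed.count() << std::endl;
            counter = 0;
            start = now;
        }
    }
    return 0;
}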

Aleksey Petrov's answer is not bad, but while averaging over the last N frames gives smoother values, one can measure the frame rate fairly accurately without averaging. Here is the code from the question, modified to do that:
// see question for earlier code
Mat frame, raw;
time_type prevTimePoint; // default-initialized to epoch value
while (waitKey(1) != 'q') {
    capture >> raw;
    auto timePoint = std::chrono::high_resolution_clock::now();
    if (raw.empty()) {
        return 1;
    }
    if (raw.channels() > 1) {
        cv::cvtColor(raw, frame, CV_BGR2GRAY);
    } else {
        frame = raw;
    }
    showFPS(&frame, prevTimePoint, timePoint);
    cv::imshow("frame", frame);
}
return 0;
}

void showFPS(Mat* frame, time_type &prevTimePoint, const time_type &timePoint) {
    if (prevTimePoint.time_since_epoch().count()) {
        std::chrono::duration<float> duration = timePoint - prevTimePoint;
        cv::putText(*frame, "FPS: " + std::to_string(1/duration.count()),
                    cv::Point2f(20, 40), 2, 2, cv::Scalar(0,255,0));
    }
    prevTimePoint = timePoint;
}
Note that this measures the time point right after capture >> raw returns, which (without messing with OpenCV's internals) is the closest one can get to the moment the camera delivered the frame, and that the time is measured only once per loop and compared against the previous measurement. This gives a quite precise current frame rate. Of course, if the processing takes longer than 1/(frame rate), the measurement will be off.
The reason the question's code gave too high a frame rate was actually the code between the two time measurements: the now() in showFPS() and the now() in the while loop. My hunch is this code included cv::imshow(), which is not in the question and which, together with cv::waitKey(5) and cv::putText(), is likely responsible for most of the "missing time" in the frame rate calculation (causing too high a frame rate).

You have a cvtColor in between, and it affects your timing because the processing time of cvtColor can vary from loop to loop (probably because of other processes running in Windows).
Consider this example:
You capture the first frame at moment 0, then run cvtColor, which
takes e.g. 10 ms, so you record stopTime at moment 10 ms. 23 ms later
(33-10) you capture the second frame. But this time cvtColor takes
5 ms (it can happen), and you record the second stopTime at moment
38 ms (33+5). So the first tick was at moment 10 and the second tick
is at moment 38, and your fps becomes
1000/(38-10) = 35.7
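To see this effect for yourself, you can time the retrieval and the conversion separately; a rough sketch (the camera index and the window-less loop are assumptions):
#include <chrono>
#include <iostream>
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture capture(0); // assumed default camera
    if (!capture.isOpened()) return 1;
    cv::Mat raw, frame;
    while (cv::waitKey(5) != 'q') {
        auto t0 = std::chrono::steady_clock::now();
        capture >> raw; // frame retrieval
        auto t1 = std::chrono::steady_clock::now();
        if (raw.empty()) break;
        cv::cvtColor(raw, frame, cv::COLOR_BGR2GRAY); // color conversion
        auto t2 = std::chrono::steady_clock::now();
        std::chrono::duration<double, std::milli> captureMs = t1 - t0;
        std::chrono::duration<double, std::milli> convertMs = t2 - t1;
        // the capture time should hover around the camera's frame interval,
        // while the cvtColor time varies from loop to loop
        std::cout << "capture: " << captureMs.count()
                  << " ms, cvtColor: " << convertMs.count() << " ms\n";
    }
    return 0;
}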

Related

How would I get a more consistent animation function in SDL2?

I have recently been working with SDL2 and I was wondering if there is a more consistent way of getting a frame number for an animation function. Currently, the animation's duration varies by some milliseconds between runs, and it looks slightly off whenever I play it multiple times.
void Character::animate(int endState)
{
if(animated && frames != 0)
{
int frameNumber = (static_cast<int>((SDL_GetTicks() / speed) % frames) + 1);
if(frameNumber != lastFrame)
{
curFrame = frameCounter;
frameCounter++;
if(frameCounter >= (frames + animIndex))
{
std::cout << SDL_GetTicks() << std::endl;
animated = false;
currentState = endState;
}
else{
gameObject->getComponent<Spritesheet>()->setFrame(frameCounter);
}
//std::cout << frameCounter;
}
lastFrame = frameNumber;
}
}
Is there a better way than SDL_GetTicks(), given its inconsistent start time?
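This was left unanswered in the excerpt above, but one common approach is to record the tick count once when the animation is triggered and derive the frame index from the elapsed time since then, so the absolute value of SDL_GetTicks() no longer matters. A sketch under that assumption (the Animation struct and the speed/frames values are illustrative, not from the question):
#include <SDL2/SDL.h> // include path may differ depending on your setup

struct Animation {
    Uint32 startTicks = 0; // recorded when the animation begins
    int frames = 8;        // hypothetical frame count
    int speed = 100;       // hypothetical milliseconds per frame

    void start() { startTicks = SDL_GetTicks(); }

    // Frame index derived from the time elapsed since start(), so the
    // animation always begins at frame 0 no matter when it is triggered.
    int currentFrame() const {
        Uint32 elapsed = SDL_GetTicks() - startTicks;
        return static_cast<int>(elapsed / speed) % frames;
    }

    bool finished() const {
        return SDL_GetTicks() - startTicks >= static_cast<Uint32>(frames * speed);
    }
};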

Why can't I get 3 webcams to run in parallel with pthreads?

I need some help with OpenCV and threading in C++. I am using a Raspberry Pi 3B, which is quad-core. There are 4 USB 2.0 devices: 3 USB 2.0 webcams and a USB 2.0 Arduino. Each webcam's cable has been spliced so it gets its voltage separately, so only the data wires go into the Pi. The Arduino is powered by the Pi.
Now to the issue. I have 4 threads going, 3 for the webcams and 1 for the Arduino. The webcam threads wait for the Arduino to signal them in parallel. The issue is, I cannot get the 3 cameras to work simultaneously. I can get any combination of 2 cameras to work, but not 3. When I try 3 webcams, I get an error of empty frames.
Link To Webcam Used
I tried with no success:
sudo modprobe uvcvideo nodrop=1 timeout=5000 quirks=0x80
[ WARN:2] global /home/pi/opencv/modules/videoio/src/cap_v4l.cpp (1004) tryIoctl VIDEOIO(V4L2:/dev/video0): select() timeout.
Empty Frame
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.3.0) /home/pi/opencv/modules/imgcodecs/src/loadsave.cpp:738: error: (-215:Assertion failed) !_img.empty() in function 'imwrite'
Aborted
#include<stdio.h>
#include<stdlib.h>
#include<thread>
#include<iostream>
#include<fstream>
#include<opencv2/highgui/highgui.hpp>
#include<opencv2/imgproc/imgproc.hpp>
#include<opencv2/core/core.hpp>
#include"pthread.h"
#include<stdlib.h>
#include<string>
#include<boost/date_time/posix_time/posix_time.hpp>
#include<mutex>
#include<condition_variable>
#include<atomic>
std::mutex m;
std::condition_variable suspend_cv;
std::atomic<bool> enabled (false);
struct args {
int camNum;
char* camName;
int x;
int y;
};
void* camSetup(void *inputs){
cv::VideoCapture stream(((struct args*)inputs)->camNum);
cv::Mat Frame;
//cv::Mat resizeFrame;
if(!stream.isOpened()){
std::cout << "Cannot Open Camera: " << ((struct args*)inputs)->camNum << '\n'; // use '<<', not '+': adding an int to a string literal is pointer arithmetic
}
else{
std::cout << "Camera Open: " << ((struct args*) inputs)->camNum << '\n';
}
std::unique_lock<std::mutex> lock(m);
while (true){
stream >> Frame;
//cv::resize(Frame, resizeFrame, cv::Size(25, 25));
while (enabled){
// Get current time from the clock, using microseconds resolution
const boost::posix_time::ptime now =
boost::posix_time::microsec_clock::local_time();
// Get the time offset in current day
const boost::posix_time::time_duration td = now.time_of_day();
const long year = now.date().year();
const long month = now.date().month();
const long day = now.date().day();
const long hours = td.hours();
const long minutes = td.minutes();
const long seconds = td.seconds();
const long milliseconds = td.total_milliseconds() -
((hours * 3600 + minutes * 60 + seconds) * 1000);
char buf[80];
sprintf(buf, "%02ld%02ld%02ld_%02ld%02ld%02ld__%03ld",
year, month, day, hours, minutes, seconds, milliseconds);
std::string sBuf = buf;
std::string PATH = std::string("/home/pi/Desktop/") +
((struct args*)inputs)->camName +
'/' +
std::string(((struct args*)inputs)->camName) +
'_' +
sBuf +
".jpeg";
if (Frame.empty())
{
std::cout << "Empty Frame" <<std::endl;
stream >> Frame;
//cv::resize(Frame, resizeFrame, cv::Size(25, 25));
}
cv::imwrite(PATH, Frame);
if (cv::waitKey(30) >= 0){
break;
}
suspend_cv.wait(lock);
}
}
return NULL;
}
void* ardReader(void*){
// ARDUINO STUFF HAPPENS…
return NULL;
}
int main(){
struct args *cameraA = (struct args *)malloc(sizeof(struct args));
struct args *cameraB = (struct args *)malloc(sizeof(struct args));
struct args *cameraC = (struct args *)malloc(sizeof(struct args));
cameraA->camNum = 0;
char camA[] = "camA";
cameraA->camName = camA;
cameraA->x = 100;
cameraA->y =100;
cameraB->camNum = 2;
char camB[] = "camB";
cameraB->camName = camB;
cameraB->x = 100;
cameraB->y = 300;
cameraC->camNum = 4;
char camC[] = "camC";
cameraC->camName = camC;
cameraC->x = 100;
cameraC->y = 500;
pthread_t t1, t2, t3, t4;
pthread_create(&t1, NULL, camSetup, (void *) cameraA);
pthread_create(&t2, NULL, camSetup, (void *) cameraB);
pthread_create(&t3, NULL, camSetup, (void *) cameraC);
pthread_create(&t4, NULL, ardReader, NULL);
pthread_join(t1, NULL);
pthread_join(t2, NULL);
pthread_join(t3, NULL);
pthread_join(t4, NULL);
return 0;
}
It seems that a difference of 20 ms made a big enough difference for threading/OpenCV. To answer my own question in case someone comes across this issue again, change the waitKey time to the following...
Incorrect for threading in this case
if (cv::waitKey(30) >= 0){
break;
}
Correct for threading in this case
if (cv::waitKey(10) == 27){
break;
}
I also moved the cv::waitKey to the main thread.
Though I still had the errors, this did prove to work at times.
One last thing I did was open and close the camera every time I needed to grab an image. This allowed me to get a frame guaranteed every time; it is slower, but it always worked.
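A minimal sketch of that last workaround (the function name is mine; it opens the device, grabs a single frame, and releases the device again):
#include <opencv2/opencv.hpp>

// Slower than keeping the stream open, but it avoided the empty frames here.
bool grabOneFrame(int camNum, cv::Mat &out) {
    cv::VideoCapture stream(camNum);
    if (!stream.isOpened())
        return false;
    stream >> out;    // read one frame
    stream.release(); // free the device (and USB bandwidth) for the other cameras
    return !out.empty();
}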

How to make a timer that counts down from 30 by 1 every second?

I want to make a timer that displays 30, 29, etc., counting down every second, and stops when there is an input. I know you can do this:
for (int i = 60; i > 0; i--)
{
cout << i << endl;
Sleep(1000);
}
This will output 60, 59 etc. But this doesn't allow for any input while the program is running. How do I make it so you can input things while the countdown is running?
Context
This is not a homework assignment. I am making a text adventure game, and there is a section where an enemy rushes at you and you have 30 seconds to decide what you are going to do. I don't know how to let the user input things while the timer is running.
Your game runs at about 1 frame per second, so user input is a problem. Normally games have a higher frame rate, like this:
#include <Windows.h>
#include <iostream>
int main() {
// Initialization
ULARGE_INTEGER initialTime;
ULARGE_INTEGER currentTime;
FILETIME ft;
GetSystemTimeAsFileTime(&ft);
initialTime.LowPart = ft.dwLowDateTime;
initialTime.HighPart = ft.dwHighDateTime;
LONGLONG countdownStartTime = 300000000; // 30 seconds, in 100-nanosecond units
LONGLONG displayedNumber = 31; // prevents 31 from being displayed
// Game loop
while (true) {
GetSystemTimeAsFileTime(&ft); // FILETIME is in 100-nanosecond units
currentTime.LowPart = ft.dwLowDateTime;
currentTime.HighPart = ft.dwHighDateTime;
//// Read Input ////
bool stop = false;
SHORT key = GetKeyState('S');
if (key & 0x8000)
stop = true;
//// Game Logic ////
LONGLONG elapsedTime = currentTime.QuadPart - initialTime.QuadPart;
LONGLONG currentNumber_100ns = countdownStartTime - elapsedTime;
if (currentNumber_100ns <= 0) {
std::cout << "Boom!" << std::endl;
break;
}
if (stop) {
std::cout << "Stopped" << std::endl; // std::cout, not std::wcout: don't mix narrow and wide streams
break;
}
//// Render ////
LONGLONG currentNumber_s = currentNumber_100ns / 10000000 + 1;
if (currentNumber_s != displayedNumber) {
std::cout << currentNumber_s << std::endl;
displayedNumber = currentNumber_s;
}
}
system("pause");
}
If you're running this on Linux, you can use the classic select() call. When used in a while loop, you can wait for input on one or more file descriptors while also providing a timeout after which select() must return. Wrap it all in a loop and you have both your countdown and your handling of standard input.
https://linux.die.net/man/2/select
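A minimal sketch of that approach (the 30-second countdown and the messages are illustrative; stdin is line-buffered, so the user has to press Enter):
#include <iostream>
#include <string>
#include <sys/select.h>
#include <unistd.h>

int main() {
    for (int i = 30; i > 0; --i) {
        std::cout << i << std::endl;

        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(STDIN_FILENO, &fds);
        timeval timeout{1, 0}; // wait at most 1 second for input

        int ready = select(STDIN_FILENO + 1, &fds, nullptr, nullptr, &timeout);
        if (ready > 0) { // stdin has data: read it and stop the countdown
            std::string line;
            std::getline(std::cin, line);
            std::cout << "You chose: " << line << std::endl;
            return 0;
        }
        // ready == 0 means the 1-second timeout expired: keep counting down
    }
    std::cout << "Too late!" << std::endl;
    return 0;
}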

Capturing video with monochrome camera in aruco opencv

I'm currently trying to use a monochrome camera with the aruco and opencv libraries in order to speed up the computation and get better marker detection. The problem I am having is that the monochrome feed is tripled on screen when running the aruco_test program, so the resolution is diminished by two thirds and each marker is detected three times instead of once.
I saw threads discussing similar problems with monochrome cameras in opencv. Some answers suggested cropping the image (which fixes the tripling problem but not the smaller resolution), but it all seems to be caused by the conversion from either BGR2GRAY or GRAY2BGR.
Any help on what exactly is causing the tripled images, and how to bypass that part in either the aruco or opencv source code, would be appreciated.
INFO :
Driver Info (not using libv4l2):
Driver name : uvcvideo
Card type : oCam-1MGN-U
Bus info : usb-0000:00:1d.0-1.5
Driver version: 3.13.11
Capabilities : 0x84000001
Video Capture
Streaming
Device Capabilities
Device Caps : 0x04000001
Video Capture
Streaming
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
Width/Height : 1280/960
Pixel Format : 'GREY'
Field : None
Bytes per Line: 1280
Size Image : 1228800
Colorspace : Unknown (00000000)
Crop Capability Video Capture:
Bounds : Left 0, Top 0, Width 1280, Height 960
Default : Left 0, Top 0, Width 1280, Height 960
Pixel Aspect: 1/1
Streaming Parameters Video Capture:
Capabilities : timeperframe
Frames per second: 30.000 (30/1)
Read buffers : 0
brightness (int) : min=0 max=127 step=1 default=64 value=64
exposure_absolute (int) : min=1 max=625 step=1 default=39 value=39
Using Aruco 2.0.19 and OpenCV 3.2
Since the pixel format is not YUYV, I cannot simply take the Y channel from the camera feed.
code executed :
#include <string>
#include <iostream>
#include <fstream>
#include <sstream>
#include "aruco.h"
#include "cvdrawingutils.h"
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using namespace cv;
using namespace aruco;
MarkerDetector MDetector;
VideoCapture TheVideoCapturer;
vector< Marker > TheMarkers;
Mat TheInputImage, TheInputImageCopy;
CameraParameters TheCameraParameters;
void cvTackBarEvents(int pos, void *);
pair< double, double > AvrgTime(0, 0); // determines the average time required for detection
int iThresParam1, iThresParam2;
int waitTime = 0;
class CmdLineParser{
int argc; char **argv;
public:
CmdLineParser(int _argc, char **_argv) : argc(_argc), argv(_argv) {}
bool operator[](string param) {
int idx = -1;
for (int i = 0; i < argc && idx == -1; i++)
if (string(argv[i]) == param) idx = i;
return (idx != -1);
}
string operator()(string param, string defvalue = "-1") {
int idx = -1;
for (int i = 0; i < argc && idx == -1; i++)
if (string(argv[i]) == param) idx = i;
if (idx == -1) return defvalue;
else return (argv[idx + 1]);
}
};
cv::Mat resize(const cv::Mat &in,int width){
if (in.size().width<=width) return in;
float yf=float( width)/float(in.size().width);
cv::Mat im2;
cv::resize(in,im2,cv::Size(width,float(in.size().height)*yf));
return im2;
}
int main(int argc, char **argv) {
try {
CmdLineParser cml(argc,argv);
if (argc < 2 || cml["-h"]) {
cerr << "Invalid number of arguments" << endl;
cerr << "Usage: (in.avi|live[:idx_cam=0]) [-c camera_params.yml] [-s marker_size_in_meters] [-d dictionary:ARUCO by default] [-h]" << endl;
cerr<<"\tDictionaries: "; for(auto dict:aruco::Dictionary::getDicTypes()) cerr<<dict<<" ";cerr<<endl;
cerr<<"\t Instead of these, you can directly indicate the path to a file with your own generated dictionary"<<endl;
return false;
}
/////////// PARSE ARGUMENTS
string TheInputVideo = argv[1];
// read camera parameters if passed
if (cml["-c"] ) TheCameraParameters.readFromXMLFile(cml("-c"));
float TheMarkerSize = std::stof(cml("-s","-1"));
//aruco::Dictionary::DICT_TYPES TheDictionary= Dictionary::getTypeFromString( cml("-d","ARUCO") );
/////////// OPEN VIDEO
// read from camera or from file
if (TheInputVideo.find("live") != string::npos) {
int vIdx = 0;
// check if the :idx is here
char cad[100];
if (TheInputVideo.find(":") != string::npos) {
std::replace(TheInputVideo.begin(), TheInputVideo.end(), ':', ' ');
sscanf(TheInputVideo.c_str(), "%s %d", cad, &vIdx);
}
cout << "Opening camera index " << vIdx << endl;
TheVideoCapturer.open(vIdx);
waitTime = 10;
}
else TheVideoCapturer.open(TheInputVideo);
// check video is open
if (!TheVideoCapturer.isOpened()) throw std::runtime_error("Could not open video");
///// CONFIGURE DATA
// read first image to get the dimensions
TheVideoCapturer >> TheInputImage;
if (TheCameraParameters.isValid())
TheCameraParameters.resize(TheInputImage.size());
MDetector.setDictionary(cml("-d","ARUCO"));//sets the dictionary to be employed (ARUCO,APRILTAGS,ARTOOLKIT,etc)
MDetector.setThresholdParams(7, 7);
MDetector.setThresholdParamRange(2, 0);
// MDetector.setCornerRefinementMethod(aruco::MarkerDetector::SUBPIX);
//gui requirements : the trackbars to change these parameters
iThresParam1 = MDetector.getParams()._thresParam1;
iThresParam2 = MDetector.getParams()._thresParam2;
cv::namedWindow("in");
cv::createTrackbar("ThresParam1", "in", &iThresParam1, 25, cvTackBarEvents);
cv::createTrackbar("ThresParam2", "in", &iThresParam2, 13, cvTackBarEvents);
//go!
char key = 0;
int index = 0;
// capture until press ESC or until the end of the video
do {
TheVideoCapturer.retrieve(TheInputImage);
// copy image
double tick = (double)getTickCount(); // for checking the speed
// Detection of markers in the image passed
TheMarkers= MDetector.detect(TheInputImage, TheCameraParameters, TheMarkerSize);
// check the speed by calculating the mean time of all iterations
AvrgTime.first += ((double)getTickCount() - tick) / getTickFrequency();
AvrgTime.second++;
cout << "\rTime detection=" << 1000 * AvrgTime.first / AvrgTime.second << " milliseconds nmarkers=" << TheMarkers.size() << std::endl;
// print marker info and draw the markers in image
TheInputImage.copyTo(TheInputImageCopy);
for (unsigned int i = 0; i < TheMarkers.size(); i++) {
cout << TheMarkers[i]<<endl;
TheMarkers[i].draw(TheInputImageCopy, Scalar(0, 0, 255));
}
// draw a 3d cube in each marker if there is 3d info
if (TheCameraParameters.isValid() && TheMarkerSize>0)
for (unsigned int i = 0; i < TheMarkers.size(); i++) {
CvDrawingUtils::draw3dCube(TheInputImageCopy, TheMarkers[i], TheCameraParameters);
CvDrawingUtils::draw3dAxis(TheInputImageCopy, TheMarkers[i], TheCameraParameters);
}
// DONE! Easy, right?
// show input with augmented information and the thresholded image
cv::imshow("in", resize(TheInputImageCopy,1280));
cv::imshow("thres", resize(MDetector.getThresholdedImage(),1280));
key = cv::waitKey(waitTime); // wait for key to be pressed
if(key=='s') waitTime= waitTime==0?10:0;
index++; // number of images captured
} while (key != 27 && (TheVideoCapturer.grab() ));
} catch (std::exception &ex)
{
cout << "Exception :" << ex.what() << endl;
}
}
void cvTackBarEvents(int pos, void *) {
(void)(pos);
if (iThresParam1 < 3) iThresParam1 = 3;
if (iThresParam1 % 2 != 1) iThresParam1++;
if (iThresParam1 < 1) iThresParam1 = 1;
MDetector.setThresholdParams(iThresParam1, iThresParam2);
// recompute
MDetector.detect(TheInputImage, TheMarkers, TheCameraParameters);
TheInputImage.copyTo(TheInputImageCopy);
for (unsigned int i = 0; i < TheMarkers.size(); i++)
TheMarkers[i].draw(TheInputImageCopy, Scalar(0, 0, 255));
// draw a 3d cube in each marker if there is 3d info
if (TheCameraParameters.isValid())
for (unsigned int i = 0; i < TheMarkers.size(); i++)
CvDrawingUtils::draw3dCube(TheInputImageCopy, TheMarkers[i], TheCameraParameters);
cv::imshow("in", resize(TheInputImageCopy,1280));
cv::imshow("thres", resize(MDetector.getThresholdedImage(),1280));
}
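This question was left without an answer in this excerpt, but one thing that might be worth trying (an assumption on my part, not something from the thread, and backend support varies by OpenCV version) is asking OpenCV not to convert the captured frames at all via cv::CAP_PROP_CONVERT_RGB:
#include <iostream>
#include <opencv2/opencv.hpp>

int main() {
    // Hedged sketch: disable OpenCV's automatic conversion to BGR so the
    // GREY frames are (ideally) delivered as a single-channel Mat.
    // Whether the V4L2 backend honors this depends on the OpenCV version.
    cv::VideoCapture cap(0);
    cap.set(cv::CAP_PROP_CONVERT_RGB, false);
    cv::Mat frame;
    cap >> frame;
    std::cout << "channels: " << frame.channels() << std::endl; // check what actually arrived
    return 0;
}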

Measure OpenCV FPS

I'm looking for a correct way to measure OpenCV FPS. I've found several ways to do it, but none of them looks right to me.
The first one I tested uses time_t start and time_t end. I think that one is wrong, since it gives me a damped function in the fps x time plot (I really can't imagine how an fps plot could be a damped function).
Here is the image of this plot.
The second one I tested uses t = (double)cvGetTickCount() to measure fps. This way is wrong because it returns 120 fps as the result, but a 30-second video captured at 120 fps shouldn't take more than 1 minute to be processed, so this is a wrong way to measure FPS.
Does anyone know another way to measure FPS in OpenCV?
P.S. I'm trying to find circles in each frame of the video. The video frame size is 320x240 pixels.
Update 2
The code whose FPS I'm trying to measure:
for(;;)
{
double start=CLOCK(); // CLOCK() returns milliseconds as a double, not a clock_t
Mat frame, finalFrame;
capture >> frame;
finalFrame = frame;
cvtColor(frame, frame, CV_BGR2GRAY);
GaussianBlur(frame, frame, Size(7,7), 1.5, 1.5);
threshold(frame, frame, 20, 255, CV_THRESH_BINARY);
dilate(frame, frame, Mat(), Point(-1, -1), 2, 1, 1);
erode(frame, frame, Mat(), Point(-1, -1), 2, 1, 1);
Canny(frame, frame, 20, 20*2, 3 );
vector<Vec3f> circles;
findContours(frame,_contours,_storage,CV_RETR_CCOMP,CV_CHAIN_APPROX_SIMPLE );
vector<vector<Point> > contours_poly( _contours.size() );
vector<Rect> boundRect( _contours.size() );
vector<Point2f>center( _contours.size() );
vector<float>radius( _contours.size() );
int temp = 0;
for( int i = 0; i < _contours.size(); i++ )
{
if( _contours[i].size() > 100 )
{
approxPolyDP( Mat(_contours[i]), contours_poly[i], 3, true );
boundRect[i] = boundingRect( Mat(_contours[i]) );
minEnclosingCircle( (Mat)_contours[i], center[i], radius[i] );
temp = i;
break;
}
}
double dur = CLOCK()-start;
printf("avg time per frame %f ms. fps %f. frameno = %d\n",avgdur(dur),avgfps(),frameno++ );
frameCounter++;
if(frameCounter == 3600)
break;
if(waitKey(1000/120) >= 0) break;
}
Update
Program execution using the Zaw Lin method!
I have posted a way to do that at Getting current FPS of OpenCV. It is necessary to do a bit of averaging, otherwise the fps will be too jumpy.
edit
I have put a Sleep inside process() and it gives the correct fps and duration (+/- 1 ms).
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <opencv/cv.h>
#include <sys/timeb.h>
using namespace cv;
#if defined(_MSC_VER) || defined(WIN32) || defined(_WIN32) || defined(__WIN32__) \
|| defined(WIN64) || defined(_WIN64) || defined(__WIN64__)
#include <windows.h>
bool _qpcInited=false;
double PCFreq = 0.0;
__int64 CounterStart = 0;
void InitCounter()
{
LARGE_INTEGER li;
if(!QueryPerformanceFrequency(&li))
{
std::cout << "QueryPerformanceFrequency failed!\n";
}
PCFreq = double(li.QuadPart)/1000.0f;
_qpcInited=true;
}
double CLOCK()
{
if(!_qpcInited) InitCounter();
LARGE_INTEGER li;
QueryPerformanceCounter(&li);
return double(li.QuadPart)/PCFreq;
}
#endif
#if defined(unix) || defined(__unix) || defined(__unix__) \
|| defined(linux) || defined(__linux) || defined(__linux__) \
|| defined(sun) || defined(__sun) \
|| defined(BSD) || defined(__OpenBSD__) || defined(__NetBSD__) \
|| defined(__FreeBSD__) || defined __DragonFly__ \
|| defined(sgi) || defined(__sgi) \
|| defined(__MACOSX__) || defined(__APPLE__) \
|| defined(__CYGWIN__)
double CLOCK()
{
struct timespec t;
clock_gettime(CLOCK_MONOTONIC, &t);
return (t.tv_sec * 1000)+(t.tv_nsec*1e-6);
}
#endif
double _avgdur=0;
double _fpsstart=0;
double _avgfps=0;
double _fps1sec=0;
double avgdur(double newdur)
{
_avgdur=0.98*_avgdur+0.02*newdur;
return _avgdur;
}
double avgfps()
{
if(CLOCK()-_fpsstart>1000)
{
_fpsstart=CLOCK();
_avgfps=0.7*_avgfps+0.3*_fps1sec;
_fps1sec=0;
}
_fps1sec++;
return _avgfps;
}
void process(Mat& frame)
{
Sleep(3);
}
int main(int argc, char** argv)
{
int frameno=0;
cv::Mat frame;
cv::VideoCapture cap(0);
for(;;)
{
//cap>>frame;
double start=CLOCK();
process(frame);
double dur = CLOCK()-start;
printf("avg time per frame %f ms. fps %f. frameno = %d\n",avgdur(dur),avgfps(),frameno++ );
if(waitKey(1)==27)
exit(0);
}
return 0;
}
You can use the OpenCV helper cv::getTickCount():
#include <iostream>
#include <string>
#include "opencv2/core.hpp"
#include "opencv2/core/utility.hpp"
#include "opencv2/video.hpp"
#include "opencv2/highgui.hpp"
using namespace cv;
int main(int ac, char** av) {
VideoCapture capture(0);
Mat frame;
for (;;) {
int64 start = cv::getTickCount();
capture >> frame;
if (frame.empty())
break;
/* do some image processing here */
char key = (char)waitKey(1);
double fps = cv::getTickFrequency() / (cv::getTickCount() - start);
std::cout << "FPS : " << fps << std::endl;
}
return 0;
}
You can use OpenCV's API to get the original FPS if you are dealing with video files. The following method will not work when capturing from a live stream:
cv::VideoCapture capture("C:\\video.avi");
if (!capture.isOpened())
{
std::cout << "!!! Could not open input video" << std::endl;
return;
}
std::cout << "FPS: " << capture.get(CV_CAP_PROP_FPS) << std::endl;
To get the actual FPS after the processing, you can try Zaw's method.
I would just measure the wall time and simply divide the number of frames by the elapsed time. On Linux:
/*
* compile with:
* g++ -ggdb webcam_fps_example2.cpp -o webcam_fps_example2 `pkg-config --cflags --libs opencv`
*/
#include "opencv2/opencv.hpp"
#include <time.h>
#include <sys/time.h>
using namespace cv;
using namespace std;
double get_wall_time(){
struct timeval time;
if (gettimeofday(&time,NULL)){
// Handle error
return 0;
}
return (double)time.tv_sec + (double)time.tv_usec * .000001;
}
int main(int argc, char** argv)
{
VideoCapture cap;
// open the default camera, use something different from 0 otherwise;
// Check VideoCapture documentation.
if(!cap.open(0))
return 0;
cap.set(CV_CAP_PROP_FRAME_WIDTH,1920);
cap.set(CV_CAP_PROP_FRAME_HEIGHT,1080);
double wall0 = get_wall_time();
for(int x = 0; x < 500; x++)
{
Mat frame;
cap >> frame;
if( frame.empty() ) break; // end of video stream
//imshow("this is you, smile! :)", frame);
if( waitKey(10) == 27 ) break; // stop capturing by pressing ESC
}
double wall1 = get_wall_time();
double fps = 500/(wall1 - wall0);
cout << "Wall Time = " << wall1 - wall0 << endl;
cout << "FPS = " << fps << endl;
// the camera will be closed automatically upon exit
// cap.close();
return 0;
}
Wall Time = 43.9243
FPS = 11.3832