Apparent random FPS drops from a webcam when using OpenCV in C++

I'm coding a simple OpenCV example to capture webcam frames for 5 seconds (~150 frames if the camera runs at ~30 fps).
After running some tests, I see that sometimes I get ~140 out of 150 frames (acceptable, I guess), but sometimes only ~70. Even worse, sometimes the camera seems to suffer those frame drops and stays in that state for hours.
To investigate further, I stripped my program down until the issue still appeared even when I only read frames and don't write them to disk. I set up a cron job to run a 5-second capture every minute, and I've been seeing things like this:
I think the first two small drops were due to the system being busy, but the big, permanent one occurred in the middle of the night. In the morning I stopped the cron job, touched a few things in the code (I can't remember exactly what) and started the test again, only to see a gradual recovery followed by a new drop after 2-3 hours:
Since yesterday I've turned the computer off for several hours, booted it up again and covered the webcam to ensure constant light conditions, but the frame count is still low, stuck at 70. It's also really odd that the drop (70 frames out of 150) is exactly half of the maximum I've seen with this camera (140 out of 150).
The webcam is a Logitech C525. I'm also testing the FaceTime HD camera of a MacBook Pro (Late 2016), and there I see a constant 117 frames out of 150. A colleague of mine also sees frame drops on his laptop. Is there a problem with my code? Could it be thread priority?
// Call the program like this: ./cameraTest pixelWidth pixelHeight fps timeLimit
// timeLimit can be 1 to run a fixed 5-second capture, or 0 to wait for 150 frames.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <thread>
#include <unistd.h>
#include <chrono>
#include <ctime>
#include <experimental/filesystem>

using namespace cv;
namespace fs = std::experimental::filesystem;

VideoCapture camera(0);
bool stop = false;
int readFrames = 0;

std::string getTimeStamp()
{
    time_t rawtime;
    struct tm * timeinfo;
    char buffer[80];

    time(&rawtime);
    timeinfo = localtime(&rawtime);
    strftime(buffer, sizeof(buffer), "%Y-%m-%d %H:%M:%S", timeinfo);
    std::string timeStamp(buffer);
    return timeStamp;
}

void getFrame()
{
    Mat frame;
    camera >> frame;
    // camera.read(frame);
    // cv::imwrite("/tmp/frames/frame" + std::to_string(readFrames) + ".jpg", frame);
    readFrames++;
}

void getFrames()
{
    Mat frame;
    while(!stop)
    {
        camera >> frame;
        // cv::imwrite("/tmp/frames/frame" + std::to_string(readFrames) + ".jpg", frame);
        readFrames++;
    }
}

int main(int argc, char* argv[])
{
    if(argc < 5)
    {
        std::cout << "Usage: width height fps timeLimit" << std::endl;
        return -1;
    }
    if(!camera.isOpened())
    {
        std::cout << "Couldn't open camera " << getTimeStamp() << std::endl;
        return -1;
    }
    if (!fs::is_directory("/tmp/frames"))
    {
        if(system("mkdir -p /tmp/frames") != 0)
        {
            std::cout << "Error creating /tmp/frames/" << std::endl;
        }
    }
    if (!fs::is_empty("/tmp/frames"))
    {
        system("exec rm /tmp/frames/*");
    }

    camera.set(CV_CAP_PROP_FRAME_WIDTH, atoi(argv[1]));
    camera.set(CV_CAP_PROP_FRAME_HEIGHT, atoi(argv[2]));
    camera.set(CV_CAP_PROP_FPS, atoi(argv[3]));
    //camera.set(CV_CAP_PROP_FOURCC, CV_FOURCC('M', 'J', 'P', 'G'));

    bool timeLimit(atoi(argv[4]));
    std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now();
    int waitSeconds = 5;

    if(timeLimit)
    {
        std::thread tr(getFrames);
        usleep(waitSeconds * 1e6);
        stop = true;
        tr.join();
    }
    else
    {
        while(readFrames < 150)
        {
            getFrame();
        }
    }

    std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();
    std::cout << getTimeStamp() << " " << readFrames << "/" << atoi(argv[3]) * waitSeconds << " "
              << std::chrono::duration_cast<std::chrono::milliseconds>(end - begin).count() << "ms"
              << " " << atoi(argv[1]) << "," << atoi(argv[2]) << "," << atoi(argv[3])
              << std::endl;
    return 0;
}

It seems to be related to low-light conditions. As soon as I uncovered the camera, the frame count went back up to its expected value. So perhaps in poor light the camera changes some settings or does extra processing on the images, lowering its frame rate.
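If that hypothesis is right, one way to test it would be to lock the exposure so the driver cannot lengthen it in the dark. Below is a minimal sketch under that assumption; CAP_PROP_AUTO_EXPOSURE and CAP_PROP_EXPOSURE are real OpenCV properties, but the values they accept (here 1 for "manual" and -6 as a sample exposure) vary by driver and OpenCV version, so treat them as placeholders to experiment with:
// Sketch: fix the exposure so low light cannot lower the capture rate.
// The property values below are driver-dependent placeholders, not guaranteed constants.
#include "opencv2/opencv.hpp"
#include <iostream>

int main()
{
    cv::VideoCapture cam(0);
    if (!cam.isOpened())
        return -1;

    cam.set(CV_CAP_PROP_AUTO_EXPOSURE, 1); // 1 often means "manual" on V4L2 drivers; 3 means "auto"
    cam.set(CV_CAP_PROP_EXPOSURE, -6);     // sample value; the valid range depends on the camera

    cv::Mat frame;
    int frames = 0;
    for (int i = 0; i < 150; ++i)          // same 150-frame budget as the test above
    {
        cam >> frame;
        if (!frame.empty())
            ++frames;
    }
    std::cout << "Captured " << frames << " frames" << std::endl;
    return 0;
}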

Related

Extracting frames from only the first few seconds of a video file using OpenCV

Following my previous question, I want to extract frames from a video, but only the first 2 seconds of it.
I want to work with the raw video as much as I can, which is why I don't want to cut the original video and then process it.
Here's my code to extract frames, but it extracts every frame of the video:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <string>
#include <sstream>

using namespace cv;
using namespace std;

int c = 0;
string int2str(int &);

int main(int argc, char **argv) {
    string s;
    VideoCapture cap("test_video.mp4");
    if (!cap.isOpened())
    {
        cout << "Cannot open the video file" << endl;
        return -1;
    }

    double fps = cap.get(CV_CAP_PROP_FPS);
    cout << "Frame per seconds : " << fps << endl;

    namedWindow("MyVideo", CV_WINDOW_NORMAL);
    resizeWindow("MyVideo", 600, 600);

    while (1)
    {
        Mat frame;
        Mat Gray_frame;
        bool bSuccess = cap.read(frame);
        if (!bSuccess)
        {
            cout << "Cannot read the frame from video file" << endl;
            break;
        }
        s = int2str(c);
        //cout<<("%d\n",frame )<<endl;
        c++;
        imshow("MyVideo", frame);
        imwrite("frame" + s + ".jpg", frame);
        if (waitKey(30) == 27)
        {
            cout << "esc key is pressed by user" << endl;
            break;
        }
    }
    return 0;
}

string int2str(int &i) {
    string s;
    stringstream ss(s);
    ss << i;
    return ss.str();
}
Any advice? Thanks.
It seems like you already know the FPS of the video. So isn't it just a matter of counting how many frames you have extracted and breaking after you reach FPS * 2?
double fps = cap.get(CV_CAP_PROP_FPS);
int framesToExtract = fps * 2;
for (int frames = 0; frames < framesToExtract; ++frames)
{
    ... // your processing here
}
Also, I looked at your previous question and it seems like you have a misunderstanding of FPS.
FPS = frames per second, i.e. how many frames there are in every second of video. So let's say your video is 120 FPS: that means there are 120 frames in one second of video, 240 in two seconds, and so on.
So (VIDEO_FPS * x) gives you the number of frames in x seconds of video.
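Putting that together with the code from the question, a minimal sketch of the extraction loop limited to the first two seconds could look like this. The video filename and the "frame<N>.jpg" naming are placeholders taken from the question, and CV_CAP_PROP_FPS matches the OpenCV 2.x style used there (it is cv::CAP_PROP_FPS in 3.x and later):
// Sketch: extract only the first 2 seconds' worth of frames from a video.
#include <opencv2/opencv.hpp>
#include <string>

int main()
{
    cv::VideoCapture cap("test_video.mp4");
    if (!cap.isOpened())
        return -1;

    double fps = cap.get(CV_CAP_PROP_FPS);            // frames per second of the source video
    int framesToExtract = static_cast<int>(fps * 2);  // 2 seconds * FPS frames

    cv::Mat frame;
    for (int i = 0; i < framesToExtract; ++i)
    {
        if (!cap.read(frame))
            break;                                    // video shorter than 2 seconds
        cv::imwrite("frame" + std::to_string(i) + ".jpg", frame);
    }
    return 0;
}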

Why is my code running faster when I compile it with Code::Blocks?

I wanted to measure the FPS of my camera. I found a simple piece of code here.
If I compile the code with Code::Blocks (on Ubuntu) and run the loop 600 times, the result is 27 fps.
If I compile it from the terminal with:
g++ -Wall main.cpp -o main -I/usr/local/include/ -lopencv_core -lopencv_highgui
the result is 14 fps. Why is it so much slower when compiled from the terminal?
Here is the code:
#include "opencv2/opencv.hpp"
#include <time.h>
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
// Start default camera
VideoCapture video(0);
// With webcam get(CV_CAP_PROP_FPS) does not work.
// Let's see for ourselves.
double fps;
// Number of frames to capture
int num_frames = 600;
// Start and end times
time_t start, end;
// Variable for storing video frames
Mat frame;
cout << "Capturing " << num_frames << " frames" << endl ;
// Start time
time(&start);
// Grab a few frames
for(int i = 0; i < num_frames; i++)
{
video >> frame;
}
// End Time
time(&end);
// Time elapsed
double seconds = difftime (end, start);
cout << "Time taken : " << seconds << " seconds" << endl;
// Calculate frames per second
fps = num_frames / seconds;
cout << "Estimated frames per second : " << fps << endl;
// Release video
video.release();
return 0;
}
You just need to compile on the command line the same way that Code::Blocks compiles. To see what that is, go to the Settings for Compiler and Debugger and enable one of the build-logging options. More details on that are here: Code::blocks verbose build
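For what it's worth, the most common difference between an IDE build and a bare g++ line is the optimization level. This is an assumption about your Code::Blocks target (copy the exact flags from its build log), but if its Release target passes -O2, the equivalent terminal build would be something like:
# -O2 is a guess; replace it with whatever flags the Code::Blocks build log shows
g++ -Wall -O2 main.cpp -o main -I/usr/local/include/ -lopencv_core -lopencv_highgui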
I think I solved the problem.
When the light is bright, the fps is high; when it is dark, the fps is low. So there seems to be a connection to the brightness...

How to find the frame rate of a video using C++ and OpenCV 2.4.10?

Actually, I am trying to detect and track vehicles in a video using C++ and OpenCV 2.4.10, and I have done so. Now I want to find the frame rate of the output video, and I want to know if there is any way to do that. Can anyone suggest a blog or tutorial about this?
Thank you.
Something like this may help.
#include <iostream>
#include <opencv2/opencv.hpp> //for opencv3
#include <opencv/cv.hpp> //for opencv2

int main(int argc, const char * argv[]) {
    cv::VideoCapture video("video.mp4");
    double fps = video.get(cv::CAP_PROP_FPS);
    std::cout << "Frames per second : " << fps << std::endl;
    video.release();
    return 0;
}
You must have a loop in your code where you are doing all the video processing.
Let's say you have something similar to this pseudocode:
//code initialization
cv::VideoCapture cap("some-video-uri");
cv::Mat frame;

//video capture/processing loop
while (1)
{
    //here we take the timestamp
    auto start = std::chrono::system_clock::now();

    //capture the frame
    cap >> frame;

    //do whatever frame processing you are doing...
    do_frame_processing();

    //measure timestamp again
    auto end = std::chrono::system_clock::now();

    //end - start is the time taken to process 1 frame, output it:
    std::chrono::duration<double> diff = end - start;
    std::cout << "Time to process last frame (seconds): " << diff.count()
              << " FPS: " << 1.0 / diff.count() << "\n";
}
That's it... take into account that calculating FPS on a frame-by-frame basis will likely produce a highly variable result. You should average the result over several frames to get a more stable FPS estimate.
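For instance, a minimal, self-contained sketch of that averaging (the 30-frame window, the stand-in loop and the use of steady_clock are arbitrary choices for illustration; the real capture and processing would go where the comment is) could look like this:
// Sketch: report FPS averaged over a sliding window of frames instead of a single frame.
#include <chrono>
#include <cstddef>
#include <deque>
#include <iostream>
#include <numeric>

int main()
{
    std::deque<double> frameTimes;        // seconds taken by each recent frame
    const std::size_t window = 30;        // how many frames to average over

    for (int i = 0; i < 300; ++i)         // stand-in for the capture/processing loop
    {
        auto start = std::chrono::steady_clock::now();
        // cap >> frame; do_frame_processing();   // real work would go here
        auto end = std::chrono::steady_clock::now();

        frameTimes.push_back(std::chrono::duration<double>(end - start).count());
        if (frameTimes.size() > window)
            frameTimes.pop_front();       // keep only the most recent frames

        double avg = std::accumulate(frameTimes.begin(), frameTimes.end(), 0.0)
                     / frameTimes.size();
        if (avg > 0.0)
            std::cout << "Average FPS over last " << frameTimes.size()
                      << " frames: " << 1.0 / avg << "\n";
    }
    return 0;
}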

opencv video reading slow framerate

I am trying to read a video with OpenCV in C++, but when the video is displayed, the frame rate is very slow, around 10% of the original frame rate.
The whole code is here:
// g++ `pkg-config --cflags --libs opencv` play-video.cpp -o play-video
// ./play-video [video filename]
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace std;
using namespace cv;

int main(int argc, char** argv)
{
    // video filename should be given as an argument
    if (argc == 1) {
        cerr << "Please give the video filename as an argument" << endl;
        exit(1);
    }
    const string videofilename = argv[1];

    // we open the video file
    VideoCapture capture(videofilename);
    if (!capture.isOpened()) {
        cerr << "Error when reading video file" << endl;
        exit(1);
    }

    // we compute the frame duration
    int FPS = capture.get(CV_CAP_PROP_FPS);
    cout << "FPS: " << FPS << endl;
    int frameDuration = 1000 / FPS; // frame duration in milliseconds
    cout << "frame duration: " << frameDuration << " ms" << endl;

    // we read and display the video file, image after image
    Mat frame;
    namedWindow(videofilename, 1);
    while(true)
    {
        // we grab a new image
        capture >> frame;
        if(frame.empty())
            break;
        // we display it
        imshow(videofilename, frame);
        // press 'q' to quit
        char key = waitKey(frameDuration); // waits to display frame
        if (key == 'q')
            break;
    }
    // releases and window destroy are automatic in C++ interface
}
I tried with a video from a GoPro Hero 3+ and with a video from my MacBook's webcam, and I see the same problem with both. Both videos play without problems in VLC.
Thanks in advance.
Try reducing the waitKey wait time. You are effectively waiting for the full frame duration (i.e. 33 ms), plus all the time it takes to grab the frame and display it. This means that if capturing and displaying the frame takes more than 0 ms (which it does), you are guaranteed to be waiting too long. If you want to be accurate, you could time how long that part takes and wait only for the remainder, e.g. something along the lines of:
while(true)
{
    auto start_time = std::chrono::high_resolution_clock::now();
    capture >> frame;
    if(frame.empty())
        break;
    imshow(videofilename, frame);
    auto end_time = std::chrono::high_resolution_clock::now();
    int elapsed_time = std::chrono::duration_cast<std::chrono::milliseconds>(end_time - start_time).count();
    // wait only for what is left of the frame duration, but always at least 1 ms
    int wait_time = std::max(1, frameDuration - elapsed_time);
    char key = waitKey(wait_time); // waits to display frame
    if (key == 'q')
        break;
}
The int wait_time = std::max(1, frameDuration - elapsed_time); line just ensures that we wait for at least 1 ms, as OpenCV needs a call to waitKey in there to fetch and handle window events, and calling waitKey with a value <= 0 tells it to wait indefinitely for user input, which we don't want either (in this case).

Using OpenCV, Boost threading and multiple cameras

I am trying to write a program that captures images from two different cameras in two different threads. I want to do this because when I do it in a single thread I have to wait for cvQueryFrame twice as long, and so I cannot grab images at 30 fps (I get 15 fps from each camera).
I have taken a look at this SO post, but it only works for one camera: Using cvQueryFrame and boost::thread together
My current program gives varying results: sometimes it leaks memory, usually I just don't see anything happening, and sometimes it works for a few seconds but then the image freezes again. The strange thing is that earlier, when I didn't call cvShowImage but had my imageProcessing function do something useful, I could see that I was getting real-time results from both cameras. I assume this means it is possible to make this work, but that I made a stupid mistake somewhere. My OS is Linux and I am using OpenCV 2.4.
My code:
#include <iostream>
#include <cstdio>
#include <cv.h>
#include <ml.h>
#include <cvaux.h>
#include <highgui.h>
#include <vector>
#include <stdio.h>
#include <unistd.h>
#include <boost/thread.hpp>
#include "producer_consumer_queue.hpp"

//Camera settings
int cameraWidth = 1280;
int cameraHeight = 720;
int waitKeyValue = 5;
bool threads_should_exit = false;

CvCapture * capture;
CvCapture * capture2;

using namespace std;
using namespace cv;

void grabFrame(concurrent_queue<IplImage* > * frame_queue, int camNumber) {
    try {
        //Load first frames
        cout << "grabFrame: " << camNumber << " init with " << cameraWidth << " x " << cameraHeight << endl;
        IplImage* frame;
        if (camNumber == 0) frame = cvQueryFrame(capture);
        if (camNumber == 1) frame = cvQueryFrame(capture2);

        while (frame && !threads_should_exit) {
            if (camNumber == 0) frame = cvQueryFrame(capture);
            if (camNumber == 1) frame = cvQueryFrame(capture2);

            IplImage* frame_copy = NULL;
            frame_copy = cvCloneImage(frame);

            if (camNumber == 0) cvShowImage("NE", frame);

            cout << "grabFrame: " << camNumber << " pushing back to queue" << endl;
            frame_queue->push(frame_copy);

            int k = cvWaitKey(waitKeyValue);
            if (k == 1048603 || k == 27 || k == '\r') {
                cout << "grabFrame: Process killed" << endl;
                //release memory
                threads_should_exit = true;
            }
        }
    } catch (const concurrent_queue<IplImage* >::Canceled & e) {
        cout << "grabFrame: Show thread is canceled" << endl;
        return;
    }
}

void processFrames(concurrent_queue<IplImage* > * frame_queue0, concurrent_queue<IplImage* > * frame_queue1) {
    try {
        do {
            cout << "processFrames: Processing two frames" << endl;
            IplImage* frm = NULL;
            frame_queue0->wait_and_pop(frm);
            IplImage * frm2 = NULL;
            frame_queue1->wait_and_pop(frm2);
            cvReleaseImage(&frm);
            cvReleaseImage(&frm2);
        } while (!threads_should_exit);
    } catch (const concurrent_queue<IplImage* >::Canceled & e) {
        cout << "processFrames: Processing thread is canceled" << endl;
        return;
    }
}

int main() {
    capture = cvCreateCameraCapture(0);
    capture2 = cvCreateCameraCapture(1);

    cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, cameraWidth);
    cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, cameraHeight);
    cvSetCaptureProperty(capture2, CV_CAP_PROP_FRAME_WIDTH, cameraWidth);
    cvSetCaptureProperty(capture2, CV_CAP_PROP_FRAME_HEIGHT, cameraHeight);

    boost::thread_group frame_workers;
    boost::thread_group frame_workers2;
    concurrent_queue<IplImage* > frame_queue(&frame_workers);
    concurrent_queue<IplImage* > frame_queue2(&frame_workers2);

    boost::thread * query_thread = new boost::thread(processFrames, &frame_queue, &frame_queue2);
    boost::thread * cam0_thread = new boost::thread(grabFrame, &frame_queue, 0);
    usleep(10000);
    boost::thread * cam1_thread = new boost::thread(grabFrame, &frame_queue2, 1);

    frame_workers.add_thread(query_thread);
    frame_workers.add_thread(cam0_thread);
    frame_workers2.add_thread(query_thread);
    frame_workers2.add_thread(cam1_thread);

    while (true) {
        if (threads_should_exit) {
            cout << "Main: threads should be killed" << endl;
            while (!frame_queue.empty()) {
                usleep(10000);
            }
            frame_workers.remove_thread(query_thread);
            frame_workers2.remove_thread(query_thread);
            frame_workers.remove_thread(cam0_thread);
            frame_workers2.remove_thread(cam1_thread);
            frame_workers.join_all();
            break;
        }
        usleep(10000);
    }
    return 0;
}
EDIT:
I added a simple function to detect a piece of paper, to check whether everything works when I don't call cvShowImage(). My program detects the piece of paper fine as long as I don't call cvShowImage(); if I do, the program shows the strange behavior again and freezes, etc.
There should only be one thread manipulating the GUI (this is true for just about any GUI framework). You should organize your code so that cvShowImage is only invoked from the main "GUI thread".
It seems that the work that's being done in query_thread could just as easily be done inside the main thread.
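As a rough illustration of that structure, here is a minimal sketch. It is an assumption-laden rewrite, not the original program: it uses std::thread, std::mutex and the C++ cv::VideoCapture API instead of Boost and CvCapture, and a simple mutex-protected "latest frame" slot instead of the producer-consumer queue. The capture threads only grab frames, and the main thread is the only one that calls imshow and waitKey:
// Sketch: capture in worker threads, display only from the main (GUI) thread.
#include <opencv2/opencv.hpp>
#include <atomic>
#include <mutex>
#include <thread>

std::atomic<bool> shouldExit(false);

struct SharedFrame {
    std::mutex mtx;
    cv::Mat latest;                        // most recent frame from one camera
};

void captureLoop(int camIndex, SharedFrame* shared)
{
    cv::VideoCapture cap(camIndex);
    cv::Mat frame;
    while (!shouldExit && cap.read(frame))
    {
        std::lock_guard<std::mutex> lock(shared->mtx);
        frame.copyTo(shared->latest);      // worker threads never touch the GUI
    }
}

int main()
{
    SharedFrame cam0, cam1;
    std::thread t0(captureLoop, 0, &cam0);
    std::thread t1(captureLoop, 1, &cam1);

    while (!shouldExit)
    {
        cv::Mat f0, f1;
        {
            std::lock_guard<std::mutex> lock(cam0.mtx);
            cam0.latest.copyTo(f0);
        }
        {
            std::lock_guard<std::mutex> lock(cam1.mtx);
            cam1.latest.copyTo(f1);
        }
        if (!f0.empty()) cv::imshow("cam0", f0);   // GUI calls happen only here,
        if (!f1.empty()) cv::imshow("cam1", f1);   // in the main thread
        if (cv::waitKey(5) == 27)                  // Esc to quit
            shouldExit = true;
    }
    t0.join();
    t1.join();
    return 0;
}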