OpenCV VideoWriter Framerate Issue - c++

I'm attempting to record video from a 1080p webcam into a file. I held a timer up in the video and in every trial the timestamp reported by a video player (VLC is what I used) doesn't sync up with the time in the video. It's always off a few seconds (usually in-video timer is faster than player-reported time).
As seen below, I set up the C++ program to capture video in one thread and record in another thread. This is working fine, as my CPU usage is ~200% (possibly maxed out?). I'm on a MacBook Air w/ OS X 10.8 @ 1.8 GHz Intel Core i7.
I've tried changing the framerate to 15fps and that results in very choppy/slow video. I've also tried setting CV_CAP_PROP_FRAME_WIDTH & CV_CAP_PROP_FRAME_HEIGHT to a lower resolution and that also results in slow video. It appears that 1080p @ 30fps gives good, steady video, but it still always plays faster than it's supposed to. I've also tried putting waitKey(10); after record << frame; but it did not affect anything.
Any recommendations on how to make the video match up in time?
Thanks!
Aakash
#include "opencv/cv.h"
#include "opencv/highgui.h"
#include <boost/thread.hpp>
using namespace cv;
void captureFunc(Mat *frame, VideoCapture *capture){
for(;;){
// get a new frame from camera
(*capture) >> (*frame);
}
}
int main(int, char**)
{
VideoCapture capture(0); // open the default camera
if( !capture.isOpened() ) {
printf("Camera failed to open!\n");
return -1;
}
capture.set(CV_CAP_PROP_FPS,30); //set capture rate to 30fps
Mat frame;
capture >> frame; // get first frame for size
// initialize recording of video
VideoWriter record("test.avi", CV_FOURCC('D','I','V','X'), 30, frame.size(), true);
if( !record.isOpened() ) {
printf("VideoWriter failed to open!\n");
return -1;
}
boost::thread captureThread(captureFunc, &frame, &capture); //start capture thread
sleep(1); //just to make sure capture thread is ready
for(;;)
{
// add frame to recorded video
record << frame;
}
return 0;
}

I resolved my issue after a bit of debugging; it turned out that VideoWriter is picky about the rate at which frames are fed to it.

You need to sync your read and write functions. Your code reads as fast as possible and also writes as fast as possible. Your output video probably looks slow because writing the output happens faster than reading the input (since capture >> has to wait for your camera), so several identical frames get recorded.
Writing without waiting or syncing means you may write the same content several times (which is what I think is happening here), or lose frames.
If you want to keep using threads you can, but you will need to make the write process wait until there is something new to write.
Likewise, to avoid losing frames or writing corrupted ones, you need the read process to wait until writing is done, so the frame can be safely overwritten.
Since the threads need to wait for each other anyway, there's little point in threads at all.
I'd rather recommend this much simpler way:
for (;;) {
    capture >> frame;
    process(frame); // whatever smart you need
    record << frame;
}
If you need parallelism, you'll need a much more complex sync mechanism, and maybe some kind of FIFO for your frames.
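A rough sketch of what that threaded variant could look like, reusing the boost::thread setup from the question (this is my own illustration, not tested against the original program; fifo, fifoMutex, fifoNotEmpty and writeFunc are names I made up): the capture thread pushes a copy of each new frame into a FIFO, and the writer thread pops and records frames as they arrive.

    #include "opencv/cv.h"
    #include "opencv/highgui.h"
    #include <boost/thread.hpp>
    #include <queue>

    std::queue<cv::Mat> fifo;             // frames waiting to be written
    boost::mutex fifoMutex;
    boost::condition_variable fifoNotEmpty;

    void captureFunc(cv::VideoCapture *capture){
        for(;;){
            cv::Mat frame;
            (*capture) >> frame;
            boost::lock_guard<boost::mutex> lock(fifoMutex);
            fifo.push(frame.clone());     // clone so the writer owns its own copy
            fifoNotEmpty.notify_one();
        }
    }

    void writeFunc(cv::VideoWriter *record){
        for(;;){
            cv::Mat frame;
            {
                boost::unique_lock<boost::mutex> lock(fifoMutex);
                while(fifo.empty())
                    fifoNotEmpty.wait(lock);  // block until a new frame arrives
                frame = fifo.front();
                fifo.pop();
            }
            (*record) << frame;           // write outside the lock
        }
    }

main() would then start boost::thread captureThread(captureFunc, &capture); and boost::thread writeThread(writeFunc, &record); instead of looping itself. A real version would also bound the FIFO so the capture side waits (or drops frames) when the writer falls behind, which is exactly the extra synchronization discussed above.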

Related

More efficient way to read a video frame-by-frame with Qt and openCV

I am currently using Qt and OpenCV to read a video frame by frame from a local file (1920*1280, 30 frames per second, uncompressed):
bool MainWindow::foo()
{
    const std::string name = loadFileName.toStdString();
    cv::VideoCapture cap(name);
    if (!cap.isOpened())
        return false;
    cap.set(cv::CAP_PROP_BUFFERSIZE, 3);

    cv::Mat frame;
    while (cap.isOpened())
    {
        CHiResTimer timer; // custom timer class
        timer.Start();
        cap >> frame;
        timer.Stop();
        QTest::qWait(1);
    }
    frame.release();
    cap.release();
    return true;
}
But the cap >> frame line alone takes 10-12 ms, which is too slow for me, because I want to do some processing and show the video back at 30 fps with minimal delay. I found that GStreamer pipelines can help with faster reading from a file, but I'm not at all familiar with that framework, so I don't know whether it's worth using it for just one pipeline. Is there any other way to speed up reading (even without OpenCV)?
If you want to have 30 fps output then you have 33 ms of time per frame.
Doing it all (read, process and show) in sequence leaves you with 10-12 ms for reading and 21-23 ms for processing and showing. If that is not enough, then input that is twice as fast (5-6 ms, leaving 27-28 ms for processing and showing) is unlikely to save you: it is only about 22% more time for processing and showing.
You may need other ways to speed it up. For example, if you have 3 separate threads that process frames (and do nothing else), then each one has roughly 100 ms per frame.
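For illustration, a minimal sketch of that multi-threaded idea (my own code, not the answerer's; process(), the file name and the queue plumbing are placeholders): one thread reads frames into a shared queue and three workers pop and process them, so each worker effectively gets about three frame periods per frame. Note that results may finish out of order; re-ordering logic is omitted.

    #include <opencv2/opencv.hpp>
    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    std::queue<cv::Mat> pending;
    std::mutex queueMutex;
    std::condition_variable queueCond;
    bool finished = false;

    void process(const cv::Mat &frame) { /* per-frame work goes here */ }

    void worker() {
        for (;;) {
            cv::Mat frame;
            {
                std::unique_lock<std::mutex> lock(queueMutex);
                queueCond.wait(lock, []{ return !pending.empty() || finished; });
                if (pending.empty())
                    return;                    // finished and queue drained
                frame = pending.front();
                pending.pop();
            }
            process(frame);                    // heavy work outside the lock
        }
    }

    int main() {
        std::vector<std::thread> workers;
        for (int i = 0; i < 3; ++i)
            workers.emplace_back(worker);

        cv::VideoCapture cap("video.avi");     // placeholder file name
        cv::Mat frame;
        while (cap.read(frame)) {
            {
                std::lock_guard<std::mutex> lock(queueMutex);
                pending.push(frame.clone());   // clone: cap reuses its buffer
            }
            queueCond.notify_one();
        }
        {
            std::lock_guard<std::mutex> lock(queueMutex);
            finished = true;
        }
        queueCond.notify_all();
        for (auto &t : workers)
            t.join();
        return 0;
    }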

Proper use of cv::VideoCapture

I've been having some issues regarding capturing video from a live stream.
I open the video with the open function of cv::VideoCapture. However, I need to manually check whether a frame is ready, with something like this:
while (true) {
    cv::Mat frame;
    if (videoCapture.read(frame)) {
        // Do stuff ...
    }
    else {
        // Video is done.
    }
}
The problem with this code is that it will definitely process a single frame multiple times, the number of times depending on the camera's FPS. This is because the read() function will only return false if the camera is disconnected, according to the documentation.
So my question is, how can I know if there is a NEW frame available? That I'm not just getting the old one again?

Video from 2 cameras (for Stereo Vision) using OpenCV, but one of them is lagging

I'm trying to create stereo vision using two Logitech C310 webcams.
But the result is not good enough. One of the videos lags behind the other.
Here is my OpenCV program using VC++ 2010:
#include <opencv\cv.h>
#include <opencv\highgui.h>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    try
    {
        VideoCapture cap1;
        VideoCapture cap2;
        cap1.open(0);
        cap1.set(CV_CAP_PROP_FRAME_WIDTH, 1040.0);
        cap1.set(CV_CAP_PROP_FRAME_HEIGHT, 920.0);
        cap2.open(1);
        cap2.set(CV_CAP_PROP_FRAME_WIDTH, 1040.0);
        cap2.set(CV_CAP_PROP_FRAME_HEIGHT, 920.0);

        Mat frame, frame1;
        for (;;)
        {
            Mat frame;
            cap1 >> frame;
            Mat frame1;
            cap2 >> frame1;
            transpose(frame, frame);
            flip(frame, frame, 1);
            transpose(frame1, frame1);
            flip(frame1, frame1, 1);
            imshow("Img1", frame);
            imshow("Img2", frame1);
            if (waitKey(1) == 'q')
                break;
        }
        cap1.release();
        return 0;
    }
    catch (cv::Exception & e)
    {
        cout << e.what() << endl;
    }
}
How can I avoid the lagging?
You're probably saturating the USB bus.
Try plugging one in the front and the other in the back (in the hope of landing on different buses),
or reduce the frame size / FPS to generate less traffic.
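For example (the values below are just placeholders, and whether a given property is honoured depends on the camera driver and OpenCV backend), you could request a smaller frame size and frame rate on both captures right after the open() calls in the code above:

    cap1.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    cap1.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
    cap1.set(CV_CAP_PROP_FPS, 15);
    cap2.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    cap2.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
    cap2.set(CV_CAP_PROP_FPS, 15);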
I'm afraid you can't do it like this. The OpenCV VideoCapture is really only meant for testing; it uses the simplest underlying operating-system features and doesn't really try to do anything clever.
In addition, simple webcams aren't very controllable or sync-able, even if you can find a lower-level API to talk to them.
If you need to use simple USB webcams for a project, the easiest way is to have an external timed LED flashing at a few hertz, detect the light in each camera, and use that to sync the frames.
I know this post is getting quite old but I had to deal with the same problem recently so...
I don't think you were saturating the USB bus; if you were, you would have had an explicit message in the terminal. Actually, the creation of a VideoCapture object is quite slow, and I'm fairly sure that's the reason for your lag: you initialize your first VideoCapture object cap1 and it starts grabbing frames, you initialize your second VideoCapture cap2 and it starts grabbing frames, and only THEN do you start getting frames from cap1 and cap2. But the first frame stored by cap1 is older than the one stored by cap2, so... you've got a lag.
What you should do, if you really want to use OpenCV for this, is add some threads: one dealing with the left frames and the other with the right frames, both doing nothing but saving the last frame received (so you always deal with the newest frames only). When you want a pair of frames, you just get them from these threads.
I've put together a little something here if you need it.
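For reference, a minimal standalone sketch of that latest-frame-only idea (my own code, using std::thread and assuming the two cameras are at indices 0 and 1): each grabber thread overwrites a shared slot with the newest frame it has read, and the main loop just copies out whatever is currently there.

    #include <opencv2/opencv.hpp>
    #include <atomic>
    #include <mutex>
    #include <thread>

    cv::Mat latest[2];
    std::mutex frameLock[2];
    std::atomic<bool> running(true);

    void grab(int index) {
        cv::VideoCapture cap(index);
        cv::Mat frame;
        while (running && cap.read(frame)) {
            std::lock_guard<std::mutex> lock(frameLock[index]);
            frame.copyTo(latest[index]);        // keep only the newest frame
        }
    }

    int main() {
        std::thread left(grab, 0), right(grab, 1);
        for (;;) {
            cv::Mat l, r;
            { std::lock_guard<std::mutex> lock(frameLock[0]); latest[0].copyTo(l); }
            { std::lock_guard<std::mutex> lock(frameLock[1]); latest[1].copyTo(r); }
            if (!l.empty()) cv::imshow("Img1", l);
            if (!r.empty()) cv::imshow("Img2", r);
            if (cv::waitKey(1) == 'q')
                break;
        }
        running = false;                        // grabbers exit after their next read
        left.join();
        right.join();
        return 0;
    }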

OpenCV is missing frames while face detection takes place

I am using the
haarcascade_frontalface_alt2.xml
file for face detection in OpenCV 2.4.3 under the Visual Studio 10 framework.
I am using
Mat frame;
cv::VideoCapture capture("C:\\Users\\Xavier\\Desktop\\AVI\\Video 6_xvid.avi");
capture.set(CV_CAP_PROP_FPS, 30);
for (;;)
{
    capture >> frame;
    // face detection code
}
The problem I'm facing is that, since Haar face detection is computationally heavy, OpenCV is missing a few frames at the
capture >> frame;
instruction. To check, I wrote a counter to a txt file and found only 728 frames out of 900 for a 30-second 30fps video.
Please tell me how to fix it.
I am not an experienced OpenCV user, but you could try flushing the output stream of capture to disk. Unfortunately, I don't think the VideoCapture class supports such an operation. Note that flushing to disk will have an impact on your performance, since it will first flush everything and only then continue executing. It might therefore not be the best solution, but it is the easiest one, if it is possible at all.
Another approach that requires more work, but should fix it, is to make a separate low-priority thread that writes each frame to disk. Your current thread then only needs to call this low-priority thread each time it wants its data to be captured. Depending on whether the higher-priority thread might change the data while the low-priority thread is still writing it to disk, you may want to copy the data to a separate buffer first.
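A hedged sketch of that second approach (my own illustration, not a drop-in fix: the output file name and codec are made up, thread-priority tuning is OS-specific and omitted, and the face detection step is left as a comment): the main loop hands a clone of each frame to a background thread that does nothing but write frames out.

    #include <opencv2/opencv.hpp>
    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <thread>

    std::queue<cv::Mat> toWrite;
    std::mutex writeMutex;
    std::condition_variable writeCond;
    bool capturing = true;

    void diskWriter(cv::VideoWriter *writer) {       // lower this thread's priority
        for (;;) {                                   // with OS-specific calls if needed
            cv::Mat frame;
            {
                std::unique_lock<std::mutex> lock(writeMutex);
                writeCond.wait(lock, []{ return !toWrite.empty() || !capturing; });
                if (toWrite.empty()) return;         // capture finished, queue drained
                frame = toWrite.front();
                toWrite.pop();
            }
            (*writer) << frame;
        }
    }

    int main() {
        cv::VideoCapture capture("C:\\Users\\Xavier\\Desktop\\AVI\\Video 6_xvid.avi");
        cv::Mat first;
        capture >> first;                            // first frame gives the size
        cv::VideoWriter writer("out.avi", CV_FOURCC('D','I','V','X'),
                               30, first.size(), true);
        std::thread background(diskWriter, &writer);

        cv::Mat frame;
        while (capture.read(frame)) {
            // ... face detection on frame ...
            {
                std::lock_guard<std::mutex> lock(writeMutex);
                toWrite.push(frame.clone());         // copy, so detection can't touch it
            }
            writeCond.notify_one();
        }
        { std::lock_guard<std::mutex> lock(writeMutex); capturing = false; }
        writeCond.notify_all();
        background.join();
        return 0;
    }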

An efficient way to buffer HD video real-time without maxing out memory

I am writing a program that involves real-time processing of video from a network camera using OpenCV. I want to be able to capture, at any time during processing, the previous images (say, ten seconds' worth) and save them to a video file.
I am currently doing this using a queue as a buffer (pushing cv::Mat data), but this is obviously not efficient, as a few seconds' worth of images soon uses up all the PC memory. I tried compressing the images using cv::imencode, but with PNG that doesn't make much difference. I need a solution that uses hard-drive space and is efficient for real-time operation.
Can anyone suggest a very simple and efficient solution?
EDIT:
Just so everyone understands what I'm doing at the moment, here's the code for a 10-second buffer:
void run()
{
    cv::VideoCapture cap(0);
    double fps = cap.get(CV_CAP_PROP_FPS);
    int buffer_length = 10; // in seconds
    int wait = 1000.0/fps;
    QTime time;
    forever {
        time.restart();
        cv::Mat image;
        bool read = cap.read(image);
        if (!read)
            break;
        bool locked = _mutex.tryLock(10);
        if (locked) {
            if (image.data) {
                _buffer.push(image);
                if ((int)_buffer.size() > (fps*buffer_length))
                    _buffer.pop();
            }
            _mutex.unlock();
        }
        int time_taken = time.elapsed();
        if (time_taken < wait)
            msleep(wait - time_taken);
    }
    cap.release();
}
queue<cv::Mat> _buffer and QMutex _mutex are global variables. If you're familiar with Qt signals and slots: I've got a slot that grabs the buffer and saves it as a video using cv::VideoWriter.
EDIT:
I think the ideal solution would be for my queue<cv::Mat> _buffer to use hard-drive space rather than PC memory. Not sure on which planet this is possible? :/
I suggest looking into real-time compression with x264 or similar. x264 is regularly used for real-time encoding of video streams and, with the right settings, can encode multiple streams or a 1080p video stream on a moderately powered processor.
I suggest asking on doom9's forum or similar forums.
x264 is a free H.264 encoder which can achieve 100:1 or better compression (vs. raw). The output of x264 can be stored in your memory queue with much greater efficiency than uncompressed (or losslessly compressed) video.
UPDATED
One thing you can do is store the images to the hard disk using imwrite and push their filenames onto the queue. When the queue is full, delete the image files as you pop their filenames.
In your video-writing slot, load the images as they are popped from the queue and write them to your VideoWriter instance.
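A small sketch of that buffering idea (my own illustration; the buffer/ directory is assumed to exist, and the naming scheme, image format and buffer length are made up): frames go to disk with cv::imwrite, only the file names stay in memory, and the oldest file is deleted once the buffer is full. The saving slot would then pop the file names, cv::imread each image, and write it to the VideoWriter.

    #include <opencv2/opencv.hpp>
    #include <cstdio>     // std::snprintf, std::remove
    #include <queue>
    #include <string>

    int main() {
        cv::VideoCapture cap(0);
        double fps = cap.get(CV_CAP_PROP_FPS);
        if (fps <= 0) fps = 30;                       // some cameras report 0
        const int buffer_length = 10;                 // seconds to keep
        const size_t max_frames = (size_t)(fps * buffer_length);

        std::queue<std::string> filenames;            // only names live in RAM
        cv::Mat frame;
        for (long i = 0; cap.read(frame); ++i) {
            char name[64];
            std::snprintf(name, sizeof(name), "buffer/frame_%06ld.jpg", i);
            if (cv::imwrite(name, frame)) {           // frame itself goes to disk
                filenames.push(name);
                if (filenames.size() > max_frames) {
                    std::remove(filenames.front().c_str());  // prune oldest file
                    filenames.pop();
                }
            }
        }
        return 0;
    }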
You mentioned you needed to use hard-drive memory.
In that case, consider using the OpenCV HighGUI VideoWriter. You can create an instance of VideoWriter as below:
VideoWriter record("RobotVideo.avi", CV_FOURCC('D','I','V','X'),
30, frame.size(), true);
And write captured images to it as below:
record.write(image);
Find the documentation and the sample program on the website.