C++ Multithreading OpenCV locking problems

I'm facing a frustrating issue here with multithreading and OpenCV.
I want to read two different video files (video.mp4 and video-copy.mp4) and place the frames of one video in one corner of a canvas (a background JPG image) and the frames of the second video in another corner of the same canvas.
Below is the code I wrote hoping to accomplish this. It compiles fine but runs unpredictably:
sometimes it shows both frames on the canvas but then freezes, or frames from only one video are shown and the other video can't be seen;
other times it runs correctly but still prints the runtime errors in red in my terminal (I included the runtime errors at the bottom of this cpp file; I'm running this on Ubuntu).
I'd like to thank you all ahead of time for your help. Thanks again everyone.
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <string>
#include <thread>
#include <mutex>
using namespace std;
using namespace cv;
//The main canvas (a background image)
Mat canvas=imread("canvas.jpg", CV_LOAD_IMAGE_COLOR);
// the mutex for accessing the canvas variable
mutex mu;
// The function that plays a video on a predefined ROI on the canvas
void play(string n, Point p) // this function can be launched any number of times, each call running in its own thread
{
VideoCapture cap(n); //open a local video file
int fps=cap.get(CV_CAP_PROP_FPS); // get its frames-per-second rate
char k; // for exiting the frame fetching loop
Mat temp; // a temporary holding variable for each frame from the stream
while(1)
{
unique_lock<mutex> locker(mu); // lock the mutex (shared memory is the canvas matrix)
cap>>temp; // get a frame from the stream to the temporary buffer
if(temp.empty())// if stream is empty get out of this loop
{
locker.unlock(); // unlock the mutex first before exiting this loop
break; // exit the loop now.
}
// syntax: src_image.copyTo(dst_image(Rect(Point(x,y), src_image.size())))
// dst_image (the canvas) is the shared object between the threads.
temp.copyTo(canvas(Rect(p.x, p.y, temp.cols, temp.rows))); // paste the frame onto the canvas at location (x,y) with the frame's size
locker.unlock();// done interacting with the shared variable, so unlock the mutex
this_thread::sleep_for(chrono::milliseconds(10));// wait briefly before looping again.
}
}
int main()
{
char k; // for breaking out of the main loop ( the display image loop)
namedWindow("MainWindow", WINDOW_AUTOSIZE); // create the main window
// create and start the first thread, pass a filename and a coordinates to place the frame on the canvas
thread P1(play, "video-copy.mp4", Point(0,0));
// create and start the second thread; both threads can access the canvas variable (one at a time, of course)
thread P2(play, "video.mp4", Point(200,300));
while(1) // keep displaying the canvas as it is changing.
{
unique_lock<mutex> locker(mu); // lock the mutex again, because we are using the canvas (shared object) again,
imshow("MainWindow", canvas); // show image
locker.unlock(); // unlock the mutex again.
k=waitKey(10); // flush the buffer and wait for 10 milliseconds
if(k==27) //exit the display image loop.
break;
}
// Join the child threads back with the parent thread before exiting.
P1.join();
P2.join();
destroyWindow("MainWindow"); // destroy the main window
return 0; // end the main.
}
/* The errors I always get but can't find what is causing them
(MainWindow:3157): GLib-GObject-CRITICAL **: g_object_remove_weak_pointer: assertion `G_IS_OBJECT (object)' failed
[NULL @ 0xb1b06ca0] insufficient thread locking around avcodec_open/close()
[NULL @ 0xb1b07aa0] insufficient thread locking around avcodec_open/close()
[NULL @ 0xb1b06ca0] insufficient thread locking around avcodec_open/close()
[NULL @ 0xb1b07aa0] insufficient thread locking around avcodec_open/close()
[IMGUTILS @ 0xb28fde64] Picture size 0x0 is invalid
[IMGUTILS @ 0xb28fde94] Picture size 0x0 is invalid
[swscaler @ 0xad180300] bad dst image pointers
*/
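For context, the "insufficient thread locking around avcodec_open/close()" lines come from FFmpeg, which expects codec open/close calls to be serialized across threads. A minimal sketch of serializing the VideoCapture open with the existing global mutex mu (a sketch only; whether it actually silences the warnings depends on the OpenCV/FFmpeg build) would be:

// Sketch: open the capture under the mutex so the two play() threads never
// run avcodec_open at the same time; the frame-grabbing loop is unchanged.
void play(string n, Point p)
{
    VideoCapture cap;
    {
        unique_lock<mutex> locker(mu); // serialize VideoCapture::open across threads
        cap.open(n);
    }
    if (!cap.isOpened())
        return; // could not open the file
    // ... same while(1) loop as in the code above ...
}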

Related

OpenCV C++ memory leak when passing Mat to thread

I am trying to pass an OpenCV Mat from an image capture thread to an image processing thread as a part of a large application. I have no requirement for the image capture thread and the image processing thread to access the Mat at the same time. As a result, I simply want to pass ownership of the Mat from the image capture thread to the image processing thread. I am using OpenCV 3.
The problem I am encountering is that my program leaks a large amount of memory.
Below I have attached a minimal example of the code I am using to pass the Mats between threads. When I run this code it finishes using somewhere between 10MB and 500MB of memory from just 300 images with a resolution of 1640x1232.
Code
#include <iostream>
#include <thread>
#include <mutex>
#include <opencv2/opencv.hpp>
#include <signal.h>
#include <stdio.h>
#define SIM_PROCESSING_TIME_MS 150
using namespace std;
using namespace cv;
static volatile int keepRunning = 1;
static volatile int runThread = 1;
// Acts as a stack where the last image to be added is the first to be processed.
// If the 'capture' rate is higher than the processing rate then some images are skipped and processed at the end once the 'capture' has stopped.
vector<Mat> unprocessedImages;
mutex unprocessedImageMutex;
void intHandler(int dummy)
{
keepRunning = 0;
}
// Simulates a function which captures an image using opencv.
Mat GetImage()
{
return imread("../0000.jpg", CV_LOAD_IMAGE_COLOR);
}
// Simulates an image processing thread.
// A delay has been used to replace any actual image processing as in my testing it didn't seem to make a difference.
void image_processing_thread()
{
int count = 0;
while(true)
{
Mat imageToProcess;
{// lock the vector and remove the last element
lock_guard<mutex> lk(unprocessedImageMutex);
if (unprocessedImages.size() > 0)
{
imageToProcess = unprocessedImages.back();
unprocessedImages.pop_back();
}
}
if(!imageToProcess.empty())
{
// We have an image to process so sleep to simulate processing
this_thread::sleep_for(std::chrono::milliseconds(SIM_PROCESSING_TIME_MS));
count++;
cout << "Processed " << count << endl;
}
else if(!runThread) //The image loading thread is done and there are no more images to process
break;
this_thread::sleep_for(chrono::milliseconds(1));
}
}
// Simulates the image capture thread.
// 'Captures' images then pushes them onto unprocessedImages which the image processing thread then reads from.
int main()
{
signal(SIGINT, intHandler);
// Start thread to process images
auto imageProcessingThread = std::thread(image_processing_thread);
// Load 300 images into memory
for (int count = 0; count < 300; count++)
{
this_thread::sleep_for(std::chrono::milliseconds(20));
auto img = GetImage();
lock_guard<mutex> lk(unprocessedImageMutex);
unprocessedImages.push_back(img);
}
// Allow processing thread to exit when it has finished
runThread = 0;
cout << "All images loaded in" << endl;
imageProcessingThread.join();
cout << "All images processed" << endl;
while (keepRunning) {}
return 1;
}
There is a bit of code so the program can be exited with a SIGINT. This code is not a part of my larger application
Attempted fixes
Calling unprocessedImages.reserve(1000) at the top of main.
Replacing unprocessedImages with a std::array and an index.
Replacing unprocessedImages with a c array and an index.
Using copyTo to move the Mats in an out of unprocessedImages.
All of the above while wrapping Mat in shared_ptr or unique_ptr (e.g. vector<unique_ptr<Mat>> unprocessedImages;).
None of these attempts had any effect on the characteristics of the memory leak.
Question
What is causing the memory leak in my program? How are you meant to pass ownership of OpenCV Mats between different threads?
Thanks, James
Edit: Added an additional attempted fix.
Edit 2: Running with Valgrind results in the above code NOT leaking. This is backed up by Valgrind's report, which states that all blocks were still reachable when the program ended. From the program's printouts it is clear that running under Valgrind effectively made it a single-threaded application, as the print statements inside the two threads are perfectly interleaved.
Edit 3: I modified main as shown below. The result was that the peak and minimum memory usage on every iteration of the outer loop were the same. On this particular run the minimum memory usage was 374MB.
int main() {
signal(SIGINT, intHandler);
while(keepRunning)
{
runThread = 1;
// Start thread to process images
auto imageProcessingThread = std::thread(image_processing_thread);
/* ... */
cout << "All images processed" << endl;
}
while (keepRunning) {}
return 1;
}

"Project.exe has triggered a breakpoint" after implementing multithreading

I have a Visual Studio project that worked fine until I tried to implement multithreading. The project acquires images from a GigE camera, and after acquiring 10 images, a video is made from the acquired images.
The program flow was such that the program didn't acquire images while it was making the video. I wanted to change this, so I created another thread that makes the videos from the images. What I want is for the program to acquire images continuously: after 10 images are acquired, another thread runs in parallel to make the video from them, and this continues until I stop the program, with the video for each batch of 10 images being made while the next 10 images are acquired, and so on.
I haven't created threads before so I followed the tutorial on this website. Similar to the website, I created a thread for the function that saves the video. The function that creates the video takes the 10 images as a vector argument. I execute join on this thread just before the line where my main function terminates.
For clarity, here's pseudo-code for what I've implemented:
#include ...
#include <thread>
using namespace std;
thread threads[1];
vector<Image> images;
void thread_method(vector<Image> & images){
// Save vector of images to video
// Clear the vector of images
}
int main(int argc, char* argv[]){
// Some stuff
while(TRUE)
{
for (int i = 0; i < 10; i++)
{
//Acquire Image
//Put image pointer in images vector named images
}
threads[0] = thread(thread_method, images)
}
// stuff
threads[0].join();
cout << endl << "Done! Press Enter to exit..." << endl;
getchar();
return 0;
}
When I run the project now, a message pops up saying that the Project.exe has triggered a breakpoint. The project breaks in report_runtime_error.cpp in static bool __cdecl issue_debug_notification(wchar_t const* const message) throw().
I'm printing some cout messages on the console to help me understand what's going on. What happens is that the program acquires 10 images, then the thread for saving the video starts running. As there are 10 images, 10 images have to be appended to the video. The message that says Project.exe has triggered a breakpoint pops up after the second time 10 images are acquired, at this point the parallel thread has only appended 6 images from the first acquired set of images to the video.
The output contains multiple lines of thread XXXX has exited with code 0, after that the output says
Debug Error!
Program: ...Path\Program.exe
abort() has been called
(Press Retry to debug the application)
Program.exe has triggered a breakpoint.
I can't explain all this in a comment. I'm dropping this here because it looks like OP is heading in some bad directions and I'd like to head him off before the cliff. Caleth has caught the big bang and provided a solution for avoiding it, but that bang is only a symptom of OP's deeper problems, and the solution with detach is somewhat questionable.
using namespace std;
Why is "using namespace std" considered bad practice?
thread threads[1];
An array of 1 is pretty much pointless. If we don't know how many threads there will be, use a vector. Plus there is no good reason for this to be a global variable.
vector<Image> images;
Again, no good reason for this to be global and many many reasons for it NOT to be.
void thread_method(vector<Image> & images){
Pass by reference can save you some copying, but you can't copy a reference, and threads copy their parameters. OK, so use a pointer or std::ref; you can copy those. But you generally don't want to. Problem 1: multiple threads using the same data? It had better be read-only or protected from concurrent modification. This includes the thread generating the vector. Problem 2: is the reference still valid?
// Save vector of images to video
// Clear the vector of images
}
int main(int argc, char* argv[]){
// Some stuff
while(TRUE)
{
for (int i = 0; i < 10; i++)
{
//Acquire Image
//Put image pointer in images vector named images
}
threads[0] = thread(thread_method, images)
Bad for reasons Caleth covered. Plus images keeps growing. The first thread, even if copied, has ten elements. The second has the first ten plus another ten. This is weird, and probably not what OP wants. References or pointers to this vector are fatal. The vector would be resized while other threads were using it, invalidating the old datastore and making it impossible to safely iterate.
}
// stuff
threads[0].join();
Wrong for reasons covered by Caleth
cout << endl << "Done! Press Enter to exit..." << endl;
getchar();
return 0;
}
The solution to joining on the threads is the same as just about every Stack Overflow question that doesn't resolve to "Use a std::string": Use a std::vector.
#include <iostream>
#include <vector>
#include <thread>
void thread_method(std::vector<int> images){
std::cout << images[0] << '\n'; // output something so we can see work being done.
// we may or may not see all of the numbers in order depending on how
// the threads are scheduled.
}
int main() // not using arguments, leave them out.
{
int count = 0; // just to have something to show
// no need for threads to be global.
std::vector<std::thread> threads; // using vector so we can store multiple threads
// Some stuff
while(count < 10) // better-defined terminating condition for testing purposes
{
// every thread gets its own vector. No chance of collisions or duplicated
// effort
std::vector<int> images; // don't have Image, so stubbing it out with int
for (int i = 0; i < 10; i++)
{
images.push_back(count);
}
// create and store thread.
threads.emplace_back(thread_method,std::move(images));
count ++;
}
// stuff
for (std::thread &temp: threads) // wait for all threads to exit
{
temp.join();
}
// std::endl is expensive. It's a linefeed and a stream flush, so save it for
// when you really need to get a message out immediately
std::cout << "\nDone! Press Enter to exit..." << std::endl;
char temp;
std::cin >> temp; // sticking with the standard library all the way through
return 0;
}
To better explain
threads.emplace_back(thread_method,std::move(images));
this creates a thread inside threads (emplace_back) that will call thread_method with a copy of images. Odds are good that the compiler would have recognized that this was the end of the road for this particular instance of images and eliminated the copying, but if not, std::move should give it the hint.
You are overwriting your one thread in the while loop. If it's still running, the program is aborted. You have to join or detach each thread value.
You could instead do
#include ... // headers as before
#include <thread>
// pass by value, as it's potentially outliving the loop
void thread_method(std::vector<Image> images){
// Save vector of images to video
}
int main(int argc, char* argv[]){
// Some stuff
while(TRUE)
{
std::vector<Image> images; // new vector each time round
for (int i = 0; i < 10; i++)
{
//Acquire Image
//Put image pointer in images vector named images
}
// std::thread::thread will forward this move
std::thread(thread_method, std::move(images)).detach(); // not join
}
// stuff
// this is somewhat of a lie now, we have to wait for the threads too
std::cout << std::endl << "Done! Press Enter to exit..." << std::endl;
std::getchar();
return 0;
}

Best practice for performance when multithreading with OpenCV VideoWriter in C++

I'm relatively new to C++, especially multi-threading, and have been playing with different options. I've gotten some things to work, but ultimately I'm looking to maximize performance, so I think it'd be better to reach out to everyone else about what would be most effective and then go down that road.
I'm working on an application that will take a video stream and write both an unmodified video file and a modified video file (there's some image processing that happens) to the local disk. There are also going to be some other threads to collect other GPS data, etc., but nothing special.
The problem I'm running into is that the frame rate is limited mainly by the VideoWriter write calls in OpenCV. I know this can be greatly alleviated if I use a thread to write each frame to the VideoWriter object, so that the two VideoWriters can run simultaneously with each other and with the image processing functions.
I've successfully created this function:
void frameWriter(Mat writeFrame, VideoWriter *orgVideo)
{
(orgVideo->write(writeFrame));
}
And it is called from within an infinite loop like so:
thread writeOrgThread(frameWriter, modFrame, &orgVideo, &orgVideoMutex);
writeOrgThread.join();
thread writeModThread(frameWriter, processMatThread(modFrame, scrnMsg1, scrnMsg2), &modVideo, &modVideoMutex);
writeModThread.join();
Now having the .join() immediately underneath defeats the performance benefits, but without it I immediately get the error "terminate called without an active exception". I thought it would do what I needed if I put the join() calls at the top of the loop, so that on the next iteration it would make sure the previous frame was written before writing the next, but then it behaves as if the join is not there (perhaps by the time the main task has made the full loop and gotten to the join, the thread has already terminated?). Also, I think using detach creates the issue that the threads are unsynchronized, and then I run into these errors:
[mpeg4 @ 0000000000581b40] Invalid pts (156) <= last (156)
[mpeg4 @ 00000000038d5380] Invalid pts (158) <= last (158)
[mpeg4 @ 0000000000581b40] Invalid pts (158) <= last (158)
[mpeg4 @ 00000000038d5380] Invalid pts (160) <= last (160)
[mpeg4 @ 0000000000581b40] [mpeg4 @ 00000000038d5380] Invalid pts (160) <= last
(160)
Invalid pts (162) <= last (162)
I'm assuming this is because multiple threads are trying to access the same resource? Finally, I tried using a mutex with detach to avoid the above, and I got a curious behavior where my sleep wasn't behaving properly and the frame rate was inconsistent.
void frameWriter(Mat writeFrame, VideoWriter *orgVideo, mutex *mutexVid)
{
(mutexVid->lock());
(orgVideo->write(writeFrame));
(mutexVid->unlock());
}
Obviously I'm struggling with thread synchronization and management of shared resources. I realize this is probably a rookie struggle, so if somebody tossed a tutorial link at me and told me to go read a book I'd be OK with that. I guess what I'm looking for right now is some guidance as to what specific method is going to get me the best performance in this situation, and then I'll make that work.
Additionally, does anyone have a link for a very good tutorial that covers multithreading in C++ from a broader point of view (not limited to the Boost or C++11 implementation, and covering mutexes, etc.)? It could greatly help me out with this.
Here's the 'complete' code, I stripped out some functions to make it easier to read, so don't mind the extra variable here and there:
//Standard libraries
#include <iostream>
#include <ctime>
#include <sys/time.h>
#include <fstream>
#include <iomanip>
#include <thread>
#include <chrono>
#include <mutex>
#include <sstream> // for the stringstream used below
//OpenCV libraries
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
//Other libraries
//Namespaces
using namespace cv;
using namespace std;
// Code for capture thread
void captureMatThread(Mat *orgFrame, VideoCapture *stream1){
//loop infinitely
for(;;){
//capture from webcam into Mat orgFrame
(*stream1) >> (*orgFrame);
}
}
Mat processMatThread(Mat inFrame, string scrnMsg1, string scrnMsg2){
//Fancify image
putText(inFrame, scrnMsg1, cvPoint(545,450),CV_FONT_HERSHEY_COMPLEX,
0.5,CvScalar(255,255,0,255),1,LINE_8,false);
putText(inFrame, scrnMsg2, cvPoint(395,470),CV_FONT_HERSHEY_COMPLEX,
0.5,CvScalar(255,255,0,255),1,LINE_8,false);
return inFrame;
}
void frameWriter(Mat writeFrame, VideoWriter *orgVideo, mutex *mutexVid)
{
//(mutexVid->lock());
(orgVideo->write(writeFrame));
//(mutexVid->unlock());
}
long usecDiff(long usec1, long usec2){
if (usec1>usec2){
return usec1 - usec2;
}
else {
return (1000000 + usec1) - usec2;
}
}
int main()
{
//Start video capture
cout << "Opening camera stream..." << endl;
VideoCapture stream1(0);
if (!stream1.isOpened()) {
cout << "Camera failed to open!" << endl;
return 1;
}
//Message incoming image size
cout << "Camera stream opened. Incoming size: ";
cout << stream1.get(CV_CAP_PROP_FRAME_WIDTH) << "x";
cout << stream1.get(CV_CAP_PROP_FRAME_HEIGHT) << endl;
//File locations
const long fileSizeLimitBytes(10485760);
const int fileNumLimit(5);
const string outPath("C:\\users\\nag1\\Desktop\\");
string outFilename("out.avi");
string inFilename("in.avi");
//Declare variables for screen messages
timeval t1;
timeval t2;
timeval t3;
time_t now(time(0));
gettimeofday(&t1,0);
gettimeofday(&t2,0);
gettimeofday(&t3,0);
float FPS(0.0f);
const int targetFPS(60);
const long targetUsec(1000000/targetFPS);
long usec(0);
long usecProcTime(0);
long sleepUsec(0);
int i(0);
stringstream scrnMsgStream;
string scrnMsg1;
string scrnMsg2;
string scrnMsg3;
//Define images
Mat orgFrame;
Mat modFrame;
//Start Video writers
cout << "Creating initial video files..." << endl;
//Identify outgoing size, comments use incoming size
const int frame_width = 640; //stream1.get(CV_CAP_PROP_FRAME_WIDTH);
const int frame_height = 480; //stream1.get(CV_CAP_PROP_FRAME_HEIGHT);
//Message outgoing image size
cout << "Outgoing size: ";
cout << frame_width << "x" << frame_height << endl;
VideoWriter orgVideo(outPath + inFilename,CV_FOURCC('D','I','V','X'),targetFPS,
Size(frame_width,frame_height),true);
mutex orgVideoMutex;
VideoWriter modVideo(outPath + outFilename,CV_FOURCC('D','I','V','X'),targetFPS,
Size(frame_width,frame_height),true);
mutex modVideoMutex;
//unconditional loop
cout << "Starting recording..." << endl;
//Get first image to prevent exception
stream1.read(orgFrame);
resize(orgFrame,modFrame,Size(frame_width,frame_height));
// start thread to begin capture and populate Mat frame
thread captureThread(captureMatThread, &orgFrame, &stream1);
while (true) {
//Time stuff
i++;
if (i%2==0){
gettimeofday(&t1,0);
usec = usecDiff(t1.tv_usec,t2.tv_usec);
}
else{
gettimeofday(&t2,0);
usec = usecDiff(t2.tv_usec,t1.tv_usec);
}
now = time(0);
FPS = 1000000.0f/usec;
scrnMsgStream.str(std::string());
scrnMsgStream.precision(2);
scrnMsgStream << std::setprecision(2) << std::fixed << FPS;
scrnMsg1 = scrnMsgStream.str() + " FPS";
scrnMsg2 = asctime(localtime(&now));
//Get image
//Handled by captureMatThread now!!!
//stream1.read(orgFrame);
//resize image
resize(orgFrame,modFrame,Size(frame_width,frame_height));
//write original image to video
//writeOrgThread.join();
thread writeOrgThread(frameWriter, modFrame, &orgVideo, &orgVideoMutex);
//writeOrgThread.join();
writeOrgThread.detach();
//orgVideo.write(modFrame);
//write modified image to video
//writeModThread.join();
thread writeModThread(frameWriter, processMatThread(modFrame, scrnMsg1, scrnMsg2), &modVideo, &modVideoMutex);
//writeOrgThread.join();
//writeModThread.join();
writeModThread.detach();
//modVideo.write(processMatThread(modFrame, scrnMsg1, scrnMsg2));
//sleep
gettimeofday(&t3,0);
if (i%2==0){
sleepUsec = targetUsec - usecDiff(t3.tv_usec,t1.tv_usec);
}
else{
sleepUsec = targetUsec - usecDiff(t3.tv_usec,t2.tv_usec);
}
this_thread::sleep_for(chrono::microseconds(sleepUsec));
}
orgVideo.release();
modVideo.release();
return 0;
}
This is actually running on a Raspberry Pi (adapted to use the Raspberry Pi camera), so my resources are limited; that's why I'm trying to minimize how many copies of the image there are and implement the parallel writing of the video files. You can see I've also experimented with placing both join()s after the writeModThread, so that at least the writing of the two files happens in parallel. Perhaps that's the best we can do, but I plan to add a thread for all the image processing that I'd like to run in parallel (for now you can see it called as a simple function that adds plain text).
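One possible shape for the dedicated writer thread mentioned above is a long-lived thread and a queue per VideoWriter, so each file is only ever written by one thread and the frames stay in order. This is an untested sketch with placeholder names, not working code:

#include <queue>
#include <condition_variable>

// Untested sketch: one persistent thread per VideoWriter. The main loop only
// pushes frames; all write() calls for a given file happen on one thread, so
// the frames for that file are always written in order.
struct WriterQueue {
    std::queue<cv::Mat> frames;
    std::mutex m;
    std::condition_variable cond;
    bool done = false;
};

void writerLoop(cv::VideoWriter *video, WriterQueue *q)
{
    for (;;)
    {
        cv::Mat frame;
        {
            std::unique_lock<std::mutex> lk(q->m);
            q->cond.wait(lk, [q]{ return !q->frames.empty() || q->done; });
            if (q->frames.empty())
                return;                 // done and nothing left to write
            frame = q->frames.front();  // cheap Mat header copy
            q->frames.pop();
        }
        video->write(frame);            // only this thread touches the writer
    }
}

// Producer side (called from the main loop): deep-copy so the loop can keep
// reusing its own Mat, then wake the writer thread.
void pushFrame(WriterQueue *q, const cv::Mat &frame)
{
    {
        std::lock_guard<std::mutex> lk(q->m);
        q->frames.push(frame.clone());
    }
    q->cond.notify_one();
}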

Prevent frame dropping while saving frames to disk

I am trying to write C++ code which saves incoming video frames to disk. Asynchronously arriving frames are pushed onto a queue by a producer thread. The frames are popped off the queue by a consumer thread. Mutual exclusion of the producer and consumer is done using a mutex. However, I still notice frames being dropped. The dropped frames (likely) correspond to instances when the producer tries to push the current frame onto the queue but cannot do so because the consumer holds the lock. Any suggestions? I essentially do not want the producer to wait. A waiting consumer is okay for me.
EDIT-0 : Alternate idea which does not involve locking. Will this work ?
Producer initially enqueues n seconds worth of video. n can be some small multiple of frame-rate.
As long as queue contains >= n seconds worth of video, consumer dequeues on a frame by frame basis and saves to disk.
When the video is done, the queue is flushed to disk.
EDIT-1: The frames arrive at ~ 15 fps.
EDIT-2 : Outline of code :
Main driver code
// Main function
void LVD::DumpFrame(const IplImage *frame)
{
// Copies frame into internal buffer.
// buffer object is a wrapper around OpenCV's IplImage
Initialize(frame);
// (Producer thread) -- Pushes buffer onto queue
// Thread locks queue, pushes buffer onto queue, unlocks queue and dies
PushBufferOntoQueue();
// (Consumer thread) -- Pop off queue and save to disk
// Thread locks queue, pops it, unlocks queue,
// saves popped buffer to disk and dies
DumpQueue();
++m_frame_id;
}
void LVD::Initialize(const IplImage *frame)
{
if(NULL == m_buffer) // first iteration
m_buffer = new ImageBuffer(frame);
else
m_buffer->Copy(frame);
}
Producer
void LVD::PushBufferOntoQueue()
{
m_queingThread = ::CreateThread( NULL, 0, ThreadFuncPushImageBufferOntoQueue, this, 0, &m_dwThreadID);
}
DWORD WINAPI LVD::ThreadFuncPushImageBufferOntoQueue(void *arg)
{
LVD* videoDumper = reinterpret_cast<LVD*>(arg);
LocalLock ll( &videoDumper->m_que_lock, 60*1000 );
videoDumper->m_frameQue.push(*(videoDumper->m_buffer));
ll.Unlock();
return 0;
}
Consumer
void LVD::DumpQueue()
{
m_dumpingThread = ::CreateThread( NULL, 0, ThreadFuncDumpFrames, this, 0, &m_dwThreadID);
}
DWORD WINAPI LVD::ThreadFuncDumpFrames(void *arg)
{
LVD* videoDumper = reinterpret_cast<LVD*>(arg);
LocalLock ll( &videoDumper->m_que_lock, 60*1000 );
if(videoDumper->m_frameQue.size() > 0 )
{
videoDumper->m_save_frame=videoDumper->m_frameQue.front();
videoDumper->m_frameQue.pop();
}
ll.Unlock();
stringstream ss;
ss << videoDumper->m_saveDir.c_str() << "\\";
ss << videoDumper->m_startTime.c_str() << "\\";
ss << setfill('0') << setw(6) << videoDumper->m_frame_id;
ss << ".png";
videoDumper->m_save_frame.SaveImage(ss.str().c_str());
return 0;
}
Note:
(1) I cannot use C++11. Therefore, Herb Sutter's DDJ article is not an option.
(2) I found a reference to an unbounded single producer-consumer queue. However, the author(s) state that enqueue(adding frames) is probably not wait-free.
(3) I also found liblfds, a C-library but not sure if it will serve my purpose.
The queue cannot be the problem. Video frames arrive at 16 msec intervals, at worst. Your queue only needs to store a pointer to a frame. Adding/removing one in a thread-safe way can never take more than a microsecond.
You'll need to look for another explanation and solution. Video always presents a fire-hose problem: disk drives are not generally fast enough to keep up with an uncompressed video stream, so if your consumer cannot keep up with the producer then something is going to give, with a dropped frame being the likely outcome when you (correctly) prevent the queue from growing without bound.
Be sure to consider encoding the video. Real-time MPEG and AVC encoders are available. After they compress the stream you should not have a problem keeping up with the disk.
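As a sketch of the pointer-queue point (illustrative only; std::mutex and std::queue are used for brevity, but the same shape works with the LocalLock wrapper already shown in the question since C++11 is unavailable, and ImageBuffer is the question's wrapper type):

// Sketch: the queue holds pointers, so push/pop copy one pointer under the
// lock; the expensive pixel copy and the disk write happen outside the lock.
#include <queue>
#include <mutex>

class ImageBuffer;                     // the wrapper type from the question

std::queue<ImageBuffer*> frameQueue;
std::mutex queueMutex;

void producerPush(ImageBuffer *buf)    // called by the capture thread
{
    std::lock_guard<std::mutex> lk(queueMutex);
    frameQueue.push(buf);              // pointer copy only
}

ImageBuffer *consumerPop()             // called by the disk-writer thread
{
    std::lock_guard<std::mutex> lk(queueMutex);
    if (frameQueue.empty())
        return NULL;
    ImageBuffer *buf = frameQueue.front();
    frameQueue.pop();                  // pointer copy only
    return buf;                        // caller saves it to disk, then recycles it
}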
A circular buffer is definitely a good alternative. If you make it use a 2^n size, you can also use this trick to update the indices:
inline int update_index(int x)
{
return (x + 1) & (size-1);
}
That way, there is no need for an expensive compare (and the consequent jumps) or a divide (the single most expensive integer operation in any processor, not counting "fill/copy large chunks of memory" type operations).
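To show where the trick fits, here is a usage sketch of a fixed-size ring (illustrative only; Frame is a placeholder type, and whatever locking currently guards the queue must still guard the head/tail updates):

// Usage sketch of a fixed-size ring buffer built around the index trick.
struct Frame { unsigned char data[640 * 480 * 3]; }; // placeholder pixel buffer

const int size = 16;                 // ring capacity, must be a power of two

inline int update_index(int x)       // the trick from above
{
    return (x + 1) & (size - 1);
}

Frame ring[size];
int head = 0;                        // next slot the producer writes
int tail = 0;                        // next slot the consumer reads

// Producer side: drop the frame if the ring is full rather than block.
bool produce(const Frame &f)
{
    int next = update_index(head);
    if (next == tail)                // full
        return false;                // frame dropped
    ring[head] = f;
    head = next;
    return true;
}

// Consumer side: returns false when there is nothing to save yet.
bool consume(Frame &out)
{
    if (tail == head)                // empty
        return false;
    out = ring[tail];
    tail = update_index(tail);
    return true;
}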
When dealing with video (or graphics in general) it is essential to do "buffer management". Typically, this is a case of tracking the state of the "framebuffer" and avoiding copying content more than necessary.
The typical approach is to allocate 2 or 3 video buffers (or frame buffers, or whatever you call them). A buffer can be owned by either the producer or the consumer, and the transfer is ONLY of ownership. So when the video driver signals "this buffer is full", ownership passes to the consumer, which reads the buffer and stores it to disk [or whatever]. When the storing is finished, the buffer is given back ("freed") so that the producer can re-use it. Copying the data out of the buffer is expensive [takes time], so you don't want to do that unless it's ABSOLUTELY necessary.
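A sketch of that ownership hand-off (illustrative names only; std::mutex is used just to keep the sketch short, and in practice the free/full lists would be guarded by whatever locking or signalling the rest of the pipeline already uses):

#include <deque>
#include <mutex>
#include <vector>

// Sketch: a small pool of pre-allocated buffers. Only pointers change hands;
// pixel data is never copied between producer and consumer.
struct FrameBuffer { std::vector<unsigned char> pixels; };

std::deque<FrameBuffer*> freeBuffers;   // available to the producer
std::deque<FrameBuffer*> fullBuffers;   // filled, waiting for the consumer
std::mutex poolMutex;

FrameBuffer *acquireForCapture()        // producer: take an empty buffer
{
    std::lock_guard<std::mutex> lk(poolMutex);
    if (freeBuffers.empty())
        return NULL;                    // drop a frame rather than block
    FrameBuffer *b = freeBuffers.front();
    freeBuffers.pop_front();
    return b;                           // producer now owns b
}

void submitFilled(FrameBuffer *b)       // producer to consumer hand-off
{
    std::lock_guard<std::mutex> lk(poolMutex);
    fullBuffers.push_back(b);           // consumer now owns b
}

FrameBuffer *takeForSaving()            // consumer: grab the next filled buffer
{
    std::lock_guard<std::mutex> lk(poolMutex);
    if (fullBuffers.empty())
        return NULL;
    FrameBuffer *b = fullBuffers.front();
    fullBuffers.pop_front();
    return b;
}

void releaseAfterSaving(FrameBuffer *b) // consumer to producer hand-back
{
    std::lock_guard<std::mutex> lk(poolMutex);
    freeBuffers.push_back(b);
}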

Detecting an unplugged capture device (OpenCV)

I'm attempting to detect if my capture camera gets unplugged. My assumption was that a call to cvQueryFrame would return NULL; however, it continues to return the last valid frame.
Does anyone know of how to detect camera plug/unplug events with OpenCV? This seems so rudimentary...what am I missing?
There is no API function to do that, unfortunately.
However, my suggestion is that you create another thread that simply calls cvCaptureFromCAM() and checks its result (inside a loop). If the camera gets disconnected then it should return NULL.
I'll paste some code just to illustrate my idea:
// This code should be executed on another thread!
while (1)
{
CvCapture* capture = NULL;
capture = cvCaptureFromCAM(-1); // or whatever parameter you are already using
if (!capture)
{
std::cout << "!!! Camera got disconnected !!!!" << std::endl;
break;
}
// I'm not sure if releasing it will have any effect on the other thread
cvReleaseCapture(&capture);
}
Thanks @karlphillip for pointing me in the right direction. Running calls to cvCaptureFromCAM in a separate thread works. When the camera gets unplugged, the return value is NULL.
However, it appears that this function is not thread-safe, but a simple mutex to lock out simultaneous calls to cvCaptureFromCAM seems to do the trick. I used boost::thread for this example, but one could tweak this easily.
At global scope:
// Create a mutex used to prevent simultaneous calls to cvCaptureFromCAM
boost::shared_mutex mtxCapture;
// Flag to notify when we're done.
// NOTE: not bothering w/mutex for this example for simplicity's sake
bool done = false;
Entry point goes something like this:
int main()
{
// Create the work and the capture monitoring threads
boost::thread workThread(&Work);
boost::thread camMonitorThread(&CamMonitor);
while (! done)
{
// Do whatever
}
// Wait for threads to close themselves
workThread.join();
camMonitorThread.join();
return 0;
}
The work thread is simple. The only caveat is that you need to lock the mutex so you don't get simultaneous calls to cvCaptureFromCAM.
// Work Thread
void Work()
{
Capture * capture = NULL;
mtxCapture.lock(); // Lock calls to cvCaptureFromCAM
capture = cvCaptureFromCAM(-1); // Get the capture object
mtxCapture.unlock(); // Release lock on calls to cvCaptureFromCAM
//TODO: check capture != NULL...
while (! done)
{
// Do work
}
// Release capture
cvReleaseCapture(&capture);
}
And finally, the capture monitoring thread, as suggested by @karlphillip, except with a locked call to cvCaptureFromCAM. In my tests, the calls to cvReleaseCapture were quite slow. I put a call to cvWaitKey at the end of the loop because I don't want the overhead of constantly checking.
void CamMonitor()
{
while (! done)
{
CvCapture * capture = NULL;
mtxCapture.lock(); // Lock calls to cvCaptureFromCAM
capture = cvCaptureFromCAM(-1); // Get the capture object
mtxCapture.unlock(); // Release lock on calls to cvCaptureFromCAM
if (capture == NULL)
done = true; // NOTE: not a thread-safe write...
else
cvReleaseCapture(&capture);
// Wait a while, we don't need to be constantly checking.
cvWaitKey(2500);
}
}
I will probably end up implementing a ready-state flag which will be able to detect if the camera gets plugged back in, but that's out of the scope of this example. Hope someone finds this useful. Thanks again, @karlphillip.
This still seems to be an issue.
Another solution would be to compare the returned data with the previous frame. For a working camera there should always be some flickering; if the data is identical, you can be sure the cam was unplugged.
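A sketch of that check (using the C++ API only for illustration; the "two identical consecutive frames" criterion relies on sensor noise always being present on a live camera):

#include <opencv2/opencv.hpp>

// Sketch: returns true when two consecutive frames are bit-identical, which a
// live sensor essentially never produces because of noise, or when the new
// frame is empty.
bool looksUnplugged(const cv::Mat &prev, const cv::Mat &curr)
{
    if (curr.empty())
        return true;
    if (prev.empty() || prev.size() != curr.size() || prev.type() != curr.type())
        return false;               // nothing sensible to compare yet
    // Infinity norm of the difference is 0 only if every pixel matches exactly.
    return cv::norm(prev, curr, cv::NORM_INF) == 0;
}

// Usage idea: keep a clone of the previous frame in the grab loop and call
// looksUnplugged(previous, current) after each capture.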
Martin
I think that I have a good workaround for this problem. I create an auxiliary Mat filled with zeros at the same resolution as the camera output. I assign it to the Mat that the captured frame is about to be written into, and afterwards I check the norm of that Mat. If it equals zero, it means no new frame was captured from the camera.
VideoCapture cap(0);
if(!cap.isOpened()) return -1;
Mat frame;
cap >> frame;
Mat emptyFrame = Mat::zeros(frame.rows, frame.cols, CV_32F); // zero-filled sentinel the same resolution as the camera frames
for(;;)
{
frame = emptyFrame; // point frame at the zero-filled sentinel
cap >> frame; // a successful capture re-creates frame; a failed one leaves the zeros
if (norm(frame) == 0) break; // all-zero frame means no new frame was captured
}