I'm attempting to detect when my capture camera gets unplugged. My assumption was that a call to cvQueryFrame would return NULL; however, it continues to return the last valid frame.
Does anyone know how to detect camera plug/unplug events with OpenCV? This seems so rudimentary... what am I missing?
There is no API function to do that, unfortunately.
However, my suggestion is that you create another thread that simply calls cvCaptureFromCAM() inside a loop and checks its result. If the camera gets disconnected, it should return NULL.
I'll paste some code just to illustrate my idea:
// This code should be executed on another thread!
while (1)
{
    CvCapture* capture = cvCaptureFromCAM(-1); // or whatever index you are already using
    if (!capture)
    {
        std::cout << "!!! Camera got disconnected !!!" << std::endl;
        break;
    }
    // I'm not sure if releasing it will have any effect on the other thread
    cvReleaseCapture(&capture);
}
Thanks @karlphillip for pointing me in the right direction. Running calls to cvCaptureFromCAM in a separate thread works: when the camera gets unplugged, the return value is NULL.
However, this function does not appear to be thread-safe. A simple mutex to serialize simultaneous calls to cvCaptureFromCAM does the trick. I used boost::thread for this example, but one could easily adapt it.
At global scope:
// Create a mutex used to prevent simultaneous calls to cvCaptureFromCAM
boost::shared_mutex mtxCapture;
// Flag to notify when we're done.
// NOTE: not bothering w/mutex for this example for simplicity's sake
bool done = false;
Entry point goes something like this:
int main()
{
    // Create the work and the capture-monitoring threads
    boost::thread workThread(&Work);
    boost::thread camMonitorThread(&CamMonitor);
    while (! done)
    {
        // Do whatever
    }
    // Wait for threads to close themselves
    workThread.join();
    camMonitorThread.join();
    return 0;
}
The work thread is simple. The only caveat is that you need to lock the mutex so you don't get simultaneous calls to cvCaptureFromCAM.
// Work Thread
void Work()
{
    CvCapture * capture = NULL;
    mtxCapture.lock();              // Lock calls to cvCaptureFromCAM
    capture = cvCaptureFromCAM(-1); // Get the capture object
    mtxCapture.unlock();            // Release lock on calls to cvCaptureFromCAM
    //TODO: check capture != NULL...
    while (! done)
    {
        // Do work
    }
    // Release capture
    cvReleaseCapture(&capture);
}
And finally, the capture-monitoring thread, as suggested by @karlphillip, except with a locked call to cvCaptureFromCAM. In my tests, the calls to cvReleaseCapture were quite slow. I put a call to cvWaitKey at the end of the loop because I don't want the overhead of constantly checking.
void CamMonitor()
{
    while (! done)
    {
        CvCapture * capture = NULL;
        mtxCapture.lock();              // Lock calls to cvCaptureFromCAM
        capture = cvCaptureFromCAM(-1); // Get the capture object
        mtxCapture.unlock();            // Release lock on calls to cvCaptureFromCAM
        if (capture == NULL)
            done = true;                // NOTE: not a thread-safe write...
        else
            cvReleaseCapture(&capture);
        // Wait a while, we don't need to be constantly checking.
        cvWaitKey(2500);
    }
}
I will probably end up implementing a ready-state flag so I can detect when the camera gets plugged back in, but that's out of the scope of this example. Hope someone finds this useful. Thanks again, @karlphillip.
This still seems to be an issue.
Another solution would be to compare each returned frame with the previous one. A working sensor always produces slight noise ("flickering"), so if two consecutive frames are identical you can be fairly sure the camera was unplugged.
Martin
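A minimal sketch of that comparison idea, using raw byte buffers to stand in for the frame data (with OpenCV Mats you could equivalently compare the pixel data of two consecutive grabs); `frameFrozen` is a hypothetical helper name:

```cpp
#include <cstdint>
#include <vector>

// A live sensor always produces a little noise, so two consecutive frames are
// virtually never byte-identical. Identical frames suggest the device is gone
// (or the driver keeps returning its last buffered image).
bool frameFrozen(const std::vector<uint8_t>& prev, const std::vector<uint8_t>& curr)
{
    return !prev.empty() && prev == curr;
}
```

In practice you would want to see several identical frames in a row before declaring the camera dead, to avoid false positives on a static scene.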
I think I have a good workaround for this problem. I create an auxiliary Mat filled with zeros at the same resolution as the camera output, assign it to the Mat that receives the captured frame, and then check that Mat's norm after the grab. If the norm equals zero, it means no new frame was captured from the camera.
VideoCapture cap(0);
if (!cap.isOpened()) return -1;
Mat frame;
cap >> frame;
// Zero-filled frame with the same geometry and type as the camera output
Mat emptyFrame = Mat::zeros(frame.size(), frame.type());
for (;;)
{
    frame = emptyFrame.clone(); // reset so a failed grab leaves it all-zero
    cap >> frame;
    if (norm(frame) == 0) break; // nothing new was captured
}
Related
I have two threads: one produces data, the other processes it.
The data is not just an int or float but a complex object. In my case, it's an OpenCV Mat (an image).
If the first thread has only written half of the image when the second thread reads it, will the reader see half an image? Will the image be broken?
int main(int argc, char *argv[])
{
    cv::Mat buffer;
    cv::VideoCapture cap;
    std::mutex mutex;
    cap.open(0);
    std::thread product([](cv::Mat& buffer, cv::VideoCapture cap, std::mutex& mutex){
        while (true) { // keep producing new images
            cv::Mat tmp;
            cap >> tmp;
            //mutex.lock();
            buffer = tmp.clone();
            //mutex.unlock();
        }
    }, std::ref(buffer), cap, std::ref(mutex));
    product.detach();
    int i = 0;
    while (true) { // process in the main thread
        //mutex.lock();
        cv::Mat tmp = buffer;
        //mutex.unlock();
        if (!tmp.data)
            std::cout << "null" << i++ << std::endl;
        else {
            //std::cout << "not null" << std::endl;
            cv::imshow("test", tmp);
        }
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}
Do I need to add a mutex around the write and the read to make sure the image isn't broken? Like this:
int main(int argc, char *argv[])
{
    cv::Mat buffer;
    cv::VideoCapture cap;
    std::mutex mutex;
    cap.open(0);
    std::thread product([](cv::Mat& buffer, cv::VideoCapture cap, std::mutex& mutex){
        while (true) { // keep producing new images
            cv::Mat tmp;
            cap >> tmp;
            mutex.lock();
            buffer = tmp.clone();
            mutex.unlock();
        }
    }, std::ref(buffer), cap, std::ref(mutex));
    product.detach();
    while (true) { // process in the main thread
        mutex.lock();
        cv::Mat tmp = buffer;
        mutex.unlock();
        if (!tmp.data)
            std::cout << "null" << std::endl;
        else {
            std::cout << "not null" << std::endl;
            cv::imshow("test", tmp);
        }
    }
    return 0;
}
This question is related to How to solves image processing cause camera io delay?
As soon as you have one thread modifying an object while another thread potentially accesses the value of that same object concurrently, you have a race condition and behavior is undefined. Yes, that can happen. And, since we're talking about an object like an entire image buffer here, almost certainly will happen. And yes, you will need to use proper synchronization to prevent it from happening.
From your description, it would seem that you basically have a situation where one thread produces an image and another thread has to wait for that image to be ready. In that case, the first question to ask yourself is: if the second thread cannot start its work before the first thread has completed its own, what exactly do you gain by using a second thread? If there is still enough work that both threads can do in parallel for this to make sense, then you will most likely want not just a simple mutex here, but something like a condition variable or a barrier.
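For illustration, a minimal sketch of the condition-variable pattern, with a plain vector standing in for the image buffer (in the OpenCV code above, the payload would be the cv::Mat and the producer would do buffer = tmp.clone() under the lock); the names `Shared`, `producer`, and `consumer` are made up for this sketch:

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

struct Shared {
    std::mutex m;
    std::condition_variable cv;
    std::vector<int> image;   // stand-in for the produced frame
    bool ready = false;
};

void producer(Shared& s)
{
    std::vector<int> frame(4, 42);        // "capture" a frame
    {
        std::lock_guard<std::mutex> lock(s.m);
        s.image = frame;                  // publish under the lock
        s.ready = true;
    }
    s.cv.notify_one();                    // wake the consumer
}

std::vector<int> consumer(Shared& s)
{
    std::unique_lock<std::mutex> lock(s.m);
    s.cv.wait(lock, [&]{ return s.ready; }); // sleep until a frame is ready
    return s.image;                          // read under the lock
}
```

Here the consumer blocks instead of spinning on possibly-empty data, and both accesses to `image` happen under the mutex, so there is no data race.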
I have an application which captures frames from a camera and then shows the picture with imshow(), like this:
VideoCapture cap(0);
if (!cap.isOpened()) // if not success, exit program
{
    cout << "Cannot open the web cam" << endl;
    system("pause");
    return -1;
}
Mat imgOriginal;
while (true)
{
    bool bSuccess = cap.read(imgOriginal); // read a new frame from video
    if (!bSuccess) // if not success, break loop
    {
        cout << "Cannot read a frame from video stream" << endl;
        break;
    }
    cv::imshow("Image", imgOriginal);
    if (waitKey(10) == 27)
        break;
}
And the program works well. But when I remove the waitKey check and use some other exit condition instead (for example, a variable that tells the while loop to stop, i.e. checkVariable == false in place of waitKey(10) == 27), everything goes wrong: I get a grey image instead of the normal picture. Can you explain why?
The waitKey function does more than read a key from the user; it also does the equivalent of a "spin" in other GUI frameworks. That is, it processes pending events of the window displaying the image, such as actually painting a new image (the window most probably starts out with a default grey background). So you HAVE to call it whenever you use imshow, at the very least. It also pauses for the given number of milliseconds, so you can use it to keep an idle loop from occupying a CPU like crazy.
You can always ignore the result of waitKey if you do not need it, but it has to run.
I hope this clears your doubt.
Using ffmpeg's C API, I have created a C++ application that reads frames from a file and writes them to a new file. Everything works fine as long as I write the frames to the output immediately. In other words, the following structure of the program produces the correct result (I give only pseudocode for now; if needed I can also post real snippets, but the classes I created to wrap the ffmpeg functionality are quite large):
AVFrame* frame = av_frame_alloc();
int got_frame;
// readFrame returns 0 if the file has ended; got_frame = 1 if
// a complete frame has been extracted
while (readFrame(inputfile, frame, &got_frame)) {
    if (got_frame) {
        // I actually do some processing here
        writeFrame(outputfile, frame);
    }
}
av_frame_free(&frame);
The next step was to parallelize the application; as a consequence, frames are no longer written immediately after they are read (I don't want to go into the details of the parallelization). In this case problems arise: there is some flickering in the output, as if some frames were repeated randomly. However, the number of frames and the duration of the output video remain correct.
What I am trying to do now is separate the reading from the writing completely in the serial implementation, in order to understand what is going on. I am creating a queue of pointers to frames:
std::queue<AVFrame*> queue;
int ret = 1, got_frame;
while (ret) {
    AVFrame* frame = av_frame_alloc();
    ret = readFrame(inputfile, frame, &got_frame);
    if (got_frame)
        queue.push(frame);
}
To write frames to the output file I do:
while (!queue.empty()) {
    frame = queue.front();
    queue.pop();
    writeFrame(outputFile, frame);
    av_frame_free(&frame);
}
The result in this case is an output video with the correct duration and number of frames, but which only repeats what I think are the last 3 frames of the video.
My guess is that something goes wrong because in the first case I always reuse the same memory location for reading frames, while in the second case I allocate many different frames.
Any suggestions on what could be the problem?
Ah, so I'm assuming that readFrame() is a wrapper around libavformat's av_read_frame() and libavcodec's avcodec_decode_video2(), is that right?
From the documentation:
When AVCodecContext.refcounted_frames is set to 1, the frame is reference counted and the returned reference belongs to the caller. The caller must release the frame using av_frame_unref() when the frame is no longer needed.
and:
When AVCodecContext.refcounted_frames is set to 0, the returned reference belongs to the decoder and is valid only until the next call to this function or until closing or flushing the decoder.
Obviously, it follows from this that you need to set AVCodecContext.refcounted_frames to 1. The default is 0, so my gut feeling is that setting it to 1 will fix your problem. Don't forget to call av_frame_unref() on the pictures after use to prevent memory leaks, and also don't forget to free your AVFrame in this loop if got_frame = 0, again to prevent memory leaks:
while (ret) {
    AVFrame* frame = av_frame_alloc();
    ret = readFrame(inputfile, frame, &got_frame);
    if (got_frame)
        queue.push(frame);
    else
        av_frame_free(&frame);
}
(Or, alternatively, you could implement a small cache for frame so you only reallocate it if the previous object was pushed into the queue.)
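A sketch of that caching alternative, with stand-ins for the ffmpeg calls (frameAlloc/frameFree mimic av_frame_alloc/av_frame_free, and the got_frame flag is simulated): a fresh allocation happens only after the previous frame was handed off to the queue, so failed reads reuse the same object.

```cpp
#include <queue>

struct Frame { /* stand-in for AVFrame */ };

static int liveAllocations = 0;  // tracks outstanding allocations in this sketch
Frame* frameAlloc() { ++liveAllocations; return new Frame; }            // mimics av_frame_alloc()
void frameFree(Frame*& f) { delete f; f = nullptr; --liveAllocations; } // mimics av_frame_free(&f)

// Read `reads` times; the "decoder" produces a frame on the first `decoded`
// reads. A new Frame is allocated only after the previous one was queued.
void readAll(std::queue<Frame*>& q, int reads, int decoded)
{
    Frame* frame = frameAlloc();
    for (int i = 0; i < reads; ++i) {
        bool got_frame = (i < decoded);  // simulates the decoder's got_frame flag
        if (got_frame) {
            q.push(frame);               // ownership moves to the queue
            frame = frameAlloc();
        }
        // on got_frame == false, the same allocation is reused next iteration
    }
    frameFree(frame);                    // release the spare, never-queued allocation
}
```

This avoids both the leak on got_frame == 0 and the cost of one allocation per read when most reads produce no frame.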
There's nothing obviously wrong with your pseudocode. The problem almost certainly lies in how you lock the queue between threads.
Your memory allocation looks fine to me. Do you maybe do something else between reading and writing the frames?
Is queue the same queue in the routines that read and write the frames?
I have an application which connects to an RTSP camera and processes some of the frames of video. Depending on the camera resolution and frame rate, I don't need to process all the frames and sometimes my processing takes a while. I've designed things so that when the frame is read, its passed off to a work queue for another thread to deal with. However, depending on system load/resolution/frame rate/network/file system/etc, I occasionally will find cases where the program doesn't keep up with the camera.
I've found with ffmpeg (I'm using the latest git drop from mid October and running on Windows) that being a couple of seconds behind is fine: you keep getting the next frame, then the next, and so on. However, once you get, say, 15-20 seconds behind, the frames you get from ffmpeg occasionally have corruption. That is, what is returned as the next frame often has graphical glitches (streaking at the bottom of the frame, etc.).
What I'd like to do is put in a check, somehow, to detect if I'm more than X frames behind the live stream and, if so, flush the cached frames and start fetching the latest/current frames.
My current snippet of my frame buffer reading thread (C++) :
while (runThread)
{
    av_init_packet(&newPacket);
    int errorCheck = av_read_frame(context, &newPacket);
    if (errorCheck < 0)
    {
        // error
    }
    else
    {
        int frameFinished = 0;
        int decodeCode = avcodec_decode_video2(ccontext, actualFrame, &frameFinished, &newPacket);
        if (decodeCode < 0)
        {
            // error
        }
        else if (decodeCode == 0)
        {
            // no frame could be decompressed / decoded / etc
        }
        else if ((decodeCode > 0) && frameFinished)
        {
            // do my processing / copy the frame off for later processing / etc
        }
        else
        {
            // decoded some data, but the frame was not finished...
            // Save data and reconstitute the pieces somehow??
            // Given that we free the packet, I doubt there is any way to use this partial information
        }
        av_free_packet(&newPacket);
    }
}
I've googled and looked through the ffmpeg documentation for some function I can call to flush things and let me catch up, but I can't seem to find anything. The same sort of solution would be needed if you wanted to monitor a video source only occasionally (e.g., if you only wanted to grab one frame per second or per minute). The only thing I could come up with is disconnecting from the camera and reconnecting; however, I would still need a way to detect whether the frames I am receiving are old.
Ideally, I'd be able to do something like this :
while (runThread)
{
    av_init_packet(&newPacket);
    // Not a real function, but I'd like to do something like this
    if (av_check_frame_buffer_size(context) > 30_frames)
    {
        // flush the frame buffer
        av_frame_buffer_flush(context);
    }
    int errorCheck = av_read_frame(context, &newPacket);
    ...
}
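One application-level approach (independent of any ffmpeg API) is to bound your own decoded-frame queue and drop the oldest entries when the consumer falls behind, so processing always resumes near the live edge. A hypothetical sketch; `Frame` and `pushBounded` are made-up names:

```cpp
#include <cstddef>
#include <deque>

struct Frame { long pts; /* plus pixel data in real code */ };

// Keep at most maxDepth frames buffered. When the reader outpaces the
// consumer, discard the oldest frames so the next one processed is recent.
void pushBounded(std::deque<Frame>& buf, const Frame& f, std::size_t maxDepth)
{
    buf.push_back(f);
    while (buf.size() > maxDepth)
        buf.pop_front(); // drop stale frames
}
```

Comparing the depth of this queue against the camera's frame rate also gives you the "am I more than X frames behind" check, without having to flush ffmpeg's internals.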
I'm facing a frustrating issue here with multithreading and OpenCV.
I want to read two different video files (video.mp4 and video-copy.mp4). I want to place frames of one video on one corner(a background jpg image) of a canvas and the frames of the second video on another corner on the same canvas.
Below is the code I used, hoping to accomplish this goal. It compiles fine but runs unpredictably:
sometimes it shows both frames on the canvas but then freezes, or frames from only one video are shown and the other video can't be seen;
other times it runs correctly but still prints the runtime errors in red in my terminal (I included the runtime errors at the bottom of this cpp file; also, I'm running this on Ubuntu).
I'd like to thank you all ahead of time for your help. Thanks again, everyone.
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <string>
#include <thread>
#include <mutex>
#include <chrono>

using namespace std;
using namespace cv;

// The main canvas (a background image)
Mat canvas = imread("canvas.jpg", CV_LOAD_IMAGE_COLOR);
// The mutex guarding access to the canvas variable
mutex mu;

// Plays a video in a predefined ROI on the canvas.
// This function could be called n number of times as its own thread.
void play(string n, Point p)
{
    VideoCapture cap(n);                // open a local video file
    int fps = cap.get(CV_CAP_PROP_FPS); // get its frame rate (frames per second)
    Mat temp;                           // temporary holder for each frame from the stream
    while (1)
    {
        unique_lock<mutex> locker(mu);  // lock the mutex (the shared memory is the canvas matrix)
        cap >> temp;                    // get a frame from the stream into the temporary buffer
        if (temp.empty())               // if the stream is empty, get out of this loop
        {
            locker.unlock();            // unlock the mutex first before exiting this loop
            break;                      // exit the loop now
        }
        // syntax: src_image.copyTo(dst_image(Rect(Point(x,y), size(src_image))))
        // dst_image is the object shared between the threads.
        temp.copyTo(canvas(Rect(p.x, p.y, temp.cols, temp.rows))); // paste a frame onto the canvas at (x,y), sized to the frame
        locker.unlock();                // done interacting with the shared variable, so unlock the mutex
        this_thread::sleep_for(chrono::milliseconds(10)); // wait briefly before looping again
    }
}

int main()
{
    char k; // for breaking out of the main loop (the display-image loop)
    namedWindow("MainWindow", WINDOW_AUTOSIZE); // create the main window
    // Create and start the first thread; pass a filename and the coordinates at which to place the frames on the canvas
    thread P1(play, "video-copy.mp4", Point(0, 0));
    // Create and start the second thread; both threads can access the canvas variable (one at a time, of course)
    thread P2(play, "video.mp4", Point(200, 300));
    while (1) // keep displaying the canvas as it is changing
    {
        unique_lock<mutex> locker(mu); // lock the mutex again, because we are using the canvas (shared object) again
        imshow("MainWindow", canvas);  // show the image
        locker.unlock();               // unlock the mutex again
        k = waitKey(10);               // flush the buffer and wait for 10 milliseconds
        if (k == 27)                   // exit the display-image loop
            break;
    }
    // Join the child threads back with the parent thread before exiting.
    P1.join();
    P2.join();
    destroyWindow("MainWindow"); // destroy the main window
    return 0;
}
/* The errors I always get, but I can't find what is causing them:
(MainWindow:3157): GLib-GObject-CRITICAL **: g_object_remove_weak_pointer: assertion `G_IS_OBJECT (object)' failed
[NULL # 0xb1b06ca0] insufficient thread locking around avcodec_open/close()
[NULL # 0xb1b07aa0] insufficient thread locking around avcodec_open/close()
[NULL # 0xb1b06ca0] insufficient thread locking around avcodec_open/close()
[NULL # 0xb1b07aa0] insufficient thread locking around avcodec_open/close()
[IMGUTILS # 0xb28fde64] Picture size 0x0 is invalid
[IMGUTILS # 0xb28fde94] Picture size 0x0 is invalid
[swscaler # 0xad180300] bad dst image pointers
*/