How to solve image processing camera IO delay with OpenCV - C++

I have an OpenCV program that works like this:
VideoCapture cap(0);
Mat frame;
while (true) {
    cap >> frame;
    myprocess(frame);
}
The problem is that if myprocess takes longer than the camera's IO interval, the captured frames fall behind and are no longer synchronized with real time.
So I think that to solve this problem, the camera streaming and myprocess should run in parallel: one thread does the IO, another does the CPU computation. When the camera finishes a capture, it hands the frame to the worker thread for processing.
Is this idea right? Any better strategy to solve this problem?
Demo:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <mutex>
#include <thread>

int main(int argc, char *argv[])
{
    cv::Mat buffer;
    cv::VideoCapture cap;
    std::mutex mutex;

    cap.open(0);
    std::thread product([](cv::Mat& buffer, cv::VideoCapture cap, std::mutex& mutex) {
        while (true) { // keep producing new images
            cv::Mat tmp;
            cap >> tmp;
            mutex.lock();
            buffer = tmp.clone(); // copy the value
            mutex.unlock();
        }
    }, std::ref(buffer), cap, std::ref(mutex));
    product.detach();

    while (cv::waitKey(20) < 0) { // process in the main thread; exit on any key
        mutex.lock();
        cv::Mat tmp = buffer.clone(); // copy the value
        mutex.unlock();
        if (!tmp.data)
            std::cout << "null" << std::endl;
        else {
            std::cout << "not null" << std::endl;
            cv::imshow("test", tmp);
        }
    }
    return 0;
}
Or use a thread that keeps draining the camera's buffer:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <thread>

int main(int argc, char *argv[])
{
    cv::VideoCapture cap;
    cap.open(0);

    std::thread product([](cv::VideoCapture cap) {
        while (true) { // keep grabbing to drain the camera's buffer
            cap.grab();
        }
    }, cap);
    product.detach();

    int i = 0;
    while (true) { // process in the main thread
        cv::Mat tmp;
        cap.retrieve(tmp);
        if (!tmp.data)
            std::cout << "null" << i++ << std::endl;
        else {
            cv::imshow("test", tmp);
        }
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}
Based on https://docs.opencv.org/3.0-beta/modules/videoio/doc/reading_and_writing_video.html#videocapture-grab, I thought the second demo should work, but it doesn't...

In a project with multi-target tracking I used two buffers for the frame (cv::Mat frames[2]) and two threads:
One thread captures the next frame and detects objects.
A second thread tracks the detected objects and draws the result on the frame.
I used an index in [0, 1] for the buffer swap, and this index was protected with a mutex. Two condition variables were used for signalling the end of each step.
First, CaptureAndDetect works with the frames[capture_ind] buffer while Tracking works with the previous frames[1 - capture_ind] buffer. Next step: switch the buffers with capture_ind = 1 - capture_ind.
You can see this project here: Multitarget-tracker.

Related

Capturing camera frames once a while

I have a system that typically runs at a scan rate of 100 Hz (10 ms) and performs time-critical tasks. I'm trying to add a camera with OpenCV to capture an image for quality control once in a while (it depends on when a user interacts with the system, so it can be anywhere from 10-second pauses to minutes).
Here is what my code is doing:
int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;
    UMat frame;
    for (;;) {
        if (timing_variable_100Hz) {
            cap >> frame; // get a new frame from camera
            // *Do something time critical*
            if (some_criteria_is_met) {
                if (!frame.empty()) imwrite("Image.jpg", frame);
            }
        }
    }
    return 0;
}
Now the issue I'm having is that cap >> frame takes a lot of time.
My scan time regularly runs around 3 ms, and now it's at 40 ms. My question is: is there any way to open the camera and capture once, without having to capture every frame afterwards until I need to? I tried moving cap >> frame inside if(some_criteria_is_met), which let me capture the first image correctly, but the second image, taken a few minutes later, was the single frame right after the first captured one (I hope that makes sense).
Thanks
The problem is that your camera has a framerate of less than 100 fps, probably 30 fps (according to the 32 ms you measured), so grab waits for a new frame to become available.
Since there is no way to do a non-blocking read in OpenCV, I think your best option is to do the video grabbing in another thread.
Something like this, if you use C++11 (this is an example, not necessarily entirely correct):
void camera_loop(std::atomic<bool> &capture, std::atomic<bool> &stop)
{
    VideoCapture cap(0);
    Mat frame;
    while (!stop)
    {
        cap.grab();
        if (capture)
        {
            cap.retrieve(frame);
            // do whatever you do with the frame
            capture = false;
        }
    }
}

int main()
{
    // note: atomics cannot be copy-initialized with "=" before C++17
    std::atomic<bool> capture(false), stop(false);
    std::thread camera_thread(camera_loop, std::ref(capture), std::ref(stop));
    for (;;)
    {
        // do something time critical
        if (some_criteria_is_met)
        {
            capture = true;
        }
    }
    stop = true;
    camera_thread.join();
}
It doesn't answer your question of "is there any way to open the camera, capture, then not have to capture every frame until I have to?", but here is a suggestion:
You could have the cap >> frame in a background thread which is responsible only for capturing the frames.
Once a frame is in memory, push it to some sort of shared cyclic queue to be accessed from the main thread.

OpenCV imshow in Boost Threads

Below is the code for a tracking module. A detector is launched, and when it detects the object of interest, it creates a tracker object that tracks the object using CamShift over a number of frames until the detector finds another object. The code seems to work fine when I comment out imshow and waitKey in the tracker, but when I call those functions, I get the following error when a new tracker (apart from the first one) is created:
QMetaMethod::invoke: Unable to invoke methods with return values in queued connections
QObject::startTimer: QTimer can only be used with threads started with QThread
I don't know if this is something happening within Boost or OpenCV. Any ideas how I can fix this problem? Is there a better way to show the user the position of the object of interest? Thanks a lot!
main:
#define FACE_DETECT_RATE 10

Tracker *currentTracker;
boost::mutex displayedImageMutex;
Mat imageToShow; // image displayed by the main thread

void updateDisplayedImage(Mat image){
    if (displayedImageMutex.try_lock()){
        image.copyTo(imageToShow);
        displayedImageMutex.unlock();
    }
}

Rect detectorCallback(Rect faceRect){
    // the detector object returns the face it found here
    // a tracker is supposed to track the face in the next set of frames
    Tracker tracker(faceRect);
    tracker.start(&updateDisplayedImage);
    currentTracker = &tracker;
    currentTracker->join();
}

int main(int argc, char **argv){
    VideoCapture capture(0);
    currentTracker = NULL;
    int frameCount = 0;
    while (true){
        Mat frame;
        capture >> frame;
        if (frameCount % FACE_DETECT_RATE == 0){
            Detector detector;
            detector.start(frame, &detectorCallback);
        }
        frameCount++;
        if (displayedImageMutex.try_lock()){
            if (imageToShow.data){
                imshow("Results", imageToShow);
                waitKey(1);
            }
            displayedImageMutex.unlock();
        }
    }
}
Tracker:
class Tracker{
private:
    boost::thread *trackerThread;

    void run(void (*imageUpdateFunc)(Mat)){
        while (true){
            try{
                ...
                // do camshift and stuff
                // show results
                //imshow("Stream", imageWithFace);
                //waitKey(10);
                imageUpdateFunc(imageWithFace);
                ...
                boost::this_thread::interruption_point();
            } catch (const boost::thread_interrupted&){
                break;
            }
        }
    }
public:
    Tracker(Rect faceRect){
        ...
    }
    void start(void (*imageUpdateFunc)(Mat)){
        trackerThread = new boost::thread(&Tracker::run, this, imageUpdateFunc);
    }
    void stop(){
        trackerThread->interrupt();
    }
    void join(){
        trackerThread->join();
    }
};

OpenCV: processing multiple images in a C++ vector using pthreads

I have a large number of images in a folder that I need to perform various processing operations on. Here is what I am trying to do:
1) Read the images from the folder and put them in a C++ vector named imageQueue (a mutable array)
2) Create a number of threads
3) Each thread grabs an image from imageQueue, and then erases that image from the vector
4) Each thread then goes ahead and processes that image
5) When finished processing, each thread grabs the next image from the vector
6) This entire process runs until there are no more images in imageQueue, at which point the program ends. (Currently I have 4 photos in the folder that I am using for tests, which is why my loops run from i = 0 to i < 4. When I complete this, I will have many more photos.)
I have named each of the images in the folder 00.jpg, 01.jpg, 02.jpg...
For testing purposes, right now I am simply trying to have each thread display the image it grabbed. However, when I run this, only purely white windows pop up instead of the actual images. Any help on why this is happening and how to correct it?
Thanks!
Here is my code:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <pthread.h>
#include <unistd.h> // for sleep()
#include <vector>

#define NUM_THREADS 2

using namespace std;
using namespace cv;

/* Function Declarations */
void* startProcessing(void* args);

/* Global Variables */
vector<Mat> imageQueue;
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

int main(int argc, char** argv)
{
    VideoCapture cap;
    cap.open("/Users/nlauer/Documents/ImageSequence/%02d.jpg");
    for(int i = 0; i < 4; i++)
    {
        Mat frame;
        cap >> frame;
        imageQueue.push_back(frame);
    }

    /* Create the threads */
    pthread_t tids[NUM_THREADS];
    for(int i = 0; i < NUM_THREADS; i++)
    {
        pthread_create(&tids[i], NULL, startProcessing, NULL);
    }

    /* Reap the threads */
    for(int i = 0; i < NUM_THREADS; i++)
    {
        pthread_join(tids[i], NULL);
    }
    imageQueue.clear();
    return 0;
}

void* startProcessing(void* args)
{
    /* Each thread grabs an image from imageQueue, removes it from the
       queue, and then processes it. The grabbing and removing are done
       under a lock */
    Mat image;
    Mat emptyImage;

    /* Get the first image for each thread */
    pthread_mutex_lock(&mutex);
    if(!imageQueue.empty()) {
        image = imageQueue[0];
        imageQueue.erase(imageQueue.begin());
    }
    pthread_mutex_unlock(&mutex);

    while(!image.empty())
    {
        /* Process the image - right now I just want to display it */
        namedWindow("window", CV_WINDOW_AUTOSIZE);
        imshow("window", image);
        sleep(10);

        /* Obtain the next image in the queue */
        pthread_mutex_lock(&mutex);
        if(!imageQueue.empty()) {
            image = imageQueue[0];
            imageQueue.erase(imageQueue.begin());
        } else {
            image = emptyImage;
        }
        pthread_mutex_unlock(&mutex);
    }
    return NULL;
}
What you are trying to achieve in items 3)-5) is exactly what Intel's library TBB is designed for. Take a look at tbb::parallel_for.
All you need is a class with an operator()(Mat) that processes a single image; the TBB library will take care of the thread handling.
You can go even further if you use tbb::concurrent_queue instead of your vector to hold the images. Then processing can start even before reading has finished. You might use tbb::parallel_do instead of tbb::parallel_for in that approach.
The issue has been resolved - I simply went ahead and continued with the processing as is. Something strange was happening when I tried to show the image in each thread, but as long as you let each thread finish its routine and then show the images afterwards, everything works fine.

capture frames after every 2 seconds

I am doing a project on face detection from video. I detect faces from the video, but it captures every frame, so I am getting many images within a single second (many frames get captured per second).
Problem: I want to reduce that; capturing a frame every 3 seconds is enough. I tried to use the wait() and sleep() functions, but they just pause the video for some time; nothing else happens. Can anyone help me overcome this?
#include <cv.h>
#include <highgui.h>
#include <time.h>
#include <stdio.h>

using namespace std;

IplImage *frame;
int frames;

void facedetect(IplImage* image);
void saveImage(IplImage *img, char *ex);
IplImage* resizeImage(const IplImage *origImg, int newWidth, int newHeight, bool keepAspectRatio);

const char* cascade_name = "haarcascade_frontalface_default.xml"; // specify classifier cascade
int k;

int main(int argc, char** argv)
{
    // OpenCV capture object to grab frames
    //CvCapture *capture = cvCaptureFromCAM(0);
    CvCapture *capture = cvCaptureFromFile("video.flv");
    //int frames=cvSetCaptureProperty(capture,CV_CAP_PROP_FPS, 0.5);
    //double res1=cvGetCaptureProperty(capture,CV_CAP_PROP_POS_FRAMES);
    //cout<<"res"<<res<<endl;

    // start and end times
    time_t start, end;
    // fps calculated using number of frames / seconds
    double fps;
    // frame counter
    int counter = 0;

    // start the clock
    time(&start);

    //while(cvGrabFrame(capture))
    while(1)
    {
        //if(capture.get(CV_CAP_PROP_POS_FRAMES) % 2 == 0)
        frames = cvSetCaptureProperty(capture, CV_CAP_PROP_FPS, 0.5);
        if(frames % 2 == 0)
        {
            frame = cvQueryFrame(capture);
            cout << "Frame" << frame << endl;
            facedetect(frame);
        }
    }
    cvReleaseCapture(&capture);
    return 0;
}
I gave cvWaitKey(2000) after every frame is captured.
This would have been my attempt. It saves one image per 30 frames. When you say too many images in one second, I understand that you are referring to the saved faces.
int counter = 0;
// start the clock
time(&start);
//while(cvGrabFrame(capture))
while(1)
{
    frame = cvQueryFrame(capture);
    cout << "Frame" << frame << endl;
    if(counter % 30 == 0)
    {
        facedetect(frame);
    }
    counter++;
}
If you really meant skipping frames, then try this; one frame per second might be the outcome of the code below.
while(1)
{
    if(counter % 30 == 0)
    {
        frame = cvQueryFrame(capture);
        cout << "Frame" << frame << endl;
        facedetect(frame);
    }
    counter++;
}
You can try to call waitKey(2000) after each capturing.
Note that the function will not wait exactly 2000ms, it will wait at least 2000ms, depending on what else is running on your computer at that time.
To achieve accurate frame rate, you can set the frame rate of capturing by:
cap.set(CV_CAP_PROP_FPS, 0.5);
Personally, I would recommend using a modulo operator on the current frame index; e.g. % 2 == 0 checks for every second frame:
if(capture.get(CV_CAP_PROP_POS_FRAMES) % 2 == 0)
    //your code to save
By changing 2 to 3 or 5 you can define the offset.

OpenCV Running Video from WebCam on different thread

I have 2 webcams and I want to get input from both of them at the same time. Therefore I believe I have to work with threads in C++, i.e. pthreads. When I run my code given below, the webcam turns on for a second and the routine exits. I can't figure out what is wrong in my code.
void *WebCam(void *arg){
    VideoCapture cap(0);
    for (;;) {
        Mat frame;
        cap >> frame;
        resize(frame, frame, Size(640, 480));
        flip(frame, frame, 1);
        imshow("frame", frame);
        if(waitKey(30) >= 0)
            break;
    }
    pthread_exit(NULL);
}
int main(){
    pthread_t thread1, thread2;
    pthread_create(&thread1, NULL, &WebCam, NULL);
    return 0;
}
This is done for one webcam, just to turn it on and stream. Once this one works, the other will just be a copy of it.
When you create the thread it starts running, but your main program, which is still running, terminates immediately, taking the child thread down with it. Try adding this after pthread_create:
pthread_join(thread1, NULL);
By the way, even if you have two cameras, you can avoid the use of threads. I am not sure, but threads could be problematic when dealing with the highgui functions (imshow, waitKey), because you must make sure they are thread-safe. Otherwise, what would be the result of two threads calling waitKey at the same time?
You could get rid of threads with a design similar to this one:
VideoCapture cap0(0);
VideoCapture cap1(1);
for(;;)
{
    cv::Mat im[2];
    cap0 >> im[0];
    cap1 >> im[1];
    // check which of im[i] is non-empty and do something with it
}