I have an OpenCV program that works like this:
VideoCapture cap(0);
Mat frame;
while (true) {
    cap >> frame;
    myprocess(frame);
}
The problem is that if myprocess takes longer than the camera's I/O interval, the captured frames get delayed and are no longer synchronized with real time.
So I think that to solve this, the camera streaming and myprocess should run in parallel: one thread does the I/O, another does the CPU computation. When the camera finishes a capture, it sends the frame to the worker thread for processing.
Is this idea right? Any better strategy to solve this problem?
Demo:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <mutex>
#include <thread>

int main(int argc, char *argv[])
{
    cv::Mat buffer;
    cv::VideoCapture cap;
    std::mutex mutex;
    cap.open(0);

    // Producer: keep capturing the newest image.
    std::thread product([](cv::Mat& buffer, cv::VideoCapture cap, std::mutex& mutex) {
        while (true) {
            cv::Mat tmp;
            cap >> tmp;
            mutex.lock();
            buffer = tmp.clone(); // copy the value
            mutex.unlock();
        }
    }, std::ref(buffer), cap, std::ref(mutex));
    product.detach();

    // Consumer: process in the main thread.
    while (cv::waitKey(20)) {
        mutex.lock();
        cv::Mat tmp = buffer.clone(); // copy the value
        mutex.unlock();
        if (!tmp.data)
            std::cout << "null" << std::endl;
        else {
            std::cout << "not null" << std::endl;
            cv::imshow("test", tmp);
        }
    }
    return 0;
}
Or use a thread that keeps clearing the buffer:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <mutex>
#include <thread>

int main(int argc, char *argv[])
{
    cv::Mat buffer;
    cv::VideoCapture cap;
    std::mutex mutex;
    cap.open(0);

    // Producer: keep grabbing so the driver's buffer stays fresh.
    std::thread product([](cv::Mat& buffer, cv::VideoCapture cap, std::mutex& mutex) {
        while (true) {
            cap.grab();
        }
    }, std::ref(buffer), cap, std::ref(mutex));
    product.detach();

    int i = 0;
    // Consumer: process in the main thread.
    while (true) {
        cv::Mat tmp;
        cap.retrieve(tmp);
        if (!tmp.data)
            std::cout << "null" << i++ << std::endl;
        else {
            cv::imshow("test", tmp);
        }
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}
I thought the second demo should work based on https://docs.opencv.org/3.0-beta/modules/videoio/doc/reading_and_writing_video.html#videocapture-grab, but it doesn't...
In a project with multitarget tracking I used two buffers for the frame (cv::Mat frames[2]) and two threads:
One thread captures the next frame and detects objects.
A second thread tracks the detected objects and draws the result on the frame.
I used an index in [0,1] for the buffer swap, and this index was protected with a mutex. Two condition variables were used to signal the end of each thread's work.
First, CaptureAndDetect works with the frames[capture_ind] buffer while Tracking works with the previous frames[1 - capture_ind] buffer. Next step: switch the buffers with capture_ind = 1 - capture_ind.
You can see this project here: Multitarget-tracker.
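Here is a minimal sketch of that buffer-swap rendezvous, not the actual Multitarget-tracker code: the original uses two condition variables, while here a single condition variable with a generation counter plays the same role, and the detection/tracking work is left as comments:

#include <opencv2/opencv.hpp>
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <thread>

int main()
{
    cv::Mat frames[2];
    int capture_ind = 0;            // the producer writes frames[capture_ind]
    std::mutex mtx;
    std::condition_variable sync_cv;
    int arrived = 0;
    long generation = 0;
    std::atomic<bool> running(true);

    // Two-thread rendezvous: the second thread to arrive swaps the buffers,
    // bumps the generation counter, and wakes the first thread.
    auto swap_point = [&]() {
        std::unique_lock<std::mutex> lock(mtx);
        if (++arrived == 2) {
            arrived = 0;
            capture_ind = 1 - capture_ind;
            ++generation;
            sync_cv.notify_one();
        } else {
            long gen = generation;
            sync_cv.wait(lock, [&] { return gen != generation; });
        }
    };

    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    std::thread producer([&]() {
        while (running) {
            cap >> frames[capture_ind]; // capture (and detect) into the current buffer
            swap_point();               // meet the other thread, then the buffers swap
        }
    });

    while (running) {
        // The buffer filled during the previous iteration; the producer is
        // writing the other one, so no lock is needed while we read this one.
        cv::Mat& frame = frames[1 - capture_ind];
        if (!frame.empty()) {
            // ... track objects and draw results on frame ...
            cv::imshow("tracking", frame);
            if (cv::waitKey(1) >= 0) running = false;
        }
        swap_point();
    }
    producer.join();
    return 0;
}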
I tried using imshow in OpenCV to display an image, but the image window is closed and displayed again in every loop. Is there a way to hold the display until the new imshow arrives, so the image display doesn't flicker?
for (int iframe = 0; iframe < 10; iframe++)
{
    // ..some processing code..
    cv::imshow("image", a[iframe]);
    cv::waitKey(1);
}
Check your ..some processing code..:
If you call namedWindow() at the beginning of each loop iteration and/or destroyWindow() at the end of it, then you are effectively closing your window in each iteration.
Remove these function calls unless you're absolutely sure that you need them.
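For instance, a sketch of the intended pattern, reusing the variables from the snippet above: the window is created once, and the loop only updates its contents:

cv::namedWindow("image"); // create the window once, before the loop
for (int iframe = 0; iframe < 10; iframe++)
{
    // ..some processing code..
    cv::imshow("image", a[iframe]); // updates the same window in place
    cv::waitKey(1);
    // no destroyWindow() here; destroy it (if at all) after the loop
}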
The two threads don't share any parameters except the display window. Here is my code. I didn't include the processing part since it's quite long.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <thread>

void task1(std::string msg);

int main(int argc, char *argv[])
{
    // Some processing //
    // Create the thread
    cv::namedWindow("image2D");
    std::thread t1(task1, "Start");
    t1.join();
}

void task1(std::string msg)
{
    std::cout << "Task 1: " << msg << std::endl;
    // Some processing to compute img1 (zw, height, and width come from the omitted part)
    // Display img1
    float min = -50;
    float max = 100;
    cv::Mat adjMap = cv::Mat::zeros(height, width, CV_8UC1);
    cv::Mat img1;
    float scale = 255.0 / (max - min);
    zw.convertTo(adjMap, CV_8UC1, scale);
    applyColorMap(adjMap, img1, cv::COLORMAP_JET);
    cv::imshow("image2D", img1); // The code hangs here
    cv::waitKey(1);
}
I'm using OpenCV 3.1 and I'm trying to run a simple piece of code like the following (the main function):
cv::VideoCapture cam;
cv::Mat matTestingNumbers;
cam.open(0);
if (!cam.isOpened()) { printf("--(!)Error opening video capture\n"); return -1; }
while (cam.read(matTestingNumbers))
{
    cv::imshow("matTestingNumbers", matTestingNumbers);
    cv::waitKey(5000);
}
When I move the camera, it seems that the code does not capture and show the current frame, but instead shows all the frames captured at the previous position and only then the ones from the new position.
So when I point the camera at the wall it shows the correct frames (the wall itself) with the correct delay, but when I turn the camera toward my computer, I first see about 3 frames of the wall and only then the computer; the frames seem to be stuck.
I've tried using the videoCapture.set() functions to set the FPS to 1, and I tried switching the capture method to cam >> matTestingNumbers (adjusting the rest of the main function accordingly), but nothing helped; I still got "stuck" frames.
BTW, these are the solutions I found on the web.
What can I do to fix this problem?
Thank you, Dan.
EDIT:
I tried to retrieve frames as follows:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat frame;
namedWindow("edges",1);
for(;;)
{
cap.grab();
if (waitKey(11) >= 0)
{
cap.retrieve(frame);
imshow("edges", frame);
}
}
return 0;
}
But it gave the same result: when I pointed the camera at one spot and pressed a key, it showed one more of the frames previously captured at the other spot.
It is just like trying to photograph one person and then another, but when you photograph the second you get the photo of the first person, which doesn't make sense.
Then, I tried the following:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat frame;
namedWindow("edges",1);
for(;;)
{
cap >> frame;
if (waitKey(33) >= 0)
imshow("edges", frame);
}
return 0;
}
And it worked as expected.
One of the problems is that you are not calling cv::waitKey(X) to properly hold the window for X milliseconds. Get rid of usleep()!
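For example, a minimal display loop in the style of the question's code (the 30 ms here is an arbitrary choice): waitKey both inserts the delay and services the window's event queue, which usleep() cannot do:

cv::VideoCapture cam(0);
cv::Mat frame;
while (cam.read(frame)) {
    cv::imshow("matTestingNumbers", frame);
    if (cv::waitKey(30) >= 0) break; // waits ~30 ms and lets HighGUI process window events
}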
I have an OpenCV program where the image processing is sensitive to having a stable and relatively high framerate in the video capture. Unfortunately, all the image processing I do slows the framerate down significantly, resulting in erroneous behavior in my program. I believe that putting the camera on a separate thread and having the image processing happen on its own thread would improve framerate, but I am not sure how to do this. Can anyone guide me through the process?
UPDATE: After doing some research on threads, I managed to implement threading up to the point where the final post-processed video feed is displayed. However, somehow my implementation of it is causing the image processing methods to fail (e.g. before, I could successfully track a moving object; now it is hit-or-miss whether it is tracked). I suspect this has something to do with the image processing algorithms not being able to process each frame fast enough as new frames are read in. How can I improve this implementation so that my processing methods work as they did without multithreading?
#include <opencv2/opencv.hpp>
#include <iostream>
#include <thread>

using namespace cv;
using namespace std;

Mat CAMERAFRAME; // global frame shared between the two threads

void CaptureFrames() {
    VideoCapture capture(0);
    if (!capture.isOpened()) {
        cout << "cannot open camera" << endl;
    }
    while (true) {
        capture.read(CAMERAFRAME);
        if (waitKey(30) >= 0) { break; }
    }
}

void ProcessFrames() {
    while (true) {
        Mat hsvFrame;
        Mat binMat;
        Mat grayFrame;
        Mat grayBinMat;
        if (!CAMERAFRAME.empty()) {
            // do image processing functions (these worked before implementing
            // threads and are not causing errors)
            imshow("gray binary", grayBinMat);
            imshow("multithread test", CAMERAFRAME);
        }
        if (waitKey(30) >= 0) { break; }
    }
}

int main(int argc, char* argv[]) {
    thread t1(CaptureFrames);
    thread t2(ProcessFrames);
    while (true) {
        if (waitKey(30) >= 0) { break; }
    }
    return 0;
}
Try the older version again, but remove this last line from the ProcessFrames function:
if (waitKey(30) >= 0) { break; }
When showing images, don't make the thread wait another 30 milliseconds; the while loop will be enough.
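For reference, a sketch of ProcessFrames with that change applied (only the final waitKey line is dropped; everything else is as in the question, and whether HighGUI tolerates its calls being spread across threads like this remains platform-dependent):

void ProcessFrames() {
    while (true) {
        if (!CAMERAFRAME.empty()) {
            // ... image processing functions ...
            imshow("gray binary", grayBinMat);
            imshow("multithread test", CAMERAFRAME);
        }
        // waitKey(30) removed as suggested; CaptureFrames and main still call it
    }
}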
I am working on a project in which I have to detect the motion of a human. First I wrote a program for motion detection and got it working properly. Then I moved on to human detection using HOGDescriptor and combined both programs to speed up the process. First I monitor for motion, and if any motion is detected I crop the image to the rectangular box denoting the motion and send only the cropped part to the human detection function so that it can be processed quickly.
But a problem arises: sometimes I get good results, and sometimes I get a pop-up window saying there is an unhandled exception at some memory location in the .exe file.
My program is:
#include <iostream>
#include <ctime>
#include <stdlib.h>
#include <vector>
#include <string>
#include <sstream>
#include "opencv2/opencv.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"

using namespace std;
using namespace cv;
Mat detect1(int, VideoCapture, VideoWriter);
vector<Rect> found;
int humandet(Mat, Rect);
BackgroundSubtractorMOG2 bg[5];

int _tmain(int argc, _TCHAR* argv[])
{
    Mat frame[5];
    string win[5] = {"Video 0", "Video 1", "Video 2", "Video 3"};
    string ip, user, pass;
    stringstream ss;
    string vid[5] = {"D:/Recorded.avi", "D:/Recorded1.avi", "D:/Recorded2.avi", "D:/Recorded3.avi"};
    VideoWriter vidarr[5];
    VideoCapture cap[5];
    int n, type, j;
    cout << "Enter the no of cameras";
    cin >> n;
    for (int i = 0, j = 0; i < n; i++)
    {
        cout << "Enter the camera type\n1.IP camera\n2.Non IP camera";
        cin >> type;
        if (type == 2)
        {
            VideoCapture cap1(j++);
            cap[i] = cap1;
            cap[i].set(CV_CAP_PROP_FRAME_WIDTH, 320);
            cap[i].set(CV_CAP_PROP_FRAME_HEIGHT, 240);
            cap[i].set(CV_CAP_PROP_FPS, 2);
        }
        else
        {
            cout << "Enter the IP add:portno, username and password";
            cin >> ip >> user >> pass;
            ss << "http://" << user << ":" << pass << "#" << ip << "/axis-cgi/mjpg/video.cgi?.mjpg";
            string s(ss.str());
            VideoCapture cap2(s);
            cap[i] = cap2;
            cap[i].set(CV_CAP_PROP_FRAME_WIDTH, 320);
            cap[i].set(CV_CAP_PROP_FRAME_HEIGHT, 240);
            cap[i].set(CV_CAP_PROP_FPS, 2);
        }
        VideoWriter video(vid[i], CV_FOURCC('D','I','V','X'), 2, Size(320, 240));
        vidarr[i] = video;
    }
    while (true)
    {
        for (int i = 0; i < n; i++)
        {
            frame[i] = detect1(i, cap[i], vidarr[i]);
            imshow(win[i], frame[i]);
        }
        if (waitKey(30) == 27)
            break;
    }
    return 0;
}
Mat detect1(int j, VideoCapture cap, VideoWriter vid)
{
    Mat frame;
    Mat diff;
    cap >> frame;
    double large_area = 0;
    int large = 0;
    Rect bound_rect;
    bg[j].nmixtures = 3;
    bg[j].bShadowDetection = true;
    bg[j].nShadowDetection = 0;
    bg[j].fTau = 0.5;
    bg[j].operator()(frame, diff);
    vector<vector<Point>> contour;
    findContours(diff, contour, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    for (unsigned int i = 0; i < contour.size(); i++)
    {
        double area = contourArea(contour[i]);
        if (area > large_area)
        {
            large_area = area;
            large = i;
            bound_rect = boundingRect(contour[i]);
        }
    }
    contour.clear();
    if (large_area / 100 > 2)
    {
        humandet(frame, bound_rect);
        rectangle(frame, bound_rect, Scalar(0, 0, 255), 2);
        putText(frame, "Recording", Point(20, 20), CV_FONT_HERSHEY_PLAIN, 2, Scalar(0, 255, 0), 2);
        vid.write(frame);
        return frame;
    }
    else
        return frame;
}
int humandet(Mat frame1, Rect bound)
{
    HOGDescriptor hog;
    hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());
    if ((bound.height < 100) && (bound.width < 80))
    {
        Mat roi;
        roi.create(Size(80, 100), frame1.type());
        roi.setTo(Scalar::all(0));
        Mat fram = frame1(bound);
        fram.copyTo(roi(Rect(0, 0, (bound.height - 1), (bound.width - 1))));
        hog.detectMultiScale(roi, found, 0, Size(8, 8), Size(32, 32), 1.025);
        roi.release();
        fram.release();
    }
    else if ((bound.height < 200) && (bound.width < 160))
    {
        Mat roi;
        roi.create(Size(160, 200), frame1.type());
        roi.setTo(Scalar::all(0));
        Mat fram = frame1(bound);
        fram.copyTo(roi(Rect(1, 1, (bound.height - 1), (bound.width - 1))));
        hog.detectMultiScale(roi, found, 0, Size(8, 8), Size(32, 32), 1.025);
        roi.release();
        fram.release();
    }
    else
    {
        Mat roi;
        roi = frame1;
        hog.detectMultiScale(roi, found, 0, Size(8, 8), Size(32, 32), 1.025);
        roi.release();
    }
    for (unsigned int i = 0; i < found.size(); i++)
    {
        rectangle(frame1, found[i], Scalar(255, 0, 0), 2);
    }
    if (found.size())
    {
        frame1.release();
        found.clear();
        return 1;
    }
    else
        return 0;
}
Before I used the cropping method it was working fine, i.e., when I passed the frame to the humandet function without any changes and processed it as it is, there was no problem, but it was quite slow. So I cropped the image, made the resolution constant, and processed that. This increased the processing speed considerably, but it often throws an exception. I think the problem is with the memory allocation, but I couldn't figure it out.
Give me a solution and a method to debug the error I made.
Thanks in advance.
Call detectMultiScale in a try-catch block. This try-catch block solved my problem.
try {
    hog.detectMultiScale(roi, found, 0, Size(8, 8), Size(32, 32), 1.025);
}
catch (cv::Exception& e) {
    return false;
}
I am also trying to detect people with HOGDescriptor. When I debugged my code, I realized that this error occurs only when the cropped image is small. It was related to the training data size. Maybe this can be useful for you: HOG detector: relation between detected roi size and training sample size
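In other words, a guard like this sketch (assuming the size mismatch is the one the linked post describes) avoids calling the detector on an ROI smaller than the training window:

// The default people detector was trained on 64x128 windows, which
// HOGDescriptor exposes as winSize; detectMultiScale cannot scan an
// image smaller than a single detection window.
HOGDescriptor hog;
hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());
if (roi.cols >= hog.winSize.width && roi.rows >= hog.winSize.height)
    hog.detectMultiScale(roi, found, 0, Size(8, 8), Size(32, 32), 1.025);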
The ideal way to start debugging is to catch the exception and print the stack trace.
Please refer to this post on how to generate the stack trace: How to generate a stacktrace when my gcc C++ app crashes
This will pinpoint the exact position from which the exception is generated.
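Even before setting up a full stack trace, wrapping the suspect call and printing the OpenCV exception message already names the failed check together with its source file and line (a minimal sketch around the detectMultiScale call):

try {
    hog.detectMultiScale(roi, found, 0, Size(8, 8), Size(32, 32), 1.025);
}
catch (const cv::Exception& e) {
    std::cerr << e.what() << std::endl; // prints the failed check with its file and line
}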