Mutex and thread in VideoCapture - C++

I have used a worker thread to grab the latest frame in real time; the code is below. But there is a problem: the frame stays the first frame the whole time and never updates. As a result, the first frame goes through remap(), the remap result goes through remap() in the next loop, and so on. I don't know why the frame doesn't update. If I remove the remap() line or replace it with dilate(frame, frame, ...), the frame updates all the time. Also, if I copy the frame to image and run remap() on the image, the frame updates as well. Why can't the frame update in this case? Can somebody help me? Thank you.
std::mutex mtxCam;

void task(VideoCapture cap, Mat& frame) {
    while (true) {
        mtxCam.lock();
        cap >> frame;
        mtxCam.unlock();
    }
}

int main() {
    Mat frame, image;
    VideoCapture cap;
    cap.open(0);
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 1600);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 1080);
    cap >> frame;
    thread t(task, cap, frame);
    while (true) {
        initUndistortRectifyMap(
            cameraMatrix,   // computed camera matrix
            distCoeffs,     // computed distortion matrix
            Mat(),          // optional rectification (none)
            Mat(),          // camera matrix to generate undistorted
            Size(1920 * 1.3, 1080 * 1.3),
            // image.size(), // size of undistorted
            CV_32FC1,       // type of output map
            map1, map2);    // the x and y mapping functions
        mtxCam.lock();
        remap(frame, frame, map1, map2, cv::INTER_LINEAR);
        frame.copyTo(image);
        mtxCam.unlock();
        ... // image processing loop
    }
}

There are two problems here:
1) You pass a single frame, and the video capture is then mapped onto that same frame every time, without the frame ever being cleared once it has been processed.
2) You need a signalling mechanism (semaphore), not just a locking mechanism (mutex).
Something along these lines:
// Producer (capture thread):
while (true) {
    frame.release();
    cap >> frame;
    semCam.Give(); // signal: a fresh frame is ready
}

// Consumer (processing thread):
semCam.Take(); // wait until a fresh frame is ready
remap(frame, frame, map1, map2, cv::INTER_LINEAR);
frame.copyTo(image);
You are dealing with a producer-consumer problem here.
Thread 1 produces the frames, and Thread 2 consumes them for image processing.
Thread 1 inserts a frame into the queue, signals Thread 2 that a frame is ready for processing, and waits for Thread 2 to signal that the frame has been processed.
Algorithm:

Thread 1:
    FrameProcessed.Wait()
    FrameQueue.insert()
    FrameQueueReadyForProcessing.Give()

Thread 2:
    FrameQueueReadyForProcessing.Wait()
    ConsumeFrames(FrameQueue.Pop())
    FrameProcessed.Give()
Unfortunately, C++11 has no out-of-the-box semaphore implementation, but you can roll your own:
https://gist.github.com/yohhoy/2156481
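For reference, a minimal counting semaphore along those lines can be built from std::mutex and std::condition_variable. This is just a sketch, with Give/Take named to match the pseudocode above:

#include <mutex>
#include <condition_variable>

class Semaphore {
public:
    explicit Semaphore(int count = 0) : count_(count) {}

    void Give() {                       // V: release one unit
        std::lock_guard<std::mutex> lock(mtx_);
        ++count_;
        cv_.notify_one();
    }

    void Take() {                       // P: block until a unit is available
        std::unique_lock<std::mutex> lock(mtx_);
        cv_.wait(lock, [this] { return count_ > 0; });
        --count_;
    }

private:
    std::mutex mtx_;
    std::condition_variable cv_;
    int count_;
};

With two such semaphores, FrameProcessed initialized to 1 and FrameQueueReadyForProcessing to 0, the hand-off between the two threads in the algorithm above falls out directly.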

Related

Capturing camera frames once in a while

I have a system that typically runs at a scan rate of 100 Hz (10 ms) performing time-critical tasks. I'm trying to add a camera with OpenCV to capture an image once in a while for quality control; it depends on when a user interacts with the system, so the pauses can be anywhere from 10 seconds to minutes.
Here is what my code is doing:
int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;
    UMat frame;
    for (;;) {
        if (timing_variable_100Hz) {
            cap >> frame; // get a new frame from camera
            // *Do something time critical*
            if (some_criteria_is_met) {
                if (!frame.empty()) imwrite("Image.jpg", frame);
            }
        }
    }
    return 0;
}
Now the issue I'm having is that cap >> frame takes a lot of time.
My scan time regularly runs around 3 ms, and now it's at 40 ms. My question is: is there any way to open the camera and capture once, without having to capture every frame afterwards until I need one? I tried moving cap >> frame inside if (some_criteria_is_met), which let me capture the first image correctly, but the second image, taken a few minutes later, was just a single frame past the first captured image (I hope that makes sense).
Thanks
The problem is that your camera has a framerate of less than 100 fps, probably 30 fps (in line with the ~32 ms you measured), so grab waits for a new frame to become available.
Since there is no way to do a non-blocking read in OpenCV, I think your best option is to do the video grabbing in another thread.
Something like this, if you use C++11 (this is an example, and I'm not sure it is entirely correct):
void camera_loop(std::atomic<bool> &capture, std::atomic<bool> &stop)
{
    VideoCapture cap(0);
    Mat frame;
    while (!stop)
    {
        cap.grab(); // keep grabbing so queued frames never go stale
        if (capture)
        {
            cap.retrieve(frame);
            // do whatever you do with the frame
            capture = false;
        }
    }
}

int main()
{
    std::atomic<bool> capture{false}, stop{false}; // brace-init: std::atomic is not copyable
    std::thread camera_thread(camera_loop, std::ref(capture), std::ref(stop));
    for (;;)
    {
        // do something time critical
        if (some_criteria_is_met)
        {
            capture = true;
        }
    }
    stop = true;
    camera_thread.join();
}
This doesn't answer your question of "is there any way to open the camera, capture, then not have to capture every frame until I have to?", but here is a suggestion:
You could have the cap >> frame call in a background thread that is responsible only for capturing frames.
Once a frame is in memory, push it to some sort of shared cyclic queue to be accessed from the main thread, as sketched below.
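A minimal sketch of such a shared cyclic queue, guarded by a mutex (FrameRing and its fixed capacity are illustrative choices, not an OpenCV facility):

#include <mutex>
#include <vector>
#include <opencv2/opencv.hpp>

// Fixed-size ring buffer: the producer overwrites the oldest frame
// when the buffer is full, so the consumer always sees recent data.
class FrameRing {
public:
    explicit FrameRing(size_t capacity) : buf_(capacity) {}

    void push(const cv::Mat& frame) {
        std::lock_guard<std::mutex> lock(mtx_);
        frame.copyTo(buf_[head_]);          // deep copy: don't share the capture buffer
        head_ = (head_ + 1) % buf_.size();
        if (size_ < buf_.size()) ++size_;
    }

    bool latest(cv::Mat& out) {             // returns false if nothing captured yet
        std::lock_guard<std::mutex> lock(mtx_);
        if (size_ == 0) return false;
        buf_[(head_ + buf_.size() - 1) % buf_.size()].copyTo(out);
        return true;
    }

private:
    std::vector<cv::Mat> buf_;
    size_t head_ = 0, size_ = 0;
    std::mutex mtx_;
};

The capture thread calls push() on every frame; the main thread calls latest() only when it actually needs an image, so the 10 ms loop pays for a memcpy at most, never a camera wait.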

OpenCV Black and White video empty

I have some code to record video using OpenCV. It works fine for recording colour video, but I'd like to record black and white.
When I call cvtColor to convert to black and white, I get an empty video. I'd really like to know what I'm doing wrong.
VideoCapture cap(1); // open the default camera
cap.set(CV_CAP_PROP_FPS, fps);
cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 720);
if (!cap.isOpened()) // check if we succeeded
    return -1;

VideoWriter writer(filename, CV_FOURCC('M','P','4','2'), fps, Size(1280, 720), true);
int count = 0;
for (;;)
{
    count++;
    Mat frame;
    cap >> frame;
    //cvtColor(frame, frame, CV_BGR2GRAY);
    writer.write(frame);
}
The above code produces a perfectly fine video, but when cvtColor is uncommented the file is empty.
I was trying to make a b/w video with the XVID codec and got an empty file too (5 kB long), until I made the FFMPEG libraries available to the program (put them next to the executable or in a directory on the PATH; this was on Windows).
OpenCV checks for the presence of FFMPEG and uses it if available.
And as an aside: use a second Mat for the b/w frame. It saves computing time, because with a single object every capture causes two reallocations/reinitializations:
cvtColor(frame, bwframe, CV_BGR2GRAY);
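In context, the capture loop would then look something like the sketch below. One caveat worth noting: when writing single-channel frames, open the VideoWriter with its isColor flag set to false; a writer opened for colour frames that is fed grayscale data is another common cause of empty output files.

Mat frame, bwframe;                     // reuse both Mats across iterations
VideoWriter writer(filename, CV_FOURCC('M','P','4','2'), fps,
                   Size(1280, 720), false);  // isColor = false for 1-channel frames
for (;;)
{
    cap >> frame;
    if (frame.empty()) break;
    cvtColor(frame, bwframe, CV_BGR2GRAY);   // convert into the second Mat
    writer.write(bwframe);
}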

Detecting SURF points on real time video

I am looking to detect SURF feature points in a live video feed; however, I can't seem to find any tutorials on how to achieve this.
I am able to detect them on still images:
int minHessian = 400;
cv::SurfFeatureDetector detector(minHessian);
std::vector<cv::KeyPoint> keypoints_1;
detector.detect(img_1, keypoints_1);
cv::Mat img_keypoints_1;
drawKeypoints(img_1, keypoints_1, img_keypoints_1);
But I am not sure how to apply this to a video feed captured with cvCaptureFromCAM().
The frame grabbed by your webcam is nothing but a single image, so whatever you can do with a single image you can do with a frame, in exactly the same way.
The following code receives frames from your webcam in an infinite for loop. You just need to read each frame and then process it the same way you processed your single image.
Mat frame;
VideoCapture cap(0); // open the default camera
if (!cap.isOpened()) // check if we succeeded
    return -1;
for (;;)
{
    cap.read(frame); // get a new frame from camera
    if (frame.empty()) continue;
    // Now do the same thing with each frame that you did with your single image.
}
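Putting the two pieces together, a per-frame SURF loop might look like the following sketch (using the OpenCV 2.x SurfFeatureDetector from the question, which lives in the nonfree module):

#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SURF lives in the nonfree module

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    int minHessian = 400;
    cv::SurfFeatureDetector detector(minHessian);

    cv::Mat frame, img_keypoints;
    std::vector<cv::KeyPoint> keypoints;
    for (;;)
    {
        cap.read(frame);
        if (frame.empty()) continue;
        detector.detect(frame, keypoints);        // same call as for a still image
        cv::drawKeypoints(frame, keypoints, img_keypoints);
        cv::imshow("SURF keypoints", img_keypoints);
        if (cv::waitKey(1) == 27) break;          // Esc to quit
    }
    return 0;
}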

Why does a vector of cv::Mat always return the first image?

I'm reading video frames from a webcam and storing them in a std::vector<cv::Mat> variable.
Each time I want to calculate the frame difference between two consecutive frames, I use the following code, but the result is always zero and I get a zero matrix! I have two threads in my program: one for reading video frames and one for processing them.
Whenever I'm going to write into the vector, I use a mutex to prevent any problems when pushing back or removing frames.
Here is my pseudocode:
std::vector<cv::Mat> IMAGE_VEC;
QMutex IMAGE_VEC_MUTEX;

Main_Thread()
{
    while (1)
    {
        cv::Mat Frame, Reserved_Frame;
        Camera.read(Frame);
        Frame.copyTo(Reserved_Frame);
        QMutexLocker Locker(&IMAGE_VEC_MUTEX);
        IMAGE_VEC.push_back(Reserved_Frame);
        Locker.unlock();
        cv::imshow("Frame", Frame);
        cv::waitKey();
    }
}
And my processing thread is:
Process_Thread()
{
    for (; IMAGE_VEC.size() > 1 ;)
    {
        cv::Mat frame1;
        IMAGE_VEC[0].copyTo(frame1);
        cv::Mat frame2;
        IMAGE_VEC[1].copyTo(frame2);
        cv::subtract(frame1, frame2, result);
        QMutexLocker Locker(&IMAGE_VEC_MUTEX);
        IMAGE_VEC[0].release();
        //qDebug()<<"IMAGE_VEC Step2 size is: "<<IMAGE_VEC.size()<<"\n";
        IMAGE_VEC.erase(IMAGE_VEC.begin());
        Locker.unlock();
    }
}
I calculated the mean and standard deviation of each frame in the processing thread and they were always the same! I could work around the problem by changing the processing thread to this:
Process_Thread()
{
    for (; IMAGE_VEC.size() > 1 ;)
    {
        cv::Mat frame1;
        IMAGE_VEC[0].copyTo(frame1);
        QMutexLocker Locker(&IMAGE_VEC_MUTEX);
        IMAGE_VEC[0].release();
        //qDebug()<<"IMAGE_VEC Step2 size is: "<<IMAGE_VEC.size()<<"\n";
        IMAGE_VEC.erase(IMAGE_VEC.begin());
        Locker.unlock();
        delay_ms(1000); // more than 1 second delay !!!
        cv::Mat frame2;
        IMAGE_VEC[0].copyTo(frame2);
        cv::subtract(frame1, frame2, result);
    }
}
Why does this happen? Why is a delay of more than one second needed to make this code work? I tried delays shorter than one second and the result was the same as before.
I'd appreciate your comments on this.
You may need to copy or clone your frame into another Mat and then push that to your vector; change your code like this:
cv::Mat Frame;
Camera.read(Frame);
Mat tmp = Frame.clone(); // clone the frame into a new Mat
IMAGE_VEC.push_back(tmp);
Otherwise you are pushing the same pointer on each push_back.
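The underlying issue is cv::Mat's reference-counted copy semantics: operator= and the copy constructor share the pixel buffer, while clone() and copyTo() duplicate it. A small self-contained demonstration:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat a(2, 2, CV_8UC1, cv::Scalar(0));
    cv::Mat shallow = a;          // header copy: shares a's pixel buffer
    cv::Mat deep = a.clone();     // deep copy: owns its own buffer

    a.at<uchar>(0, 0) = 255;      // mutate the original
    std::cout << (int)shallow.at<uchar>(0, 0) << "\n"; // 255: change is visible
    std::cout << (int)deep.at<uchar>(0, 0) << "\n";    // 0: unaffected
    return 0;
}

Every element pushed without clone() behaves like shallow above: one buffer with many headers, so every "frame" in the vector shows the latest capture.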

Why does the function calcOpticalFlowPyrLK return the same value?

I am writing an OpenCV application and I am using the Lucas-Kanade algorithm.
I use this function:
calcOpticalFlowPyrLK(prevgray, gray, points[0], points[1], status, err, Size(31,31), 3, termcrit, 0, 0.001);
to calculate the new positions of the points, but e.g. point [1][2] has the same value as point [0][2]; it doesn't change. Why?
Without seeing how you initialize the arguments to the function, it is hard to answer your question, but my guess is that your prevgray image is the same as gray.
The copy operator (i.e. =) for Mat objects only copies the header and the pointer to the matrix data, not the data itself. If you grab images from the camera, make sure you copy the image data. Something like this:
VideoCapture cap;
cap.open(0);
Mat frame, gray, prevgray;
for (;;)
{
    cap >> frame;
    cvtColor(frame, gray, CV_BGR2GRAY); // convert the capture to grayscale
    if (prevgray.empty()) // first iteration: no previous frame yet
    {
        gray.copyTo(prevgray);
        continue;
    }
    calcOpticalFlowPyrLK(prevgray, gray, points[0], points[1], status, err,
                         Size(31,31), 3, termcrit, 0, 0.001);
    gray.copyTo(prevgray); // make sure you copy the data
    // if you do
    //     prevgray = gray;
    // then in the next iteration gray and prevgray will point to the same data
}