I've been trying to grab frames from two different cameras simultaneously, but no matter how many times I call VideoCapture::grab(), it seems to have no effect: the frame returned by VideoCapture::retrieve() is always the first frame captured after the previous VideoCapture::retrieve().
I've tested this on both OpenCV 2.4 and 3.1, with a Logitech C920 camera on Windows.
Example:
VideoCapture vCapture;
Mat imgResult;
vCapture.open(0); //at this point, there is a green sheet in front of the camera
Sleep(100000); //change green sheet with red sheet
vCapture.grab(); //returns true
vCapture.retrieve(imgResult); //image with green sheet is retrieved
Sleep(100000); //change red sheet with blue sheet
vCapture.retrieve(imgResult); //red sheet is retrieved
I've also tried:
for (int i = 0; i < 1000; i++) {
    vCapture.grab();
} //takes almost no processing time, like an empty for
vCapture.retrieve(imgResult); //same as before
Retrieve always returns true and retrieves a frame, even if no grab was called since opening vCapture.
My current workaround is to retrieve the frame twice (multi-threaded) to make sure I get the latest one, but it isn't reliable enough to keep both cameras in sync. Can anyone shed some light on how to force the camera to grab the current frame?
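The retrieve-twice workaround can be made more systematic: a common pattern is a dedicated grab thread per camera that continuously overwrites a mutex-protected "latest frame" slot, so the consumer never sees a buffered stale frame. Here is a minimal sketch of that slot in plain C++; the frames are mocked as int, and the assumption is that a producer thread would fill it from cap.grab()/cap.retrieve() with a cv::Mat (that OpenCV part is not shown):

```cpp
#include <mutex>

// Minimal "latest value" mailbox: a grab thread calls set() as fast as
// the driver delivers frames; the consumer calls get() and always sees
// the newest frame, so driver-side buffering can't hand out stale images.
// Frames are mocked as plain values here; storing a cv::Mat filled by
// cap.grab()/cap.retrieve() is the assumed real use.
template <typename T>
class LatestFrame {
public:
    void set(const T& v) {
        std::lock_guard<std::mutex> lk(m_);
        value_ = v;
        has_value_ = true;
    }
    // Returns false until the first frame has arrived.
    bool get(T* out) const {
        std::lock_guard<std::mutex> lk(m_);
        if (!has_value_) return false;
        *out = value_;
        return true;
    }
private:
    mutable std::mutex m_;
    T value_{};
    bool has_value_ = false;
};
```

With one LatestFrame per camera, the main thread can read both slots back-to-back, which is about as close to "simultaneous" as the USB bus will allow.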
Thanks!
Edit:
A variation of the first example, for clarity:
VideoCapture vCapture;
Mat imgResult;
vCapture.open(0); //at this point, there is a green sheet in front of the camera
vCapture.retrieve(imgResult); //image with green sheet is retrieved
Sleep(100000); //change green sheet with red sheet
vCapture.grab(); //returns true
vCapture.retrieve(imgResult); //green sheet is retrieved
vCapture.retrieve(imgResult); //red sheet is retrieved
Sleep(100000); //change red sheet with blue sheet
vCapture.retrieve(imgResult); //red sheet is retrieved
vCapture.retrieve(imgResult); //blue sheet is retrieved
Expected behavior:
VideoCapture vCapture;
Mat imgResult;
vCapture.open(0); //at this point, there is a green sheet in front of the camera
vCapture.retrieve(imgResult); //error or image with green sheet is retrieved
Sleep(100000); //change green sheet with red sheet
vCapture.grab(); //returns true
vCapture.retrieve(imgResult); //red sheet is retrieved
Per OpenCV documentation:
VideoCapture::grab: The methods/functions grab the next frame from video file or camera and return true (non-zero) in the case of success.
VideoCapture::retrieve: The methods/functions decode and return the just grabbed frame. If no frames has been grabbed (camera has been disconnected, or there are no more frames in video file), the methods return false and the functions return NULL pointer.
Please try this code with the following instructions:
Before and while you start the program, hold a red sheet in front of the camera; that is the moment the first .grab will be called.
Once you see the black window pop up, remove the red sheet and hold a blue sheet, or anything else except the red or green sheet, in front of the camera. Then press the keyboard key 'q'.
You now have 5 seconds to change the scene again: hold the green sheet in front of the camera and wait. The black window will then be replaced by one of your camera images.
int main(int argc, char* argv[])
{
    cv::Mat input = cv::Mat(512, 512, CV_8UC1, cv::Scalar(0));
    cv::VideoCapture cap(0);
    while (cv::waitKey(10) != 'q')
    {
        cap.grab();
        cv::imshow("input", input);
    }
    cv::waitKey(5000);
    cap.retrieve(input);
    cv::imshow("input", input);
    cv::waitKey(0);
    return 0;
}
3 possible results:
you see the red sheet: this means that the first grab was called and fixed the image, until a retrieve was called.
you see blue sheet: this means every .grab call "removes" one image and the camera will capture a new image on the next call of .grab
you see the green sheet: this means your .retrieve doesn't need a .grab at all and just grabs images automatically.
For me, result 1 occurs, so you can't just grab and grab and grab and then .retrieve the last image.
Test 2: full manual control:
It looks like you are right: on my machine, no matter when or how often I call .grab, the next image is the one captured at the time I called the previous .retrieve, and the .grab calls don't seem to influence the capture time at all.
It would be very interesting to know whether the same behaviour occurs for other (ideally all) kinds of cameras and operating systems.
I've tested on the internal camera of a T450s under Windows 7.
int main(int argc, char* argv[])
{
    cv::Mat input = cv::Mat(512, 512, CV_8UC1, cv::Scalar(0));
    cv::VideoCapture cap(0);
    bool grabbed;
    bool retrieved;
    while (true)
    {
        char w = cv::waitKey(0);
        switch (w)
        {
        case 'q': return 0;
        case 27:  return 0;
        case ' ': retrieved = cap.retrieve(input); break;
        case 'p': grabbed = cap.grab(); break;
        }
        cv::imshow("input", input);
    }
    return 0;
}
In addition, this simple code seems to be off by one frame for my camera (which therefore probably has a buffer size of 2?):
while (true)
{
    cap >> input;
    cv::imshow("input", input);
    cv::waitKey(0);
}
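One way to picture that off-by-one: if the driver keeps a small FIFO of already-captured frames and each grab/read pops the oldest one, then with a buffer of size 2 you are always one frame behind. A toy model in plain C++ (no camera involved; the buffer size of 2 is only an assumption drawn from the observation above):

```cpp
#include <cstddef>
#include <deque>

// Toy model of a driver-side frame FIFO. capture() is what the hardware
// does continuously on its own; grab() is what VideoCapture::grab()/read()
// appears to do here: pop the OLDEST buffered frame, not the newest.
struct ToyDriver {
    std::deque<int> fifo;   // buffered frame ids, oldest at the front
    std::size_t capacity;
    int next_id = 0;
    explicit ToyDriver(std::size_t cap) : capacity(cap) {}
    void capture() {        // hardware produces a new frame
        if (fifo.size() == capacity) fifo.pop_front();
        fifo.push_back(next_id++);
    }
    int grab() {            // application pops the oldest buffered frame
        int f = fifo.front();
        fifo.pop_front();
        return f;
    }
};
```

Under this model, grabbing twice in a row (or once per waitKey press, twice per scene change) is exactly what drains the stale frame, which matches the "every second showing" behaviour reported below.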
I ran my test and noticed very strange behavior of the .grab and .retrieve functions.
This is the example:
cv::Mat input = cv::Mat(512, 512, CV_8UC1, cv::Scalar(0));
cv::VideoCapture cap(0);
while (true)
{
    cap.grab();
    cap.retrieve(input, 5);
    cv::imshow("input", input);
    cv::waitKey(0);
}
If you press a key slowly, about every 5 seconds, and change something in front of the camera between presses, the position of the object in the image changes only on every other showing, that is, on every other call of the .grab and .retrieve functions.
If you press a key quickly, about every second, and also change something in front of the camera between presses, the position of the object changes on every showing.
This suggests that these functions can be used to sync cameras.
Related
I'm running into an odd problem with OpenCV on Linux, Ubuntu 16.04 specifically. If I use the usual code to show a webcam stream, like this, it works fine:
// WebcamTest.cpp
#include <opencv2/opencv.hpp>
#include <iostream>
int main()
{
    // declare a VideoCapture object and associate to webcam, 1 => use 2nd webcam, the 0th webcam is the one integral to the TX2 development board
    cv::VideoCapture capWebcam(1);
    // check if VideoCapture object was associated to webcam successfully, if not, show error message and bail
    if (capWebcam.isOpened() == false)
    {
        std::cout << "error: capWebcam not accessed successfully\n\n";
        return (0);
    }
    cv::Mat imgOriginal;    // input image
    cv::Mat imgGrayscale;   // grayscale of input image
    cv::Mat imgBlurred;     // intermediate blurred image
    cv::Mat imgCanny;       // Canny edge image
    char charCheckForEscKey = 0;
    // while the Esc key has not been pressed and the webcam connection is not lost . . .
    while (charCheckForEscKey != 27 && capWebcam.isOpened())
    {
        bool blnFrameReadSuccessfully = capWebcam.read(imgOriginal); // get next frame
        // if frame was not read successfully, print error message and jump out of while loop
        if (!blnFrameReadSuccessfully || imgOriginal.empty())
        {
            std::cout << "error: frame not read from webcam\n";
            break;
        }
        // convert to grayscale
        cv::cvtColor(imgOriginal, imgGrayscale, CV_BGR2GRAY);
        // blur image
        cv::GaussianBlur(imgGrayscale, imgBlurred, cv::Size(5, 5), 0);
        // get Canny edges
        cv::Canny(imgBlurred, imgCanny, 75, 150);
        cv::imshow("imgOriginal", imgOriginal);
        cv::imshow("imgCanny", imgCanny);
        charCheckForEscKey = cv::waitKey(1); // delay (in ms) and get key press, if any
    } // end while
    return (0);
}
This example shows the webcam stream in one imshow window and a Canny edges image in a second window. Both windows update and show the images as expected with very little if any perceptible flicker.
If you're wondering why I'm using camera 1 instead of the usual camera 0: I'm running this on a Jetson TX2, where camera 0 is the one integral to the development board, and I'd prefer to use an additional external webcam. For this same reason I have to use Ubuntu 16.04, but I suspect the result would be the same with Ubuntu 18.04 (I have not tested this, however).
If instead I call a function that does significant processing, rather than simple Canny edges, i.e.:
int main(void)
{
    .
    .
    .
    // declare a VideoCapture object and associate to webcam, 1 => use 2nd webcam, the 0th webcam is the one integral to the TX2 development board
    cv::VideoCapture capWebcam(1);
    // check if VideoCapture object was associated to webcam successfully, if not, show error message and bail
    if (capWebcam.isOpened() == false)
    {
        std::cout << "error: capWebcam not accessed successfully\n\n";
        return (0);
    }
    cv::namedWindow("imgOriginal");
    cv::Mat imgOriginal;
    char charCheckForEscKey = 0;
    // while the Esc key has not been pressed and the webcam connection is not lost . . .
    while (charCheckForEscKey != 27 && capWebcam.isOpened())
    {
        bool blnFrameReadSuccessfully = capWebcam.read(imgOriginal); // get next frame
        // if frame was not read successfully, print error message and jump out of while loop
        if (!blnFrameReadSuccessfully || imgOriginal.empty())
        {
            std::cout << "error: frame not read from webcam\n";
            break;
        }
        detectLicensePlate(imgOriginal);
        cv::imshow("imgOriginal", imgOriginal);
        charCheckForEscKey = cv::waitKey(1); // delay (in ms) and get key press, if any
    } // end while
    .
    .
    .
    return (0);
}
The detectLicensePlate() function takes about a second to run.
The problem I'm having is, when running this program, the window only appears for the slightest amount of time, usually not long enough to even be perceptible, and never long enough to actually see the result.
The strange thing is, the window disappears, then the second or so of delay occurs while detectLicensePlate() does its thing, then the window appears again for a very short time, then disappears again, and so on. It's almost as though just after cv::imshow("imgOriginal", imgOriginal);, cv::destroyAllWindows(); is implicitly being called.
The behavior I'm attempting to achieve is for the window to stay open and continue to show the previous result while processing the next. From what I recall this was the default behavior on Windows.
I should mention that I'm explicitly declaring the window with cv::namedWindow("imgOriginal"); before the while loop in an attempt to not let it go out of scope, but this does not seem to help.
Of course I can make the delay longer, i.e.
charCheckForEscKey = cv::waitKey(1500);
to wait for 1.5 seconds, but then the application becomes very unresponsive.
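One way around the responsiveness problem is to never block in one long wait: poll in small slices and bail out as soon as a key (or any other cancel condition) arrives. A sketch of that idea in plain C++, with the wait mocked by a sleep; the assumption is that the real version would call cv::waitKey(slice_ms) in the loop and treat any pressed key as the cancel condition:

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Wait up to total_ms, but wake every slice_ms to poll `cancelled`.
// Returns true if the wait ran to completion, false if it was cancelled
// early. In an OpenCV program the sleep would be replaced by
// cv::waitKey(slice_ms), which also keeps the HighGUI windows serviced.
bool responsive_wait(int total_ms, int slice_ms,
                     const std::function<bool()>& cancelled) {
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now() + std::chrono::milliseconds(total_ms);
    while (clock::now() < deadline) {
        if (cancelled()) return false;   // e.g. a key was pressed
        std::this_thread::sleep_for(std::chrono::milliseconds(slice_ms));
    }
    return true;
}
```

This gives the same 1.5-second pause between frames while reacting to input within one slice.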
Based on the post "c++ opencv image not display inside the boost thread", I tried declaring the window outside the while loop and putting detectLicensePlate() and cv::imshow() on a separate thread, as follows:
.
.
.
cv::namedWindow("imgOriginal");
boost::thread myThread;
// while the Esc key has not been pressed and the webcam connection is not lost . . .
while (charCheckForEscKey != 27 && capWebcam.isOpened())
{
    // if frame was not read successfully, print error message and jump out of while loop
    if (!blnFrameReadSuccessfully || imgOriginal.empty())
    {
        std::cout << "error: frame not read from webcam\n";
        break;
    }
    myThread = boost::thread(&preDetectLicensePlate, imgOriginal);
    myThread.join();
    .
    .
    .
} // end while

// separate function
void preDetectLicensePlate(cv::Mat &imgOriginal)
{
    detectLicensePlate(imgOriginal);
    cv::imshow("imgOriginal", imgOriginal);
}
I even tried putting detectLicensePlate() on a separate thread but not cv::imshow(), and the other way around, still the same result. No matter how I change the order or use threading I can't get the window to stay open while the next round of processing is going.
I realize I could use an entirely different windowing environment, such as Qt or something else, and that may or may not solve the problem, but I'd really prefer to avoid that for various reasons.
Does anybody have any other suggestions to get an OpenCV imshow window to stay open until the window is next updated or cv::destroyAllWindows() is called explicitly?
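A pattern worth trying is the inverse of the threading above: keep all HighGUI calls (imshow, waitKey) on the main thread, which keeps pumping waitKey and redisplaying the last finished result, and hand frames to a worker thread for the slow detectLicensePlate() step. The hand-off can be a single-slot mailbox, sketched here in plain C++ (frames mocked as int; wiring it to cv::Mat and to your capture loop is the assumed part, and whether imshow tolerates other threads at all is backend-dependent):

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>
#include <thread>
#include <vector>

// Single-slot hand-off: the capture/display thread offers the newest
// frame; the worker takes whatever is in the slot when it becomes free.
// Frames arriving while the worker is busy simply overwrite the slot
// (they are dropped), so the display loop is never blocked by slow
// processing.
template <typename T>
class FrameSlot {
public:
    void offer(T v) {
        std::lock_guard<std::mutex> lk(m_);
        slot_ = std::move(v);
        cv_.notify_one();
    }
    void close() {                       // tell the worker to shut down
        std::lock_guard<std::mutex> lk(m_);
        closed_ = true;
        cv_.notify_one();
    }
    // Blocks until a frame is available; returns empty once closed and drained.
    std::optional<T> take() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return slot_.has_value() || closed_; });
        std::optional<T> out = std::move(slot_);
        slot_.reset();
        return out;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::optional<T> slot_;
    bool closed_ = false;
};
```

The main loop then stays `read frame; offer(frame); imshow(last result); waitKey(1);`, so the window is serviced continuously and keeps showing the previous result while the worker computes the next one.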
I have been trying to use absdiff to find motion in an image, but unfortunately it fails; I am new to OpenCV. The code is supposed to use absdiff to determine whether any motion is happening around or not, but the output is pitch black for diff1, diff2, and motion. Meanwhile, next_mframe, current_mframe, and prev_mframe show grayscale images, while result shows a clear, normal image. I used this as my reference: http://manmade2.com/simple-home-surveillance-with-opencv-c-and-raspberry-pi/. I think all the image buffers are loaded with the same frame and then compared, which would explain why the output is pitch black. Is there some other step I'm missing? I am using RTSP to pass the camera's raw image to ROS.
void imageCallback(const sensor_msgs::ImageConstPtr& msg_ptr) {
    CvPoint center;
    int radius, posX, posY;
    cv_bridge::CvImagePtr cv_image; //To parse image_raw from rstp
    try
    {
        cv_image = cv_bridge::toCvCopy(msg_ptr, enc::BGR8);
    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("cv_bridge exception: %s", e.what());
        return;
    }
    frame = new IplImage(cv_image->image); //frame now holding raw_image
    frame1 = new IplImage(cv_image->image);
    frame2 = new IplImage(cv_image->image);
    frame3 = new IplImage(cv_image->image);
    matriximage = cvarrToMat(frame);
    cvtColor(matriximage, matriximage, CV_RGB2GRAY); //grayscale
    prev_mframe = cvarrToMat(frame1);
    cvtColor(prev_mframe, prev_mframe, CV_RGB2GRAY); //grayscale
    current_mframe = cvarrToMat(frame2);
    cvtColor(current_mframe, current_mframe, CV_RGB2GRAY); //grayscale
    next_mframe = cvarrToMat(frame3);
    cvtColor(next_mframe, next_mframe, CV_RGB2GRAY); //grayscale
    // Maximum deviation of the image, the higher the value, the more motion is allowed
    int max_deviation = 20;
    result = matriximage;
    //relocate images in the right order
    prev_mframe = current_mframe;
    current_mframe = next_mframe;
    next_mframe = matriximage;
    //motion = diffImg(prev_mframe, current_mframe, next_mframe);
    absdiff(prev_mframe, next_mframe, diff1); //Here should show black and white image
    absdiff(next_mframe, current_mframe, diff2);
    bitwise_and(diff1, diff2, motion);
    threshold(motion, motion, 35, 255, CV_THRESH_BINARY);
    erode(motion, motion, kernel_ero);
    imshow("Motion Detection", result);
    imshow("diff1", diff1); //I tried to output the image but its all black
    imshow("diff2", diff2); //same here, I tried to output the image but its all black
    imshow("diff1", motion);
    imshow("nextframe", next_mframe);
    imshow("motion", motion);
    char c = cvWaitKey(3);
}
I changed from the cv_bridge method to VideoCapture and it seems to work well; cv_bridge just cannot save the image, even though I changed the IplImage to Mat format. Maybe there are other ways, but for now I will go with this method first.
VideoCapture cap(0);

Tracker(void)
{
    //check if camera worked
    if (!cap.isOpened())
    {
        cout << "cannot open the Video cam" << endl;
    }
    cout << "camera is opening" << endl;
    cap >> prev_mframe;
    cvtColor(prev_mframe, prev_mframe, CV_RGB2GRAY); // capture 3 frames and convert to grayscale
    cap >> current_mframe;
    cvtColor(current_mframe, current_mframe, CV_RGB2GRAY);
    cap >> next_mframe;
    cvtColor(next_mframe, next_mframe, CV_RGB2GRAY);
    //relocate images in the right order
    current_mframe.copyTo(prev_mframe);
    next_mframe.copyTo(current_mframe);
    matriximage.copyTo(next_mframe);
    motion = diffImg(prev_mframe, current_mframe, next_mframe);
}
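The pitch-black hypothesis can be checked without a camera: the classic three-frame difference computes absdiff(prev, next) AND absdiff(next, current), so if all three buffers hold the same frame, the result is zero everywhere. Here is a per-pixel version in plain C++ on grayscale pixel vectors (with OpenCV the same computation is cv::absdiff + cv::bitwise_and on whole Mats, as in the callback above):

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Per-pixel three-frame difference, as used by diffImg():
// motion = absdiff(prev, next) AND absdiff(next, current).
// All three inputs are assumed to be grayscale frames of equal size.
std::vector<uint8_t> diff_img(const std::vector<uint8_t>& prev,
                              const std::vector<uint8_t>& curr,
                              const std::vector<uint8_t>& next) {
    std::vector<uint8_t> motion(prev.size());
    for (std::size_t i = 0; i < prev.size(); ++i) {
        const uint8_t d1 =
            static_cast<uint8_t>(std::abs(int(prev[i]) - int(next[i])));
        const uint8_t d2 =
            static_cast<uint8_t>(std::abs(int(next[i]) - int(curr[i])));
        motion[i] = d1 & d2;  // bitwise AND, matching cv::bitwise_and
    }
    return motion;
}
```

Feeding three identical frames produces an all-zero (black) motion image, which is exactly the symptom described: the fix is to keep genuinely distinct consecutive frames in prev/current/next rather than three copies of the latest one.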
I have an object that lights up a certain color, say blue. It then blinks a different color every 5 seconds, say red. I have written a program to track the object using color based detection and draw a circle over it using opencv, but every time the object blinks red, I momentarily lose the ability to track it because the computer no longer sees the object. My question is, can I continuously track the blue object and have my program recognize the object as blue even while it is blinking red?
Code:
Note: I have an "object" class that has getters/setters for color, hsv min and max for its color, name, etc. trackObject() uses findContours method to track and draw graphics on the object.
int main(int argc, char** argv)
{
    resizeWindow("trackbar", 500, 0);
    VideoCapture stream1(0);
    if (!stream1.isOpened()) {
        cout << "cannot open camera";
    }
    while (true) {
        Mat cameraFrame;
        Mat hsvFrame;
        Mat binMat;
        Mat grey;
        Mat grayBinMat;
        stream1.read(cameraFrame);
        if (cameraFrame.empty()) {
            break;
        }
        object blue("BLUE");
        cvtColor(cameraFrame, hsvFrame, CV_BGR2HSV, 0);
        inRange(hsvFrame, blue.getHSVMin(), blue.getHSVMax(), binMat);
        morphOps(binMat);
        trackobject(cameraFrame, binMat, blue);
        object red("RED");
        cvtColor(cameraFrame, hsvFrame, CV_BGR2HSV, 0);
        inRange(hsvFrame, red.getHSVMin(), red.getHSVMax(), binMat);
        morphOps(binMat);
        trackobject(cameraFrame, binMat, red);
        imshow("cam", cameraFrame);
        //imshow("gray bin", grayBinMat);
        imshow("hsv bin", binMat);
        imshow("hsv", hsvFrame);
        //1. waitKey waits (x ms) before resuming program 2. returns the ascii value of any key pressed during the wait time
        if (waitKey(30) >= 0) {
            break;
        }
    }
    return 0;
}
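One way to "remember" the blue object through its red blinks is to keep the color detection as-is but wrap its per-frame output in a small coasting state: when the detector reports nothing (the blink), keep reporting the last known position for up to N frames before dropping the track. A sketch in plain C++ (Point2i and the hypothetical per-frame detection result stand in for whatever trackobject() computes; tune max_missed to roughly cover one blink at your frame rate):

```cpp
#include <optional>

struct Point2i { int x, y; };

// "Coasting" tracker state: when the color detector loses the object
// (e.g. while it blinks red), keep reporting the last known position
// for up to max_missed frames instead of dropping the track at once.
class CoastingTracker {
public:
    explicit CoastingTracker(int max_missed) : max_missed_(max_missed) {}
    // detection: the detector's output for this frame, empty if not found.
    // Returns the position to draw, or empty if the track is lost for good.
    std::optional<Point2i> update(const std::optional<Point2i>& detection) {
        if (detection) {
            last_ = detection;
            missed_ = 0;
        } else if (last_ && ++missed_ > max_missed_) {
            last_.reset();  // object really gone, drop the track
        }
        return last_;
    }
private:
    int max_missed_;
    int missed_ = 0;
    std::optional<Point2i> last_;
};
```

In the loop above you would feed it the centroid found for "BLUE" each frame (or empty when inRange/findContours comes up empty) and draw the returned position, so the circle stays put during the red blink.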
I am using OpenCV to show frames from a camera. I want to show the frames in two separate windows: the first window should show the real frames from the camera (a new frame every 30 milliseconds), and the second window should show the frames with some delay (a new frame every 1 second). Is it possible to do that? I tried it with my code, but it does not work well. Please give me a solution for this task using OpenCV and Visual Studio 2012. Thanks in advance.
This is my code
VideoCapture cap(0);
if (!cap.isOpened())
{
    cout << "exit" << endl;
    return -1;
}
namedWindow("Window 1", 1);
namedWindow("Window 2", 2);
long count = 0;
Mat face_algin;
while (true)
{
    Mat frame;
    Mat original;
    cap >> frame;
    if (!frame.empty()) {
        original = frame.clone();
        cv::imshow("Window 1", original);
    }
    if (waitKey(30) >= 0) break; // Delay 30ms for first window
}
You could write the loop that displays frames as a single function, with the source and frame rate as arguments, and run two instances of it simultaneously by multi-threading.
The pseudo code would look like,
void* play_video(void* frame_rate)
{
    // play at specified frame rate
}

main()
{
    create_thread(thread1, play_video, normal_frame_rate);
    create_thread(thread2, play_video, delayed_frame_rate);
    join_thread(thread1);
    join_thread(thread2);
}
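The pseudo code can be made concrete with std::thread. In this sketch the display is mocked as a counter so it runs anywhere; in the real program the loop body would be `cap >> frame; imshow(...);` (note, as an extra wrinkle this sketch ignores, that many HighGUI builds only behave well when imshow/waitKey run on a single thread):

```cpp
#include <chrono>
#include <thread>

// Plays `frames` frames at one frame per frame_interval_ms, counting
// each "displayed" frame into *shown. In a real player the body would
// fetch and show a frame instead of incrementing a counter.
void play_video(int frame_interval_ms, int frames, int* shown) {
    for (int i = 0; i < frames; ++i) {
        ++*shown;  // stand-in for: cap >> frame; imshow(window, frame);
        std::this_thread::sleep_for(
            std::chrono::milliseconds(frame_interval_ms));
    }
}
```

Launching `std::thread fast(play_video, 30, n, &a);` alongside `std::thread slow(play_video, 1000, n, &b);` and joining both gives the two playback rates the question asks for.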
I'm trying to write a .avi video using OpenCV 2.4.3.
What I am going to do is:
load a .avi video
do some stuff over each frame, eventually discarding some of them
save the new video
Basically, what I do is:
use a face detector and find faces
if no faces are detected, skip that frame
if there are faces, draw a rectangle on each one and put some text over it
The code looks like this (skipping the main elaboration part, which doesn't touch the Mat alt frame):
VideoCapture cam("1.avi");
VideoWriter writer("1_det.avi",
                   cam.get(CV_CAP_PROP_FOURCC),
                   cam.get(CV_CAP_PROP_FPS),
                   cv::Size(cam.get(CV_CAP_PROP_FRAME_WIDTH),
                            cam.get(CV_CAP_PROP_FRAME_HEIGHT)));
while (cam.read(image2.img)) {
    Mat alt = image2.img.clone();
    // finds faces, then:
    if (faces.size() == 0) {
        for (int k = 0; k < 5; k++) cam.read(alt);
        continue;
    }
    for (int f = 0; f < faces.size(); ++f) {
        // do some stuff here, then draw some results:
        putText(alt, ss.str(), Point(faces[f].x, faces[f].y), FONT_HERSHEY_SCRIPT_SIMPLEX, 1, Scalar::all(255), 1);
        rectangle(alt, Point(faces[f].x, faces[f].y), Point(faces[f].x + faces[f].width, faces[f].y + faces[f].height), colors[f], 3, 8, 0);
    }
    writer << alt;
}
Now, since I discard a significant number of frames, I would like to save the video at a lower FPS, like cam.get(CV_CAP_PROP_FPS)/2, but if I try something like that, the video shows the first frame repeated over and over again for the entire duration of the video (the duration itself is correct, though).
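One possibly missing piece: the FPS passed to the writer only sets the playback pacing of whatever frames you write, it does not compensate for dropped frames. Playback duration is simply frame count over FPS, so writing the surviving frames at the original FPS keeps their original pace, while halving the FPS doubles how long each written frame stays on screen. A trivial arithmetic check:

```cpp
// Playback duration of a video file is frame_count / fps: halving the
// fps for the same set of frames doubles the playback time, it does not
// restore the skipped frames.
double playback_seconds(int frame_count, double fps) {
    return frame_count / fps;
}
```

So if 300 frames survive filtering from a 30 FPS source, writing them at 30 FPS plays them in 10 s (faster-looking motion, since the source span was longer), and writing at 15 FPS stretches them to 20 s.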
Is there something big I'm missing about videos..?
Any input will be greatly appreciated.
Thanks