I'm currently working on a project where I need to pull successive frames from a video, then find and match features on them. The problem is that when I call VideoCapture::read(Mat &image), it overwrites both images that I want to compare with the same frame. I think it could be because the same buffer is reused, so both Mats end up pointing to the same data. I'm just not sure how to get around this.
Here's the problem code: (don't worry about the poor exception handling)
Mat m1, m2;
VideoCapture cap(argv[1]);
if(!cap.isOpened()){
    throw std::runtime_error("Could not open the file");
}
int num = 0;
while(num < 20){
    try{
        cap.read(m1);
        cap.read(m2);
        num++;
        match(m1, m2, num);
    }catch(const std::exception&){
        std::cout << "Oh no!";
    }
}
match(m1,m2,num) does the feature detection business and outputs an image named "Image_%d.jpg" with num. This image shows both frames side by side with matches displayed, but it is the same frame twice in a row. match() does work, because I have tested it with still images, so I am confident the problem lies in the cap.read code.
Any help/suggestions would be greatly appreciated.
Well, it was as easy as making sure each image was a deep copy of the captured image.
cap >> m1;
m1 = m1.clone();
did the trick, although less elegantly than I hoped for.
Hello community,
Using C++, OpenCV and gnuplot, my goal is to showcase Sobel values in a video, comparing images with depth of field against ones without.
I've been saving the frame as cv::Mat, converting it to grayscale, blurring it with a 3x3 kernel, and applying Sobel on it. I normalized the result and it displayed fine in imshow. With the help of gnuplot-iostream I want to create a 3D image in gnuplot, similar to this example picture, showing the intensity, which is between 0 and 255 after normalization.
Gnuplot doesn't seem to natively support cv::Mat, so I tried a couple of ways to insert it, all producing just one line and/or wrong scaling. This is the code I'm using to convert it into a vector, which gnuplot seems to accept with no issues.
if (sobelxy.isContinuous()) {
    imgvec.assign(sobelxy.datastart, sobelxy.dataend);
}
else {
    for (int i = 0; i < sobelxy.rows; i++) {
        imgvec.insert(imgvec.end(), sobelxy.ptr<double>(i), sobelxy.ptr<double>(i) + sobelxy.cols);
    }
}
I could access pixels one by one, but this is very performance-heavy and not well suited for a video, so I'm wondering whether there's an option to either preprocess the vector so that gnuplot gives me the correct result, or use certain parameters in gnuplot to read the vector correctly.
Thank you in advance.
I am trying to capture an image from a webcam using OpenCV. My code is as follows.
Mat frame;
VideoCapture cap0(0);
cap0.set(CV_CAP_PROP_FRAME_WIDTH, 320);
cap0.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
cap0 >> frame;
string fileName = "/0.jpg";
cout << fileName << endl;
imwrite(fileName, frame);
I am getting this image as output
You can see some weird lines in the output. What is the possible reason, and how can I eliminate them? Please point me in the right direction.
Thanks
This looks like a problem in the acquisition driver, which doesn't transfer the right row data on every 41st row (43rd for the first pair!?), maybe due to block transfers.
It seems that there is valid image data in these rows, but I can't identify where they could be coming from.
It may be electromagnetic interference in your case. Try checking it under normal conditions, without a power line near the camera, or shield the camera.
I'm looking to make a program that, once run, will continuously look for a template image (stored in the program's directory) to match in real time against the screen. Once found, it will click on the image (i.e. the center of the coordinates of the best match). The images will be exact copies (size/color), so finding the match should not be very hard.
This process then continues with many other images, and then resets to start again with the first image; but once I have the first part working, I can just copy the code.
I have downloaded the OpenCV library as it has image matching tools but I am lost. Any help with writing some stub code or pointing me to a helpful resource is much appreciated. I have checked a lot of the OpenCV docs with no luck.
Thank you.
If you think that the template image would not be very different in the current frame, then you should use matchTemplate() from OpenCV. It's very easy to use and will give you good results.
Have a look here for complete explanation http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
void start()
{
    VideoCapture cap(0);
    namedWindow(wndname, 1);

    // Load your template image here (once, outside the loop)
    Mat templ = imread("template.jpg");

    for(;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera

        // Declare a result image here: one correlation score per
        // possible top-left position of the template in the frame
        Mat result(frame.rows - templ.rows + 1,
                   frame.cols - templ.cols + 1, CV_32FC1);

        // Perform the template matching between the template image
        // and the frame, storing the correlation scores in result
        matchTemplate(frame, templ, result, CV_TM_CCOEFF_NORMED);

        // The location of the maximum score is the best match
        double maxVal;
        Point maxLoc;
        minMaxLoc(result, 0, &maxVal, 0, &maxLoc);

        char c = waitKey(33);
        if( c == 27 ) break;
    }
}
I am currently planning on splitting my image into 3 channels so I can get the RGB values of the image, plot a scatter graph, and model it using a normal distribution, calculating the covariance matrix, mean, etc.
Then I will calculate the distance between the background points and the actual image to segment the image.
Now, for my first task, I have written the following code.
VideoCapture cam(0);
//int id=0;
Mat image, Rch, Gch, Bch;
vector<Mat> rgb(3); //RGB is a vector of 3 matrices
namedWindow("window");
while(1)
{
    cam >> image;
    split(image, rgb);
    Bch = rgb[0];
    Gch = rgb[1];
    Rch = rgb[2];
But as soon as it reaches the split function (I stepped through it), it causes an unhandled exception: access violation writing location 0xFEEEFEEE.
I am still new to OpenCV, so I am not used to dealing with unhandled exception errors.
Thanks
It sounds as if split expects there to be three usable instances of Mat in the rgb vector. But although you have sized the vector to hold three items, the Mats inside are default-constructed and empty.
Try putting three properly initialized Mats into the vector and run again.
Although this is an old question, I would like to share the solution that worked for me: instead of vector<Mat> rgb(3); I used Mat channels[3];. I realized there was something wrong with using a vector when I was not able to use split even on an image loaded with imread. Unfortunately, I cannot explain why this change works, but if someone can, that would be great.
I am writing C++ code with OpenCV where I'm trying to detect a chessboard on an image (loaded from a .jpg file) to warp the perspective of the image. When the chessboard is found by findChessboardCorners(), the rest of my code is working perfectly. But sometimes the function does not detect the pattern, and this behavior seems to be random.
For example, there is one image that works at its original resolution of 2560x1920, but not if I scale it down with GIMP first to 800x600. However, another image seems to do the opposite: it doesn't work at the original resolution, but does work scaled down.
Here's the bit of my code that does the detection:
Mat grayimg = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
if (grayimg.data == NULL) {
    printf("Unable to read image");
    return 0;
}
bool patternfound = findChessboardCorners(grayimg, patternsize, corners,
        CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_FAST_CHECK);
if (!patternfound) {
    printf("Chessboard not found");
    return 0;
}
Is there some kind of bug in OpenCV causing this behavior? Does anyone have any tips on how to pre-process the image so the function will work more consistently?
I already tried playing around with the parameters CALIB_CB_ADAPTIVE_THRESH, CALIB_CB_NORMALIZE_IMAGE, CALIB_CB_FILTER_QUADS and CALIB_CB_FAST_CHECK. I'm also having the same results when I pass in a color image.
Thanks in advance
EDIT: I'm using OpenCV version 2.4.1
I had a very hard time getting findChessboardCorners to work until I added a white border around the chessboard.
I found that as a hint somewhere in the more recent documentation.
Before adding the border, it would sometimes be impossible to recognize the chessboard, but with the white border it works every time.
Welcome to the joys of real-world computer vision :-)
You don't post any images, and findChessboardCorners is a bit too high-level to debug. I suggest displaying (in Octave, or MATLAB, or with more OpenCV code) the locations of the detected corners on top of the image, to see whether enough are detected. If none are, try running cvCornerHarris by itself on the image.
Sometimes the cause of the problem is excessive graininess in the image: try blurring it just a little and see if that helps.
Also, try removing the CALIB_CB_FAST_CHECK option and give it another go.
CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_FAST_CHECK is not the same as CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_FAST_CHECK, you should use | (binary or)