OpenCV C++ template recognition

I have a template and I want to know if the template is present in an image. Well I have googled a lot and came to the conclusion that I need to use cvMatchTemplate and cvMinMaxLoc.
Here is my code:
image = cvLoadImage("C:/images/flower.jpg",1);
templat = cvLoadImage("C:/images/flo.jpg",1);
image2=cvCreateImage( cvSize(image->width, image->height), IPL_DEPTH_8U, 1 );
result=cvCreateImage( cvSize(image->width, image->height), IPL_DEPTH_8U, 1 );
cvZero(result);
cvZero(image2);
cvCvtColor(image,image2,CV_BGR2GRAY);
cvMatchTemplate(image2, templat,result,CV_TM_CCORR_NORMED);
double min_val=0, max_val=0;
CvPoint min_loc, max_loc;
cvMinMaxLoc(result, &min_val, &max_val, &min_loc, &max_loc);
cvRectangle(image, max_loc, cvPoint(max_loc.x+templat->width,
max_loc.y+templat->height), cvScalar(0), 1);
cvShowImage( "src", image );
cvShowImage( "result image", result);
cvWaitKey(0);
My problem is that when I run the above code, a message box is displayed saying:
Unhandled exception at 0x747d812f in matching.exe: Microsoft C++ exception: cv::Exception at memory location 0x001ff6ec..
and in the black screen there is a message:
OpenCV Error: Sizes of input arguments do not match <image and template should have the same type> in unknown function, file..\..\..\..\ocv\opencv\scr\cv\cvtempl.cpp, line 356.
Please note that flower.jpg is a coloured image and flo.jpg is the gray scale of that image.
Any ideas of what is happening?

You need to convert both flower.jpg and flo.jpg to single-channel images. Even if flo.jpg is grayscale, you're loading it as a three-channel image. Also, the result image should be IPL_DEPTH_32F instead of IPL_DEPTH_8U.
Here is the correct code (untested):
IplImage* image = cvLoadImage("C:/images/flower.jpg", 1);
IplImage* templat = cvLoadImage("C:/images/flo.jpg", 1);
IplImage* image2 = cvCreateImage(cvSize(image->width, image->height), IPL_DEPTH_8U, 1);
IplImage* templat2 = cvCreateImage(cvSize(templat->width, templat->height), IPL_DEPTH_8U, 1);
cvCvtColor(image, image2, CV_BGR2GRAY);
cvCvtColor(templat, templat2, CV_BGR2GRAY);
int w = image->width - templat->width + 1;
int h = image->height - templat->height + 1;
IplImage* result = cvCreateImage(cvSize(w, h), IPL_DEPTH_32F, 1);
cvMatchTemplate(image2, templat2, result, CV_TM_CCORR_NORMED);
double min_val, max_val;
CvPoint min_loc, max_loc;
cvMinMaxLoc(result, &min_val, &max_val, &min_loc, &max_loc);
cvRectangle(image, max_loc, cvPoint(max_loc.x+templat->width,
max_loc.y+templat->height), cvScalar(0), 1);
cvShowImage("src", image);
cvShowImage("result image", result);
cvWaitKey(0);

Template matching assumes that both the image and the template have an identical number of channels and channel depth. The simplest way to do this is to load both of them in grayscale:
Mat I = imread("lena.png", 0);
Mat T = imread("template.png", 0);
Note: I would recommend using the OpenCV 2.0 C++ interface, so instead of cvLoadImage use imread. The old interface is no longer developed.
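For completeness, here is a minimal sketch (untested) of the same matching pipeline with the C++ interface, assuming the same file names as in the question:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Load both image and template as single-channel grayscale
    Mat img  = imread("C:/images/flower.jpg", 0);
    Mat tmpl = imread("C:/images/flo.jpg", 0);

    // matchTemplate fills a 32-bit float result map of size (W-w+1) x (H-h+1)
    Mat result;
    matchTemplate(img, tmpl, result, CV_TM_CCORR_NORMED);

    // For CCORR_NORMED the best match is the maximum of the result map
    double minVal, maxVal;
    Point minLoc, maxLoc;
    minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

    // Draw the best match on the image
    rectangle(img, maxLoc, Point(maxLoc.x + tmpl.cols, maxLoc.y + tmpl.rows), Scalar(0), 1);
    imshow("src", img);
    waitKey(0);
    return 0;
}
Since matchTemplate always reports some best location, deciding whether the template is actually present usually comes down to thresholding maxVal (for a normalized method, accepting only values close to 1.0); the exact threshold is something you would have to tune.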

Related

Stitching 2 images with overlapping area using opencv

I want to stitch 2 images using OpenCV (I don't want to use the Stitcher class). So far I've done keypoint detection, description, matching and warping.
Here are the input images and outputs: left, right, myOutput, stitcherClassOutput.
Here is my code after finding good matches with the SURF algorithm:
for (int j = 0; j < good_matches.size(); j++)
{
//-- Get the keypoints from the good matches
obj.push_back(keypoints1[good_matches[j].queryIdx].pt);
scene.push_back(keypoints2[good_matches[j].trainIdx].pt);
}
H = findHomography(Mat(scene), Mat(obj),match_mask, CV_RANSAC);
cv::Mat result;
warpPerspective(image2, result, H, cv::Size(image2.cols + image1.cols, image2.rows*2), INTER_CUBIC);
Mat final(Size(image2.cols * 2 + image2.cols, image2.rows * 2), CV_8UC3);
Mat roi1(final, Rect(0, 0, image1.cols, image1.rows));
Mat roi2(final, Rect(0, 0, result.cols, result.rows));
result.copyTo(roi2);
image1.copyTo(roi1);
imshow("Result", final);
So my question is: what should I add to my code so that my output looks more like the one from the Stitcher class?
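Part of the difference is that the Stitcher class does seam finding, exposure compensation and multi-band blending, while the code above simply pastes image1 over the warped image. As a rough sketch (untested, and assuming the variable names and 8-bit 3-channel images from the question), a simple feather blend over the overlap region would look something like this:
// Feather-blend image1 into `final` instead of overwriting the warped pixels.
// Where the warped image has no data (black), keep image1; where both overlap,
// blend with a weight that grows towards the right edge of image1.
for (int y = 0; y < image1.rows; ++y)
{
    for (int x = 0; x < image1.cols; ++x)
    {
        Vec3b warped = result.at<Vec3b>(y, x);
        Vec3b left   = image1.at<Vec3b>(y, x);
        if (warped[0] == 0 && warped[1] == 0 && warped[2] == 0)
        {
            final.at<Vec3b>(y, x) = left;                 // no warped pixel here: keep left image
        }
        else
        {
            float alpha = (float)x / image1.cols;         // 0 at the left edge, ~1 at the seam
            for (int c = 0; c < 3; ++c)
                final.at<Vec3b>(y, x)[c] =
                    saturate_cast<uchar>((1.0f - alpha) * left[c] + alpha * warped[c]);
        }
    }
}
This only smooths the transition; for results really close to the Stitcher class you would still need exposure compensation and a proper seam estimate (see the detail::Blender and detail::ExposureCompensator classes).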

OpenCV C++ template/pattern matching, scale and rotation invariant

I want to see if a template is present in an image using OpenCV and C++. However, due to the different distances and positions at which the images are taken, the match does not occur correctly.
Here is my code:
IplImage* image = cvLoadImage("C:/images/Photo0734.jpg", 1);
IplImage* templat = cvLoadImage("C:/images/templatecoin.jpg", 1);
int percent = 25;
// declare a destination IplImage object with the correct size, depth and channels
IplImage* image3 = cvCreateImage(cvSize((int)((image->width*percent)/100),
                                        (int)((image->height*percent)/100)),
                                 image->depth, image->nChannels);
//use cvResize to resize source to a destination image
cvResize(image, image3);
IplImage* image2 = cvCreateImage(cvSize(image3->width, image3->height),
IPL_DEPTH_8U, 1);
IplImage* templat2 = cvCreateImage(cvSize(templat->width,
templat->height), IPL_DEPTH_8U, 1);
cvCvtColor(image3, image2, CV_BGR2GRAY);
cvCvtColor(templat, templat2, CV_BGR2GRAY);
int w = image3->width - templat->width + 1;
int h = image3->height - templat->height + 1;
result = cvCreateImage(cvSize(w, h), IPL_DEPTH_32F, 1);
cvMatchTemplate(image2, templat2, result, CV_TM_CCORR_NORMED);
double min_val, max_val;
CvPoint min_loc, max_loc;
cvMinMaxLoc(result, &min_val, &max_val, &min_loc, &max_loc);
cvRectangle(image3, max_loc, cvPoint(max_loc.x+templat->width,
max_loc.y+templat->height), cvScalar(0,1,1), 1);
cvShowImage("src", image3);
//cvShowImage("result image", result);
cvWaitKey(0);
Please note that I am unable to use "Mat". Is it possible to use IplImage* and make the code invariant to scaling and rotation? Please help me.
Have a look at these:
SIFT Wiki
SIFT example
OpenCV SIFT documentation
I think they can be useful for you.
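If you have to stay with IplImage*, a coarse workaround (a sketch only, untested, and not rotation invariant) is to run cvMatchTemplate at several template scales and keep the best score. It assumes the image2 and templat2 variables from your code; the scale range is just an illustrative assumption:
// Brute-force matching over a few template scales.
double bestVal = -1.0;
CvPoint bestLoc = cvPoint(0, 0);
int bestW = 0, bestH = 0;

for (double scale = 0.5; scale <= 1.5; scale += 0.1)
{
    int tw = (int)(templat2->width * scale);
    int th = (int)(templat2->height * scale);
    if (tw < 8 || th < 8 || tw > image2->width || th > image2->height)
        continue;

    // Resize the template to the current scale
    IplImage* scaled = cvCreateImage(cvSize(tw, th), IPL_DEPTH_8U, 1);
    cvResize(templat2, scaled);

    IplImage* res = cvCreateImage(cvSize(image2->width - tw + 1,
                                         image2->height - th + 1),
                                  IPL_DEPTH_32F, 1);
    cvMatchTemplate(image2, scaled, res, CV_TM_CCORR_NORMED);

    double minVal, maxVal;
    CvPoint minLoc, maxLoc;
    cvMinMaxLoc(res, &minVal, &maxVal, &minLoc, &maxLoc);

    // Keep the scale with the strongest normalized correlation
    if (maxVal > bestVal)
    {
        bestVal = maxVal;
        bestLoc = maxLoc;
        bestW = tw;
        bestH = th;
    }
    cvReleaseImage(&scaled);
    cvReleaseImage(&res);
}

// Draw the best match on the grayscale scene
cvRectangle(image2, bestLoc,
            cvPoint(bestLoc.x + bestW, bestLoc.y + bestH),
            cvScalar(255), 2);
Rotation invariance is much harder with plain template matching; that is where feature-based approaches such as SIFT really pay off.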

cvBlob/OpenCV: Why is my output variable empty?

When doing:
IplImage blobimg = image;
IplImage *labelImg=cvCreateImage(cvGetSize(&blobimg), IPL_DEPTH_LABEL, 1);
IplImage *test=cvCreateImage(cvGetSize(&blobimg), IPL_DEPTH_8U, 3);
unsigned int result=cvLabel(&blobimg, labelImg, blobs);
cvRenderBlobs(labelImg, blobs, &blobimg,test,CV_BLOB_RENDER_BOUNDING_BOX);
Mat imgMat(test);
imshow("Depth", imgMat);
I notice that my test variable is empty:
I think I have to do this instead:
cvRenderBlobs(labelImg, blobs, &blobimg,&blobimg,CV_BLOB_RENDER_BOUNDING_BOX);
But cvRenderBlobs' destImg has to have 3 channels and IPL_DEPTH_8U, and my image has only 1 channel since it's a grayscale image.
Can someone tell me why this is and how I can fix it?
Edit
Where image comes from:
Mat *depthImage = new Mat(480, 640, CV_8UC1, Scalar::all(0));
Mat image = *depthImage;
I'll guess here, but I haven't often seen instances of IplImage that are not actually pointers. Are you sure that image, wherever it's coming from, isn't also a pointer to an IplImage struct?
IplImage *blobimg = image;
I use this portion of code in my project and it works, see if it can help:
//BYTE* blobMap = ... blobMap holds an image
CvMat mat = cvMat( HEIGHT, WIDTH, CV_8UC1, blobMap);
IplImage *img = cvCreateImage(cvSize(WIDTH, HEIGHT), IPL_DEPTH_8U, 1);
cvGetImage(&mat, img);
cvThreshold(img, img, 10, 255, CV_THRESH_BINARY);
IplImage *labelImg = cvCreateImage(cvGetSize(img),IPL_DEPTH_LABEL,1);
CvBlobs blobs;
unsigned int result = cvLabel(img, labelImg, blobs);
cvFilterByArea(blobs, 1000, 1680*HEIGHT);
IplImage *imgOut = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 3);
cvRenderBlobs(labelImg, blobs, img, imgOut);
cvNamedWindow("test", 1);
cvShowImage("test", imgOut);
cvWaitKey(0);
cvDestroyWindow("test");
I also don't like the way you pass the Mat to an IplImage; are you sure that your input image (blobimg) is OK?
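As for the 3-channel requirement mentioned in the question: a minimal sketch (untested) is to render into a converted copy of the grayscale image rather than into blobimg itself:
// cvRenderBlobs wants an 8-bit, 3-channel destination, so convert the
// single-channel gray image first and draw the bounding boxes into the copy.
IplImage* gray3 = cvCreateImage(cvGetSize(&blobimg), IPL_DEPTH_8U, 3);
cvCvtColor(&blobimg, gray3, CV_GRAY2BGR);
cvRenderBlobs(labelImg, blobs, &blobimg, gray3, CV_BLOB_RENDER_BOUNDING_BOX);
cvShowImage("Depth", gray3);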

Can't get an output result using cvCornerHarris()

I just want to try the OpenCV function cvCornerHarris. Here is my C++ code:
//image file
char imagePath[256] = "./images/lena512color.tiff";
printf("%s\n", imagePath);
IplImage* srcImg = cvLoadImage(imagePath, 1);
if(NULL == srcImg){
printf("Can not open image file(s).\n");
return -1;
}
IplImage* srcImgGry = cvCreateImage(cvGetSize(srcImg), IPL_DEPTH_8U, 1);
cvCvtColor(srcImg, srcImgGry, CV_RGB2GRAY);
// Canny and Harris expect grayscale (8-bit) input.
// And output of harris image must be 32-bit float .
IplImage* harrisImg = cvCreateImage(cvGetSize(srcImg), IPL_DEPTH_32F, 1);
IplImage* cannyImg = cvCreateImage(cvGetSize(srcImg), IPL_DEPTH_8U, 1);
//// Corner detection using Harris-corner
cvCornerHarris(srcImgGry, harrisImg, 5, 5, 0.04);
cvCanny(srcImgGry, cannyImg, 50, 100, 3);
// (5)Display the result
cvNamedWindow ("Img", CV_WINDOW_AUTOSIZE);
cvShowImage ("Img", srcImgGry);
cvNamedWindow ("Harris", CV_WINDOW_AUTOSIZE);
cvShowImage ("Harris", harrisImg);
cvNamedWindow ("Canny", CV_WINDOW_AUTOSIZE);
cvShowImage ("Canny", cannyImg);
cvWaitKey (0);
cvDestroyWindow ("Harris");
cvDestroyWindow ("Img");
cvReleaseImage (&srcImg);
cvReleaseImage (&srcImgGry);
cvReleaseImage (&harrisImg);
cvReleaseImage (&cannyImg);
I can get the expected output image from cvCanny (cannyImg), but the output image from cvCornerHarris (harrisImg) is a black image with nothing on it.
Please help explain how to use cvCornerHarris. Thanks!
It's all about parameters! People tend to believe that there are magical parameters that will work for all types of images and scenarios. Unfortunately, this doesn't happen in the real world.
The parameters used to process one image may not produce the same level of results when applied to other types of images. Now, consider the following code:
IplImage* colored = cvLoadImage("house.jpg", CV_LOAD_IMAGE_UNCHANGED);
if (!colored)
{
printf("Can not open image file(s).\n");
return -1;
}
IplImage* gray = cvCreateImage(cvGetSize(colored), IPL_DEPTH_8U, 1);
cvCvtColor(colored, gray, CV_RGB2GRAY);
IplImage* harris = cvCreateImage(cvGetSize(colored), IPL_DEPTH_32F, 1);
cvCornerHarris(gray, harris, 3, 11, 0.07);
cvNamedWindow("Harris", CV_WINDOW_AUTOSIZE);
cvShowImage ("Harris", harris);
As you can see below, these parameters produced a decent result (from my point of view). However, keep in mind that they probably won't work for you. Bad parameters will produce a black image (i.e. will detect nothing), as you have observed in your tests.
The answer is: take a look at the docs to see what those parameters mean and how they influence the result. Most importantly, play with them until they produce images that satisfy your needs.
Input image:
(source: 123desenhosparacolorir.com)
Output:
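One more thing worth checking (an assumption about the display step, not about the detection itself): cvShowImage treats 32-bit float pixels as values in [0, 1], so even a correct Harris response can show up as a flat image. A small, untested sketch to stretch the response before displaying it:
// Rescale the 32F Harris response to [0, 1] so cvShowImage can display it.
double minVal, maxVal;
cvMinMaxLoc(harris, &minVal, &maxVal);
if (maxVal > minVal)
    cvConvertScale(harris, harris, 1.0 / (maxVal - minVal), -minVal / (maxVal - minVal));
cvShowImage("Harris", harris);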

cvDCT in OpenCV

I was wondering how to use the cvDCT function in OpenCV C++. Does anyone have an example?
The manual explains both the parameters and the inner workings of the function.
You can use the following code:
IplImage* src0 = cvLoadImage(strPath, CV_LOAD_IMAGE_GRAYSCALE);
IplImage* src = cvCreateImage(cvGetSize(src0), IPL_DEPTH_32F, 1);
cvConvert(src0, src);
IplImage* dst = cvCreateImage(cvGetSize(src0), IPL_DEPTH_32F, 1);
cvDCT(src, dst, 0);
//cvDCT (dst, src, 1);
//cvConvert(src, src0);
cvShowImage("Source", src0);