Image comparison method with C++ and OpenCV

I am new to OpenCV. I would like to know if we can compare two images (one made in Photoshop, i.e. the source image, and the other taken from the camera) and find out whether they are the same or not.
I tried to compare the images using template matching, but it does not work. Can you tell me what other procedures we can use for this kind of comparison?

Comparison of images can be done in different ways, depending on which purpose you have in mind:
If you just want to compare whether two images are approximately equal (up to a few luminance differences), but with the same perspective and camera view, you can simply compute a pixel-to-pixel squared difference, per color band. If the sum of squares over the two images is smaller than a threshold, the images match; otherwise they don't (a code sketch of this test follows these cases).
If one image is a black-and-white variant of the other, conversion of the color image is needed first (see e.g. http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale). Afterwards, simply perform the step above.
If one image is a subimage of the other, you need to perform registration of the two images. This means determining the scale, possible rotation and XY-translation necessary to lay the subimage on the larger image (for methods to register images, see: Pluim, J.P.W., Maintz, J.B.A., Viergever, M.A., Mutual-information-based registration of medical images: a survey, IEEE Transactions on Medical Imaging, 2003, Volume 22, Issue 8, pp. 986–1004).
If you have perspective differences, you need an algorithm for deskewing one image to match the other as well as possible. For ways of doing deskewing, see for example http://javaanpr.sourceforge.net/anpr.pdf from page 15 onwards.
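A minimal sketch of that squared-difference test (the function name and the threshold are illustrative, not part of the original answer):

#include <opencv2/opencv.hpp>

// Returns true when the sum of squared per-pixel, per-band differences
// stays below the (caller-chosen) threshold.
bool imagesMatch(const cv::Mat& a, const cv::Mat& b, double threshold)
{
    if (a.size() != b.size() || a.type() != b.type())
        return false;                        // not comparable without registration

    cv::Mat diff;
    cv::absdiff(a, b, diff);                 // per-pixel, per-channel |a - b|
    diff.convertTo(diff, CV_32F);
    diff = diff.mul(diff);                   // squared differences

    cv::Scalar sums = cv::sum(diff);         // one sum per color band
    double ssd = sums[0] + sums[1] + sums[2];
    return ssd < threshold;
}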
Good luck!

You should try SIFT. You apply SIFT to your marker (the image saved in memory) and you get some descriptors (keypoints robust enough to be recognized again). Then you can use the FAST algorithm on the camera frames to find the corresponding keypoints of the marker in the camera image.
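A short sketch of that detection and matching step (assuming OpenCV 4.4+, where cv::SIFT ships in the main module; older builds need opencv_contrib's xfeatures2d):

#include <opencv2/opencv.hpp>

// Detect SIFT keypoints/descriptors in the stored marker and a camera frame,
// then keep matches that pass Lowe's ratio test.
void matchMarker(const cv::Mat& marker, const cv::Mat& frame,
                 std::vector<cv::DMatch>& goodMatches)
{
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kpMarker, kpFrame;
    cv::Mat descMarker, descFrame;
    sift->detectAndCompute(marker, cv::noArray(), kpMarker, descMarker);
    sift->detectAndCompute(frame,  cv::noArray(), kpFrame,  descFrame);

    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(descMarker, descFrame, knn, 2);   // 2 nearest neighbours
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
            goodMatches.push_back(m[0]);               // ratio test survivor
}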
You have many threads about this topic:
How to get a rectangle around the target object using the features extracted by SIFT in OpenCV
How to search the image for an object with SIFT and OpenCV?
OpenCV - Object matching using SURF descriptors and BruteForceMatcher
Good luck

Image classification in video stream with contours with Opencv

I need your help with this problem, please. I want to create a program to differentiate between two forms (2 images) with a camera in real time. I found some methods, but I'm not sure they will work, because I want the detection to remain feasible if the object is inclined by 90 or 180 degrees, for example. I am supposed to use machine learning for this problem, but I am open to any proposition; also, I do not have many images in the database.
Here are the methods I found, but I'm not sure they will work:
1. Apply a Canny filter to extract contours.
2. Use feature extractors such as SIFT, Fourier descriptors, Haralick features, or the Hough transform to extract more details, which can be summarised in a short vector.
3. Then train an SVM or ANN with this vector.
The goal is to detect two cases: open or closed.
Also, I don't know whether contours are the best way to solve this problem, because the background changes a lot.
The original images are valves with different shapes; here is an example:
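For what it's worth, a minimal sketch of steps 1-3, with Hu moments swapped in as the step-2 feature extractor (they are rotation invariant, which matters for the 90/180-degree inclinations; they are not one of the extractors named above). All names, thresholds and file paths are illustrative:

#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>
#include <cmath>

// Steps 1+2: Canny edges, then a 7-element rotation-invariant Hu-moment vector.
cv::Mat huFeatures(const cv::Mat& gray)
{
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);
    cv::Moments m = cv::moments(edges, true);
    double hu[7];
    cv::HuMoments(m, hu);
    cv::Mat f(1, 7, CV_32F);
    for (int i = 0; i < 7; ++i)   // log-scale the moments, keeping the sign
        f.at<float>(0, i) = (float)std::copysign(
            std::log10(std::abs(hu[i]) + 1e-30), hu[i]);
    return f;
}

// Step 3: train an SVM on rows of Hu features.
// samples: one feature row per image (CV_32F); labels: open=1, closed=0 (CV_32S).
void trainValveClassifier(const cv::Mat& samples, const cv::Mat& labels)
{
    cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
    svm->setKernel(cv::ml::SVM::RBF);
    svm->train(samples, cv::ml::ROW_SAMPLE, labels);
    svm->save("valve_svm.yml");
}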

OpenCV 3D Stitching Panorama

I have 7 images from GoPro cameras (5 cameras in a rig, one for the top, and one for the bottom). I want to stitch all these images together to create a 3D panorama. I have been able to stitch the 5 rig images by using OpenCV's stitching_detailed.cpp. Link to file:
https://raw.githubusercontent.com/opencv/opencv/master/samples/cpp/stitching_detailed.cpp
But I'm not sure how to stitch the top and bottom (for me, right now, the bottom is not that important, but I have to do something about the top). Any idea how this can be done? Please let me know if I can use the same stitching_detailed.cpp to stitch the top as well.
The following link contains the images I'm using. It also contains the results I got from stitching the images in the rig.
https://drive.google.com/folderview?id=0B_Bl8s2ePunQcnBaM3A4WDlDcXM&usp=sharing
So first you need to understand how stitching_detailed.cpp works.
1. Feature keypoints are detected in each image using SURF/ORB/SIFT or similar. Then, for each image pair, the best feature matches are found, the homography matrix is calculated, and the number of inliers for each pair is obtained:
(*finder)(img, features[i]);
BestOf2NearestMatcher matcher(try_cuda, match_conf);
matcher(features, pairwise_matches);
2. All these pairs are passed into leaveBiggestComponent to obtain the largest set of images which belong to a panorama.
3. Camera parameters for each image are calculated from the set obtained above, and warping and blending is done.
Step 1 finds a homography for each pair and generates the number of inliers. Step 2 removes every image pair whose confidence factor (number of inliers) is less than the threshold. Since the cam7 image has so few features and almost no overlapping region with any other image, it gets rejected in the leaveBiggestComponent step.
You can see the features and matching at this link (I have used ORB features):
https://drive.google.com/open?id=0B2wDitsftUG9QnhCWFIybENkbDA
Also, I have not changed the image size, but I guess reducing it a bit (by half, maybe) would yield more feature points.
What you can do is reduce the time interval at which you take frames for stitching. To obtain good results, there must be at least a 40% overlapping region between images.
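If full control over stitching_detailed.cpp is not required, OpenCV's high-level cv::Stitcher wraps the same pipeline. A minimal sketch (cv::Stitcher::create is the OpenCV 4 spelling, 3.x builds use Stitcher::createDefault; the file names are placeholders):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    std::vector<cv::Mat> imgs;
    for (int i = 1; i <= 5; ++i)   // the five rig images (placeholder names)
        imgs.push_back(cv::imread("cam" + std::to_string(i) + ".jpg"));

    cv::Mat pano;
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);
    cv::Stitcher::Status status = stitcher->stitch(imgs, pano);
    if (status != cv::Stitcher::OK) {
        // cam7-style images with too little overlap typically fail here,
        // for the same reason leaveBiggestComponent drops them.
        std::cerr << "stitching failed, status " << int(status) << "\n";
        return 1;
    }
    cv::imwrite("pano.jpg", pano);
    return 0;
}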

Stitching images from 2 overlapping cameras stationary relative to each other

I'm new to CV, and trying to stitch together video from two cameras that are stationary relative to each other. The details:
The cameras are one beside the other and I can adjust the rotation angle between them. The cameras will be moving with respect to the world, so the scene will be changing.
The amount of frames to be stitched is roughly 300 (each frame is composed of two pictures, one from each camera).
I don't need to do the stitching in real time, but I want to do it as fast as possible using the fact that I know the relative positions of the cameras. Resolution of each picture is relatively high, around 900x600.
Right now I'm at the stage where I have code to stitch 2 single pictures, courtesy of http://ramsrigoutham.com/2012/11/22/panorama-image-stitching-in-opencv/
The main stages are (sketched in code after this list):
1. Using a SURF detector to find SURF descriptors in both images
2. Matching the SURF descriptors using a FLANN matcher
3. Post-processing the matches to find good matches
4. Using RANSAC to estimate the homography matrix from the matched SURF descriptors
5. Warping the images based on the homography matrix
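A condensed sketch of those five stages, assuming opencv_contrib's cv::xfeatures2d::SURF is available (the Hessian threshold and the simple distance filter are illustrative):

#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <algorithm>

cv::Mat stitchPair(const cv::Mat& left, const cv::Mat& right)
{
    // Stages 1-2: SURF detection and FLANN matching
    cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create(400);
    std::vector<cv::KeyPoint> kpL, kpR;
    cv::Mat descL, descR;
    surf->detectAndCompute(left,  cv::noArray(), kpL, descL);
    surf->detectAndCompute(right, cv::noArray(), kpR, descR);

    cv::FlannBasedMatcher matcher;
    std::vector<cv::DMatch> matches;
    matcher.match(descR, descL, matches);   // query = right, train = left

    // Stage 3: keep only the stronger matches (simple distance filter)
    double minDist = 1e9;
    for (const auto& m : matches) minDist = std::min(minDist, (double)m.distance);
    std::vector<cv::Point2f> ptsR, ptsL;
    for (const auto& m : matches)
        if (m.distance < 3 * minDist) {
            ptsR.push_back(kpR[m.queryIdx].pt);
            ptsL.push_back(kpL[m.trainIdx].pt);
        }

    // Stage 4: homography via RANSAC; stage 5: warp right into left's frame
    cv::Mat H = cv::findHomography(ptsR, ptsL, cv::RANSAC);
    cv::Mat pano;
    cv::warpPerspective(right, pano, H,
                        cv::Size(left.cols + right.cols, left.rows));
    left.copyTo(pano(cv::Rect(0, 0, left.cols, left.rows)));
    return pano;
}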
My question is: How can I optimize the process based on the fact that I already know the camera positions?
Ideally I would like to do some initial calculation once to find the transform between the camera perspectives, and then reuse it. But not sure with my rudimentary CV knowledge if this is indeed possible, and what transform I could use if so.
I understand that calculating the homography matrix once and reusing it won't work, since the scene is changing.
Two other possibilities:
I found a similar case (but with a stationary scene) where the transform is computed once and reused. Which transform is this, and could it work in my case?
The other possibility I found is to use the initial knowledge to find the overlapping region between the two pictures, and ignore the rest of each picture to save time. Relevant thread
Any help would be greatly appreciated!
Ron

Corresponding values between two images in OpenCV / C++

I am new to image processing and OpenCV. I have two images. I want to find the corresponding values in image 2 for image 1, and then show them. Is there any function in OpenCV to find corresponding values between images?
Thanks in advance.
Mat corrVals;
bitwise_and(image2, image1>0, corrVals);
image1>0 will create a temporary binary image with values 0 and 255. Then the only thing you need is to perform an AND operation between the pixels of your images and store the result somewhere. This is done by bitwise_and.
This is similar to the approach suggested by @Mailerdaimon but uses much cheaper operations.
You can threshold your image1 such that all values you want are 1 and all others are 0. Then you multiply image1 with image2:
cv::multiply(image1, image2, result, scale, dtype)
This will return an image with all values greater than zero from image2 that are marked in image1.
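A compact, self-contained version of this masking idea, assuming 8-bit single-channel images (the function name is illustrative):

#include <opencv2/opencv.hpp>

// Build a 0/1 mask from image1, then multiply so that only the pixels
// marked in image1 survive from image2.
cv::Mat maskedValues(const cv::Mat& image1, const cv::Mat& image2)
{
    cv::Mat mask, result;
    cv::threshold(image1, mask, 0, 1, cv::THRESH_BINARY); // >0 becomes 1
    cv::multiply(image2, mask, result);                   // keep marked pixels
    return result;
}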
It is hard to say without looking at your images. This is a well-studied problem in computer vision, and OpenCV contains several algorithms for it. The problem you're looking at can be very easy or very hard, depending on:
- your images: are they normal images? just shapes? binary?
- where in the images the corresponding pixels lie
- how fast this needs to run
- how much variation there is between images: is it exactly the same pixel value?
- is there camera movement?
- is there variation in illumination?
You can start by looking at stereo matching and optical flow inside OpenCV.

stitching aerial images

I am trying to stitch 2 aerial images together with very little overlap, probably <500 px. The images have 3600x2100 resolution. I am using the OpenCV library for this task.
Here is my approach:
1. Find feature points and match points between the two images.
2. Find homography between two images
3. Warp one of the images using the homography
4. Stitch the two images
Right now I am trying to get this to work with two images. I am having trouble with step 3 and possibly step 2. I used findHomography() from the OpenCV library to get the homography between the two images. Then I called warpPerspective() on one of my images using that homography.
The problem with this approach is that the transformed image is all distorted. It also seems to transform only a certain part of the image. I have no idea why it is not transforming the whole image.
Can someone give me some advice on how I should approach this problem? Thanks
In the results that you have posted, I can see that you have at least one keypoint mismatch. If you use findHomography(src, dst, 0), it will mess up your homography. You should use findHomography(src, dst, CV_RANSAC) instead.
You can also try to use warpAffine instead of warpPerspective.
Edit: In the results that you posted in the comments to your question, I had the impression that the matching was quite stable. That means you should be able to get good results for this example as well. Since you mostly seem to have to deal with translation, you could try to filter out the outliers with the following sketched algorithm:
calculate the average (or median) motion vector x_avg
calculate the normalized dot product <x_avg, x_match>
discard x_match if the dot product is smaller than a threshold
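A minimal sketch of that filter; kp1, kp2 and matches are assumed to come from your matching step, and the 0.9 threshold is illustrative:

#include <opencv2/opencv.hpp>
#include <cmath>

// Discard matches whose motion vector deviates from the average direction
// (normalized dot product <x_avg, x_match> below a threshold).
std::vector<cv::DMatch> filterByDirection(const std::vector<cv::KeyPoint>& kp1,
                                          const std::vector<cv::KeyPoint>& kp2,
                                          const std::vector<cv::DMatch>& matches,
                                          float threshold = 0.9f)
{
    if (matches.empty()) return {};

    cv::Point2f avg(0.f, 0.f);              // average motion vector x_avg
    for (const auto& m : matches)
        avg += kp2[m.trainIdx].pt - kp1[m.queryIdx].pt;
    avg *= 1.0f / (float)matches.size();
    float avgNorm = std::sqrt(avg.dot(avg));

    std::vector<cv::DMatch> kept;
    for (const auto& m : matches) {
        cv::Point2f v = kp2[m.trainIdx].pt - kp1[m.queryIdx].pt;
        float n = std::sqrt(v.dot(v));
        if (n > 0 && avgNorm > 0 && avg.dot(v) / (avgNorm * n) >= threshold)
            kept.push_back(m);
    }
    return kept;
}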
To make it work for images with smaller overlap, you would have to look at the detector, descriptors and matches. You do not specify which descriptors you work with, but I would suggest using SIFT or SURF descriptors and the corresponding detectors. You should also set the detector parameters to make a dense sampling (i.e., try to detect more features).
You can refer to this answer which is slightly related: OpenCV - Image Stitching
To stitch images using a homography, the most important thing is finding correspondence points in both images. The fewer the outliers in the correspondence points, the better the generated homography.
Using robust techniques such as RANSAC with OpenCV's findHomography() function (use CV_RANSAC as the option) will still generate a reasonable homography, provided the percentage of inliers is higher than the percentage of outliers. Also make sure that there are at least 4 inliers among the correspondence points passed to the findHomography function.
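A short sketch of that call with the inlier mask checked explicitly (cv::RANSAC is the C++ spelling of CV_RANSAC; the point vectors are assumed to come from your matching step):

#include <opencv2/opencv.hpp>

// Estimate a homography robustly and reject it when RANSAC keeps
// fewer than 4 inliers.
bool robustHomography(const std::vector<cv::Point2f>& src,
                      const std::vector<cv::Point2f>& dst,
                      cv::Mat& H)
{
    if (src.size() < 4 || src.size() != dst.size())
        return false;                          // findHomography needs >= 4 pairs
    std::vector<uchar> inlierMask;
    H = cv::findHomography(src, dst, cv::RANSAC, 3.0, inlierMask);
    return !H.empty() && cv::countNonZero(inlierMask) >= 4;
}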