How to match two different images in C++

I'm trying to reconstruct a 3D model of an anatomical structure, so I want to match key points in a pair of X-ray images. I tried the following code, but it didn't give correct results.
Mat tmp = cv::imread( "1.jpg", 1 );
Mat in = cv::imread( "2.jpg", 1 );
cv::SiftFeatureDetector detector( 0.0001, 1.0 );
cv::SiftDescriptorExtractor extractor;
vector<KeyPoint> keypoints1, keypoints2;
detector.detect( tmp, keypoints1 );
detector.detect( in, keypoints2 );
Mat feat1,feat2;
drawKeypoints(tmp,keypoints1,feat1,Scalar(255, 255, 255),DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
drawKeypoints(in,keypoints2,feat2,Scalar(255, 255, 255),DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
imwrite( "feat1.bmp", feat1 );
imwrite( "feat2.bmp", feat2 );
int key1 = keypoints1.size();
int key2 = keypoints2.size();
printf("Keypoint1=%d \nKeypoint2=%d", key1, key2);
Mat descriptor1,descriptor2;
extractor.compute( tmp, keypoints1, descriptor1 );
extractor.compute( in, keypoints2, descriptor2 );
BruteForceMatcher<L2<float> > matcher;
std::vector< DMatch > matches;
matcher.match( descriptor1, descriptor2, matches );
double max_dist = 0; double min_dist = 100;
Mat img_matches;
for( int i = 0; i < descriptor1.rows; i++ )
{
    double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptor1.rows; i++ )
{
    if( matches[i].distance <= max(2*min_dist, 0.03) )
    {
        good_matches.push_back( matches[i] );
    }
}
drawMatches( tmp, keypoints1, in, keypoints2,
good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
namedWindow("SIFT", CV_WINDOW_AUTOSIZE );
imshow("SIFT", img_matches);
imwrite("sift_1.jpg",img_matches);
waitKey(0);
return 0;
These are the two images:
This is what I got from this code:
This is very close to my expected result, but it also matches some wrong points. It shows only a few points, and I need more points.

Feature detectors like SIFT or SURF are designed to detect and match features in images that have a rich and distinctive texture. They are not designed to work with very sparse binary inputs like your examples.
You might want to try them on the original X-rays, which provide more image context.
Alternatively, you might try a more direct global alignment model between the images.
Check out this link for some options for alignment with the findTransformECC() function.
Also see the article here.
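For example, a minimal ECC alignment sketch could look like the following. This is untested on the poster's data and assumes OpenCV 3.x or newer; the file names "1.jpg" and "2.jpg" are taken from the question.
#include <cstdio>
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/video.hpp>   // findTransformECC
int main()
{
    cv::Mat tmpl = cv::imread("1.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat input = cv::imread("2.jpg", cv::IMREAD_GRAYSCALE);
    if (tmpl.empty() || input.empty()) return 1;
    // 2x3 affine warp, initialised to the identity.
    cv::Mat warp = cv::Mat::eye(2, 3, CV_32F);
    // Iterate until the ECC correlation coefficient stops improving.
    cv::TermCriteria criteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 200, 1e-6);
    double ecc = cv::findTransformECC(tmpl, input, warp, cv::MOTION_AFFINE, criteria);
    std::printf("ECC coefficient: %f\n", ecc);
    // Bring the input image into the template's frame with the recovered warp.
    cv::Mat aligned;
    cv::warpAffine(input, aligned, warp, tmpl.size(), cv::INTER_LINEAR + cv::WARP_INVERSE_MAP);
    cv::imwrite("aligned.jpg", aligned);
    return 0;
}
A homography (MOTION_HOMOGRAPHY) or a simple translation (MOTION_TRANSLATION) can be used instead of MOTION_AFFINE, depending on how the two X-rays were acquired.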

You might also try ITK; it is designed for image registration with 2D or 3D images.

Related

Image Stitching warpPerspective size issue

I am trying to stitch two images. The tech stack is OpenCV C++ on VS 2017.
The images that I have considered are:
image1 of code:
and
image2 of code:
I have found the homography matrix using this code, with image1 and image2 as given above.
int minHessian = 400;
Ptr<SURF> detector = SURF::create(minHessian);
vector< KeyPoint > keypoints_object, keypoints_scene;
detector->detect(gray_image1, keypoints_object);
detector->detect(gray_image2, keypoints_scene);
Mat img_keypoints;
drawKeypoints(gray_image1, keypoints_object, img_keypoints);
imshow("SURF Keypoints", img_keypoints);
Mat img_keypoints1;
drawKeypoints(gray_image2, keypoints_scene, img_keypoints1);
imshow("SURF Keypoints1", img_keypoints1);
//-- Step 2: Calculate descriptors (feature vectors)
Mat descriptors_object, descriptors_scene;
detector->compute(gray_image1, keypoints_object, descriptors_object);
detector->compute(gray_image2, keypoints_scene, descriptors_scene);
//-- Step 3: Matching descriptor vectors using FLANN matcher
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(DescriptorMatcher::FLANNBASED);
vector< DMatch > matches;
matcher->match(descriptors_object, descriptors_scene, matches);
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for (int i = 0; i < descriptors_object.rows; i++)
{
double dist = matches[i].distance;
if (dist < min_dist) min_dist = dist;
if (dist > max_dist) max_dist = dist;
}
printf("-- Max dist: %f \n", max_dist);
printf("-- Min dist: %f \n", min_dist);
//-- Use only "good" matches (i.e. whose distance is less than 3*min_dist )
vector< DMatch > good_matches;
Mat result, H;
for (int i = 0; i < descriptors_object.rows; i++)
{
if (matches[i].distance < 3 * min_dist)
{
good_matches.push_back(matches[i]);
}
}
Mat img_matches;
drawMatches(gray_image1, keypoints_object, gray_image2, keypoints_scene, good_matches, img_matches, Scalar::all(-1),
Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
imshow("Good Matches", img_matches);
std::vector< Point2f > obj;
std::vector< Point2f > scene;
cout << "Good Matches detected" << good_matches.size() << endl;
for (int i = 0; i < good_matches.size(); i++)
{
//-- Get the keypoints from the good matches
obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
}
// Find the Homography Matrix for img 1 and img2
H = findHomography(obj, scene, RANSAC);
The next step would be to warp these. I used the perspectiveTransform function to find the corners of image1 on the stitched image, and used that as the number of columns in the Mat result. This is the code I wrote:
vector<Point2f> imageCorners(4);
imageCorners[0] = Point(0, 0);
imageCorners[1] = Point(image1.cols, 0);
imageCorners[2] = Point(image1.cols, image1.rows);
imageCorners[3] = Point(0, image1.rows);
vector<Point2f> projectedCorners(4);
perspectiveTransform(imageCorners, projectedCorners, H);
Mat result;
warpPerspective(image1, result, H, Size(projectedCorners[2].x, image1.rows));
Mat half(result, Rect(0, 0, image2.cols, image2.rows));
image2.copyTo(half);
imshow("result", result);
I am getting a stitched output of these images, but the issue is the size of the result. I compared the result of the code above with the two original images combined manually, and the result from the code is larger. What should I do to make it the right size? The ideal size should be image1.cols + image2.cols - the overlapping width.
warpPerspective(image1, result, H, Size(projectedCorners[2].x, image1.rows));
This line seems problematic.
You should choose the extremum points for the size.
Rect rec = boundingRect(projectedCorners);
warpPerspective(image1, result, H, rec.size());
But you will lose parts of the image if rec.tl() falls on negative coordinates, so you should shift the homography matrix so that the warped result lands in the first quadrant (see the sketch below).
See Warping to perspective section of my answer to Fast and Robust Image Stitching Algorithm for many images in Python.
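To make the idea concrete, here is a sketch of the shift, written against the variables image1, image2 and H from the question rather than as a drop-in replacement:
// Project image1's corners with H, then add image2's corners, since
// both images have to fit on the final canvas.
std::vector<cv::Point2f> corners = {
    {0.f, 0.f},
    {(float)image1.cols, 0.f},
    {(float)image1.cols, (float)image1.rows},
    {0.f, (float)image1.rows}
};
std::vector<cv::Point2f> projected;
cv::perspectiveTransform(corners, projected, H);
projected.push_back(cv::Point2f(0.f, 0.f));
projected.push_back(cv::Point2f((float)image2.cols, 0.f));
projected.push_back(cv::Point2f((float)image2.cols, (float)image2.rows));
projected.push_back(cv::Point2f(0.f, (float)image2.rows));
cv::Rect bounds = cv::boundingRect(projected);
// Translation that moves the top-left of the bounding box to (0,0).
cv::Mat T = (cv::Mat_<double>(3, 3) << 1, 0, -bounds.x,
                                       0, 1, -bounds.y,
                                       0, 0, 1);
cv::Mat result;
cv::warpPerspective(image1, result, T * H, bounds.size());
// image2 is then copied at the shifted origin, not at (0,0).
cv::Mat half(result, cv::Rect(-bounds.x, -bounds.y, image2.cols, image2.rows));
image2.copyTo(half);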

Panoramic image construction

I am trying to construct a panoramic view from different images.
Initially I tried to stitch two images as part of the panoramic construction.
The two input images I am trying to stitch are:
I used the ORB feature descriptor to find features in the images, then I found the homography matrix between the two images.
My code is:
int main(int argc, char **argv){
Mat img1 = imread(argv[1],1);
Mat img2 = imread(argv[2],1);
//-- Step 1: Detect the keypoints using orb Detector
std::vector<KeyPoint> kp2,kp1;
// Default parameters of ORB
int nfeatures=500;
float scaleFactor=1.2f;
int nlevels=8;
int edgeThreshold=15; // Changed default (31);
int firstLevel=0;
int WTA_K=2;
int scoreType=ORB::HARRIS_SCORE;
int patchSize=31;
int fastThreshold=20;
Ptr<ORB> detector = ORB::create(
nfeatures,
scaleFactor,
nlevels,
edgeThreshold,
firstLevel,
WTA_K,
scoreType,
patchSize,
fastThreshold );
Mat descriptors_img1, descriptors_img2;
//-- Step 2: Calculate descriptors (feature vectors)
detector->detect(img1, kp1,descriptors_img1);
detector->detect(img2, kp2,descriptors_img2);
Ptr<DescriptorExtractor> extractor = ORB::create();
extractor->compute(img1, kp1, descriptors_img1 );
extractor->compute(img2, kp2, descriptors_img2 );
//-- Step 3: Matching descriptor vectors using FLANN matcher
if ( descriptors_img1.empty() )
cvError(0,"MatchFinder","1st descriptor empty",__FILE__,__LINE__);
if ( descriptors_img2.empty() )
cvError(0,"MatchFinder","2nd descriptor empty",__FILE__,__LINE__);
descriptors_img1.convertTo(descriptors_img1, CV_32F);
descriptors_img2.convertTo(descriptors_img2, CV_32F);
FlannBasedMatcher matcher;
std::vector<DMatch> matches;
matcher.match(descriptors_img1,descriptors_img2,matches);
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_img1.rows; i++ )
{
double dist = matches[i].distance;
if( dist < min_dist )
min_dist = dist;
if( dist > max_dist )
max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
//-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_img1.rows; i++ )
{
if( matches[i].distance < 3*min_dist )
{
good_matches.push_back( matches[i]);
}
}
Mat img_matches;
drawMatches(img1,kp1,img2,kp2,good_matches,img_matches,Scalar::all(-1),
Scalar::all(-1),vector<char>(),DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( int i = 0; i < good_matches.size(); i++ )
{
//-- Get the keypoints from the good matches
obj.push_back( kp1[ good_matches[i].queryIdx ].pt );
scene.push_back( kp2[ good_matches[i].trainIdx ].pt );
}
Mat H = findHomography( obj, scene, CV_RANSAC );
Afterwards some people told me to include the following code:
cv::Mat result;
warpPerspective( img1, result, H, cv::Size( img1.cols+img2.cols, img1.rows) );
cv::Mat half(result, cv::Rect(0, 0, img2.cols, img2.rows) );
img2.copyTo(half);
imshow("result",result);
The result I got is:
I also tried using the built-in OpenCV stitch function, and I got this result:
I am trying to implement the stitching myself, so I don't want to use the built-in OpenCV stitch function.
Can anyone tell me where I went wrong and correct my code? Thanks in advance.
Image stitching includes the following steps:
Feature finding
Find camera parameters
Warping
Exposure compensation
Seam Finding
Blending
You have to do all of these steps to get a good result.
In your code you have only done the first step, feature finding.
You can find a detailed explanation of how image stitching works on Learn OpenCV.
I also have the code on GitHub.
Hope this helps.
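For reference (even though you want to implement the steps yourself), the high-level cv::Stitcher class runs exactly this pipeline end to end, and OpenCV's stitching_detailed.cpp sample shows each stage separately. A minimal sketch, assuming OpenCV 3.3 or newer; "img1.jpg" and "img2.jpg" are placeholder file names for your two inputs:
#include <vector>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/stitching.hpp>
int main()
{
    std::vector<cv::Mat> imgs = { cv::imread("img1.jpg"), cv::imread("img2.jpg") };
    // Runs feature finding, camera estimation, warping, exposure
    // compensation, seam finding and blending internally.
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);
    cv::Mat pano;
    cv::Stitcher::Status status = stitcher->stitch(imgs, pano);
    if (status != cv::Stitcher::OK) return 1;
    cv::imwrite("pano.jpg", pano);
    return 0;
}
If stitching fails, the returned Status value indicates which stage did not succeed, which is useful for comparing against your own implementation of the steps above.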

C++ - OpenCV feature detection with ORB

I'm trying to extract and match features with OpenCV, using ORB for detection and FLANN for matching, and I get a really weird result. After loading my two images and converting them to grayscale, here's my code:
// Initiate ORB detector
Ptr<FeatureDetector> detector = ORB::create();
// find the keypoints and descriptors with ORB
detector->detect(gray_image1, keypoints_object);
detector->detect(gray_image2, keypoints_scene);
Ptr<DescriptorExtractor> extractor = ORB::create();
extractor->compute(gray_image1, keypoints_object, descriptors_object );
extractor->compute(gray_image2, keypoints_scene, descriptors_scene );
// Flann needs the descriptors to be of type CV_32F
descriptors_scene.convertTo(descriptors_scene, CV_32F);
descriptors_object.convertTo(descriptors_object, CV_32F);
FlannBasedMatcher matcher;
vector<DMatch> matches;
matcher.match( descriptors_object, descriptors_scene, matches );
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_object.rows; i++ )
{
double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
//-- Use only "good" matches (i.e. whose distance is less than 3*min_dist )
vector< DMatch > good_matches;
for( int i = 0; i < descriptors_object.rows; i++ )
{
if( matches[i].distance < 3*min_dist )
{
good_matches.push_back( matches[i]);
}
}
vector< Point2f > obj;
vector< Point2f > scene;
for( int i = 0; i < good_matches.size(); i++ )
{
//-- Get the keypoints from the good matches
obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
}
// Find the Homography Matrix
Mat H = findHomography( obj, scene, CV_RANSAC );
// Use the Homography Matrix to warp the images
cv::Mat result;
warpPerspective(image1,result,H,Size(image1.cols+image2.cols,image1.rows));
cv::Mat half(result,cv::Rect(0,0,image2.cols,image2.rows));
image2.copyTo(half);
imshow( "Result", result );
And this is a screenshot of the weird result I'm getting:
screen shot
What might be the problem?
Thanks!
You are seeing the result of bad matching: the homography fitted to the data is not realistic and therefore distorts the image.
You can debug your matching with imshow( "Good Matches", img_matches ); as done in the example.
There are multiple approaches to improve your matches:
Use the crossCheck option
Use the SIFT ratio test
Use the OutputArray mask in cv::findHomography to identify totally wrong homography computations
... and so on...
ORB descriptors are binary feature vectors, which don't work with FLANN's default settings. Use brute force (BFMatcher) instead.
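A sketch of what that could look like, using the descriptors_object / descriptors_scene variables from the question kept in their raw CV_8U ORB form (i.e. without the convertTo(CV_32F) lines), combining the BFMatcher suggestion with the ratio test:
// Brute force with Hamming distance, the natural metric for binary ORB
// descriptors. Passing true as the second constructor argument would
// enable cross-checking instead of the ratio test below.
cv::BFMatcher matcher(cv::NORM_HAMMING);
std::vector<std::vector<cv::DMatch>> knn_matches;
matcher.knnMatch(descriptors_object, descriptors_scene, knn_matches, 2);
// Lowe's ratio test: keep a match only if it is clearly better than
// the second-best candidate.
std::vector<cv::DMatch> good_matches;
for (const auto &m : knn_matches)
{
    if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
        good_matches.push_back(m[0]);
}
The good_matches vector can then be fed to findHomography exactly as in the original code.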

How to discover the coordinates of (0,0) after applying warpPerspective (OpenCV)?

I am applying warpPerspective with OpenCV to build a mosaic from several images, but I have a problem.
When I apply cvWarpPerspective, the generated image does not show completely in the window.
Only part of the image appears, and I need to know how to find the coordinates of (0,0) of my image after applying warpPerspective.
You can see that in the first image, part of the picture is cut off compared with the second image presented here.
So my problem is: how do I find the starting coordinates after applying warpPerspective?
I need help solving this problem. How can I solve it using OpenCV?
This is my code:
#include <stdio.h>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/nonfree/nonfree.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
void readme();
/** @function main */
int main( int argc, char** argv )
{
// Load the images
Mat image1= imread( "f.jpg");
Mat image2= imread( "e.jpg" );
Mat gray_image1;
Mat gray_image2;
// Convert to Grayscale
cvtColor( image1, gray_image1, CV_RGB2GRAY );
cvtColor( image2, gray_image2, CV_RGB2GRAY );
imshow("first image",image2);
imshow("second image",image1);
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 100;
SurfFeatureDetector detector( minHessian );
std::vector< KeyPoint > keypoints_object, keypoints_scene;
detector.detect( gray_image1, keypoints_object );
detector.detect( gray_image2, keypoints_scene );
//-- Step 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_object, descriptors_scene;
extractor.compute( gray_image1, keypoints_object, descriptors_object );
extractor.compute( gray_image2, keypoints_scene, descriptors_scene );
//-- Step 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_object, descriptors_scene, matches );
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_object.rows; i++ )
{
    double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
//-- Use only "good" matches (i.e. whose distance is less than 3*min_dist )
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_object.rows; i++ )
{
    if( matches[i].distance < 3*min_dist )
    {
        good_matches.push_back( matches[i] );
    }
}
std::vector< Point2f > obj;
std::vector< Point2f > scene;
for( int i = 0; i < good_matches.size(); i++ )
{
//-- Get the keypoints from the good matches
obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
}
// Find the Homography Matrix
Mat H = findHomography( obj, scene, CV_RANSAC);
// Use the Homography Matrix to warp the images
cv::Mat result;
warpPerspective(image1,result,H,cv::Size());
imshow("WARP", result);
cv::Mat half(result,cv::Rect(0,0,image2.cols,image2.rows));
image2.copyTo(half);
Mat key;
//drawKeypoints(image1,keypoints_scene,key,Scalar::all(-1), DrawMatchesFlags::DEFAULT );
//drawMatches(image2, keypoints_scene, image1, keypoints_object, matches, result);
imshow( "Result", result );
imwrite("teste.jpg", result);
waitKey(0);
return 0;
}
/** @function readme */
void readme()
{ std::cout << " Usage: Panorama < img1 > < img2 >" << std::endl; }
In this image, the second image appears cut off. See:
I want my image to appear in this form:
The modification below should solve your problem of removing the black part of the stitched image.
Try changing this line:
warpPerspective(image1,result,H,cv::Size());
to
warpPerspective(image1,result,H,cv::Size(image1.cols+image2.cols,image1.rows));
This creates the result matrix with the same number of rows as image1, thereby avoiding the creation of unwanted rows.
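To answer the "where does (0,0) go?" part of the question directly, you can push image1's corners through the same homography that warpPerspective uses. A short sketch using the variables from the posted main():
std::vector<cv::Point2f> srcCorners = {
    {0.f, 0.f},                                    // the origin we are asking about
    {(float)image1.cols, 0.f},
    {(float)image1.cols, (float)image1.rows},
    {0.f, (float)image1.rows}
};
std::vector<cv::Point2f> dstCorners;
cv::perspectiveTransform(srcCorners, dstCorners, H);
// dstCorners[0] is where (0,0) of image1 lands after the warp. Negative
// x or y values mean that part of image1 falls outside the canvas, which
// is exactly the "cut off" effect in the screenshots.
std::cout << "(0,0) maps to " << dstCorners[0] << std::endl;
If dstCorners contains negative coordinates, you can pre-multiply H by a translation (as shown in the stitching answer earlier in this page) so the whole warped image lands in the visible, first-quadrant region.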

OpenCV DMatch distance out of range

I'm using FlannBasedMatcher following http://opencv.itseez.com/doc/tutorials/features2d/feature_homography/feature_homography.html#feature-homography . I'm getting the error "vector subscript out of range" every time it runs the line "double dist = matches[i].distance;". Can anyone help? I've been stuck here for quite some time.
int minHessian = 400;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_object, keypoints_scene;
detector.detect( img_object, keypoints_object );
detector.detect( img_scene, keypoints_scene );
//-- Step 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_object, descriptors_scene;
extractor.compute( img_object, keypoints_object, descriptors_object );
extractor.compute( img_scene, keypoints_scene, descriptors_scene );
//-- Step 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_object, descriptors_scene, matches );
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_object.rows; i++ )
{
    double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
You want to use this:
vector< vector<DMatch> > matches;
It's a vector of vector of DMatch!
Maybe you aren't getting enough features from your images? Check the size of the keypoint vectors.
In case you never solved this, make sure you are using the right kinds of vectors: the first one should be a std::vector, whereas the second one is a cv::vector. So if you are not using any namespaces, it should look like this:
std::vector<cv::vector<cv::DMatch>> matches;
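Whichever matcher you end up with, the subscript error itself usually means that the matches vector is shorter than descriptors_object.rows (typically because one of the images produced no keypoints, so matching returned nothing). A defensive version of the distance loop, sketched under the assumption that it sits inside main() and that <algorithm> and <limits> are included:
// Bail out early when detection or matching produced nothing.
if (keypoints_object.empty() || keypoints_scene.empty() || matches.empty())
{
    printf("No keypoints or no matches - nothing to measure.\n");
    return -1;
}
double max_dist = 0;
double min_dist = std::numeric_limits<double>::max();
// Loop over the matches that actually exist, not over descriptors_object.rows.
for (size_t i = 0; i < matches.size(); i++)
{
    double dist = matches[i].distance;
    min_dist = std::min(min_dist, dist);
    max_dist = std::max(max_dist, dist);
}
printf("-- Max dist : %f \n", max_dist);
printf("-- Min dist : %f \n", min_dist);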