I am trying to apply warpPerspective with OpenCV to build a mosaic from several images, but I have run into a problem.
After applying warpPerspective, the generated image is not shown completely in the window.
Only part of the image appears, and I need to find out where the (0,0) coordinate of my image ends up after the warp is applied.
As you can see, part of the first image is cut off when compared with the second image shown here.
So my problem is: how do I find the starting coordinates after applying warpPerspective?
How can I solve this using OpenCV?
This is my code:
#include <stdio.h>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/nonfree/nonfree.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
void readme();
/** @function main */
int main( int argc, char** argv )
{
// Load the images
Mat image1= imread( "f.jpg");
Mat image2= imread( "e.jpg" );
Mat gray_image1;
Mat gray_image2;
// Convert to grayscale (imread loads images as BGR, so use CV_BGR2GRAY)
cvtColor( image1, gray_image1, CV_BGR2GRAY );
cvtColor( image2, gray_image2, CV_BGR2GRAY );
imshow("first image",image2);
imshow("second image",image1);
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 100;
SurfFeatureDetector detector( minHessian );
std::vector< KeyPoint > keypoints_object, keypoints_scene;
detector.detect( gray_image1, keypoints_object );
detector.detect( gray_image2, keypoints_scene );
//-- Step 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_object, descriptors_scene;
extractor.compute( gray_image1, keypoints_object, descriptors_object );
extractor.compute( gray_image2, keypoints_scene, descriptors_scene );
//-- Step 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_object, descriptors_scene, matches );
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_object.rows; i++ )
{ double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
//-- Use only "good" matches (i.e. whose distance is less than 3*min_dist )
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_object.rows; i++ )
{ if( matches[i].distance < 3*min_dist )
{ good_matches.push_back( matches[i]); }
}
std::vector< Point2f > obj;
std::vector< Point2f > scene;
for( int i = 0; i < good_matches.size(); i++ )
{
//-- Get the keypoints from the good matches
obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
}
// Find the Homography Matrix
Mat H = findHomography( obj, scene, CV_RANSAC);
// Use the Homography Matrix to warp the images
cv::Mat result;
warpPerspective(image1,result,H,cv::Size());
imshow("WARP", result);
cv::Mat half(result,cv::Rect(0,0,image2.cols,image2.rows));
image2.copyTo(half);
Mat key;
//drawKeypoints(image1,keypoints_scene,key,Scalar::all(-1), DrawMatchesFlags::DEFAULT );
//drawMatches(image2, keypoints_scene, image1, keypoints_object, matches, result);
imshow( "Result", result );
imwrite("teste.jpg", result);
waitKey(0);
return 0;
}
/** @function readme */
void readme()
{ std::cout << " Usage: Panorama < img1 > < img2 >" << std::endl; }
In this image the second image appears cut off. See
I want my result to look like this:
The modification below should solve your problem of the stitched image being cut off (and of the unwanted black area).
Try changing this line:
warpPerspective(image1,result,H,cv::Size());
to
warpPerspective(image1,result,H,cv::Size(image1.cols+image2.cols,image1.rows));
This creates a result matrix wide enough to hold both images side by side, while keeping the number of rows equal to that of image1, so no unwanted black rows are created.
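If you also need to know where image1's corners (including its original (0,0)) end up after the warp, here is a minimal sketch using perspectiveTransform, assuming H and image1 from the code above:
// Map the four corners of image1 through H to see where they land
// in the warped output, including the new position of (0,0).
std::vector<cv::Point2f> corners(4), warped_corners(4);
corners[0] = cv::Point2f(0, 0);
corners[1] = cv::Point2f((float)image1.cols, 0);
corners[2] = cv::Point2f((float)image1.cols, (float)image1.rows);
corners[3] = cv::Point2f(0, (float)image1.rows);
cv::perspectiveTransform(corners, warped_corners, H);
printf("(0,0) maps to (%f, %f)\n", warped_corners[0].x, warped_corners[0].y);
// Bounding box of the warped image. If bbox.x or bbox.y is negative, part of
// the warp falls outside the canvas; you can pre-multiply H by a translation
// matrix (or enlarge the output size) to bring it back into view.
cv::Rect bbox = cv::boundingRect(warped_corners);
printf("warped bounding box: x=%d y=%d w=%d h=%d\n", bbox.x, bbox.y, bbox.width, bbox.height);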
Related
I'm writing an algorithm to stabilize a video. My current process is:
Get features with SURF
Build a feature kd-tree, then match each pair of consecutive frames. I'm using knnMatch from OpenCV, but I'm not sure whether it uses kd-trees.
Use findHomography to compute the homography
Warp with the inverse of the homography matrix
The code right now is as follows:
void Stab::process()
{
Mat gray;
Mat img_keypoints_1;
cvtColor(frame, gray, COLOR_BGR2GRAY);
if(first == false)
{
prev_frame = frame.clone();
first = true;
}
//-- Step 1: Detect the keypoints using SURF Detector, compute the descriptors
int minHessian = 400;
Ptr<SURF> detector = SURF::create( minHessian );
std::vector<KeyPoint> keypoints1, keypoints2;
Mat descriptors1, descriptors2;
detector->detectAndCompute( prev_frame, noArray(), keypoints1, descriptors1 );
detector->detectAndCompute( frame, noArray(), keypoints2, descriptors2 );
//-- Step 2: Matching descriptor vectors with a FLANN based matcher
// Since SURF is a floating-point descriptor NORM_L2 is used
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(DescriptorMatcher::FLANNBASED);
std::vector< std::vector<DMatch> > knn_matches;
matcher->knnMatch( descriptors1, descriptors2, knn_matches, 2 );
//-- Filter matches using the Lowe's ratio test
const float ratio_thresh = 0.7f;
std::vector<DMatch> good_matches;
for (size_t i = 0; i < knn_matches.size(); i++)
{
if (knn_matches[i][0].distance < ratio_thresh * knn_matches[i][1].distance)
{
good_matches.push_back(knn_matches[i][0]);
}
}
Mat HInv;
//-- Localize the object
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( size_t i = 0; i < good_matches.size(); i++ )
{
//-- Get the keypoints from the good matches
obj.push_back( keypoints1[ good_matches[i].queryIdx ].pt );
scene.push_back( keypoints2[ good_matches[i].trainIdx ].pt );
}
Mat H = findHomography( obj, scene, RANSAC );
HInv = H.inv();
warpPerspective(frame, im_out, HInv, prev_frame.size());
prev_frame = frame.clone();
//waitKey(30);
}
But the results I'm getting are pretty bad. Is there any optimization that can be done, or should I switch to some other method?
I am trying to construct a panoramic view from different images.
Initially I tried to stitch two images as part of the panorama construction.
The two input images I am trying to stitch are:
I used the ORB feature descriptor to find features in the images, then I computed the homography matrix between the two images.
My code is:
int main(int argc, char **argv){
Mat img1 = imread(argv[1],1);
Mat img2 = imread(argv[2],1);
//-- Step 1: Detect the keypoints using orb Detector
std::vector<KeyPoint> kp2,kp1;
// Default parameters of ORB
int nfeatures=500;
float scaleFactor=1.2f;
int nlevels=8;
int edgeThreshold=15; // Changed default (31);
int firstLevel=0;
int WTA_K=2;
int scoreType=ORB::HARRIS_SCORE;
int patchSize=31;
int fastThreshold=20;
Ptr<ORB> detector = ORB::create(
nfeatures,
scaleFactor,
nlevels,
edgeThreshold,
firstLevel,
WTA_K,
scoreType,
patchSize,
fastThreshold );
Mat descriptors_img1, descriptors_img2;
//-- Step 2: Calculate descriptors (feature vectors)
detector->detect(img1, kp1,descriptors_img1);
detector->detect(img2, kp2,descriptors_img2);
Ptr<DescriptorExtractor> extractor = ORB::create();
extractor->compute(img1, kp1, descriptors_img1 );
extractor->compute(img2, kp2, descriptors_img2 );
//-- Step 3: Matching descriptor vectors using FLANN matcher
if ( descriptors_img1.empty() )
cvError(0,"MatchFinder","1st descriptor empty",__FILE__,__LINE__);
if ( descriptors_img2.empty() )
cvError(0,"MatchFinder","2nd descriptor empty",__FILE__,__LINE__);
descriptors_img1.convertTo(descriptors_img1, CV_32F);
descriptors_img2.convertTo(descriptors_img2, CV_32F);
FlannBasedMatcher matcher;
std::vector<DMatch> matches;
matcher.match(descriptors_img1,descriptors_img2,matches);
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_img1.rows; i++ )
{
double dist = matches[i].distance;
if( dist < min_dist )
min_dist = dist;
if( dist > max_dist )
max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
//-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_img1.rows; i++ )
{
if( matches[i].distance < 3*min_dist )
{
good_matches.push_back( matches[i]);
}
}
Mat img_matches;
drawMatches(img1,kp1,img2,kp2,good_matches,img_matches,Scalar::all(-1),
Scalar::all(-1),vector<char>(),DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( int i = 0; i < good_matches.size(); i++ )
{
//-- Get the keypoints from the good matches
obj.push_back( kp1[ good_matches[i].queryIdx ].pt );
scene.push_back( kp2[ good_matches[i].trainIdx ].pt );
}
Mat H = findHomography( obj, scene, CV_RANSAC );
Afterwards, some people told me to include the following code:
cv::Mat result;
warpPerspective( img1, result, H, cv::Size( img1.cols+img2.cols, img1.rows) );
cv::Mat half(result, cv::Rect(0, 0, img2.cols, img2.rows) );
img2.copyTo(half);
imshow("result",result);
The result I got is:
I also tried using the built-in OpenCV stitching function, and I got this result:
I am trying to implement the stitching myself, so I don't want to use the built-in OpenCV stitching function.
Can anyone tell me where I went wrong and correct my code? Thanks in advance.
Image stitching includes the following steps:
Feature finding
Find camera parameters
Warping
Exposure compensation
Seam Finding
Blending
You have to do all of these steps in order to get a good result.
In your code you have only done the first part, which is feature finding.
You can find a detailed explanation of how image stitching works on Learn OpenCV.
I also have the code on GitHub.
Hope this helps.
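To give a flavour of the final blending step, here is a minimal sketch using OpenCV's detail API. It assumes you already have the two images warped into a common panorama canvas, their valid-pixel masks, and their top-left corners; the names (warped1, mask1, corner1, pano_width, ...) are illustrative, not from your code:
#include <opencv2/stitching/detail/blenders.hpp>
// Multi-band blending of two pre-warped images.
cv::detail::MultiBandBlender blender(false, 5);            // 5 pyramid bands
blender.prepare(cv::Rect(0, 0, pano_width, pano_height));  // panorama extent
// MultiBandBlender expects 16-bit signed, 3-channel images.
cv::Mat w1, w2;
warped1.convertTo(w1, CV_16SC3);
warped2.convertTo(w2, CV_16SC3);
blender.feed(w1, mask1, corner1);
blender.feed(w2, mask2, corner2);
cv::Mat pano16, pano_mask, pano;
blender.blend(pano16, pano_mask);
pano16.convertTo(pano, CV_8UC3);                           // back to 8-bit for display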
I'm trying to reconstruct a 3D model of an anatomical structure, so I want to match key points in a pair of X-ray images. I tried the following code, but it didn't give correct results.
Mat tmp = cv::imread( "1.jpg", 1 );
Mat in = cv::imread( "2.jpg", 1 );
cv::SiftFeatureDetector detector( 0.0001, 1.0 );
cv::SiftDescriptorExtractor extractor;
vector<KeyPoint> keypoints1, keypoints2;
detector.detect( tmp, keypoints1 );
detector.detect( in, keypoints2 );
Mat feat1,feat2;
drawKeypoints(tmp,keypoints1,feat1,Scalar(255, 255, 255),DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
drawKeypoints(in,keypoints2,feat2,Scalar(255, 255, 255),DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
imwrite( "feat1.bmp", feat1 );
imwrite( "feat2.bmp", feat2 );
int key1 = keypoints1.size();
int key2 = keypoints2.size();
printf("Keypoint1=%d \nKeypoint2=%d", key1, key2);
Mat descriptor1,descriptor2;
extractor.compute( tmp, keypoints1, descriptor1 );
extractor.compute( in, keypoints2, descriptor2 );
BruteForceMatcher<L2<float> > matcher;
std::vector< DMatch > matches;
matcher.match( descriptor1, descriptor2, matches );
double max_dist = 0; double min_dist = 100;
Mat img_matches;
for( int i = 0; i < descriptor1.rows; i++ )
{ double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptor1.rows; i++ )
{ if( matches[i].distance <= max(2*min_dist, 0.03) )
{ good_matches.push_back( matches[i]); }
}
drawMatches( tmp, keypoints1, in, keypoints2,
good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
namedWindow("SIFT", CV_WINDOW_AUTOSIZE );
imshow("SIFT", img_matches);
imwrite("sift_1.jpg",img_matches);
waitKey(0);
return 0;
These are the two images:
This is what I got from this code:
This is very close to my expected result, but it also matches some wrong points. It shows only a few points, and I need more.
Feature detectors like SIFT or SURF are designed to work on, and match, images that have rich and distinctive texture. They are not designed to work with very sparse binary inputs like your examples.
You might want to try them on the original X-rays for more image context.
Alternatively, you might try a more direct, global alignment model between the images.
Check out this link for some options for alignment with the findTransformECC() function.
Also see the article here.
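For reference, here is a minimal sketch of ECC-based alignment, assuming the two images are tmp and in as in your code; the motion model and termination criteria are illustrative choices:
// Direct, intensity-based alignment with findTransformECC (available since OpenCV 3.0).
cv::Mat tmp_gray, in_gray;
cv::cvtColor(tmp, tmp_gray, cv::COLOR_BGR2GRAY);
cv::cvtColor(in, in_gray, cv::COLOR_BGR2GRAY);
// 2x3 warp for an affine model (use a 3x3 matrix with cv::MOTION_HOMOGRAPHY).
cv::Mat warp = cv::Mat::eye(2, 3, CV_32F);
cv::TermCriteria criteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 200, 1e-6);
// cc is the final correlation coefficient of the alignment.
double cc = cv::findTransformECC(tmp_gray, in_gray, warp, cv::MOTION_AFFINE, criteria);
// Warp the second image into the first image's frame for comparison.
cv::Mat aligned;
cv::warpAffine(in, aligned, warp, tmp.size(), cv::INTER_LINEAR + cv::WARP_INVERSE_MAP);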
You could also try ITK; ITK is designed for image registration with 2D or 3D images.
I'm trying to extract and match features with OpenCV, using ORB for detection and FLANN for matching, and I get a really weird result. After loading my two images and converting them to grayscale, here's my code:
// Initiate ORB detector
Ptr<FeatureDetector> detector = ORB::create();
// find the keypoints and descriptors with ORB
detector->detect(gray_image1, keypoints_object);
detector->detect(gray_image2, keypoints_scene);
Ptr<DescriptorExtractor> extractor = ORB::create();
extractor->compute(gray_image1, keypoints_object, descriptors_object );
extractor->compute(gray_image2, keypoints_scene, descriptors_scene );
// Flann needs the descriptors to be of type CV_32F
descriptors_scene.convertTo(descriptors_scene, CV_32F);
descriptors_object.convertTo(descriptors_object, CV_32F);
FlannBasedMatcher matcher;
vector<DMatch> matches;
matcher.match( descriptors_object, descriptors_scene, matches );
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_object.rows; i++ )
{
double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
//-- Use only "good" matches (i.e. whose distance is less than 3*min_dist )
vector< DMatch > good_matches;
for( int i = 0; i < descriptors_object.rows; i++ )
{
if( matches[i].distance < 3*min_dist )
{
good_matches.push_back( matches[i]);
}
}
vector< Point2f > obj;
vector< Point2f > scene;
for( int i = 0; i < good_matches.size(); i++ )
{
//-- Get the keypoints from the good matches
obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
}
// Find the Homography Matrix
Mat H = findHomography( obj, scene, CV_RANSAC );
// Use the Homography Matrix to warp the images
cv::Mat result;
warpPerspective(image1,result,H,Size(image1.cols+image2.cols,image1.rows));
cv::Mat half(result,cv::Rect(0,0,image2.cols,image2.rows));
image2.copyTo(half);
imshow( "Result", result );
And this is a screenshot of the weird result I'm getting:
screen shot
What might be the problem?
Thanks!
You are experiencing the results of bad matching: the homography that fits the data is not "realistic" and therefore distorts the image.
You can debug your matching with imshow( "Good Matches", img_matches ); as done in the example.
There are multiple approaches to improve your matches:
Use the crossCheck option
Use the SIFT ratio test
Use the OutputArray mask in cv::findHomography to identify totally wrong homography computations (see the sketch after this list)
... and so on...
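For the last point, a minimal sketch of inspecting the RANSAC inlier mask, assuming obj and scene as in your code:
// The optional mask output of findHomography marks which of the "good"
// matches RANSAC actually accepted as inliers.
cv::Mat inlier_mask;
cv::Mat H = cv::findHomography(obj, scene, CV_RANSAC, 3, inlier_mask);
int inliers = cv::countNonZero(inlier_mask);
printf("%d of %d matches are RANSAC inliers\n", inliers, (int)obj.size());
// A very low inlier count (roughly fewer than 10-15) usually means the
// homography is meaningless and the warped result will look distorted.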
ORB produces binary feature vectors, which don't work with the FLANN matcher's default index. Use brute force (BFMatcher) instead.
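A minimal sketch of the replacement, assuming descriptors_object and descriptors_scene as in your code but left in their native CV_8U binary form (do not convert them to CV_32F):
// Brute-force matching of binary ORB descriptors with Hamming distance.
// The second constructor argument enables cross-checking (symmetric matches).
cv::BFMatcher matcher(cv::NORM_HAMMING, true);
std::vector<cv::DMatch> matches;
matcher.match(descriptors_object, descriptors_scene, matches);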
I am trying to compile a sample OpenCV project which uses SURF for image matching.
The code is listed below:
#include <stdio.h>
#include <iostream>
#include <cv.h>
#include <cxcore.h>
#include <highgui.h>
//#include "opencv2/core/core.hpp"
//#include "opencv2/features2d/features2d.hpp"
//#include "opencv2/highgui/highgui.hpp"
using namespace cv;
void readme();
/** @function main */
int main()
{
/*
if( argc != 3 )
{ readme(); return -1; }
Mat img_1 = imread( argv[1], CV_LOAD_IMAGE_GRAYSCALE );
Mat img_2 = imread( argv[2], CV_LOAD_IMAGE_GRAYSCALE );
*/
Mat img_1 = imread("D:\\A.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Mat img_2 = imread("D:\\backImg.jpg", CV_LOAD_IMAGE_GRAYSCALE);
if( !img_1.data || !img_2.data )
{ std::cout<< " --(!) Error reading images " << std::endl; return -1; }
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 400;
SurfFeatureDetector detector;
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect( img_1, keypoints_1 );
detector.detect( img_2, keypoints_2 );
//-- Step 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );
//-- Step 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_1.rows; i++ )
{ double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
//-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist )
//-- PS.- radiusMatch can also be used here.
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_1.rows; i++ )
{ if( matches[i].distance < 2*min_dist )
{ good_matches.push_back( matches[i]); }
}
//-- Draw only "good" matches
Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2,
good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
//-- Show detected matches
imshow( "Good Matches", img_matches );
for( int i = 0; i < good_matches.size(); i++ )
{ printf( "-- Good Match [%d] Keypoint 1: %d -- Keypoint 2: %d \n", i, good_matches[i].queryIdx, good_matches[i].trainIdx ); }
waitKey(0);
return 0;
}
/** @function readme */
void readme()
{ std::cout << " Usage: ./SURF_FlannMatcher <img1> <img2>" << std::endl; }
When I try to compile, I get an error saying SurfFeatureDetector is an undeclared identifier. When I right-click on it and go to the definition, it opens; it is located in features2d.hpp, which is included via cv.h. Should I include something else? Could you please help me with this matter?
Just add the nonfree module to your header files and it will solve your problem, if you are using OpenCV 2.4.2.
Your compiler and editor are confused by the two OpenCV versions installed on your system.
First, make sure that all the settings (include paths in Visual Studio, lib paths in the Visual Studio linker settings, and the bin path, which is probably an environment variable) point to the same version.
Next, make sure to include all the needed headers. In OpenCV 2.4 and above, SURF and SIFT have been moved to the nonfree module, so you also have to install it. Do not forget that some functions may have been moved to legacy.
And if you uninstall one version of OpenCV, the editor (which doesn't have all the parsing capabilities of the compiler) will no longer be confused.
I had the same problem. Just include
#include <opencv2/nonfree/nonfree.hpp>
and it is solved.
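For OpenCV 2.4.x, here is a minimal sketch of the includes and initialization that make the SURF classes available, assuming you also link the opencv_nonfree library:
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SurfFeatureDetector, SurfDescriptorExtractor
#include <opencv2/nonfree/nonfree.hpp>      // cv::initModule_nonfree()
int main()
{
    // Registers SURF/SIFT with the Algorithm factory (needed when creating
    // them by name, e.g. FeatureDetector::create("SURF")).
    cv::initModule_nonfree();
    cv::SurfFeatureDetector detector(400);  // minHessian = 400
    // ... detect, compute and match as in the question ...
    return 0;
}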
I had the same problem; I did the following steps:
Delete the other versions of OpenCV in Program Files.
Check the environment variable (only one, with the right path to the OpenCV version).
In my project's debug folder, add "cvextern.dll" and "cvextern_gpu.dll" (adding features2d.dll is not enough).