I'm trying to stitch 2 images, just as a start for panography. I've already found the keypoints and the homography using RANSAC, but I can't figure out how to align these 2 images (I'm new to OpenCV). Here is part of the code:
vector<Point2f> points1, points2;
for( int i = 0; i < good_matches.size(); i++ )
{
    //-- Get the keypoints from the good matches
    points1.push_back( keypoints1[ good_matches[i].queryIdx ].pt );
    points2.push_back( keypoints2[ good_matches[i].trainIdx ].pt );
}
/* Find Homography */
Mat H = findHomography( Mat(points2), Mat(points1), CV_RANSAC );
/* warp the image */
warpPerspective(mImg2, warpImage2, H, Size(mImg2.cols*2, mImg2.rows*2), INTER_CUBIC);
and I need to stitch Mat mImg1, where the first image is loaded, with Mat warpImage2, which holds the warped second image. Can you please show me how to do that? I also have the warped image cut off, and I know I have to change the homography matrix for that, but for now I just need to align these two images. Thank you for helping.
Edit: With Martin Beckett's help I added this code:
// Allocate the final composite image, large enough to hold both images
Mat final(Size(mImg2.cols*2 + mImg1.cols, mImg2.rows*2), CV_8UC3);
// ROI the size of img1
Mat roi1(final, Rect(0, 0, mImg1.cols, mImg1.rows));
Mat roi2(final, Rect(0, 0, warpImage2.cols, warpImage2.rows));
warpImage2.copyTo(roi2);
mImg1.copyTo(roi1);
imshow("final", final);
and it's working now
You create a new, larger image of the correct combined size, then make ROIs the size of the existing images at the positions you want them in the final image, and copy the existing images into those ROIs.
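To address the clipping the question mentions (the warped image being cut off), here is a minimal sketch, not part of the original answer, of one common way to handle it: transform the corners of the second image with H, take the bounding box together with the first image, and prepend a translation to the homography so nothing lands at negative coordinates. It is meant to drop into the question's code and reuses mImg1, mImg2 and H from there; everything else is illustrative.

// Corners of the second image
std::vector<cv::Point2f> corners;
corners.push_back(cv::Point2f(0, 0));
corners.push_back(cv::Point2f((float)mImg2.cols, 0));
corners.push_back(cv::Point2f((float)mImg2.cols, (float)mImg2.rows));
corners.push_back(cv::Point2f(0, (float)mImg2.rows));

// Where those corners land after warping with H
std::vector<cv::Point2f> warpedCorners;
cv::perspectiveTransform(corners, warpedCorners, H);

// Bounding box of the warped second image together with the first image
float minX = 0, minY = 0, maxX = (float)mImg1.cols, maxY = (float)mImg1.rows;
for (size_t i = 0; i < warpedCorners.size(); i++)
{
    if (warpedCorners[i].x < minX) minX = warpedCorners[i].x;
    if (warpedCorners[i].y < minY) minY = warpedCorners[i].y;
    if (warpedCorners[i].x > maxX) maxX = warpedCorners[i].x;
    if (warpedCorners[i].y > maxY) maxY = warpedCorners[i].y;
}

// Translation that shifts everything into positive coordinates
cv::Mat T = (cv::Mat_<double>(3, 3) << 1, 0, -minX, 0, 1, -minY, 0, 0, 1);

// Warp with the adjusted homography into a canvas of the combined size
cv::Mat pano(cv::Size(cvCeil(maxX - minX), cvCeil(maxY - minY)), CV_8UC3, cv::Scalar::all(0));
cv::warpPerspective(mImg2, pano, T * H, pano.size(), cv::INTER_CUBIC, cv::BORDER_TRANSPARENT);
mImg1.copyTo(pano(cv::Rect(cvRound(-minX), cvRound(-minY), mImg1.cols, mImg1.rows)));

With the fixed oversized canvas used in the question this is not strictly needed, but it avoids having to guess the *2 margins.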
I am using C++ and OpenCV in combination with ROS. I use live images from my camera (an Intel RealSense R200), which gives me depth and RGB images. In my C++ code I want to use these images to get odometry data and build a trajectory from it.
I am trying to use the "cv::rgbd::Odometry::compute" function for the odometry, but I always get false as the return value (the "isSuccess" value in the code is always 0), and I don't know which part I am doing wrong.
I read my images from the camera using ROS, and then in the callback function I first convert all images to grayscale and use a SURF function to detect the features. Then I want to use "compute" to get the transformation between the current and previous frame.
As far as I understood, "Rt" and "initRt" are outputs of the function, so it should be enough to construct them with the correct size.
Can anyone see the problem? Am I missing anything?
boost::shared_ptr<rgbd::Odometry> odom;
Mat Rt = Mat(4,4, CV_64FC1);
Mat initRt = Mat(4,4, CV_64FC1);
Mat prevFtrM; //mask Matrix of previous image
Mat currFtrM; //mask Matrix of current image
Mat tempFtrM;
Mat imgprev;// previous depth image
Mat imgcurr;// current depth image
Mat imgprevC;// previous colored image
Mat imgcurrC;// current colored image
void Surf(Mat img) // detect features of the img and fill currFtrM
{
int minHessian = 400;
Ptr<SURF> detector = SURF::create( minHessian );
vector<KeyPoint> keypoints_1;
currFtrM = Mat::zeros(img.size(), CV_8U); // type of mask is CV_8U
Mat roi(currFtrM, cv::Rect(0,0,img.size().width,img.size().height));
roi = Scalar(255, 255, 255);
detector->detect( img, keypoints_1, currFtrM );
Mat img_keypoints_1;
drawKeypoints( img, keypoints_1, img_keypoints_1, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
//-- Show detected (drawn) keypoints
imshow("Keypoints 1", img_keypoints_1 );
}
void Callback(const sensor_msgs::ImageConstPtr& clr, const sensor_msgs::ImageConstPtr& dpt)
{
if(!imgcurr.data || !imgcurrC.data) // first frame
{
// depth image
imgcurr = cv_bridge::toCvShare(dpt, sensor_msgs::image_encodings::TYPE_32FC1)->image;
// colored image
imgcurrC = cv_bridge::toCvShare(clr, "bgr8")->image;
cvtColor(imgcurrC, imgcurrC, COLOR_BGR2GRAY);
//find features in the image
Surf(imgcurrC);
prevFtrM = currFtrM;
//scale color image to size of depth image
resize(imgcurrC,imgcurrC, imgcurr.size());
return;
}
odom = boost::make_shared<rgbd::RgbdOdometry>(imgcurrC, Odometry::DEFAULT_MIN_DEPTH(), Odometry::DEFAULT_MAX_DEPTH(), Odometry::DEFAULT_MAX_DEPTH_DIFF(), std::vector< int >(), std::vector< float >(), Odometry::DEFAULT_MAX_POINTS_PART(), Odometry::RIGID_BODY_MOTION);
// depth image
imgprev = imgcurr;
imgcurr = cv_bridge::toCvShare(dpt, sensor_msgs::image_encodings::TYPE_32FC1)->image;
// colored image
imgprevC = imgcurrC;
imgcurrC = cv_bridge::toCvShare(clr, "bgr8")->image;
cvtColor(imgcurrC, imgcurrC, COLOR_BGR2GRAY);
//scale color image to size of depth image
resize(imgcurrC,imgcurrC, imgcurr.size());
cv::imshow("Color resized", imgcurrC);
tempFtrM = currFtrM;
//detect new features in imgcurrC and save in a vector<Point2f>
Surf( imgcurrC);
prevFtrM = tempFtrM;
//set the camera matrix (intrinsics; see the update below)
float vals[] = {619.137635, 0., 304.793791, 0., 625.407449, 223.984030, 0., 0., 1.};
const Mat cameraMatrix = Mat(3, 3, CV_32FC1, vals);
odom->setCameraMatrix(cameraMatrix);
bool isSuccess = odom->compute( imgprevC, imgprev, prevFtrM, imgcurrC, imgcurr, currFtrM, Rt, initRt );
if(isSuccess)
cout << "isSuccess " << isSuccess << endl;
}
Update: I calibrated my camera and replaced the camera matrix with real values.
A bit late, but it could still be useful for someone.
It seems to me that you are missing the extrinsic calibration in your calculation: in my experiments, the R200 has a translation component between the RGB and depth cameras that you are not taking into account.
Furthermore, looking at the camera parameters, depth and RGB have different intrinsics, and the color frame has a MODIFIED_BROWN_CONRADY lens distortion (though this is minimal); are you undistorting that?
Of course, I could be wrong if you are already doing all those steps and saving registered RGB and depth to files.
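As a minimal sketch of the undistortion step mentioned above (not part of the original answer): assuming you have calibrated the color camera, you could undistort the frame before passing it to the odometry. The numeric values below are placeholders, not the R200's actual parameters, and undistortColor is a hypothetical helper name.

#include <opencv2/opencv.hpp>

// Undistort the color frame before feeding it to the odometry.
// K and dist come from your own calibration of the RGB camera;
// the values here are placeholders only.
cv::Mat undistortColor(const cv::Mat& color)
{
    cv::Mat K = (cv::Mat_<double>(3, 3) << 619.1, 0.0, 304.8,
                                           0.0, 625.4, 224.0,
                                           0.0, 0.0,   1.0);
    // Brown-Conrady coefficients k1, k2, p1, p2, k3
    cv::Mat dist = (cv::Mat_<double>(1, 5) << 0.01, -0.02, 0.0, 0.0, 0.0);

    cv::Mat undistorted;
    cv::undistort(color, undistorted, K, dist);
    return undistorted;
}

In the question's callback this would go right after the cv_bridge conversion of the color image, before cvtColor and resize; the depth image would need the analogous treatment with its own intrinsics, plus the RGB-to-depth extrinsic registration.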
I have created a program that can stitch multiple images together and am now looking to improve its efficiency. Depending on the size of the stitched image, eventually it becomes so large and contains so many keypoints that the machine runs out of allocatable memory. To compensate for this, my goal is to store all the keypoints and descriptors as they are found, so that I don't need to find them again in the master stitched image and only need to find them in the new image being stitched. I had this process working in Python but haven't had the same luck in C++.
In order to do this I need to perform a perspectiveTransform() on the keypoints, and therefore convert them from vector<KeyPoint> to vector<Point2f> and back to vector<KeyPoint>. I have been able to achieve this and can confirm it works (picture to follow). I am not sure whether the same process needs to be done to the descriptors (currently I have done it, but either it's wrong or not effective).
Issue: When I run this, the keypoints and descriptors don't appear to work, and I hit an error I created myself: "Not enough matches found", even though I know at least the keypoints are making their way to the function.
Here is the code for the keypoint and descriptor transforms. The code first calculates the warpPerspective to be applied to image one, as the homography will warp the second image only. The rest of the code deals with the keypoints and descriptors.
tuple<Mat, vector<KeyPoint>, Mat> stitchMatches(Mat image1,Mat image2, Mat homography, vector<KeyPoint> kp1, vector<KeyPoint> kp2 , Mat desc1, Mat desc2){
Mat result, destination, descriptors_updated;
vector<Point2f> fourPoint;
vector<KeyPoint> keypoints_updated;
//-Get the four corners of the first image (master)
fourPoint.push_back(Point2f (0,0));
fourPoint.push_back(Point2f (image1.size().width,0));
fourPoint.push_back(Point2f (0, image1.size().height));
fourPoint.push_back(Point2f (image1.size().width, image1.size().height));
//perspectiveTransform(Mat(fourPoint), destination, homography);
//- Get points used to determine Htr
double min_x, min_y, tam_x, tam_y;
float min_x1, min_x2, min_y1, min_y2, max_x1, max_x2, max_y1, max_y2;
min_x1 = min(fourPoint.at(0).x, fourPoint.at(1).x);
min_x2 = min(fourPoint.at(2).x, fourPoint.at(3).x);
min_y1 = min(fourPoint.at(0).y, fourPoint.at(1).y);
min_y2 = min(fourPoint.at(2).y, fourPoint.at(3).y);
max_x1 = max(fourPoint.at(0).x, fourPoint.at(1).x);
max_x2 = max(fourPoint.at(2).x, fourPoint.at(3).x);
max_y1 = max(fourPoint.at(0).y, fourPoint.at(1).y);
max_y2 = max(fourPoint.at(2).y, fourPoint.at(3).y);
min_x = min(min_x1, min_x2);
min_y = min(min_y1, min_y2);
tam_x = max(max_x1, max_x2);
tam_y = max(max_y1, max_y2);
//- Htr used to map image one to the result, in line with the already warped image
Mat Htr = Mat::eye(3,3,CV_64F);
if (min_x < 0){
tam_x = image2.size().width - min_x;
Htr.at<double>(0,2)= -min_x;
}
if (min_y < 0){
tam_y = image2.size().height - min_y;
Htr.at<double>(1,2)= -min_y;
}
result = Mat(Size(tam_x*2,tam_y*2), CV_8UC3,cv::Scalar(0,0,0));
warpPerspective(image2, result, Htr, result.size(), INTER_LINEAR, BORDER_TRANSPARENT, 0);
warpPerspective(image1, result, (Htr*homography), result.size(), INTER_LINEAR, BORDER_TRANSPARENT,0);
//-- Variables to hold the keypoints at the respective stages
vector<Point2f> kp1Local,kp2Local;
vector<KeyPoint> kp1updated, kp2updated;
//Localize the keypoints to allow for perspective change
KeyPoint::convert(kp1, kp1Local);
KeyPoint::convert(kp2, kp2Local);
//perform perspective transform on the keypoints of type vector<Point2f>
perspectiveTransform(kp1Local, kp1Local, (Htr));
perspectiveTransform(kp2Local, kp2Local, (Htr*homography));
//convert keypoints back to type vector<KeyPoint>
for( size_t i = 0; i < kp1Local.size(); i++ ) {
kp1updated.push_back(KeyPoint(kp1Local[i], 1.f));
}
for( size_t i = 0; i < kp2Local.size(); i++ ) {
kp2updated.push_back(KeyPoint(kp2Local[i], 1.f));
}
//Add to master of list of keypoints to be passed along during next iteration of image
keypoints_updated.reserve(kp1updated.size() + kp2updated.size());
keypoints_updated.insert(keypoints_updated.end(),kp1updated.begin(),kp1updated.end());
keypoints_updated.insert(keypoints_updated.end(),kp2updated.begin(),kp2updated.end());
//warpPerspective of descriptors to match that of the images and corresponding keypoints
Mat desc1New, desc2New;
warpPerspective(desc2, desc2New, Htr, result.size(), INTER_LINEAR, BORDER_TRANSPARENT, 0);
warpPerspective(desc1, desc1New, (Htr*homography), result.size(), INTER_LINEAR, BORDER_TRANSPARENT,0);
//create a new Mat including the descriptors from desc1 and desc2
descriptors_updated.push_back(desc1New);
descriptors_updated.push_back(desc2New);
//------------TEST TO see if keypoints have moved
Mat img_keypoints;
drawKeypoints( result, keypoints_updated, img_keypoints, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
imshow("Keypoints 1", img_keypoints );
waitKey();
destroyAllWindows();
return {result, keypoints_updated, descriptors_updated};
}
The following code is my master stitching program that does the actual stitching.
tuple<Mat,vector<KeyPoint>,Mat> stitch(Mat img1,Mat img2 ,vector<KeyPoint> keypoints, Mat descriptors, String featureDetection,String featureExtractor,String keypointsMatcher,String showMatches){
Mat desc, desc1, desc2, homography, result, croppedResult,descriptors_updated;
std::vector<KeyPoint> keypoints_updated, kp1, kp2;
std::vector<DMatch> matches;
//-Base Case[2]
if (keypoints.empty()){
//-Detect Keypoints and their descriptors
tie(kp1,desc1) = KeyPointDescriptor(img1, featureDetection,featureExtractor);
tie(kp2,desc2) = KeyPointDescriptor(img2, featureDetection,featureExtractor);
//Find matches and calculated homography based on keypoints and descriptors
std::tie(matches,homography) = matchFeatures(kp1, desc1,kp2, desc2, keypointsMatcher);
//draw matches if requested
if(showMatches == "true"){
drawMatchedImages( img1, kp1, img2, kp2, matches);
}
//stitch the images and update the keypoint and descriptors
std::tie(result,keypoints_updated,descriptors_updated) = stitchMatches(img1, img2, homography,kp1,kp2,desc1,desc2);
//crop function using created cropping function
croppedResult = crop(result);
return {croppedResult,keypoints_updated,descriptors_updated};
}
//base case[3:n]
else{
//Get keypoints and descriptors of new image and add to respective lists
tie(kp2,desc2) = KeyPointDescriptor(img2, featureDetection,featureExtractor);
//find matches and determine homography
std::tie(matches,homography) = matchFeatures(keypoints_updated,descriptors_updated,kp2,desc2, keypointsMatcher);
//draw matches if requested
if(showMatches == "true")
drawMatchedImages( img1, keypoints, img2, kp2, matches);
//stitch the images and update the keypoint and descriptors
tie(result,keypoints_updated,descriptors_updated) = stitchMatches(img1, img2, homography,keypoints,kp2,descriptors,desc2);
//crop function using created cropping function
croppedResult = crop(result);
return {croppedResult,keypoints_updated,descriptors_updated};
}
}
Lastly, here is the image of the keypoints that are being transformed onto the stitched image. Any help is greatly appreciated!
After combing through the code I just happened to find that I was using the wrong variable at one point! :)
I'm trying to find matching points of interest in 2 images. The final goal of this project is to build a panorama.
I have this code:
SIFT detector(0);
src1 = imread( folder + inputName1 , 1 );
cvtColor( src1, src1_gray, CV_BGR2GRAY );
// Detect first image
vector<KeyPoint> keypoints1;
detector.detect(src1_gray, keypoints1);
//Draw keypoints back to source image
drawKeypoints(src1,keypoints1,src1,Scalar::all(-1), 1);
imwrite(folder + outputName1,src1);
src2 = imread( folder + inputName2 , 1 );
cvtColor( src2, src2_gray, CV_BGR2GRAY );
// Detect second image
vector<KeyPoint> keypoints2;
detector.detect(src2_gray, keypoints2);
//Draw keypoints back to source image
drawKeypoints(src2,keypoints2,src2,Scalar::all(-1), 1);
imwrite(folder + outputName2,src2);
vector<DMatch> matches;
Mat output;
drawMatches(src1,keypoints1,src2,keypoints2,matches,output);
imwrite(folder + "matches.jpg",output);
But in the final image matches.jpg all the points are shown, and the vector matches is empty.
What am I doing wrong? I thought that only matching points would be shown in the final image, and that the vector matches would give me the coordinates for drawing lines between the points.
Or should I use RANSAC to find the matching points?
You have not matched any points. Look at this example: http://docs.opencv.org/doc/user_guide/ug_features2d.html.
You need to extract descriptors and then match them, with FLANN for instance. Then you can draw your matches ;)
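A minimal sketch of those missing steps, assuming the same OpenCV 2.x API as the question (SIFT lives in the nonfree module there) and reusing the question's keypoints1/keypoints2 and grayscale images; the distance-filtering threshold is just an illustrative choice:

// Compute SIFT descriptors for the keypoints found by the detector
SiftDescriptorExtractor extractor;
Mat descriptors1, descriptors2;
extractor.compute(src1_gray, keypoints1, descriptors1);
extractor.compute(src2_gray, keypoints2, descriptors2);

// Match the descriptors with FLANN
FlannBasedMatcher matcher;
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);

// Keep only the better matches before drawing
double minDist = 100;
for (size_t i = 0; i < matches.size(); i++)
    if (matches[i].distance < minDist) minDist = matches[i].distance;

vector<DMatch> good_matches;
for (size_t i = 0; i < matches.size(); i++)
    if (matches[i].distance <= max(2 * minDist, 0.02))
        good_matches.push_back(matches[i]);

Mat output;
drawMatches(src1, keypoints1, src2, keypoints2, good_matches, output);
imwrite(folder + "matches.jpg", output);

RANSAC (findHomography with CV_RANSAC) comes afterwards: it rejects outliers among these matches when you estimate the homography for the panorama.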
Can somebody help me with how to calculate the new positions of the keypoints in the transformed image, given that the keypoints were detected in the original image? I am using an OpenCV homography matrix and warpPerspective to produce the transformed image.
Here is the code:
...
std::vector< Point2f > points1, points2;
for( int i = 0; i < matches1.size(); i++ )
{
    points1.push_back( keypoints_input1[matches1[i].queryIdx ].pt );
    points2.push_back( keypoints_input2[matches1[i].trainIdx ].pt );
}
/* Find the Homography Matrix for current and next frame*/
Mat H1 = findHomography( points2, points1, CV_RANSAC );
/* Use the Homography Matrix to warp the images*/
cv::Mat result1;
warpPerspective(input2, result1, H1, Size(input2.cols+150, input2.rows+150),
INTER_CUBIC);
...
}
Now I want to calculate the new positions of points2 in the result1 image.
For example, in the transformed image below we know the corner points. Now I want to calculate the new positions of the keypoints, say {(x1,y1),(x2,y2),(x3,y3),...} before the transformation; how can we calculate them?
Update: OpenCV's perspectiveTransform does what I am trying to do.
Let's call I' the image obtained by warping image I using homography H.
If you extracted keypoints mi = (xi, yi, 1) in the original image I, you can get the keypoints m'i in the warped image I' using the homography transform: S * m'i = H * mi. Notice the scale factor S: if you want the keypoint coordinates in pixels, you have to scale m'i so that its third element is 1.
If you want to understand where the scale factor comes from, have a look at Homogeneous Coordinates.
Also, there is an OpenCV function that applies this transformation to an array of points: perspectiveTransform (see the documentation).
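A minimal sketch of both routes, reusing points2, H1 and result1 from the question's code; the output variable names are just illustrative:

// By hand: apply H1 to one point in homogeneous coordinates
cv::Mat m = (cv::Mat_<double>(3, 1) << points2[0].x, points2[0].y, 1.0);
cv::Mat mWarped = H1 * m;                                              // S * m' = H * m
cv::Point2f p((float)(mWarped.at<double>(0) / mWarped.at<double>(2)),  // divide by S so the
              (float)(mWarped.at<double>(1) / mWarped.at<double>(2))); // third element becomes 1

// Or let OpenCV do it for the whole vector at once
std::vector<cv::Point2f> points2Warped;
cv::perspectiveTransform(points2, points2Warped, H1);
// points2Warped[i] is the position of points2[i] in result1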
I am trying to use SURF, but I am having trouble finding a way to do so in C. The documentation only seems to cover the C++ API.
I have been able to detect SURF features:
IplImage *img = cvLoadImage("img5.jpg");
CvMat* image = cvCreateMat(img->height, img->width, CV_8UC1);
cvCvtColor(img, image, CV_BGR2GRAY);
// detecting keypoints
CvSeq *imageKeypoints = 0, *imageDescriptors = 0;
int i;
//Extract SURF points by initializing parameters
CvSURFParams params = cvSURFParams(1, 1);
CvMemStorage* storage = cvCreateMemStorage(0); // storage for the keypoint/descriptor sequences
cvExtractSURF( image, 0, &imageKeypoints, &imageDescriptors, storage, params );
printf("Image Descriptors: %d\n", imageDescriptors->total);
//draw the keypoints on the captured frame
for( i = 0; i < imageKeypoints->total; i++ )
{
CvSURFPoint* r = (CvSURFPoint*)cvGetSeqElem( imageKeypoints, i );
CvPoint center;
int radius;
center.x = cvRound(r->pt.x);
center.y = cvRound(r->pt.y);
radius = cvRound(r->size*1.2/9.*2);
cvCircle( image, center, radius, CV_RGB(0,255,0), 1, 8, 0 );
}
But I can't find the method that I need to compare the descriptors of 2 images. I found this code in C++ but I'm having trouble translating it:
// matching descriptors
BruteForceMatcher<L2<float> > matcher;
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);
// drawing the results
namedWindow("matches", 1);
Mat img_matches;
drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
imshow("matches", img_matches);
waitKey(0);
I would appreciate it if someone could point me to a descriptor matcher, or even better, let me know where I can find OpenCV documentation for the C API only.
This link might give you a hint: https://projects.developer.nokia.com/opencv/browser/opencv/opencv-2.3.1/samples/c/find_obj.cpp. Look at the function naiveNearestNeighbor.
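A minimal sketch of the idea behind naiveNearestNeighbor, written against the C API used above rather than copied from the sample. It assumes you have run cvExtractSURF on both images and that the descriptor length is taken from the sequence itself; descDistSq and nearestNeighbor are hypothetical helper names:

/* Squared Euclidean distance between two SURF descriptors */
double descDistSq(const float* a, const float* b, int len)
{
    double d = 0.0;
    int i;
    for (i = 0; i < len; i++)
    {
        double t = a[i] - b[i];
        d += t * t;
    }
    return d;
}

/* For one descriptor, find its nearest neighbour among the descriptors of the
   other image. Returns the index, or -1 if the best match is not clearly
   better than the second best (Lowe-style ratio test, as in find_obj.cpp). */
int nearestNeighbor(const float* desc, int len, CvSeq* otherDescriptors)
{
    double best = 1e30, secondBest = 1e30;
    int bestIdx = -1, i;
    for (i = 0; i < otherDescriptors->total; i++)
    {
        const float* cand = (const float*)cvGetSeqElem(otherDescriptors, i);
        double d = descDistSq(desc, cand, len);
        if (d < best)            { secondBest = best; best = d; bestIdx = i; }
        else if (d < secondBest) { secondBest = d; }
    }
    return (best < 0.6 * secondBest) ? bestIdx : -1;
}

You would call it once per descriptor of the first image, e.g. with int len = (int)(imageDescriptors1->elem_size / sizeof(float)); and nearestNeighbor((const float*)cvGetSeqElem(imageDescriptors1, i), len, imageDescriptors2). The original sample additionally skips candidates whose laplacian sign differs, which is a cheap extra filter.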
Check out the blog post from thioldhack; it contains sample code. It's for Qt, but you can easily adapt it to VC++ or anything else. You will need to match the keypoints using a k-nearest-neighbour algorithm. It covers everything you need.
The slightly longer but surest way is to compile OpenCV on your computer with debug information and just step into the C++ implementation with a debugger. You can also copy it into your project and start peeling it like an onion until you get to pure C.