Can somebody help me with calculating the new positions of keypoints in a transformed image, when the keypoints were detected in the original image? I am using an OpenCV homography matrix and warpPerspective to produce the transformed image.
Here is the code:
...
std::vector<cv::Point2f> points1, points2;
for (size_t i = 0; i < matches1.size(); i++)
{
    points1.push_back(keypoints_input1[matches1[i].queryIdx].pt);
    points2.push_back(keypoints_input2[matches1[i].trainIdx].pt);
}
/* Find the homography matrix for the current and next frame */
Mat H1 = findHomography(points2, points1, CV_RANSAC);
/* Use the homography matrix to warp the image */
cv::Mat result1;
warpPerspective(input2, result1, H1, Size(input2.cols + 150, input2.rows + 150),
                INTER_CUBIC);
...
}
Now I want to calculate the new positions of points2 in the result1 image.
For example, in the transformed image below we know the corner points. Now I want to calculate the new positions of the keypoints that were at {(x1,y1), (x2,y2), (x3,y3), ...} before the transformation. How can we calculate them?
Update: OpenCV's perspectiveTransform does what I was trying to do.
Let's call I' the image obtained by warping image I using homography H.
If you extracted keypoints mi = (xi, yi, 1) in the original image I, you can get the keypoints m'i in the warped image I' using the homography transform: S * m'i = H * mi. Notice the scale factor S: if you want the keypoint coordinates in pixels, you have to scale m'i so that its third element is 1.
If you want to understand where the scale factor comes from, have a look at Homogeneous Coordinates.
Also, there is an OpenCV function that applies this transformation to an array of points: perspectiveTransform (see the documentation).
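For reference, a minimal sketch of that call, assuming H1 and points2 from the snippet above:

#include <opencv2/opencv.hpp>
// Map the original keypoint locations into the warped image result1.
std::vector<cv::Point2f> points2_warped;
cv::perspectiveTransform(points2, points2_warped, H1);
// points2_warped[i] is now the pixel position of points2[i] in result1
// (the scale factor S is handled internally).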
I am working with OpenCV to find an object inside an image. I have the scene image and the object image, and I can detect the object with template matching, but I need to allow a rotation tolerance, which is not available with template matching. I am going to use 2D features with findHomography and decomposeHomographyMat so I can get the translation and rotation, but I want to understand how to find the decomposeHomographyMat parameter K, the input intrinsic camera calibration matrix.
Thanks
I found the solution, so I will post it.
First of all, a feature description algorithm is needed; I chose to work with the SURF algorithm.
After finding the object and the scene keypoints, you need to find the homography matrix with findHomography. After that you can calculate the angle using this code (it reads the rotation out of the top-left entries of the homography, which works when the transform is close to a rotation plus translation):
// M_PI requires <cmath> (and _USE_MATH_DEFINES on MSVC)
float a = (float)homography.at<double>(0, 0);
float b = (float)homography.at<double>(0, 1);
float theta = (float)(atan2(b, a) * (180 / M_PI)); // rotation angle in degrees
cout << theta;
For the translation: after finding the homography matrix you can find the location of the object in the image and compare the initial position with the position found:
std::vector<Point2f> obj_corners(4);
//-- Get the corners of the patch (the object to be "detected")
obj_corners[0] = Point2f(0, 0);
obj_corners[1] = Point2f((float)patch.cols, 0);
obj_corners[2] = Point2f((float)patch.cols, (float)patch.rows);
obj_corners[3] = Point2f(0, (float)patch.rows);
std::vector<Point2f> scene_corners(4);
//it will return the scene_corners
//these corners are the location of the patch in the master
perspectiveTransform(obj_corners, scene_corners, homography);
scene_corners now contains the coordinates of the object in the image.
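For completeness, a minimal sketch of the "compare the initial position with the position found" step, using the corner arrays from the snippet above (the centroid difference is just one reasonable way to read off the translation):

#include <opencv2/opencv.hpp>
#include <iostream>
// Estimate the translation as the shift between the centroid of the
// patch corners and the centroid of the corners found in the scene.
cv::Point2f objCenter(0.f, 0.f), sceneCenter(0.f, 0.f);
for (int i = 0; i < 4; i++)
{
    objCenter += obj_corners[i] * 0.25f;
    sceneCenter += scene_corners[i] * 0.25f;
}
cv::Point2f translation = sceneCenter - objCenter;
std::cout << "translation: " << translation << std::endl;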
I am trying to perform 3D reconstruction (Structure from Motion) from multiple images of planar markers. I am very new to MVG and OpenCV.
As far as I have understood, I have to do the following steps (a sketch of steps 6-7 follows the list):

1. Identify corresponding 2D corner points in the first image.
2. Calculate the camera pose of the first image using cv::solvePnP (assuming the origin to be the center of the marker).
3. Repeat 1 and 2 for the second image.
4. Estimate the relative motion of the camera by Rot_relative = R2 - R1, Trans_relative = T2 - T1.
5. Now assume the first camera to be the origin and construct the 3x4 projection matrices for both views: P1 = CameraMatrix * [I|0] (CameraMatrix is known from calibration) and P2 = CameraMatrix * [Rot_relative | Trans_relative].
6. Use the created projection matrices and 2D corner points to triangulate the 3D coordinates using cv::triangulatePoints(P1, P2, point1, point2, OutMat).
7. The 3D coordinates can be found by dividing each row of OutMat by the 4th row.
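For concreteness, here is a minimal sketch of steps 6 and 7 as I understand them (variable names are illustrative; cv::convertPointsFromHomogeneous performs the division by the 4th component):

#include <opencv2/opencv.hpp>
// pts1/pts2: matched 2D corner points in view 1 and view 2.
// P1, P2: the 3x4 projection matrices described in step 5.
std::vector<cv::Point2f> pts1, pts2;
cv::Mat P1, P2;
cv::Mat points4D; // 4xN homogeneous coordinates, one column per point
cv::triangulatePoints(P1, P2, pts1, pts2, points4D);
// Dehomogenize: divide each point by its 4th component.
cv::Mat points3D; // Nx1, 3-channel Euclidean 3D points
cv::convertPointsFromHomogeneous(points4D.t(), points3D);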
I was hoping to keep my "first view" as the origin and iterate through n views, repeating steps 1-7 (I suppose this is called global SfM).
I was hoping to get (n-1) sets of 3D corner points, all with the first view as origin, which I could then optimize using bundle adjustment.
But the result I get is very disappointing: the calculated 3D points are displaced by a huge factor.
These are my questions:
1. Is there something wrong with the steps I followed?
2. Should I use cv::findHomography() and cv::decomposeHomographyMat() to find the relative motion of the camera?
3. Should point1 and point2 in cv::triangulatePoints(P1, P2, point1, point2, OutMat) be normalized and undistorted? If yes, how should OutMat be interpreted?
Can anyone with insight into the topic please point out my mistake?
P.S. I came to the above understanding after reading "Multiple View Geometry in Computer Vision".
Please find the code snippet below:
cv::Mat Reconstruction::Triangulate(std::vector<cv::Point2f> ImagePointsFirstView,
                                    std::vector<cv::Point2f> ImagePointsSecondView)
{
    cv::Mat rVecFirstView, tVecFirstView;
    cv::Mat rVecSecondView, tVecSecondView;
    cv::Mat RotMatFirstView = cv::Mat(3, 3, CV_64F);
    cv::Mat RotMatSecondView = cv::Mat(3, 3, CV_64F);
    cv::solvePnP(RealWorldPoints, ImagePointsFirstView, cameraMatrix, distortionMatrix, rVecFirstView, tVecFirstView);
    cv::solvePnP(RealWorldPoints, ImagePointsSecondView, cameraMatrix, distortionMatrix, rVecSecondView, tVecSecondView);
    cv::Rodrigues(rVecFirstView, RotMatFirstView);
    cv::Rodrigues(rVecSecondView, RotMatSecondView);
    cv::Mat RelativeRot = RotMatFirstView - RotMatSecondView;
    cv::Mat RelativeTrans = tVecFirstView - tVecSecondView;
    cv::Mat RelativePose;
    cv::hconcat(RelativeRot, RelativeTrans, RelativePose);
    cv::Mat ProjectionMatrix_0 = cameraMatrix * cv::Mat::eye(3, 4, CV_64F);
    cv::Mat ProjectionMatrix_1 = cameraMatrix * RelativePose;
    cv::Mat X;
    cv::undistortPoints(ImagePointsFirstView, ImagePointsFirstView, cameraMatrix, distortionMatrix, cameraMatrix);
    cv::undistortPoints(ImagePointsSecondView, ImagePointsSecondView, cameraMatrix, distortionMatrix, cameraMatrix);
    cv::triangulatePoints(ProjectionMatrix_0, ProjectionMatrix_1, ImagePointsFirstView, ImagePointsSecondView, X);
    X.row(0) = X.row(0) / X.row(3);
    X.row(1) = X.row(1) / X.row(3);
    X.row(2) = X.row(2) / X.row(3);
    return X;
}
Currently I am using the following sequence:
vector<vector<Point>> contours;
1. findContours(srcMat, contours, ...)
2. convert contours to Point2f
3. findHomography(src, dst, RANSAC)
4. warpPerspective(srcMat, destMat, homo)
5. findContours
I would like to avoid step #4 while still getting the transformed coordinates, since I use some ROIs relative to the contours from the transformed Mat.
The answer to running warpPerspective on contours instead of on the whole image is to use cv::perspectiveTransform with the transformation matrix.
The limitation is that it can transform only one contour at a time. Sample below.
vector<vector<Point2f>> contours; // converted from the integer output of findContours
Mat trnsmat = getPerspectiveTransform(srcPoints, destPoints);
for (size_t i = 0; i < contours.size(); i++)
    cv::perspectiveTransform(contours[i], contours[i], trnsmat);
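One detail worth noting: findContours produces integer cv::Point contours, while perspectiveTransform needs floating-point input, so the conversion from step 2 is still required. A minimal sketch, assuming intContours came from findContours:

#include <opencv2/opencv.hpp>
// Convert integer contours to Point2f so perspectiveTransform accepts them.
std::vector<std::vector<cv::Point>> intContours; // from findContours
std::vector<std::vector<cv::Point2f>> contours;
for (const std::vector<cv::Point>& c : intContours)
    contours.push_back(std::vector<cv::Point2f>(c.begin(), c.end()));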
I assume your goal is to project the contour coordinates into the transformed space without warping the entire image?
Load the contour coordinates into a RoiMat structure and multiply it by the homography matrix computed by your findHomography call.
There is no need to warp the entire original image for that.
If you want to view the transformed ROI on the image, you could pick a few interest points (for reference) from the original image and add them to the RoiMat structure.
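If you want to do the multiplication yourself rather than call perspectiveTransform, a minimal sketch for a single point looks like this (assuming H is the 3x3 CV_64F matrix from findHomography):

#include <opencv2/opencv.hpp>
// Apply a 3x3 homography to one point via homogeneous coordinates.
cv::Point2f applyHomography(const cv::Mat& H, const cv::Point2f& p)
{
    cv::Mat pt = (cv::Mat_<double>(3, 1) << p.x, p.y, 1.0);
    cv::Mat q = H * pt;            // q = H * [x, y, 1]^T
    double w = q.at<double>(2, 0); // the scale factor
    return cv::Point2f((float)(q.at<double>(0, 0) / w),
                       (float)(q.at<double>(1, 0) / w));
}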
In Python you can isolate each contour with the code below, and afterwards do whatever processing you have to perform on each contour.
contours, hierarchy = cv2.findContours(image, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    contour_img = image[y:y+h, x:x+w]
Now you can process each contour individually (contour_img).
I have an image from which I want to take a vertical ROI, apply some transformations, and add it to another image.
I read a lot of questions and answers on StackOverflow and other forums, but I'm still stuck on this problem. For the moment I'm using the C interface of OpenCV, but I could use the C++ one if needed (I would have to write a conversion function, since I'm working with CGImageRef in Cocoa).
To get from the top image (see below) to the bottom image, I guess I have to:

1. Get the ROI on the first image;
2. Scale it down;
3. Get the intersection points on the lines between the center and the 2 circles for my "width" angle (the angle is fixed);
4. Distort the image so the corners stick to my intersection points;
5. Rotate around the center point and put it in the output image.
For the moment, I manage to do this:

1. Getting the ROI;
2. Scaling it with cvResize;
3. Getting the intersection points shouldn't be too complicated, as it is pure geometry and I already implemented it for another purpose.

But I have no idea how to distort the resulting image of my ROI, and I don't know if it is even possible in OpenCV. Would I have to use some kind of perspective correction?
Also, I've been trying the few good solutions I found on here to rotate with the rotated bounding box, but with no good results for the moment.
EDIT:
Well, I managed to do the first part of the work:

1. Getting a ROI in a base image;
2. Rotating it and placing it at a fixed distance from the center.

I used the method explained and coded in this post: https://stackoverflow.com/a/16285286/1060921
I only added a variable to set the rotation point and get my inner circle.
NB: I set the ROI BEFORE calling the method, so the ROI in the post's method is... the image size. Then I place it at the center of my final image with a cvAdd.
Here I get one-pixel slices of my camera input. What I want to do now is to distort bigger slices, for example from 2 pixels on the inner circle to 5 pixels on the outer one.
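For reference, the placement step looks roughly like this (a sketch with the C interface; the canvas size and offsets are illustrative, and slice stands for the already-rotated piece):

#include <opencv2/core/core_c.h>
// Paste a slice into a zero-initialized canvas at (x, y) using an ROI.
// slice: the (already rotated/scaled) IplImage* piece to place.
IplImage *canvas = cvCreateImage(cvSize(800, 800), IPL_DEPTH_8U, 3);
cvZero(canvas); // black background, so cvAdd only contributes the slice
int x = 400, y = 400; // illustrative placement offset
cvSetImageROI(canvas, cvRect(x, y, slice->width, slice->height));
cvAdd(canvas, slice, canvas, NULL); // operates only inside the ROI
cvResetImageROI(canvas);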
See this tutorial, which uses warpPerspective to correct perspective distortion.
EDIT: In your case warpAffine should be a better and simpler solution.
So you could do something like the following, which uses three point pairs; for a full perspective warp you would use four points with getPerspectiveTransform instead (see the sketch after the code):
Point2f srcTri[3];
Point2f dstTri[3];
Mat warp_mat(2, 3, CV_32FC1);
Mat src, warp_dst;
/// Load the image
src = imread( ... );
/// Set the dst image to the same type and size as src
warp_dst = Mat::zeros(src.rows, src.cols, src.type());
/// Set your 3 points to calculate the affine transform
srcTri[0] = Point2f(0, 0);
srcTri[1] = Point2f(src.cols - 1, 0);
srcTri[2] = Point2f(0, src.rows - 1);
dstTri[0] = Point2f(src.cols * 0.0, src.rows * 0.33);
dstTri[1] = Point2f(src.cols * 0.85, src.rows * 0.25);
dstTri[2] = Point2f(src.cols * 0.15, src.rows * 0.7);
/// Get the affine transform
warp_mat = getAffineTransform(srcTri, dstTri);
/// Apply the affine transform just found to the src image
warpAffine(src, warp_dst, warp_mat, warp_dst.size());
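And if the affine warp doesn't bend the slice enough, the four-point perspective version is nearly identical (a sketch reusing src and warp_dst from above; the destination quad values are illustrative):

/// Four-point version: map the ROI corners to the desired quadrilateral
Point2f srcQuad[4], dstQuad[4];
srcQuad[0] = Point2f(0, 0);
srcQuad[1] = Point2f(src.cols - 1, 0);
srcQuad[2] = Point2f(src.cols - 1, src.rows - 1);
srcQuad[3] = Point2f(0, src.rows - 1);
dstQuad[0] = Point2f(src.cols * 0.10f, src.rows * 0.20f);
dstQuad[1] = Point2f(src.cols * 0.90f, src.rows * 0.10f);
dstQuad[2] = Point2f(src.cols * 0.80f, src.rows * 0.95f);
dstQuad[3] = Point2f(src.cols * 0.15f, src.rows * 0.85f);
/// Get and apply the perspective transform
Mat persp_mat = getPerspectiveTransform(srcQuad, dstQuad);
warpPerspective(src, warp_dst, persp_mat, warp_dst.size());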
I'm trying to stitch 2 images, just as a start for panography. I've already found keypoints and found the homography using RANSAC, but I can't figure out how to align these 2 images (I'm new to OpenCV). Here is part of the code:
vector<Point2f> points1, points2;
for (size_t i = 0; i < good_matches.size(); i++)
{
    //-- Get the keypoints from the good matches
    points1.push_back(keypoints1[good_matches[i].queryIdx].pt);
    points2.push_back(keypoints2[good_matches[i].trainIdx].pt);
}
/* Find homography */
Mat H = findHomography(Mat(points2), Mat(points1), CV_RANSAC);
/* Warp the image */
warpPerspective(mImg2, warpImage2, H, Size(mImg2.cols * 2, mImg2.rows * 2), INTER_CUBIC);
Now I need to stitch Mat mImg1, where the first image is loaded, and Mat warpImage2, which holds the warped second image. Can you please show me how to do that? I also know the warped image gets cut off and that I will have to change the homography matrix, but for now I just need to align these two images. Thank you for helping.
Edit: With Martin Beckett's help I added this code:
// Create a new larger image; zero-initialize it so unused areas stay black
Mat final = Mat::zeros(Size(mImg2.cols * 2 + mImg1.cols, mImg2.rows * 2), CV_8UC3);
// Point cv::Mat headers at the regions of interest (no allocation is done)
Mat roi1(final, Rect(0, 0, mImg1.cols, mImg1.rows)); // the size of img1
Mat roi2(final, Rect(0, 0, warpImage2.cols, warpImage2.rows));
warpImage2.copyTo(roi2);
mImg1.copyTo(roi1);
imshow("final", final);
and it's working now
You create a new, larger image of the correct combined size, then make ROIs the size of the existing images at the positions you want them in the final image, and copy the existing images into those ROIs.