Rotation and translation of an object - OpenCV C++

I am working with OpenCV to find an object inside an image: I have the scene image and the object image. I could detect the object with template matching, but I need some rotation tolerance, which template matching does not provide. I am going to use 2D features to detect keypoints, then findHomography and decomposeHomographyMat, so I can get the translation and rotation. What I want to understand is how to find the decomposeHomographyMat parameter K, the input intrinsic camera calibration matrix.
Thanks

I found the solution, so I will post it.
First of all, a feature description algorithm is needed; I chose to work with the SURF algorithm.
After finding the object and scene keypoints, you need to compute the homography matrix with findHomography (a minimal sketch of this detection and matching step is given at the end of this answer). You can then calculate the rotation angle with this code:
float a = homography.at<double>(0, 0);
float b = homography.at<double>(0, 1);
float theta = atan2(b, a) * (180 / M_PI);
cout << theta;
For the translation: after finding the homography matrix, you can find the location of the object in the scene image and compare its initial position with the position found:
std::vector<Point2f> obj_corners(4);
//-- Get the corners from the image_1 ( the object to be "detected" )
obj_corners[0] = Point2f(0, 0);
obj_corners[1] = Point2f((float)patch.cols, 0);
obj_corners[2] = Point2f((float)patch.cols, (float)patch.rows);
obj_corners[3] = Point2f(0, (float)patch.rows);
std::vector<Point2f> scene_corners(4);
//-- perspectiveTransform returns scene_corners: the location of the patch (object) in the scene (master) image
perspectiveTransform(obj_corners, scene_corners, homography);
The scene_corners vector then contains the coordinates of the object in the scene image.
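For completeness, here is a minimal sketch of the keypoint detection and homography estimation step. It assumes OpenCV 3+ with the xfeatures2d (opencv_contrib) module for SURF, the object image in the Mat named patch, and the scene image in a Mat I call scene here:
// Detect SURF keypoints/descriptors in the object (patch) and in the scene image
Ptr<xfeatures2d::SURF> surf = xfeatures2d::SURF::create(400);
std::vector<KeyPoint> kpObj, kpScene;
Mat descObj, descScene;
surf->detectAndCompute(patch, noArray(), kpObj, descObj);
surf->detectAndCompute(scene, noArray(), kpScene, descScene);

// Match descriptors and collect the corresponding point pairs
BFMatcher matcher(NORM_L2);
std::vector<DMatch> matches;
matcher.match(descObj, descScene, matches);
std::vector<Point2f> objPts, scenePts;
for (size_t i = 0; i < matches.size(); i++) {
    objPts.push_back(kpObj[matches[i].queryIdx].pt);
    scenePts.push_back(kpScene[matches[i].trainIdx].pt);
}

// Robustly estimate the homography that maps the object onto the scene
Mat homography = findHomography(objPts, scenePts, RANSAC);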

Related

3D Reconstruction of Planar Markers using OpenCV

I am trying to perform 3D reconstruction (Structure from Motion) from multiple images of planar markers. I am very new to MVG and OpenCV.
As far as I have understood, I have to do the following steps:
1. Identify corresponding 2D corner points in the first image.
2. Calculate the camera pose of the first image using cv::solvePnP (assuming the origin to be the center of the marker).
3. Repeat 1 and 2 for the second image.
4. Estimate the relative motion of the camera by Rot_relative = R2 - R1, Trans_relative = T2 - T1.
5. Now, assuming the first camera to be the origin, construct the 3x4 projection matrices for both views: P1 = [I|0]*CameraMatrix (known by calibration) and P2 = [Rot_relative | Trans_relative].
6. Use the created projection matrices and 2D corner points to triangulate the 3D coordinates using cv::triangulatePoints(P1, P2, point1, point2, OutMat).
7. The 3D coordinates can be found by dividing each row of OutMat by the 4th row.
I was hoping to keep my "First View" as the origin and iterate through n views, repeating steps 1-7 (I suppose this is called global SfM). I was hoping to get (n-1) sets of 3D corner points, all with the first view as the origin, which we could then optimize using bundle adjustment.
But the result I get is very disappointing: the calculated 3D points are displaced by a huge factor.
These are my questions:
1. Is there something wrong with the steps I followed?
2. Should I use cv::findHomography() and cv::decomposeHomographyMat() to find the relative motion of the camera?
3. Should point1 and point2 in cv::triangulatePoints(P1, P2, point1, point2, OutMat) be normalized and undistorted? If yes, how should the OutMat be interpreted?
Can anyone with insight into the topic point out my mistake?
P.S. I have come to the above understanding after reading "Multiple View Geometry in Computer Vision".
Please find the code snippet below:
cv::Mat Reconstruction::Triangulate(std::vector<cv::Point2f> ImagePointsFirstView,
                                    std::vector<cv::Point2f> ImagePointsSecondView)
{
    cv::Mat rVectFirstView, tVecFirstView;
    cv::Mat rVectSecondView, tVecSecondView;
    cv::Mat RotMatFirstView = cv::Mat(3, 3, CV_64F);
    cv::Mat RotMatSecondView = cv::Mat(3, 3, CV_64F);

    // Pose of each view from the known marker corners
    cv::solvePnP(RealWorldPoints, ImagePointsFirstView, cameraMatrix, distortionMatrix, rVectFirstView, tVecFirstView);
    cv::solvePnP(RealWorldPoints, ImagePointsSecondView, cameraMatrix, distortionMatrix, rVectSecondView, tVecSecondView);
    cv::Rodrigues(rVectFirstView, RotMatFirstView);
    cv::Rodrigues(rVectSecondView, RotMatSecondView);

    // Relative motion, as described in the steps above (element-wise difference)
    cv::Mat RelativeRot = RotMatFirstView - RotMatSecondView;
    cv::Mat RelativeTrans = tVecFirstView - tVecSecondView;
    cv::Mat RelativePose;
    cv::hconcat(RelativeRot, RelativeTrans, RelativePose);

    // Projection matrices for the two views
    cv::Mat ProjectionMatrix_0 = cameraMatrix * cv::Mat::eye(3, 4, CV_64F);
    cv::Mat ProjectionMatrix_1 = cameraMatrix * RelativePose;

    // Undistort the image points and triangulate
    cv::Mat X;
    cv::undistortPoints(ImagePointsFirstView, ImagePointsFirstView, cameraMatrix, distortionMatrix, cameraMatrix);
    cv::undistortPoints(ImagePointsSecondView, ImagePointsSecondView, cameraMatrix, distortionMatrix, cameraMatrix);
    cv::triangulatePoints(ProjectionMatrix_0, ProjectionMatrix_1, ImagePointsFirstView, ImagePointsSecondView, X);

    // Convert from homogeneous coordinates
    X.row(0) = X.row(0) / X.row(3);
    X.row(1) = X.row(1) / X.row(3);
    X.row(2) = X.row(2) / X.row(3);
    return X;
}
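For comparison, with the convention used by solvePnP (x_cam = R*X + t), the relative pose between two views is usually composed as R_rel = R2*R1^T and t_rel = t2 - R_rel*t1 rather than subtracted. A minimal sketch of that composition, reusing the variable names from the snippet above:
// Relative pose of the second view with respect to the first, composed from the two solvePnP results
cv::Mat R_rel = RotMatSecondView * RotMatFirstView.t();
cv::Mat t_rel = tVecSecondView - R_rel * tVecFirstView;
cv::Mat RelativePose;
cv::hconcat(R_rel, t_rel, RelativePose);   // 3x4 matrix [R_rel | t_rel]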

OpenCV Center Homography

I am trying to create a stitching algorithm. I have been successful in creating it, with a few tweaks still needed. The photos below are examples of my stitching program so far. I am able to provide it with an unordered list of images (as long as each image is in the flight path, or side by side, it will work regardless of their orientation to one another).
The issue is that if the images are reversed, some of the image doesn't make it into the final product. Here is the code for the actual stitching. Assume that finding keypoints, matching, and the homography are done correctly.
By altering this code, is there a way to centre the first image on the destination blank image and still stitch to it? Also, I got this code from Stack Overflow (Opencv Image Stitching or Panorama) and am not fully sure how it works; I would love it if someone could explain it.
Thanks for any help in advance!
Mat stitchMatches(Mat image1, Mat image2, Mat homography){
    Mat result;
    vector<Point2f> fourPoint;
    //-- Get the four corners of the first image (master)
    fourPoint.push_back(Point2f(0, 0));
    fourPoint.push_back(Point2f(image1.size().width, 0));
    fourPoint.push_back(Point2f(0, image1.size().height));
    fourPoint.push_back(Point2f(image1.size().width, image1.size().height));
    Mat destination;
    perspectiveTransform(Mat(fourPoint), destination, homography);

    double min_x, min_y, tam_x, tam_y;
    float min_x1, min_x2, min_y1, min_y2, max_x1, max_x2, max_y1, max_y2;
    min_x1 = min(fourPoint.at(0).x, fourPoint.at(1).x);
    min_x2 = min(fourPoint.at(2).x, fourPoint.at(3).x);
    min_y1 = min(fourPoint.at(0).y, fourPoint.at(1).y);
    min_y2 = min(fourPoint.at(2).y, fourPoint.at(3).y);
    max_x1 = max(fourPoint.at(0).x, fourPoint.at(1).x);
    max_x2 = max(fourPoint.at(2).x, fourPoint.at(3).x);
    max_y1 = max(fourPoint.at(0).y, fourPoint.at(1).y);
    max_y2 = max(fourPoint.at(2).y, fourPoint.at(3).y);
    min_x = min(min_x1, min_x2);
    min_y = min(min_y1, min_y2);
    tam_x = max(max_x1, max_x2);
    tam_y = max(max_y1, max_y2);

    Mat Htr = Mat::eye(3, 3, CV_64F);
    if (min_x < 0){
        tam_x = image2.size().width - min_x;
        Htr.at<double>(0, 2) = -min_x;
    }
    if (min_y < 0){
        tam_y = image2.size().height - min_y;
        Htr.at<double>(1, 2) = -min_y;
    }

    result = Mat(Size(tam_x * 2, tam_y * 2), CV_32F);
    warpPerspective(image2, result, Htr, result.size(), INTER_LINEAR, BORDER_CONSTANT, 0);
    warpPerspective(image1, result, (Htr * homography), result.size(), INTER_LINEAR, BORDER_TRANSPARENT, 0);
    return result;
}
It's normally easy to center an image: you simply create a bigger matrix padded with zeros (or whatever color you want), define an ROI in the center with the same size as your image, and place the image there. However, you cannot in general do this with your two images. The problem is that if an image is shifted or rotated so that parts of it lie outside your destination image bounds, then the warped image returned by warpPerspective is cut off at those bounds. What you need to do is create the padded image, insert the image that is not being warped wherever you like, and modify the transformation (the homography, in this case) by adding in the translation to those pixels.
For example, if your centered image has its top-left point at (400, 500) in the padded image, then you need to add a translation of (400, 500) to your homography so the pixels get mapped to the correct space, and as long as your padded image is large enough, none of it will be cut off.
You will need to create a translational homography and compose it with your original homography to add the translation in. For example, suppose your anchor point for the non-warped image inside the padded image is at (x, y). Translation in a homography is given by the last column; if your homography is a 3x3 matrix H, then (using normal mathematical indexing) H(1,3) is the translation in x and H(2,3) is the translation in y. So we create a new identity homography H_t and add those translations in:
      [ 1 0 x ]
H_t = [ 0 1 y ]
      [ 0 0 1 ]
Then you can compose this with your original homography H (using matrix multiplication): H_n = H_t * H. Using the new homography H_n, you can warp the image into the padded space with that added translation moving it to the correct spot, using warpPerspective as usual.
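A minimal sketch of that composition (x, y, and the image/canvas variable names here are placeholders):
// Build the translational homography H_t and compose it with the original H
Mat H_t = Mat::eye(3, 3, CV_64F);
H_t.at<double>(0, 2) = x;   // shift in x, e.g. 400
H_t.at<double>(1, 2) = y;   // shift in y, e.g. 500
Mat H_n = H_t * H;          // the translation is applied after the original warp

// Warp into the padded canvas; BORDER_TRANSPARENT preserves the image already placed there
warpPerspective(image_to_warp, padded, H_n, padded.size(), INTER_LINEAR, BORDER_TRANSPARENT);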
You can also automate this to pad the image precisely as much as it needs, so that you don't have excess padding and the padding will stretch only as needed. See my answer here for a detailed explanation of how to calculate that and warp your images into the padded space.

OpenCV Calculation of Distortion Coefficients (Uncalibrated) to Undistort

I'm new to OpenCV, using version 2.4.9.
I am trying to generate a 3D projection of points from a sequence of images without any knowledge of the camera parameters, and I do not have the camera with me to calibrate. The camera used had a fish-eye lens.
I used goodFeaturesToTrack() to detect feature points, followed by the LK implementation in OpenCV to track them through the sequence of images. Using these points I was able to estimate the fundamental matrix with findFundamentalMat() and used stereoRectifyUncalibrated() to generate the rectification homography matrices H1 and H2. Then I computed the rotation matrix R from H as
R = cameraMatrix^{-1} * H * cameraMatrix
Now I need to undistort my images after rectification, either with initUndistortRectifyMap() and remap() or directly with undistort(), but both functions also require the distortion coefficients to compute the corrected image.
I tried to find various methods to estimate those parameters: the documentation of the camera model is not made available by the manufacturer, and I could not find any method other than calibrating the camera with a chessboard or circles grid.
How do I do it?
Am I doing it right?
Is there any other better method?
Can someone kindly help?
Thanks in Advance.
// Code
// Fundamental matrix
Mat fundamental_matrix = findFundamentalMat(points[0], points[1], FM_RANSAC, 3, 0.99);
cout << "F:\n" << fundamental_matrix << endl;

// Rectification homographies
Mat H1, H2, F;
F = fundamental_matrix;
stereoRectifyUncalibrated(points[0], points[1], F, image.size(), H1, H2, 3);
cout << "H1:\n" << H1 << endl;
cout << "H2:\n" << H2 << endl;

// Calculating the rotation matrix from the homographic maps
Mat fInv = fundamental_matrix.inv();
R = (fInv) * H1 * fundamental_matrix;

// Mat distCoeffs = Mat::zeros(8, 1, CV_64F);
initUndistortRectifyMap(fundamental_matrix, distCoeffs, R, fundamental_matrix, image.size(), CV_32FC1, map1, map2);
// How to compute distCoeffs without a camera nor prior knowledge. Thank you.
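The tracking step described above (goodFeaturesToTrack() plus LK) is not shown in the snippet; a minimal sketch of how points[0] and points[1] could be produced (prevGray, nextGray, and the parameter values are assumptions):
// Detect corners in the first grayscale frame and track them into the next frame
vector<Point2f> prevPts, nextPts;
vector<uchar> status;
vector<float> err;
goodFeaturesToTrack(prevGray, prevPts, 500, 0.01, 10);
calcOpticalFlowPyrLK(prevGray, nextGray, prevPts, nextPts, status, err);

// Keep only successfully tracked pairs for findFundamentalMat()
vector<Point2f> points[2];
for (size_t i = 0; i < status.size(); i++) {
    if (status[i]) {
        points[0].push_back(prevPts[i]);
        points[1].push_back(nextPts[i]);
    }
}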

Calculating new positions of keypoints

Can somebody help me with how to calculate the new positions of keypoints in the transformed image, when the keypoints were detected in the original image? I am using an OpenCV homography matrix and warpPerspective to make the transformed image.
Here is the code:
...
std::vector<Point2f> points1, points2;
for (int i = 0; i < matches1.size(); i++)
{
    points1.push_back(keypoints_input1[matches1[i].queryIdx].pt);
    points2.push_back(keypoints_input2[matches1[i].trainIdx].pt);
}

/* Find the homography matrix for the current and next frame */
Mat H1 = findHomography(points2, points1, CV_RANSAC);

/* Use the homography matrix to warp the images */
cv::Mat result1;
warpPerspective(input2, result1, H1, Size(input2.cols + 150, input2.rows + 150),
                INTER_CUBIC);
...
}
Now I want to calculate the new positions of points2 in the result1 image.
For example, in the transformed image below, we know the corner points. Now I want to calculate the new positions of the keypoints that were, before the transformation, at {(x1,y1), (x2,y2), (x3,y3), ...}. How can we calculate them?
Update: OpenCV's perspectiveTransform does what I was trying to do.
Let's call I' the image obtained by warping image I using the homography H.
If you extracted keypoints mi = (xi, yi, 1) in the original image I, you can get the keypoints m'i in the warped image I' using the homography transform: S * m'i = H * mi. Notice the scale factor S: if you want the keypoint coordinates in pixels, you have to scale m'i so that its third element is 1.
If you want to understand where the scale factor comes from, have a look at Homogeneous Coordinates.
Also, there is an OpenCV function to apply this transformation to an array of points: perspectiveTransform (documentation).
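A minimal sketch, reusing H1, points2, and result1 from the snippet in the question:
// Map the matched keypoints of input2 into the coordinate frame of result1,
// using the same homography that was passed to warpPerspective
std::vector<Point2f> points2_warped;
perspectiveTransform(points2, points2_warped, H1);
// points2_warped[i] is the new pixel position of points2[i] in result1;
// perspectiveTransform handles the division by the scale factor S internally.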

OpenCV rotate, distort and translate ROI in new image

I have an image from which I want to get a vertical ROI, apply some transformations to it, and add it to another image.
I read a lot of questions and answers on Stack Overflow and other forums, but I'm still stuck with this problem. For the moment I'm using the C interface of OpenCV, but I could use the C++ one if needed (I would have to write a conversion function, since I'm working with CGImageRef in Cocoa).
To get from the top image (see below) to the bottom image, I guess I have to:
Get the ROI on the first image;
Scale it down;
Get the intersection points on the lines between the center and the 2 circles for my "width" angle (the angle is fixed);
Distort the image so the corners stick to my intersection points;
Rotate it around the center point and put it in the output image.
For the moment, I manage to do this well:
Getting the ROI;
Scaling it with cvResize;
Getting the intersection points, which shouldn't be too complicated, as it is pure geometry and I have already implemented it for another purpose.
But I have no idea at all how to distort the resulting image of my ROI, and I don't know if it is even possible in OpenCV. Would I have to use some kind of perspective correction?
Also, I've been trying the solutions from the few good posts I found here to rotate with the rotated bounding box, but with no good results so far.
EDIT:
Well, I managed to do the first part of the work:
Getting a ROI in a base image;
Rotating it and placing it at a fixed distance from the center.
I used the method explained and coded in this post: https://stackoverflow.com/a/16285286/1060921
I only added a variable to set the rotation point and get my inner circle.
NB: I set the ROI BEFORE calling the method, so the ROI in the post's method is... the image size. Then I place it at the center of my final image with cvAdd.
Here I get one-pixel slices of my camera input. What I want to do now is to distort bigger slices, for example from 2 pixels on the inner circle to 5 pixels on the outer one.
See this tutorial, which uses warpPerspective to correct perspective distortion.
EDIT: In your case warpAffine should be a better and simpler solution.
So, you could do something like this; just use four points instead of three (a sketch of the four-point version is added after the code below):
Point2f srcTri[3];
Point2f dstTri[3];
Mat rot_mat( 2, 3, CV_32FC1 );
Mat warp_mat( 2, 3, CV_32FC1 );
Mat src, warp_dst, warp_rotate_dst;
/// Load the image
src = imread( ... );
/// Set the dst image the same type and size as src
warp_dst = Mat::zeros( src.rows, src.cols, src.type() );
/// Set your 3 points to calculate the Affine Transform
srcTri[0] = Point2f( 0,0 );
srcTri[1] = Point2f( src.cols - 1, 0 );
srcTri[2] = Point2f( 0, src.rows - 1 );
dstTri[0] = Point2f( src.cols*0.0, src.rows*0.33 );
dstTri[1] = Point2f( src.cols*0.85, src.rows*0.25 );
dstTri[2] = Point2f( src.cols*0.15, src.rows*0.7 );
/// Get the Affine Transform
warp_mat = getAffineTransform( srcTri, dstTri );
/// Apply the Affine Transform just found to the src image
warpAffine( src, warp_dst, warp_mat, warp_dst.size() );
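A minimal sketch of the four-point (perspective) variant mentioned above; the destination corner values are placeholders to illustrate the call:
/// Four source corners (the whole image) and four destination corners (a trapezoid)
Point2f srcQuad[4], dstQuad[4];
srcQuad[0] = Point2f( 0, 0 );
srcQuad[1] = Point2f( src.cols - 1, 0 );
srcQuad[2] = Point2f( src.cols - 1, src.rows - 1 );
srcQuad[3] = Point2f( 0, src.rows - 1 );
dstQuad[0] = Point2f( src.cols*0.25, 0 );
dstQuad[1] = Point2f( src.cols*0.75, 0 );
dstQuad[2] = Point2f( src.cols*1.00, src.rows - 1 );
dstQuad[3] = Point2f( src.cols*0.00, src.rows - 1 );
/// Get the perspective transform and apply it
Mat persp_mat = getPerspectiveTransform( srcQuad, dstQuad );
warpPerspective( src, warp_dst, persp_mat, warp_dst.size() );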