projection matrix from homography - c++

I'm working on stereo-vision with the stereoRectifyUncalibrated() method under OpenCV 3.0.
I calibrate my system with the following steps:
Detect and match SURF feature points between images from 2 cameras
Apply findFundamentalMat() to the matched pairs
Get the rectifying homographies with stereoRectifyUncalibrated().
For each camera, I compute a rotation matrix as follows:
R1 = cameraMatrix[0].inv()*H1*cameraMatrix[0];
To compute 3D points, I need the projection matrices, but I don't know how to estimate the translation vector.
I tried decomposeHomographyMat() and this solution https://stackoverflow.com/a/10781165/3653104 but the rotation matrix is not the same as what I get with R1.
When I check the rectified images with R1/R2 (using initUndistortRectifyMap() followed by remap()), the result seems correct (I checked with epipolar lines).
I am a little lost given my weak knowledge in vision, so I would be grateful if somebody could explain this to me. Thank you :)

The code in the link that you have provided (https://stackoverflow.com/a/10781165/3653104) computes not the rotation but the full 3x4 pose [R|t] of the camera.
The last column of that pose is your translation vector.

Related

How to find the angle formed by blades of a wind turbine when the yaw is changed?

This is a continuation of the question from here: How to find the angle formed by the blades of a wind turbine with respect to a horizontal imaginary axis?
I've decided to use the following methodology for this:
 Grab a frame from the camera inside a loop.
 Perform Canny edge detection.
 Perform HoughLinesP to detect lines in the image.
Finding Blade Angle:
 Perform the Probabilistic Hough Line Transform on the image, restricting the detected lines to the known length of the blades.
 The returned value holds the start and end points of the detected lines. Since there is no background noise, these are exactly the start and end points of the blade lines.
 Now, compute the angle of each blade line by taking the dot product of its direction vector with the vector (1,0), or simply use atan2 on the segment endpoints to get each line's angle relative to the horizontal.
Problem:
When the yaw angle of the turbine is changed and it is not directly facing the camera, how do I calculate the blade angle formed?
The idea is basically to map the angles, when the turbine is rotated, back into the form seen when viewing it head on. From what I've been able to understand, I would find the homography matrix, decompose it to get the rotation, convert that to Euler angles to calculate the shift from the original axis, and then shift all the axes by that angle. However, it's just a vague idea with no concrete plan to build on.
Or should I begin by trying to find the projection matrix, and then get the camera matrix and rotation matrix? I am completely lost on this account and feel overwhelmed by the many functions...
Other things I came across were perspectiveTransform and solvePnP.
It would be great if anyone could suggest another way to deal with this. Any links or code snippets would be helpful. I'm not that familiar with OpenCV and would be grateful for any help.
Thanks!
Edit:
[Edit by Spektre]
Assume the tips of the blades plus the center (or the three "roots" of the blades) lie on a common plane.
Fit a homography between those points and the corresponding ones in a reference pose for the turbine (cv::findHomography in OpenCV).
Decompose the homography into rotation and translation using an estimated or assumed camera calibration (cv::decomposeHomographyMat).
Convert the rotation into Euler angles.

Obtaining world coordinates of an object from its image coordinates

I have been following this documentation to use OpenCV. In the formula below, I have successfully calculated both the intrinsic and the extrinsic matrices (I made use of the solvePnP() procedure to obtain them). Since the object is lying on the ground, I substituted Z = 0. Then I removed the third column of the extrinsic matrix and multiplied it with the intrinsic matrix to obtain a 3x3 projection matrix. I took its inverse and multiplied it by the image coordinates, i.e. (su, sv, s).
However, all points in world coordinates are off by 1 mm or less, so the coordinates I get are not as accurate as I need. Does anyone know where I might be going wrong?
Thanks
The camera calibration will probably always be somewhat inaccurate, because with more than 2 calibration images, instead of getting the one true solution to the equation system acquired from the calibration images, you get the solution with the smallest error.
The same goes for cv::solvePnP(): you use one of three methods of optimising over the many possible solutions to the given equation system.
I also do not understand how you got the intrinsic and extrinsic matrices from cv::solvePnP(), which is used to calculate the rotation and translation of the object in the camera coordinate system.
What You can do:
Try to get better intrinsic parameters
Try other methods for solvePnP, like EPnP, or check the RANSAC version (solvePnPRansac)

3D reconstruction from 2 images with baseline and single camera calibration

My semester project is to calibrate stereo cameras with a big baseline (~2 m).
So my approach is to work without an exactly defined calibration pattern like the chessboard, because it would have to be huge and hard to handle.
My problem is similar to this one: 3d reconstruction from 2 images without info about the camera
Program till now:
Corner detection in the left image: goodFeaturesToTrack
Refine the corners: cornerSubPix
Find the corner locations in the right image: calcOpticalFlowPyrLK
Calculate the fundamental matrix F: findFundamentalMat
Calculate the rectification homographies H1, H2: stereoRectifyUncalibrated
Rectify the images: warpPerspective
Calculate the disparity map: SGBM
So far so good; it works passably, but the rectified images are "jumping" in perspective if I change the number of corners..
I don't know if this comes from imprecision, from mistakes I made, or whether it can't be computed properly because there are no known camera parameters and no lens distortion compensation (but it also happens on the Tsukuba pics..)
suggestions are welcome :)
But that's not my main problem: now I want to reconstruct the 3D points.
However, reprojectImageTo3D needs the Q matrix, which I don't have so far. So my question is how to calculate it? I have the baseline, the distance between the two cameras. My feeling says that if I convert the disparity map into a 3D point cloud, the only thing I'm missing is the scale, right? So if I put in the baseline I get the 3D reconstruction, right? Then how?
I'm also planning to compensate lens distortion as a first step for each camera separately with a chessboard (small and close to one camera at a time, so I don't have to stand 10-15 m away with a big pattern in the overlapping area of both cameras). So if it helps, I could also use those camera parameters..
Is there documentation besides http://docs.opencv.org where I can see and understand what the Q matrix is and how it is calculated? Or can I read the source code (probably hard for me to understand ^^)? If I press F2 in Qt I only see the function signature with the parameter types.. (sorry, I'm really new to all of this)
left: input with found corners
top, H1/H2: rectified images (they look good with this corner count ^^)
SGBM: Disparity map
So I found out what the Q matrix contains here:
Using OpenCV to generate 3d points (assuming frontal parallel configuration)
All these parameters are given by the single camera calibration:
c_x , c_y , f
and the baseline is what I have measured:
T_x
So this works for now; only the units are not that clear to me. I used c_x, c_y and f from the single camera calibration, which are in pixels, set the baseline in meters, and divided the disparity map by 16, but it seems not to be the right scale..
By the way, the disparity map above was wrong ^^ and now it looks better. You have to apply an anti-shearing transform, because stereoRectifyUncalibrated shears your image (not documented?).
This is described in section "7 Shearing Transform" of this paper by Charles Loop and Zhengyou Zhang:
http://research.microsoft.com/en-us/um/people/Zhang/Papers/TR99-21.pdf
Result:
http://i.stack.imgur.com/UkuJi.jpg

Compensate rotation from successive images

Let's say I have image1 and image2 obtained from a webcam. Between taking the two images, the webcam undergoes a rotation (yaw, pitch, roll) and a translation.
What I want: remove the rotation from image2 so that only the translation remains (to be precise: my tracked (x,y) points from image2 should be rotated back to the same values as in image1 so that only the translation component remains).
What I have done/tried so far:
I tracked corresponding features from image1 and image2.
Calculated the fundamental matrix F with RANSAC to remove outliers.
Calibrated the camera so that I got a CAM_MATRIX (fx, fy and so on).
Calculated the essential matrix E from F with CAM_MATRIX (E = cam_matrix^T * F * cam_matrix)
Decomposed the E matrix with OpenCV's SVD function so that I have a rotation matrix and translation vector.
I know that there are 4 combinations and only 1 is the right rotation matrix/translation vector.
So my thought was: I know that the camera movement from image1 to image2 won't be more than, let's say, about 20° per axis, so I can eliminate at least 2 possibilities where the angles are too far off.
For the 2 remaining ones I have to triangulate the points and see which one is correct (I have read that I only need 1 point, but due to possible errors/outliers it should be done with a few more to be sure). I think I could use OpenCV's triangulation function for this? Is my thinking right so far? Do I need to calculate the reprojection error?
Let's move on and assume that I finally obtained the right R|t matrix.
How do I continue? I tried multiplying a tracked point from image2 by the rotation matrix, as well as by the transposed rotation matrix, which should reverse the rotation (?) (for testing purposes I just tried both possible combinations of R|t; I have not done the triangulation in code yet), but the calculated point is way too far off from what it should be. Do I need the calibration matrix here as well?
So how can I invert the rotation applied to image2? (to be exact, apply the inverse rotation to my std::vector<cv::Point2f> array which contains the tracked (x,y) points from image2)
Displaying the de-rotated image would be also nice to have. This is done with warpPerspective function? Like in this post ?
(I just don't fully understand what the purpose of A1/A2 and dist in the T matrix is, or how I can adapt that solution to my problem.)

warping images and problem with the matrix of Homography opencv

I have a problem with the results I'm getting from the function cvFindHomography().
It gives negative coordinates; I will explain what I'm doing.
I'm working on video stabilization using an optical flow method. I have estimated the locations of the features in the first and second frames. Now my aim is to warp the images in order to stabilize them. Before this step I have to calculate the homography matrix between the frames, for which I used the function mentioned above, but I'm getting results that don't seem realistic, because they contain negative values, and these values can change to even weirder results:
0.482982 53.5034 -0.100254
-0.000865877 63.6554 -0.000213824
-0.0901095 0.301558 1
After obtaining these results I run into a problem when applying them for image warping with cvWarpPerspective(). The error indicates that there is a problem with the matrices: incorrect transforming from "cvarrTomat"?
So where is the problem? Can you give me another suggestion if it's available?
Notice: if you can help me about implementing the warping in c++ it would be great.
Thank you
A poor homography estimation can generate warping errors inside cvWarpPerspective().
The homography values you have posted show that you have a full projective transformation that moves points at infinity to points in the 2D Euclidean plane, and it could be wrong.
In video stabilization, to compute a good homography model, other features are usually used, such as Harris corners, Hessian-affine regions, or SIFT/SURF, combined with a robust model estimator such as RANSAC or LMedS.
Check out this link for a MATLAB example of video stabilization...