How can I find the angle between two chessboard planes? - c++

I have two chessboard poses obtained with solvePnP:
Mat rotationVector1, translationVector1;
solvePnP(chess1WorldPoints, chess1ImagePoints, intrinsicMatrix, distortCoefficients, rotationVector1, translationVector1);
Mat rotationVector2, translationVector2;
solvePnP(chess2WorldPoints, chess2ImagePoints, intrinsicMatrix, distortCoefficients, rotationVector2, translationVector2);
How can I check if the planes of the poses are parallel, or find the angle between these planes?
More info
I tried obtaining the Euler angles and computing the difference between each alpha, beta and gamma, but I think that only tells me the relative rotation about each axis:
Vec3d eulerAnglesPose1;
Mat rotationMatrix1;
Rodrigues(rotationVector1, rotationMatrix1);
getEulerAngles(rotationMatrix1, eulerAnglesPose1);
Vec3d eulerAnglesPose2;
Mat rotationMatrix2;
Rodrigues(rotationVector2, rotationMatrix2);
getEulerAngles(rotationMatrix2, eulerAnglesPose2);
I used the getEulerAngles implementation from Camera Rotation SolvePnp:
void getEulerAngles(Mat &rotCamerMatrix, Vec3d &eulerAngles)
{
    Mat cameraMatrix, rotMatrix, transVect, rotMatrixX, rotMatrixY, rotMatrixZ;
    double* _r = rotCamerMatrix.ptr<double>();
    // Pack the 3x3 rotation into a 3x4 projection matrix with zero translation.
    double projMatrix[12] =
    {
        _r[0], _r[1], _r[2], 0,
        _r[3], _r[4], _r[5], 0,
        _r[6], _r[7], _r[8], 0
    };
    // decomposeProjectionMatrix returns the Euler angles (in degrees) as its last output.
    decomposeProjectionMatrix(Mat(3, 4, CV_64FC1, projMatrix), cameraMatrix, rotMatrix,
                              transVect, rotMatrixX, rotMatrixY, rotMatrixZ, eulerAngles);
}
Edit
In my case a rotation-translation pair (R, T) gives the correspondence between a coordinate system where the camera is at (0,0,0) (the camera coordinate system) and a coordinate system where (0,0,0) is something I defined in the first two parameters of solvePnP (the world coordinate system). So I have two world coordinate systems relative to the same camera coordinate system.
If I could switch from coord. system 2 to coord. system 1 I could use the Z=0 planes for each one to find the normals and solve my problem.
I think that, for example, switching from coord. system 2 to the camera system should be done like in this post:
Rinv = R' (just the transpose as it's a rotation matrix)
Tinv = -Rinv * T (T is 3x1 column vector)
Then if Pw = [X Y Z] is a point in world coord. system 2, I can get its camera system coords. with:
Pc = [ Rinv Tinv] * [X Y Z 1] transposed.
Pc looks like [a b c d]
Following the same logic again I can get the coordinates of Pc relative to coord. system 1:
Pw1 = [ R1 T1] * Pc
Should I normalize Pc or just normalize Pw1 at the end?
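For what it's worth, here is a minimal C++/OpenCV sketch of the inversion described above (Rinv = R', Tinv = -Rinv * T), packaged as a 4x4 homogeneous matrix; the function name invertPose is mine, not from the question:
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>

// Invert the (rvec, tvec) pair returned by solvePnP (world -> camera) into a
// 4x4 matrix mapping camera coordinates back to world coordinates.
cv::Mat invertPose(const cv::Mat& rvec, const cv::Mat& tvec)
{
    cv::Mat R;
    cv::Rodrigues(rvec, R);               // 3x3 rotation, world -> camera
    cv::Mat Rinv = R.t();                 // transpose == inverse for a rotation
    cv::Mat Tinv = -Rinv * tvec;          // tvec is a 3x1 CV_64F column vector

    cv::Mat M = cv::Mat::eye(4, 4, CV_64F);
    Rinv.copyTo(M(cv::Rect(0, 0, 3, 3))); // top-left 3x3 block
    Tinv.copyTo(M(cv::Rect(3, 0, 1, 3))); // last column, first three rows
    return M;                             // camera -> world, bottom row stays [0 0 0 1]
}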

I found how to translate points between coordinate systems in this OpenCV Demo.
The explanation from "Demo 3: Homography from the camera displacement" (the section spanning from the title right up to the first lines of code) shows how to translate between coordinate systems using matrix multiplication. I just had to apply it to my situation (I had CMO1 and CMO2 and needed to find O1MO2).
This way I can get two planes in the same coord. system, get their normals and find the angle between them.
Also, it helped to realize that the extrinsic matrix [R T] maps a 3D point from the world coord. system to the camera coord. system (where the camera is at (0,0,0)), not the other way around.
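Since both rotation matrices map into the same camera frame, a short way to finish is to compare the boards' Z = 0 plane normals directly in camera coordinates; this is a sketch of that idea (the helper name is mine), not necessarily the exact o1Mo2 route described above, but it gives the same angle:
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <algorithm>
#include <cmath>

// The Z = 0 plane of each chessboard has normal (0, 0, 1) in its own world
// frame, so its normal expressed in the camera frame is the third column of
// that pose's rotation matrix. The angle between the planes is the angle
// between those two normals.
double angleBetweenBoards(const cv::Mat& rotationVector1, const cv::Mat& rotationVector2)
{
    cv::Mat R1, R2;
    cv::Rodrigues(rotationVector1, R1);
    cv::Rodrigues(rotationVector2, R2);

    cv::Mat n1 = R1.col(2);                   // board 1 normal in camera coords
    cv::Mat n2 = R2.col(2);                   // board 2 normal in camera coords

    double c = n1.dot(n2);                    // columns of R are unit vectors
    c = std::max(-1.0, std::min(1.0, c));     // clamp against rounding error
    return std::acos(c);                      // radians; 0 or pi => parallel planes
}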

Related

kitti dataset camera projection matrix

I am looking at the KITTI dataset, in particular how to convert a world point into image coordinates. I looked at the README, and it says (quoted below) that I need to transform to camera coordinates first and then multiply by the projection matrix. Coming from a non-computer-vision background, I have two questions:
I looked at the numbers from calib.txt, and in particular the matrix is 3x4 with non-zero values in the last column. I always thought this matrix = K[I|0], where K is the camera's intrinsic matrix. So why is the last column non-zero, and what does it mean? E.g. P2 is
array([[7.070912e+02, 0.000000e+00, 6.018873e+02, 4.688783e+01],
[0.000000e+00, 7.070912e+02, 1.831104e+02, 1.178601e-01],
[0.000000e+00, 0.000000e+00, 1.000000e+00, 6.203223e-03]])
After projecting to [u,v,w] and dividing u,v by w, are these values with respect to an origin at the center of the image, or an origin at the top left of the image?
README:
calib.txt: Calibration data for the cameras: P0/P1 are the 3x4 projection
matrices after rectification. Here P0 denotes the left and P1 denotes the
right camera. Tr transforms a point from velodyne coordinates into the
left rectified camera coordinate system. In order to map a point X from the
velodyne scanner to a point x in the i'th image plane, you thus have to
transform it like:
x = Pi * Tr * X
Refs:
How to understand the KITTI camera calibration files?
Format of parameters in KITTI's calibration file
http://www.cvlibs.net/publications/Geiger2013IJRR.pdf
Answer:
I strongly recommend you read those references above. They may solve most, if not all, of your questions.
For question 2: the projected points on images are with respect to an origin at the top left. See refs 2 & 3: the image coordinates of a very far 3D point are (center_x, center_y), whose values are provided in the P_rect matrices. Or you can verify this with some simple code:
import numpy as np
p = np.array([[7.070912e+02, 0.000000e+00, 6.018873e+02, 4.688783e+01],
[0.000000e+00, 7.070912e+02, 1.831104e+02, 1.178601e-01],
[0.000000e+00, 0.000000e+00, 1.000000e+00, 6.203223e-03]])
x = [0, 0, 1E8, 1] # A far 3D point
y = np.dot(p, x)
y[0] /= y[2]
y[1] /= y[2]
y = y[:2]
print(y)
You will see some output like:
array([6.018873e+02, 1.831104e+02 ])
which is quite close to (p[0, 2], p[1, 2]), a.k.a. (center_x, center_y).
For all the P matrices (3x4), they represent:
P(i)rect = [[fu  0  cx  -fu*bx],
            [ 0 fv  cy  -fv*by],
            [ 0  0   1       0]]
The last column encodes the baselines in meters w.r.t. the reference camera 0. You can see that P0 has all zeros in the last column because it is the reference camera.
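As a small illustration (plain C++, using the P2 values from the question and the P(i)rect layout above; the variable names are mine), the intrinsics and the baseline can be read straight back out of the matrix:
#include <cstdio>

int main()
{
    // P2 from the question's calib.txt, laid out as P(i)rect above.
    const double P2[3][4] = {
        {7.070912e+02, 0.000000e+00, 6.018873e+02, 4.688783e+01},
        {0.000000e+00, 7.070912e+02, 1.831104e+02, 1.178601e-01},
        {0.000000e+00, 0.000000e+00, 1.000000e+00, 6.203223e-03}};

    const double fu = P2[0][0], fv = P2[1][1];   // focal lengths in pixels
    const double cx = P2[0][2], cy = P2[1][2];   // principal point, top-left origin
    const double bx = -P2[0][3] / fu;            // baseline (meters) w.r.t. camera 0

    std::printf("fu=%.4f fv=%.4f cx=%.4f cy=%.4f bx=%.4f m\n", fu, fv, cx, cy, bx);
    return 0;
}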
This post has more details:
How Kitti calibration matrix was calculated?

C++ Opencv Calibration of the camera with different resolution

My camera has different resolutions
1280*480
640*240
320*120
I have used the OpenCV 3 algorithm to calibrate the camera at the resolution of 1280*480 and I have got the camera matrix (fx fy cx cy) and the distortion matrix (k1 k2 p1 p2 k3) for this resolution.
But now I want to use this camera matrix and distortion matrix with the camera at the resolution of 320*120. I don't know how to adapt these two matrices from the resolution 1280*480 to the resolution 320*120.
PS: I haven't calibrated the camera at the resolution of 320*120 directly because the image is too small and the OpenCV algorithm can't find the chessboard.
I want to know how the camera matrix (fx fy cx cy) and the distortion matrix (k1 k2 p1 p2 k3) will change if I change the resolution from 1280*480 to 320*120.
The algorithm of opencv is the following one:
http://docs.opencv.org/3.0-beta/doc/tutorials/calib3d/camera_calibration/camera_calibration.html
You don't need to change the distortion matrix. As for the camera matrix (the one containing fx, fy, cx, cy), you just need to divide those values by 4 in your case. The general formula is:
fx' = (dimx' / dimx) * fx
fy' = (dimy' / dimy) * fy
fx' is the value for your new resolution, fx is the value you already have for your original resolution, dimx' is the new resolution along the x axis, and dimx is the original one. The same applies to fy.
cx and cy are scaled analogously, because all these values are expressed in pixel coordinates.
As per the OpenCV docs regarding the camera matrix:
if an image from the camera is scaled by a factor, all of these
parameters should be scaled (multiplied/divided, respectively) by the
same factor.
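Here is a minimal C++/OpenCV sketch of that scaling, assuming the usual 3x3 camera matrix layout; the function name and parameters are illustrative, and the distortion coefficients are left untouched:
#include <opencv2/core.hpp>

// Rescale a calibrated camera matrix to a new image resolution.
cv::Mat scaleCameraMatrix(const cv::Mat& K,          // CV_64F 3x3 from calibration
                          cv::Size calibSize,        // e.g. 1280x480
                          cv::Size newSize)          // e.g. 320x120
{
    const double sx = static_cast<double>(newSize.width)  / calibSize.width;
    const double sy = static_cast<double>(newSize.height) / calibSize.height;

    cv::Mat Knew = K.clone();
    Knew.at<double>(0, 0) *= sx;   // fx' = (dimx'/dimx) * fx
    Knew.at<double>(1, 1) *= sy;   // fy' = (dimy'/dimy) * fy
    Knew.at<double>(0, 2) *= sx;   // cx'
    Knew.at<double>(1, 2) *= sy;   // cy'
    return Knew;                   // distortion coefficients stay unchanged
}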

How to calculate extrinsic parameters of one camera relative to the second camera?

I have calibrated 2 cameras with respect to some world coordinate system. I know the rotation matrix and translation vector for each of them relative to the world frame. From these, how do I calculate the rotation matrix and translation vector of one camera with respect to the other?
Any help or suggestions would be appreciated. Thanks!
Here is an easier solution, since you already have the 3x3 rotation matrices R1 and R2, and the 3x1 translation vectors t1 and t2.
These express the motion from the world coordinate frame to each camera, i.e. are the matrices such that, if p is a point expressed in world coordinate frame, then the same point expressed in, say, camera 1 frame is p1 = R1 * p + t1.
The motion from camera 1 to 2 is then simply the composition of (a) the motion FROM camera 1 TO the world frame, and (b) the motion FROM the world frame TO camera 2. You can easily compute this composition as follows:
Form the 4x4 roto-translation matrices Qw1 = [R1 t1] and Qw2 = [ R2 t2 ], both with the 4th row equal to [0 0 0 1]. These matrices completely express the roto-translation FROM the world coordinate frame TO camera 1 and 2 respectively.
The motion FROM camera 1 TO the world frame is simply Q1w = inv(Qw1). Here inv() is the algebraic inverse matrix, i.e. the one such that inv(X) * X = X * inv(X) = IdentityMatrix, for every nonsingular matrix X.
The roto-translation from camera 1 to 2 is then Q12 = Qw2 * Q1w (apply Q1w first, then Qw2: it takes camera-1 coordinates to camera-2 coordinates), and vice versa, the one from camera 2 to 1 is Q21 = Qw1 * Q2w = Qw1 * inv(Qw2).
Once you have Q12 you can extract from it the rotation and translation parts, if you so wish, respectively from its upper-left 3x3 submatrix and its rightmost 3x1 column.
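A short C++/OpenCV sketch of that composition, under the p1 = R1 * p + t1 convention stated above (the function names are mine):
#include <opencv2/core.hpp>

static cv::Mat makeQ(const cv::Mat& R, const cv::Mat& t)   // world -> camera, 4x4
{
    cv::Mat Q = cv::Mat::eye(4, 4, CV_64F);
    R.copyTo(Q(cv::Rect(0, 0, 3, 3)));   // 3x3 rotation block
    t.copyTo(Q(cv::Rect(3, 0, 1, 3)));   // 3x1 translation column
    return Q;
}

// Given (R1, t1) and (R2, t2) mapping world -> camera 1 and world -> camera 2,
// return Q12 such that p2 = Q12 * p1 for homogeneous points.
cv::Mat relativePose(const cv::Mat& R1, const cv::Mat& t1,
                     const cv::Mat& R2, const cv::Mat& t2)
{
    cv::Mat Qw1 = makeQ(R1, t1);
    cv::Mat Qw2 = makeQ(R2, t2);
    cv::Mat Q12 = Qw2 * Qw1.inv();       // camera 1 -> camera 2
    return Q12;                          // R12 = upper-left 3x3, t12 = rightmost 3x1
}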
First convert your rotation matrices into rotation vectors. Now you have two 3D vectors for each camera; call them A1, A2, B1, B2. You have all 4 of them with respect to some origin O. The rule you need is
A relative to B = (A relative to O)- (B relative to O)
Apply that rule to your 2 vectors and you will have their pose relative to one another.
Some documentation on converting from a rotation matrix to Euler angles can be found here, as well as in many other places. If you are using OpenCV you can just use Rodrigues. Here is some MATLAB/Octave code I found.
Here is a very simple solution. Suppose your 1st camera has rotation matrix R1 and translation vector T1, and your 2nd camera has R2 and T2, both with respect to a common reference frame.
The rotation and translation from the 1st to the 2nd camera can be calculated with the following two lines of MATLAB code:
R=R2*R1';
T=T2-R*T1;
but note that this is true only if you have just one R and T per camera (i.e. rotations and translations for one unique world reference). If you have more reference translations and rotations, you should calculate R, T for every single reference point. They will probably be very close to each other, but they might be slightly different. Then you can take the mean of the translation vectors, convert all found rotation matrices to rotation vectors, take their mean, and convert that back to a rotation matrix.

Get 3D coordinates from 2D image pixel if extrinsic and intrinsic parameters are known

I am doing camera calibration with the Tsai algorithm. I have the intrinsic and extrinsic matrices, but how can I reconstruct the 3D coordinates from that information?
1) I can use Gaussian elimination to find X,Y,Z,W and then the points will be X/W, Y/W, Z/W as a homogeneous system.
2) I can use the
OpenCV documentation approach:
as I know u, v, R, t, I can compute X,Y,Z.
However, both methods end up with different results that are not correct.
What am I doing wrong?
If you have the extrinsic parameters then you have everything. That means you can compute a homography from the extrinsics (also called the camera pose). The pose is a 3x4 matrix, the homography is a 3x3 matrix, with H defined as
H = K*[r1, r2, t], //eqn 8.1, Hartley and Zisserman
with K being the camera intrinsic matrix, r1 and r2 being the first two columns of the rotation matrix R, and t the translation vector.
Then normalize by dividing everything by t3 (the third component of t).
What happens to column r3, don't we use it? No, because it is redundant: it is the cross-product of the first two columns of the rotation matrix.
Now that you have the homography, project the points. Your 2D points are (x, y). Append z = 1 to them, so they are now 3D. Project them as follows:
p = [x y 1];
projection = H * p;                     // project
projnorm = projection / projection(3);  // normalize by the third component
Hope this helps.
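For concreteness, here is a C++/OpenCV sketch along those lines, under the assumption that the 3D points lie on the world plane Z = 0 (which is what makes a single homography sufficient); note that mapping an image pixel back onto that plane uses the inverse of H. The function name is illustrative.
#include <opencv2/core.hpp>

// Build H = K * [r1 r2 t] and map an image pixel (u, v) back onto the world
// plane Z = 0. R and t are the extrinsics (world -> camera), K the intrinsic
// matrix; all CV_64F.
cv::Point3d pixelToPlane(const cv::Mat& K, const cv::Mat& R,
                         const cv::Mat& t, double u, double v)
{
    cv::Mat H(3, 3, CV_64F);
    R.col(0).copyTo(H.col(0));               // r1
    R.col(1).copyTo(H.col(1));               // r2
    t.copyTo(H.col(2));                      // t (3x1 column)
    H = K * H;                               // maps [X Y 1] on the plane to [u v w]

    cv::Mat p = (cv::Mat_<double>(3, 1) << u, v, 1.0);
    cv::Mat q = H.inv() * p;                 // image pixel -> plane coordinates
    q /= q.at<double>(2);                    // normalize the homogeneous scale
    return {q.at<double>(0), q.at<double>(1), 0.0};  // (X, Y, 0) in world coords
}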
As nicely stated in the comments above, projecting 2D image coordinates into 3D "camera space" inherently requires making up the z coordinates, as this information is totally lost in the image. One solution is to assign a dummy value (z = 1) to each of the 2D image space points before projection as answered by Jav_Rock.
One interesting alternative to this dummy solution is to train a model to predict the depth of each point prior to reprojection into 3D camera space. I tried this method and had a high degree of success using a PyTorch CNN trained on 3D bounding boxes from the KITTI dataset. I would be happy to provide code, but it'd be a bit lengthy for posting here.

c++ opengl converting model coordinates to world coordinates for collision detection

(This is all in ortho mode, origin is in the top left corner, x is positive to the right, y is positive down the y axis)
I have a rectangle in world space, which can have a rotation m_rotation (in degrees).
I can work with the rectangle fine, it rotates, scales, everything you could want it to do.
The part that I am getting really confused on is calculating the rectangle's world coordinates from its local coordinates.
I've been trying to use the formula:
x' = x*cos(t) - y*sin(t)
y' = x*sin(t) + y*cos(t)
where (x, y) are the original points,
(x', y') are the rotated coordinates,
and t is the angle measured in radians
from the x-axis. The rotation is
counter-clockwise as written.
-credits duffymo
I tried implementing the formula like this:
//GLfloat Ax = getLocalVertices()[BOTTOM_LEFT].x * cosf(DEG_TO_RAD( m_orientation )) - getLocalVertices()[BOTTOM_LEFT].y * sinf(DEG_TO_RAD( m_orientation ));
//GLfloat Ay = getLocalVertices()[BOTTOM_LEFT].x * sinf(DEG_TO_RAD( m_orientation )) + getLocalVertices()[BOTTOM_LEFT].y * cosf(DEG_TO_RAD( m_orientation ));
//Vector3D BL = Vector3D(Ax,Ay,0);
I create a vector to the translated point and store it in the rectangle's world_vertice member variable. That's fine. However, in my main draw loop, I draw a line from (0,0,0) to the vector BL, and it seems as if the line is going in a circle from the point on the rectangle (the rectangle's bottom-left corner) around the origin of the world coordinates.
Basically, as m_orientation gets bigger it draws a huge circle around the (0,0,0) world coordinate system origin. Edit: when m_orientation = 360, it gets set back to 0.
I feel like I am doing this part wrong:
and t is the angle measured in radians
from the x-axis.
Possibly I am not supposed to use m_orientation (the rectangle's rotation angle) in this formula?
Thanks!
Edit: the reason I am doing this is for collision detection. I need to know where the coordinates of the rectangles (soon to be rigid bodies) lie in the world coordinate space for collision detection.
What you are doing is a rotation [a special linear transformation] of a vector by angle Q in 2D. It keeps the vector's length and changes its direction around the origin.
[Linear transformation: additive, L(m + n) = L(m) + L(n) where m, n are vectors; homogeneous, L(k.m) = k.L(m) where m is a vector and k is a scalar.] So:
You split your vector into two pieces, like m[1, 0] + n[0, 1] = your vector.
Rotation then acts on each of these two pieces, after which your vector takes the form:
m[cosQ, sinQ] + n[-sinQ, cosQ] = [m*cosQ - n*sinQ, m*sinQ + n*cosQ]
You can also look at Wiki Rotation.
If you are trying to obtain the eye coordinates corresponding to your object coordinates, you should multiply your object coordinates by the model-view matrix in OpenGL.
For M the model-view matrix and the column vector [x y z w]^T (the transpose of [x y z w]) your object coordinates:
M * [x y z w]^T = eye coordinates of [x y z w]^T
This seems to be overcomplicating things somewhat: typically you would store an object's world position and orientation separately from its own set of local coordinates. Rotating the object is done in model space, and therefore the position is unchanged. The world position of each coordinate is the same whether you do a rotation or not - add the world position to the local position to translate the local coordinates to world space.
Any rotation occurs around a specific origin, and the typical sin/cos formula presumes (0,0) is your origin. If the coordinate system in use doesn't currently have (0,0) as the origin, you must translate it to one that does, perform the rotation, then transform back. Usually model space is defined so that (0,0) is the origin for the model, making this step trivial.
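To make that concrete, here is a small sketch (plain C++; the struct and function names are mine) that rotates each local vertex about the model origin and then adds the rectangle's world position, which is all the collision code should need:
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Rotate a local-space vertex about the model origin (0,0), then translate by
// the rectangle's world position. The angle is in degrees, as in the question.
Vec2 localToWorld(Vec2 local, Vec2 worldPos, float angleDegrees)
{
    const float t = angleDegrees * 3.14159265f / 180.0f;   // degrees -> radians
    Vec2 rotated{local.x * std::cos(t) - local.y * std::sin(t),
                 local.x * std::sin(t) + local.y * std::cos(t)};
    return {rotated.x + worldPos.x, rotated.y + worldPos.y};
}

// Example: transform every corner of a rectangle for collision tests.
std::vector<Vec2> rectangleWorldVertices(const std::vector<Vec2>& localVerts,
                                         Vec2 worldPos, float angleDegrees)
{
    std::vector<Vec2> world;
    world.reserve(localVerts.size());
    for (const Vec2& v : localVerts)
        world.push_back(localToWorld(v, worldPos, angleDegrees));
    return world;
}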