How to calculate extrinsic parameters of one camera relative to the second camera? - computer-vision

I have calibrated two cameras with respect to some world coordinate system. I know the rotation matrix and translation vector for each of them relative to the world frame. From these matrices, how can I calculate the rotation matrix and translation vector of one camera with respect to the other?
Any help or suggestion please. Thanks!

Here is an easier solution, since you already have the 3x3 rotation matrices R1 and R2, and the 3x1 translation vectors t1 and t2.
These express the motion from the world coordinate frame to each camera, i.e. are the matrices such that, if p is a point expressed in world coordinate frame, then the same point expressed in, say, camera 1 frame is p1 = R1 * p + t1.
The motion from camera 1 to 2 is then simply the composition of (a) the motion FROM camera 1 TO the world frame, and (b) the motion FROM the world frame TO camera 2. You can easily compute this composition as follows:
Form the 4x4 roto-translation matrices Qw1 = [R1 t1] and Qw2 = [ R2 t2 ], both with the 4th row equal to [0 0 0 1]. These matrices completely express the roto-translation FROM the world coordinate frame TO camera 1 and 2 respectively.
The motion FROM camera 1 TO the world frame is simply Q1w = inv(Qw1). Here inv() is the algebraic inverse matrix, i.e. the one such that inv(X) * X = X * inv(X) = IdentityMatrix, for every nonsingular matrix X.
The roto-translation from camera 1 to 2 is then Q12 = Qw2 * Q1w = Qw2 * inv(Qw1), and vice versa, the one from camera 2 to 1 is Q21 = Qw1 * Q2w = Qw1 * inv(Qw2).
Once you have Q12 you can extract from it the rotation and translation parts, if you so wish, respectively from its upper 3x3 submatrix and right 3x1 sub-column.
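For illustration, here is a minimal NumPy sketch of the composition above (R1, R2 as 3x3 arrays and t1, t2 as length-3 arrays are placeholders standing in for your calibration output; this is a sketch of the idea, not tested against your data):

import numpy as np

def rt_to_Q(R, t):
    # build the 4x4 roto-translation [R t; 0 0 0 1]
    Q = np.eye(4)
    Q[:3, :3] = R
    Q[:3, 3] = t
    return Q

Qw1 = rt_to_Q(R1, t1)            # world -> camera 1
Qw2 = rt_to_Q(R2, t2)            # world -> camera 2
Q12 = Qw2 @ np.linalg.inv(Qw1)   # camera 1 -> camera 2
R12, t12 = Q12[:3, :3], Q12[:3, 3]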

First convert your rotation matrix into a rotation vector. Now you have two 3D vectors for each camera; call them A1, A2 and B1, B2. You have all four of them with respect to some origin O. The rule you need is
A relative to B = (A relative to O) - (B relative to O)
Apply that rule to your two pairs of vectors and you will have their pose relative to one another.
Some documentation on converting from a rotation matrix to Euler angles can be found here as well as in many other places. If you are using OpenCV you can just use Rodrigues. Here is some MATLAB/Octave code I found.
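For reference, a minimal Python sketch of the OpenCV route (cv2.Rodrigues converts in both directions; the matrix R below is just a placeholder):

import cv2
import numpy as np

R = np.eye(3)                    # placeholder 3x3 rotation matrix
rvec, _ = cv2.Rodrigues(R)       # rotation matrix -> 3x1 rotation vector
R_back, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix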

Here is a very simple and easy solution. Suppose your 1st camera has rotation matrix R1 and translation vector T1, and your 2nd camera has R2 and T2, both with respect to a common reference frame.
The rotation and translation from the 1st camera to the 2nd can be calculated with the following two lines of MATLAB code:
R=R2*R1';
T=T2-R*T1;
Note that this is true only if you have just one R and T for each camera (i.e. rotations and translations for one unique world reference). If you have several reference poses, you should calculate R and T for every single reference. They will probably be very close to each other, but they might be slightly different. You can then calculate the mean of the translation vectors, convert all the rotation matrices found to rotation vectors, calculate their mean, and convert that back to a rotation matrix.
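A rough Python equivalent of the two lines above, extended to several reference poses with naive averaging (Rs1, ts1, Rs2, ts2 are placeholder lists of per-reference poses, not variables from the question):

import numpy as np
import cv2

R_list, t_list = [], []
for R1, t1, R2, t2 in zip(Rs1, ts1, Rs2, ts2):     # one entry per world reference
    R = R2 @ R1.T                                  # rotation from camera 1 to camera 2
    t = t2 - R @ t1                                # translation from camera 1 to camera 2
    R_list.append(R)
    t_list.append(t)

t_mean = np.mean(t_list, axis=0)
rvecs = [cv2.Rodrigues(R)[0] for R in R_list]      # matrices -> rotation vectors
R_mean, _ = cv2.Rodrigues(np.mean(rvecs, axis=0))  # mean vector -> mean rotation matrix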

Related

Place a billboard at a given distance so that it occupies a certain size on screen

I have a rectangle that I place on the screen using a simple scale matrix (S). Now I would like to place this rectangle into "3D space", but have it appear just like before on the screen. I found that I can do so by applying the view and projection matrices in inverse. Something like:
S' = (V⁻¹ P⁻¹ S)
Matrix = P V (V⁻¹ P⁻¹ S)
This works fine so far. My rectangle is like a billboard now and I can treat it like any other object, apply P and V and it will show up correctly. However, there is a degeneracy here: I don't specify at which depth the object is placed. It could be twice as far away but x times bigger!
The reason that this is important is that I want to animate the rectangle, say rotate it around the Z axis or move in 3D space. Then I want it to come to a stop and be positioned pixel-perfect on the screen.
How can I place a flat object at a given z distance such that it appears on screen in a certain way? I already have the scale matrix that I need to display it in OpenGL without any 3D transformation, i.e. the matrix for displaying it in NDC or screen coordinates. I also have the projection and view matrices I want to use. How can I go from this to the desired model matrix?
How can I place a flat object at a given z distance, such that it appears on screen in a certain way [...]
Actually you want to draw the object in view space. Define a model matrix for the object that contains only one translation component (0, 0, -z) and skip the view matrix when drawing the object.
Usually the order of transforming a 3D vertex v to a 2D point p is written as a matrix multiplication. [Depending on the API you are using, the order might be reversed. The notation used here is GLSL-friendly.]
p = P * V * M * S * v
v = vertex (usually 3D of the form x,y,z,1)
P = projection matrix
V = view (camera) matrix
M = model matrix (or world transformation)
S is a 4x4 object-scaling matrix
matrices are usually 4x4 with the last line/column 0,0,0,1
The model matrix M can be decomposed into a number of sub components such as T translation, S scale and R rotation. Of course here the order matters.
To rotate an object or vertex around itself, first rotate it (as if it is at the origin already) and then translate it to the position it needs to be, for example using vector v with coordinates (x,y,z).
R is a typical 3x3 rotation matrix embedded in a 4x4 with 0,0,0,1 on the last line
v is a 3D vector with coordinates (x,y,z)
T is a 4x4 translation matrix, all zeroes except the last column where it has x,y,z,1
M = T * R (first R, then T)
To rotate an object around an arbitrary point q, first translate it so that it is relative to that point (i.e. q is at the origin, so you subtract q from v). Then rotate around q (at the origin) by applying R. Lastly, place the object back where it should be (so add q again).
R is your typical 3x3 rotation matrix embedded in a 4x4 with 0,0,0,1 on the last line
v is a 3D vector with coordinates (x,y,z)
T is a 4x4 translation matrix, all zeroes except the last column where it has x,y,z,1
L is another 4x4 translation matrix, but now with q instead of v
L' is the inverse transformation of L
M = T * L * R * L'
Also, scaling your object first (i.e. with S at the right end of the multiplications, before the translations) keeps the translations in world units. Scaling after all the transformations, in contrast, scales all the translations too, and the object will move over scaled distances.
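A small NumPy sketch of the two compositions above, M = T * R and M = T * L * R * L' (the helper functions and the values of v, q and the rotation are illustrative placeholders):

import numpy as np

def translation(v):
    T = np.eye(4)
    T[:3, 3] = v
    return T

def rotation_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    R = np.eye(4)
    R[:2, :2] = [[c, -s], [s, c]]
    return R

v = np.array([1.0, 2.0, 3.0])   # where the object should end up
q = np.array([0.5, 0.0, 0.0])   # arbitrary pivot point
R = rotation_z(np.radians(30))

M_self  = translation(v) @ R                                      # M = T * R
M_pivot = translation(v) @ translation(q) @ R @ translation(-q)   # M = T * L * R * L'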

Rotating an Object Around an Axis

I have a circular object which I want to rotate like a fan along its own axis.
I can change the rotation in any direction, i.e. dx, dy, dz, using my transformation matrix.
The following is the code:
Matrix4f matrix = new Matrix4f();
matrix.setIdentity();
Matrix4f.translate(translation, matrix, matrix);
Matrix4f.rotate((float) Math.toRadians(rx), new Vector3f(1,0,0), matrix, matrix);
Matrix4f.rotate((float) Math.toRadians(ry), new Vector3f(0,1,0), matrix, matrix);
Matrix4f.rotate((float) Math.toRadians(rz), new Vector3f(0,0,1), matrix, matrix);
Matrix4f.scale(new Vector3f(scale,scale,scale), matrix, matrix);
My vertex shader code:
vec4 worldPosition = transformationMatrix * vec4(position, 1.0);
vec4 positionRelativeToCam = viewMatrix * worldPosition;
gl_Position = projectionMatrix * positionRelativeToCam;
Main Game Loop:
Object.increaseRotation(dxf,dyf,dzf);
But it's not rotating along its own axis. What am I missing here?
I want something like this. Please help.
You should get rid of Euler angles for this.
Object/mesh geometry
You need to be aware of how your object is oriented in its local space. For example, let's assume this:
So in this case the main rotation is around the z axis. If your mesh is defined so that the rotation axis is not aligned to any of the axes (x, y or z), or the center point is not (0,0,0), then that will cause you problems. The remedy is either to change your mesh geometry or to create a special constant transform matrix M0 that transforms all vertices from the mesh LCS (local coordinate system) to a different one that is axis-aligned and in which the center of rotation sits at zero along the rotation axis.
In the latter case, any operation on the object matrix M would be done like this:
M'=M.M0.operation.Inverse(M0)
or in reverse/inverse order (depending on your matrix/vertex multiplication and row/column order conventions). If your mesh is already centered and axis-aligned, then just do this instead:
M'=M.operation
The operation is the transform matrix of the change increment (for example a rotation matrix). M is the object's current transform matrix (see the next section) and M' is its new version after applying the operation.
Object transform matrix
You need a single transform matrix for each object you have. This holds the position and orientation of your object's LCS so it can be converted to the world/scene GCS (global coordinate system) or to its parent object's LCS.
rotating your object around its local axis of rotation
As mentioned in Understanding 4x4 homogeneous transform matrices, for standard OpenGL matrix conventions you need to do this:
M'=M*rotation_matrix
Where M is the current object transform matrix and M' is the new version of it after rotation. This is the thing you are doing differently: you are using Euler angles rx, ry, rz instead of accumulating the rotations incrementally. You cannot do this with Euler angles in any sane and robust way, even though many modern games and apps still try hard to do it (and have been failing for years).
So what to do to get rid of Euler angles:
You must have a persistent/global/static matrix M per object, instead of a local instance per render, so you need to init it just once instead of clearing it on a per-frame basis.
On each animation update, apply the operation you need, so:
M*=rotation_around_z(angspeed*dt);
Where angspeed is your fan speed in [rad/second] or [deg/second] and dt is the time elapsed in [seconds]. For example, if you do this in a timer then dt is the timer interval. For variable timing you can measure the elapsed time (it is platform dependent; I usually use PerformanceTimers or RDTSC).
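A minimal sketch of that incremental update in NumPy (rotation_around_z and the update callback are illustrative stand-ins, not code from your engine):

import numpy as np

def rotation_around_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    R = np.eye(4)
    R[:2, :2] = [[c, -s], [s, c]]
    return R

M = np.eye(4)                  # persistent per-object transform, initialized once
angspeed = np.radians(180.0)   # fan speed [rad/s]

def on_animation_update(dt):   # dt = elapsed time in seconds
    global M
    M = M @ rotation_around_z(angspeed * dt)   # accumulate the increment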
You can stack more operations on top of this (for example your fan can also turn back and forth around the y axis to cover more area).
For direct object control (by keyboard, mouse or joystick) just add things like:
if (keys.get( 38)) { redraw=true; M*=translate_z(-pos_speed*dt); }
if (keys.get( 40)) { redraw=true; M*=translate_z(+pos_speed*dt); }
if (keys.get( 37)) { redraw=true; M*=rotation_around_y(-turn_speed*dt); }
if (keys.get( 39)) { redraw=true; M*=rotation_around_y(+turn_speed*dt); }
Where keys is my key map holding the on/off state for every key on the keyboard (so I can use several keys at once). This code just controls the object with the arrow keys. For more info on the subject see this related QA:
Computer Graphics: Moving in the world
Preserve accuracy
With incremental changes there is a risk of losing precision due to floating point errors. So add a counter to your matrix class which counts how many times it has been changed (an incremental operation applied), and when it hits some constant count (for example 128 operations), normalize your matrix.
To do that you need to ensure the orthonormality of your matrix. Each axis vector X, Y, Z must be perpendicular to the other two and of unit length. I do it like this:
Choose the main axis, which will keep its direction unchanged. I am choosing the Z axis as that is usually my main axis in my meshes (viewing direction, rotation axis, etc.), so just make this vector unit length: Z = Z/|Z|.
Exploit the cross product to compute the other two axes, so X = (+/-) Z x Y and Y = (+/-) Z x X, and normalize them too: X = X/|X| and Y = Y/|Y|. The (+/-) is there because I do not know your coordinate system conventions and the cross product can produce a vector opposite to your original direction; if the direction is opposite, change the multiplication order or negate the result (this is decided at coding time, not at runtime!).
Here is an example in C++ of how my orthonormal normalization is done:
void reper::orto(int test)
{
    double x[3],y[3],z[3];
    if ((cnt>=_reper_max_cnt)||(test)) // here cnt is the operations counter and test force normalization regardless of it
    {
        use_rep();         // you can ignore this
        _rep=1; _inv=0;    // you can ignore this
        axisx_get(x);
        axisy_get(y);
        axisz_get(z);
        vector_one(z,z);
        vector_mul(x,y,z); // x is perpendicular to y,z
        vector_one(x,x);
        vector_mul(y,z,x); // y is perpendicular to z,x
        vector_one(y,y);
        axisx_set(x);
        axisy_set(y);
        axisz_set(z);
        cnt=0;
    }
}
Where axis?_get/set(a) just gets/sets a as an axis from/to your matrix. vector_one(a,b) returns a = b/|b| and vector_mul(a,b,c) returns a = b x c.

Drawing Euler Angles rotational model on a 2d image

I'm currently attempting to draw a 3D representation of Euler angles within a 2D image (no OpenGL or 3D graphics windows). The image output could be similar to the one below.
Essentially I am looking for research or an algorithm which can take a rotation matrix or a set of Euler angles and then output them onto a 2D image, like above. This will be implemented in a C++ application that uses OpenCV. It will be used to output annotation information on an OpenCV window based on the state of the object.
I think I'm overthinking this because I should be able to decompose the unit vectors from a rotation matrix, extract their x,y components and draw a line in Cartesian space from (0,0). Am I correct in this thinking?
EDIT: I'm looking for an Orthographic Projection. You can assume the image above has the correct camera/viewing angle.
Any help would be appreciated.
Thanks,
EDIT: The example source code can now be found in my repo.
Header: https://bitbucket.org/jluzwick/tennisspindetector/src/6261524425e8d80772a58fdda76921edb53b4d18/include/projection_matrix.h?at=master
Class Definitions: https://bitbucket.org/jluzwick/tennisspindetector/src/6261524425e8d80772a58fdda76921edb53b4d18/src/projection_matrix.cpp?at=master
It's not the best code but it works and shows the steps necessary to get the projection matrix described in the accepted answer.
Also here is a youtube vid of the projection matrix in action (along with scale and translation added): http://www.youtube.com/watch?v=mSgTFBFb_68
Here are my two cents. Hope it helps.
If I understand correctly, you want to rotate 3D system of coordinates and then project it orthogonally onto a given 2D plane (2D plane is defined with respect to the original, unrotated 3D system of coordinates).
"Rotating and projecting 3D system of coordinates" is "rotating three 3D basis vectors and projecting them orthogonally onto a 2D plane so they become 2D vectors with respect to 2D basis of the plane". Let the original 3D vectors be unprimed and the resulting 2D vectors be primed. Let {e1, e2, e3} = {e1..3} be 3D orthonormal basis (which is given), and {e1', e2'} = {e1..2'} be 2D orthonormal basis (which we have to define). Essentially, we need to find such operator PR that PR * v = v'.
While we could talk a lot about linear algebra, operators and matrix representations, it'd be too long a post. It suffices to say that:
For both 3D rotation and 3D->2D projection operators there are real matrix representations (linear transformations; 2D is subspace of 3D).
These are two transformations applied in sequence, i.e. PR * v = P * R * v = v', so we need to find the rotation matrix R and the projection matrix P. Clearly, after we rotate v using R, we can project the resulting vector vR using P.
You have the rotation matrix R already, so we consider it is a given 3x3 matrix. So for simplicity we will talk about projecting vector vR = R * v.
Projection matrix P is a 2x3 matrix with i-th column being a projection of i-th 3D basis vector ei onto {e1..2'} basis.
Let's find the projection matrix P such that a 3D vector vR is linearly transformed into a 2D vector v' on a 2D plane with an orthonormal basis {e1..2'}.
A 2D plane can be easily defined by a vector normal to it. For example, from the figures in the OP, it seems that our 2D plane (the plane of the paper) has normal unit vector n = 1/sqrt(3) * ( 1, 1, 1 ). We need to find a 2D basis in the 2D plane defined by this n. Since any two linearly independent vectors lying in our 2D plane would form such a basis, there are infinitely many such bases. From the problem's geometry and for the sake of simplicity, let's impose two additional conditions: first, the basis should be orthonormal; second, it should be visually appealing (although this is somewhat subjective). As can easily be seen, such a basis is formed trivially in the primed system by setting e1' = ( 1, 0 )' = x'-axis (horizontal, positive direction from left to right) and e2' = ( 0, 1 )' = y'-axis (vertical, positive direction from bottom to top).
Let's now find this {e1', e2'} 2D basis in {e1..3} 3D basis.
Let's denote e1' and e2' as e1" and e2" in the original basis. Noting that in our case e1" has no e3-component (z-component), and using the fact that n dot e1" = 0, we get that e1' = ( 1, 0 )' -> e1" = ( -1/sqrt(2), 1/sqrt(2), 0 ) in the {e1..3} basis. Here, dot denotes dot-product.
Then e2" = n cross e1" = ( -1/sqrt(6), -1/sqrt(6), 2/sqrt(6) ). Here, cross denotes cross-product.
The 2x3 projection matrix P for the 2D plane defined by n = 1/sqrt(3) * ( 1, 1, 1 ) is then given by:
( -1/sqrt(2) 1/sqrt(2) 0 )
( -1/sqrt(6) -1/sqrt(6) 2/sqrt(6) )
where the first, second and third columns are the {e1..3} 3D basis vectors transformed onto our 2D basis {e1..2'}, i.e. e1 = ( 1, 0, 0 ) from the 3D basis has coordinates ( -1/sqrt(2), -1/sqrt(6) ) in our 2D basis, and so on.
To verify the result we can check a few obvious cases:
n is orthogonal to our 2D plane, so there should be no projection. Indeed, P * n = P * ( 1, 1, 1 ) = 0.
e1, e2 and e3 should be transformed into their representation in {e1..2'}, namely corresponding column in P matrix. Indeed, P * e1 = P * ( 1, 0 ,0 ) = ( -1/sqrt(2), -1/sqrt(6) ) and so on.
To finalize the problem: we have now constructed a projection matrix P from 3D into 2D for an arbitrarily chosen 2D plane. We can now project any vector, previously rotated by the rotation matrix R, onto this plane; for example, the rotated original basis {R * e1, R * e2, R * e3}. Moreover, we can multiply the given P and R to get a rotation-projection transformation matrix PR = P * R.
P.S. C++ implementation is left as a homework exercise ;).
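As a quick numerical check of the matrices above, here is a NumPy sketch (the rotation R chosen below is arbitrary, purely for demonstration):

import numpy as np

n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)                 # plane normal
P = np.array([[-1/np.sqrt(2),  1/np.sqrt(2), 0],
              [-1/np.sqrt(6), -1/np.sqrt(6), 2/np.sqrt(6)]])

print(P @ n)                        # ~ (0, 0): the normal has no projection
print(P @ np.array([1.0, 0, 0]))    # first column of P, as expected

theta = np.radians(20)              # some arbitrary rotation around z
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])
PR = P @ R                          # combined rotation-projection
axes_2d = PR @ np.eye(3)            # columns = projected, rotated basis vectors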
The rotation matrix will be easy to display.
A rotation matrix can be constructed from a normal, a binormal and a tangent.
You should be able to get them back out as follows:
Bi-Normal (y') : matrix[0][0], matrix[0][1], matrix[0][2]
Normal (z') : matrix[1][0], matrix[1][1], matrix[1][2]
Tangent (x') : matrix[2][0], matrix[2][1], matrix[2][2]
Using a perspective transform you can then add perspective: (x, y) = (x/z, y/z).
To achieve an orthographic projection similar to the one shown, you will need to multiply by another fixed rotation matrix to move to the "camera" view (45° right and then up).
You can then multiply your end points x(1,0,0), y(0,1,0), z(0,0,1) and the center (0,0,0) by the final matrix, and use only the x, y coordinates.
The center should always transform to (0,0,0).
You can then scale these values to draw to your 2D canvas.
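A rough OpenCV (Python) sketch of that recipe; the fixed "camera" rotation, the scale and the colors are illustrative choices, not values from the answer:

import cv2
import numpy as np

def draw_axes(R, size=400, scale=120):
    img = np.zeros((size, size, 3), dtype=np.uint8)
    cx, cy = size // 2, size // 2
    # fixed viewing rotation: 45 deg around y, then a tilt around x
    cam, _ = cv2.Rodrigues(np.array([np.radians(-30.0), np.radians(45.0), 0.0]).reshape(3, 1))
    colors = [(0, 0, 255), (0, 255, 0), (255, 0, 0)]   # x, y, z axes in BGR
    for axis, color in zip(np.eye(3), colors):
        p = cam @ (R @ axis)                 # rotate the axis, then move to the "camera" view
        x = int(cx + scale * p[0])
        y = int(cy - scale * p[1])           # image y grows downward, so flip it
        cv2.line(img, (cx, cy), (x, y), color, 2)
    return img

cv2.imshow("axes", draw_axes(np.eye(3)))     # pass your own rotation matrix here
cv2.waitKey(0)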

Orientating Figures to look at the Camera with OpenGL

In my OpenGL app, I want to orient figures to look at the camera. To do this, I define two vectors for every object: front and up.
I'm using gluLookAt to control the camera, so the vectors newFront and newUp I need are easily known.
The code I use to control the orientation of each figure is:
m4D orientate(v3D newFront, v3D newUp)
{
    double angle = angle_between(front, newFront);
    v3D cross = normalize(cross_product(front, newFront));
    m4D matrix = rotate_from_axis(angle, cross);
    up = normalize(up * matrix);
    angle = angle_between(up, newUp);
    cross = normalize(cross_product(up, newUp));
    return(rotate_from_axis(angle, cross) * matrix);
}
This code works well when the matrix stack contains only this matrix, but if I push a previous rotation matrix (rotating, of course, the front and up vectors) it fails.
What am I doing wrong?
Why always those complicated "I solve for an inverse rotation and multiply that onto the modelview" billboard/impostor solutions, when there's a much simpler method?
Let M be the modelview matrix from which a billboard matrix is to be determined. The matrix is a 4×4 real valued type. The upper left 3×3 defines rotation and scaling. For a billboard this part is to be identity.
So by replacing the upper left part of the current modelview matrix with identity, and keeping the rest as is i.e.
1 0 0 tx
0 1 0 ty
0 0 1 tz
wx wy wz ww
and using that matrix for further transformations you get exactly the desired effect. If there was a scaling applied, replace the upper left identity with a scaling matrix.
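A minimal NumPy sketch of that replacement (the 4x4 layout and any column-major bookkeeping for OpenGL are left to the reader; this just shows the idea):

import numpy as np

def billboard(modelview):
    # replace the upper-left 3x3 (rotation/scale) with identity, keep translation and the rest
    M = modelview.copy()
    M[:3, :3] = np.eye(3)
    return M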

Get 3D coordinates from 2D image pixel if extrinsic and intrinsic parameters are known

I am doing camera calibration with the Tsai algorithm. I have the intrinsic and extrinsic matrices, but how can I reconstruct the 3D coordinates from that information?
1) I can use Gaussian elimination to find X, Y, Z, W, and then the point will be X/W, Y/W, Z/W as a homogeneous system.
2) I can use the OpenCV documentation approach: since I know u, v, R and t, I can compute X, Y, Z.
However, both methods end up with different results that are not correct.
What am I doing wrong?
If you have the extrinsic parameters then you have everything. That means that you can get a homography from the extrinsics (also called the camera pose). The pose is a 3x4 matrix; the homography is a 3x3 matrix H defined as
H = K*[r1, r2, t], //eqn 8.1, Hartley and Zisserman
with K being the camera intrinsic matrix, r1 and r2 being the first two columns of the rotation matrix, R; t is the translation vector.
Then normalize by dividing everything by t3.
What happens to column r3, don't we use it? No, because it is redundant: it is the cross product of the first two columns of the pose.
Now that you have the homography, project the points. Your 2D points are (x, y). Append z = 1, so they are now 3D. Project them as follows:
p = [x; y; 1];
projection = H * p;                      % project
projnorm = projection / projection(3);   % normalize
Hope this helps.
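A hedged NumPy version of those steps (K, R, t, x and y are assumed to already be available as arrays/scalars; nothing here comes from the Tsai calibration code itself):

import numpy as np

H = K @ np.column_stack((R[:, 0], R[:, 1], t))   # H = K * [r1, r2, t]
H = H / H[2, 2]                                  # normalize (H[2, 2] equals t3 for a standard K)

p = np.array([x, y, 1.0])                        # 2D image point with z = 1 appended
projection = H @ p                               # project
projnorm = projection / projection[2]            # divide by the third component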
As nicely stated in the comments above, projecting 2D image coordinates into 3D "camera space" inherently requires making up the z coordinates, as this information is totally lost in the image. One solution is to assign a dummy value (z = 1) to each of the 2D image space points before projection, as answered by Jav_Rock:
p = [x; y; 1];
projection = H * p;                      % project
projnorm = projection / projection(3);   % normalize
One interesting alternative to this dummy solution is to train a model to predict the depth of each point prior to reprojection into 3D camera space. I tried this method and had a high degree of success using a PyTorch CNN trained on 3D bounding boxes from the KITTI dataset. I would be happy to provide code, but it would be a bit lengthy to post here.