OpenCV: Computing Camera Position & Rotation - C++

For a project I need to compute the real-world position and orientation of a camera
with respect to a known object.
I have a set of photos, each showing a chessboard from a different point of view.
Using calibrateCamera and solvePnP I am able to reproject points into 2D, to get an AR overlay.
So my situation is as such:
Intrinsic parameters are known
Distortion coefficients are known
The translation vector and rotation vector are known for each photo.
I simply cannot figure out how to compute the position of the camera. My guess was:
invert the translation vector (= t')
convert the rotation vector to degrees (it seems to be in radians) and invert it
apply Rodrigues to the rotation vector
compute RotationMatrix * t'
But the results are somehow totally off...
Basically I want to compute a ray for each pixel in world coordinates.
If more information on my problem is needed, I'd be glad to answer quickly.
I don't get it... somehow the rays are still off. This is my code, btw:
Mat image1CamPos = tvecs[0].clone(); // from calibrateCamera
Mat rot = rvecs[0].clone();          // from calibrateCamera
Rodrigues(rot, rot);                 // rotation vector -> 3x3 rotation matrix
rot = rot.t();                       // invert the rotation (transpose)

// Position of camera
Mat pos = rot * image1CamPos;

// Ray normal (mk[i][0] is a known image point)
double x = (((double)mk[i][0].x) / fx) - (cx / fx);
double y = (((double)mk[i][0].y) / fy) - (cy / fy);
double z = 1.0;

double mag = sqrt(x*x + y*y + z*z);
x /= mag;
y /= mag;
z /= mag;

Mat unit(3, 1, CV_64F);
unit.at<double>(0, 0) = x;
unit.at<double>(1, 0) = y;
unit.at<double>(2, 0) = z;

// Rotate the ray into world space (rot holds the transposed rotation matrix)
Mat ray = rot * unit;
But when plotting this, the rays are off :/

The translation t (3x1 vector) and rotation R (3x3 matrix) of an object with respect to the camera together define the coordinate transformation from object space into camera space, which is given by:
v' = R * v + t
The inverse of the rotation matrix is simply its transpose:
R^-1 = R^T
Knowing this, you can easily solve the first equation for v:
v = R^T * v' - R^T * t
This is the transformation from camera into object space, i.e., the position of the camera with respect to the object (rotation = R^T and translation = -R^T * t).
You can simply get a 4x4 homogeneous transformation matrix from this:
T = ( R^T   -R^T * t )
    ( 0      1       )
If you now have any point in camera coordinates, you can transform it into object coordinates:
p' = T * (x, y, z, 1)^T
So, if you'd like to project a ray from a pixel with coordinates (a,b) (you will probably need to define the center of the image, i.e. the principal point reported by calibrateCamera, as (0,0)): let that pixel be P = (a,b)^T. Its 3D coordinates in camera space are then P_3D = (a,b,0)^T. Let's project a ray 100 pixels in the positive z direction, i.e. to the point Q_3D = (a,b,100)^T. All you need to do is transform both 3D coordinates into the object coordinate system using the transformation matrix T, and you should then be able to draw a line between the two points in object space. However, make sure that you don't confuse units: calibrateCamera reports pixel values, while your object coordinate system might be defined in, e.g., cm or mm.
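As an illustration, here is a minimal OpenCV sketch of that inversion, assuming rvec and tvec hold the pose reported by solvePnP or calibrateCamera (the variable names are mine):

#include <opencv2/calib3d.hpp>

// rvec, tvec: pose of the object relative to the camera
cv::Mat R;
cv::Rodrigues(rvec, R);           // rotation vector -> 3x3 rotation matrix
cv::Mat R_inv  = R.t();           // R is orthogonal, so its inverse is its transpose
cv::Mat camPos = -R_inv * tvec;   // camera position in object coordinates (-R^T * t)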

Related

How to rotate model to follow path

I have a spaceship model that I want to move along a circular path. I want the nose of the ship to always point in the direction it is moving in.
Here is the code I have to move it in a circle right now:
glm::mat4 m = glm::mat4(1.0f);
//time
long value_ms = std::chrono::duration_cast<std::chrono::milliseconds>(
    std::chrono::time_point_cast<std::chrono::milliseconds>(
        std::chrono::high_resolution_clock::now())
    .time_since_epoch())
    .count();
//translate
m = glm::translate(m, translate);
m = glm::translate(m, glm::vec3(-50, 0, -20));
m = glm::scale(m, glm::vec3(0.025f, 0.025f, 0.025f));
m = glm::translate(m, glm::vec3(1800, 0, 3000));
float speed = .002;
float x = 100 * cos(value_ms * speed); // + 1800;
float y = 0;
float z = 100 * sin(value_ms * speed); // + 3000;
m = glm::translate(m, glm::vec3(x, y, z));
How would I move it so the nose always points ahead? I tried doing glm::rotate with the rotation axis set as x or y or z but I cannot get it to work properly.
First see Understanding 4x4 homogeneous transform matrices, as I am using terminology and conventions from there...
It is usual to use the object's transform matrix for its navigation, and not the other way around... So you should have a transform matrix M for your spaceship that represents its position and orientation in the GCS (global coordinate system). On top of that, another matrix M0 is sometimes multiplied in, to align your spaceship mesh to the first matrix (some meshes are not centered around (0,0,0) nor axis-aligned...).
Now when you are moving your object, you just apply local transformations on M, so moving forward is just translating M's origin position by a multiple of the forward basis vector. The same goes for sliding to the sides (just use a different basis vector), with the result that the object is always aligned the way it is supposed to be (with respect to its movement). The same goes for turns. So going in a circle is just moving forward and turning at constant speeds per time iteration step (timer).
You are doing this backwards: first you compute position and orientation, and then you try to find operations resulting in a matrix that would do the same... In such a case it is much, much easier to construct the matrix M directly instead of creating the transformations that will create it... So what you need is:
origin position
3 perpendicular (most likely unit) basis vectors
So the origin is your x,y,z position. Two basis vectors can be obtained from the circle: forward is the tangent (or position - last_position), and the vector towards the circle center can be used as right (or left). The third vector can be obtained by a cross product. So let's assume:
+X axis is right
+Y axis is up
+Z axis is forward
you get:
r=100.0
a=speed*t
pos = (r*cos(a),0.0,r*sin(a))
center = (0.0,0.0,0.0)
so:
Z = (cos(a-0.5*M_PI),0.0,sin(a-0.5*M_PI))
X = (cos(a),0.0,sin(a)) - center
Y = cross(X,Z)
O = pos
normalize:
X /= length(X)
Y /= length(Y)
Z /= length(Z)
So now just feed your X, Y, Z, O into your matrix (depending on the conventions you use, like multiplication order, direct/inverse matrix, row-major or column-major matrices...), for example like this:
double M[16]=
{
X[0],X[1],X[2],0.0,
Y[0],Y[1],Y[2],0.0,
Z[0],Z[1],Z[2],0.0,
O[0],O[1],O[2],1.0,
};
or:
double M[16]=
{
X[0],Y[0],Z[0],O[0],
X[1],Y[1],Z[1],O[1],
X[2],Y[2],Z[2],O[2],
0.0 ,0.0 ,0.0 ,1.0,
};
And that is all... The matrix might need to be transposed, inverted, etc., based on the conventions you use. Sorry, I do not use GLM, but the syntax should be very similar... feeding the matrix might be even simpler if rows or columns can be loaded from a vector...
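For completeness, a minimal GLM sketch of the same construction under the circle parameterization above (the function name and the choice of GLM are mine; GLM matrices are column-major, matching the first layout shown):

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/constants.hpp>

glm::mat4 shipMatrix(float r, float a)
{
    glm::vec3 O(r * std::cos(a), 0.0f, r * std::sin(a));   // position on the circle
    glm::vec3 Z(std::cos(a - glm::half_pi<float>()), 0.0f,
                std::sin(a - glm::half_pi<float>()));       // forward = tangent
    glm::vec3 X = glm::normalize(O);                        // right = away from center
    glm::vec3 Y = glm::cross(X, Z);                         // up

    glm::mat4 M(1.0f);
    M[0] = glm::vec4(X, 0.0f);   // each assignment fills one column
    M[1] = glm::vec4(Y, 0.0f);
    M[2] = glm::vec4(Z, 0.0f);
    M[3] = glm::vec4(O, 1.0f);
    return M;
}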

How to determine the intersection between the camera direction and a plane?

I have a 3D scene with an infinite horizontal plane (parallel to the xz plane) at a height H along the vertical Y axis.
I would like to know how to determine the intersection between the axis of my camera and this plane.
The camera is defined by a view-matrix and a projection-matrix.
There are two sub-problems here: 1) extracting the position and view direction from the camera matrix, and 2) calculating the intersection between the view ray and the plane.
Extracting position and view-direction
The view matrix describes how points are transformed from world space to view space. View space in OpenGL is usually defined such that the camera is at the origin, looking in the -z direction.
To get the position of the camera, we have to transform the origin [0,0,0] of the view-space back into world-space. Mathematically speaking, we have to calculate:
camera_pos_ws = inverse(view_matrix) * [0,0,0,1]
but when looking at the equation, we see that we are only interested in the 4th column of the inverse matrix, which, using the form of the inverse given in note [2], evaluates to [1]
camera_pos_ws = -transpose(R) * [view_matrix[12], view_matrix[13], view_matrix[14]]
where R is the upper-left 3x3 rotation block of the view matrix.
The orientation of the camera can be found by a similar calculation. We know that the camera looks in -z direction in view-space thus the world space direction is given by
camera_dir_ws = inverse(view_matrix) * [0,0,-1,0];
Again, looking at the equation, we see that this only takes the third column of the inverse matrix into account, which is given by [2]
camera_dir_ws = [-view_matrix[2], -view_matrix[6], -view_matrix[10]]
Calculating the intersection
We now know the camera position P and the view direction D, so we have to find the point along the ray R(l) = P + l * D whose y coordinate equals H. Since there is only one unknown, l, we can calculate it from
y = Py + l * Dy
H = Py + l * Dy
l = (H - Py) / Dy
The intersection point is then given by plugging l back into the ray equation.
Notes
[1] The indices assume that the matrix is stored in a column-major linear array.
[2] Note that the inverse of a matrix of the form
M = [ R  T ]
    [ 0  1 ]
where R is an orthogonal 3x3 matrix, is given by
inv(M) = [ transpose(R)  -transpose(R) * T ]
         [ 0              1                ]
For a general line-plane intersection there are lots of answers and tutorials.
Your case is simple because the plane is horizontal.
Suppose the camera is at C(cx, cy, cz) and it looks at T(tx, ty, tz).
Then the line camera-target can be defined by:
cx - x     cy - y     cz - z
------- = ------- = -------     // these are two independent equations
tx - cx    ty - cy    tz - cz
For a horizontal plane, only one equation is needed: y = H.
Substitute this value into the line equations and you get:
(cx-x)/(tx-cx) = (cy-H)/(ty-cy)
(cz-z)/(tz-cz) = (cy-H)/(ty-cy)
So
x = cx - (tx-cx)*(cy-H)/(ty-cy)
y = H
z = cz - (tz-cz)*(cy-H)/(ty-cy)
Of course, if your camera looks along a horizontal line, then ty = cy and there is no solution.
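A direct translation of these formulas into C++ (the names C, T and H are from the text; the Vec3 type is illustrative):

#include <optional>

struct Vec3 { float x, y, z; };

// Intersection of the camera-target line with the plane y = H.
std::optional<Vec3> intersectHorizontal(Vec3 C, Vec3 T, float H)
{
    if (T.y == C.y) return std::nullopt;     // horizontal view line: no solution
    float s = (C.y - H) / (T.y - C.y);
    return Vec3{ C.x - (T.x - C.x) * s, H, C.z - (T.z - C.z) * s };
}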

Conversion from dual fisheye coordinates to equirectangular coordinates

The following link suggests that we can convert dual fisheye coordinates to equirectangular coordinates using the following equations:
// 2D fisheye to 3D vector
phi = r * aperture / 2
theta = atan2(y, x)
// 3D vector to longitude/latitude
longitude = atan2(Py, Px)
latitude = atan2(Pz, (Px^2 + Py^2)^(0.5))
// 3D vector to 2D equirectangular
x = longitude / PI
y = 2 * latitude / PI
I applied the above equations in my source code like this:
const float FOV = 220.0f * PI / 180.0f;
float r = sqrt(u*u + v*v);
float theta = atan2(v, u);
float phi = r * FOV * 0.5f;
float px = u;
float py = r * sin(phi);
float pz = v;
float longitude = atan2(py, px); // theta
float latitude = atan2(pz, sqrt(px*px + py*py)); // phi
x = longitude / PI;
y = 2.0f * latitude / PI;
Unfortunately my math is not good enough to understand this, and I am not sure if I wrote the above code correctly; I tried to guess the values for px, py and pz.
Assuming my camera FOV is 220 degrees and the camera resolution is 2880x1440, I would expect the point (358, 224) from the rear camera in the overlapped area and the point (2563, 197) from the front camera in the overlapped area to both map to a coordinate close to (2205, 1009). However, the actual mapped points are (515.966370, 1834.647949) and (1644.442017, 1853.060669) respectively, both of which are very far from (2205, 1009). Please suggest how to fix the above code. Many thanks!
You are building the equirectangular image, so I would suggest you use the inverse mapping.
Start with pixel locations in the target image you are painting. Convert the 2D location to longitude/latitude. Then convert that to a 3D point on the surface of the unit sphere. Then convert the 3D point to a location in the 2D fisheye source image. On Paul Bourke's page, you would start with the bottom equation, then the rightmost one, then the topmost one.
Use landmark points like 90° longitude, 0° latitude to verify that the results make sense at each step.
The final result should be a location in the source fisheye image in the [-1, +1] range. Remap to pixels or to UV as needed. Since the source is split into two eye images, you will also need a mapping from target (equirectangular) longitudes to the correct source sub-image.
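A hedged sketch of that inverse mapping, obtained by running the question's equations backwards; it assumes the 3D convention P = (sin(phi)cos(theta), sin(phi)sin(theta), cos(phi)), i.e. the fisheye optical axis along +z, and all 2D coordinates normalized to [-1, +1]:

#include <cmath>

const float PI = 3.14159265358979f;

// Map one equirectangular location (ex, ey in [-1, +1]) back to fisheye
// coordinates (u, v in [-1, +1]); aperture is the fisheye FOV in radians.
void equirectToFisheye(float ex, float ey, float aperture, float& u, float& v)
{
    // 2D equirectangular -> longitude/latitude (inverting x = long/PI, y = 2*lat/PI)
    float longitude = ex * PI;
    float latitude  = ey * PI / 2.0f;

    // longitude/latitude -> 3D point on the unit sphere
    float Px = std::cos(latitude) * std::cos(longitude);
    float Py = std::cos(latitude) * std::sin(longitude);
    float Pz = std::sin(latitude);

    // 3D point -> 2D fisheye (inverting phi = r * aperture / 2, theta = atan2(y, x))
    float phi   = std::atan2(std::sqrt(Px * Px + Py * Py), Pz);
    float theta = std::atan2(Py, Px);
    float r = 2.0f * phi / aperture;
    u = r * std::cos(theta);
    v = r * std::sin(theta);
}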

Need rotation matrix for OpenGL 3D transformation

The problem is that I have two points in 3D space where y+ is up, x+ is to the right, and z+ is towards you. I want to orient a cylinder between them that is the length of the distance between the two points, so that both of its center ends touch the points. I got the cylinder to translate to the location at the center of the two points, and I need help coming up with a rotation matrix to apply to the cylinder so that it is oriented the correct way. My transformation matrix for the entire thing looks like this:
translate(center point) * rotateX(some X degrees) * rotateZ(some Z degrees)
The translation is applied last, that way I can get it to the correct orientation before I translate it.
Here is what I have so far for this:
mat4 getTransformation(vec3 point, vec3 parent)
{
    float deltaX = point.x - parent.x;
    float deltaY = point.y - parent.y;
    float deltaZ = point.z - parent.z;

    float yRotation = atan2f(deltaZ, deltaX) * (180.0 / M_PI);
    float xRotation = atan2f(deltaZ, deltaY) * (180.0 / M_PI);
    float zRotation = atan2f(deltaX, deltaY) * (-180.0 / M_PI);
    if (point.y < parent.y)
    {
        zRotation = atan2f(deltaX, deltaY) * (180.0 / M_PI);
    }

    vec3 center = vec3((point.x + parent.x) / 2.0,
                       (point.y + parent.y) / 2.0,
                       (point.z + parent.z) / 2.0);
    mat4 translation = Translate(center);

    return translation * RotateX(xRotation) * RotateZ(zRotation)
         * Scale(radius, 1, radius) * Scale(0.1, 0.1, 0.1);
}
I tried a solution given below, but it did not seem to work at all:
mat4 getTransformation(vec3 parent, vec3 point)
{
    // moves base of cylinder to origin and gives it unit scaling
    mat4 scaleFactor = Translate(0, 0.5, 0) * Scale(radius / 2.0, 1 / 2.0, radius / 2.0) * cylinderModel;
    float length = sqrtf(pow(point.x - parent.x, 2) + pow(point.y - parent.y, 2) + pow(point.z - parent.z, 2));
    vec3 direction = normalize(point - parent);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);
    return Translate(parent) * Scale(length, length, length) * RotateX(pitch) * RotateY(yaw) * scaleFactor;
}
After running the above code I get this:
Every black point is a point whose parent is the point that spawned it (the one before it). I want the branches to fit the points. Basically I am trying to implement the space colonization algorithm for random tree generation. I have most of it working, but I want to map the branches to the points so it looks good. I could use GL_LINES just to make a generic connection, but if I get this working it will look much prettier. The algorithm is explained here.
Here is an image of what I am trying to do (pardon my paint skills)
Well, there is an arbitrary number of rotation matrices satisfying your constraints, but any will do. Instead of trying to figure out a specific rotation, we are just going to write down the matrix directly. Say your cylinder, when no transformation is applied, has its axis along the Z axis. So you have to transform the local-space Z axis toward the direction between those two points, i.e. z_t = normalize(p_1 - p_2), where normalize(a) = a / length(a).
Now we just need to make this a full 3-dimensional coordinate basis. We start with an arbitrary vector that is not parallel to z_t: one of (1,0,0), (0,1,0) or (0,0,1). Take the scalar product (also called inner or dot product) of each with z_t and use the vector for which the absolute value is smallest; let's call this vector u.
In pseudocode:
# start with (1,0,0)
mindotabs = abs( z_t · (1,0,0) )
minvec    = (1,0,0)
for u_ in (0,1,0), (0,0,1):
    dotabs = abs( z_t · u_ )
    if dotabs < mindotabs:
        mindotabs = dotabs
        minvec = u_
u = minvec
Then you orthogonalize that vector, yielding the local y transformation: y_t = normalize(u - (z_t · u) * z_t).
Finally create the x transformation by taking the cross product: x_t = z_t × y_t.
To move the cylinder into place you combine that with a matching translation matrix.
Transformation matrices are effectively just the axes of the space you are "coming from", written down as seen from the other space. So the resulting matrix, which is the rotation matrix you are looking for, is simply the vectors x_t, y_t and z_t written side by side as a matrix. OpenGL uses so-called homogeneous matrices, so you have to pad it to a 4×4 form using a (0,0,0,1) bottom row and rightmost column.
You can then load that into OpenGL: if using the fixed-function pipeline, use glMultMatrix to apply the rotation; if using shaders, multiply it onto the matrix you eventually pass to glUniform.
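Here is a minimal sketch of that construction (GLM is my choice for illustration; the cylinder's local axis is +Z, as above, and the fourth column is left for the translation):

#include <cmath>
#include <glm/glm.hpp>

// Rotation matrix whose local Z axis points from p2 towards p1.
glm::mat4 cylinderRotation(glm::vec3 p1, glm::vec3 p2)
{
    glm::vec3 z_t = glm::normalize(p1 - p2);

    // Pick the coordinate axis least parallel to z_t.
    const glm::vec3 axes[3] = { glm::vec3(1,0,0), glm::vec3(0,1,0), glm::vec3(0,0,1) };
    glm::vec3 u = axes[0];
    float mindotabs = std::fabs(glm::dot(z_t, axes[0]));
    for (int i = 1; i < 3; ++i) {
        float dotabs = std::fabs(glm::dot(z_t, axes[i]));
        if (dotabs < mindotabs) { mindotabs = dotabs; u = axes[i]; }
    }

    // Gram-Schmidt: remove the z_t component from u, then complete the basis.
    glm::vec3 y_t = glm::normalize(u - glm::dot(z_t, u) * z_t);
    glm::vec3 x_t = glm::cross(z_t, y_t);

    glm::mat4 M(1.0f);            // columns are the basis vectors
    M[0] = glm::vec4(x_t, 0.0f);
    M[1] = glm::vec4(y_t, 0.0f);
    M[2] = glm::vec4(z_t, 0.0f);
    return M;
}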
Begin with a unit-length cylinder which has one of its ends, which I call C1, at the origin (note that your image indicates that your cylinder has its center at the origin, but you can easily transform that into what I begin with). The other end, which I call C2, is then at (0,1,0).
I'll call your two points in world coordinates P1 and P2, and we want to locate C1 at P1 and C2 at P2.
Start with translating the cylinder by P1, which locates C1 at P1.
Then scale the cylinder by distance(P1, P2), since it originally had length 1.
The remaining rotation can be computed using spherical coordinates. If you're not familiar with this type of coordinate system: it's like GPS coordinates, two angles: one around the pole axis (in your case the world's Y axis), typically called yaw, and a pitch angle (in your case around the X axis in model space). These two angles can be computed by converting P2 - P1 (i.e. the local offset of P2 with respect to P1) into spherical coordinates. First rotate the object by the pitch angle around X, then by yaw around Y.
Something like this will do it (pseudo-code):
Matrix getTransformation(Point P1, Point P2) {
    float length = distance(P1, P2);
    Point direction = normalize(P2 - P1);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);
    return translate(P1) * scaleY(length) * rotateX(pitch) * rotateY(yaw);
}
Call the axis of the cylinder A. The second rotation (about X) can't change the angle between A and X, so we have to get that angle right with the first rotation (about Z).
Call the destination vector (the one between the two points) B. Take -acos(BX/BY), and that's the angle of the first rotation.
Take B again, ignore the X component, and look at its projection in the (Y, Z) plane. Take acos(BZ/BY), and that's the angle of the second rotation.

OpenGL rotation problem

Can anyone tell me how to make my model rotate around its own center of gravity, instead of around the default (0,0,0) axis?
Also, my rotation seems to only go left and right, not a full 360 degrees...
If you want to rotate an object around its center, you first have to translate it to the origin, then rotate it, and then translate it back. Since transformation matrices affect your vectors from right to left, you have to code these steps in the opposite order.
Here is some pseudocode, since I don't know the OpenGL routines by heart:
PushMatrix();
LoadIdentity(); // Start with a fresh matrix
Translate(); // Move your object to its final destination
Rotate(); // Apply rotations
Draw(); // Draw your object using coordinates relative to the object center
PopMatrix();
These matrices get applied:
v_t = (I * T * R) * v = (I * (T * (R * v)))
So the order is: Rotation, Translation.
EDIT: An explanation of the equation above.
The transformations rotation, scale and translation affect the model-view matrix. Every 3D point (vector) of your model is multiplied by this matrix to get its final point in 3D space, then it gets multiplied by the projection matrix to obtain a 2D point (on your 2D screen).
Ignoring the projection stuff, your point transformed by the model-view-matrix is:
v_t = MV * v
Meaning the original point v, multiplied by the model-view-matrix MV.
In the code above, we have constructed MV by an identity matrix I, a translation T and a rotation R:
MV = I * T * R
Putting everything together, you see that your point v is first affected by the rotation R, then the translation T, so that your point is rotated before it is translated, just as we wanted it to be:
v_t = MV * v = (I * T * R) * v = T * (R * v)
Calling Rotate() prior to Translate() would result in:
v_t = (I * R * T) * v = R * (T * v)
which would be bad: the point would be translated to some point in 3D, then rotated around the origin, leading to a strange distortion of your model.
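In a modern matrix-library setting the same composition can be written with GLM (my choice for illustration; glTranslatef/glRotatef compose the same way in fixed-function OpenGL). The extra Translate(-center) step handles a mesh that is not modeled around its own center:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rotate a model around its own center: v_t = T(center) * R * T(-center) * v
glm::mat4 rotateAroundCenter(glm::vec3 center, float angleRad, glm::vec3 axis)
{
    glm::mat4 m(1.0f);                    // start with a fresh (identity) matrix
    m = glm::translate(m, center);        // 3. move the object back to its position
    m = glm::rotate(m, angleRad, axis);   // 2. rotate around the origin
    m = glm::translate(m, -center);       // 1. move the object center to the origin
    return m;
}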