Rotation Matrices of Coordinate Systems (Euler Angles, Handedness) - opengl

I have a coordinate system in which the orientation of a camera is represented as R = Rz(k) * Ry(p) * Rx(o), where R is the 3x3 composition of rotation matrices about each of the X, Y, and Z axes. Moreover, I have a convention in which the Z-axis is the viewing direction of the camera, the X-axis is left-right, and the Y-axis is bottom-up.
This R matrix is used in a multi-view stereo reconstruction algorithm. I have a test data set which comes with pre-calibrated camera information. I want to use the R matrices that come with this data set. However, I have no idea what rotation order they assume, or even their handedness.
How would I be able to figure this out? Any ideas?

R=Rz(k) * Ry(p) * Rx(o)
This is a very unstable way of doing it. Euler angles are prone to gimbal lock, so I strongly advise against their use.
How would I be able to figure this out?
Well, this problem is difficult to express as a closed-form solution. Your best bet is treating this as an optimization problem in 3-space, where you try to find the values of k, p, and o that match up with the given rotation matrix R. There are 6 possible permutations of the evaluation order for three distinct axes, so you do that optimization for all of them and take the best matching result.
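For the handedness question, checking the determinant of each supplied matrix is a quick first test; for the rotation order, you can decompose each matrix under a few candidate conventions and compare the recovered angles against whatever the data set documents. A minimal sketch, assuming the matrices are loaded into Eigen (the helper name is mine):

#include <Eigen/Dense>
#include <iostream>

// Hypothetical helper: inspect one pre-calibrated matrix from the data set.
void inspectRotation(const Eigen::Matrix3d& R)
{
    // A proper right-handed rotation has det(R) = +1; det(R) = -1 means the
    // matrix contains a reflection, i.e. the data uses the opposite handedness.
    std::cout << "det(R) = " << R.determinant() << "\n";

    // Decompose assuming R = Rz(k) * Ry(p) * Rx(o), the convention above.
    Eigen::Vector3d kpo = R.eulerAngles(2, 1, 0);
    // Decompose assuming the reverse order, R = Rx(o) * Ry(p) * Rz(k).
    Eigen::Vector3d opk = R.eulerAngles(0, 1, 2);

    std::cout << "Z-Y-X angles (k, p, o): " << kpo.transpose() << "\n"
              << "X-Y-Z angles (o, p, k): " << opk.transpose() << "\n";
}

Note that any single rotation matrix can be decomposed under any of the orders, so what disambiguates the convention is comparing the recovered angles against reference values or known camera geometry (for example, which axis ends up pointing along the viewing direction).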

Related

Interchange the origin of a 3D plane

I am working on a fiducial marker system (like Aruco) to obtain a 3d pose of markers (3d coordinates (x, y, z) and the roll, pitch, yaw of the marker) with respect to the camera. The overall setup is as shown in the figure.
[Figure: Marker-Camera setup]
Right now, for some reason, I am getting the pose of the camera with respect to the marker (thus treating the marker as the origin). But for my purpose, I want the pose of the marker with respect to the camera. I cannot change the way I obtain the pose, so I must apply an external transformation. Currently, I am using the C++ Eigen library.
From what I have read so far, I have to do a rotation around the yaw (z) axis and then translate the obtained pose by the translation vector (x, y, z). But I am not sure how to represent this in Eigen. I tried to define my pose as an Affine3f, but I am not getting correct results.
Can anyone help me? Thanks!
If you are using ArUco, this might answer your questions: https://stackoverflow.com/a/59754199/8371691
However, if you are using some other marker system, the most robust way is to construct the attitude matrix and take its inverse.
It is not clear how you represent your pose, but whether you use Euler angles or a quaternion, it can easily be converted into an attitude matrix, R.
Then, the inverse transformation is simply the inverse of R.
But given the nature of the configuration space that R belongs to, the inverse of R is also the transpose of R, which is computationally less expensive.
In Eigen, it's simply R.transpose().
If you are using ArUco with OpenCV, you can simply use the built-in Rodrigues function.
Note, however, that with ArUco, rvec is actually the rotation of the marker with respect to the camera frame.
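As a concrete sketch of the invert-the-pose advice above, assuming the pose arrives as a rotation matrix and translation of the camera expressed in the marker frame (variable names are mine):

#include <Eigen/Geometry>

// Build the marker-in-camera pose from the camera-in-marker pose.
Eigen::Affine3f markerInCamera(const Eigen::Matrix3f& R_cm,   // camera rotation in marker frame
                               const Eigen::Vector3f& t_cm)   // camera translation in marker frame
{
    Eigen::Affine3f T_cm = Eigen::Affine3f::Identity();
    T_cm.linear() = R_cm;
    T_cm.translation() = t_cm;

    // For a rigid transform the inverse is [R^T | -R^T t]; passing Eigen::Isometry
    // tells Eigen to use the transpose instead of a general matrix inverse.
    return T_cm.inverse(Eigen::Isometry);
}

The rotation of the result is R.transpose() and its translation is -R.transpose() * t, which matches the remark above.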

Quaternion 3 axis rotation

A little help here. I receive 1 rotation per axis from a hardware gyroscope, so 3 rotations for the 3 axes (x, y, z) in total. When I use a matrix-based rotation I get weird rotations, perhaps because of the multiplication order (RotX*RotY*RotZ <> RotY*RotX*RotZ); I have also tried MatrixYawPitchRoll, but the same effects appear. Thus I concluded that I should use quaternions, but as far as I can tell I must create 3 quaternions, one per rotation, and when I combine them with multiplication I get the same effects as a matrix-based rotation... So can someone please tell me how to properly use 3 rotations to create and combine quaternions without the effects of the previous multiplication order?
P.S. D3DXQuaternionRotationYawPitchRoll still suffers the same effects as matrix based rotation.
Quaternions are not a magical salve that washes away rotational issues. All a quaternion is, is a cheap way to represent a specific orientation and to perform orientation transforms.
Your problem is that you are not representing your orientation as a quaternion; you're representing it as 3 angles. And it is that representation that causes your rotation problems.
You need to stop using angles. Represent an object's orientation as a quaternion. If you want to adjust your orientation, create a quaternion from your adjustment angle/axis, then multiply that into the object's orientation. Re-normalize the quaternion and you're done.
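A minimal sketch of that scheme with Eigen quaternions instead of D3DX (function name and per-frame inputs are assumptions):

#include <Eigen/Geometry>

// 'orientation' is the object's persistent state; dx/dy/dz are this frame's
// small gyro increments (radians) about the body axes.
void integrateGyro(Eigen::Quaternionf& orientation, float dx, float dy, float dz)
{
    Eigen::Quaternionf qx(Eigen::AngleAxisf(dx, Eigen::Vector3f::UnitX()));
    Eigen::Quaternionf qy(Eigen::AngleAxisf(dy, Eigen::Vector3f::UnitY()));
    Eigen::Quaternionf qz(Eigen::AngleAxisf(dz, Eigen::Vector3f::UnitZ()));

    // Multiply the increments into the accumulated orientation, then re-normalize.
    // For small per-frame angles the x/y/z order makes little difference.
    orientation = (orientation * qx * qy * qz).normalized();
}

Right-multiplying applies the increments about the object's local axes; pre-multiplying would apply them about the world axes instead.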
I see 2 main sources of problems.
Your conversion from Euler angles is broken.
You use an invalid Euler angle scheme. There exist 24 types of Euler angle schemes:
http://en.wikipedia.org/wiki/Euler_angles
Simply put, an Euler angle scheme is the order of rotations around the axes: XYZ, ZYX, ZXZ, ...
All conversions to/from matrices and quaternions can be found in the source code accompanying the excellent article by Ken Shoemake (1993).
http://tog.acm.org/resources/GraphicsGems/gemsiv/euler_angle/
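As a small illustration of why the scheme must be stated (this example is mine, not from the article), the same three angles composed as XYZ and as ZYX give different rotations:

#include <Eigen/Geometry>
#include <iostream>

int main()
{
    using Eigen::AngleAxisf;
    using Eigen::Vector3f;
    const float a = 0.3f, b = 0.5f, c = 0.7f;   // arbitrary example angles

    Eigen::Matrix3f xyz = (AngleAxisf(a, Vector3f::UnitX()) *
                           AngleAxisf(b, Vector3f::UnitY()) *
                           AngleAxisf(c, Vector3f::UnitZ())).toRotationMatrix();
    Eigen::Matrix3f zyx = (AngleAxisf(c, Vector3f::UnitZ()) *
                           AngleAxisf(b, Vector3f::UnitY()) *
                           AngleAxisf(a, Vector3f::UnitX())).toRotationMatrix();

    std::cout << "difference between XYZ and ZYX composition:\n" << (xyz - zyx) << "\n";
}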

use homography to rotate around x/y axes

The Project
I am working on a texture-tracking project for mobile. It exclusively tracks planar surfaces, so I have been using OpenCV's cv::FindHomography() to calculate the homography between two frames. That function runs very slowly, however, and is the primary bottleneck in my pipeline. I decided that an algorithm that can take an initial estimate of the homography would run much faster, because my change in homography between frames is very small. Also, my outlier percentage is very small, so robust methods are optional.

Unfortunately, to my knowledge OpenCV does not include a homography finder that takes an initial estimate. It does, however, include solvePnP(), which takes the original 3d world coordinates of the scene, the current 2d image coordinates, a camera matrix, distortion parameters, and, most importantly, an initial estimate. I am trying to replace FindHomography with solvePnP. Since I use only 2d coordinates throughout the pipeline and solvePnP asks for 3d coordinates, I am trying to move from 2d -> 3d -> 3d_transform -> 2d_transform. Right now that process runs 6x faster than FindHomography() if it is given a good initial guess, but it has issues.
The Problem
Something is wrong with how I am converting. My theory was that, since a camera matrix is not required to find a homography, it should not be required for this process, since I only want the information contained in a homography in the end. I also assumed that, since I throw out all z information at the end, how I initialize z should not matter. My process is as follows.
First I convert all my initial 2d coordinates to 3d by giving them a z position of 1. I can assume that my original coordinates lie flat in the x-y plane. Then:
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>   // solvePnP, Rodrigues

cv::Mat rot_mat;   // 3x3 rotation matrix (CV_64F, filled by Rodrigues)
cv::Mat pnp_rot;   // 3x1 rotation vector
cv::Mat pnp_tran;  // 3x1 translation vector
cv::Matx33f camera_matrix(1, 0, 0,
                          0, 1, 0,
                          0, 0, 1);   // faked identity intrinsics
cv::Matx41f dist(0, 0, 0, 0);         // no distortion
cv::solvePnP(original_cord, current_cord, camera_matrix, dist,
             pnp_rot, pnp_tran, true); // true = use the previous pose as the initial guess
// Rodrigues converts from a rotation vector to a rotation matrix
cv::Rodrigues(pnp_rot, rot_mat);
// solvePnP/Rodrigues return doubles, so index with at<double>()
cv::Matx33f homography(rot_mat.at<double>(0,0), rot_mat.at<double>(0,1), pnp_tran.at<double>(0),
                       rot_mat.at<double>(1,0), rot_mat.at<double>(1,1), pnp_tran.at<double>(1),
                       rot_mat.at<double>(2,0), rot_mat.at<double>(2,1), pnp_tran.at<double>(2) + 1);
The conversion to a homography here is simple. The first two columns of the homography are from the 3x3 rotation matrix; the last column is the translation vector. The one trick is that homography(2,2) corresponds to scale, while pnp_tran(2) corresponds to movement in the z axis. Given that I initialize my z coordinates to 1, scale is z_translation + 1. This process works perfectly for 4 of the 6 degrees of freedom. Translation in x, translation in y, scale, and rotation about z all work. Rotation about x and y, however, displays significant error. I believe that this is due to initializing my points at z = 1, but I don't know how to fix it.
The Question
Was my assumption correct that I can get good results from solvePnP by using a faked camera matrix and initial z coordinate? If so, how should I set up my camera matrix and z coordinates to make x and y rotation work? Also, if anyone knows where I could get a homography-finding algorithm that takes an initial guess and works only in 2d, or information on techniques for writing my own, it would be very helpful. I will most likely be moving in that direction once I get this working.
Update
I built myself a test program which takes a homography, generates a set of coplanar points from that homography, and then runs the points through solvePnP to recover the specified homography. In the process of doing this I realized that I am fundamentally misunderstanding some part of how homographies are constructed. I have been assuming that a homography is constructed as follows.
hom(0,2) = x translation
hom(1,2) = y translation
hom(2,2) = scale, I can divide the entire matrix by this to normalize
I assumed the first two columns were the first two columns of a 3x3 rotation matrix. This essentially amounts to taking a 3x4 transform and throwing away column 2. I have discovered, however, that this is not true. The test case showing me the error of my ways was trying to make a homography which rotates points by some small angle around the y axis.
//rotate by .0175 rad about y axis
rot_mat = (1,      0, .0174,
           0,      1, 0,
           -.0174, 0, 1);
//my conversion method to make this a homography gives
homography = (1,      0, 0,
              0,      1, 0,
              -.0174, 0, 1);
The above homography does not work at all. Take for example a point x,y,1 where x > 58. The result will be x,y,some_negative_number. When I convert from homogeneous coordinates back to cartesian my x and y values will both flip signs.
All that is to say, I now have a much simpler question that I think would let me solve everything. How do I construct a homography that rotates points by some angle around the x and y axis?
Homographies are not simple translation or rotation matrices. The aim is to map straight lines to straight lines rather than to map single points to other points. They take perspective into account to achieve this, and are explained here.
Hence, homography matrices cannot be easily decomposed, but there are (complicated) ways to do so, shown here. This may help you extract the rotations and translations out of it.
This should help you better understand homographies, but I am unfamiliar with the rest.
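For completeness, under the question's own assumptions (source points on the plane z = 1, so plane normal n = (0, 0, 1) and distance d = 1, with an identity camera matrix) the homography induced by a rigid motion [R|t] is H = R + t*n^T/d: the full 3x3 rotation with the translation added to its third column, not the rotation with its third column replaced. A sketch of that construction:

#include <opencv2/core.hpp>

// Plane-induced homography for points on z = 1 viewed with identity intrinsics.
cv::Matx33d homographyFromPose(const cv::Matx33d& R, const cv::Vec3d& t)
{
    cv::Matx33d H = R;          // keep the whole rotation, including column 2
    H(0, 2) += t(0);
    H(1, 2) += t(1);
    H(2, 2) += t(2);
    return H;                   // optionally divide by H(2, 2) to normalize scale
}

With a real camera matrix K the same motion induces H = K * (R + t*n^T/d) * K.inv(). For the pure y-rotation example above (t = 0), this keeps the +0.0174 entry in H(0,2) that the column-dropping construction discards.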

opengl matrix rotation quaternions

I'm trying to do a simple rotation of a cube about the x and y axes:
I want to always rotate the cube about the x axis by an amount x,
and rotate the cube about the y axis by an amount y, independently of the x-axis rotation.
First I naively did:
glRotatef(x,1,0,0);
glRotatef(y,0,1,0);
but that first rotates about x and then rotates about y.
I want to rotate about the y axis independently of the x axis.
I started looking into quaternions, so I tried:
Quaternion Rotation1;
Rotation1.createFromAxisAngle(0,1, 0, globalRotateY);
Rotation1.normalize();
Quaternion Rotation2;
Rotation2.createFromAxisAngle(1,0, 0, globalRotateX);
Rotation2.normalize();
GLfloat Matrix[16];
Quaternion q=Rotation2 * Rotation1;
q.createMatrix(Matrix);
glMultMatrixf(Matrix);
That just does almost exactly what the 2 consecutive glRotate calls accomplished... so I think I'm missing a step or 2.
Are quaternions the way to go, or should I be using something different? And if quaternions are the way to go, what steps can I add to make the cube rotate independently about each axis?
I think someone else has the same issue:
Rotating OpenGL scene in 2 axes
I got this to work correctly using quaternions. I'm sure there are other ways, but after some research, this worked perfectly for me. I posted a similar version on another forum: http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=280859&#Post280859
First create the quaternion representation of the change angles x/y.
Then, each frame, multiply the change-angle quaternions into an accumulating quaternion, and finally convert that quaternion to matrix form to multiply onto the current matrix. Here is the main code of the loop:
Quaternion3D Rotation1=Quaternion3DMakeWithAxisAndAngle(Vector3DMake(-1.0f,0,0), DEGREES_TO_RADIANS(globalRotateX));
Quaternion3DNormalize(&Rotation1);
Quaternion3D Rotation2=Quaternion3DMakeWithAxisAndAngle(Vector3DMake(0.0f,-1.0f,0), DEGREES_TO_RADIANS(globalRotateY));
Quaternion3DNormalize(&Rotation2);
Matrix3D Mat;
Matrix3DSetIdentity(Mat);
Quaternion3DMultiply(&QAccum, &Rotation1);
Quaternion3DMultiply(&QAccum, &Rotation2);
Matrix3DSetUsingQuaternion3D(Mat, QAccum);
globalRotateX=0;
globalRotateY=0;
glMultMatrixf(Mat);
Then draw the cube.
It would help a lot if you could give a more detailed explanation of what you are trying to do and how the results you are getting differ from the results you want. But in general using Euler angles for rotation has some problems, as combining rotations can result in unintuitive behavior (and in the worst case losing a degree of freedom.)
Quaternion slerp might be the way to go for you if you can find a single axis and a single angle that represent the rotation you want. But doing successive rotations around the X and Y axis using quaternions won't help you avoid the problems inherent in composing Euler rotations.
The post you link to seems to involve another problem though. The poster seems to have been translating his object and then doing his rotations, when he should have been rotating first and then translating.
It is not clear what you want to achieve. Perhaps you should think about some points and where you want them to rotate to -- e.g. vertex (1,1,1) should map to (0,1,0). Then, from that information, you can calculate the required rotation.
Quaternions are generally used to interpolate between two rotational 'positions'. So step one is identifying your start and end 'positions', which you don't have yet. Once you have that, you use quaternions to interpolate. It doesn't sound like you have any time-varying aspect here.
Your problem is not the gimbal lock. And effectively, there is no reason why your quaternion version would work better than your matrix (glRotate) version because the quaternions you are using are mathematically identical to your rotation matrices.
If what you want is a mouse control, you probably want to check out arcballs.
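A rough sketch of the virtual-trackball flavour of that arcball idea (mouse coordinates assumed already mapped to [-1, 1]; helper names are mine):

#include <Eigen/Geometry>
#include <cmath>

// Map a 2D point onto a virtual unit sphere in front of the viewer.
static Eigen::Vector3f onSphere(float x, float y)
{
    float d2 = x * x + y * y;
    if (d2 <= 1.0f)
        return Eigen::Vector3f(x, y, std::sqrt(1.0f - d2));   // on the sphere
    float d = std::sqrt(d2);
    return Eigen::Vector3f(x / d, y / d, 0.0f);               // clamp to the rim
}

// Rotation produced by dragging the mouse from (x0, y0) to (x1, y1).
Eigen::Quaternionf arcballDrag(float x0, float y0, float x1, float y1)
{
    return Eigen::Quaternionf::FromTwoVectors(onSphere(x0, y0), onSphere(x1, y1));
}

Multiply the returned quaternion into an accumulated orientation on each drag, much like the accumulating-quaternion code above.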

3d geometry: how to align an object to a vector

I have an object in 3d space that I want to align according to a vector.
I already got the Y-rotation by doing an atan2 on the x and z components of the vector, but I would also like an X-rotation to make the object look downwards or upwards.
Imagine a plane doing its pitch/yaw/roll, just without the roll.
I am using OpenGL to set the rotations, so I will need a Y-angle and an X-angle.
I would not use Euler angles, but rather an Euler axis/angle. For that matter, this is what OpenGL's glRotate uses as input.
If all you want is to map a vector to another vector, there are an infinite number of rotations to do that. For the shortest one, (the one with the smallest angle of rotation), you can use the vector found by the cross product of your from and to unit vectors.
axis = from × to
From there, the angle of rotation can be found from from · to = cos(theta) (assuming unit vectors):
theta = arccos(from · to)
glRotate(theta, axis) will then transform from into to (note that glRotate takes the angle first, in degrees).
But as I said, this is only one of many rotations that can do the job. You need a full frame of reference to better define how you want the transform done.
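A minimal sketch of that cross-product recipe, assuming unit-length, non-parallel from/to vectors (Eigen is used here only for the vector math):

#include <Eigen/Geometry>
#include <GL/gl.h>
#include <cmath>

void rotateFromTo(const Eigen::Vector3f& from, const Eigen::Vector3f& to)
{
    Eigen::Vector3f axis = from.cross(to).normalized();       // axis = from x to
    float thetaDeg = std::acos(from.dot(to)) * 57.29578f;     // theta = arccos(from . to), in degrees
    glRotatef(thetaDeg, axis.x(), axis.y(), axis.z());        // glRotatef takes the angle first
}

If from and to can be parallel or opposite, Eigen::Quaternionf::FromTwoVectors handles those degenerate cases for you.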
You should use some form of quaternion interpolation (Spherical Linear Interpolation) to animate your object going from its current orientation to this new orientation.
If you store the orientations using Quaternions (vector space math), then you can get the shortest path between two orientations very easily. For a great article, please read Understanding Slerp, Then Not Using It.
If you use Euler angles, you will be subject to gimbal lock and some really weird edge cases.
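A small sketch of the slerp suggestion with Eigen quaternions (the blend factor u and the names are assumptions):

#include <Eigen/Geometry>

// Blend from the current orientation toward the target by u in [0, 1] each frame.
Eigen::Quaternionf blendOrientation(const Eigen::Quaternionf& current,
                                    const Eigen::Quaternionf& target,
                                    float u)
{
    return current.slerp(u, target).normalized();
}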
Actually... take a look at this article. It describes Euler angles, which I believe are what you want here.