3x3 Matrix Rotation in C++

Alright, first off, I know similar questions are all over the web; I have looked at more than I'd care to count, and I've been trying to figure this out for almost 3 weeks now (not constantly, just on and off, hoping for a spark of insight).
In the end, what I want is a function where you pass in how much you want to rotate by (currently I'm working in radians, but I can go degrees or radians) and it returns the rotation matrix, preserving any translations I had.
I understand that the matrix to rotate about the "Z" axis in a 2D Cartesian plane is:
[cos(radians) -sin(radians) 0]
[sin(radians) cos(radians) 0]
[0 0 1]
I understand matrix maths (addition, subtraction, multiplication and determinant/inverse) fairly well, but what I'm not understanding is how to, step by step, build a matrix I can use for rotation while preserving any translation (and whatever else, like scale) it already has.
From what I've gathered from other examples, I should multiply my current matrix (whatever that may be; let's just use an identity matrix for now) by a matrix like this:
[cos(radians) - sin(radians)]
[sin(radians) + cos(radians)]
[1]
But then my original matrix would end up as a 3x1 matrix instead of a 3x3, wouldn't it? I'm not sure what I'm missing, but something just doesn't seem right to me. I'm not necessarily looking for someone to write code for me, just to understand how to do this properly so I can write it myself (not to say I won't look at others' code :) ).
(Not sure if it matters to anybody, but just in-case, using Windows 7 64-bit, Visual Studio 2010 Ultimate, and I believe OpenGL, this is for Uni)
While we're at it, can someone double check this for me? Just to make sure it seems right.
A translation matrix (again, let's start from the identity) is something like this:
[1, 0, X translation element]
[0, 1, Y translation element]
[0, 0, 1]

First, you cannot represent a translation with a 3x3 matrix in 3D space. You have to use homogeneous 4x4 matrices.
After that, create a separate matrix for each transformation (translation, rotation, scale) and multiply them to get the final transformation matrix (multiplying 4x4 matrices gives you another 4x4 matrix), as in the sketch below.
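A minimal C++ sketch of that idea, using a hand-rolled plain-array matrix type purely for illustration (in real code you would probably reach for a maths library such as GLM):
#include <array>
#include <cmath>

// A hypothetical minimal 4x4 matrix type (row-major), just for illustration.
using Mat4 = std::array<std::array<float, 4>, 4>;

Mat4 identity() {
    Mat4 m{};                               // all zeros
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0f;
    return m;
}

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

Mat4 translation(float tx, float ty, float tz) {
    Mat4 m = identity();
    m[0][3] = tx; m[1][3] = ty; m[2][3] = tz;   // translation goes in the last column
    return m;
}

Mat4 rotationZ(float radians) {
    Mat4 m = identity();
    m[0][0] = std::cos(radians); m[0][1] = -std::sin(radians);
    m[1][0] = std::sin(radians); m[1][1] =  std::cos(radians);
    return m;
}

// Example: rotate by 0.5 rad about Z, then translate by (2, 3, 0).
// With column vectors the right-most matrix is applied to a point first.
Mat4 model = multiply(translation(2.0f, 3.0f, 0.0f), rotationZ(0.5f));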

Let's clarify some points:
Your object consists of 3D points, which are basically 3-by-1 matrices.
You need a 3-by-3 rotation matrix R to rotate your object, but if you also add translation terms, the transformation matrix becomes 4 by 4:
[R11, R12, R13, tx]
[R21, R22, R23, ty]
[R31, R32, R33, tz]
[0, 0, 0, 1]
For the R terms you can have a look at http://inside.mines.edu/~gmurray/ArbitraryAxisRotation/; they depend on the rotation angles about each axis.
In order to rotate your object, every 3D point is multiplied by this transformation matrix. To every 3-by-1 point you also need to add a 4th term (a scale factor), which is 1 assuming fixed scale:
[x y z 1]'
The resulting product vector will be 4 by 1; its last term is the scale term, which is again 1 and can be dropped.
The resulting rotated object points are these new 3D product points.
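As a small illustration of that multiplication (the same hypothetical plain-array idea as above, not tied to any library):
#include <array>

// Hypothetical plain types, for illustration only.
using Mat4 = std::array<std::array<float, 4>, 4>;
using Vec4 = std::array<float, 4>;

// Multiply one homogeneous point [x y z 1]' by a 4x4 transform.
Vec4 transformPoint(const Mat4& m, float x, float y, float z) {
    Vec4 p{x, y, z, 1.0f};      // append the 4th (scale) term, which is 1
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int k = 0; k < 4; ++k)
            r[i] += m[i][k] * p[k];
    return r;                   // r[3] stays 1 for a rigid transform and can be dropped
}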

I faced the same problem and found a satisfying formula in this SO question.
Let (cos0, sin0) be respectively the cosine and sine values of your angle, and (x0, y0) the coordinates of the center of your rotation.
To transform a 2d point of coordinates (x,y), you have to multiply its homogeneous 3x1 coordinates (x,y,1) by this 3x3 matrix:
[cos0, -sin0, x0-(cos0*x0 - sin0*y0)]
[sin0, cos0, y0-(sin0*x0 + cos0*y0)]
[ 0, 0, 1 ]
The values in the third column are the amount of translation to apply when your rotation center is not the origin of the coordinate system.
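A small C++ sketch of that matrix in action (the helper function is illustrative, not from any library):
#include <cmath>

// Rotate the 2D point (x, y) in place by 'radians' about the centre (x0, y0),
// using the homogeneous 3x3 matrix shown above.
void rotateAboutPoint(double radians, double x0, double y0, double& x, double& y) {
    const double c = std::cos(radians);
    const double s = std::sin(radians);

    // The third-column translation terms of the matrix above.
    const double tx = x0 - (c * x0 - s * y0);
    const double ty = y0 - (s * x0 + c * y0);

    const double xr = c * x - s * y + tx;
    const double yr = s * x + c * y + ty;
    x = xr;
    y = yr;
}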

Related

Estimate the camera pose in the reference system using one marker with ARUCO

I am currently working on a camera pose estimation project using only one marker with ARUCO.
I used ArUco's marker detector to detect markers and get the marker's rvec and tvec. I understand these two vectors represent the transform from the marker to the camera, i.e. the marker's pose w.r.t. the camera. I form a 4-by-4 matrix called T_marker_camera using these two vectors.
Then I set up a world frame (left-handed) and get the marker's world pose, which is a 4-by-4 transform matrix.
I want to calculate the pose of the camera w.r.t the world frame, and I use the following formula to calculate it:
T_camera_world = T_marker_world * T_marker_camera_inv
Before I apply the above formula, I convert the OpenCV coordinates to the left-handed ones (flipping the sign of the x axis).
However, I didn't get the correct x, y, z of the camera w.r.t the world frame.
What did I miss to get the correct answer?
Thanks
The one equation you gave looks right, so the issue is probably somewhere that you didn't show/describe.
A fix in your notation will help clarify.
Write the pose/source frame on the right (input), the reference/destination frame on the left (output). Then your matrices "match up" like dominos.
rvec and tvec yield a matrix that should be called T_cam_marker.
If you want the pose of your camera in the world frame, that is
T_world_cam = T_world_marker * T_marker_cam
T_world_cam = T_world_marker * inv(T_cam_marker)
(equivalent to what you wrote, but domino)
Be sure that you do matrix multiplication, not element-wise multiplication.
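For illustration, a rough OpenCV sketch of that chain (the function and variable names are mine; rvec/tvec are what the marker detector gives you):
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

// Build T_cam_marker from rvec/tvec, then chain it with the known marker pose in the world.
cv::Mat cameraPoseInWorld(const cv::Vec3d& rvec, const cv::Vec3d& tvec,
                          const cv::Mat& T_world_marker /* 4x4, CV_64F */) {
    cv::Mat R;
    cv::Rodrigues(rvec, R);                          // 3x3 rotation matrix from the rotation vector

    cv::Mat T_cam_marker = cv::Mat::eye(4, 4, CV_64F);
    R.copyTo(T_cam_marker(cv::Rect(0, 0, 3, 3)));    // top-left 3x3 block
    for (int i = 0; i < 3; ++i)
        T_cam_marker.at<double>(i, 3) = tvec[i];

    // Dominos: T_world_cam = T_world_marker * T_marker_cam = T_world_marker * inv(T_cam_marker)
    return T_world_marker * T_cam_marker.inv();
}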
To move between left-handed and right-handed coordinate systems, insert a matrix that maps coordinates accordingly. Frames:
OpenCV camera/screen: right-handed, {X right, Y down, Z far}
ARUCO (in OpenCV anyway): right-handed, {X right, Y far, Z up}, first corner is top left (-X+Y quadrant)
whatever leftie frame you have, let's say {X right, Y up, Z far} and it's a screen or something
The hand-change matrix for typical frames on screens is an identity but with the entry for Y being a -1. I don't know why you would flip the X but that's "equivalent", ignoring any rotations.
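In code, that handedness conversion could look like this (a sketch for the flip-Y case described above; conjugating with S converts a whole 4x4 pose between conventions):
#include <opencv2/core.hpp>

// Change of handedness for a 4x4 pose: conjugate with the axis-flip matrix S.
// Here S flips Y, mapping e.g. {X right, Y down, Z far} to {X right, Y up, Z far}.
cv::Mat toLeftHanded(const cv::Mat& T_right /* 4x4, CV_64F */) {
    cv::Mat S = cv::Mat::eye(4, 4, CV_64F);
    S.at<double>(1, 1) = -1.0;     // the entry for Y is -1
    return S * T_right * S;        // S is its own inverse, so this is S * T * S^-1
}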

Adding Gravity to Acceleration with Quaternions

I am new on this platform, so please let me know if I am doing something wrong before reporting! Also, I already searched for similar problems but could not find the answer I am looking for.
I have this problem where I need to "add" a gravity component to my vector of linear accelerations, and I know this can be done with quaternions. My vector:
Accelerations = [Xacc, Yacc, Zacc]
and a quaternions that represents my current orientation:
q = [w, x, y, z]
Let's say I start in this position:
q = [1,0,0,0]
Now of course, if I move my smartwatch, for example, I'll get some values in my accelerations vector and some values in my quaternion, right?
I thought about converting my accelerations vector into quaternion form:
AccQuat = [0, x, y, z]
And then rotate this quaternion by the opposite of my q, so I can "translate" my accelerations quaternion into the Earth frame of reference q = [1,0,0,0], then convert it back into vector form and add the gravity vector:
g = [0, 0, -9.81]
Once done, I would put the "new" vector with gravity added back into quaternion form, rotate it back to its native orientation, and then convert it back into vector form.
It's a little bit tricky, but I think it can be done this way. Is this the right way to do what I'm trying to do? I have no information other than an already normalized quaternion representing my orientation and a vector of linear accelerations.
To rotate with a quaternion I have to apply:
qOut = q*p*q^-1
by doing two quaternion multiplications, right?
And if, for example, I have a quaternion representing a 90 degree rotation about my Z axis:
q = [0.71, 0, 0, 0.71]
if I want to "rotate back" my accelerations quaternion, I need to compute
qOut = q*p*q^-1
but as q I need to use my "reverse" (conjugate) quaternion:
qReverse = [0.71, 0, 0, -0.71]
right?
I'm working in C++ to make the code work, but my problem now is the mental procedure rather than coding it.
Thanks everyone for the help!
========== EDIT ==================
More information:
I have a fixed coordinate system called World at (0,0,0), always parallel to my floor. My quaternion rotations are for a system called "0" that refers to this World; they share the same centre at (0,0,0).
Maybe this can explain it better:
World is the black system, 0 is the blue one.
Let's suppose that my blue axes carry the 3 components of the acceleration vector.
I thought about converting them into a quaternion by simply adding a constant 0 as w, then rotating them with the quaternion formula (where q is the reverse of the quaternion I got from the data):
f = q*p*q^-1
So they coincide with the black system.
From this, add the gravity component along the Z axis (since the world frame is parallel to the floor), then rotate my "new accelerations quaternion" back to its original orientation with the same formula but an inverted q. In the end I want to get my accelerations vector with gravity added, that's all. I thought it could be done with quaternions. Hope this explains it better; forgive me if I'm repeating things, but I'm also new to this topic.
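For reference, a minimal C++ sketch of the pipeline described above, with a hand-rolled quaternion type (no library; the assumption that q rotates sensor-frame vectors into the world frame is noted in the comments, so swap q and its conjugate if yours is the opposite convention):
#include <array>

// Minimal quaternion [w, x, y, z] helpers; a sketch of the described procedure,
// not tied to any particular sensor SDK.
using Quat = std::array<double, 4>;
using Vec3 = std::array<double, 3>;

Quat conjugate(const Quat& q) { return {q[0], -q[1], -q[2], -q[3]}; }

Quat multiply(const Quat& a, const Quat& b) {
    return {
        a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
        a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
        a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
        a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0]
    };
}

// Rotate a vector by quaternion q:  v' = q * [0, v] * q^-1  (q assumed unit length).
Vec3 rotate(const Quat& q, const Vec3& v) {
    Quat p{0.0, v[0], v[1], v[2]};
    Quat r = multiply(multiply(q, p), conjugate(q));
    return {r[1], r[2], r[3]};
}

// The procedure from the question, assuming q rotates sensor-frame vectors into the
// world frame (if your q uses the opposite convention, swap q and conjugate(q) below).
Vec3 addGravity(const Quat& q, const Vec3& accSensor) {
    Vec3 accWorld = rotate(q, accSensor);      // express the acceleration in the world frame
    accWorld[2] += -9.81;                      // add the gravity vector [0, 0, -9.81]
    return rotate(conjugate(q), accWorld);     // rotate back to the sensor frame
}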

How to rectify an image from a single calibrated camera using the Matlab toolbox [duplicate]

I'm using Matlab for camera calibration using Jean-Yves Bouguet's Camera Calibration Toolbox. I have all the camera parameters from the calibration procedure. When I use a new image not in the calibration set, I can get its transformation equation, e.g. Xc = R*X + T, where X is the 3D point of the calibration rig (planar) in the world frame, and Xc its coordinates in the camera frame. In other words, I have everything (both extrinsic and intrinsic parameters).
What I want to do is to perform perspective correction on this image, i.e. I want it to remove any perspective and see the calibration rig undistorted (it's a checkerboard).
Matlab's new Computer Vision toolbox has an object that performs a perspective transformation on an image, given a 3x3 matrix H. The problem is, I can't compute this matrix from the known intrinsic and extrinsic parameters!
To all who are still interested in this after so many months: I've managed to get the correct homography matrix using Kovesi's code (http://www.csse.uwa.edu.au/~pk/research/matlabfns), specifically the homography2d.m function. You will, however, need the pixel values of the four corners of the rig. If the camera is fixed, you only need to do this once. See the example code below:
%get corner pixel coords from base image
p1=[33;150;1];
p2=[316;136;1];
p3=[274;22;1];
p4=[63;34;1];
por=[p1 p2 p3 p4];
por=[0 1 0;1 0 0;0 0 1]*por; %swap x-y <--------------------
%calculate target image coordinates in world frame
% rig is 9x7 (X,Y) with 27.5mm box edges
XXw=[[0;0;0] [0;27.5*9;0] [27.5*7;27.5*9;0] [27.5*7;0;0]];
Rtarget=[0 1 0;1 0 0;0 0 -1]; %Rotation matrix of target camera (vertical pose)
XXc=Rtarget*XXw+Tc_ext*ones(1,4); %go from world frame to camera frame
xn=XXc./[XXc(3,:);XXc(3,:);XXc(3,:)]; %calculate normalized coords
xpp=KK*xn; %calculate target pixel coords
% get homography matrix from original to target image
HH=homography2d(por,xpp);
%do perspective transformation to validate homography
pnew=HH*por./[HH(3,:)*por;HH(3,:)*por;HH(3,:)*por];
That should do the trick. Note that Matlab defines the x axis in an image as the row index and y as the column index. Thus one must swap x and y in the equations (as you can see in the code above). Furthermore, I had managed to compute the homography matrix from the parameters alone, but the result was slightly off (maybe round-off errors in the calibration toolbox). The best way to do this is the above.
If you want to use just the camera parameters (that is, not use Kovesi's code), then the homography matrix is H = KK*Rmat*inv_KK. In this case the code is:
% corner coords in pixels
p1=[33;150;1];
p2=[316;136;1];
p3=[274;22;1];
p4=[63;34;1];
pmat=[p1 p2 p3 p4];
pmat=[0 1 0;1 0 0;0 0 1]*pmat; %swap x-y
R=[0 1 0;1 0 0;0 0 1]; %rotation matrix of final camera pose
Rmat=Rc_ext'*R; %rotation from original pose to final pose
H=KK*Rmat*inv_KK; %homography matrix
pnew=H*pmat./[H(3,:)*pmat;H(3,:)*pmat;H(3,:)*pmat]; %do perspective transformation
H2=[0 1 0;-1 0 0;0 0 1]*H; %swap x-y in the homography matrix to apply in image
Approach 1:
In the Camera Calibration Toolbox you should notice that there is an H matrix for each image of your checkerboard in your workspace. I am not familiar with the computer vision toolbox yet but perhaps this is the matrix you need for your function. It seems that H is computed like so:
KK = [fc(1) fc(1)*alpha_c cc(1);0 fc(2) cc(2); 0 0 1];
H = KK * [R(:,1) R(:,2) Tc]; % where R is your extrinsic rotation matrix and Tc the translation vector
H = H / H(3,3);
Approach 2:
If the Computer Vision toolbox function doesn't work out for you, then to find the perspective projection of an image I have used the interp2 function like so:
[X, Y] = meshgrid(0:size(I,2)-1, 0:size(I,1)-1);
im_coord = [X(:), Y(:), ones(numel(I),1)]';
% Insert projection here for X and Y to XI and YI
ZI = interp2(X,Y,Z,XI,YI);
I used perspective projections on a project a while ago and I believe that you need to use homogeneous coordinates. I found this Wikipedia article quite helpful.

Resolving rotation matrices to obtain the angles

I have used this code as a basis to detect my rectangular target in a scene. I use ORB and the FLANN matcher. I have been able to successfully draw the bounding box of the detected target in my scene using the findHomography() and perspectiveTransform() functions.
The reference image (img_object in the above code) is a straight-on view of only the rectangular target. Now the target in my scene image may be tilted forwards or backwards. I want to find out the angle by which it has been tilted. I have read various posts and came to the conclusion that the homography returned by findHomography() can be decomposed into a rotation matrix and a translation vector. I have used code from https://gist.github.com/inspirit/740979 (recommended by this link), translated to C++. This is the Zhang SVD decomposition code taken from the camera calibration module of OpenCV. I got the complete explanation of this decomposition code from O'Reilly's Learning OpenCV book.
I also used solvePnP() on the keypoints returned by the matcher to cross-check the rotation matrix and translation vector returned by the homography decomposition, but they do not seem to be the same.
I already have the measurements of the tilts of all my scene images. I found two ways to retrieve the angles from the rotation matrix to check how well they match my values.
Given a 3×3 rotation matrix
R = [ r_{11} r_{12} r_{13} ]
    [ r_{21} r_{22} r_{23} ]
    [ r_{31} r_{32} r_{33} ]
The 3 Euler angles are
theta_{x} = atan2(r_{32}, r_{33})
theta_{y} = atan2(-r_{31}, sqrt(r_{32}^2 + r_{33}^2))
theta_{z} = atan2(r_{21}, r_{11})
The axis-angle representation: for a general rotation matrix R, its corresponding rotation axis u and rotation angle θ can be retrieved from:
cos(θ) = ( trace(R) − 1) / 2
[u]× = (R − R⊤) / (2 sin(θ))
I calculated the angles using both methods for the rotation matrices obtained from the homography decomposition and from solvePnP(). All the angles are different and give very unexpected values.
Is there a hole in my understanding? I do not understand where my calculations are wrong. Are there any alternatives I can use?
Why do you expect them to be the same? They are not the same thing at all.
The Euler angles are three angles of rotation about one axis at a time, starting from the world frame.
Rodrigues' formula gives the components of one vector in the world frame, and an angle of rotation about that vector.
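To make that difference concrete, a minimal C++ sketch of both extractions from a plain 3x3 rotation matrix, using the same formulas as in the question (zero-based indices, so r_{32} becomes R[2][1]):
#include <cmath>

// R is a plain row-major 3x3 rotation matrix.
struct EulerAngles { double x, y, z; };

EulerAngles eulerFromRotation(const double R[3][3]) {
    EulerAngles e;
    e.x = std::atan2(R[2][1], R[2][2]);
    e.y = std::atan2(-R[2][0], std::sqrt(R[2][1] * R[2][1] + R[2][2] * R[2][2]));
    e.z = std::atan2(R[1][0], R[0][0]);
    return e;
}

// Axis-angle: the angle comes from the trace, the axis from the skew-symmetric part (R - R^T).
void axisAngleFromRotation(const double R[3][3], double axis[3], double& angle) {
    angle = std::acos((R[0][0] + R[1][1] + R[2][2] - 1.0) / 2.0);
    const double s = 2.0 * std::sin(angle);          // assumes the angle is not 0 or pi
    axis[0] = (R[2][1] - R[1][2]) / s;
    axis[1] = (R[0][2] - R[2][0]) / s;
    axis[2] = (R[1][0] - R[0][1]) / s;
}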

Quaternion - Rotate To

I have some object in world space, let's say at (0,0,0) and want to rotate it to face (10,10,10).
How do I do this using quaternions?
This question doesn't quite make sense. You said that you want an object to "face" a specific point, but that doesn't give enough information.
First, what does it mean to face that direction? In OpenGL, it means that the -z axis in the local reference frame is aligned with the specified direction in some external reference frame. In order to make this alignment happen, we need to know what direction the relevant axis of the object is currently "facing".
However, that still doesn't define a unique transformation. Even if you know what direction to make the -z axis point, the object is still free to spin around that axis. This is why the function gluLookAt() requires that you provide an 'at' direction and an 'up' direction.
The next thing that we need to know is what format does the end-result need to be in? The orientation of an object is often stored in quaternion format. However, if you want to graphically rotate the object, then you might need a rotation matrix.
So let's make a few assumptions. I'll assume that your object is centered at the world's point c and has the default alignment. I.e., the object's x, y, and z axes are aligned with the world's x, y, and z axes. This means that the orientation of the object, relative to the world, can be represented as the identity matrix, or the identity quaternion: [1 0 0 0] (using the quaternion convention where w comes first).
If you want the shortest rotation that will align the object's -z axis with point p:=[p.x p.y p.z], then you will rotate by φ around axis a. Now we'll find those values. First we find axis a by normalizing the vector p-c and then taking the cross-product with the unit-length -z vector and then normalizing again:
a = normalize( crossProduct(-z, normalize(p-c) ) );
The shortest angle between those two unit vectors is found by taking the inverse cosine of their dot-product:
φ = acos( dotProduct(-z, normalize(p-c) ));
Unfortunately, this is a measure of the absolute value of the angle formed by the two vectors. We need to figure out if it's positive or negative when rotating around a. There must be a more elegant way, but the first way that comes to mind is to find a third axis, perpendicular to both a and -z, and then take the sign from its dot-product with our target axis, viz.:
b = crossProduct(a, -z );
if ( dotProduct(b, normalize(p-c) )<0 ) φ = -φ;
Once we have our axis and angle, turning it into a quaternion is easy:
q = [cos(φ/2), sin(φ/2)*a];
This new quaternion represents the new orientation of the object. It can be converted into a matrix for rendering purposes, or you can use it to directly rotate the object's vertices, if desired, using the rules of quaternion multiplication.
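Putting those steps together, here is a rough C++ sketch of the procedure just described (plain structs and hand-rolled helpers; the function names mirror the pseudo-calls above):
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Quat = std::array<double, 4>;   // [w, x, y, z]

Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
double dotProduct(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
Vec3 crossProduct(const Vec3& a, const Vec3& b) {
    return {a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]};
}
Vec3 normalize(const Vec3& v) {
    const double n = std::sqrt(dotProduct(v, v));
    return {v[0]/n, v[1]/n, v[2]/n};
}

// Quaternion aligning the object's -z axis (object centred at c) with point p,
// following the axis/angle steps described above.
// (The degenerate case where p-c is parallel to the z axis needs special handling.)
Quat faceTowards(const Vec3& c, const Vec3& p) {
    const Vec3 minusZ{0.0, 0.0, -1.0};
    const Vec3 dir = normalize(sub(p, c));

    const Vec3 a = normalize(crossProduct(minusZ, dir));   // rotation axis
    double phi = std::acos(dotProduct(minusZ, dir));       // unsigned angle

    const Vec3 b = crossProduct(a, minusZ);                // pick the sign of the angle
    if (dotProduct(b, dir) < 0.0) phi = -phi;

    return {std::cos(phi / 2.0),
            std::sin(phi / 2.0) * a[0],
            std::sin(phi / 2.0) * a[1],
            std::sin(phi / 2.0) * a[2]};
}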
An example of calculating the Quaternion that represents the rotation between two vectors can be found in the OGRE source code for the Ogre::Vector3 class.
In response to your clarification, and just to answer this, I've shamelessly copied from here a very interesting and neat algorithm for finding the quaternion between two vectors, one I had never seen before. Mathematically it seems valid, and since your question is about the mathematics behind it, I'm sure you'll be able to convert this pseudocode into C++.
quaternion q;
vector3 c = cross(v1,v2);
q.v = c;
if ( vectors are known to be unit length ) {
    q.w = 1 + dot(v1,v2);
} else {
    q.w = sqrt(v1.length_squared() * v2.length_squared()) + dot(v1,v2);
}
q.normalize();
return q;
Let me know if you need help clarifying any bits of that pseudocode. Should be straightforward though.
dot(a,b) = a1*b1 + a2*b2 + ... + an*bn
and
cross(a,b) = well, the cross product. it's annoying to type out and
can be found anywhere.
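If it helps, one possible plain C++ translation of that pseudocode (a sketch with a minimal hand-rolled vector and quaternion, using the general branch since the inputs may not be unit length):
#include <cmath>

struct Vector3 {
    double x, y, z;
    double lengthSquared() const { return x*x + y*y + z*z; }
};

struct Quaternion { double w, x, y, z; };

double dot(const Vector3& a, const Vector3& b) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

Vector3 cross(const Vector3& a, const Vector3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// Shortest-arc quaternion rotating v1 onto v2 (vectors need not be unit length).
// Note: the anti-parallel case (v1 opposite to v2) makes the norm zero and needs special handling.
Quaternion rotationBetween(const Vector3& v1, const Vector3& v2) {
    const Vector3 c = cross(v1, v2);
    Quaternion q;
    q.w = std::sqrt(v1.lengthSquared() * v2.lengthSquared()) + dot(v1, v2);
    q.x = c.x; q.y = c.y; q.z = c.z;

    // Normalise the quaternion before returning it.
    const double n = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    q.w /= n; q.x /= n; q.y /= n; q.z /= n;
    return q;
}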
You may want to use SLERP (Spherical Linear Interpolation). See this article for reference on how to do it in C++.