Translating a Quaternion - opengl

(perhaps this is better for a math Stack Exchange?)
I have a chain composed of bones. Each bone has a tip and a tail. The following code computes where its tip will be, given a rotation, and sets the next link in the chain's position appropriately:
// Quaternion is a hand-rolled class that works correctly (as far as I can tell.)
Quaternion quat = new Quaternion(getRotationAngleDegrees(), getRotation());
// figure out where the tip will be after applying the rotation
Vector3f rotatedTip = quat.applyRotationTo(tip);
// set the next bone's tail to be at this one's tip
updateNextPosFrom(rotatedTip);
This works if the rotation is supposed to occur around the origin of the object's coordinate system. But what if I want the rotation to occur around some other arbitrary point in the object? I'm not sure how to translate the quaternion. What is the best way to do it?
(I'm using JOGL / OpenGL.)

Dual quaternions are useful for expressing rigid spatial transformations (combined rotations and translations).
Based on dual numbers (one of the Clifford algebras: d = a + e b, where a, b are real and e is nonzero but e^2 = 0), dual quaternions, U + e V, can represent lines in space, with U the unit direction quaternion and V the moment about a reference point. In this way, dual quaternion lines are very much like Pluecker lines.
While the quaternion transform Q V Q* (Q* is the quaternion conjugate of Q) is used to rotate a unit vector quaternion V about a point, a similar dual quaternion form can be used to apply a screw transform to a line (a rigid rotation about an axis combined with a translation along the axis).
Just as any rigid 2D transform can be resolved to a rotation about a point, any rigid 3D transform can be resolved to a screw.
For such power and expressiveness, dual quaternion references are thin, and the Wikipedia article is as good a place as any to start.

A quaternion is used specifically to handle a rotation factor, but does not include a translation at all.
Typically, in this situation, you'll want to apply a rotation to a point based on the "bone's" length, but centered at the origin. You can then translate post-rotation to the proper location in space.
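In code, that "rotate about a pivot" recipe amounts to: translate the point so the pivot sits at the origin, rotate, then translate back. A minimal sketch in the style of the question's snippet; Vector3f.subtract and Vector3f.add are assumed helper methods, not part of any particular library:
// rotate 'tip' about an arbitrary pivot point instead of the origin
Quaternion quat = new Quaternion(getRotationAngleDegrees(), getRotation());
// 1. express the tip relative to the pivot
Vector3f local = Vector3f.subtract(tip, pivot);
// 2. rotate as before, which is now effectively a rotation about the pivot
Vector3f rotatedLocal = quat.applyRotationTo(local);
// 3. translate back into the object's coordinate system
Vector3f rotatedTip = Vector3f.add(rotatedLocal, pivot);
updateNextPosFrom(rotatedTip);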

Quaternions are generally used to represent rotations only; they cannot represent translations as well.
You need to convert your quaternion into a rotation matrix, insert it into the appropriate part of your standard OpenGL 4x4 matrix, and combine it with a translation in order to rotate about an arbitrary point.
4x4 rotation matrix:
[ r r r 0 ]
[ r r r 0 ] <- the r's are the 3x3 rotation matrix from the wiki article
[ r r r 0 ]
[ 0 0 0 1 ]
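For reference, here is a rough sketch of that conversion for a unit quaternion (w, x, y, z), filled into a column-major float[16] the way OpenGL expects; the method name is illustrative, not from any particular library:
// Build a column-major 4x4 rotation matrix from a unit quaternion (w, x, y, z).
static float[] quatToMatrix(float w, float x, float y, float z) {
    float[] m = new float[16];
    m[0] = 1 - 2*(y*y + z*z);  m[4] = 2*(x*y - w*z);      m[8]  = 2*(x*z + w*y);      m[12] = 0;
    m[1] = 2*(x*y + w*z);      m[5] = 1 - 2*(x*x + z*z);  m[9]  = 2*(y*z - w*x);      m[13] = 0;
    m[2] = 2*(x*z - w*y);      m[6] = 2*(y*z + w*x);      m[10] = 1 - 2*(x*x + y*y);  m[14] = 0;
    m[3] = 0;                  m[7] = 0;                  m[11] = 0;                  m[15] = 1;
    return m;
}
To rotate about an arbitrary pivot p, compose T(p) * R * T(-p), either with your own 4x4 multiply or via glTranslatef / glMultMatrixf in the fixed-function pipeline.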

The Wikipedia page on forward kinematics points to this paper: Introduction to Homogeneous Transformations & Robot Kinematics.

Edit: This answer is wrong. It argues from the properties of 4x4 transformation matrices, which are not quaternions...
I might have gotten it wrong, but to me (unlike some answers) a quaternion is indeed a tool to handle rotations and translations (and more). It is a 4x4 matrix whose last column represents the translation. Using matrix algebra, replace the 3-vector (x, y, z) by the 4-vector (x, y, z, 1) and multiply it by the matrix. You will find that the values of the last column of the matrix get added to the coordinates x, y, z of the original vector, as in a translation.
A 3x3 matrix over 3D space represents a linear transformation (like a rotation around the origin). You cannot use a 3x3 matrix for an affine transformation like a translation. So I understand the quaternions simply as a little "trick" to represent more kinds of transformations using matrix algebra: add a fourth coordinate equal to 1 and use 4x4 matrices. Because matrix algebra remains valid, you can combine space transformations by multiplying the matrices, which is indeed powerful.

Related

How to flip only one axis of transformation matrix?

I have a 4x4 transformation matrix. However, after trying out the transformation I noticed that movement and rotation along the Y axis go the opposite way. The rest is correct.
I got this matrix from some other API, so it is probably a difference in coordinate systems. So, how can I flip one axis of a transformation matrix?
If it were only translation I could put a minus sign on the Y translation, but I have no idea how to reverse the rotation of only one axis, since all of the rotation is represented in the same 3x3 area. I thought there might be some way that affects both translation and rotation at the same time (truly flipping the axis).
Edit: I'm pretty sure the operation you're looking for is changing coordinate systems while maintaining Z-up or Y-up. In this case, try negating all the elements of the second column (or row) of your matrix.
This question would be better for the Math StackExchange. First, a really helpful read on rotation matrices.
The first problem is the matter of rotation order. I will be assuming the XYZ rotation order, so the combined rotation is R = Rz(gamma) * Ry(beta) * Rx(alpha), built from the standard single-axis rotation matrices.
In the matrix derived from that rotation order, where alpha is the X angle, beta is the Y angle, and gamma is the Z angle, the bottom row is [ -sin(beta)  cos(beta)sin(alpha)  cos(beta)cos(alpha) ] and the first entry of the middle row is cos(beta)sin(gamma).
You can derive the individual axis angles from this matrix. For example, you can derive the Y angle (beta) from -sin(beta) using some inverse trig. Given beta, you can derive alpha from cos(beta)sin(alpha), and gamma from cos(beta)sin(gamma). Note that the same value in the matrix can correspond to multiple angles (e.g. sin(0)=0 and sin(180)=0).
Now that you know alpha, beta, and gamma, you can reverse beta and remake the rotation matrix.
There's a good chance that there's a better way to do this using quaternions, but you should ask the Math StackExchange these kinds of language-agnostic questions.
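As a rough sketch of that recipe, assuming the combined matrix is R = Rz(gamma) * Ry(beta) * Rx(alpha) and is stored row-major as double[3][3] m (watch out for gimbal lock when cos(beta) is near zero):
// decompose the XYZ-order rotation, flip the Y angle, and rebuild the matrix
double beta  = Math.asin(-m[2][0]);            // m[2][0] = -sin(beta)
double alpha = Math.atan2(m[2][1], m[2][2]);   // cos(beta)sin(alpha), cos(beta)cos(alpha)
double gamma = Math.atan2(m[1][0], m[0][0]);   // cos(beta)sin(gamma), cos(beta)cos(gamma)
beta = -beta;                                  // reverse the Y rotation
double ca = Math.cos(alpha), sa = Math.sin(alpha);
double cb = Math.cos(beta),  sb = Math.sin(beta);
double cg = Math.cos(gamma), sg = Math.sin(gamma);
double[][] flipped = {
    { cb*cg,  sa*sb*cg - ca*sg,  ca*sb*cg + sa*sg },
    { cb*sg,  sa*sb*sg + ca*cg,  ca*sb*sg - sa*cg },
    { -sb,    sa*cb,             ca*cb            }
};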
Much shorter answer: if you are not careful with your frame orientation, many things down your pipeline are likely to have a bad hair day. The reason is "parity", a.k.a. "frame orientation", a.k.a. "right-handedness" (or, rarely, left-handedness). Most 3D geometry tools and libraries that work together implicitly assume that all coordinate systems in play are right-handed (or at least consistently handed). Inverting the orientation of just one axis in a coordinate system changes it from right-handed to left-handed or vice versa.
So, suggestions for things to check and try in your problem:
Check that the frame you get from your API is right-handed. You do so
by computing the determinant of the 3x3 rotation part of your 4x4 transform matrix: it must be +1 or very close to it.
If it is -1, then flip one of its axes, i.e. change the sign of one of the columns of the 3x3 rotation.
Note carefully: I said "columns" because I assume that you apply a transform Q to a point x by multiplying as Q * x, x being a 4x1 column vector with the last component equal to one. If you use row vectors left-multiplied by Q, you need to flip a row instead.
If that determinant is +1, you have a bug someplace else.
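A quick way to run that check on a column-major float[16] transform (as OpenGL stores it); negating the first column is just an example, any single column will do:
// Check the handedness of the rotation part and flip one axis if it is left-handed.
static void ensureRightHanded(float[] q) {              // q: column-major 4x4 transform
    float det = q[0]*(q[5]*q[10] - q[9]*q[6])
              - q[4]*(q[1]*q[10] - q[9]*q[2])
              + q[8]*(q[1]*q[6]  - q[5]*q[2]);           // determinant of the 3x3 block
    if (det < 0) {
        q[0] = -q[0]; q[1] = -q[1]; q[2] = -q[2];        // negate the first column
    }
}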

Creating constraints for a Transformation Matrix

In a 3D space I have a 3D object which I am rotating using a transformation matrix. The transformation matrix is 4x4, but I am just using the rotation part of the matrix. I want to add constraints to the rotation, for example that the object can only rotate about the Z axis by at most 20 degrees. I know the following, but when I add manual constraints such as "the angle can't be larger than 20" I get scaling and skewing in my object.
To summarize my question: how can I add constraints to a transformation matrix?
The short answer: you should add constraints to your Euler angle representation.
If you keep the rotation only in matrix form, then convert it to an Euler angle representation, apply the constraints, and convert the Euler angles back to matrix form.
NOTE: Your Rx Ry Rz representation is called "Euler angles":
http://en.wikipedia.org/wiki/Euler_angles. There are many ways to combine rotations about orthogonal axes. Code for all the conversions can be taken from
http://tog.acm.org/resources/GraphicsGems/gemsiv/euler_angle/
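A rough sketch of that round trip, assuming the rotation part is stored row-major as double[3][3] m and decomposes as R = Rz(gamma) * Ry(beta) * Rx(alpha) (XYZ order); match the convention and the clamped axis to whatever your code actually uses:
// decompose into Euler angles, clamp the Z angle, then recompose
double beta  = Math.asin(-m[2][0]);
double alpha = Math.atan2(m[2][1], m[2][2]);
double gamma = Math.atan2(m[1][0], m[0][0]);
double limit = Math.toRadians(20);
gamma = Math.max(-limit, Math.min(limit, gamma));      // constrain rotation about Z to +/- 20 degrees
double ca = Math.cos(alpha), sa = Math.sin(alpha);
double cb = Math.cos(beta),  sb = Math.sin(beta);
double cg = Math.cos(gamma), sg = Math.sin(gamma);
double[][] constrained = {
    { cb*cg,  sa*sb*cg - ca*sg,  ca*sb*cg + sa*sg },
    { cb*sg,  sa*sb*sg + ca*cg,  ca*sb*sg - sa*cg },
    { -sb,    sa*cb,             ca*cb            }
};
Rebuilding the matrix from the clamped angles guarantees the result is a pure rotation again, so no scaling or skew can creep in.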

Camera projection matrix: why transpose rotation matrix?

In the following:
http://cvlab.epfl.ch/files/content/sites/cvlab2/files/data/strechamvs/rathaus.tar.gz
there's a README file that says:
a 3D point X will be projected into the images in the usual way:
x = K[R^T|-R^T t]X
I remember that the 3D-to-2D camera projection matrix uses the rotation R, not its transpose, i.e. I expect:
x = K[R|-R t]X
Why does it say R^T and not simply R ?
It depends on which direction R was determined in, i.e. whether it is the transformation of the camera in the global reference frame, or the transformation of points into the local camera reference frame.
The true answer is: don't worry, just check that what you've got is right.
Since R^T == R^{-1}, it seems like the first formula just expects the rotation to be stored in the reverse direction of the second one. Just make sure to use the direction they expect as input.
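One way to see it, assuming R and t in that README are the camera's pose, i.e. they map camera coordinates into the world frame:
X = R * X_cam + t                        (camera-to-world pose)
X_cam = R^T * (X - t) = R^T X - R^T t    (invert to go world-to-camera)
x = K * X_cam = K [R^T | -R^T t] X
If R and t instead already map world points into the camera frame, the familiar x = K[R|t]X form applies; the two conventions just store the rotation in opposite directions.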

Rounding a 3D point relative to a plane

I have a Plane class that holds n for the normal and q for a point on the plane. I also have another point p that lies on that plane. How do I go about rounding p to the nearest unit on that plane? It is like snapping a cursor to a 3D grid, except the grid lies on a plane that can be rotated.
Image to explain:
Red is the current point. Green is the rounded point that I'm trying to get.
Probably the easiest way to achieve this is by taking the plane to define a rotated and shifted coordinate system. This allows you to construct the matrices for transforming a point in global coordinates into plane coordinates and back. Once you have this, you can simply transform the point into plane coordinates, perform the rounding/projection in a trivial manner, and convert back to world coordinates.
Of course, the problem is underspecified the way you pose the question: the transformation you need has six degrees of freedom, while your plane equation only yields three constraints. So you need to add some more information: the location of the origin within the plane, and the rotation of your grid around the plane normal.
Personally, I would start by deriving a plane description in parametric form:
xVec = alpha*direction1 + beta*direction2 + x0
Of course, such a description contains nine variables (three vectors), but you can normalize the two direction vectors and constrain them to be orthogonal, which reduces the number of degrees of freedom back to six.
The two normalized direction vectors, together with the normalized normal, are the base vectors of the rotated coordinate system, so you can simply construct the rotation matrix by putting these three vectors together. To get the inverse rotation, simply transpose the resulting matrix. Add the translation / inverse translation on the appropriate side of the rotation, and you are done.
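A minimal sketch of that construction, assuming a hypothetical Vector3f class with the usual dot/cross/normalize/scale helpers, the plane normal n already normalized, and the grid origin taken to be the plane's point q with an arbitrary in-plane orientation:
// Snap point p to the unit grid of the plane through q with unit normal n.
static Vector3f snapToPlaneGrid(Vector3f p, Vector3f q, Vector3f n) {
    // pick any vector not parallel to n to seed the in-plane basis
    Vector3f seed = Math.abs(n.x) < 0.9f ? new Vector3f(1, 0, 0) : new Vector3f(0, 1, 0);
    Vector3f u = Vector3f.cross(n, seed).normalize();   // first in-plane direction
    Vector3f v = Vector3f.cross(n, u);                  // second in-plane direction (already unit length)
    // plane coordinates of p relative to q
    Vector3f d = Vector3f.subtract(p, q);
    float a = Math.round(Vector3f.dot(d, u));           // round to the nearest grid unit
    float b = Math.round(Vector3f.dot(d, v));
    // back to world coordinates (this also projects p onto the plane)
    return Vector3f.add(q, Vector3f.add(u.scale(a), v.scale(b)));
}
The choice of seed vector fixes the grid's rotation around the normal; if your application has a preferred "up" or "right" direction in the plane, use that instead.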

Transformation Concept in OpenCV

I am new to OpenCV and I am currently going through the concept of image transformation in OpenCV. So my questions are:
1) Why does an affine transformation use a 2x3 matrix while a perspective transformation uses a 3x3 matrix?
2) When should I use an affine transformation versus a perspective transformation?
Any Suggestions?
1) It is not a question about OpenCV but rather about mathematics. Applying an affine transformation to a point (x, y) means the following:
x_new = a*x + b*y + c;
y_new = d*x + e*y + f;
And so an affine transform has 6 degrees of freedom: a, b, c, d, e, f. They are stored in a 2x3 matrix: a, b, c in the first row, and d, e, f in the second row. You apply the transform to a point by multiplying the matrix with the vector (x, y, 1).
Perspective transform of (x,y) would be:
z = g*x + h*y + 1;
x_new = (a*x + b*y + c)/z;
y_new = (d*x + e*y + f)/z;
As you can see, it has 8 degrees of freedom, which are stored in a 3x3 matrix whose third row is g, h, 1.
See also homogeneous coordinates for more information about why this representation is so convenient.
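Mapping those formulas directly into code makes the difference concrete. A rough sketch with plain arrays (no OpenCV calls), applying a 3x3 perspective matrix H to a point, including the divide that an affine transform never needs:
// Apply a 3x3 perspective (homography) matrix H to the point (x, y).
static double[] applyPerspective(double[][] H, double x, double y) {
    double z = H[2][0]*x + H[2][1]*y + H[2][2];          // third row: g, h, 1
    double xNew = (H[0][0]*x + H[0][1]*y + H[0][2]) / z;
    double yNew = (H[1][0]*x + H[1][1]*y + H[1][2]) / z;
    return new double[] { xNew, yNew };
}
// An affine 2x3 matrix is the special case g = h = 0, so z == 1 and the division disappears.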
2) An affine transformation is also called a 'weak perspective' transformation: if you are looking at a scene from a different perspective but the size of the scene is small relative to the distance to the camera (i.e. parallel lines remain more or less parallel), then you may use an affine transform. Otherwise a perspective transform will be required.
It is better to consider the whole family of transformations; then you really remember what is what. Let's go from the simplest to the most complex:
1. Euclidean - a rigid rotation in the plane plus a translation; basically everything you can do with a piece of paper lying on a table.
2. Similarity - a more general transformation where you can rotate, translate and also scale (hence it is non-rigid);
3. Affine - adds another operation, shear, which turns a rectangle into a parallelogram. This kind of shear happens during orthographic projection, or when objects are viewed from a long distance (compared to their size); parallel lines are still preserved.
4. Homography or perspective transformation - the most general transformation; it turns a rectangle into a trapezoid (that is, a different amount of shear is applied to each side). This happens when projecting planar objects from a close distance. Remember how train tracks converge to a point at infinity? Hence the name perspective. It also means that, unlike the other transformations, we have to apply a division at some point: when we convert from homogeneous to Cartesian coordinates, we divide by the value produced by the third row.
This transformation is the only one of the family that cannot be optimally computed using linear algebra and requires non-linear optimization (because of the division). In camera projection, a homography arises in three cases:
1. between a flat surface and its image;
2. between arbitrary images of a 3D scene when the camera rotates but does not translate;
3. during a zoom operation.
In other words, whenever a flat camera sensor crosses the same optical rays you have a homography.