Orienting an object based on a parent with quaternions? - c++

I have 2 objects in 3D space, A & B, and object B is parented to A.
Both objects have 3D positions, as well as a Quaternion representing their specific orientations.
I have translation working fine, so whenever A moves, B moves.
However, I can't seem to get the orientation from the parent to be correctly applied to its children.
Let's say A's orientation represents a 90-degree rotation around the X axis. With my code, object B seems to rotate by about 180 degrees for some reason.
Here's a picture of exactly what's happening.
Here's how I'm attempting to generate a vector for any particular vertex, given the child and parent's position and orientation:
vec4 finalVertex = rotVertexByQuat(parentOrientation, vec4(parentPos,1) + vec4(objPos,1) + rotVertexByQuat(objOrientation, vertex) );
I rotate the vertex by the quaternion this way:
vec4 rotVertexByQuat(Quaternion quat, vec4 vert)
{
    Quaternion p1 = Quaternion(1, vec3(vert.x, vert.y, vert.z));
    Quaternion p2 = multiplyQuaternion(quat, p1);
    Quaternion p3 = multiplyQuaternion(p2, inverseQuaternion(quat));
    return vec4(round(p3.v.x), round(p3.v.y), round(p3.v.z), 1);
}
Is there something wrong with my order of operations?

See the answer to this question.
I have a feeling what you are trying to implement is the Quaternion-Vector rotation operation.
There are many different, obscure combinations of Quaternion conventions that can ruin your day (see this paper).
If your rotation operation is not working, try:
V_inertial = q (x) V_body (x) q*
Where (x) is the quaternion multiplication operator and q* is the conjugate quaternion of q.
In addition your "stacked" quaternions may be either:
q_C = q_A (x) q_B
or
q_C = q_B (x) q_A
depending on the convention used.
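For reference, here is a minimal sketch of both points using GLM rather than the asker's Quaternion class, so all names here are assumptions. It embeds the vector as a pure quaternion (scalar part 0), applies q (x) v (x) q*, and shows the two possible stacking orders:
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// V_inertial = q (x) V_body (x) q*, written out explicitly.
glm::vec3 rotateByQuat(const glm::quat& q, const glm::vec3& v)
{
    glm::quat p(0.0f, v.x, v.y, v.z);        // pure quaternion: scalar part is 0
    glm::quat r = q * p * glm::conjugate(q); // for a unit q, conjugate == inverse
    return glm::vec3(r.x, r.y, r.z);         // keep full precision, no rounding
}

// The "stacked" parent/child orientation, in both conventions.
glm::quat stack_AB(const glm::quat& q_A, const glm::quat& q_B) { return q_A * q_B; }
glm::quat stack_BA(const glm::quat& q_A, const glm::quat& q_B) { return q_B * q_A; }
If one stacking order rotates the child the wrong way, trying the other order is usually the quickest test.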

Related

placing objects perpendicularly on the surface of a sphere that has a wavy surface

So I have a sphere. It rotates around a given axis and changes its surface by a sin * cos function.
I also have a bunch of tracticoids at fixed points on the sphere. These objects follow the sphere while it moves (including the rotation and the change of the surface). But I can't figure out how to make them always perpendicular to the sphere. I have the points where each tracticoid connects to the surface of the sphere and its normal vector. The tracticoids are originally oriented along the z axis. So I tried to align their axis with the given normal vector, but I just can't make it work.
This is where I calculate the M transformation matrix and its inverse:
virtual void SetModelingTransform(mat4& M, mat4& Minv, vec3 n) {
    M = ScaleMatrix(scale) * RotationMatrix(rotationAngle, rotationAxis) * TranslateMatrix(translation);
    Minv = TranslateMatrix(-translation) * RotationMatrix(-rotationAngle, rotationAxis) * ScaleMatrix(vec3(1 / scale.x, 1 / scale.y, 1 / scale.z));
}
In my draw function I set the values for the transformation.
_M and _Minv are the matrices of the sphere, so the tracticoids follow the sphere, but when I tried to use a rotation matrix, the tracticoids started moving on the surface of the sphere.
_n is the normal vector that the tracticoid should follow.
void Draw(RenderState state, float t, mat4 _M, mat4 _Minv, vec3 _n) {
    SetModelingTransform(M, Minv, _n);
    if (!sphere) {
        state.M = M * _M * RotationMatrix(_n.z, _n);
        state.Minv = Minv * _Minv * RotationMatrix(-_n.z, _n);
    }
    else {
        state.M = M;
        state.Minv = Minv;
    }
    ...
}
You said your sphere has an axis of rotation, so you should have a vector a aligned with this axis.
Let P = P(t) be the point on the sphere at which your object is positioned. You should also have a vector n = n(t) perpendicular to the surface of the sphere at point P=P(t) for each time-moment t. All vectors are interpreted as column-vectors, i.e. 3 x 1 matrices.
Then, form the matrix
U[][1] = cross(a, n(t)) / norm(cross(a, n(t)))
U[][3] = n(t) / norm(n(t))
U[][2] = cross(U[][3], U[][1])
where for each j = 1, 2, 3, U[][j] is a 3 x 1 column vector. Then
U(t) = [ U[][1], U[][2], U[][3] ]
is a 3 x 3 orthogonal matrix (i.e. it is a 3D rotation around the origin).
For each moment of time t calculate the matrix
M(t) = U(t) * U(0)^T
where ^T is the matrix transposition.
The final transformation that rotates your object from its original position to its position at time t should be
X(t) = P(t) + M(t)*(X - P(0))
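If it helps, a GLM-based sketch of this construction could look like the following (the function and variable names are mine, not part of the answer); a is the sphere's rotation axis and n is the surface normal at P(t):
#include <glm/glm.hpp>

glm::mat3 buildU(const glm::vec3& a, const glm::vec3& n)
{
    glm::vec3 u1 = glm::normalize(glm::cross(a, n)); // U[][1]
    glm::vec3 u3 = glm::normalize(n);                 // U[][3]
    glm::vec3 u2 = glm::cross(u3, u1);                // U[][2]
    return glm::mat3(u1, u2, u3);                     // columns of U(t)
}

// X(t) = P(t) + M(t) * (X - P(0)), with M(t) = U(t) * U(0)^T
glm::vec3 transformPoint(const glm::vec3& X, const glm::vec3& P0, const glm::vec3& Pt,
                         const glm::mat3& U0, const glm::mat3& Ut)
{
    glm::mat3 M = Ut * glm::transpose(U0);
    return Pt + M * (X - P0);
}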
I'm not sure if I got your explanations, but here I go.
You have a sphere with a wavy surface. This means that each point on the surface changes its distance to the center of the sphere, like a piece of wood on a wave in the sea changes its distance to the bottom of the sea at that position.
We can tell that the radius R of the sphere varies at each point and at each moment in time.
Now you have a tracticoid (what's a tracticoid?). I'll take it as some object floating on the wave, and following the sphere's movements.
Then it seems you're asking how to make the tracticoid follow both the wavy surface and the sphere's movements.
Well. If we define each movement ("transformation") by a 4x4 matrix it all reduces to combine in the proper order those matrices.
There are some good OpenGL tutorials that teach you about transformations, and how to combine them. See, for example, learnopengl.com.
For your case, there are several transformations to use.
The sphere spins. You need a rotation matrix, let's call it MSR (matrix sphere rotation) and an axis of rotation, ASR. If the sphere also translates then also a MST is needed.
The surface waves, with some function f(lat, long, time) which calculates for those parameters the (signed) increment of the radius. So, Ri = R + f(la,lo,ti)
For the tracticoid, I guess you have some triangles that define a tracticoid. I also guess those triangles are expressed in a "local" coordinate system whose origin is the center of the tracticoid. Your issue comes when you have to position and rotate the tracticoid, right?
You have two options. The first is to rotate the tracticoid to make it aim perpendicular to the sphere and then translate it to follow the sphere's rotation. While perfectly correct mathematically, I find this option somewhat complicated.
The best option is to make the tracticoid rotate and translate exactly like the sphere, as if both shared the same origin, the center of the sphere, and then translate it to its current position.
The first part is quite easy: the matrix that defines such a transformation is M = MST * MSR, if you use the typical OpenGL axis convention; otherwise you need to swap their order. This M is the common part for all objects (sphere & tracticoids).
The second part requires a vector Vn that defines the point on the surface, relative to the center of the sphere. You should be able to calculate it with the parameters latitude, longitude and the R obtained by f() above, plus half the size of the tracticoid (the distance from its center to the point where it touches the wave). Use the components of Vn to build a translation matrix MTT.
And now, just get the resultant transformation to use with every vertex of the tracticoid: Mt = MTT * M = MTT * MST * MSR
To render the scene you need other two matrices, for the camera (MV) and for the projection (MP). While Mt is for each tracticoid, MV and MP are the same for all objects, including the sphere itself.
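As a rough GLM sketch of that composition (the matrix names MSR, MST and MTT follow the text above; everything else, including the parameters, is assumed):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 tracticoidMatrix(float sphereAngle, const glm::vec3& ASR,
                           const glm::vec3& spherePos, const glm::vec3& Vn)
{
    glm::mat4 MSR = glm::rotate(glm::mat4(1.0f), sphereAngle, ASR); // sphere spin
    glm::mat4 MST = glm::translate(glm::mat4(1.0f), spherePos);     // sphere translation
    glm::mat4 MTT = glm::translate(glm::mat4(1.0f), Vn);            // offset to the surface point
    return MTT * MST * MSR;                                         // Mt = MTT * MST * MSR
}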

OpenGl Rotate object on Y axis to look at another object

So, like in the title, I have 2 objects: one is moving around (on the z and x axes), the other one is static but should rotate around the y axis to always look at the other one... and I have been fighting with this for a week already.
What I have now is:
the vector from object 1 to object 2, and the current look-at vector of object 2.
I'm calculating the angle between these two vectors and adding it to rotation.y of object 2, but it's not working properly.
Any idea how to make it work? BTW, I'm using Euler angle transforms.
Pseudocode:
vectorFrom1to2 = vector1 - vector2;
lookatVectorof2ndObject;
i normalize both of them and then
float angle = acos(dot(vectorFrom1to2, lookatVectorof2ndObject));
object2.rotateY = angle;
I don't know where I'm making the mistake.
As a general rule of thumb, which has proved true in many situations I have observed: as soon as you find yourself calculating angles from vectors, you are most likely doing something in a more complicated way than necessary.
All you need is a basis transformation which transforms the first object's local coordinate system to make its local Z axis point towards the second object. You can do this with a simple rotation matrix (provided you have a matrix/vector library ready to facilitate this more easily).
So, provided you have object 1 with position p1 and object 2 with position p2 and you want p1 to rotate towards p2, then the rotation matrix can be obtained as follows:
(I am just using GLSL pseudo syntax here)
vec3 p1 = ... // <- position of first object
vec3 p2 = ... // <- position of second object
vec3 d = normalize(p2 - p1)
vec3 r = cross(vec3(0.0, 1.0, 0.0), d) // = vec3(d.z, 0, -d.x)
mat3 m = mat3(d.z, 0, -d.x, // <- first column ('right' vector)
              0,   1,  0,   // <- second column (keep Y)
              d.x, 0,  d.z) // <- third column (map Z to point towards p2)
When transforming the vertices v of the first object with m by: v' = m * v you get the Z axis of object p1 to point towards the position of p2, all formulated in the same "world" coordinate system.
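In C++ with GLM, a sketch of the same construction might look like this (all names are assumed; it also projects the direction onto the XZ plane so the result stays a pure rotation around Y, and extends m to a 4x4 model matrix placed at p1):
#include <glm/glm.hpp>

glm::mat4 lookAtAroundY(const glm::vec3& p1, const glm::vec3& p2)
{
    // Direction from object 1 to object 2, flattened to the XZ plane.
    glm::vec3 d = glm::normalize(glm::vec3(p2.x - p1.x, 0.0f, p2.z - p1.z));
    glm::mat3 m(glm::vec3(d.z, 0.0f, -d.x),  // 'right' vector
                glm::vec3(0.0f, 1.0f, 0.0f), // keep Y
                glm::vec3(d.x, 0.0f,  d.z)); // local Z points towards p2
    glm::mat4 model(m);                      // rotation part
    model[3] = glm::vec4(p1, 1.0f);          // translation part (place the object at p1)
    return model;
}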

Rotating an Object Around an Axis

I have a circular-shaped object, which I want to rotate like a fan around its own axis.
I can change the rotation in any direction, i.e. dx, dy, dz, using my transformation matrix.
The following is the code:
Matrix4f matrix = new Matrix4f();
matrix.setIdentity();
Matrix4f.translate(translation, matrix, matrix);
Matrix4f.rotate((float) Math.toRadians(rx), new Vector3f(1,0,0), matrix, matrix);
Matrix4f.rotate((float) Math.toRadians(ry), new Vector3f(0,1,0), matrix, matrix);
Matrix4f.rotate((float) Math.toRadians(rz), new Vector3f(0,0,1), matrix, matrix);
Matrix4f.scale(new Vector3f(scale,scale,scale), matrix, matrix);
My vertex code:
vec4 worldPosition = transformationMatrix * vec4(position,1.0);
vec4 positionRelativeToCam = viewMatrix*worldPosition;
gl_Position = projectionMatrix *positionRelativeToCam;
Main Game Loop:
Object.increaseRotation(dxf,dyf,dzf);
But it's not rotating around its own axis. What am I missing here?
I want something like this. Please help.
You should get rid of Euler angles for this.
Object/mesh geometry
You need to be aware of how your object is oriented in its local space. For example, let's assume this:
So in this case the main rotation is around the z axis. If your mesh is defined so that the rotation axis is not aligned to any of the axes (x, y or z), or the center point is not (0,0,0), then that will cause you problems. The remedy is either to change your mesh geometry or to create a special constant transform matrix M0 that will transform all vertexes from the mesh LCS (local coordinate system) to a different one that is axis-aligned and where the center of rotation sits at zero on the rotation axis.
In the latter case any operation on object matrix M would be done like this:
M'=M.M0.operation.Inverse(M0)
or in reverse or inverse order (it depends on your matrix/vertex multiplication and row/column order conventions). If your mesh is already centered and axis-aligned, then just do this instead:
M'=M.operation
The operation is the transform matrix of the change increment (for example a rotation matrix). M is the object's current transform matrix (see Object transform matrix below) and M' is its new version after applying the operation.
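As a small GLM illustration of the M0 case (names assumed, not from the answer):
#include <glm/glm.hpp>

glm::mat4 applyOperation(const glm::mat4& M, const glm::mat4& M0, const glm::mat4& operation)
{
    // M' = M * M0 * operation * Inverse(M0): map into the axis-aligned, centered
    // space, apply the incremental operation there, then map back.
    return M * M0 * operation * glm::inverse(M0);
}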
Object transform matrix
You need a single transform matrix for each object you have. This will hold the position and orientation of your object's LCS so it can be converted to the world/scene GCS (global coordinate system) or to its parent object's LCS.
rotating your object around its local axis of rotation
As mentioned in Understanding 4x4 homogenous transform matrices, for standard OpenGL matrix conventions you need to do this:
M'=M*rotation_matrix
Where M is the current object transform matrix and M' is the new version of it after rotation. This is the thing you are doing differently: you are using Euler angles rx, ry, rz instead of accumulating the rotations incrementally. You cannot do this with Euler angles in any sane and robust way! Even though many modern games and apps are still trying hard to do it (and failing, for years).
So what to do to get rid of Euler angles:
You must have a persistent/global/static matrix M per object
instead of a local instance per render, so you need to init it just once instead of clearing it on a per-frame basis.
On animation update, apply the operation you need,
so:
M*=rotation_around_z(angspeed*dt);
Where angspeed is your fan speed in [rad/second] or [deg/second] and dt is the time elapsed in [seconds]. For example, if you do this in a timer then dt is the timer interval. For variable timing you can measure the time elapsed (it is platform dependent; I usually use performance timers or RDTSC).
You can stack more operations on top of it (for example, your fan can also turn back and forth around the y axis to cover more area).
For object direct control (by keyboard,mouse or joystick) just add things like:
if (keys.get( 38)) { redraw=true; M*=translate_z(-pos_speed*dt); }
if (keys.get( 40)) { redraw=true; M*=translate_z(+pos_speed*dt); }
if (keys.get( 37)) { redraw=true; M*=rotation_around_y(-turn_speed*dt); }
if (keys.get( 39)) { redraw=true; M*=rotation_around_y(+turn_speed*dt); }
Where keys is my key map holding the on/off state for every key on the keyboard (so I can use more keys at once). This code just controls the object with the arrow keys. For more info on the subject see this related QA:
Computer Graphics: Moving in the world
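For illustration, a rough GLM version of this incremental update could look like the sketch below (the answer uses its own matrix class, so the structure and names here are assumed):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct FanObject
{
    glm::mat4 M = glm::mat4(1.0f); // persistent per-object transform, initialized once

    // Call on every animation update with the elapsed time dt in seconds.
    void update(float angspeed, float dt)
    {
        // M' = M * rotation_around_z(angspeed*dt): accumulate the spin incrementally
        // instead of rebuilding the matrix from Euler angles each frame.
        M = glm::rotate(M, angspeed * dt, glm::vec3(0.0f, 0.0f, 1.0f));
    }
};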
Preserve accuracy
With incremental changes there is a risk of losing precision due to floating point errors. So add a counter to your matrix class which counts how many times it has been changed (an incremental operation applied), and if some constant count is hit (for example 128 operations), normalize your matrix.
To do that you need to ensure the orthonormality of your matrix. So each axis vector X, Y, Z must be perpendicular to the other two and its length has to be one. I do it like this:
Choose the main axis which will keep its direction unchanged. I am choosing the Z axis as that is usually my main axis in my meshes (viewing direction, rotation axis, etc.), so just make this vector unit length: Z = Z/|Z|.
Exploit the cross product to compute the other two axes, so X = (+/-) Z x Y and Y = (+/-) Z x X, and normalize them too: X = X/|X| and Y = Y/|Y|. The (+/-) is there because I do not know your coordinate system conventions and the cross product can produce the opposite vector to your original direction, so if the direction is opposite, change the multiplication order or negate the result (this is done at coding time, not at runtime!).
Here example in C++ how my orthonormal normalization is done:
void reper::orto(int test)
{
double x[3],y[3],z[3];
if ((cnt>=_reper_max_cnt)||(test)) // here cnt is the operations counter and test force normalization regardless of it
{
use_rep(); // you can ignore this
_rep=1; _inv=0; // you can ignore this
axisx_get(x);
axisy_get(y);
axisz_get(z);
vector_one(z,z);
vector_mul(x,y,z); // x is perpendicular to y,z
vector_one(x,x);
vector_mul(y,z,x); // y is perpendicular to z,x
vector_one(y,y);
axisx_set(x);
axisy_set(y);
axisz_set(z);
cnt=0;
}
}
Where axis?_get/set(a) just gets/sets a as an axis from/to your matrix. vector_one(a,b) returns a = b/|b| and vector_mul(a,b,c) returns a = b x c.

Trouble Animating Quaternion Slerp

I am attempting to animate a slerp from q1 to q2 for my FPS camera. I have a target somewhere in my world and I want the camera to pan from its current axis to looking at my target. From what I understand the way to do this would be to calculate a quaternion representing my current (axis, rotation) and a second representing my final (axis, rotation) then every frame increment the amount I interpolate between the two from 0 to 1. Is this the correct idea?
What I don't understand is how to compute these beginning and end quaternions?
My camera is pretty standard and has the usual member variables:
glm::vec3 position,forward, up, yAxis, target;
glm::quat orientation;
Note:
= in this post represents mathematical equations, not assignments. (Sadly we have no math mode on Stack Overflow.)
If your camera already has a member quaternion which describes its rotation, I suppose you have this quaternion. If not, you can use the same technique to find it as well:
If you know your rotational axis vec3 r and your angle a then your quaternion is vec4 q = (cos(a/2), sin(a/2)*r) (and any multiple of it). Your rotated vector is then vec3 v' = q v inv(q).
I assume you want the camera to still point upwards; then you can split the rotation into two rotations, one around the global up axis (probably y) and one around the local horizontal axis of the camera (probably x).
So your rotation is:
vec3 v' = g l v inv(l) inv(g)
g = (cos(a/2), sin(a/2)*(0,1,0))
l = (cos(b/2), sin(b/2)*(1,0,0))
with the addition of
vec3 normal(viewDirection) = g l (0,0,1) inv(l) inv(g)
(because later you want to have your camera's z-axis point in your viewDirection), you should be able to solve the equations.
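Once q1 and q2 have been found this way, the animation itself is just a slerp with an interpolation parameter driven by time. A minimal GLM sketch (names assumed):
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// t runs from 0 to 1 over the duration of the pan; advance it each frame by
// dt / panDuration, clamp it to 1, and assign the result to the camera's
// orientation member.
glm::quat panOrientation(const glm::quat& q1, const glm::quat& q2, float t)
{
    return glm::slerp(glm::normalize(q1), glm::normalize(q2), t);
}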

Direction of rotation in GLM matrix, using quaternions

I am working on a 3D rendering setup (all math done with GLM for OpenGL), and it all works correctly, except for how I would prefer my transformations to work.
I create a matrix for each entity like so:
matrix = mat4(1);
vec3 scale = GetWorldScale();
vec3 pos = GetWorldPosition(); // Returns pos + parent->pos
quat rot = GetWorldRotationQuat(); // Returns parent->rot * rot
matrix = glm::translate(matrix, pos);
matrix *= mat4_cast(rot);
matrix = glm::scale(matrix, scale);
right = matrix[0].xyz;
up = matrix[1].xyz;
direction = matrix[2].xyz;
Using this, it generally works correctly, except that I'm not sure how to adjust part of it to my preference. That is, translation on the X-axis is flipped (e.g. left is positive; forward is also positive on the Z-axis, but I'm less concerned about that), and rotation on the Y-axis is flipped.
Looking at other code for this purpose, it seems that many negate what I've used for direction (for the camera). I've done that as well, and translation is correct, but all axes of rotation are the opposite of what's preferred (though rotation on X is the same whether direction is negated or not).
I'm not quite sure what I should do to correct this, except possibly negating the X-axis translation and Y-axis rotation before use, but I feel that isn't the best way. Thoughts?
I believe the problems come from the fact that you mix the order of transformations, especially translation and rotation. Each of these transformations (scale, translation, and rotation) define how to transform one coordinate system into another.
Let's go through an example: You have one child object c and its parent p. They each have translation t, rotation r, and scale s. To keep it simple, each of these are 4x4 matrices. Currently you do
matrix = p.t * c.t * p.r * c.r * p.s * c.s
So imagine the local coordinate system of the child that's transformed by this matrix (xyz axes of length 1). Points are multiplied from the right, so we have to read it right to left. First, the coordinate system gets scaled by the child, then scaled by the parent. Then it gets rotated by the child. Then by the parent. And now the child's translation is applied. That means the child translation is applied in a rotated coordinate system. Since you already applied both the child's and the parent's rotations, it's rotated into the parent's coordinate system. So the coordinates of the child are interpreted as if they were parent coordinates.
So what you should be doing: In your method, compute the object matrix m = c.t * c.r * c.s. Then the world matrix of your object is defined as wm = pm * m, where the parent matrix pm is the world matrix of the parent. That way you'll end up with a world matrix:
c.wm = (c.pm) * (c.m) = (p.t * p.r * p.s) * (c.t * c.r * c.s)
And that means that the child's translation is in the coordinate system of the child, and the parent's translation is in the coordinate system of the parent.
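A hedged GLM sketch of that structure (the entity interface is assumed, not taken from the question's code):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

// m = c.t * c.r * c.s : only this entity's local translation, rotation and scale.
glm::mat4 localMatrix(const glm::vec3& pos, const glm::quat& rot, const glm::vec3& scale)
{
    glm::mat4 m = glm::translate(glm::mat4(1.0f), pos);
    m *= glm::mat4_cast(rot);
    m = glm::scale(m, scale);
    return m;
}

// wm = pm * m : the child's translation is interpreted in the parent's
// coordinate system, not in an already-rotated world frame.
glm::mat4 worldMatrix(const glm::mat4& parentWorld, const glm::mat4& local)
{
    return parentWorld * local;
}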