Trouble Animating Quaternion Slerp - C++

I am attempting to animate a slerp from q1 to q2 for my FPS camera. I have a target somewhere in my world, and I want the camera to pan from its current axis to looking at my target. From what I understand, the way to do this is to compute a quaternion representing my current (axis, rotation) and a second representing my final (axis, rotation), then every frame increment the amount I interpolate between the two from 0 to 1. Is this the correct idea?
What I don't understand is how to compute these beginning and end quaternions.
My camera is pretty standard and has the usual member variables:
glm::vec3 position,forward, up, yAxis, target;
glm::quat orientation;

Note:
= in this post represents mathematical equations, not assignments. (Sadly we have no math mode on Stack Overflow.)
If your camera already has a member quaternion describing its rotation, I suppose you already have this quaternion. If not, you can use the same technique to find it as well:
If you know your rotational axis vec3 r and your angle a then your quaternion is vec4 q = (cos(a/2), sin(a/2)*r) (and any multiple of it). Your rotated vector is then vec3 v' = q v inv(q).
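For reference, here is the same construction as a minimal GLM sketch (the function name is mine; glm::angleAxis expects the angle in radians and a unit axis):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Rotate v by angle a (radians) around axis r, i.e. v' = q v inv(q)
// with q = (cos(a/2), sin(a/2)*r).
glm::vec3 rotateByAxisAngle(const glm::vec3& v, const glm::vec3& r, float a)
{
    glm::quat q = glm::angleAxis(a, glm::normalize(r)); // unit axis required
    return q * v; // GLM overloads operator* to perform the q v inv(q) sandwich
}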
I assume you want the camera to keep pointing upwards; then you can split the rotation into two rotations, one around the global up axis (probably y) and one around the local horizontal axis of the camera (probably x).
So your rotation is:
vec3 v' = g l v inv(l) inv(g)
g = (cos(a/2), sin(a/2)*(0,1,0))
l = (cos(b/2), sin(b/2)*(1,0,0))
with the addition of
vec3 normalize(viewDirection) = g l (0,0,1) inv(l) inv(g)
(because later you want your camera's z-axis to point in your viewDirection) you should be able to solve the equations.
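To answer the animation part of the question directly, here is a hedged sketch of the per-frame slerp the question describes. It assumes GLM 0.9.9+ for glm::quatLookAt, and all names (beginPan, updatePan, panSpeed) are illustrative, not from the original post:

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp> // glm::slerp; glm::quatLookAt needs GLM 0.9.9+

glm::quat startQ;      // orientation when the pan begins
glm::quat endQ;        // orientation that looks at the target
float     panT = 0.0f; // interpolation parameter, runs from 0 to 1

void beginPan(const glm::quat& current, const glm::vec3& position,
              const glm::vec3& target, const glm::vec3& up)
{
    startQ = current;
    endQ   = glm::quatLookAt(glm::normalize(target - position), up);
    panT   = 0.0f;
}

// Call once per frame; returns the orientation to use this frame.
glm::quat updatePan(float dt, float panSpeed)
{
    panT = glm::min(panT + panSpeed * dt, 1.0f);
    return glm::slerp(startQ, endQ, panT); // shortest-arc interpolation
}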

Related

Placing objects perpendicularly on the surface of a sphere that has a wavy surface

So I have a sphere. It rotates around a given axis and changes its surface by a sin * cos function.
I also have a bunch of tracticoids at fixed points on the sphere. These objects follow the sphere while it moves (including the rotation and the change of the surface), but I can't figure out how to make them always perpendicular to the sphere. I have the points where each tracticoid connects to the surface of the sphere and its normal vector. The tracticoids are originally oriented along the z axis, so I tried to align their axis with the given normal vector, but I just can't make it work.
This is where I calculate the M transformation matrix and its inverse:
virtual void SetModelingTransform(mat4& M, mat4& Minv, vec3 n) {
    M = ScaleMatrix(scale) * RotationMatrix(rotationAngle, rotationAxis) * TranslateMatrix(translation);
    Minv = TranslateMatrix(-translation) * RotationMatrix(-rotationAngle, rotationAxis) * ScaleMatrix(vec3(1 / scale.x, 1 / scale.y, 1 / scale.z));
}
In my draw function I set the values for the transformation.
_M and _Minv are the matrices of the sphere, so the tracticoids follow the sphere, but when I tried to use a rotation matrix, the tracticoids started moving on the surface of the sphere.
_n is the normal vector that the tracticoid should follow.
void Draw(RenderState state, float t, mat4 _M, mat4 _Minv, vec3 _n) {
    SetModelingTransform(M, Minv, _n);
    if (!sphere) {
        state.M = M * _M * RotationMatrix(_n.z, _n);
        state.Minv = Minv * _Minv * RotationMatrix(-_n.z, _n);
    }
    else {
        state.M = M;
        state.Minv = Minv;
    }
    ...
}
You said your sphere has an axis of rotation, so you should have a vector a aligned with this axis.
Let P = P(t) be the point on the sphere at which your object is positioned. You should also have a vector n = n(t) perpendicular to the surface of the sphere at point P=P(t) for each time-moment t. All vectors are interpreted as column-vectors, i.e. 3 x 1 matrices.
Then, form the matrix
U[][1] = cross(a, n(t)) / norm(cross(a, n(t)))
U[][3] = n(t) / norm(n(t))
U[][2] = cross(U[][3], U[][1])
where for each j = 1, 2, 3, U[][j] is a 3 x 1 column vector. Then
U(t) = [ U[][1], U[][2], U[][3] ]
is a 3 x 3 orthogonal matrix with determinant 1 (i.e. it is a 3D rotation around the origin).
For each moment of time t calculate the matrix
M(t) = U(t) * U(0)^T
where ^T is the matrix transposition.
The final transformation that rotates your object from its original position to its position at time t should be
X(t) = P(t) + M(t)*(X - P(0))
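A small GLM sketch of this construction, assuming the rotation axis a, the normals n(0) and n(t), and the positions P(0) and P(t) are available (the function names are mine):

#include <glm/glm.hpp>

// Orthonormal frame with the columns U[][1], U[][2], U[][3] described above.
glm::mat3 frame(const glm::vec3& a, const glm::vec3& n)
{
    glm::vec3 u1 = glm::normalize(glm::cross(a, n)); // U[][1]
    glm::vec3 u3 = glm::normalize(n);                // U[][3]
    glm::vec3 u2 = glm::cross(u3, u1);               // U[][2]
    return glm::mat3(u1, u2, u3);                    // columns of U
}

// X(t) = P(t) + M(t) * (X - P(0)), with M(t) = U(t) * U(0)^T.
glm::vec3 transformPoint(const glm::vec3& a,
                         const glm::vec3& n0, const glm::vec3& nt,
                         const glm::vec3& P0, const glm::vec3& Pt,
                         const glm::vec3& X)
{
    glm::mat3 M = frame(a, nt) * glm::transpose(frame(a, n0));
    return Pt + M * (X - P0);
}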
I'm not sure if I got your explanations, but here I go.
You have a sphere with a wavy surface. This means that each point on the surface changes its distance to the center of the sphere, like a piece of wood on a wave in the sea changes its distance to the bottom of the sea at that position.
We can tell that the radius R of the sphere varies per point and per time.
Now you have a tracticoid (what's a tracticoid?). I'll take it as some object floating on the wave, and following the sphere movements.
Then it seems you're asking how to make the tracticoid follow both the wavy surface and the sphere's movements.
Well, if we define each movement ("transformation") by a 4x4 matrix, it all reduces to combining those matrices in the proper order.
There are some good OpenGL tutorials that teach you about transformations, and how to combine them. See, for example, learnopengl.com.
For your case, there are several transformations to use.
The sphere spins. You need a rotation matrix, let's call it MSR (matrix sphere rotation) and an axis of rotation, ASR. If the sphere also translates then also a MST is needed.
The surface waves, with some function f(lat, long, time) which calculates for those parameters the (signed) increment of the radius. So, Ri = R + f(la,lo,ti)
For the tracticoid, I guess you have some triangles that define a tracticoid. I also guess those triangles are expressed in a "local" coordinate system whose origin is the center of the tracticoid. Your issue comes when you have to position and rotate the tracticoid, right?
You have two options. The first is to rotate the tracticoid to make it aim perpendicular to the sphere and then translate it to follow the sphere rotation. While perfectly correct mathematically, I find this option somewhat complicated.
The best option is to make the tracticoid rotate and translate exactly as the sphere does, as if both shared the same origin, the center of the sphere, and then translate it to its current position.
The first part is quite easy: the matrix that defines such a transformation is M = MST * MSR if you use the typical OpenGL axis convention; otherwise you need to swap their order. This M is the common part for all objects (sphere & tracticoids).
The second part requires a vector Vn that defines the point on the surface, relative to the center of the sphere. You should be able to calculate it from the parameters latitude, longitude and the R obtained by f() above, plus size/2 of the tracticoid (the distance from its center to the point where it touches the wave). Use the components of Vn to build a translation matrix MTT.
And now, just get the resultant transformation to use with every vertex of the tracticoid: Mt = MTT * M = MTT * MST * MSR
To render the scene you need two other matrices, for the camera (MV) and for the projection (MP). While Mt is per tracticoid, MV and MP are the same for all objects, including the sphere itself.
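As a rough GLM sketch of the matrix chain above (the names MSR/MST/MTT follow the answer's notation; the inputs are hypothetical):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Mt = MTT * MST * MSR, applied to every vertex of the tracticoid.
glm::mat4 tracticoidTransform(float spinAngle, const glm::vec3& ASR,
                              const glm::vec3& spherePos, const glm::vec3& Vn)
{
    glm::mat4 MSR = glm::rotate(glm::mat4(1.0f), spinAngle, ASR); // sphere rotation
    glm::mat4 MST = glm::translate(glm::mat4(1.0f), spherePos);   // sphere translation
    glm::mat4 MTT = glm::translate(glm::mat4(1.0f), Vn);          // offset to surface point
    return MTT * MST * MSR;
}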

What is the correct order of Transformations when calculating Matrices in OpenGL?

I am following the tutorials at LearnOpenGL.com and I am confused about the order of Matrices.
The Transformations chapter tells:
Matrix multiplication is not commutative, which means their order is important. When multiplying matrices the right-most matrix is first multiplied with the vector so you should read the multiplications from right to left. It is advised to first do scaling operations, then rotations and lastly translations when combining matrices otherwise they might (negatively) affect each other. For example, if you would first do a translation and then scale, the translation vector would also scale!
So if I am not wrong, the order is Translate * Rotate * Scale * vector_to_transform.
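In GLM, that reading order looks like this (a minimal sketch; the parameter names are mine):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Reading right to left: scale first, then rotate, then translate.
glm::mat4 makeModel(const glm::vec3& pos, float angleRad,
                    const glm::vec3& axis, const glm::vec3& size)
{
    return glm::translate(glm::mat4(1.0f), pos)         // T (applied last)
         * glm::rotate(glm::mat4(1.0f), angleRad, axis) // R
         * glm::scale(glm::mat4(1.0f), size);           // S (applied first)
}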
But immediately in the next Chapter, when calculating the LookAt matrix, the multiplication order is flipped. Here is the code snippet from the website:
// Custom implementation of the LookAt function
glm::mat4 calculate_lookAt_matrix(glm::vec3 position, glm::vec3 target, glm::vec3 worldUp)
{
    // 1. Position = known
    // 2. Calculate cameraDirection
    glm::vec3 zaxis = glm::normalize(position - target);
    // 3. Get positive right axis vector
    glm::vec3 xaxis = glm::normalize(glm::cross(glm::normalize(worldUp), zaxis));
    // 4. Calculate camera up vector
    glm::vec3 yaxis = glm::cross(zaxis, xaxis);

    // Create translation and rotation matrix
    // In glm we access elements as mat[col][row] due to column-major layout
    glm::mat4 translation = glm::mat4(1.0f); // Identity matrix by default
    translation[3][0] = -position.x; // Fourth column, first row
    translation[3][1] = -position.y;
    translation[3][2] = -position.z;
    glm::mat4 rotation = glm::mat4(1.0f);
    rotation[0][0] = xaxis.x; // First column, first row
    rotation[1][0] = xaxis.y;
    rotation[2][0] = xaxis.z;
    rotation[0][1] = yaxis.x; // First column, second row
    rotation[1][1] = yaxis.y;
    rotation[2][1] = yaxis.z;
    rotation[0][2] = zaxis.x; // First column, third row
    rotation[1][2] = zaxis.y;
    rotation[2][2] = zaxis.z;

    // Return lookAt matrix as combination of translation and rotation matrix
    return rotation * translation; // Remember to read from right to left (first translation then rotation)
}
At the end of the code snippet, the matrix is calculated as rotation * translation, even though the matrix is going to be multiplied as
gl_Position = projection * lookAt * model * vec4(vertexPosition, 1.0);
since column-major matrices must be pre-multiplied with the vector.
Please help me understand this.
LearnOpenGL unfortunately doesn't explain where the Camera transform comes from.
You can see the Camera transform as an inverse model transform.
3D math doesn't care if you move the camera towards the Objects or the Objects towards the Camera.
Also, if your objects are already scaled to their proper "world space" size, you don't need the camera to scale them. The scaling for the "intrinsic camera parameters" is dealt with in the projection matrix (scale for aspect ratio and field of view), which is applied after the camera transform.
So we move the object points towards the "camera" instead of the camera towards the points. As I said you would not leave the scaling in the Camera Matrix, since you only want to orient the objects in front of the Camera.
Placing the Camera as Model in the world space would be:
M = TR (leave out S for above reasons)
Then you invert the camera transform:
C = M^-1           | M = T R
  = (T R)^-1
  = R^-1 * T^-1    | inverse of a matrix product: flip the order and invert each factor
Let's assume R(angle) is the matrix that rotates by angle `angle` and T(t) is the matrix that translates by vector t; then:
C = R(angle)^-1 * T(t)^-1
  = R(angle)^T * T(-t)
Which is exactly what you return in your lookAt method. The basis vectors of R are the column vectors of your new coordinate frame, but transposed (so you have them as row vectors). That's because the camera frame vectors are orthogonal and unit length, so the resulting matrix is orthonormal, which means its inverse is its transpose. And the inverse of the translation matrix T(t) is T(-t) (the negated eye position of the camera).
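A small GLM sketch of that derivation, assuming the camera's rotation-only matrix R and its position are given (the function name is mine):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// C = (T R)^-1 = R^T * T(-camPos); R must be a pure rotation matrix.
glm::mat4 viewFromCameraModel(const glm::mat4& R, const glm::vec3& camPos)
{
    return glm::transpose(R) * glm::translate(glm::mat4(1.0f), -camPos);
}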
Hope my explanation clarifies more than it confuses :-)
Okay! After reading through the entire chapter again, I realized I missed a crucial detail: the view matrix usually does not scale the objects. It's just a matrix that rotates and translates the model matrix in such a way as to simulate an eye into the world. This chapter of LearnOpenGL.com has a block explaining how to combine matrices, and it shows how to combine a translation and a rotation matrix, which is how the lookAt function is implemented.

How to rotate a vector in opengl?

I want to rotate my object using glm::rotate, but it can only rotate around the X, Y, Z axes.
For example, with Model = vec3(5,0,0),
if I use Model = glm::rotate(Model, glm::radians(180), glm::vec3(0, 1, 0));
it becomes vec3(-5,0,0).
I want an API so I can rotate vec3(0,4,0) by 180 degrees and have the Model move to vec3(3,0,0).
Is there any API I can use?
Yes, OpenGL uses 4x4 homogeneous transform matrices internally. But the glRotate API uses 4 parameters instead of 3:
glMatrixMode(GL_MODELVIEW);
glRotatef(angle,x,y,z);
It will rotate the selected matrix around the point (0,0,0) and the axis [(0,0,0),(x,y,z)] by angle `angle` [deg]. If you need to rotate around a specific point (x0,y0,z0), then you should also translate:
glMatrixMode(GL_MODELVIEW);
glTranslatef(+x0,+y0,+z0);
glRotatef(angle,x,y,z);
glTranslatef(-x0,-y0,-z0);
This is the old API, however, and when using modern GL you need to do the matrix math on your own (for example with GLM), as there is no matrix stack anymore. GLM should have the same functionality as glRotate; just find the function which mimics it (it looks like glm::rotate is more or less the same). If not, you can still do it on your own using the Rodrigues rotation formula.
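For completeness, the Rodrigues rotation formula mentioned above is easy to write with GLM (a sketch; k must be a unit axis, a is in radians):

#include <cmath>
#include <glm/glm.hpp>

// Rodrigues: v' = v cos(a) + (k x v) sin(a) + k (k.v) (1 - cos(a))
glm::vec3 rodrigues(const glm::vec3& v, const glm::vec3& k, float a)
{
    return v * std::cos(a)
         + glm::cross(k, v) * std::sin(a)
         + k * glm::dot(k, v) * (1.0f - std::cos(a));
}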
Now your examples make no sense to me:
(5,0,0) -> glm::rotate (0,1,0) -> (-5,0,0)
implies a rotation around the y axis by 180 degrees? Well, I can see the axis, but I see no angle anywhere. The second (your desired API) is even more questionable:
(0,4,0) -> wanted API -> (3,0,0)
Vectors should have the same magnitude after rotation, which is clearly not the case here (unless you want to rotate around some point other than (0,0,0), which is also nowhere mentioned). Also, after a rotation you usually leak some of the magnitude into the other axes; your y and z staying zero is true only in special cases (rotations by multiples of 90 deg).
So clearly you forgot to mention vital info or do not know how rotation works.
Now, what do you mean by rotating on X, Y, Z arrows? Do you want incremental rotations on key hits? Or do you have GUI-like arrows rendered in your scene and want to select and drag them to rotate?
[Edit1] new example
I want a API so I can rotate vec3(0,4,0) by 180 deg and result
will be vec3(3,0,0)
This is doable only if you are talking about points, not vectors. So you need a center of rotation, an axis of rotation, and an angle.
// knowns
glm::vec3 p0 = glm::vec3(0.0f, 4.0f, 0.0f); // original point
glm::vec3 p1 = glm::vec3(3.0f, 0.0f, 0.0f); // wanted point
float angle = glm::radians(180.0f);         // deg -> rad
// needed for rotation
glm::vec3 center = 0.5f * (p0 + p1); // vec3(1.5, 2.0, 0.0) -- mid point works because angle = 180 deg
glm::vec3 axis = glm::cross(p1 - p0, glm::vec3(0, 0, 1)); // any vector perpendicular to p1-p0; if p1-p0 is parallel to (0,0,1) then use (0,1,0) instead
// construct transform matrix
glm::mat4 m = glm::mat4(1.0f); // unit matrix
m = glm::translate(m, +center);
m = glm::rotate(m, angle, glm::normalize(axis));
m = glm::translate(m, -center); // here m is your rotation matrix around `center`
// use transform matrix
p1 = glm::vec3(m * glm::vec4(p0, 1.0f)); // and finally how to rotate any point p0 into p1 ... in OpenGL notation
I do not code in GLM much, so there might be some little differences.

How to rotate 3D camera with glm

So, I have a Camera class, which has vectors forward, up and position. I can move the camera by changing position, and I'm calculating its matrix with this:
glm::mat4 view = glm::lookAt(camera->getPos(),
                             camera->getTarget(), // Calculates forward's end point, starting from pos
                             camera->getUp());
My question is, how can I rotate the camera without getting gimbal lock? I haven't found any good info about GLM quaternions, or even quaternions in 3D programming.
glm makes quaternions relatively easy. You can initialize a quaternion with a glm::vec3 containing your Euler angles, e.g. glm::fquat(glm::vec3(x,y,z)). You can rotate a quaternion by another quaternion by multiplication (r = r1 * r2), and this does so without gimbal lock. To use a quaternion to generate your matrix, use glm::mat4_cast(yourQuat), which turns it into a rotation matrix.
So, assuming you are making a 3D app, store your orientation in a quaternion and your position in a vec4. Then, to generate your view matrix, you could take vec4(0,0,1,1), multiply it against the matrix generated by your quaternion, and add the result to the position, which will give you the target. The up vector can be obtained by multiplying the quaternion's matrix with vec4(0,1,0,1). Tell me if you have any more questions.
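A sketch of that recipe (the function name is mine; I pass direction vectors with w = 0, since the quaternion's matrix is rotation-only and the translation component doesn't matter here):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build the view matrix from a quaternion orientation plus a position.
glm::mat4 viewFromOrientation(const glm::quat& orientation, const glm::vec3& position)
{
    glm::mat4 rot    = glm::mat4_cast(orientation);
    glm::vec3 target = position + glm::vec3(rot * glm::vec4(0, 0, 1, 0)); // rotated forward
    glm::vec3 up     = glm::vec3(rot * glm::vec4(0, 1, 0, 0));            // rotated up
    return glm::lookAt(position, target, up);
}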
For your two other questions, I'm assuming you are using OpenGL and your Z axis is the forward axis (positive Z moves away from the user).
1). To transform your forward vector, you can rotate about your Y and X axes on your quaternion, e.g. glm::fquat(glm::vec3(rotationUpAndDown, rotationLeftAndRight, 0)), and multiply that into your orientation quaternion.
2). If you want to roll, find which component your forward axis is on. Since you appear to be using OpenGL, this axis is most likely your positive Z axis. So if you want to roll, use glm::quat(glm::vec3(0,0,rollAmt)) and multiply that into your orientation quaternion: orientation = rollQuat * orientation.
Note: here is a function that might help you; I used to use this for my cameras. It makes a quat that transforms one vector into another, e.g. one forward vector to another.
// Creates a quat that turns U to V (cvec3 is presumably a const vec3 reference typedef)
glm::quat CreateQuatFromTwoVectors(cvec3 U, cvec3 V)
{
    cvec3 w = glm::cross(U, V);
    glm::quat q = glm::quat(glm::dot(U, V), w.x, w.y, w.z);
    // |q| = sqrt(dot(U,V)^2 + |cross(U,V)|^2) = |U|*|V|; adding it to the w
    // component halves the rotation angle, so normalizing yields the rotation
    // that carries U onto V.
    q.w += sqrt(q.x*q.x + q.w*q.w + q.y*q.y + q.z*q.z);
    return glm::normalize(q);
}
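Hypothetical usage of that helper for a camera (orientation, target and position are assumed to exist as in the question):

// Turn the camera's current forward toward a target.
glm::vec3 forward  = orientation * glm::vec3(0, 0, 1); // current forward
glm::vec3 toTarget = glm::normalize(target - position);
orientation = CreateQuatFromTwoVectors(forward, toTarget) * orientation;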

An inconsistency in my understanding of the GLM lookAt function

Firstly, if you would like an explanation of the GLM lookAt algorithm, please look at the answer provided on this question: https://stackoverflow.com/a/19740748/1525061
mat4x4 lookAt(vec3 const & eye, vec3 const & center, vec3 const & up)
{
    vec3 f = normalize(center - eye);
    vec3 u = normalize(up);
    vec3 s = normalize(cross(f, u));
    u = cross(s, f);

    mat4x4 Result(1);
    Result[0][0] = s.x;
    Result[1][0] = s.y;
    Result[2][0] = s.z;
    Result[0][1] = u.x;
    Result[1][1] = u.y;
    Result[2][1] = u.z;
    Result[0][2] = -f.x;
    Result[1][2] = -f.y;
    Result[2][2] = -f.z;
    Result[3][0] = -dot(s, eye);
    Result[3][1] = -dot(u, eye);
    Result[3][2] = dot(f, eye);
    return Result;
}
Now I'm going to tell you why I seem to be having a conceptual issue with this algorithm. There are two parts to this view matrix, the translation and the rotation. The translation does the correct inverse transformation, bringing the camera position to the origin instead of the origin to the camera position. Similarly, you expect the rotation that the camera defines to be inverted before being put into this view matrix as well. I can't see that happening here; that's my issue.
Consider the forward vector: this is where your camera looks. Consequently, this forward vector needs to be mapped to the -Z axis, which is the forward direction used by OpenGL. The way this view matrix is supposed to work is by creating an orthonormal basis in the columns of the view matrix, so when you multiply a vertex on the right hand side of this matrix, you are essentially just converting its coordinates to those of different axes.
When I play the rotation that results from this transformation in my mind, I see a rotation that is not the inverse rotation of the camera, as is supposed to happen; rather, I see the non-inverse. That is, instead of finding the camera forward being mapped to the -Z axis, I find the -Z axis being mapped to the camera forward.
If you don't understand what I mean, consider a 2D example of the same type of thing that is happening here. Let's say the forward vector is (sqrt(2)/2, sqrt(2)/2), or sin/cos of 45 degrees, and let's also say a side vector for this 2D camera is sin/cos of -45 degrees. We want to map this forward vector to (0,1), the positive Y axis. The positive Y axis can be thought of as the analogue of the -Z axis in OpenGL space. Let's consider a vertex in the same direction as our forward vector, namely (1,1). By the logic of glm::lookAt, we should be able to map (1,1) to the Y axis by using a 2x2 matrix that consists of the forward vector in the first column and the side vector in the second column. This is an equivalent calculation: http://www.wolframalpha.com/input/?i=%28sqr%282%29%2F2+%2C+sqr%282%29%2F2%29++1+%2B+%28sqr%282%29%2F2%2C+-sqr%282%29%2F2+%29+1.
Note that you don't get your (1,1) vertex mapped to the positive Y axis as you wanted; instead it's mapped to the positive X axis. You might also consider what happens to a vertex on the positive Y axis if you apply this transformation. Sure enough, it is transformed to the forward vector.
Therefore it seems like something very fishy is going on with the GLM algorithm. However, I doubt this algorithm is incorrect since it is so popular. What am I missing?
Have a look at GLU source code in Mesa: http://cgit.freedesktop.org/mesa/glu/tree/src/libutil/project.c
First in the implementation of gluPerspective, notice the -1 is using the indices [2][3] and the -2 * zNear * zFar / (zFar - zNear) is using [3][2]. This implies that the indexing is [column][row].
Now in the implementation of gluLookAt, the first row is set to side, the next one to up and the final one to -forward. This gives you the rotation matrix which is post-multiplied by the translation that brings the eye to the origin.
GLM seems to be using the same [column][row] indexing (from the code). And the piece you just posted for lookAt is consistent with the more standard gluLookAt (including the translational part). So at least GLM and GLU agree.
Let's then derive the full construction step by step, denoting by C the center position and by E the eye position.
1. Move the whole scene to put the eye position at the origin, i.e. apply a translation of -E.
2. Rotate the scene to align the axes of the camera with the standard (x, y, z) axes.
2.1 Compute a positive orthonormal basis for the camera:
f = normalize(C - E) (pointing towards the center)
s = normalize(f x u) (pointing to the right side of the eye)
u = s x f (pointing up)
with this, (s, u, -f) is a positive orthonormal basis for the camera.
2.2 Find the rotation matrix R that maps the (s, u, -f) axes to the standard ones (x, y, z). The inverse rotation matrix R^-1 does the opposite and aligns the standard axes with the camera ones, which by definition means that:
       ( sx  ux  -fx )
R^-1 = ( sy  uy  -fy )
       ( sz  uz  -fz )
Since R^-1 = R^T, we have:
    (  sx  sy  sz )
R = (  ux  uy  uz )
    ( -fx -fy -fz )
3. Combine the translation with the rotation. A point M is mapped by the "look at" transform to R (M - E) = R M - R E = R M + t, with t = -R E. So the final 4x4 transform matrix for "look at" is indeed:
    (  sx  sy  sz  tx )   (  sx  sy  sz  -s.E )
L = (  ux  uy  uz  ty ) = (  ux  uy  uz  -u.E )
    ( -fx -fy -fz  tz )   ( -fx -fy -fz   f.E )
    (   0   0   0   1 )   (   0   0   0     1 )
So when you write:
That is, instead of finding the camera forward being mapped to the -Z
axis, I find the -Z axis being mapped to the camera forward.
it is very surprising, because by construction, the "look at" transform maps the camera forward axis to the -z axis. This "look at" transform should be thought of as moving the whole scene to align the camera with the standard origin/axes; that's really what it does.
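Here is a quick numeric check of that claim with GLM (arbitrary values; it prints approximately 0 0 -1):

#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// The lookAt matrix maps a point one unit in front of the eye to (0, 0, -1).
int main()
{
    glm::vec3 eye(1, 2, 3), center(4, 0, -2), up(0, 1, 0);
    glm::vec3 f = glm::normalize(center - eye);
    glm::vec4 p = glm::lookAt(eye, center, up) * glm::vec4(eye + f, 1.0f);
    std::printf("%f %f %f\n", p.x, p.y, p.z); // ~ 0.000000 0.000000 -1.000000
}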
Using your 2D example:
By using the logic of GLM.lookAt, we should be able to map (1,1) to the Y
axis by using a 2x2 matrix that consists of the forward vector in the
first column and the side vector in the second column.
That's the opposite: following the construction I described, you need a 2x2 matrix with the forward and side vectors as rows, not columns, to map (1, 1) and the other vector to the y and x axes. To use the definition of the matrix coefficients, you need the images of the standard basis vectors under your transform; these directly give the columns of the matrix. But since what you are looking for is the opposite (mapping your vectors to the standard basis vectors), you have to invert the transformation (transpose it, since it's a rotation), and your reference vectors then become rows instead of columns.
This question might give some further insight into your fishy issue:
glm::lookAt vertical camera flips when z <= 0
The answer might be of interest to you.