glm quaternion rotation merge - opengl

I am currently trying to figure out how quaternions multiply and concatenate with each other in GLM for OpenGL, but every thread I find only covers a single quaternion rotation. Basically, how can I combine quaternions so that all the rotations concatenate? For example:
glm::quat quaternions[4]; // how to merge all 4 quaternions?
I have tried 2 approaches so far:
// approach 1:
glm::quat quaternions[4];
glm::quat q = glm::quat(glm::vec3(0));
q *= quaternions[0];
q *= quaternions[1];
q *= quaternions[2];
q *= quaternions[3];
glm::mat4 matrix = glm::toMat4(q);
// approach 2:
glm::quat quaternions[4];
glm::mat4 matrix = glm::mat4(1.0f);
matrix = matrix * glm::toMat4(quaternions[0]);
matrix = matrix * glm::toMat4(quaternions[1]);
matrix = matrix * glm::toMat4(quaternions[2]);
matrix = matrix * glm::toMat4(quaternions[3]);
Neither of these approaches seems to give me the results I am expecting.
Edit: I should add that I was trying to skin a COLLADA model using Assimp and GLM. Approaches 1 and 2 both give me the exact result I am looking for, so either should work.

After 3 days of more trial and error, I finally figured out what was going on. It turns out some of my bones had a "roll" set in Blender, which was screwing things up. Upside-down bones were also a problem in my mesh. So for anyone who is having problems with an Assimp-loaded animation:
Check your bind pose first.
Make sure you don't have upside-down bones; these may cause problems.
Check all bone information (in my case it was the "roll" causing problems).
So nothing was wrong with my quaternion code...
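For reference, a minimal sketch of concatenating an array of quaternions in GLM (the helper name is mine); as with matrices, the multiplication order matters:
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp> // glm::quat, glm::mat4_cast

// Concatenate an array of quaternions into a single rotation matrix.
glm::mat4 combineRotations(const glm::quat* quats, int count)
{
    glm::quat q = glm::quat(1.0f, 0.0f, 0.0f, 0.0f); // identity (w, x, y, z)
    for (int i = 0; i < count; ++i)
        q = q * quats[i]; // right-multiply: each new rotation is applied in the local frame
    return glm::mat4_cast(q); // equivalent to glm::toMat4
}
Using q = quats[i] * q instead would apply each new rotation in the world frame; which one you want depends on how the source rotations are defined.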

Related

What is the correct order of Transformations when calculating Matrices in OpenGL?

I am following the tutorials at LearnOpenGL.com and I am confused about the order of Matrices.
The Transformations chapter says:
Matrix multiplication is not commutative, which means their order is important. When multiplying matrices the right-most matrix is first multiplied with the vector so you should read the multiplications from right to left. It is advised to first do scaling operations, then rotations and lastly translations when combining matrices otherwise they might (negatively) affect each other. For example, if you would first do a translation and then scale, the translation vector would also scale!
So if I am not wrong, the order is Translate * Rotate * Scale * vector_to_transform.
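For example, in GLM that composition would be built up like this (a quick sketch, values made up):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 model = glm::mat4(1.0f);
model = glm::translate(model, glm::vec3(1.0f, 2.0f, 0.0f));                   // T (applied last to the vector)
model = glm::rotate(model, glm::radians(45.0f), glm::vec3(0.0f, 1.0f, 0.0f)); // R
model = glm::scale(model, glm::vec3(0.5f));                                   // S (applied first to the vector)
// model == T * R * S, so a vertex is scaled, then rotated, then translated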
But immediately in the next Chapter, when calculating the LookAt matrix, the multiplication order is flipped. Here is the code snippet from the website:
// Custom implementation of the LookAt function
glm::mat4 calculate_lookAt_matrix(glm::vec3 position, glm::vec3 target, glm::vec3 worldUp)
{
    // 1. Position = known
    // 2. Calculate cameraDirection
    glm::vec3 zaxis = glm::normalize(position - target);
    // 3. Get positive right axis vector
    glm::vec3 xaxis = glm::normalize(glm::cross(glm::normalize(worldUp), zaxis));
    // 4. Calculate camera up vector
    glm::vec3 yaxis = glm::cross(zaxis, xaxis);

    // Create translation and rotation matrix
    // In glm we access elements as mat[col][row] due to column-major layout
    glm::mat4 translation = glm::mat4(1.0f); // Identity matrix by default
    translation[3][0] = -position.x; // Fourth column, first row
    translation[3][1] = -position.y;
    translation[3][2] = -position.z;
    glm::mat4 rotation = glm::mat4(1.0f);
    rotation[0][0] = xaxis.x; // First column, first row
    rotation[1][0] = xaxis.y;
    rotation[2][0] = xaxis.z;
    rotation[0][1] = yaxis.x; // First column, second row
    rotation[1][1] = yaxis.y;
    rotation[2][1] = yaxis.z;
    rotation[0][2] = zaxis.x; // First column, third row
    rotation[1][2] = zaxis.y;
    rotation[2][2] = zaxis.z;

    // Return lookAt matrix as combination of translation and rotation matrix
    return rotation * translation; // Remember to read from right to left (first translation then rotation)
}
At the end of the code snippet, the matrix is calculated as rotation * translation, even though the matrix is going to be multiplied as,
gl_position = projection * lookAt * model * vec4(vertexPosition, 1.0);
since column-major matrices must be pre-multiplied with the vector.
Please help me understand this.
LearnOpenGL unfortunately doesn't explain where the Camera transform comes from.
You can see the Camera transform as an inverse model transform.
3D math doesn't care if you move the camera towards the Objects or the Objects towards the Camera.
Also, if your objects are already scaled to have the proper "world space" size, you don't need the camera to scale them. The scaling for the "intrinsic camera parameters" (aspect ratio and field of view) is dealt with in the projection matrix, which is applied after the camera transform.
So we move the object points towards the "camera" instead of the camera towards the points. As I said, you would not leave any scaling in the camera matrix, since you only want to orient the objects in front of the camera.
Placing the Camera as Model in the world space would be:
M = TR (leaving out S for the reasons above)
Then you invert the camera transform:
C = M^-1        | with M = TR
  = (TR)^-1
  = R^-1 * T^-1 | inverse of a matrix product: flip the order and invert the factors
Let's assume R(angle) is the matrix that rotates by angle angle and T(t) is the matrix that translates by vector t, then:
  = R(angle)^-1 * T(t)^-1
  = R(angle)^T * T(-t)
Which is exactly what you return in your lookAt method. The basis vectors of R are set up from the column vectors of your new coordinate frame, but transposed (so you have them as row vectors). That's because the camera frame vectors are orthogonal and unit length, so the resulting matrix is orthonormal, which means its inverse is its transpose. And the inverse of the translation matrix T(t) is T(-t) (the negated eye position of the camera).
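As a small GLM sketch of that relation (the function and parameter names are mine; R is assumed to hold the camera's orthonormal rotation):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 viewFromCamera(const glm::mat4& R, const glm::vec3& eye)
{
    glm::mat4 T = glm::translate(glm::mat4(1.0f), eye); // camera's world position
    glm::mat4 C = glm::inverse(T * R);                  // C = (TR)^-1 = R^-1 * T^-1
    // equivalently: glm::transpose(R) * glm::translate(glm::mat4(1.0f), -eye)
    return C;
}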
Hope my explanation clarifies more than it confuses :-)
Okay! After reading through the entire chapter again, I realize I missed a crucial detail. The view matrix usually does not scale the objects; it's just a matrix that rotates and translates the scene in such a way as to simulate an eye looking into the world. This chapter of LearnOpenGL.com has a block explaining how to combine matrices, and it shows how to combine a translation and rotation matrix, which is how the lookAt function is implemented.

How to rotate a vector in opengl?

I want to rotate my object, but when I use glm::rotate,
it can only rotate on X, Y, Z arrows.
For example, Model = vec3(5,0,0).
If I use Model = glm::rotate(Model, glm::radians(180.0f), glm::vec3(0, 1, 0));
it becomes vec3(-5,0,0).
I want an API so I can rotate on vec3(0,4,0) by 180 degrees, so the Model moves to vec3(3,0,0).
Which API can I use?
Yes, OpenGL uses 4x4 homogeneous transform matrices internally. But the glRotate API uses 4 parameters instead of 3:
glMatrixMode(GL_MODELVIEW);
glRotatef(angle,x,y,z);
It will rotate the selected matrix around the point (0,0,0) and the axis [(0,0,0),(x,y,z)] by angle angle [deg]. If you need to rotate around a specific point (x0,y0,z0), then you should also translate:
glMatrixMode(GL_MODELVIEW);
glTranslatef(+x0,+y0,+z0);
glRotatef(angle,x,y,z);
glTranslatef(-x0,-y0,-z0);
This is the old API, however, and with modern GL you need to do the matrix math on your own (for example by using GLM), as there is no matrix stack anymore. GLM should have the same functionality as glRotate; just find the function which mimics it (it looks like glm::rotate is more or less the same). If not, you can still do it on your own using the Rodrigues rotation formula.
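For reference, a minimal sketch of that formula (k must be a unit vector; the function name is mine):
#include <cmath>
#include <glm/glm.hpp>

// Rodrigues' rotation formula: rotate v around unit axis k by angle a (radians)
glm::vec3 rotateRodrigues(const glm::vec3& v, const glm::vec3& k, float a)
{
    float c = std::cos(a), s = std::sin(a);
    return v * c + glm::cross(k, v) * s + k * glm::dot(k, v) * (1.0f - c);
}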
Now your examples make no sense to me:
(5,0,0) -> glm::rotate (0,1,0) -> (-5,0,0)
implies a rotation around the y axis by 180 degrees? Well, I can see the axis, but I see no angle anywhere. The second (your desired API) is even more questionable:
(0,4,0) -> wanted API -> (3,0,0)
Vectors should have the same magnitude after rotation, which is clearly not the case here (unless you want to rotate around some point other than (0,0,0), which is also nowhere mentioned). Also, after a rotation you usually leak some of the magnitude into the other axes; your y,z components being zero is true only in special cases (like rotations by multiples of 90 deg).
So clearly you either forgot to mention vital info or do not know how rotation works.
Now, what do you mean by wanting to rotate on the X, Y, Z arrows? Do you want incremental rotations on key hits? Or do you have GUI-like arrows rendered in your scene and want to select them and rotate as they are dragged?
[Edit1] new example
I want an API so I can rotate vec3(0,4,0) by 180 deg and the result
will be vec3(3,0,0)
This is doable only if you are talking about points, not vectors. So you need a center of rotation, an axis of rotation, and an angle.
// knowns
glm::vec3 p0 = glm::vec3(0,4,0); // original point
glm::vec3 p1 = glm::vec3(3,0,0); // wanted point
float angle = 180.0f*(float(M_PI)/180.0f); // deg->rad (M_PI from <cmath>)
// needed for rotation
glm::vec3 center = 0.5f*(p0+p1); // vec3(1.5,2.0,0.0) mid point due to angle = 180 deg
glm::vec3 axis = glm::cross(p1-p0, glm::vec3(0,0,1)); // any vector perpendicular to p1-p0; if p1-p0 is parallel to (0,0,1) then use (0,1,0) instead
// construct transform matrix (glm::translate/rotate are in <glm/gtc/matrix_transform.hpp>)
glm::mat4 m = glm::mat4(1.0f); // unit matrix
m = glm::translate(m, center);
m = glm::rotate(m, angle, glm::normalize(axis));
m = glm::translate(m, -center); // here m should be your rotation matrix
// use transform matrix
p1 = glm::vec3(m * glm::vec4(p0, 1.0f)); // and finally how to rotate any point p0 into p1 ... in OpenGL notation
I do not code in GLM, so there might be some little differences.

Project a 3D vertex to screen coordinates independently from OpenGL?

I have a vertex (x, y, z) and I want to calculate the screen location where this point would be rendered on my viewport. Something like Ray Picking, just more or less the other way around. I don't think I can use gluProject because at the time I need the projected point my matrices are restored to identities.
I would like to stay independent from OpenGL, so no extra render pass. This way I'm sure it would only be some math like the ray picking thing. I've implemented that one and it works well, so I want to project a vertex the same way.
Of course I have camera pos, up and lookAt vectors and fovy. Is there any source of information about this? Or does anyone know how to work this out?
If you know your matrices (or at least know how to construct them), you can compute the screen location for a vertex by multiplying its position with the matrices and then performing the viewport transformation:
vProjected = modelViewProjectionMatrix * v;
if (
    // check that the vertex shouldn't be clipped
    -vProjected.w <= vProjected.x && vProjected.x <= vProjected.w &&
    -vProjected.w <= vProjected.y && vProjected.y <= vProjected.w &&
    -vProjected.w <= vProjected.z && vProjected.z <= vProjected.w
) {
    vProjected /= vProjected.w;
    vScreen.x = VIEWPORT_W * vProjected.x / 2 + VIEWPORT_CENTER_X;
    vScreen.y = VIEWPORT_H * vProjected.y / 2 + VIEWPORT_CENTER_Y;
}
Note that, as per OpenGL convention, (0, 0) is lower left corner, not upper left one.
Any math library with vector and matrix operations can help you with that. For example, mathfu or glm.
UPD. How can you construct modelViewProjectionMatrix given the camera position and orientation and the projection parameters? We need two matrices (let's assume that the model matrix is just an identity, i.e. vertex positions are given already in the world coordinate system). The first one would be the view matrix, which takes into account camera position and orientation. Here I'll be using mathfu since I'm more familiar with it, but almost every math library designed with 3D graphics in mind has the same functions:
viewMatrix = mathfu::mat4::LookAt(
    cameraLookAtPosition,
    cameraPosition,
    cameraUpVector
);
The second one would be projection matrix:
projectionMatrix = mathfu::mat4::Perspective(fovy, aspect, zNear, zFar);
Now modelViewProjectionMatrix is just a product of those two:
modelViewProjectionMatrix = projectionMatrix * viewMatrix;
Note that matrix multiplication is not commutative, in other words A * B != B * A. So order in which matrices are multiplied is important.
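If you are using glm instead, an equivalent sketch would be (the function name is mine; the parameters mirror the mathfu snippet above):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 modelViewProjection(const glm::vec3& cameraPosition, const glm::vec3& cameraLookAtPosition,
                              const glm::vec3& cameraUpVector, float fovy, float aspect,
                              float zNear, float zFar)
{
    // note: glm::lookAt takes (eye, center, up), the reverse of mathfu's (at, eye, up)
    glm::mat4 viewMatrix = glm::lookAt(cameraPosition, cameraLookAtPosition, cameraUpVector);
    glm::mat4 projectionMatrix = glm::perspective(fovy, aspect, zNear, zFar); // fovy in radians in recent GLM versions
    return projectionMatrix * viewMatrix;
}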

Getting the Tangent for an Object Space to Texture Space

A university assignment requires me to use the vertex coordinates I have to calculate the normals, and then the tangent from the normal values, so that I can create an object space to texture space matrix.
I have the code needed to make the matrix, and the binormal, but I don't have the code for calculating the tangent. I tried to look online, but the answers usually confuse me. Can you explain to me clearly how it works?
EDIT: I have corrected what I wrote previously as clearly I misunderstood the assignment. Thank you everyone for helping me see that.
A tangent in the mathematical sense is a property of a geometric object, not of the normal map. In the case of normal mapping, we are additionally searching for a very specific tangent (there are infinitely many at each point; basically every vector in the plane defined by the normal is a tangent).
But let's go one step back: we want a space where the u-direction of the texture is mapped onto the tangent direction, the v-direction onto the bitangent/binormal, and the up-vector of the normal map onto the normal of the object. Thus the tangent for a triangle (v0, v1, v2) with uv-coordinates (uv0, uv1, uv2) can be calculated as:
dv1 = v1-v0
dv2 = v2-v0
duv1 = uv1-uv0
duv2 = uv2-uv0
r = 1.0f / (duv1.x * duv2.y - duv1.y * duv2.x);
tangent = (dv1 * duv2.y - dv2 * duv1.y) * r;
bitangent = (dv2 * duv1.x - dv1 * duv2.x) * r;
When this has been done for all triangles, we have to smooth the tangents at shared vertices (quite similar to what happens with the normals). There are several algorithms for doing this, depending on what you need. One can, for example, weight the tangents by the surface area of the adjacent triangles or by their incident angles.
An implementation of this whole calculation, along with a more detailed explanation, can be found here: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-13-normal-mapping/
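As a sketch in GLM, the per-triangle calculation above could look like this (function and parameter names are mine):
#include <glm/glm.hpp>

// Per-triangle tangent/bitangent from positions (v0,v1,v2) and UVs (uv0,uv1,uv2)
void computeTangent(const glm::vec3& v0, const glm::vec3& v1, const glm::vec3& v2,
                    const glm::vec2& uv0, const glm::vec2& uv1, const glm::vec2& uv2,
                    glm::vec3& tangent, glm::vec3& bitangent)
{
    glm::vec3 dv1 = v1 - v0;
    glm::vec3 dv2 = v2 - v0;
    glm::vec2 duv1 = uv1 - uv0;
    glm::vec2 duv2 = uv2 - uv0;
    float r = 1.0f / (duv1.x * duv2.y - duv1.y * duv2.x); // divides by zero for degenerate UVs
    tangent   = (dv1 * duv2.y - dv2 * duv1.y) * r;
    bitangent = (dv2 * duv1.x - dv1 * duv2.x) * r;
}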

How to rotate 3D camera with glm

So, I have a Camera class, which has vectors forward, up, and position. I can move the camera by changing the position, and I'm calculating its matrix with this:
glm::mat4 view = glm::lookAt(camera->getPos(),
                             camera->getTarget(), // calculates the forward vector's end point, starting from pos
                             camera->getUp());
My question is, how can I rotate the camera without getting gimbal lock? I haven't found any good info about quaternions in GLM, or even quaternions in 3D programming in general.
GLM makes quaternions relatively easy. You can initialize a quaternion with a glm::vec3 containing your Euler angles, e.g. glm::fquat(glm::vec3(x,y,z)). You can rotate a quaternion by another quaternion through multiplication (r = r1 * r2), and this does so without gimbal lock. To use a quaternion to generate your matrix, use glm::mat4_cast(yourQuat), which turns it into a rotation matrix.
So, assuming you are making a 3D app, store your orientation in a quaternion and your position in a vec4. Then, to generate your view matrix, you could take a vec4(0,0,1,1), multiply it by the matrix generated by your quaternion, and add the result to the position, which will give you the target. The up vector can be obtained by multiplying the quaternion's matrix with vec4(0,1,0,1). Tell me if you have any more questions.
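A sketch of that idea in GLM (assuming OpenGL's usual convention of -Z as the local forward axis; the function name is mine):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

glm::mat4 viewFromOrientation(const glm::quat& orientation, const glm::vec3& position)
{
    glm::vec3 forward = orientation * glm::vec3(0.0f, 0.0f, -1.0f); // quat * vec3 rotates the vector
    glm::vec3 up      = orientation * glm::vec3(0.0f, 1.0f,  0.0f);
    return glm::lookAt(position, position + forward, up);
}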
For your two other questions, assuming you are using OpenGL and your Z axis is the forward axis (positive Z moves away from the user):
1) To transform your forward vector, you can rotate about your Y and X axes on your quaternion, e.g. glm::fquat(glm::vec3(rotationUpAndDown, rotationLeftAndRight, 0)), and multiply that into your orientation quaternion.
2) If you want to roll, find which component your forward axis is on. Since you appear to be using OpenGL, this axis is most likely your positive Z axis. So if you want to roll, use glm::quat(glm::vec3(0,0,rollAmt)) and multiply that into your orientation quaternion: orientation = rollQuat * orientation;
Note: here is a function that might help you; I used to use this for my cameras. It creates a quat that transforms one vector into another, e.g. one forward vector into another.
// Creates a quat that turns U to V (assumes U and V are normalized)
glm::quat CreateQuatFromTwoVectors(const glm::vec3& U, const glm::vec3& V)
{
    glm::vec3 w = glm::cross(U, V);
    glm::quat q = glm::quat(glm::dot(U, V), w.x, w.y, w.z); // (w, x, y, z) constructor
    q.w += std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    return glm::normalize(q);
}
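For example, to turn the camera's current forward direction toward a target (a usage sketch; the variable names here are made up, and both input vectors should be normalized):
glm::vec3 desired = glm::normalize(targetPos - cameraPos);
glm::quat turn = CreateQuatFromTwoVectors(currentForward, desired);
orientation = turn * orientation; // left-multiply: rotation applied in world space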