how to negate GLM quaternion rotation on any single axis? - c++

I have a quaternion derived from sensors that rotates the "camera" within an OpenGL ES scene.
Also I apply the inverse of this quaternion to certain objects in the scene, so they are "facing" the "camera" - this works as expected.
The issue is that I need to negate rotation on the Z axis for these objects.
How do I come up with a quaternion which has no rotation within the Z component?
My tests:
I have attempted to extract Euler angles, create a negating quaternion, and build the rotation matrix for these objects from the multiplication of the two quaternions - the results are incorrect.
glm::quat rMQ = cam->getCameraQuaternion();// retrieve camera quat
glm::vec3 a = glm::eulerAngles((rMQ))* 3.14159f / 180.f; // Euler angle set derived
glm::quat rMZ = glm::angleAxis(-a.z, vec3(0.0f, 0.0f, 1.0f)); // negating quaternion
glm::mat4 fM = glm::inverse(glm::mat4_cast(rMQ*rMZ)); //final mat4 for GL rotation

There is no "rotation on the z axis" when you use quaternions, just an axis and an angle. You need to convert to Euler, flip the sign of one component, then convert back to quaternions.
For Euler angles it is up to you to define the order of rotations. Rotations are not commutative, so the order does matter, and that's why there is no generic decomposition of a rotation into components. Normally the order is xyz, but there is no reason why it has to be that way. In some APIs you get to choose the order.
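For illustration, a minimal GLM sketch of that round trip (assuming a GLM version where glm::eulerAngles returns radians and GLM's default XYZ decomposition; negateZRotation is a hypothetical name):
glm::quat negateZRotation(const glm::quat& q)
{
    glm::vec3 e = glm::eulerAngles(q); // pitch (x), yaw (y), roll (z), in radians
    e.z = -e.z;                        // flip the sign of the Z component (or set it to 0 to drop it)
    return glm::quat(e);               // rebuild the quaternion from the adjusted angles
}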

Try simply zeroing-out the 'z' component of the quaternion and then re-normalizing.
Quaternions, when used to represent rotation, can be thought of as an 'axis-angle' representation.
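A sketch of that idea, assuming GLM's quat layout where x, y and z form the vector part (dropZ is a hypothetical name):
glm::quat dropZ(glm::quat q)
{
    q.z = 0.0f;               // drop the part of the vector component along Z
    return glm::normalize(q); // re-normalize so the result is a valid rotation again
}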

Related

OpenGL glm rotate model around a point issue

I have a model and some helper cubes located on its axes, three on each axis, for transformations; I use them to rotate the model around its local axes.
I want those cubes to rotate around the model center together with its rotation, so I translate them to the model center, rotate them by the same angle around the same axis, then translate them back.
This is the code:
//Rotation around X axis
GLfloat theta = glm::radians(xoffset);
glm::quat Qx(glm::angleAxis(theta, glm::vec3(1.0f, 0.0f, 0.0f)));
glm::mat4 rotX = glm::mat4_cast(Qx);
pickedObject->Transform(rotX); // multiply the model matrix by the transformation matrix
glm::vec3 op(pickedObject->getMatrix()[3]); // model position
for (TransformationHelper* h : pickedObject->GetTransformationHelpers()) { // the small cubes
    glm::mat4 m, it, t;
    glm::vec3 hp(h->getMatrix()[3]); // the cube position
    t = glm::translate(m, op); // m is a unit matrix
    it = glm::translate(m, -op);
    m = t * rotX * it;
    h->Transform(m);
}
The result is unexpected
Update:
after updating the translation matrix I got this result:
The translation is in the wrong direction; the correct offset should be hp-op, i.e. the matrix t should restore the cube's position after rotating.
t = glm::translate(glm::mat4(1.f), hp - op);
Also there is no need to use inverse since it is costly (and numerically less stable):
it = glm::translate(glm::mat4(1.f), op - hp);
(Note: translate is called here with an explicitly constructed identity matrix, glm::mat4(1.f); a default-constructed matrix is not guaranteed to be the identity in recent GLM versions.)
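Putting both corrections together, a sketch of the fixed loop, reusing the question's own helpers (Transform, getMatrix, GetTransformationHelpers) and the rotX matrix from above:
glm::vec3 op(pickedObject->getMatrix()[3]); // model position
for (TransformationHelper* h : pickedObject->GetTransformationHelpers()) {
    glm::vec3 hp(h->getMatrix()[3]);                        // cube position
    glm::mat4 t  = glm::translate(glm::mat4(1.f), hp - op); // restore the cube position afterwards
    glm::mat4 it = glm::translate(glm::mat4(1.f), op - hp); // move the cube onto the model center
    h->Transform(t * rotX * it);                            // rotate about the model center
}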

cursor orientation openGL c++

I want my 2D sprite to rotate while always facing my cursor.
I am using glad, SDL2 & glm for the math.
the "original" way I tried was to calculate the angle between my Front and my desired LookAt vector of the object and put that angle in degrees into an glm::rotate matrix.
That did not work out for some strange reason.
The other way was to do it within a quat and apply the quat to my model matrix, which did not help either.
My object is rendered with its center at the origin (0,0,0) - so no translation needs to be done for the rotation.
I draw 2 triangles to make a rectangle on which I load my texture.
My model matrix looks like that in theory:
model = rotate * scale;
I then plug it into the shader (where Position is my vec3 Vertex)
position = projection * view * model * vec4(Position, 1.0f);
The first strange thing is, if I hardcode 90.0f as an angle into glm::rotate, my object is actually rotated about 120° clockwise.
If I plug in 80° it actually rotates about ~250° clockwise.
If I plug in 45° it's a perfect 45° clockwise rotation.
All rotations are around the z-axis, e.g.:
model = glm::rotate(model, 90.0f, glm::vec3(0.0f, 0.0f, 1.0f));
If I use a quaternion to simulate an orientation, it gives me angles between 0.2 and 0.9 radians, and my object only seems to rotate between 0.0° and 45° clockwise, no matter where I put my cursor.
If I calculate the angle between my two vectors (ObjectLookAt & MousePosition) and store it, I also get fairly correct angles, but the glm::rotate function does not work as I'd expect.
Finally if I simply code the angle as:
float angle = static_cast<float>(SDL_GetTicks()/1000);
Which starts at one, and it actually rotates even more weirdly.
I'd expect it to start by rotating 1° (as it starts with 1 second) and then rotate a full circle around the z axis once 360 seconds are over.
However, it rotates a full 360° in about 6 seconds, so every "1" that is added to the angle and plugged into glm::rotate as a degree represents roughly 60°?
Is this a flaw in my logic? Do I not rotate a sprite around the z-axis if it is drawn on the x-y plane?
I also tried the x & y axis just to be safe here, but that didn't work.
I am really stuck here. I (think I) get how it should work in theory, especially as it all happens in "2D", but I cannot get it to work.
The first strange thing is, if I hardcode 90.0f as an angle into glm::rotate, my object is actually rotated about 120° clockwise. If I plug in 80° it actually rotates about ~250° clockwise. If I plug in 45° it's a perfect 45° clockwise rotation.
This is because the function glm::rotate expects the angle in radians (since GLM version 0.9.6).
Adapt your code like this:
model = glm::rotate(model, glm::radians(angle_degrees), glm::vec3(0.0f, 0.0f, 1.0f));
see also:
glm rotate usage in Opengl
GLM::Rotate seems to cause wrong rotation?
GLM: function taking degrees as a parameter is deprecated (WHEN USING RADIANS)
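Beyond the unit fix, for the original cursor-facing goal a rough sketch could look like this (assuming the sprite sits at the origin and faces +X at angle 0, cursorPos is the cursor already converted into the sprite's coordinate space, std::atan2 comes from <cmath>, and scale is the question's own scale matrix):
float angle = std::atan2(cursorPos.y, cursorPos.x); // radians, which is what glm::rotate expects
glm::mat4 rotation = glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0.0f, 0.0f, 1.0f));
glm::mat4 model = rotation * scale;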

One Quaternion Camera

Would it be possible to represent a 3D Camera using only a Quaternion? I know that most cameras use an Up vector and a Forward vector to represent its rotation, but couldn't the same rotation be represented as a single Quaternion, with the rotation axis being forward and the w component being the amount the camera was rotated from the Y axis? If there is a way to do this, any resources would be appreciated. Thanks in advance.
In general, no, it's not possible to represent a 3D camera using only a quaternion. This is because a 3D camera not only has an orientation in space, but also a position, and a projection. The quaternion only describes an orientation.
If you're actually asking whether the rotation component of the camera object could be represented as a quaternion, then the answer in general is yes. Quaternions can be easily converted into rotation matrices (Convert Quaternion rotation to rotation matrix?), and back, so anywhere a rotation matrix is used, a quaternion could also be used (and converted to a rotation matrix where appropriate).
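As a small illustration, a sketch of a view matrix built from a quaternion orientation plus a separately stored position (makeView, orientation and position are hypothetical names):
glm::mat4 makeView(const glm::quat& orientation, const glm::vec3& position)
{
    glm::mat4 rotation    = glm::mat4_cast(glm::conjugate(orientation)); // conjugate = inverse for a unit quaternion
    glm::mat4 translation = glm::translate(glm::mat4(1.0f), -position);
    return rotation * translation; // view = R^-1 * T^-1
}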

3D rotation in OpenGL

So I'm trying to do some rotation operations on an image in OpenGL based on quaternion information, and I'm wondering: is there a way to define the location of my image by a vector (let's say (0, 0, 1)), and then apply the quaternion to that vector to rotate my image around an arbitrary origin? I've been using GLM for all the math work. (Using C++)
Or is there a better way to do this that I haven't figured out yet?
If you want to rotate around a point P = {x, y, z} then you can simply translate by -P, rotate around the origin and then translate back by P.
The order in which the transforms should be applied are:
scale -> translation to point of rotation -> rotation -> translation
So your final matrix should be computed:
glm::mat4 finalTransform = translationMat * rotationMat * translationToPointOfRotationMat * scaleMat;
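In GLM terms that composition could be sketched as follows, where P (the point of rotation), rotationQuat and scaleFactors are placeholders:
glm::mat4 scaleMat                        = glm::scale(glm::mat4(1.0f), scaleFactors);
glm::mat4 translationToPointOfRotationMat = glm::translate(glm::mat4(1.0f), -P); // move P to the origin
glm::mat4 rotationMat                     = glm::mat4_cast(rotationQuat);        // quaternion -> rotation matrix
glm::mat4 translationMat                  = glm::translate(glm::mat4(1.0f), P);  // move back (plus any final placement)
glm::mat4 finalTransform = translationMat * rotationMat * translationToPointOfRotationMat * scaleMat;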

pitch yaw roll, angle independency

I am trying hard to figure out how to make pitch, yaw and roll independent of each other.
As soon as I rotate something around the z axis (pitch), the second rotation (y axis, yaw) depends on the result of the first, and the third rotation (x axis, roll) depends on the other two. So instead of having independent pitch, yaw, roll I get a mixture of the three of them, which is ugly.
I wish it were possible to store the object angles in an array [pitch, yaw, roll] and then decode those angles during the transformation, so that yawing puts the object in a given orientation and the pitch is then applied as its own angle, not as a compound of both...
I have seen references to an "arbitrary axis rotation matrix". Would it be useful for getting the desired results?
1) apply yaw (gl.glRotatef(beta, 0.0f, 1.0f, 0.0f);)
2) get the resulting axis by manually rotating the vector (1.0f, 0.0f, 0.0f) around beta
3) apply pitch using the axis obtained in 2
{and for roll... if 1, 2, 3 are correct}
4) rotate the axis obtained in 2 around its x for a roll
5) apply roll using the axis obtained in 4
Would it work? Any better solution? I would like to keep my objects' local orientations in the [pitch, yaw, roll] format.
I have been struggling with it for days; I would like to avoid using quaternions if possible. The 3D objects are stored relative to (0, 0, 0), looking along {1, 0, 0}, and are transformed to their destination and angles each frame, so the gimbal lock problem should probably be avoided easily.
In other words, my camera is working fine and world coordinates are being built correctly, but I do not know how or where object-local transformations based on yaw, pitch, roll should be applied.
The results should be read from the array [y,p,r] and combinations of them should not overlap.
Actually my transformations are:
gl.glLoadIdentity();
float[] scalation = transform.getScalation();
gl.glScalef(scalation[0], scalation[1], scalation[2]);
float[] translation = transform.getTranslation();
gl.glTranslatef(translation[0], translation[1], translation[2]);
float[] rotation = transform.getRotation();
gl.glRotatef(rotation[0], 1.0f, 0.0f, 0.0f);
gl.glRotatef(rotation[1], 0.0f, 1.0f, 0.0f);
gl.glRotatef(rotation[2], 0.0f, 0.0f, 1.0f);
The orientation always depends on the order of the angles. You can't make them independent. You rotate vectors by multiplying them by matrices, and matrix multiplication is not commutative. You can choose one order and be consistent with it.
For these problems, a common choice is the ZYX orientation method (first roll, then pitch and finally yaw).
My personal reference when I work with angles is this document, which helps me a lot.
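Expressed with GLM (the question's JOGL calls would follow the same order), one such fixed-order composition could be sketched like this, with yaw, pitch and roll as placeholder angles in radians:
// When the matrix is applied to a column vector, roll (Z) acts first, then pitch (X), then yaw (Y).
glm::mat4 r = glm::rotate(glm::mat4(1.0f), yaw,   glm::vec3(0.0f, 1.0f, 0.0f));
r           = glm::rotate(r,               pitch, glm::vec3(1.0f, 0.0f, 0.0f));
r           = glm::rotate(r,               roll,  glm::vec3(0.0f, 0.0f, 1.0f));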
If you use yaw/pitch/roll, your final orientation will always depend on the amounts and the order in which you apply them. You can choose other schemes if you want readability or simplicity. I like choosing a forward vector (F), calculating a right and an up vector based on a canonical "world up" vector, then just filling in the matrix columns. You could add an extra "axis spin" angle term if you like. It's a bit like a quaternion, but more human-readable. I use this representation for controlling a basic WASD-style camera.
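A sketch of that forward-vector scheme, where forward and worldUp are assumed inputs:
glm::vec3 f = glm::normalize(forward);                // where the camera looks
glm::vec3 r = glm::normalize(glm::cross(f, worldUp)); // right = forward x world-up
glm::vec3 u = glm::cross(r, f);                       // recomputed up, orthogonal to the others
glm::mat4 orientation(1.0f);
orientation[0] = glm::vec4(r, 0.0f);  // column 0: right
orientation[1] = glm::vec4(u, 0.0f);  // column 1: up
orientation[2] = glm::vec4(-f, 0.0f); // column 2: -forward (OpenGL cameras look down -Z)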
Accumulating (yaw, pitch, roll) rotations requires keeping a transformation matrix that is the product of the separate transformations, in the order in which they occur. The resulting matrix is a rotation around some axis by some angle.