About glm quaternion rotation - opengl

I want to rotate a vector with a quaternion. The glm library does this very well.
Here is my code:
vec3 v(0.0f, 0.0f, 1.0f);
float deg = 45.0f * 0.5f; // half-angle for the quaternion
quat q(glm::cos(glm::radians(deg)), 0, glm::sin(glm::radians(deg)), 0); // (w, x, y, z): 45 degrees about the y-axis
vec3 newv = q * v;
printf("v %f %f %f \n", newv[0], newv[1], newv[2]);
My question is that in many articles the formula for rotation by a quaternion is
rotated_v = q * v * q_conj
It seems strange that in glm simply multiplying the vector "v" by the quaternion "q" performs the rotation.
It confused me.

After doing some research, I found the definition of the operator "*" for glm quaternions and what is going on in there.
The implementation is based on these articles:
Quaternion vector rotation optimisation,
A faster quaternion-vector multiplication.
Here are two versions of rotation by quaternion.
//rotate vector
vec3 qrot(vec4 q, vec3 v)
{
    return v + 2.0*cross(q.xyz, cross(q.xyz, v) + q.w*v);
}

//rotate vector (alternative)
vec3 qrot_2(vec4 q, vec3 v)
{
    return v*(q.w*q.w - dot(q.xyz, q.xyz)) + 2.0*q.xyz*dot(q.xyz, v) +
           2.0*q.w*cross(q.xyz, v);
}
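Written out with q = (w, u), where w is the real part and u the imaginary vector part, the two snippets correspond to the following identity (my own transcription of the formulas from those articles, not a proof):

$$q\,v\,q^{*} \;=\; (w^{2} - u\cdot u)\,v \;+\; 2\,(u\cdot v)\,u \;+\; 2w\,(u\times v) \;=\; v + 2\,u\times(u\times v + w\,v)\quad\text{(the last step assumes } \lVert q\rVert = 1\text{)}$$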
If someone could prove that, I would really appreciate it.

It works when the imaginary part of your quaternion is perpendicular to your vector.
That's your case: vec3(0, sin(angle), 0) is perpendicular to vec3(0, 0, 1).
You will see that you need to multiply by the conjugate when it's not.
Let q be the quaternion and v the vector.
When you compute q * v you normally obtain a 4D result, another quaternion.
We just don't care about the first component and assume it's 0, i.e. a pure quaternion. When you compute q * v * q' you are sure to obtain a pure quaternion, which translates to a proper 3D vector.
You can test with a non-perpendicular vector/quaternion and you will see that your rotation is not right.
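A minimal sketch of such a test with GLM (it simply prints q * v next to the explicit q * v * q_conj form so the two results can be compared; the chosen axis, angle, and vector are arbitrary):

#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

int main()
{
    glm::vec3 v(1.0f, 0.0f, 1.0f); // deliberately not perpendicular to the rotation axis
    glm::quat q = glm::angleAxis(glm::radians(45.0f), glm::vec3(0.0f, 1.0f, 0.0f));

    glm::vec3 a = q * v;                     // glm's quat * vec3 operator
    glm::quat p(0.0f, v.x, v.y, v.z);        // v as a pure quaternion
    glm::quat r = q * p * glm::conjugate(q); // the q * v * q_conj form

    std::printf("q*v        : %f %f %f\n", a.x, a.y, a.z);
    std::printf("q*v*q_conj : %f %f %f\n", r.x, r.y, r.z);
    return 0;
}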
https://www.3dgep.com/understanding-quaternions/

Related

OpenGL camera rotation around X

I'm working on an OpenGL project in Visual Studio. I'm trying to rotate the camera around the X and Y axes.
That's the math I should use.
I'm having trouble because I'm using glm::lookAt for the camera position, and it takes glm::vec3 as arguments.
Can someone explain how I can implement this in OpenGL?
PS: I can't use quaternions.
The lookAt function should take three inputs:
vec3 cameraPosition
vec3 cameraLookAt
vec3 cameraUp
From my past experience: if you want to move the camera, first find the transform matrix of the movement, then apply that matrix to these three vectors. The result will be three new vec3s, which are your new inputs to the lookAt function.
vec3 newCameraPosition = vec3(movementMat4 * vec4(cameraPosition, 1.0));
//Same for the other two
Another approach is to find the inverse of the movement you want the camera to make and apply it to the whole scene, since moving the camera is essentially equivalent to applying the inverse movement to the objects while keeping the camera fixed :)
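For example, a rotation around the X and Y axes could be built with glm::rotate and pushed through all three lookAt inputs like this (a sketch; pitchDegrees and yawDegrees are placeholder names):

// requires <glm/glm.hpp> and <glm/gtc/matrix_transform.hpp>
glm::mat4 movement(1.0f);
movement = glm::rotate(movement, glm::radians(yawDegrees),   glm::vec3(0.0f, 1.0f, 0.0f)); // around Y
movement = glm::rotate(movement, glm::radians(pitchDegrees), glm::vec3(1.0f, 0.0f, 0.0f)); // around X

glm::vec3 newCameraPosition = glm::vec3(movement * glm::vec4(cameraPosition, 1.0f)); // point: w = 1
glm::vec3 newCameraLookAt   = glm::vec3(movement * glm::vec4(cameraLookAt,   1.0f));
glm::vec3 newCameraUp       = glm::vec3(movement * glm::vec4(cameraUp,       0.0f)); // direction: w = 0

glm::mat4 view = glm::lookAt(newCameraPosition, newCameraLookAt, newCameraUp);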
Below, the camera is rotated around the y-axis (it circles in the xz-plane).
const float radius = 10.0f;
float camX = sin(time) * radius;
float camZ = cos(time) * radius;
glm::vec3 cameraPos = glm::vec3(camX, 0.0, camZ);
glm::vec3 objectPos = glm::vec3(0.0, 0.0, 0.0);
glm::vec3 up = glm::vec3(0.0, 1.0, 0.0);
glm::mat4 view = glm::lookAt(cameraPos, objectPos, up);
Check out https://learnopengl.com/, it's a great site to learn from!

Camera/View matrix

After reading through this article (http://3dgep.com/?p=1700), it seems to imply that I got my view matrix wrong. Here's how I compute the view matrix:
Mat4 Camera::Orientation() const
{
    Quaternion rotation;
    rotation = glm::angleAxis(mVerticalAngle, Vec3(1.0f, 0.0f, 0.0f));
    rotation = rotation * glm::angleAxis(mHorizontalAngle, Vec3(0.0f, 1.0f, 0.0f));
    return glm::toMat4(rotation);
}

Mat4 Camera::GetViewMatrix() const
{
    return Orientation() * glm::translate(Mat4(1.0f), -mTranslation);
}
Supposedly, I am to invert this resulting matrix, but I have not done so and it has worked excellently so far, and I'm not doing any inverting further down the pipeline either. Is there something I am missing here?
You already did the inversion. The view matrix is the inverse of the model transformation that positions the camera. This is:
ModelCamera = Translation(position) * Rotation
So the inverse is:
ViewMatrix = (Translation(position) * Rotation)^-1
= Rotation^-1 * Translation(position)^-1
The translation is inverted by negating the offset:
= Rotation^-1 * Translation(-position)
This leaves us with inverting the rotation. We can assume that the rotation returned by Orientation() is already the inverted one. Thus, the original rotation of the camera model is
Rotation^-1 = RotationX(verticalAngle) * RotationY(horizontalAngle)
Rotation = (RotationX(verticalAngle) * RotationY(horizontalAngle))^-1
= RotationY(horizontalAngle)^-1 * RotationX(verticalAngle)^-1
= RotationY(-horizontalAngle) * RotationX(-verticalAngle)
So the angles you specify are actually the inverted angles that would rotate the camera. If you increase horizontalAngle, the camera should turn to the right (assuming a right-handed coordinate system). That's just a matter of definitions.
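A quick way to convince yourself of this is to build the camera's model matrix explicitly and invert it; the result should match GetViewMatrix(). A sketch, reusing the names from the question:

// The camera model's rotation is the inverse of what Orientation() returns.
Mat4 rotation    = glm::inverse(Orientation());
Mat4 cameraModel = glm::translate(Mat4(1.0f), mTranslation) * rotation; // Translation(position) * Rotation
Mat4 view        = glm::inverse(cameraModel);                           // should equal GetViewMatrix()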

Rotation of a point around a fixed point at a particular radius in DirectX?

I am trying to rotate a point, say (20, 6, 30), around a point (10, 6, 10) at a radius of 2, and I have failed so far.
I know that to rotate a point around the origin you just multiply the rotation matrix with the world matrix, and that to rotate a point around itself you translate the point to the origin, rotate, and translate back, but I'm not sure how to approach this problem.
I could slap together some C++ code if you like (stay away from D3DX as it is deprecated), but I think figuring things out for yourself is a big part of programming. Here is the math behind rotating 3D point v2 around 3D point v1. Hope it helps:
1.) Compute the difference vector by subtracting v1 from v2. Store it in v3.
2.) Convert v3 to spherical coordinates, a notation defined by radius, yaw, and pitch.
3.) Change the values of theta (yaw) and phi (pitch) as required.
4.) Convert v3 back into Cartesian (x, y, z) coordinates and add the coordinates of v1. That's where v2's new position should be (see the sketch after the notes below).
Note 1 - In physics, the meanings of theta and phi are swapped, so theta is pitch and phi is yaw. In mathematics, theta is yaw and phi is pitch.
Note 2 - Yaw, pitch and roll are rotations about the vertical (up), lateral (side-to-side) and longitudinal (front-to-back) axes, respectively.
Note 3 - Wikipedia on D3DX: "In 2012, Microsoft announced that D3DX would be deprecated in the Windows 8 SDK, along with other development frameworks such as XNA. The mathematical constructs of D3DX, like vectors and matrices, would be consolidated with XNAMath into a new library: DirectXMath."
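Here is a rough C++ sketch of steps 1-4 above (the function name, parameters, and the yaw/pitch conventions are my own assumptions):

#include <cmath>
#include <glm/glm.hpp>

// Rotate point v2 around point v1 by changing yaw/pitch in spherical coordinates.
// Assumes v2 != v1 so the radius is non-zero.
glm::vec3 rotateAroundPoint(glm::vec3 v2, glm::vec3 v1, float deltaYaw, float deltaPitch)
{
    glm::vec3 v3 = v2 - v1;                // 1) difference vector
    float radius = glm::length(v3);        // 2) spherical coordinates: radius, yaw, pitch
    float yaw    = std::atan2(v3.x, v3.z);
    float pitch  = std::asin(v3.y / radius);

    yaw   += deltaYaw;                     // 3) change the angles as required
    pitch += deltaPitch;

    return v1 + glm::vec3(radius * std::cos(pitch) * std::sin(yaw),  // 4) back to Cartesian,
                          radius * std::sin(pitch),                  //    then add v1
                          radius * std::cos(pitch) * std::cos(yaw));
}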
Using a radius doesn't make sense when you are rotating an object coordinate around an arbitrary origin coordinate, because the radius stays constant during the rotation. To get a desired radius, make sure that the object coordinate and origin coordinate start out that distance apart.
Also note that you need a rotation axis to rotate around a point when using Euler angles (as opposed to spherical coordinates or quaternions), because otherwise it is undefined in which direction(s) you will be rotating.
If you are willing to do matrix math, here is how I do it (using pseudocode, because I only know OpenGL):
vec3 translate_around_point
(
    vec3 object_pos,
    vec3 origin_pos,
    vec3 rotation_axis,
    float rotation
)
{
    // get the difference
    vec3 difference = object_pos - origin_pos;
    // rotate the difference around the rotation axis
    mat4 model = rotate(mat4(1.0), rotation, rotation_axis);
    vec3 trans = vec3(model * vec4(difference, 1.0));
    // add the rotated offset back onto the origin position
    return origin_pos + trans;
}
Your code using this function would look like this (radius is 22.360679626464844):
vec3 object_pos(20.0, 6.0, 30.0);
vec3 orig_pos(10.0, 6.0, 10.0);
vec3 axis(0.0, 1.0, 0.0); // rotation axis is 'up' in world space
float angle = 90.0; // note: glm::rotate expects the angle in radians, so convert with glm::radians if you use glm
object_pos = translate_around_point(object_pos, orig_pos, axis, angle);
Also, if you were to run it with the object position at a greater height or with a tilted rotation axis, the origin position may no longer appear to be at the center of the rotation. This is OK, because if the rotation doesn't rotate around part of an axis (x, y, or z), then it won't affect the translation along that part.

OpenGL, GLM Quaternion Rotations

I'm updating an old OpenGL project and I'm switching all the (deprecated) glMatrix() functions for matrices and quaternions, and I'm having trouble getting the rotation working.
My drawing looks like this:
//these two are supposedly working
mat4 mProjection = perspective(FOV, aspectRatio, near, far);
mat4 mView = lookAt(cameraPosition, cameraCenter, headsUp);
mat4 mModel = mat4(1.0f);
mat4 mMVP = mProjection * mView * mModel;
What I'm trying to do now is to apply rotation to an object around a specific point (like the object's center).
I tried:
mat4 mModelRotation = rotate(mModel, object->RotationY(), vec3(0.0, 1.0, 0.0)); //RotationY being an angle in degrees
mat4 mMVP = mProjection * mView * mModel * mModelRotation;
But this causes the object to rotate around one of its edges, not its center.
I'd like to know how I can apply quaternions to rotate the object around any point I pass as a parameter, for example.
I'm inexperienced with matrices, since I avoided them before by relying on the glMatrix() functions, so I don't understand much about the relation between them and spatial positions, and trying to move to quaternions looks even more complicated.
I've read about the logic of quaternions and how to use them, technically, but I don't understand where their values come from.
For example:
//axis is a unit vector
local_rotation.w = cosf( fAngle/2)
local_rotation.x = axis.x * sinf( fAngle/2 )
local_rotation.y = axis.y * sinf( fAngle/2 )
local_rotation.z = axis.z * sinf( fAngle/2 )
total = local_rotation * total
I read this, and I have no clue what these values are. Axis is a unit vector... of what? fAngle, I assume, is the angle I want to rotate by, but since quaternions use an arbitrary axis, how do I get the value for each of the X, Y, Z components, and how do I specify it in the quaternion?
So, I'm looking for any practical example/tutorial of a Quaternion, so I can understand what's going on.
The only information I have when I want to rotate an object is the axis I want to rotate around (x, y OR z; not all of them at once, though a combination of them may appear in the final result) and a value in degrees.
I'm not much of a math person, so any tutorial that doesn't use shortcuts is highly appreciated.
OK, let's say that you have a model ML (a set of points) and a point P around which you want to rotate ML.
All rotations are relative to the origin, so you need to move the set of points ML so that the point P becomes the origin, rotate all the points, and then move them back.
How do you do this? Simple: for each point ML(k) (a point in the set) you do:
ML(k) - P --> with this you move the points, using the point P as the origin
then rotate:
ROT * (ML(k) - P)
and finally, you move it back:
ROT * (ML(k) - P) + P
With quaternions, you replace the matrix multiplication by the sandwich product with q and its conjugate q':
q * (ML(k) - P) * q' + P
That should work.
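In GLM, one point of that loop might look like this (a sketch; rotateAboutPoint and the variable names are mine, and q is assumed to be a unit quaternion):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Rotate a single point of the model around P with quaternion q, via q * (point - P) * q_conj + P.
glm::vec3 rotateAboutPoint(const glm::vec3& point, const glm::vec3& P, const glm::quat& q)
{
    glm::vec3 d = point - P;                    // move P to the origin
    glm::quat pure(0.0f, d.x, d.y, d.z);        // the offset as a pure quaternion
    glm::quat r = q * pure * glm::conjugate(q); // rotate
    return glm::vec3(r.x, r.y, r.z) + P;        // move back
}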

glm combine rotation and translation

I have an object which I first want to rotate (about its own center) and then translate to some point. I have a glm::quat that holds the rotation and a glm::vec3 that holds the point to which it needs to be translated.
glm::vec3 position;
glm::quat orientation;
glm::mat4 modelmatrix; <-- want to combine them both in here
modelmatrix = glm::translate(glm::toMat4(orientation),position);
Then in my render function, I do:
pvm = projectionMatrix*viewMatrix*modelmatrix;
glUniformMatrix4fv(pvmMatrixUniformLocation, 1, GL_FALSE, glm::value_ptr(pvm));
..and render...
Unfortunately, the object just orbits around the origin when I apply a rotation (the farther "position" is from the origin, the larger the orbit).
When I apply only the position, it translates fine. When I apply only the rotation, it stays at the origin and rotates about its center (as expected). So why does it go weird when I apply them both? Am I missing something basic?
Because you're applying them in the wrong order. By doing glm::translate(glm::toMat4(orientation),position), you are doing the equivalent of this:
glm::mat4 rot = glm::toMat4(orientation);
glm::mat4 trans = glm::translate(glm::mat4(1.0f), position);
glm::mat4 final = rot * trans;
Note that the translation is on the right side of the multiplication, not the left. This means that the translation happens first, and the rotation is then applied relative to the translated position; the rotation happens in the already-translated space.
You want the rotation to happen first. So reverse the order of the matrix multiplication.
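In other words, something like this should rotate the object about its center and then move it into place (a sketch of the corrected order):

glm::mat4 rot   = glm::toMat4(orientation);
glm::mat4 trans = glm::translate(glm::mat4(1.0f), position);
glm::mat4 modelmatrix = trans * rot; // rotation first, then translation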