OpenGL glm rotate model around a point issue - c++

I have a model and some helper cubes located on its axes, three on each axis, which I use to rotate the model around its local axes.
I want those cubes to rotate around the model's center together with the model, so I translate them to the model center, rotate them by the same angle around the same axis, then translate them back.
This is the code:
//Rotation around X axis
GLfloat theta = glm::radians(xoffset);
glm::quat Qx(glm::angleAxis(theta, glm::vec3(1.0f, 0.0f, 0.0f)));
glm::mat4 rotX = glm::mat4_cast(Qx);
pickedObject->Transform(rotX); // multiply the model matrix by the transformation matrix
glm::vec3 op(pickedObject->getMatrix()[3]); // model position
for (TransformationHelper* h : pickedObject->GetTransformationHelpers()) { // the small cubes
    glm::mat4 m, it, t;
    glm::vec3 hp(h->getMatrix()[3]); // the cube position
    t = glm::translate(m, op); // m is an identity matrix
    it = glm::translate(m, -op);
    m = t * rotX * it;
    h->Transform(m);
}
The result is unexpected
Update:
After updating the translation matrix I got this result:

The translation is in the wrong direction; the correct offset is hp-op, i.e. the matrix t should restore the cube's position after rotating:
t = glm::translate(glm::mat4(1.f), hp - op);
Also, there is no need to use glm::inverse here, since it is costly (and numerically less stable); the opposite translation can be written directly:
it = glm::translate(glm::mat4(1.f), op - hp);
(Note: translate is called here with an explicitly constructed identity matrix; a default-constructed glm::mat4 is not guaranteed to be initialized to the identity in recent GLM versions.)
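For reference, the corrected loop could then look like this (a sketch following the fix above; Transform, getMatrix and GetTransformationHelpers are the question's own methods and are assumed to compose the matrices as in the original code):
glm::vec3 op(pickedObject->getMatrix()[3]); // model position
for (TransformationHelper* h : pickedObject->GetTransformationHelpers()) {
    glm::vec3 hp(h->getMatrix()[3]);                         // cube position
    glm::mat4 t  = glm::translate(glm::mat4(1.0f), hp - op); // restore the cube's offset afterwards
    glm::mat4 it = glm::translate(glm::mat4(1.0f), op - hp); // move the cube onto the model center
    h->Transform(t * rotX * it);                             // rotate around the model center
}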

Related

OpenGL Make Object Stick to Camera

In my adventures to better understand what exactly is going on with matrices and vertex math with OpenGL, I wanted to see if I could get an object to "stick" in front of my "camera". I have been playing with this for a couple of days now and feel like I'm getting close.
So far I have managed to get the object to follow the camera's movement, except for its rotation (I just can't seem to figure out this front vector). To create the new position of the object I have the following code:
glm::mat4 mtx_trans = glm::mat4(1.0f);
mtx_trans = glm::translate(mtx_trans, camera->getPosition() + camera->getFront());
glm::vec4 cubePosVec4 = glm::vec4(0.0f, 0.0f, -3.0f, 1.0);
cubePosVec4 = mtx_trans * cubePosVec4;
cubePositions[9] = glm::vec3(cubePosVec4.x, cubePosVec4.y, cubePosVec4.z);
Where camera->getPosition() obtains the camera's current position vector and camera->getFront() obtains the camera's current front vector.
As mentioned, I'm in the process of learning what all is going on here, so it's possible I'm going about this all wrong... In which case, how should I go about "locking" an object a certain distance away from the camera?
If the object should always be at the same position in relation to the camera, then you have to skip the transformation by the view matrix.
Note, the view matrix transforms from world space to view space. If the object should always be placed in front of the camera, then the object does not have to be placed in the world; it has to be placed in the view. Therefore, the "view" matrix for the object is the identity matrix.
So the combined model-view transformation for the object in your case is a translation along the z axis in the negative direction. The translation has to be negative because, in view space, the z axis points out of the view:
glm::mat4 mtx_trans = glm::mat4(1.0f);
mtx_trans = glm::translate(mtx_trans, glm::vec3(0.0f, 0.0f, -3.0f));
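A minimal sketch of how this is used when drawing that one object (the setMat4 uniform helper and the uniform names are illustrative, not from the question):
// Model-view matrix for the camera-locked object: no view matrix, only a
// translation 3 units along the negative z axis in view space.
glm::mat4 mtx_trans = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -3.0f));
shader.setMat4("view", glm::mat4(1.0f)); // hypothetical helper: skip the camera's view matrix for this object
shader.setMat4("model", mtx_trans);
// All other objects keep their normal view matrix from the camera.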

cursor orientation openGL c++

I want my 2D sprite to rotate while always facing my cursor.
I am using glad, SDL2 & glm for the math.
the "original" way I tried was to calculate the angle between my Front and my desired LookAt vector of the object and put that angle in degrees into an glm::rotate matrix.
That did not work out for some strange reason.
The other way was to do it within a quat and apply the quat to my model matrix, which did not help either.
My object is rendered with its center at the origin (0,0,0) - so no translation needs to be done for the rotation.
I draw 2 triangles to make a rectangle on which I load my texture.
My model matrix looks like this in theory:
model = rotate * scale;
I then plug it into the shader (where Position is my vec3 Vertex)
position = projection * view * model * vec4(Position, 1.0f);
The first strange thing is, if I hardcode 90.0f as an angle into glm::rotate, my object is actually rotated about 120° clockwise.
If I plug in 80° it actually rotates about ~250° clockwise.
If I plug in 45° it's a perfect 45° clockwise rotation.
All rotations are around the z-axis, e.g.:
model = glm::rotate(model, 90.0f, glm::vec3(0.0f, 0.0f, 1.0f));
If I use a quaternion to simulate an orientation, it gives me angles between 0.2 and 0.9 radians, and my object only seems to rotate between 0° and 45° clockwise, no matter where I put my cursor.
If I calculate the angle between my two vectors (ObjectLookAt & MousePosition) and store it, I also get fairly correct angles, but the glm::rotate function does not work as I expected.
Finally, if I simply compute the angle as:
float angle = static_cast<float>(SDL_GetTicks()/1000);
which starts at 1, it rotates even more strangely.
I'd expect it to start rotated by 1° (as it starts at 1 second) and then complete a full circle around the z axis after 360 seconds.
However, it rotates a full 360° in about 6 seconds, so every "1" added to the angle and plugged into glm::rotate as a degree apparently corresponds to about 60°?
Is this a flaw in my logic? Do I not rotate a sprite around the z-axis if it is drawn on the x-y plane?
I also tried the x & y axes just to be safe, but that didn't work.
I am really stuck here. I (think I) get how it should work in theory, especially as it all happens in "2D", but I cannot get it to work.
The first strange thing is, if I hardcode 90.0f as an angle into glm::rotate, my object is actually rotated about 120° clockwise. If I plug in 80° it actually rotates about ~250° clockwise. If I plug in 45° it's a perfect 45° clockwise rotation.
This is because the function glm::rotate expects the angle in radians (since GLM version 0.9.6). Passing degrees therefore over-rotates by a factor of roughly 57.3, since one radian is about 57.3°. This also explains why the SDL_GetTicks example completes a full turn in roughly 6 seconds: 2π ≈ 6.28.
Adapt your code like this:
model = glm::rotate(model, glm::radians(angle_degrees), glm::vec3(0.0f, 0.0f, 1.0f));
see also:
glm rotate usage in Opengl
GLM::Rotate seems to cause wrong rotation?
GLM: function taking degrees as a parameter is deprecated (WHEN USING RADIANS)
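To make the sprite face the cursor, the angle itself can be computed with atan2 and passed to glm::rotate in radians. A minimal sketch, assuming spriteCenter and cursorPos are already in the same 2D coordinate space and scale is the question's scale matrix (names are illustrative):
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build the sprite's model matrix so it faces the cursor.
glm::mat4 faceCursor(const glm::vec2& spriteCenter, const glm::vec2& cursorPos, const glm::mat4& scale)
{
    glm::vec2 dir = cursorPos - spriteCenter;
    float angle = std::atan2(dir.y, dir.x); // radians, measured from the +x axis

    // glm::rotate expects radians (GLM >= 0.9.6); rotate around the z axis for a 2D sprite.
    glm::mat4 rotate = glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0.0f, 0.0f, 1.0f));
    return rotate * scale; // model = rotate * scale, as in the question
}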

how to negate GLM quaternion rotation on any single axis?

I have a quaternion derived from sensors that rotates the "camera" within an OpenGL ES scene.
Also I apply the inverse of this quaternion to certain objects in the scene, so they are "facing" the "camera" - this works as expected.
The issue is that I need to negate rotation on the Z axis for these objects.
How do I come up with a quaternion which has no rotation within the Z component?
My tests:
I have attempted to extract Euler angles, create a negating quaternion, and build the rotation matrix for these objects from the multiplication of the two quaternions - the results are incorrect.
glm::quat rMQ = cam->getCameraQuaternion(); // retrieve camera quat
glm::vec3 a = glm::eulerAngles(rMQ) * 3.14159f / 180.f; // Euler angles, converted to radians
glm::quat rMZ = glm::angleAxis(-a.z, glm::vec3(0.0f, 0.0f, 1.0f)); // negating quaternion
glm::mat4 fM = glm::inverse(glm::mat4_cast(rMQ * rMZ)); // final mat4 for GL rotation
There is no "rotation on the z axis" when you use quaternions, just an axis and an angle. You need to convert to Euler, flip the sign of one component, then convert back to quaternions.
For Euler angles it is up to you to define the order of rotations. Rotations are not commutative, so the order does matter, and that's why there is no generic decomposition of a rotation into components. Normally the order is xyz, but there is no reason why it has to be that way. In some APIs you get to choose the order.
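A minimal sketch of that Euler round-trip in GLM (note that glm::eulerAngles and the glm::quat(vec3) constructor both assume a fixed rotation order, so this only approximates removing roll in the general case; getCameraQuaternion() is the question's own accessor):
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Remove the z (roll) component of a rotation by round-tripping through Euler angles.
glm::quat removeRoll(const glm::quat& q)
{
    glm::vec3 euler = glm::eulerAngles(q); // pitch, yaw, roll in radians (recent GLM versions)
    euler.z = 0.0f;                        // drop the roll component
    return glm::quat(euler);               // rebuild the quaternion from the remaining angles
}

// Usage, following the question's setup:
// glm::mat4 fM = glm::inverse(glm::mat4_cast(removeRoll(cam->getCameraQuaternion())));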
Try simply zeroing-out the 'z' component of the quaternion and then re-normalizing.
Quaternions, when used to represent rotation, can be thought of as an 'axis-angle' representation.

OpenGL/GLSL/GLM - Skybox rotates as if in 3rd person

I have just gotten into implementing skyboxes and am doing so with OpenGL/GLSL and GLM as my math library. I assume the problem is matrix related, and I haven't been able to find an implementation that uses the GLM library.
The model for the skybox loads just fine; however, the camera circles it as if orbiting it like a third-person camera.
For my skybox matrix, I update it every time my camera updates. Because I use glm::lookAt, it is created essentially the same way as my view matrix, except that I use (0, 0, 0) for the position.
Here is my view matrix creation. It works fine for rendering objects and geometry:
direction = glm::vec3(cos(anglePitch) * sin(angleYaw), sin(anglePitch), cos(anglePitch) * cos(angleYaw));
right = glm::vec3(sin(angleYaw - 3.14f/2.0f), 0, cos(angleYaw - 3.14f/2.0f));
up = glm::cross(right, direction);
glm::mat4 viewMatrix = glm::lookAt(position, position+direction, up);
Similarly, my sky matrix is created in the same way with only one change:
glm::vec3 position = glm::vec3(0.0f, 0.0f, 0.0f);
glm::mat4 skyView = glm::lookAt(position, position + direction, up);
I know a skybox does not apply translation and only considers rotations so I am not sure what the issue is. Is there an easier way to do this?
Visual aids:
Straight on without any movement yet
When I rotate the camera:
My question is this: how do I set up the correct matrix for rendering a skybox using glm::lookAt?
Aesthete is right: the skybox/skydome is just another object, which means you do not change the projection matrix!
Your render should be something like this:
clear screen/buffers
set camera
set the modelview matrix to identity and then translate it to the camera position; you can get the position directly from the projection matrix (if my memory serves, at array positions 12, 13, 14). To obtain the matrix see https://stackoverflow.com/a/18039707/2521214
draw the skybox/skydome (do not cross your z_far plane, or disable the depth test)
optionally clear the Z-buffer or re-enable the depth test
draw your scene stuff (do not forget to set the modelview matrix for each of your drawn models)
Of course you can temporarily set your camera position (projection matrix) to (0,0,0) and leave the modelview matrix as identity; this is sometimes the more precise approach, but do not forget to set the camera position back after drawing the skybox.
Hope it helps.
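With GLM specifically, a common way to get a rotation-only matrix for the skybox is to strip the translation from the existing view matrix instead of building a second lookAt matrix. A minimal sketch, assuming viewMatrix is the view matrix from the question:
// Converting to mat3 drops the translation column; converting back to mat4
// gives a view matrix that rotates with the camera but never moves away from it.
glm::mat4 skyView = glm::mat4(glm::mat3(viewMatrix));
// Draw the skybox with skyView (typically with glDepthFunc(GL_LEQUAL) or with depth
// writes disabled), then draw the rest of the scene with the full viewMatrix.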

pitch yaw roll, angle independency

I am trying hard to figure out how to make pitch, yaw and roll independent of each other.
As soon as I rotate something around the z axis (pitch), the second rotation (y axis, yaw) depends on the result of the first, and the third rotation (x axis, roll) depends on the other two. So instead of having independent pitch, yaw, roll, I get a mixture of the three of them, which is ugly.
I wish it were possible to store the object's angles in an array [pitch, yaw, roll] and then decode those angles during the transformation, so that yawing puts the object in a given position and then it takes the angle corresponding to the pitch, rather than a compound of both...
I have seen references to an 'arbitrary axis rotation matrix'. Would it be useful for getting the desired results?
1) apply yaw (gl.glRotatef(beta, 0.0f, 1.0f, 0.0f);)
2) get the resulting axis by manually rotating the vector (1.0f, 0.0f, 0.0f) by beta
3) apply pitch using the axis obtained in 2
{and for roll... if 1, 2, 3 are correct}
4) rotate the axis obtained in 2 around its x for a roll
5) apply roll using the axis obtained in 4
Would it work? Any better solution? I would like to keep my object-local orientations in the [pitch, yaw, roll] format.
I have been struggling with this for days, and I would like to avoid using quaternions if possible. The 3D objects are stored relative to 0,0,0 and looking along {1,0,0}, and transformed to their destination and angles each frame, so the gimbal lock problem should probably be easy to avoid.
In other words, my camera is working fine and world coordinates are computed correctly, but I do not know how or where object-local transformations based on yaw, pitch, roll should be applied.
The results should be read from the array [y,p,r] and combinations of them should not overlap.
Actually my transformations are:
gl.glLoadIdentity();
float[] scalation = transform.getScalation();
gl.glScalef(scalation[0], scalation[1], scalation[2]);
float[] translation = transform.getTranslation();
gl.glTranslatef(translation[0], translation[1], translation[2]);
float[] rotation = transform.getRotation();
gl.glRotatef(rotation[0], 1.0f, 0.0f, 0.0f);
gl.glRotatef(rotation[1], 0.0f, 1.0f, 0.0f);
gl.glRotatef(rotation[2], 0.0f, 0.0f, 1.0f);
The orientation always depends on the order of the angles. You can't make them independent. You rotate vectors by multiplying them by matrices, and matrix multiplication is not commutative. You can choose one order and be consistent with it.
For these problems, a common choice is the ZYX orientation method (first roll, then pitch, and finally yaw).
My personal reference when I work with angles is this document, which helps me a lot.
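As a quick illustration of why the order matters, here is a standalone GLM sketch composing the same two 90° rotations in opposite orders:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 I(1.0f);
glm::mat4 Rx = glm::rotate(I, glm::radians(90.0f), glm::vec3(1.0f, 0.0f, 0.0f));
glm::mat4 Ry = glm::rotate(I, glm::radians(90.0f), glm::vec3(0.0f, 1.0f, 0.0f));

glm::vec4 v(1.0f, 0.0f, 0.0f, 0.0f);
glm::vec4 a = Rx * Ry * v; // rotate about y first, then x: roughly (0, 1, 0, 0)
glm::vec4 b = Ry * Rx * v; // rotate about x first, then y: roughly (0, 0, -1, 0)
// a != b (up to floating-point rounding): the same rotations in a different
// order produce a different orientation.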
If you use yaw/pitch/roll, your final orientation will always depend on the amounts and the order in which you apply them. You can choose other schemes if you want readability or simplicity. I like choosing a forward vector (F), calculating a right and up vector based on a canonical 'world up' vector, and then just filling in the matrix columns. You could add an extra 'axis spin' angle term if you like. It's a bit like a quaternion, but more human-readable. I use this representation for controlling a basic WASD-style camera.
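A minimal sketch of that forward-vector approach with GLM (the function name and the column convention are illustrative; the question itself uses fixed-function glRotatef calls):
#include <glm/glm.hpp>

// Build an orientation matrix from a forward direction and a canonical world-up vector.
// Degenerate if forward is parallel to worldUp.
glm::mat4 orientationFromForward(const glm::vec3& forward,
                                 const glm::vec3& worldUp = glm::vec3(0.0f, 1.0f, 0.0f))
{
    glm::vec3 f = glm::normalize(forward);
    glm::vec3 r = glm::normalize(glm::cross(f, worldUp)); // right vector
    glm::vec3 u = glm::cross(r, f);                       // recomputed up vector

    glm::mat4 m(1.0f);
    m[0] = glm::vec4(r, 0.0f); // column 0: right
    m[1] = glm::vec4(u, 0.0f); // column 1: up
    m[2] = glm::vec4(f, 0.0f); // column 2: forward
    return m;
}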
Accumulating (yaw, pitch, roll) rotations requires keeping a transformation matrix, which is the product of the separate transformations in the order in which they occur. The resulting matrix is a rotation around some axis by some angle.