Cursor orientation OpenGL C++

I want my 2D sprite to rotate while always facing my cursor.
I am using glad, SDL2 & glm for the math.
The "original" way I tried was to calculate the angle between my Front vector and the object's desired LookAt vector and put that angle, in degrees, into a glm::rotate matrix.
That did not work out for some strange reason.
The other way was to do it within a quat and apply the quat to my model matrix, which did not help either.
My object is rendered with its center at the origin (0,0,0) - so no translation needs to be done for the rotation.
I draw 2 triangles to make a rectangle on which I load my texture.
My model matrix looks like that in theory:
model = rotate * scale;
I then plug it into the shader (where Position is my vec3 Vertex)
position = projection * view * model * vec4(Position, 1.0f);
The first strange thing is, if I hardcode 90.0f as an angle into glm::rotate, my object is actually rotated about 120° clockwise.
If I plug in 80° it actually rotates about ~250° clockwise.
If I plug in 45° it's a perfect 45° clockwise rotation.
All rotations are around the z-axis, e.g.:
model = glm::rotate(model, 90.0f, glm::vec3(0.0f, 0.0f, 1.0f));
If I use a quaternion to simulate an orientation, it gives me angles between 0.2 and 0.9 radians, and my object only seems to rotate between 0.0° and 45° clockwise, no matter where I put my cursor.
If I calculate the angle between my two vectors (ObjectLookAt & MousePosition) and store it, I also get fairly correct angles, but the glm::rotate function does not work as I'd expect.
Finally, if I simply compute the angle as:
float angle = static_cast<float>(SDL_GetTicks() / 1000);
which starts at one, it actually rotates even more weirdly.
I'd expect it to start rotating at 1° (as it starts with 1 second) and then make a full circle around the z axis once 360 seconds are over.
However, it makes a full 360° in about 6 seconds, so every "1" that is added to the angle and plugged into glm::rotate as a degree represents about 60°?
Is this now a flaw in my logic? Do I not rotate a sprite around the z-axis if it is drawn in the x-y plane?
I also tried the x and y axes just to be safe here, but that didn't work.
I am really stuck here. I (think I) understand how it should work in theory, especially as it all happens in "2D", but I cannot get it to work.

The first strange thing is, if I hardcode 90.0f as an angle into glm::rotate, my object is actually rotated about 120° clockwise. If I plug in 80° it actually rotates about ~250° clockwise. If I plug in 45° it's a perfect 45° clockwise rotation.
This is because the function glm::rotate expects the angle in radians (since GLM version 0.9.6).
Adapt your code like this:
model = glm::rotate(model, glm::radians(angle_degrees), glm::vec3(0.0f, 0.0f, 1.0f));
see also:
glm rotate usage in Opengl
GLM::Rotate seems to cause wrong rotation?
GLM: function taking degrees as a parameter is deprecated (WHEN USING RADIANS)
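Putting the fix together: the cursor-to-sprite angle can be obtained directly in radians with atan2, so no degree conversion is needed at all. A minimal sketch (angleToCursor is a hypothetical helper; it assumes the sprite's un-rotated front points along +x and the cursor coordinates are already in the sprite's world space):

```cpp
#include <cassert>
#include <cmath>

// Angle (in radians) that rotates a sprite whose un-rotated front points
// along +x so that it faces the cursor. Hypothetical helper, not from the post.
float angleToCursor(float spriteX, float spriteY, float cursorX, float cursorY)
{
    // std::atan2 already returns radians, so the value can go straight into
    //   model = glm::rotate(model, angle, glm::vec3(0.0f, 0.0f, 1.0f));
    // without any glm::radians conversion.
    return std::atan2(cursorY - spriteY, cursorX - spriteX);
}
```

A cursor straight above the sprite yields pi/2, a cursor straight to the right yields 0, matching a counter-clockwise rotation about +z.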

Related

OpenGl rotations and translations

I am building a camera class to look around a scene. At the moment I have 3 cubes just spread around to get a good impression of what is going on. I have set the scroll button on my mouse to give me translation along the z-axis, and when I move my mouse left or right I detect this movement and rotate around the y-axis. This is just to see what happens and play around a bit. So I succeeded in making the camera rotate by rotating the cubes around the origin, but after I rotate by some angle, let's say 90 degrees, and try to translate along the z axis, to my surprise I find out that my cubes are now going from left to right and not towards me or away from me. So what is going on here? It seems that the z axis is rotated too. I guess the same goes for the x axis. So it seems that nothing actually moved with regard to the origin, but the whole coordinate system with all the objects was just rotated. Can anyone help me here, what is going on? How does the coordinate system work in OpenGL?
You are most likely confusing local and global rotations. The usual cheap remedy is to change (reverse) the order of some of your transformations. However, doing this blindly is trial & error and can be frustrating. It's better to understand the math first...
Old-API OpenGL uses an MVP matrix which is:
MVP = Model * View * Projection
where Model and View are already multiplied together. What you have is most likely the same. Now the problem is that Model is a direct matrix, but View is an inverse one.
So if you have a transform matrix representing your camera, then in order to use it to transform back you need to use its inverse...
MVP = Model * Inverse(Camera) * Projection
Then you can use the same order of transformations for both Model and Camera and also use their geometric properties like basis vectors etc. ... then stuff like camera-local movement or camera follow becomes easy. Beware: some tutorials use the transpose instead of a real matrix inverse. That is correct only if the matrix contains nothing but unit (or equally sized) orthogonal basis vectors and no offset -- so no scale, skew, offset or projection, just rotation and equal scale along all axes!
That means when you rotate Model and View in the same way, the results are opposite. So in old code it is usual to have something like this:
// view part of matrix
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(view_c, 0, 0, 1); // ugly Euler angles
glRotatef(view_b, 0, 1, 0); // ugly Euler angles
glRotatef(view_a, 1, 0, 0); // ugly Euler angles
glTranslatef(view_pos.x, view_pos.y, view_pos.z); // set camera position
// model part of matrix
for (i = 0; i < objs; i++)
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glTranslatef(obj_pos[i].x, obj_pos[i].y, obj_pos[i].z); // set object position
    glRotatef(obj_a[i], 1, 0, 0); // ugly Euler angles
    glRotatef(obj_b[i], 0, 1, 0); // ugly Euler angles
    glRotatef(obj_c[i], 0, 0, 1); // ugly Euler angles
    // here render obj[i]
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}
Note the order of transforms is opposite (I just wrote this here in the editor, so it is not tested and can be opposite to native GL notation ... I do not use Euler angles). The order must match your convention. To learn more about these (including examples) without using Euler angles, see:
Understanding 4x4 homogenous transform matrices
Here is 4D version of what your 3D camera class should look like (just shrink the matrices to 4x4 and have just 3 rotations instead of 6):
reper4D
Pay attention to the difference between the local lrot_?? and global grot_?? functions. Also note that rotations are defined by a plane, not an axis vector, as an axis vector is just a human abstraction that only really works in 2D and 3D ... planes work from 2D up to ND.
PS. It's a good idea to keep the distortions (scale, skew) separated from the model and to keep the transform matrices that represent coordinate systems orthonormal. It will ease up a lot of things later on once you get to doing advanced math on them. Resulting in:
MVP = Model * Model_distortion * Inverse(Camera) * Projection
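The transpose-versus-inverse caveat can be checked directly: for a pure rotation matrix the transpose really is the inverse, which is why the cheap trick works there and nowhere else. A self-contained sketch with plain arrays (the Mat3 type and helper names are mine, not from the answer):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat3 = std::array<std::array<float, 3>, 3>;

// Rotation about the z axis by `a` radians.
Mat3 rotZ(float a)
{
    return {{{ std::cos(a), -std::sin(a), 0.0f },
             { std::sin(a),  std::cos(a), 0.0f },
             { 0.0f,         0.0f,        1.0f }}};
}

Mat3 transpose(const Mat3& m)
{
    Mat3 t{};
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            t[c][r] = m[r][c];
    return t;
}

Mat3 mul(const Mat3& a, const Mat3& b)
{
    Mat3 p{};
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            for (int k = 0; k < 3; ++k)
                p[r][c] += a[r][k] * b[k][c];
    return p;
}
```

Multiplying transpose(rotZ(a)) by rotZ(a) yields the identity, confirming the transpose acts as the inverse for a pure rotation; add any scale, skew or offset and this property is lost, so a true inverse is needed again.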

Problem rotating a GameObject with a quaternion rotation by another quaternion

I am making a video game engine for a college subject and I implemented 3D camera icons to show where the game objects that have no mesh, but do have a camera component, are.
https://i.gyazo.com/5cd944b8f1c3d3e08aea4c440d294a36.mp4
Here's how it rotates now. The goal is to make the camera rotate just like now, but looking toward the frustum front, so I should make the camera mesh rotate 90 degrees to the right.
How can I make my original quaternion rotate 90 degrees to the right? Thanks in advance!
One of the core aspects of why quaternions are used to represent rotations in games (and other applications) is that you can chain them very easily by multiplying them. So, by creating a quaternion that rotates 90 degrees to the right, and multiplying that with your rotation quaternion, you get a quaternion that does both.
Notice that order matters here, so Quaternion(90 degree to the right) * YourQuaternion will yield a different result than YourQuaternion * Quaternion(90 degree to the right), similar to how you end up with a different rotation in the real world, depending on the order you're applying the rotations. In terms of quaternions, the rightmost rotation is "applied first", so your "90 degrees to the right" quaternion should be on the right side of the multiply sign.
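The chaining described above can be sketched with a minimal hand-rolled quaternion (assuming a y-up world where "to the right" means a yaw about +y; the Quat struct and helper names are illustrative, not the engine's API):

```cpp
#include <cassert>
#include <cmath>

// Minimal quaternion (w, x, y, z) with the Hamilton product.
// Hypothetical sketch, not engine code.
struct Quat { float w, x, y, z; };

Quat mul(const Quat& a, const Quat& b)
{
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// 90 degrees about the +y axis ("to the right" for a y-up camera):
// axis-angle to quaternion uses the half angle.
Quat yaw90()
{
    const float h = 0.5f * 1.57079632f; // half angle, radians
    return { std::cos(h), 0.0f, std::sin(h), 0.0f };
}
```

Composing yaw90() with itself via mul() yields the 180° yaw (w = 0, y = 1), exactly what the chaining rule predicts; in practice you would multiply yaw90() on the appropriate side of your existing rotation quaternion, as described above.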

Cross Product Confusion

I have a question regarding a tutorial that I have been following on the rotation of the camera's view direction in OpenGL.
Whilst I appreciate that prospective respondents are not the authors of the tutorial, I think this is a situation that most intermediate-to-experienced graphics programmers are likely to have encountered before, so I will seek the advice of members on this topic.
Here is a link to the video to which I refer: https://www.youtube.com/watch?v=7oNLw9Bct1k.
The goal of the tutorial is to create a basic first person camera which the user controls via movement of the mouse.
Here is the function that handles cursor movement (with some variables/members renamed to conform with my personal conventions):
glm::vec2 Movement { OldCursorPosition - NewCursorPosition };
// camera rotates on y-axis when mouse moved left/right (orientation is { 0.0F, 1.0F, 0.0F }):
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.x) / 2, MVP.view.orientation)
* glm::vec4(MVP.view.direction, 0.0F);
glm::vec3 RotateAround { glm::cross(MVP.view.direction, MVP.view.orientation) };
/* why is camera rotating around cross product of view direction and view orientation
rather than just rotating around x-axis when mouse is moved up/down..? : */
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.y) / 2, RotateAround)
* glm::vec4(MVP.view.direction, 0.0F);
OldCursorPosition = NewCursorPosition;
What I struggle to understand is why obtaining the cross product is even required. What I would naturally expect is for the camera to rotate around the y-axis when the mouse is moved from left to right, and for the camera to rotate around the x-axis when the mouse is moved up and down. I just can't get my head around why the cross product is even relevant.
From my understanding, the cross product will return a vector which is perpendicular to two other vectors; in this case that is the cross product of the view direction and view orientation, but why would one want a cross product of these two vectors? Shouldn't the camera just rotate on the x-axis for up/down movement and then on the y-axis for left/right movement...? What am I missing/overlooking here?
Finally, when I run the program, I can't visually detect any rotation on the z-axis, despite the fact that the rotation axis 'RotateAround' has a z-value greater than or less than 0 on every call to the function after the first (which suggests that the camera should rotate at least partially on the z-axis).
Perhaps this is just due to my lack of intuition, but if I change the line:
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.y) / 2, RotateAround)
* glm::vec4(MVP.view.direction, 0.0F);
To:
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.y) / 2, glm::vec3(1.0F, 0.0F, 0.0F))
* glm::vec4(MVP.view.direction, 0.0F);
So that the rotation only happens on the x-axis rather than partially on the x-axis and partially on the z-axis, and then run the program, I can't really notice much of a difference in the workings of the camera. It feels like maybe there is a difference, but I can't articulate what it is.
The problem here is the frame of reference.
rather than just rotating around x-axis when mouse is moved up/down..?
Which x axis do you mean? If it's the axis of the global frame of reference (or one parallel to it), then yes. If it's the x axis of a frame of reference partially constrained by the camera's orientation, then in general the answer is no. It depends on the order in which the rotations are done, and on whether the MVP is saved between movements.
Given that in the code the MVP gets modified by each rotation, it changes over time. If the camera made a 180-degree turn around the x axis, the direction of its x axis would be reversed.
If the camera rotated around the y axis (I assume ISO directions for a ground vehicle), the direction would change as well. If the camera rotated around the global y by 90 degrees, then around the global x by 45 degrees, the resulting view would be tilted 45 degrees sideways.
The order of rotations around a constrained frame of reference for ground-bound vehicles (and possibly for the character of a classic 3D shooter) is: around y, around x, around z. For aerial vehicles with airplane-like controls it is: around z, around x, around y. In orbital space, z and x are inverted, if I remember correctly (z points down).
You have to do the cross product because after multiple mouse moves the camera is now differently oriented. The original x-axis you wanted to rotate around is NOT the same x-axis you want to rotate around now. You must calculate the vector that is currently pointed straight out the side of the camera and rotate around that. This is considered the "right" vector. This is the cross product of the view and up vectors, where view is the "target" vector (down the camera's z axis, where you are looking) and up is straight up out of the camera up the camera's y-axis. These axes must be updated as the camera moves. Calculating the view and up vectors does not require a cross product as you should be applying rotations to these depending on your movements along the way. The view and up should update by rotations, but if you want to rotate around the x-axis (pitch) you must do a cross product.
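The "right" vector described above is just one cross product away; a minimal sketch with plain arrays (Vec3 and cross are hypothetical helpers, not from the tutorial):

```cpp
#include <array>
#include <cassert>

using Vec3 = std::array<float, 3>;

// Standard cross product; cross(view, up) gives the camera's "right" vector,
// i.e. the current pitch axis. Hypothetical helper.
Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a[1]*b[2] - a[2]*b[1],
             a[2]*b[0] - a[0]*b[2],
             a[0]*b[1] - a[1]*b[0] };
}
```

With the default view (0, 0, -1) and up (0, 1, 0) this yields the world x axis (1, 0, 0), which is why rotating about (1, 0, 0) initially looks identical; once the camera yaws, the same formula automatically yields the new right vector, while the hardcoded x axis does not.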

OpenGL camera rotation

In OpenGL I'm trying to create a free-flight camera. My problem is the rotation around the Y axis. The camera should always be rotated around the world Y axis and not around its local orientation. I have tried several matrix multiplications, but all without results. With
camMatrix = camMatrix * yrotMatrix
the camera rotates around its local axis. And with
camMatrix = yrotMatrix * camMatrix
it rotates around the world axis, but always around the origin. However, the rotation center should be the camera. Does anybody have an idea?
One of the more tricky aspects of 3D programming is getting complex transformations right.
In OpenGL, every point is transformed with the model/view matrix and then with the projection matrix.
The model/view matrix takes each point and transforms it to where it should be from the point of view of the camera. The projection matrix converts the point's coordinates so that the X and Y coordinates can be mapped to the window easily.
To get the model/view matrix right, you have to start with an identity matrix (one that doesn't change the vertices), then apply the transforms for the camera's position and orientation, then those for the object's position and orientation, in reverse order.
Another thing you need to keep in mind is that rotations are always about an axis that passes through the origin (0,0,0). So when you apply a rotate transform for the camera, whether you are turning it (as you would turn your head) or orbiting it around the origin (as the Earth orbits the Sun) depends on whether you have previously applied a translation transform.
So if you want to both rotate and orbit the camera, you need to:
Apply the rotation(s) to orient the camera
Apply translation(s) to position it
Apply rotation(s) to orbit the camera round the origin
(optionally) apply translation(s) to move the camera in its set orientation to move it to orbit around a point other than (0,0,0).
Things can get more complex if you, say, want to point the camera at a point that is not (0,0,0) and also orbit that point at a set distance, while also being able to pitch or yaw the camera. See here for an example in WebGL. Look for GLViewerBase.prototype.display.
The Red Book covers transforms in much more detail.
Also note gluLookAt, which you can use to point the camera at something, without having to use rotations.
Rather than doing this using matrices, you might find it easier to create a camera class which stores a position and orthonormal n, u and v axes, and rotate them appropriately, e.g. see:
https://github.com/sgolodetz/hesperus2/blob/master/Shipwreck/MapEditor/GUI/Camera.java
and
https://github.com/sgolodetz/hesperus2/blob/master/Shipwreck/MapEditor/Math/MathUtil.java
Then you write things like:
if (m_keysDown[TURN_LEFT])
{
    m_camera.rotate(new Vector3d(0,0,1), deltaAngle);
}
When it comes time to set the view for the camera, you do:
gl.glLoadIdentity();
glu.gluLookAt(m_position.x, m_position.y, m_position.z,
              m_position.x + m_nVector.x, m_position.y + m_nVector.y, m_position.z + m_nVector.z,
              m_vVector.x, m_vVector.y, m_vVector.z);
If you're wondering how to rotate about an arbitrary axis like (0,0,1), see MathUtil.rotate_about_axis in the above code.
If you don't want the transform to be based on the camera from the previous frame, my suggestion would be to throw out the matrix compounding and recalculate the matrix every frame. I don't think there's a way to do what you want with a single compounded matrix, as it stores the translation and rotation together.
I guess if you just want a pitch/yaw-only camera, just store those two values as floats, and then rebuild the matrix from them each frame. Maybe something like this pseudocode:
onFrameUpdate() {
    newPos = camMatrix * (0, 0, speed) // move forward along the camera axis
    yaw   += mouse_move_x;
    pitch += mouse_move_y;
    camMatrix = identity.translate(newPos)
    camMatrix = rotate(camMatrix, (0,1,0), yaw)
    camMatrix = rotate(camMatrix, (1,0,0), pitch)
}
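The rebuild-from-two-floats idea can be sketched without a matrix library at all; one way to derive the camera's forward vector from yaw/pitch (assuming y-up with yaw 0 looking down -z; the pseudocode above does not fix a convention, so this one is an assumption):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<float, 3>;

// Forward vector rebuilt every frame from yaw and pitch (radians).
// Convention: y-up, yaw 0 looks down -z. Hypothetical helper.
Vec3 forwardFromYawPitch(float yaw, float pitch)
{
    return {  std::sin(yaw) * std::cos(pitch),
              std::sin(pitch),
             -std::cos(yaw) * std::cos(pitch) };
}
```

Because the vector is recomputed from the two accumulated angles each frame, no error compounds and the world-y yaw never drifts into a local-axis rotation.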
rotates the camera along the world axis, but always around the origin. However, the rotation center should be the camera. Somebody an idea?
I assume the matrix is stored in memory this way (the numbers represent element indices as if the matrix were a linear 1D array):
0 1 2 3 //row 0
4 5 6 7 //row 1
8 9 10 11 //row 2
12 13 14 15 //row 3
Solution:
Store last row of camera matrix in temporary variable.
Set last row of camera matrix to (0, 0, 0, 1)
Use camMatrix = yrotMatrix * camMatrix
Restore last row of camera matrix from temporary variable.
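The zero-the-translation trick is equivalent to rotating about the camera's own position: translate the pivot to the origin, rotate, translate back. A 2D sketch of that equivalence (rotateAbout is a hypothetical helper):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec2 = std::array<float, 2>;

// Rotate point `p` about pivot `c` by `a` radians: translate to the origin,
// rotate, translate back -- the same idea as temporarily zeroing the
// camera matrix's translation row before multiplying. Hypothetical helper.
Vec2 rotateAbout(Vec2 p, Vec2 c, float a)
{
    const float x = p[0] - c[0];
    const float y = p[1] - c[1];
    return { c[0] + x * std::cos(a) - y * std::sin(a),
             c[1] + x * std::sin(a) + y * std::cos(a) };
}
```

With the pivot at the camera's position, the camera turns in place while the rotation still uses the world axis, which is exactly what the four steps above achieve at the matrix level.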

Making an object orbit a fixed point in directx?

I am trying to make a very simple object rotate around a fixed point in 3D space.
Basically my object is created from a single D3DXVECTOR3, which indicates the current position of the object relative to a single constant point, let's just say (0,0,0).
I already calculate my angle based on the current in-game time of day.
But how can I apply that angle to the position, so it will rotate?
Sorry, I'm pretty new to DirectX.
So are you trying to plot the sun or the moon?
If so then one assumes your celestial object is something like a sphere that has (0,0,0) as its center point.
Probably the easiest way to rotate it into position is to do something like the following
D3DXMATRIX matRot;
D3DXMATRIX matTrans;
D3DXMatrixRotationX( &matRot, angle );
D3DXMatrixTranslation( &matTrans, 0.0f, 0.0f, orbitRadius );
D3DXMATRIX matFinal = matTrans * matRot;
Then Set that matrix as your world matrix.
What it does is create a rotation matrix to rotate the object by "angle" around the X axis (i.e. in the Y-Z plane); it then creates a matrix that pushes the object out to the appropriate place at angle 0 (orbitRadius may be better off as the 3rd parameter in the translation call, depending on where your zero point is). The final line multiplies these two matrices together. Matrix multiplication is non-commutative (i.e. M1 * M2 != M2 * M1). What the above does is move the object orbitRadius units along the Z axis and then rotate that around the point (0,0,0). You can think of rotating an object held in your hand: if orbitRadius is the distance from your elbow to your hand, then any rotation around your elbow (at (0,0,0)) is going to form an arc through the air.
I hope that helps, but I would really recommend doing some serious reading on linear algebra. The more you know, the easier questions like this will be to solve yourself :)
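The translate-then-rotate chain can be sanity-checked with plain trigonometry: the object always ends up orbitRadius units from the origin, swept through the Y-Z plane. A small sketch (using a right-handed rotation about +x; D3DX's left-handed convention may flip a sign, so the exact signs here are an assumption):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<float, 3>;

// Where "translate orbitRadius along +z, then rotate about +x by angle"
// places the object, written out as plain trig. Hypothetical sketch;
// sign conventions differ per API.
Vec3 orbitPosition(float orbitRadius, float angle)
{
    return { 0.0f,
            -orbitRadius * std::sin(angle),
             orbitRadius * std::cos(angle) };
}
```

At angle 0 the object sits at (0, 0, orbitRadius), and for any angle its distance from the origin stays orbitRadius, i.e. a circular orbit around the fixed point.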