Finding angles in each axis from vertical normal - c++

I am making a program which reads a texture that should be applied to a mesh and generates some shapes to be displayed on its triangles. I convert the points so that the original shape appears to lie on the XZ plane (using OpenGL's axis convention: Y is vertical, Z points toward the camera, X to the right). Now I have no idea how to properly measure the angle between the actual normal of a triangle and the vertical normal of the image, by which I mean (0, 1, 0). I know it's probably basic, but my mind refuses to cooperate on 3D graphics tasks recently.
Currently I use
angles.x = glm::orientedAngle(glm::vec2(normalOfTriangle.z, normalOfTriangle.y), glm::vec2(1.0f, 0.0f));
angles.y = glm::orientedAngle(glm::vec2(normalOfTriangle.x, normalOfTriangle.z), glm::vec2(1.0f, 0.0f));
angles.z = glm::orientedAngle(glm::vec2(normalOfTriangle.x, normalOfTriangle.y), glm::vec2(1.0f, 0.0f));
angles = angles + glm::vec3(-glm::half_pi<float>(), 0.0f, glm::half_pi<float>());
Given my way of thinking, this should give proper results, but the faces of a cube whose normals should be parallel to the Z axis appear to be unrotated in Z.
My logic is that I measure the angle from each axis and then rotate about each axis by that angle so the normal becomes vertical. But as I said, my mind glitches, and I cannot find the proper way to do it. Can somebody please help?
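One way to sidestep the per-axis angle bookkeeping entirely is to build a single axis-angle rotation that carries the vertical normal (0, 1, 0) onto the triangle normal: the axis is the cross product of the two vectors and the angle is the arccosine of their dot product. The sketch below is dependency-free for illustration (the `Vec3`/`AxisAngle` types and function names are invented, not the asker's code); with GLM the same result comes from `glm::rotate` with this axis and angle:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct AxisAngle { Vec3 axis; float angle; };

// Axis and angle rotating the vertical (0,1,0) onto a unit triangle normal.
// Degenerate cases: normal == up (angle 0) and normal == -up (angle pi,
// any horizontal axis works) are special-cased.
AxisAngle fromVertical(Vec3 n) {
    const Vec3 up{ 0.0f, 1.0f, 0.0f };
    Vec3 axis = cross(up, n);
    float len = std::sqrt(dot(axis, axis));
    if (len < 1e-6f)   // n is parallel or anti-parallel to up
        return { { 1.0f, 0.0f, 0.0f },
                 dot(up, n) > 0.0f ? 0.0f : 3.14159265f };
    axis = { axis.x / len, axis.y / len, axis.z / len };
    float angle = std::acos(std::fmax(-1.0f, std::fmin(1.0f, dot(up, n))));
    return { axis, angle };
}
```

For a face whose normal is (0, 0, 1) this yields axis (1, 0, 0) and angle pi/2, i.e. tip the image forward by 90 degrees. Rotating once about a single well-chosen axis avoids the coupling between per-axis Euler angles that the snippet above runs into.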

Related

Correct camera transformation for first person camera

I am making a camera in OpenGL and I am having trouble with the first-person camera. I have tried a few versions of the camera transformation, and each had its own problems.

At first, I did the transformations in this order: I would translate the object in the positive direction when moving away from it, and in the negative direction when moving towards it; after this translation, I would rotate around the X and Y axes. With this camera I found that when I have a few cubes in my scene and I rotate, everything is fine, but when I translate after that rotation, all of the objects converge towards me, or better to say, towards the "player". After giving this some thought I realized why: because I do the translation first, in the next frame, when I try to translate the player in the direction the camera is looking at that moment, the objects get translated first and then rotated, so the result is movement of the objects towards or away from the player. The code for this is here (and don't mind the camUp and camRight vectors; these are just the y and x axis vectors and are not transformed at all):
m_ViewMatrix = inverse(glm::rotate(glm::mat4(1.0f), m_Rotation, camUp))* inverse(glm::rotate(glm::mat4(1.0f), m_TiltRotation, camRight)) * glm::translate(glm::mat4(1.0f), m_Position);
But rotating first and then translating is not good either, because then I get an editor-style camera, which is fine in itself but not what I want.
So I thought about it some more and tried making small incremental transformations and then resetting the parameters, accumulating all the transformations this way:
m_ViewMatrix = inverse(glm::rotate(glm::mat4(1.0f), m_Rotation, camUp)) * glm::translate(glm::mat4(1.0f), m_Position)* inverse(glm::rotate(glm::mat4(1.0f), m_TiltRotation, camRight))*m_ViewMatrix;
m_Position = { 0.0f, 0.0f, 0.0f };
m_Rotation = 0.0f;
m_TiltRotation = 0.0f;
But now I have a problem with rotations around the z axis, which I don't want, and this problem was not there before. So now I have no idea what to do; I read some answers here but couldn't apply them, I don't know why. If anyone could help me in the context of the code I copied here, that would be great.
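A common way out of both problems is to stop accumulating incremental matrices altogether: store absolute yaw, pitch, and position, move the position each frame along direction vectors derived from the current yaw, and rebuild the view matrix from scratch as the inverse of translate(position) * rotateY(yaw) * rotateX(pitch). Because the state is absolute, a translation can never get caught "before" an old rotation, and no roll about z can creep in. A dependency-free sketch of the movement part (all names invented for illustration, not the poster's code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Walk directions on the ground plane from the current yaw (radians).
// Yaw 0 looks down -Z, the usual OpenGL convention.
Vec3 forwardFromYaw(float yaw) {
    return { -std::sin(yaw), 0.0f, -std::cos(yaw) };
}
Vec3 rightFromYaw(float yaw) {
    return { std::cos(yaw), 0.0f, -std::sin(yaw) };
}

// Per-frame update: keys change the position along the *current* facing.
// The view matrix is then rebuilt each frame as
//   view = rotateX(-pitch) * rotateY(-yaw) * translate(-position)
// (with glm::rotate / glm::translate), instead of multiplying small
// increments into m_ViewMatrix.
void moveForward(Vec3& pos, float yaw, float speed) {
    Vec3 f = forwardFromYaw(yaw);
    pos = { pos.x + f.x * speed, pos.y + f.y * speed, pos.z + f.z * speed };
}
```

Yaw is always applied about the world up axis and pitch about the camera's right axis, in a fixed order, so the unwanted z rotation of the accumulated version cannot appear.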

Cross Product Confusion

I have a question regarding a tutorial that I have been following on the rotation of the camera's view direction in OpenGL.
Whilst I appreciate that prospective respondents are not the authors of the tutorial, I think this is a situation most intermediate-to-experienced graphics programmers have encountered before, so I will seek the advice of members on this topic.
Here is a link to the video to which I refer: https://www.youtube.com/watch?v=7oNLw9Bct1k.
The goal of the tutorial is to create a basic first person camera which the user controls via movement of the mouse.
Here is the function that handles cursor movement (with some variables/members renamed to conform with my personal conventions):
glm::vec2 Movement { OldCursorPosition - NewCursorPosition };
// camera rotates on y-axis when mouse moved left/right (orientation is { 0.0F, 1.0F, 0.0F }):
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.x) / 2, MVP.view.orientation)
* glm::vec4(MVP.view.direction, 0.0F);
glm::vec3 RotateAround { glm::cross(MVP.view.direction, MVP.view.orientation) };
/* why is camera rotating around cross product of view direction and view orientation
rather than just rotating around x-axis when mouse is moved up/down..? : */
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.y) / 2, RotateAround)
* glm::vec4(MVP.view.direction, 0.0F);
OldCursorPosition = NewCursorPosition;
What I struggle to understand is why obtaining the cross product is even required. What I would naturally expect is for the camera to rotate around the y-axis when the mouse is moved from left to right, and for the camera to rotate around the x-axis when the mouse is moved up and down. I just can't get my head around why the cross product is even relevant.
From my understanding, the cross product will return a vector which is perpendicular to two other vectors; in this case that is the cross product of the view direction and view orientation, but why would one want a cross product of these two vectors? Shouldn't the camera just rotate on the x-axis for up/down movement and then on the y-axis for left/right movement...? What am I missing/overlooking here?
Finally, when I run the program, I can't visually detect any rotation on the z-axis, despite the fact that the rotation axis 'RotateAround' has a non-zero z-value on every call to the function after the first (which suggests that the camera should rotate at least partially on the z-axis).
Perhaps this is just due to my lack of intuition, but if I change the line:
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.y) / 2, RotateAround)
* glm::vec4(MVP.view.direction, 0.0F);
To:
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.y) / 2, glm::vec3(1.0F, 0.0F, 0.0F))
* glm::vec4(MVP.view.direction, 0.0F);
So that the rotation only happens on the x-axis rather than partially on the x-axis and partially on the z-axis, and then run the program, I can't really notice much of a difference in the behaviour of the camera. It feels like maybe there is a difference, but I can't articulate what it is.
The problem here is the frame of reference.

rather than just rotating around x-axis when mouse is moved up/down..?

Which x axis do you mean? If it is the x axis of the global frame of reference (or one parallel to it), then yes. If it is the x axis of a frame of reference partially constrained by the camera's orientation, then in general the answer is no; it depends on the order in which the rotations are applied and on whether the MVP is saved between movements.
Since in this code the MVP is modified by each rotation, it changes over time. If the camera turned 180 degrees around the vertical axis, the direction of its local x axis would be reversed.
If the camera rotates around the y axis (I assume ISO directions for a ground vehicle), that direction changes as well. If the camera rotates around the global y axis by 90 degrees and then around the global x axis by 45 degrees, the resulting view is tilted 45 degrees sideways.
The order of rotations in the constrained frame of reference for ground-bound vehicles (and, possibly, for the character in a classic 3D shooter) is: around y, then around x, then around z. For aerial vehicles with airplane-like controls it is around z, then x, then y. In orbital space, z and x are swapped, if I remember correctly (z points down).
You have to do the cross product because after multiple mouse moves the camera is oriented differently. The original x-axis you wanted to rotate around is NOT the same x-axis you want to rotate around now. You must calculate the vector that currently points straight out of the side of the camera and rotate around that; this is called the "right" vector. It is the cross product of the view and up vectors, where view is the "target" vector (down the camera's z-axis, where you are looking) and up points straight up out of the camera along the camera's y-axis. These axes must be updated as the camera moves. Calculating the view and up vectors does not require a cross product, since you apply rotations to them as you go; but when you want to rotate around the x-axis (pitch), you must take a cross product to find the current right vector.
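The answer above can be condensed into a few lines. The sketch below uses a dependency-free rotation about an arbitrary axis (Rodrigues' formula, which is what `glm::rotate(mat4, angle, axis)` applies to a direction); all names are invented for illustration, not the tutorial's code:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 add(Vec3 a, Vec3 b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
static Vec3 scale(Vec3 v, float s) { return { v.x*s, v.y*s, v.z*s }; }

// Rodrigues' formula: rotate v by angle (radians) about unit axis k.
Vec3 rotateAbout(Vec3 v, Vec3 k, float angle) {
    float c = std::cos(angle), s = std::sin(angle);
    return add(add(scale(v, c), scale(cross(k, v), s)),
               scale(k, dot(k, v) * (1.0f - c)));
}

// Pitch about the camera's *current* right vector, not the world x-axis.
// Assumes direction and up are unit length and orthogonal.
Vec3 pitch(Vec3 direction, Vec3 up, float angle) {
    Vec3 right = cross(direction, up);
    return rotateAbout(direction, right, angle);
}
```

With direction (0, 0, -1) and up (0, 1, 0), `right` comes out as (1, 0, 0), so the very first pitch is the familiar x-axis rotation; after a 90-degree yaw, `right` points along the world z-axis, which is exactly why `RotateAround` acquires a z component.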

OpenGL Transforming an object on its local coordinate system based on rotation

So at the moment I am working on a game for my coursework, based around the idea of flying a rocket. I spent so much time thinking about the physics behind it that I completely ignored getting it to move properly.
For example, if I draw a cone with the top pointing to the sky and rotate it on the X axis, it rotates properly; however, if I then translate it on the Y axis, it moves along the global Y axis instead of along its local coordinate system, whose Y axis would point out of the cone's top.
My question is: does OpenGL have a local coordinate system, or would I have to somehow make my own transformation matrices? And if so, how would I go about doing that?
The way I am doing the transformation and rotation is as follows:
glPushMatrix();
glTranslatef(llmX, llmY + acceleration, llmZ);
glRotatef(rotX, 1.0f, 0.0f, 0.0f);
glRotatef(rotY, 0.0f, 0.0f, 1.0f);
drawRocket();
glPopMatrix();
Here is a picture that hopefully explains better what I mean.
EDIT: I find it really weird that the rotations seem to work one after the other: if I rotate on the X axis once and then rotate on the Z axis, the second rotation is applied around the already-rotated X axis instead of the world X axis.
I am hoping somebody could help me understand this; I really need to get it working for my project.
Thank you.
If you are using a translation matrix for moving up (i.e. moving in the positive Y direction), then no matter where you are on the matrix stack or in the transformation process, you are going to move the vertices in the world's positive Y direction.
If you instead want the rocket to move along its rotated (local) axis, apply the translation before the rotation in the transformation chain; in other words, issue the glRotatef calls before glTranslatef, so that the translation takes place in the already-rotated frame. Essentially, push the matrices in the opposite order.
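Equivalently, you can keep the glTranslatef-first order and do the bookkeeping yourself: rotate the local up vector (0, 1, 0) by the rocket's current orientation and add it, scaled by the speed, to the position. A dependency-free sketch (names invented; angles in radians, whereas glRotatef takes degrees; the rotation order matches the glRotatef(rotX, 1,0,0) then glRotatef(rotY, 0,0,1) calls above):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// World-space direction of the cone's local +Y after the two rotations,
// i.e. up_world = Rx(rotX) * Rz(rotZ) * (0, 1, 0).
Vec3 localUp(float rotX, float rotZ) {
    // Rz applied to (0, 1, 0):
    Vec3 v = { -std::sin(rotZ), std::cos(rotZ), 0.0f };
    // then Rx:
    return { v.x,
             v.y * std::cos(rotX) - v.z * std::sin(rotX),
             v.y * std::sin(rotX) + v.z * std::cos(rotX) };
}

// Thrust then moves the rocket along its own nose, not the world Y axis:
//   Vec3 up = localUp(radiansOf(rotX), radiansOf(rotY));
//   llmX += up.x * acceleration;
//   llmY += up.y * acceleration;
//   llmZ += up.z * acceleration;
```

With the position updated this way, the existing glTranslatef/glRotatef block can stay exactly as it is.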

opengl rotate about world axes

Can someone with OpenGl experience please suggest a strategy to help me solve an issue I'm having with rotations?
Imagine a set of world coordinate xyz axes bolted to the center of the universe; that is, for purposes of this
discussion they do not move. I'm also doing no translations, and the camera is fixed,
to keep things simple. I have a cube centered at the origin and the intent is
that pressing the 'x', 'y', and 'z' keys will increment
a variable representing the number of degrees to rotate the cube about the world xyz axes. Each key press is 90° (you can imagine rotating a Lego brick in such a way), so pressing the 'x' key increments a float property RotXdeg:
RotXdeg += 90.0f;
Likewise for the 'y' and 'z' keys.
A naive way to implement[1] this is:
Gl.glPushMatrix();
Gl.glRotatef(RotXdeg, 1.0f, 0.0f, 0.0f);
Gl.glRotatef(RotYdeg, 0.0f, 1.0f, 0.0f);
Gl.glRotatef(RotZdeg, 0.0f, 0.0f, 1.0f);
Gl.glPopMatrix();
This of course has the effect of rotating the cube, and its local xyz axes along with it, so the desired rotations about the world xyz axes have not been achieved.
(For those not familiar with OpenGL, this can be demonstrated by simply rotating 90° about the x axis, which causes the local y axis to be oriented along the world z axis; a subsequent 90° rotation about y then appears to the user as a rotation about the world z axis.)
I believe this post is asking for something similar, but the answer is not clear, and my understanding is that quaternions are just one way to solve the problem.
It seems to me that
there should be a relatively straightforward solution, even if it is not
particularly efficient. I've spent hours trying various ideas, including creating my own rotation matrices and trying ways to multiply them with the modelview matrix, but to no
avail. (I realize matrix multiplication is not commutative, but I have a feeling that's not the problem.)
([1] By the way, I'm using the Tao OpenGl namespace; thanks to http://vasilydev.blogspot.com for the suggestion.)
Code is here
If the cube lies at (0,0,0), world and local rotations have the same effect. If the cube were in another position, a 90° world rotation would move it along a quarter-circle orbit around (0,0,0). It is unclear what you are failing to achieve, and I'd also advise against using the old immediate mode for matrix operations. Nevertheless, a way to achieve a world rotation in that style is:
- translate to (0,0,0)
- rotate 90 degrees
- translate back
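For the original problem (key presses that always rotate about the world axes), the usual fix is to keep an accumulated rotation matrix and pre-multiply each new 90° world-axis rotation into it, rather than re-composing three Euler angles every frame. A sketch with bare 3x3 matrices (all names invented; with the fixed pipeline the result would be loaded via glMultMatrixf):

```cpp
#include <cmath>

struct Mat3 { float m[3][3]; };

Mat3 identity() {
    return {{ {1,0,0}, {0,1,0}, {0,0,1} }};
}
Mat3 mul(const Mat3& a, const Mat3& b) {            // returns a * b
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}
Mat3 rotX(float a) {
    float c = std::cos(a), s = std::sin(a);
    return {{ {1,0,0}, {0,c,-s}, {0,s,c} }};
}
Mat3 rotY(float a) {
    float c = std::cos(a), s = std::sin(a);
    return {{ {c,0,s}, {0,1,0}, {-s,0,c} }};
}

// Key handlers: PRE-multiplying applies the new turn about the fixed
// world axis. Post-multiplying (cube = mul(cube, rotX(step))) would
// rotate about the cube's own, already-rotated axis instead.
void pressX(Mat3& cube, float step) { cube = mul(rotX(step), cube); }
void pressY(Mat3& cube, float step) { cube = mul(rotY(step), cube); }
```

The accumulated matrix replaces the three RotXdeg/RotYdeg/RotZdeg angles entirely, so the order-dependence of the naive glRotatef chain never arises.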

pitch yaw roll, angle independency

I am trying hard to figure out how to make pitch, yaw and roll independent of one another.
As soon as I rotate something around the z axis (pitch), the second rotation (y axis, yaw) depends on the result of the first, and the third rotation (x axis, roll) depends on the other two. So instead of independent pitch, yaw and roll I get a mixture of the three, which is ugly.
I wish it were possible to store the object's angles in an array [pitch, yaw, roll] and then decode those angles during the transformation, so that yawing would put the object in a given position and pitching would then apply exactly the pitch angle, not a compound of both...
I have seen references to an 'arbitrary axis rotation matrix'. Would it be useful for getting the desired results?
1) apply yaw (gl.glRotatef(beta, 0.0f, 1.0f, 0.0f);)
2) compute the axis that results from manually rotating the vector (1.0f, 0.0f, 0.0f) around y by beta
3) apply pitch around the axis obtained in 2
{and for roll... if 1, 2, 3 are correct}
4) rotate the axis obtained in 2 around its own x for a roll
5) apply roll around the axis obtained in 4
Would it work? Is there a better solution? I would like to keep my objects' local orientations in the [pitch, yaw, roll] format.
I have been struggling with this for days and would like to avoid using quaternions if possible. The 3D objects are stored relative to (0,0,0), looking along {1,0,0}, and are transformed to their destination position and angles each frame, so the gimbal lock problem should probably be easy to avoid.
In other words, my camera is working fine and world coordinates are being built correctly, but I do not know how or where object-local transformations based on yaw, pitch and roll should be applied.
The results should be read from the array [y,p,r], and combinations of them should not overlap.
Actually my transformations are:
gl.glLoadIdentity();
float[] scalation = transform.getScalation();
gl.glScalef(scalation[0], scalation[1], scalation[2]);
float[] translation = transform.getTranslation();
gl.glTranslatef(translation[0], translation[1], translation[2]);
float[] rotation = transform.getRotation();
gl.glRotatef(rotation[0], 1.0f, 0.0f, 0.0f);
gl.glRotatef(rotation[1], 0.0f, 1.0f, 0.0f);
gl.glRotatef(rotation[2], 0.0f, 0.0f, 1.0f);
The orientation always depends on the order of the angles; you can't make them independent. You rotate vectors by multiplying them by matrices, and matrix multiplication is not commutative. You can choose one order and be consistent with it.
For these problems, a common choice is the ZYX convention (first roll, then pitch, and finally yaw).
My personal reference when I work with angles is this document, which helps me a lot.
If you use yaw/pitch/roll, your final orientation will always depend on the amounts and the order in which you apply them. You can choose other schemes if you want readability or simplicity. I like choosing a forward vector (F), calculating right and up vectors from it and a canonical 'world up' vector, and then just filling in the matrix columns. You could add an extra 'axis spin' angle term if you like. It's a bit like a quaternion, but more human-readable. I use this representation for controlling a basic WASD-style camera.
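The forward-vector scheme described above can be written down directly (a sketch with invented names): given a unit forward vector and a canonical world up, the right vector is the normalized cross product of the two, the up vector is the cross product of right and forward, and the three become the columns of the orientation matrix, so no Euler-order coupling ever enters:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

struct Basis { Vec3 right, up, forward; };

// Build an orthonormal basis from a forward vector and a canonical
// world up. Degenerate when forward is (anti)parallel to worldUp;
// a real implementation must special-case that.
Basis fromForward(Vec3 forward) {
    const Vec3 worldUp{ 0.0f, 1.0f, 0.0f };
    Vec3 f = normalize(forward);
    Vec3 right = normalize(cross(f, worldUp));
    Vec3 up = cross(right, f);   // already unit: right is perpendicular to f
    return { right, up, f };
}
```

The three vectors go into the columns (or rows, for a view matrix) of the rotation matrix; the extra 'axis spin' (roll) mentioned above could be applied as a final rotation about f.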
Accumulating (yaw, pitch, roll) rotations requires keeping a transformation matrix that is the product of the separate transformations, in the order in which they occur. The resulting matrix is a rotation around some axis by some angle.