Can someone with OpenGL experience please suggest a strategy to help me solve an issue I'm having with rotations?
Imagine a set of world coordinate xyz axes bolted to the center of the universe; that is, for purposes of this
discussion they do not move. I'm also doing no translations, and the camera is fixed,
to keep things simple. I have a cube centered at the origin and the intent is
that pressing the 'x', 'y', and 'z' keys will increment
a variable representing the number of degrees to rotate the cube about the world xyz axes. Each key press is 90° (you can imagine rotating a lego brick
in such a way), so pressing the 'x' key increments a float property RotXdeg:
RotXdeg += 90.0f;
Likewise for the 'y' and 'z' keys.
A naive way to implement[1] this is:
Gl.glPushMatrix();
Gl.glRotatef(RotXdeg, 1.0f, 0.0f, 0.0f);
Gl.glRotatef(RotYdeg, 0.0f, 1.0f, 0.0f);
Gl.glRotatef(RotZdeg, 0.0f, 0.0f, 1.0f);
Gl.glPopMatrix();
This of course has the effect of rotating the cube, and its local xyz axes, so the desired rotations about the world xyz axes have not been achieved.
(For those not familiar with OpenGL, this can be demonstrated by simply rotating 90° about the x axis
— which causes the local y axis to be oriented along the world z axis —
then a subsequent 90° rotation about the y, which to the user appears to be a rotation
about the world z axis).
I believe this post is asking for something similar, but the answer is not clear, and my understanding is that quaternions are just one way to solve the problem.
It seems to me that
there should be a relatively straightforward solution, even if it is not
particularly efficient. I've spent hours trying various ideas, including creating my own rotation matrices and trying ways to multiply them with the modelview matrix, but to no
avail. (I realize matrix multiplication is not commutative, but I have a feeling that's not the problem.)
([1] By the way, I'm using the Tao OpenGL namespace; thanks to http://vasilydev.blogspot.com for the suggestion.)
Code is here
If the cube lies at (0,0,0), world and local rotations have the same effect. If the cube were in another position, a 90° rotation would result in a quarter-circle orbit around (0,0,0). It is unclear what you are failing to achieve, and I'd also advise against using the legacy fixed-function matrix calls for this. Nevertheless, a way to achieve a world-axis rotation that way is:
- translate to (0,0,0)
- rotate 90 degrees
- translate back
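For completeness, here is a minimal sketch of one way to do this (using GLM and the legacy matrix stack rather than the Tao bindings; the names modelRotation, onKeyX and drawCube are illustrative, not from the question). The key point is to keep an accumulated rotation matrix and pre-multiply each new 90° rotation onto it, so the rotation axis stays fixed in world space:
#include <GL/gl.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Accumulated model rotation, persisted between frames.
glm::mat4 modelRotation(1.0f);

// On an 'x' key press: pre-multiply a new 90-degree rotation about the world
// x axis. Pre-multiplying (new * old) keeps the axis fixed in world space;
// post-multiplying (old * new) would rotate about the cube's local axis.
void onKeyX()
{
    modelRotation = glm::rotate(glm::mat4(1.0f), glm::radians(90.0f),
                                glm::vec3(1.0f, 0.0f, 0.0f)) * modelRotation;
}

// When drawing with the fixed-function pipeline:
void drawCube()
{
    glPushMatrix();
    glMultMatrixf(glm::value_ptr(modelRotation));
    // ... emit the cube's geometry here ...
    glPopMatrix();
}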
Related
I am making a program which reads a texture that should be applied to the mesh and generates some shapes which should be displayed on its triangles. I am converting points so that the original shape appears to be lying on the XZ plane (using OpenGL's axis convention, so Y is vertical, Z goes towards the camera, X to the right). Now I have no idea how to properly measure the angle between the actual normal of a triangle and the vertical normal of the image (I mean (0, 1, 0)). I know it's probably basic, but my mind refuses to cooperate on 3D graphics tasks recently.
Currently I use
angles.x = glm::orientedAngle(glm::vec2(normalOfTriangle.z, normalOfTriangle.y), glm::vec2(1.0f, 0.0f));
angles.y = glm::orientedAngle(glm::vec2(normalOfTriangle.x, normalOfTriangle.z), glm::vec2(1.0f, 0.0f));
angles.z = glm::orientedAngle(glm::vec2(normalOfTriangle.x, normalOfTriangle.y), glm::vec2(1.0f, 0.0f));
angles = angles + glm::vec3(-glm::half_pi<float>(), 0.0f, glm::half_pi<float>());
Given my way of thinking, this should give proper results, but the faces of the cube that should have a normal parallel to the Z axis appear to be unrotated in Z.
My logic is that I measure the angle from each axis, and then rotate about each axis by that angle so the normal becomes vertical. But as I said, my mind glitches, and I cannot find the proper way to do it. Can somebody please help?
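No answer was given in the thread, but for illustration, a common alternative to extracting three per-axis angles is to build a single axis-angle rotation that carries the normal onto (0, 1, 0). The sketch below assumes the normal is already normalised; alignNormalToUp and the epsilon handling are illustrative names and choices, not from the question:
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/constants.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Returns a rotation that maps 'normalOfTriangle' onto the +Y axis.
glm::mat4 alignNormalToUp(const glm::vec3& normalOfTriangle)
{
    const glm::vec3 up(0.0f, 1.0f, 0.0f);
    float cosAngle = glm::clamp(glm::dot(normalOfTriangle, up), -1.0f, 1.0f);

    if (cosAngle > 1.0f - 1e-6f)      // already pointing up
        return glm::mat4(1.0f);
    if (cosAngle < -1.0f + 1e-6f)     // pointing straight down: half-turn about any horizontal axis
        return glm::rotate(glm::mat4(1.0f), glm::pi<float>(), glm::vec3(1.0f, 0.0f, 0.0f));

    // Rotate about the axis perpendicular to both vectors, by the angle between them.
    glm::vec3 axis = glm::normalize(glm::cross(normalOfTriangle, up));
    return glm::rotate(glm::mat4(1.0f), std::acos(cosAngle), axis);
}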
I have a question regarding a tutorial that I have been following on the rotation of the camera's view direction in OpenGL.
Whilst I appreciate that prospective respondents are not the authors of the tutorial, I think this is a situation most intermediate graphics programmers have encountered before, so I will seek the advice of members on this topic.
Here is a link to the video to which I refer: https://www.youtube.com/watch?v=7oNLw9Bct1k.
The goal of the tutorial is to create a basic first person camera which the user controls via movement of the mouse.
Here is the function that handles cursor movement (with some variables/members renamed to conform with my personal conventions):
glm::vec2 Movement { OldCursorPosition - NewCursorPosition };
// camera rotates on y-axis when mouse moved left/right (orientation is { 0.0F, 1.0F, 0.0F }):
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.x) / 2, MVP.view.orientation)
* glm::vec4(MVP.view.direction, 0.0F);
glm::vec3 RotateAround { glm::cross(MVP.view.direction, MVP.view.orientation) };
/* why is camera rotating around cross product of view direction and view orientation
rather than just rotating around x-axis when mouse is moved up/down..? : */
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.y) / 2, RotateAround)
* glm::vec4(MVP.view.direction, 0.0F);
OldCursorPosition = NewCursorPosition;
What I struggle to understand is why obtaining the cross product is even required. What I would naturally expect is for the camera to rotate around the y-axis when the mouse is moved from left to right, and for the camera to rotate around the x-axis when the mouse is moved up and down. I just can't get my head around why the cross product is even relevant.
From my understanding, the cross product will return a vector which is perpendicular to two other vectors; in this case that is the cross product of the view direction and view orientation, but why would one want a cross product of these two vectors? Shouldn't the camera just rotate on the x-axis for up/down movement and then on the y-axis for left/right movement...? What am I missing/overlooking here?
Finally, when I run the program, I can't visually detect any rotation on the z-axis, despite the fact that the rotation axis 'RotateAround' has a z-value greater than or less than 0 on every call to the function after the first (which suggests that the camera should rotate at least partially on the z-axis).
Perhaps this is just due to my lack of intuition, but if I change the line:
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.y) / 2, RotateAround)
* glm::vec4(MVP.view.direction, 0.0F);
To:
MVP.view.direction = glm::rotate(glm::mat4(1.0F), glm::radians(Movement.y) / 2, glm::vec3(1.0F, 0.0F, 0.0F))
* glm::vec4(MVP.view.direction, 0.0F);
So that the rotation only happens on the x-axis rather than partially on the x-axis and partially on the z-axis, and then run the program, I can't really notice much of a difference in the behaviour of the camera. It feels like maybe there is a difference, but I can't articulate what it is.
The problem here is frame of reference.
rather than just rotating around x-axis when mouse is moved up/down..?
What do you consider the x axis? If it's an axis of the global frame of reference, or one parallel to it, then yes. If it's the x axis of a frame of reference partially constrained by the camera's orientation, then in general the answer is no. It depends on the order in which the rotations are done and on whether the MVP gets saved between movements.
Given that in the code the MVP gets modified by the rotation, it changes between movements. If the camera were to turn 180 degrees around the x axis, the direction of its x axis would flip to the opposite one.
If the camera rotates around the y axis (I assume ISO directions for a ground vehicle), that direction changes as well. If the camera rotated around the global y by 90 degrees, then around the global x by 45 degrees, the result is a view tilted 45 degrees sideways.
The order of rotations in a constrained frame of reference for ground-bound vehicles (and possibly for a character in a classic 3D shooter) is: around y, around x, around z. For aerial vehicles with airplane-like controls it is around z, around x, around y. In orbital space z and x are swapped, if I remember right (z points down).
You have to do the cross product because after multiple mouse moves the camera is now oriented differently. The original x-axis you wanted to rotate around is NOT the same x-axis you want to rotate around now. You must calculate the vector that currently points straight out of the side of the camera and rotate around that. This is known as the "right" vector, and it is the cross product of the view and up vectors, where view is the "target" vector (down the camera's z-axis, where you are looking) and up points straight up out of the camera along its y-axis. These axes must be updated as the camera moves.
Calculating the view and up vectors themselves does not require a cross product, since you should be applying rotations to them as you go. The view and up vectors update through those rotations, but to rotate around the camera's x-axis (pitch) you must do a cross product.
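As a hedged sketch of the idea in this answer (the names direction, up and pitchCamera are illustrative, not from the tutorial):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 direction(0.0f, 0.0f, -1.0f);   // where the camera looks
glm::vec3 up(0.0f, 1.0f, 0.0f);           // the camera's up vector

void pitchCamera(float pitchRadians)
{
    // The camera's current "right" axis: perpendicular to both the view
    // direction and the up vector. This is the axis to pitch around,
    // no matter how the camera has been yawed so far.
    glm::vec3 right = glm::normalize(glm::cross(direction, up));

    glm::mat4 rot = glm::rotate(glm::mat4(1.0f), pitchRadians, right);
    direction = glm::vec3(rot * glm::vec4(direction, 0.0f));
    up        = glm::vec3(rot * glm::vec4(up, 0.0f));
}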
I understand that the camera in OpenGL is defined to be looking in the negative Z direction. So in a simple example, I imagine that for my vertices to be rendered, they must be defined similar to the following:
float rawverts[] = {
    0.0f, 0.0f, -1.0f,
    0.0f, 0.5f, -1.0f,
    0.5f, 0.0f, -1.0f,
};
However, absolutely no guide will tell me the answer. Everywhere I look, the "Hello triangle" example is made with the z coordinate left at 0, and whenever a more complex mesh is defined the coordinates are not even shown. I still have no idea regarding the possible values of the coordinates for them to be drawn onto the screen. Take for example, glm::perspective:
glm::mat4 projectionMatrix = glm::perspective(
FoV, // The vertical Field of View, in degrees: the amount of "zoom". Think "camera lens". Usually between 90° (extra wide) and 30° (quite zoomed in)
4.0f / 3.0f, // Aspect Ratio. Depends on the size of your window. Notice that 4/3 == 800/600 == 1280/960, sounds familiar ?
0.1f, // Near clipping plane. Keep as big as possible, or you'll get precision issues.
100.0f // Far clipping plane. Keep as little as possible.
);
But how can the clipping planes be defined with any positive values? The camera faces the -Z direction! Furthermore, if I create near/far clipping planes at, say, -1 and -4, does this now invalidate any Z coordinate that is more than -1 or less than -4? Or are the raw z coordinates only ever valid between 0 and -1 (again, surely z coordinates categorically cannot be positive?)..?
But let's assume that what actually happens is that OpenGL (or glm) takes the clipping plane values and secretly negates them. So my -1 to -4 becomes 1 to 4. Does this now invalidate any Z coordinate that is less than 1 or more than 4, being the reason why 0.0f, 0.0f, -1.0f won't be drawn on the screen?
At this stage, I would treat an answer as simply a pointer to a book or some material that has information on this matter.
No, points/vertices can have a positive z coordinate, but you won't see them unless the camera is moved back.
This article talks about that about a third of the way through.
Your problem is that you don't understand the coordinate systems and transformations.
First off, there are window coordinates. That is the pixel grid in your window, pure and simple. There is no z-axis.
Next is NDC. Google it. It is a cube from -1 to 1 on the xyz axes. If you load both the modelview and projection matrices with identity, this is the space you render in. By specifying the viewport you transform from NDC to window coordinates. Vertices outside the cube are clipped.
What you do with the projection and modelview matrices is create a transformation onto the NDC cube, making it cover your objects. When moving the camera, you alter that transform. The transform can take a vertex from any location into the NDC cube, including vertices with negative z coordinates.
That is the short version of how things work. The long version is too long to enter here. For more information please ask specific questions or, better yet, read some literature on the subject.
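To make the near/far point concrete, here is a small self-contained check (not from the answer; the values are arbitrary) showing that glm::perspective treats near and far as positive distances in front of a camera looking down -Z in view space, and that a vertex at z = -1 lands inside the NDC cube:
#include <iostream>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main()
{
    // near = 0.1, far = 100: positive distances along the viewing direction (-Z).
    glm::mat4 projection = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);

    // A vertex one unit in front of the camera in view space.
    glm::vec4 clip = projection * glm::vec4(0.0f, 0.0f, -1.0f, 1.0f);
    glm::vec3 ndc  = glm::vec3(clip) / clip.w;   // perspective divide

    // ndc.z falls within [-1, 1], so the vertex survives clipping.
    std::cout << ndc.x << ' ' << ndc.y << ' ' << ndc.z << '\n';
    return 0;
}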
So at the moment I am working on a game for my coursework, which is based around the idea of flying a rocket. I spent so much time thinking about the physics behind it that I completely ignored getting it to move properly.
For example, if I draw a cone with the top pointing to the sky and rotate it on the X axis, it rotates properly; however, if I then translate it on the Y axis, it moves along the global Y axis instead of its local coordinate system, which would have the Y axis pointing out of the cone's top.
My question is: does OpenGL have a local coordinate system, or would I have to somehow make my own transformation matrices, and if so, how would I go about doing that?
The way I am doing the transformation and rotation is as follows:
glPushMatrix();
glTranslatef(llmX, llmY + acceleration, llmZ);
glRotatef(rotX, 1.0f, 0.0f, 0.0f);
glRotatef(rotY, 0.0f, 0.0f, 1.0f);
drawRocket();
glPopMatrix();
Here is a picture that hopefully offers a better explanation of what I mean.
EDIT: I find it really weird that the rotations seem to work one after the other, as in: if I rotate it on the X axis once and then proceed to rotate it on the Z axis, it rotates about the already-rotated X axis instead of the world X axis.
Hoping somebody could help me with understanding this, really need to get it working for my project.
Thank you.
If you are using a translation matrix for moving up (i.e. moving in the positive Y direction), then no matter where you are on the matrix stack or in the transformation process, you are going to move the vertices in the positive Y direction.
If you instead want it to move in the rotated direction, I suggest translating along the Y axis first and then rotating to your desired angle. Essentially, apply the matrices in the opposite order, as in the sketch below.
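A minimal sketch of that suggestion, reusing the variable names from the question (speculative, since the rest of the code isn't shown): specify the rotations before the translation, so the translation is carried out in the rocket's rotated frame rather than in world coordinates.
glPushMatrix();
glRotatef(rotX, 1.0f, 0.0f, 0.0f);
glRotatef(rotY, 0.0f, 0.0f, 1.0f);
glTranslatef(llmX, llmY + acceleration, llmZ);   // now moves along the rocket's local Y
drawRocket();
glPopMatrix();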
I'm trying to do a simple rotation in OpenGL but must be missing the point.
I'm not looking for a specific fix so much as a quick explanation or link that explains OpenGL rotation more generally.
At the moment I have code like this:
glPushMatrix();
glRotatef(90.0, 0.0, 1.0, 0.0);
glBegin(GL_TRIANGLES);
glVertex3f( 1.0, 1.0, 0.0 );
glVertex3f( 3.0, 2.0, 0.0 );
glVertex3f( 3.0, 1.0, 0.0 );
glEnd();
glPopMatrix();
But the result is not a triangle rotated 90 degrees.
Edit
Hmm, thanks to Mike Haboustak - it appeared my code was calling a SetCamera function that used glOrtho. I'm too new to OpenGL to have any idea of what this meant, but disabling it and rotating about the Z-axis produced the desired result.
Ensure that you're modifying the modelview matrix by putting the following before the glRotatef call:
glMatrixMode(GL_MODELVIEW);
Otherwise, you may be modifying either the projection or a texture matrix instead.
Do you get a 1-unit straight line? It seems that a 90° rotation around Y is going to have you looking edge-on at the triangle, which has no depth.
You should try rotating around the Z axis instead and see if you get something that makes more sense.
OpenGL has two matrices related to the display of geometry, the ModelView and the Projection. Both are applied to coordinates before the data becomes visible on the screen. First the ModelView matrix is applied, transforming the data from model space into view space. Then the Projection matrix is applied, which transforms the data from view space for "projection" onto your 2D monitor.
ModelView is used to position multiple objects to their locations in the "world", Projection is used to position the objects onto the screen.
Your code seems fine, so I assume from reading the documentation you know what the nature of functions like glPushMatrix() is. If rotating around Z still doesn't make sense, verify that you're editing the ModelView matrix by calling glMatrixMode.
The "accepted answer" is not fully correct - rotating around the Z will not help you see this triangle unless you've done some strange things prior to this code. Removing a glOrtho(...) call might have corrected the problem in this case, but you still have a couple of other issues.
Two major problems with the code as written:
Have you positioned the camera previously? In OpenGL, the camera is located at the origin, looking down the Z axis, with positive Y as up. In this case, the triangle is being drawn in the same plane as your eye, but up and to the right. Unless you have a very strange projection matrix, you won't see it. gluLookAt() is the easiest command to do this, but any command that moves the current matrix (which should be MODELVIEW) can be made to work.
You are drawing the triangle in a left handed, or clockwise method, whereas the default for OpenGL is a right handed, or counterclockwise coordinate system. This means that, if you are culling backfaces (which you are probably not, but will likely move onto as you get more advanced), you would not see the triangle as expected. To see the problem, put your right hand in front of your face and, imagining it is in the X-Y plane, move your fingers in the order you draw the vertices (1,1) to (3,2) to (3,1). When you do this, your thumb is facing away from your face, meaning you are looking at the back side of the triangle. You need to get into the habit of drawing faces in a right handed method, since that is the common way it is done in OpenGL.
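For illustration (not the asker's full program; the eye and centre positions are arbitrary example values), the two fixes together might look like this: pull the camera back with gluLookAt so the triangle sits in front of it, and list the vertices counterclockwise as seen from the camera.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(2.0, 1.5, 10.0,   // eye: pulled back along +Z
          2.0, 1.5, 0.0,    // centre: roughly the middle of the triangle
          0.0, 1.0, 0.0);   // up

glBegin(GL_TRIANGLES);      // counterclockwise as seen from the camera
glVertex3f(1.0f, 1.0f, 0.0f);
glVertex3f(3.0f, 1.0f, 0.0f);
glVertex3f(3.0f, 2.0f, 0.0f);
glEnd();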
The best thing I can recommend is to use the NeHe tutorials - http://nehe.gamedev.net/. They begin by showing you how to set up OpenGL in several systems, move onto drawing triangles, and continue slowly and surely to more advanced topics. They are very easy to follow.
Regarding Projection matrix, you can find a good source to start with here:
http://msdn.microsoft.com/en-us/library/bb147302(VS.85).aspx
It explains a bit about how to construct one type of projection matrix. Orthographic projection is the most basic/primitive form of such a matrix: essentially it takes 2 of the 3 axis coordinates and projects them onto the screen (you can still flip axes and scale them, but there is no warping or perspective effect).
Matrix transformation is probably one of the most important things when rendering in 3D and basically involves 3 stages:
Transform1 = Object coordinate system to World (for example, object rotation and scale)
Transform2 = World coordinate system to Camera (placing the object in the right place)
Transform3 = Camera coordinate system to Screen space (projecting to the screen)
Usually the result of multiplying the 3 matrices is referred to as the WorldViewProjection matrix (if you ever bump into this term), since it transforms coordinates from model space through World, then to Camera, and finally to the screen representation.
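For illustration only (names and values are placeholders, and this uses GLM with column vectors rather than the fixed-function pipeline), the three stages compose like this:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 model      = glm::rotate(glm::mat4(1.0f), glm::radians(45.0f),
                                   glm::vec3(0.0f, 1.0f, 0.0f));        // object -> world
glm::mat4 view       = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),         // world -> camera
                                   glm::vec3(0.0f, 0.0f, 0.0f),
                                   glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 projection = glm::perspective(glm::radians(60.0f),            // camera -> screen
                                        4.0f / 3.0f, 0.1f, 100.0f);

// Note the right-to-left order: the model transform is applied to a vertex first.
glm::mat4 worldViewProjection = projection * view * model;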
Have fun