I am going through a series of NeHe OpenGL tutorials. Tutorial #9 does some fancy stuff; I understood everything except for two things, which I think are the backbone of the whole tutorial.
In the DrawGLScene function, I didn't understand the following line.
glRotatef(tilt,1.0f,0.0f,0.0f); // Tilt The View (Using The Value In 'tilt')
I understand what that line does, and it is also very clearly explained in the tutorial. But I don't understand why he wants to tilt the screen.
The other thing is: first he tilts the screen, then rotates it by the star's angle, and immediately after that he does the reverse of both. What is that technique? Why is the tilt needed at all, and why not just rotate the star while it faces the user?
glRotatef(star[loop].angle,0.0f,1.0f,0.0f); // Rotate To The Current Stars Angle
glTranslatef(star[loop].dist,0.0f,0.0f); // Move Forward On The X Plane
glRotatef(-star[loop].angle,0.0f,1.0f,0.0f); // Cancel The Current Stars Angle
glRotatef(-tilt,1.0f,0.0f,0.0f); // Cancel The Screen Tilt
I will be really thankful if somebody tells me the mechanism going on under the hood.
I don't understand why he wants to tilt the screen.
Tilting makes you see the stars at another angle, not just from "right above" them.
The other thing is: first he tilts the screen, then rotates it by the star's angle, and immediately after that he does the reverse of both. What is that technique?
That is because he wants to rotate the star around the selected axis (in this case the Y axis), but (!) he also wants the textured quad to face the viewer. Let us say he rotates it 90 degrees: if so, you would only see (as he states in the tutorial) a "thick" line.
Consider these comments:
// Rotate the current drawing by the specified angle about the Y axis,
// so the following translation swings the star around the center.
glRotatef(star[loop].angle, 0.0f, 1.0f, 0.0f);
// Rotating around the star's own origin has no visible effect by itself,
// especially since the star is a flat 2D quad. The glRotatef() call
// does rotate the star, but it does not move it "around" on the screen.
// Therefore, translate by the star's distance to move it out from the center.
glTranslatef(star[loop].dist, 0.0f, 0.0f);
// We've now moved the star out from the center by the star's distance.
// But with the first glRotatef() call still in effect, the 2D star is
// no longer facing the viewer. Therefore, turn the star back towards
// the screen using the negative of the same angle.
glRotatef(-star[loop].angle, 0.0f, 1.0f, 0.0f);
// Cancel the tilt on the X axis.
glRotatef(-tilt, 1.0f, 0.0f, 0.0f);
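Putting the pieces together, here is a minimal sketch of the whole per-star transform, assuming the tutorial's tilt, zoom, and star[] variables; drawStarQuad() is a hypothetical stand-in for NeHe's textured-quad drawing code:
glLoadIdentity();
glTranslatef(0.0f, 0.0f, zoom);                 // move into the screen (tutorial's 'zoom')
glRotatef(tilt, 1.0f, 0.0f, 0.0f);              // tilt the view
glRotatef(star[loop].angle, 0.0f, 1.0f, 0.0f);  // spin to the star's orbit angle
glTranslatef(star[loop].dist, 0.0f, 0.0f);      // move out to the star's distance
glRotatef(-star[loop].angle, 0.0f, 1.0f, 0.0f); // undo the orbit rotation...
glRotatef(-tilt, 1.0f, 0.0f, 0.0f);             // ...and the tilt
drawStarQuad();                                 // hypothetical: draw the textured quad
The two negative rotations leave the star's position untouched (the translation has already happened) but restore its orientation, so the quad is axis-aligned again and faces the viewer: a simple form of billboarding.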
After searching many pages, the glm documentation, tutorials, etc., I'm still confused about some things.
I'm trying to understand why I need to apply the following transformations to get my 800x600 image (a fullscreen quad; assume the user's screen is 800x600 for this minimal example) to draw over everything. Assume I'm only drawing CCW triangles. Everything renders fine in my code, but I have to do the following:
// Vertex data (x/y/z), using EBOs
0.0f, 600.0f, 1.0f,
800.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
800.0f, 600.0f, 1.0f
// Later on...
glm::mat4 m, v, p;
m = scale(m, glm::vec3(-1.0, 1.0, 1.0));
v = rotate(v, glm::radians(180.0f), glm::vec3(0.0f, 1.0f, 0.0f));
p = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, 0.5f, 1.5f);
(Note that just because I used the variable names m, v, and p doesn't mean they're actually the proper transformations for those names; the above just does what I want it to.)
I'm confused on the following:
Where are the orthographic bounds? I assume the projection looks down the negative z-axis, but where do the left/right bounds come in? Does that mean [-400, 400] on the x-axis maps to [-1.0, 1.0] NDC, or that [0, 800] maps to it? (I assume whatever the answer is here applies to the y-axis as well.) The documentation just says "Creates a matrix for an orthographic parallel viewing volume."
What happens if you flip the third and fourth arguments? (I ask because I see people doing this and I don't know if it's a mistake/typo, works by a fluke, or properly works regardless.) That is, the 600.0f/0.0f pair here:
p1 = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, 0.5f, 1.5f);
p2 = glm::ortho(0.0f, 800.0f, 0.0f, 600.0f, 0.5f, 1.5f);
Now I assume this third question will be answered by the above two, but I'm trying to figure out if this is why my first piece of code requires flipping everything on the x-axis to work. I'll admit I was just messing around with it and it happened to work. I figure I need a 180-degree rotation to turn my plane around so it's on the -z side; that just leaves me with figuring out the (-1.0, 1.0, 1.0) scaling.
The code provided in this example (minus the variable names) is the only stuff I use, and the rendering works perfectly; it's just my lack of understanding of why it works that I'm unhappy with.
EDIT: I was trying to understand it from here, using the images and descriptions on that site as a single reference example. I may have missed the point.
EDIT2: As a random question, since I always draw my plane at z = 1.0, should I restrict my orthographic projection near/far plane to be as close to 1.0 as possible (ex: 0.99, 1.01) for any reason? Assume nothing else is drawn or will be drawn.
You can think of the visible area of an orthographic projection as a box given in view space. This box is then mapped to the [-1, 1] cube in NDC, such that everything inside the box is visible and everything outside is clipped away. Generally, the viewer looks along the negative Z axis, with +X pointing right and +Y pointing up.
How are the orthographic bounds mapped to NDC space?
The side lengths of the box are given by the parameters passed to glm::ortho. In the first example, the left and right parameters are 0 and 800, so the range from 0 to 800 along the X axis is mapped to [-1, 1] along the NDC X axis. The same logic applies along the other two axes (bottom/top along Y, near/far along -Z).
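One way to convince yourself of the mapping is to push a corner point through the matrix from the question; a small sketch, using the question's glm::ortho values:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    glm::mat4 p = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, 0.5f, 1.5f);
    // A corner of the 800x600 area, halfway between near and far.
    // Note the view-space z of -1.0: the [0.5, 1.5] planes lie along -Z.
    glm::vec4 ndc = p * glm::vec4(800.0f, 600.0f, -1.0f, 1.0f);
    std::printf("%f %f %f\n", ndc.x, ndc.y, ndc.z); // prints 1 -1 0
}
The Y result is -1 rather than +1 because the question's call passes 600 as bottom and 0 as top, flipping the Y axis into a screen-space orientation.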
What happens when the top and bottom parameters are exchanged?
Interchanging, for example, top and bottom is equivalent to mirroring the scene along that axis. The second diagonal element of an orthographic matrix is defined as 2 / (top - bottom); exchanging top and bottom only flips the sign of this element (and of the matching translation component). The same holds for exchanging left with right or near with far. This is sometimes done when the screen-space origin should be the lower-left corner instead of the upper-left.
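For example, the two matrices from the question differ only in those signs (a minimal sketch; glm stores matrices column-major, so m[1][1] is the Y scale and m[3][1] the Y translation):
glm::mat4 p1 = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, 0.5f, 1.5f);
glm::mat4 p2 = glm::ortho(0.0f, 800.0f, 0.0f, 600.0f, 0.5f, 1.5f);
// p1[1][1] == 2/(top-bottom) == -1/300,  p2[1][1] == +1/300
// p1[3][1] == +1,                        p2[3][1] == -1
// Everything else is identical, so p2 renders the scene of p1 mirrored vertically.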
Why do you have to rotate the quad by 180° and mirror it?
As described above, the near and far values are distances along the negative Z axis: values of [0.5, 1.5] mean z coordinates of [-0.5, -1.5] in view space. Since the plane is defined at z = 1.0, it is outside the visible area. Rotating it around the origin by 180 degrees moves it to z = -1.0, but now you are looking at it from the back, which means back-face culling strikes. Mirroring it along X flips the winding order, so front and back faces are swapped again.
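To see the two effects combine, one can apply the question's mirror-then-rotate pair to a vertex; a sketch under the same assumptions (and the same glm headers as above):
glm::mat4 m(1.0f), v(1.0f);
m = glm::scale(m, glm::vec3(-1.0f, 1.0f, 1.0f));                       // mirror along X
v = glm::rotate(v, glm::radians(180.0f), glm::vec3(0.0f, 1.0f, 0.0f)); // turn the plane around
// A vertex at (800, 600, 1) maps to (-800, 600, 1) after the mirror and
// then to (800, 600, -1) after the rotation: same X/Y as before, but now
// on the visible -Z side, and the mirror has restored the CCW winding.
glm::vec4 out = v * m * glm::vec4(800.0f, 600.0f, 1.0f, 1.0f);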
Since I always draw my plane at Z = 1.0, should I restrict my orthographic projection near/far plane to be as close to 1.0 as possible?
As long as you don't draw anything else, you can basically choose whatever you want. When multiple objects are drawn, the range between near and far determines how precisely differences in depth can be stored.
I understand that the camera in OpenGL is defined to be looking in the negative Z direction. So in a simple example, I imagine that for my vertices to be rendered, they must be defined similar to the following:
rawverts = {
0.0f, 0.0f, -1.0f,
0.0f, 0.5f, -1.0f,
0.5f, 0.0f, -1.0f,
};
However, absolutely no guide tells me the answer. Everywhere I look, the "Hello triangle" example is made with the z coordinate left at 0, and whenever a more complex mesh is defined the coordinates are not even shown. I still have no idea what values the coordinates may take for them to be drawn onto the screen. Take, for example, glm::perspective:
glm::mat4 projectionMatrix = glm::perspective(
glm::radians(FoV), // The vertical field of view: the amount of "zoom". Think "camera lens". Usually between 90° (extra wide) and 30° (quite zoomed in). Recent GLM versions expect radians, hence glm::radians().
4.0f / 3.0f, // Aspect ratio. Depends on the size of your window. Notice that 4/3 == 800/600 == 1280/960, sounds familiar?
0.1f, // Near clipping plane. Keep as big as possible, or you'll get precision issues.
100.0f // Far clipping plane. Keep as little as possible.
);
But how can the clipping planes be defined with positive values? The camera faces the -Z direction! Furthermore, if I create near/far clipping planes at, say, -1 and -4, does this invalidate any Z coordinate greater than -1 or less than -4? Or are raw z coordinates only ever valid between 0 and -1 (again, surely z coordinates categorically cannot be positive)?
But let's assume that what actually happens is that OpenGL (or glm) takes the clipping plane values and secretly negates them, so my -1 to -4 becomes 1 to 4. Does this invalidate any Z coordinate less than 1 or greater than 4, and is that the reason why 0.0f, 0.0f, -1.0f won't be drawn on the screen?
At this stage, I would treat an answer as simply a pointer to a book or some other material with information on this matter.
No, points/vertices can have a positive z coordinate, but you won't see them unless the camera is moved back.
This article talks about that about a third of the way through.
Your problem is that you don't understand the coordinate systems and transformations.
First off, there are window coordinates. This is the pixel grid in your window, pure and simple; there is no z-axis.
Next is NDC (normalized device coordinates). Google it. It is a cube from -1 to 1 on the x, y, and z axes. If you load both the modelview and projection matrices with identity, this is the space you render in. By specifying the viewport you transform from NDC to window coordinates; vertices outside the cube are clipped away.
What you do with the projection and modelview matrices is create a transformation onto the NDC cube, making it cover your objects. When you move the camera, you alter that transform. The transform can take a vertex from any location into the NDC cube, including locations with negative z coordinates.
That is the short version of how things work. The long version is too long to enter here. For more information, please ask specific questions or, better yet, read some literature on the subject.
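To make the near/far question from above concrete, here is a small sketch using glm with the question's 0.1/100 clipping planes (and GLM's matrix_transform header); it shows that a point one unit down -Z survives clipping while its mirror image on +Z does not:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    glm::mat4 p = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);

    glm::vec4 inFront = p * glm::vec4(0.0f, 0.0f, -1.0f, 1.0f); // 1 unit along -Z
    glm::vec4 behind  = p * glm::vec4(0.0f, 0.0f,  1.0f, 1.0f); // 1 unit along +Z

    // A visible point ends up with NDC z in [-1, 1] after the divide by w.
    std::printf("in front: ndc z = %f, clip w = %f\n", inFront.z / inFront.w, inFront.w);
    std::printf("behind:   ndc z = %f, clip w = %f\n", behind.z / behind.w, behind.w);
    // 'behind' gets a negative clip-space w and is clipped away: the positive
    // near/far arguments are distances along the viewing direction (-Z),
    // not raw z coordinates.
}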
At the moment I am working on a game for my coursework, based around the idea of flying a rocket. I spent so much time thinking about the physics behind it that I completely ignored getting it to move properly.
For example, when I draw a cone with the tip pointing to the sky and rotate it on the X axis, it rotates properly; however, if I then translate it on the Y axis, it moves along the global Y axis instead of along its local coordinate system, whose Y axis points out of the cone's tip.
My question is: does OpenGL have a local coordinate system, or would I have to somehow make my own transformation matrices? And if so, how would I go about doing that?
The way I am doing the translation and rotation is as follows:
glPushMatrix();
glTranslatef(llmX, llmY + acceleration, llmZ); // translate first (in the world frame)
glRotatef(rotX, 1.0f, 0.0f, 0.0f);             // then rotate about the now-local X axis
glRotatef(rotY, 0.0f, 0.0f, 1.0f);             // and the now-local Z axis
drawRocket();
glPopMatrix();
Here is a picture which hopefully explains better what I mean.
EDIT: I find it really weird that the rotations compose one after the other: if I rotate on the X axis and then rotate on the Z axis, the second rotation happens about the already-rotated axes instead of the world axes.
Hoping somebody could help me with understanding this, really need to get it working for my project.
Thank you.
If you apply a translation matrix that moves up (i.e., in the positive Y direction), then no matter where you are on the matrix stack or in the transformation process, you are going to move the vertices in the positive Y direction of the current coordinate system.
If you instead want the rocket to move in the rotated direction, issue the rotations first and then the translation along the Y axis; in other words, apply the transforms in the opposite order. Since each call transforms the current (local) coordinate system, the translation is then expressed in the rocket's rotated frame, as sketched below.
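In the question's own terms, a minimal sketch of that reversed order (same variable names as the question; the acceleration offset now rides along the rocket's local Y axis):
glPushMatrix();
glTranslatef(llmX, llmY, llmZ);         // place the rocket in the world
glRotatef(rotX, 1.0f, 0.0f, 0.0f);      // orient it
glRotatef(rotY, 0.0f, 0.0f, 1.0f);
glTranslatef(0.0f, acceleration, 0.0f); // move along the rocket's own Y axis
drawRocket();
glPopMatrix();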
Can someone with OpenGL experience please suggest a strategy to help me solve an issue I'm having with rotations?
Imagine a set of world-coordinate xyz axes bolted to the center of the universe; that is, for purposes of this discussion they do not move. I'm also doing no translations, and the camera is fixed, to keep things simple. I have a cube centered at the origin, and the intent is that pressing the 'x', 'y', and 'z' keys will increment a variable representing the number of degrees to rotate the cube about the corresponding world axis. Each key press is 90° (imagine rotating a Lego brick that way), so pressing the 'x' key increments a float property RotXdeg:
RotXdeg += 90.0f;
Likewise for the 'y' and 'z' keys.
A naive way to implement[1] this is:
Gl.glPushMatrix();
Gl.glRotatef(RotXdeg, 1.0f, 0.0f, 0.0f);
Gl.glRotatef(RotYdeg, 0.0f, 1.0f, 0.0f);
Gl.glRotatef(RotZdeg, 0.0f, 0.0f, 1.0f);
// ... draw the cube here, before popping the matrix ...
Gl.glPopMatrix();
This of course has the effect of rotating the cube and its local xyz axes, so the desired rotations about the world xyz axes are not achieved.
(For those not familiar with OpenGL, this can be demonstrated by simply rotating 90° about the x axis, which causes the local y axis to be oriented along the world z axis, and then making a subsequent 90° rotation about y, which to the user appears to be a rotation about the world z axis.)
I believe this post is asking for something similar, but the answer is not clear, and my understanding is that quaternions are just one way to solve the problem.
It seems to me that there should be a relatively straightforward solution, even if it is not particularly efficient. I've spent hours trying various ideas, including creating my own rotation matrices and trying ways to multiply them with the modelview matrix, but to no avail. (I realize matrix multiplication is not commutative, but I have a feeling that's not the problem.)
([1] By the way, I'm using the Tao OpenGl namespace; thanks to http://vasilydev.blogspot.com for the suggestion.)
Code is here
If the cube lies at (0,0,0) in the world, world and local rotations have the same effect. If the cube were at another position, a 90° world-axis rotation would result in a quarter-circle orbit around (0,0,0). It is unclear what you are failing to achieve, and I'd also advise against using the old immediate mode for matrix operations. Nevertheless, a way to achieve a world rotation that way (sketched after the list) is:
- translate to (0,0,0)
- rotate 90 degrees
- translate back
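A minimal sketch of that recipe in the same Tao style, assuming the cube's center is at a hypothetical (cx, cy, cz); remember that OpenGL post-multiplies, so the calls appear in the reverse of the order in which they act on the vertices:
Gl.glPushMatrix();
Gl.glTranslatef(cx, cy, cz);           // 3. move the cube back to its position
Gl.glRotatef(90.0f, 1.0f, 0.0f, 0.0f); // 2. rotate about the world X axis
Gl.glTranslatef(-cx, -cy, -cz);        // 1. move the cube's center to the origin
// ... draw the cube here ...
Gl.glPopMatrix();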
I'm trying to rotate a cube around an axis, and what I'm doing is:
glTranslatef(0.0f, 0.0f, -60.0f);
glRotatef(angle, 0.0f, 1.0f, 0.0f);
I'm expecting it to move to -60 and then circle around the y axis, but instead it just spins around itself at the -60 coordinate. When I write it like this:
glRotatef(angle, 0.0f, 1.0f, 0.0f);
glTranslatef(0.0f, 0.0f, -60.0f);
I get what I need, but I don't understand why. Why do the two orders do the opposite? Can someone please explain?
When you apply a transform it is applied locally. Think of it as a coordinate system that you are moving around. You start with the coordinate system representing your view, and then you transform that coordinate system relative to itself. So in the first case, you are translating the coordinate system -60 along the Z axis of the coordinate system, and then you are rotating the coordinate system around the new Y axis at the new origin. Anything you draw is then drawn in that new coordinate system.
This actually provides a simpler way to think about transformations once you are used to it. You don't have to keep two separate coordinate systems in mind: one for the coordinate system that the transforms are applied in and one for the coordinate system that the geometry is drawn in.
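A minimal side-by-side sketch of the two orders from the question (drawCube() is a hypothetical draw call; glLoadIdentity() resets the matrix between them):
// Spin in place at z = -60: the rotation turns the already-translated system.
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -60.0f);
glRotatef(angle, 0.0f, 1.0f, 0.0f);
drawCube();

// Orbit around the viewer's origin: the -60 translation is applied along
// the already-rotated Z axis, so the cube traces a circle as angle changes.
glLoadIdentity();
glRotatef(angle, 0.0f, 1.0f, 0.0f);
glTranslatef(0.0f, 0.0f, -60.0f);
drawCube();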