After searching many pages, the glm documentation, tutorials, etc., I'm still confused about some things.
I'm trying to understand why I need to apply the following transformations to get my 800x600 image (a fullscreen quad; assume the user's screen is 800x600 for this minimal example) to draw over everything. Assume I'm only drawing CCW triangles. Everything renders fine in my code, but I have to do the following:
// Vertex data (x/y/z), using EBOs
0.0f, 600.0f, 1.0f,
800.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
800.0f, 600.0f, 1.0f
// Later on...
glm::mat4 m(1.0f), v(1.0f), p(1.0f); // start from identity explicitly (modern GLM no longer default-initializes)
m = scale(m, glm::vec3(-1.0, 1.0, 1.0));
v = rotate(v, glm::radians(180.0f), glm::vec3(0.0f, 1.0f, 0.0f));
p = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, 0.5f, 1.5f);
(Note: just because I used the variable names m, v, and p doesn't mean each one is actually the transformation its name suggests; the above simply does what I want it to.)
I'm confused on the following:
Where are the orthographic bounds? I assume the volume points down the negative z-axis, but where do the left/right bounds come in? Does [-400, 400] on the x-axis map to [-1.0, 1.0] in NDC, or does [0, 800] map to it? (I assume whatever the answer is here also applies to the y-axis.) The documentation just says "Creates a matrix for an orthographic parallel viewing volume."
What happens if you flip the following third and fourth arguments (I ask because I see people doing this and I don't know if it's a mistake/typo or it works by a fluke... or if it properly works regardless):
Arguments three and four here (the bottom/top pair):
p1 = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, 0.5f, 1.5f);
p2 = glm::ortho(0.0f, 800.0f, 0.0f, 600.0f, 0.5f, 1.5f);
Now, I assume this third question will be answered by the above two, but I'm trying to figure out whether this is why my first piece of code requires flipping everything on the x-axis to work... which, I'll admit, I found by just messing around until it happened to work. I figure I need a 180-degree rotation to turn my plane around so it's on the -z side, however... so that just leaves figuring out the (-1.0, 1.0, 1.0) scaling.
The code provided in this example (minus the variable names) is the only stuff I use and the rendering works perfectly... it's just my lack of knowledge as to why it works that I'm unhappy with.
EDIT: Was trying to understand it from here by using the images and descriptions on the site as a single example of reference. I may have missed the point.
EDIT2: As a random question, since I always draw my plane at z = 1.0, should I restrict my orthographic projection near/far plane to be as close to 1.0 as possible (ex: 0.99, 1.01) for any reason? Assume nothing else is drawn or will be drawn.
You can think of the visible area of an orthographic projection as a box given in view space. This box is then mapped to the [-1, 1] cube in NDC, such that everything inside it is visible and everything outside is clipped away. By convention, the viewer looks along the negative Z-axis, while +X is right and +Y is up.
How are the orthographic bounds mapped to NDC space?
The side lengths of the box are given by the parameters passed to glm::ortho (the same parameters as the classic glOrtho). In the first example, left and right are 0 and 800, so the span from 0 to 800 along the X axis is mapped to [-1, 1] along the NDC X axis. The same logic applies along the other two axes (bottom/top along Y, near/far along -Z).
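A quick, self-contained check of that mapping (a sketch; since w stays 1 under an orthographic projection, no perspective divide is needed):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
glm::mat4 p = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, 0.5f, 1.5f);
// A view-space point on each edge, at a depth inside the [0.5, 1.5] range:
glm::vec4 leftEdge  = p * glm::vec4(  0.0f, 0.0f, -1.0f, 1.0f); // NDC x == -1
glm::vec4 rightEdge = p * glm::vec4(800.0f, 0.0f, -1.0f, 1.0f); // NDC x == +1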
What happens when the top and bottom parameters are exchanged?
Interchanging, for example, top and bottom is equivalent to mirroring the scene along that axis. The second diagonal element of an orthographic matrix is defined as 2 / (top - bottom), so exchanging top and bottom only flips the sign of that element. The same holds for exchanging left with right, or near with far. This is sometimes done deliberately when the screen-space origin should be the lower-left corner instead of the upper-left.
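You can see the sign flip directly in the two matrices from the question (GLM stores columns first, so p[1][1] is the Y scale):
glm::mat4 p1 = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, 0.5f, 1.5f);
glm::mat4 p2 = glm::ortho(0.0f, 800.0f, 0.0f, 600.0f, 0.5f, 1.5f);
// p1[1][1] == -2.0f / 600.0f  -> Y flipped: origin effectively at the top-left
// p2[1][1] ==  2.0f / 600.0f  -> Y unflipped: origin effectively at the bottom-left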
Why do you have to rotate the quad by 180° and mirror it?
As described above, the near and far values are measured along the negative Z-axis: a range of [0.5, 1.5] along -Z corresponds to [-0.5, -1.5] in view-space coordinates. Since the plane is defined at z = 1.0, it lies outside the visible area. Rotating it around the origin by 180 degrees moves it to z = -1.0, but now you are looking at it from the back, which means back-face culling strikes. Mirroring it along X changes the winding order, so the back and front faces are swapped.
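For comparison, a hedged sketch of an equivalent setup that avoids the rotation and mirroring entirely (assuming the same flipped-Y ortho and default CCW front faces): define the quad directly at z = -1.0, which already lies inside the [0.5, 1.5] near/far range.
// Hypothetical alternative: same x/y data, but placed at z = -1.0 so the quad
// already sits inside the ortho volume (no rotate/scale tricks needed).
float verts[] = {
      0.0f, 600.0f, -1.0f,
    800.0f,   0.0f, -1.0f,
      0.0f,   0.0f, -1.0f,
    800.0f, 600.0f, -1.0f,
};
glm::mat4 p = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, 0.5f, 1.5f);
// If the quad now shows its back face, reverse the index order in the EBO
// instead of mirroring with a negative scale.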
Since I always draw my plane at Z = 1.0, should I restrict my orthographic projection near/far plane to be as close to 1.0 as possible?
As long as you don't draw anything else, you can basically choose whatever you want. Once multiple objects are drawn, the range between near and far determines how precisely differences in depth can be stored.
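As a rough sketch of what that means (assuming a typical 24-bit depth buffer): orthographic depth is linear, so the smallest resolvable depth difference is about (far - near) / 2^24.
float nearPlane = 0.5f, farPlane = 1.5f;
float depthStep = (farPlane - nearPlane) / 16777216.0f; // ~6e-8 units per depth tick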
Related
I understand that the camera in OpenGL is defined to be looking in the negative Z direction. So in a simple example, I imagine that for my vertices to be rendered, they must be defined similar to the following:
float rawverts[] = {
0.0f, 0.0f, -1.0f,
0.0f, 0.5f, -1.0f,
0.5f, 0.0f, -1.0f,
};
However, absolutely no guide will tell me the answer. Everywhere I look, the "Hello triangle" example is made with the z coordinate left at 0, and whenever a more complex mesh is defined the coordinates are not even shown. I still have no idea what values the coordinates may take for them to be drawn onto the screen. Take, for example, glm::perspective:
glm::mat4 projectionMatrix = glm::perspective(
FoV, // The vertical field of view: the amount of "zoom". Think "camera lens". Usually between 90° (extra wide) and 30° (quite zoomed in). Degrees in old GLM; modern GLM expects radians.
4.0f / 3.0f, // Aspect ratio. Depends on the size of your window. Notice that 4/3 == 800/600 == 1280/960; sounds familiar?
0.1f, // Near clipping plane. Keep as big as possible, or you'll get precision issues.
100.0f // Far clipping plane. Keep as little as possible.
);
But how can the clipping planes be defined with any positive values? The camera faces the -Z direction! Furthermore, if I create near/far clipping planes at, say, -1 and -4, does this now invalidate any Z coordinate greater than -1 or less than -4? Or are the raw z coordinates only ever valid between 0 and -1 (again, surely z coordinates categorically cannot be positive)?
But let's assume that what actually happens is that OpenGL (or glm) takes the clipping-plane values and secretly negates them. So my -1 to -4 becomes 1 to 4. Does this now invalidate any Z coordinate less than 1 or greater than 4, and is that why 0.0f, 0.0f, -1.0f won't be drawn on the screen?
At this stage, I would treat an answer as simply a pointer to a book or other material that has information on this matter.
No, points/vertices can have a positive z coordinate, but you won't see them unless the camera is moved back.
This article talks about that about a third of the way through.
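As an illustration (a sketch, not the article's code; the 45° FOV, aspect ratio, and clip planes are arbitrary choices here):
// "Moving the camera back" is expressed as translating the world along -Z:
glm::mat4 view = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -3.0f));
glm::mat4 proj = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);
// A vertex at z = +1.0 in world space lands at z = -2.0 in view space,
// between the near (0.1) and far (100.0) planes, so it becomes visible.
glm::vec4 clip = proj * view * glm::vec4(0.0f, 0.0f, 1.0f, 1.0f);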
Your problem is that you don't understand the coordinate systems and transformations.
First off, there are window coordinates: the pixel grid in your window, pure and simple. There is no z-axis.
Next is NDC (normalized device coordinates). It is a cube from -1 to 1 along the x, y, and z axes. If you load both the modelview and projection matrices with the identity, this is the space you render in. By specifying the viewport, you transform from NDC to window coordinates. Vertices outside the cube are clipped.
What you do with the projection and modelview matrices is create a transformation into the NDC cube, making it cover your objects. When you move the camera, you alter that transform. The transform can carry a vertex from any location into the NDC cube, including locations with negative z coordinates.
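For instance, a minimal sketch (assuming modern GLM, which takes the field of view in radians) that follows one vertex through the transform by hand:
glm::mat4 proj = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);
glm::vec4 clip = proj * glm::vec4(0.0f, 0.0f, -1.0f, 1.0f); // vertex at z = -1
glm::vec3 ndc  = glm::vec3(clip) / clip.w;                  // perspective divide
// ndc now lies inside the [-1, 1] cube, so the vertex survives clipping;
// the viewport transform then maps ndc.xy onto the pixel grid.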
That is the short version of how things work; the long version is too long to enter here. For more information, please ask specific questions or, better yet, read some literature on the subject.
Can someone explain how I can determine whether a triangle is clockwise or counter-clockwise?
If I render a triangle with the following code
glBegin(GL_POLYGON);
glVertex3f(-0.5f, -0.5f, 0.0f);
glVertex3f(-0.5f, 0.5f, 0.0f);
glVertex3f(0.5f, 0.5f, 0.0f);
glEnd();
how do I know if it is clockwise or counter-clockwise? I do know that it also depends on which face of the triangle you are looking at, but how can I see that in the code? I have read that OpenGL uses counter-clockwise by default, but if I consider the order in which the vertices are drawn, it seems clockwise to me. I think it is just an error in my reasoning.
Take a look at this saying:
The projection of a polygon to window coordinates is said to have clockwise winding if an imaginary object following the path from its first vertex, its second vertex, and so on, to its last vertex, and finally back to its first vertex, moves in a clockwise direction about the interior of the polygon.
It is important to consider the relation with the projection of said polygon to window coordinates.
Basically, your reasoning is slightly off when you say that OpenGL uses counter-clockwise by default. By default for what? It is used to determine which polygons are front-facing, so that polygons facing away from the viewer can be culled (not rendered). That is, the winding has a purpose; polygons don't just happen to be wound CCW or CW.
On a side note, stop using glBegin() and glEnd().
By default, OpenGL treats triangles whose vertices appear in counter-clockwise order (as projected onto the screen) as front-facing.
The points you have supplied visually form a clockwise triangle.
What you are seeing is the back face of the triangle.
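If you want to check the winding in code, here is a small sketch (the signedArea helper is hypothetical, not an OpenGL call): the sign of twice the triangle's signed area in screen space gives the winding; in a Y-up coordinate system, positive means counter-clockwise.
// Returns twice the signed area of triangle ABC in 2D (the z component of
// the cross product of the edge vectors AB and AC).
float signedArea(float ax, float ay, float bx, float by, float cx, float cy) {
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}
// For the vertices above: signedArea(-0.5f, -0.5f, -0.5f, 0.5f, 0.5f, 0.5f)
// returns -1.0f, i.e. clockwise, consistent with seeing the back face.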
I'm currently facing some perspective issues when trying to render the axes of a coordinate system into my scene. For these axes I draw three orthogonal lines that go through the center of my 3D cube.
It's pretty tough to explain what the problem is, so I guess the most demonstrative way of presenting it is to post some pictures.
1) A view of the whole scene [screenshot]
2) A zoomed-in view of the origin of the coordinate system [screenshot]
3) When I zoom in a tiny bit further, two of the axes disappear and the remaining one seems to be displaced for some reason [screenshot]
Why does this happen and how can I prevent it?
My modelview and projection matrices look like the following:
// Set ProjectionMatrix
projectionMatrix = glm::perspective(90.0f, (GLfloat)width / (GLfloat) height, 0.0001f, 1000.f);
glBindBuffer(GL_UNIFORM_BUFFER, globalMatricesUBO);
glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(glm::mat4), glm::value_ptr(projectionMatrix));
glBindBuffer(GL_UNIFORM_BUFFER, 0);
// Set ModelViewMatrix
glm::mat4 identity = glm::mat4(1.0); // Start with the identity as the transformation matrix
glm::mat4 pointTranslateZ = glm::translate(identity, glm::vec3(0.0f, 0.0f, -translate_z)); // Zoom in or out by translating in z-direction based on user input
glm::mat4 viewRotateX = glm::rotate(pointTranslateZ, rotate_x, glm::vec3(1.0f, 0.0f, 0.0f)); // Rotate the whole scene around the x-axis based on user input
glm::mat4 viewRotateY = glm::rotate(viewRotateX, rotate_y, glm::vec3(0.0f, 1.0f, 0.0f)); // Rotate the whole scene around the y-axis based on user input
glm::mat4 pointRotateX = glm::rotate(viewRotateY, -90.0f, glm::vec3(1.0f, 0.0f, 0.0f)); // Rotate by -90 degrees around the x-axis to get a frontal look at the scene
glm::mat4 viewTranslate = glm::translate(pointRotateX, glm::vec3(-dimensionX/2.0f, -dimensionY/2.0f, -dimensionZ/2.0f)); // Translate the origin to be the center of the cube
That's called "clipping". The axis is hitting the near-clip plane and thus is being clipped. The third axis is not "displaced"; it is simply partially clipped. Take your second image and cover up most of it, so that you only see part of the diagonal axis; that's what you're getting.
There are a few general solutions to this. First, you could just not allow the user to zoom in that far. Or you could adjust the near clip plane inward as the camera is moved closer to the target object. This will also cause precision problems for far away objects, so you'll probably want to adjust your far clip plane inward too.
Alternatively, you can just turn on depth clamping (assuming you have GL 3.x+, or access to ARB_depth_clamp or NV_depth_clamp). This isn't a perfect solution, as things will still be clipped when they get behind the camera. And things that intersect the near clip plane will no longer have proper depth buffering if two such objects overlap. But it's generally good enough.
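Enabling it is a one-liner (GL_DEPTH_CLAMP is core since OpenGL 3.2):
glEnable(GL_DEPTH_CLAMP); // fragments in front of the near plane get their depth clamped instead of being clipped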
I'm trying to rotate a cube around the axis and what I'm doing is:
glTranslatef(0.0f, 0.0f, -60.0f);
glRotatef(angle, 0.0f, 1.0f, 0.0f);
I'm expecting it to move to -60 and rotate around the y-axis in a circle, but instead it just spins around itself at the -60 coordinate. When I write it like this:
glRotatef(angle, 0.0f, 1.0f, 0.0f);
glTranslatef(0.0f, 0.0f, -60.0f);
I get what I need, but I don't understand why. Why do the two orders do the opposite of each other? Can someone please explain?
When you apply a transform it is applied locally. Think of it as a coordinate system that you are moving around. You start with the coordinate system representing your view, and then you transform that coordinate system relative to itself. So in the first case, you are translating the coordinate system -60 along the Z axis of the coordinate system, and then you are rotating the coordinate system around the new Y axis at the new origin. Anything you draw is then drawn in that new coordinate system.
This actually provides a simpler way to think about transformations once you are used to it. You don't have to keep two separate coordinate systems in mind: one for the coordinate system that the transforms are applied in and one for the coordinate system that the geometry is drawn in.
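Written out with matrices (a sketch using GLM; the fixed-function calls post-multiply the current matrix, so the last call issued is applied to the geometry first):
float angle = glm::radians(45.0f); // any rotation angle (radians in modern GLM)
// glTranslatef then glRotatef  ==  M = T * R: the object rotates about its
// own origin first, then the result is pushed out to z = -60 (spins in place).
glm::mat4 spin  = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -60.0f))
                * glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0.0f, 1.0f, 0.0f));
// glRotatef then glTranslatef  ==  M = R * T: the object is pushed out to
// z = -60 first, then that offset position is swung around the Y axis (an orbit).
glm::mat4 orbit = glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0.0f, 1.0f, 0.0f))
                * glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -60.0f));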
I am going through a series of NeHe OpenGL tutorials. Tutorial #9 does some fancy stuff; I understood everything except for two things, which I think are the backbone of the whole tutorial.
In the DrawGLScene function, I don't understand the following line.
glRotatef(tilt,1.0f,0.0f,0.0f); // Tilt The View (Using The Value In 'tilt')
I understand what that line does and it is also very clearly mentioned in the tutorial. But I don't understand why he wants to tilt the screen.
The other thing: first he tilts the screen, then rotates it by the star's angle, and immediately after that he does the reverse of both. What is that technique? Why tilt at all? Why not just rotate the star so it faces the user?
glRotatef(star[loop].angle,0.0f,1.0f,0.0f); // Rotate To The Current Stars Angle
glTranslatef(star[loop].dist,0.0f,0.0f); // Move Forward On The X Plane
glRotatef(-star[loop].angle,0.0f,1.0f,0.0f); // Cancel The Current Stars Angle
glRotatef(-tilt,1.0f,0.0f,0.0f); // Cancel The Screen Tilt
I would be really thankful if somebody could explain the mechanism going on under the hood.
I don't understand why he wants to tilt the screen.
Tilting lets you see the stars from another angle, not just from "right above" them.
The other thing is first he tilts the screen and then rotate it by star angle and immediately after that he does the the reverse of that. What is that technique?
That is because he wants to rotate the star around the selected axis (in this case the Y axis), but (!) he also wants the textured quad to face the viewer. Let us say he rotated it 90 degrees: if so, you would only see (as he states in the tutorial) a "thick" line.
Consider these comments:
// Rotate the current drawing by the specified angle around the Y axis,
// so that the translation below sweeps the star around the center.
glRotatef(star[loop].angle, 0.0f, 1.0f, 0.0f);
// On its own, that rotation has no visible effect: the star still sits
// at the origin (and the star itself is a flat 2D quad). The glRotatef()
// call reorients the local coordinate system, but by itself it does not
// move the star "around" on the screen.
// Therefore, use the star's distance to move it out from the center.
glTranslatef(star[loop].dist, 0.0f, 0.0f);
// The star has now been moved out from the center by its distance.
// Because of the first glRotatef() call, the 2D star is no longer
// facing the viewer head-on. Therefore, face the star towards the
// screen again by rotating with the negated angle value.
glRotatef(-star[loop].angle, 0.0f, 1.0f, 0.0f);
// Cancel the tilt on the X axis.
glRotatef(-tilt, 1.0f, 0.0f, 0.0f);
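The whole sequence collapses into one matrix product; here is a sketch in GLM form (angles in radians; tilt, angle, and dist stand in for the tutorial's variables, with arbitrary example values):
float tilt  = glm::radians(90.0f);
float angle = glm::radians(30.0f);
float dist  = 5.0f;
glm::mat4 star =
      glm::rotate(glm::mat4(1.0f),  tilt,  glm::vec3(1.0f, 0.0f, 0.0f))   // tilt the view
    * glm::rotate(glm::mat4(1.0f),  angle, glm::vec3(0.0f, 1.0f, 0.0f))   // orbit angle
    * glm::translate(glm::mat4(1.0f), glm::vec3(dist, 0.0f, 0.0f))        // move outward
    * glm::rotate(glm::mat4(1.0f), -angle, glm::vec3(0.0f, 1.0f, 0.0f))   // undo the orbit rotation
    * glm::rotate(glm::mat4(1.0f), -tilt,  glm::vec3(1.0f, 0.0f, 0.0f));  // undo the tilt
// The leading rotations place the star on its tilted circular path; the
// trailing inverse rotations cancel only its orientation, so the quad keeps
// its position but always faces the viewer (a billboard).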