Clockwise and counter-clockwise in OpenGL - c++

Can someone explain to me how I can determine whether a triangle is clockwise or counter-clockwise?
If I render a triangle with the following code
glBegin(GL_POLYGON);
glVertex3f(-0.5f, -0.5f, 0.0f);
glVertex3f(-0.5f, 0.5f, 0.0f);
glVertex3f(0.5f, 0.5f, 0.0f);
glEnd();
how do I know if it is clockwise or counter-clockwise? I do know that it also depends on which face of the triangle you are looking at, but how can I see that in the code? I have read that OpenGL uses counter-clockwise by default, but if I consider the order in which OpenGL draws the vertices, it seems clockwise to me. I think it is just an error in my reasoning.

Take a look at this definition:
The projection of a polygon to window coordinates is said to have clockwise winding if an imaginary object following the path from its first vertex, its second vertex, and so on, to its last vertex, and finally back to its first vertex, moves in a clockwise direction about the interior of the polygon.
It is important to consider the relation with the projection of said polygon to window coordinates.
Basically, your reasoning is slightly off when you say that OpenGL uses counter-clockwise by default. By default for what? The winding is used to determine which polygons are front-facing, so that the polygons facing away from the viewer can be culled (not rendered). That is, the winding serves a purpose; polygons aren't just cw- or ccw-wound on their own.
On a side note, stop using glBegin() and glEnd().

By default, OpenGL treats polygons whose vertices are supplied in counter-clockwise order (as projected to the screen) as front-facing.
The points you have supplied visually form a clockwise triangle.
What you are seeing is the back face of the triangle.
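If you want to check the winding numerically, here is a minimal sketch (not part of the question's code, and assuming the usual screen convention of +X right and +Y up): project the vertices to 2D and look at the sign of the signed area.
#include <cstdio>

// Twice the signed area of a 2D triangle.
// Positive -> counter-clockwise, negative -> clockwise (+X right, +Y up).
float signedArea2(float x0, float y0, float x1, float y1, float x2, float y2) {
    return (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0);
}

int main() {
    // The triangle from the question, ignoring Z (it is constant here).
    float a = signedArea2(-0.5f, -0.5f,   // first vertex
                          -0.5f,  0.5f,   // second vertex
                           0.5f,  0.5f);  // third vertex
    std::printf("%s\n", a > 0.0f ? "counter-clockwise" : "clockwise"); // prints "clockwise"
    return 0;
}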

Related

Understanding how glm::ortho()'s arguments affect vertex location after projection

After searching many pages, the GLM documentation, tutorials, etc., I'm still confused about some things.
I'm trying to understand why I need to apply the following transformations to get my 800x600 (fullscreen square, assume the screen of the user is 800x600 for this minimal example) image to draw over everything. Assume I'm only drawing CCW triangles. Everything renders fine in my code, but I have to do the following:
// Vertex data (x/y/z), using EBOs
0.0f, 600.0f, 1.0f,
800.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
800.0f, 600.0f, 1.0f
// Later on...
glm::mat4 m, v, p;
m = scale(m, glm::vec3(-1.0, 1.0, 1.0));
v = rotate(v, glm::radians(180.0f), glm::vec3(0.0f, 1.0f, 0.0f));
p = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, 0.5f, 1.5f);
(Note that just because I used the variable names m, v, and p doesn't mean they are actually the proper transforms for those names; the above just does what I want it to.)
I'm confused on the following:
Where are the orthographic bounds? I assume the volume points down the negative z-axis, but where do the left/right bounds come in? Does that mean [-400, 400] on the x-axis maps to [-1.0, 1.0] in NDC, or that [0, 800] maps to it? (I assume whatever the answer is here, it applies to the y-axis as well.) The documentation just says "Creates a matrix for an orthographic parallel viewing volume."
What happens if you flip the following third and fourth arguments (I ask because I see people doing this and I don't know if it's a mistake/typo or it works by a fluke... or if it properly works regardless):
Args three and four here, the pair that is swapped between the two calls:
p1 = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, 0.5f, 1.5f);
p2 = glm::ortho(0.0f, 800.0f, 0.0f, 600.0f, 0.5f, 1.5f);
Now I assume this third question will be answered by the above two, but I'm trying to figure out whether this is why my first piece of code requires flipping everything on the x-axis to work... which, I'll admit, I found by just messing around until it happened to work. I figure I need a 180-degree rotation to turn my plane around so it's on the -z side, however... so that just leaves me with figuring out the -1.0, 1.0, 1.0 scaling.
The code provided in this example (minus the variable names) is the only stuff I use and the rendering works perfectly... it's just my lack of knowledge as to why it works that I'm unhappy with.
EDIT: Was trying to understand it from here by using the images and descriptions on the site as a single example of reference. I may have missed the point.
EDIT2: As a random question, since I always draw my plane at z = 1.0, should I restrict my orthographic projection near/far plane to be as close to 1.0 as possible (ex: 0.99, 1.01) for any reason? Assume nothing else is drawn or will be drawn.
You can think of the visible area of an orthographic projection as a box given in view space. This box is then mapped to the [-1, 1] cube in NDC coordinates, such that everything inside the box is visible and everything outside is clipped away. Generally, the viewer looks along the negative Z-axis, while +X is right and +Y is up.
How are the orthographic bounds mapped to NDC space?
The side lengths of the box are given by the parameters passed to glm::ortho. In the first example, the parameters for left and right are 0 and 800, thus the space from 0 to 800 along the X axis is mapped to [-1, 1] along the NDC X axis. The same logic applies along the other two axes (bottom/top along Y, near/far along -Z).
What happens when the top and bottom parameters are exchanged?
Interchanging, for example, top and bottom is equivalent to mirroring the scene along this axis. If you look at the second diagonal element of an orthographic matrix, it is defined as 2 / (top - bottom). By exchanging top and bottom, only the sign of this element changes. The same works for exchanging left with right or near with far. Sometimes this is used when the screen-space origin should be the lower-left corner instead of the upper-left.
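Here is a small sketch of both points, assuming GLM is available as in the question: it pushes a few view-space points through the first projection and then compares the second diagonal element of the two matrices.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    // The two projections from the question; only the bottom/top arguments differ.
    glm::mat4 p1 = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, 0.5f, 1.5f);
    glm::mat4 p2 = glm::ortho(0.0f, 800.0f, 0.0f, 600.0f, 0.5f, 1.5f);

    // [0, 800] along X maps to [-1, 1]; z = -1.0 lies inside the near/far range along -Z.
    float xs[] = { 0.0f, 400.0f, 800.0f };
    for (float x : xs) {
        glm::vec4 ndc = p1 * glm::vec4(x, 300.0f, -1.0f, 1.0f);
        std::printf("x = %5.1f -> x_ndc = %+.2f\n", x, ndc.x);   // -1.00, +0.00, +1.00
    }

    // GLM matrices are column-major, so p[1][1] is the second diagonal element,
    // 2 / (top - bottom). Swapping bottom and top only flips its sign (mirror along Y).
    std::printf("p1[1][1] = %+.5f\n", p1[1][1]);   // -2/600
    std::printf("p2[1][1] = %+.5f\n", p2[1][1]);   // +2/600
    return 0;
}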
Why do you have to rotate the quad by 180° and mirror it?
As described above, the near and far values are along the negative Z-axis. Values of [0.5, 1.5] along -Z mean [-0.5, -1.5] in world space coordinates. Since the plane is defined at z = 1.0, it is outside the visible area. Rotating it around the origin by 180 degrees moves it to z = -1.0, but now you are looking at it from the back, which means back-face culling strikes. Mirroring it along X changes the winding order again, so back and front faces are swapped.
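As a side note on the question's specific m and v (a sketch, not something the answer above depends on): a 180° rotation about Y is diag(-1, 1, -1) and the scale is diag(-1, 1, 1); both are diagonal, so they commute, and their product is diag(1, 1, -1). Together the two calls simply negate Z, which moves the plane from z = +1.0 to z = -1.0 without mirroring the projected X/Y coordinates.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    glm::mat4 m = glm::scale(glm::mat4(1.0f), glm::vec3(-1.0f, 1.0f, 1.0f));
    glm::mat4 v = glm::rotate(glm::mat4(1.0f), glm::radians(180.0f),
                              glm::vec3(0.0f, 1.0f, 0.0f));

    // Applying both to a corner of the plane only negates Z
    // (up to floating-point rounding in the 180-degree rotation).
    glm::vec4 p = v * m * glm::vec4(800.0f, 600.0f, 1.0f, 1.0f);
    std::printf("%.1f %.1f %.1f\n", p.x, p.y, p.z);   // ~ 800.0 600.0 -1.0
    return 0;
}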
Since I always draw my plane at Z = 1.0, should I restrict my orthographic projection near/far plane to be as close to 1.0 as possible?
As long as you don't draw anything else, you can basically choose whatever you want. When multiple objects are drawn, the range between near and far determines how precisely differences in depth can be stored.

In Gouraud shading, what is the T-junction issue and how can it be demonstrated with OpenGL?

I noticed here in the Gouraud Shading part, it said that "T-Junctions with adjoining polygons can sometimes result in visual anomalies. In general, T-Junctions should be avoided".
It seems like the T-junction is about the three surfaces in the picture below sharing edges, and the point A possibly having a different normal vector on each surface it belongs to.
But what is the effect when a T-junction occurs, and how can I reproduce it with OpenGL? I tried setting a different normal for each vertex of each rectangle and put a light in the scene; however, I didn't see anything strange at the junction point A.
Here is my code:
glColor3f(1.0f, 0.0f, 0.0f);
glBegin(GL_QUADS);
glNormal3f(0, 0,1);
glVertex3f(-5.0f, 5.0f, 0.0f);
glNormal3f(0, 1,1);
glVertex3f(5.0f, 5.0f, 0.0f);
glNormal3f(1, 1,1);
glVertex3f(5.0f, 0.0f, 0.0f);
glNormal3f(0, -1,1);
glVertex3f(-5.0f, 0.0f, 0.0f);
glEnd();
glColor3f(0.0f, 1.0f, 0.0f);
glBegin(GL_QUADS);
glNormal3f(1, 0,1);
glVertex3f(-5.0f, 0.0f, 0.0f);
glNormal3f(1, 2,1);
glVertex3f(0.0f, 0.0f, 0.0f);
glNormal3f(0, 0,1);
glVertex3f(0.0f, -5.0f, 0.0f);
glNormal3f(0, 1, 2);
glVertex3f(-5.0f, -5.0f, 0.0f);
glEnd();
glColor3f(0.0f, 0.0f, 1.0f);
glBegin(GL_QUADS);
glNormal3f(1, 1, 3);
glVertex3f(0.0f, 0.0f, 0.0f);
glNormal3f(0, -2, 5);
glVertex3f(5.0f, 0.0f, 0.0f);
glNormal3f(-1, 1, 1);
glVertex3f(5.0f, -5.0f, 0.0f);
glNormal3f(1, -2, 0);
glVertex3f(0.0f, -5.0f, 0.0f);
glEnd();
The point light is at (0, 0, 10), as is the camera. The result below has no visual anomaly, I think. Maybe the normals need to be somehow special?
Did I do anything wrong? Could anyone give me some hints on how to make this happen?
A T-junction is bad for Gouraud shading, and for geometry in general.
First, remember that Gouraud shading is a method of light interpolation from the fixed-pipeline era, where light is interpolated across the vertices, so the mesh tessellation (the number and connectivity of the vertices) directly affects the shading. A T-junction will give a sudden discontinuity in how the final interpolated color looks (keep in mind that Gouraud shading has other problems, like under-sampling).
Gouraud shading directly uses the vertex normals, unlike Phong shading; and as a note, don't confuse Phong shading with Phong lighting, they are different.
Note that the case you are presenting is a T-junction, but you won't notice any shading problem because the mesh is not tessellated enough and (it seems) you are not using any light. Try testing on a sphere with a T-junction.
Regarding geometry, a T-junction is considered a degenerate case: at that edge/polygon the mesh loses consistency, you no longer have two edges connected at their ends, and you lose the polygon loop property (read: directed edges). It's usually a tricky problem to solve; one solution is to triangulate the polygons so that the T-junction edge is properly connected, as sketched below.
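For example, here is a minimal sketch of that fix applied to the question's geometry (legacy immediate mode, as in the question): split the top quad into three triangles that meet at (0, 0), so the shared point becomes a real vertex of the upper surface as well. The flat normals here are placeholders; use whatever your mesh actually needs.
// The single top quad replaced by three triangles meeting at the former
// T-vertex (0, 0); vertex order kept clockwise to match the question's quads.
glColor3f(1.0f, 0.0f, 0.0f);
glBegin(GL_TRIANGLES);
glNormal3f(0, 0, 1); glVertex3f(-5.0f, 5.0f, 0.0f);  // left part
glNormal3f(0, 0, 1); glVertex3f( 0.0f, 0.0f, 0.0f);
glNormal3f(0, 0, 1); glVertex3f(-5.0f, 0.0f, 0.0f);
glNormal3f(0, 0, 1); glVertex3f(-5.0f, 5.0f, 0.0f);  // middle part
glNormal3f(0, 0, 1); glVertex3f( 5.0f, 5.0f, 0.0f);
glNormal3f(0, 0, 1); glVertex3f( 0.0f, 0.0f, 0.0f);
glNormal3f(0, 0, 1); glVertex3f( 5.0f, 5.0f, 0.0f);  // right part
glNormal3f(0, 0, 1); glVertex3f( 5.0f, 0.0f, 0.0f);
glNormal3f(0, 0, 1); glVertex3f( 0.0f, 0.0f, 0.0f);
glEnd();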
http://en.wikipedia.org/wiki/Gouraud_shading
The more you deal with this situation, the more clear the problem at its core is going to become. With one solid example and some time spent looking at it you'll probably go "aha!" and it'll click.
In theory, the problem is usually described as a situation where pixels in the immediate and neighboring area of a t-vert are shaded based on separate, and sometimes different, inputs (the normal at the t-vert versus the normals of neighboring verts). You can exaggerate the problem as an illustration by setting the t-vert's normal to something very different from the neighboring verts' normals (e.g. very different from their average).
In practice though, aside from corner cases you're usually dealing with smooth gradations of normals among vertices, so the problem is more subtle. I view the problem in a different way because of this: as a sample data propagation issue. The situation causes an interpolation across samples that doesn't propagate the sample data across the surface in a homogeneous way. In your example, the t-vert light sample input isn't being propagated upward, only left/right/down. That's one reason that t-vertices are problematic: they represent discontinuities in a mesh's network that lead to issues like this.
You can visualize it in your mind by picturing light values at each of the normal points on the surface and then thinking of what the resultant colors would be across the faces for given light locations. Using your example but with a smoother gradation of normals, for the top face you'd see one long linear interpolation of color. For the bottom two faces though, you'd see two linear interpolations of color driven by the t-vertex normal. Depending on the light angle, the t-vertex normal can pick up different amounts of light than the neighboring normals. This will drive apart the color interpolations above and below it, and you'll see a shading seam.
To illustrate with your example, I'd use one color only, adjust the normals so they form a more even distribution of relative angle (something like the set I'll throw in below), and then view it using different light locations (especially ones close to the t-vertex).
top left normal: [-1, 1, 1]
top right normal: [1, 1, 1]
middle left normal: [-1, 0, 1]
t-vert normal: [0, 0, 1]
middle right normal: [1, 0, 1]
bottom left normal: [-1, -1, 1]
bottom middle normal: [0, -1, 1]
bottom right normal: [1, -1, 1]
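For instance, here is a hedged rewrite of the question's three quads with a single color and that normal set (the lighting setup itself, glLightfv and friends, is assumed to live elsewhere, as in the original program). Since these normals are not unit length, glEnable(GL_NORMALIZE) should be on; then move the light around near the T-vertex and watch the shading seam above it come and go with the light angle.
glEnable(GL_NORMALIZE);                 // the normals below are not unit length
glColor3f(1.0f, 1.0f, 1.0f);            // one color only, as suggested above

glBegin(GL_QUADS);                      // top quad: no vertex at (0, 0)
glNormal3f(-1.0f,  1.0f, 1.0f); glVertex3f(-5.0f,  5.0f, 0.0f);  // top left
glNormal3f( 1.0f,  1.0f, 1.0f); glVertex3f( 5.0f,  5.0f, 0.0f);  // top right
glNormal3f( 1.0f,  0.0f, 1.0f); glVertex3f( 5.0f,  0.0f, 0.0f);  // middle right
glNormal3f(-1.0f,  0.0f, 1.0f); glVertex3f(-5.0f,  0.0f, 0.0f);  // middle left
glEnd();

glBegin(GL_QUADS);                      // bottom-left quad: shares the t-vert
glNormal3f(-1.0f,  0.0f, 1.0f); glVertex3f(-5.0f,  0.0f, 0.0f);  // middle left
glNormal3f( 0.0f,  0.0f, 1.0f); glVertex3f( 0.0f,  0.0f, 0.0f);  // t-vert
glNormal3f( 0.0f, -1.0f, 1.0f); glVertex3f( 0.0f, -5.0f, 0.0f);  // bottom middle
glNormal3f(-1.0f, -1.0f, 1.0f); glVertex3f(-5.0f, -5.0f, 0.0f);  // bottom left
glEnd();

glBegin(GL_QUADS);                      // bottom-right quad: shares the t-vert
glNormal3f( 0.0f,  0.0f, 1.0f); glVertex3f( 0.0f,  0.0f, 0.0f);  // t-vert
glNormal3f( 1.0f,  0.0f, 1.0f); glVertex3f( 5.0f,  0.0f, 0.0f);  // middle right
glNormal3f( 1.0f, -1.0f, 1.0f); glVertex3f( 5.0f, -5.0f, 0.0f);  // bottom right
glNormal3f( 0.0f, -1.0f, 1.0f); glVertex3f( 0.0f, -5.0f, 0.0f);  // bottom middle
glEnd();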
Because this is an issue driven by uneven propagation of sampled data--and propagation is what interpolation does--similar anomalies occur with other interpolation techniques too (like Phong shading) by the way.

OpenGL glRotate and glTranslate order

I'm trying to rotate a cube around the axis and what I'm doing is:
glTranslatef(0.0f, 0.0f, -60.0f);
glRotatef(angle, 0.0f, 1.0f, 0.0f);
I'm expecting it to move to -60 and rotate around the y-axis in a circle, but instead it's just spinning around itself at the -60 coordinate. When I write it like this:
glRotatef(angle, 0.0f, 1.0f, 0.0f);
glTranslatef(0.0f, 0.0f, -60.0f);
I get what I need, but I don't understand why. Why do they do the opposite?
Can someone please explain?
When you apply a transform it is applied locally. Think of it as a coordinate system that you are moving around. You start with the coordinate system representing your view, and then you transform that coordinate system relative to itself. So in the first case, you are translating the coordinate system -60 along the Z axis of the coordinate system, and then you are rotating the coordinate system around the new Y axis at the new origin. Anything you draw is then drawn in that new coordinate system.
This actually provides a simpler way to think about transformations once you are used to it. You don't have to keep two separate coordinate systems in mind: one for the coordinate system that the transforms are applied in and one for the coordinate system that the geometry is drawn in.
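A quick sketch of both orderings with that mental model (drawCube() here is a hypothetical helper that just issues the cube's vertices):
// Case from the question: translate first, then rotate.
// The coordinate system is moved to z = -60 and then spun around its own
// (local) Y axis, so the cube rotates in place at z = -60.
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -60.0f);
glRotatef(angle, 0.0f, 1.0f, 0.0f);
drawCube();   // hypothetical helper

// Reversed order: rotate first, then translate.
// The coordinate system is rotated around the viewer's Y axis, and the
// translation then happens along the already-rotated Z axis, so the cube
// is carried around a circle of radius 60.
glLoadIdentity();
glRotatef(angle, 0.0f, 1.0f, 0.0f);
glTranslatef(0.0f, 0.0f, -60.0f);
drawCube();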

Texture mapping with openGL

I was texture mapping a primitive, a quad to be exact. I had a problem with the texture being somehow rotated 90 degrees in the anticlockwise direction. I thought the problem was in the loading code of the texture, but it turned out to actually be a problem with the draw function.
So this was the code which drew the picture erroneously:
glVertex2f(0.0f, 0.0f); glTexCoord2f(0.0f, 1.0f);
glVertex2f(0.5f, 0.0f); glTexCoord2f(1.0f, 1.0f);
glVertex2f(0.5f, 0.5f); glTexCoord2f(1.0f, 0.0f);
glVertex2f(0.0f, 0.5f); glTexCoord2f(0.0f, 0.0f);
and this one drew it just as I intended it to be drawn:
glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f(0.5f, 0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f(0.5f, 0.5f);
glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.5f);
What causes this kind of behaviour? I really didn't think this would have such effects to the drawing.
I really didn't think this would have such effects to the drawing.
Think about it. What does glTexCoord do? It specifies the texture coordinate, correct? But the texture coordinate of what?
Yes, you know it specifies the texture coordinate of the next vertex, but OpenGL doesn't know that. All glTexCoord does is set the values you pass it into a piece of memory.
glVertex does something more. It sets the vertex position, but it also tells OpenGL, "Take all of the vertex values I've set so far and render a vertex with it." That's why you can't call glVertex outside of glBegin/glEnd, even though you can do that with glTexCoord, glColor, etc.
So when you do glTexCoord(...); glVertex(...), you're saying "set the current texture coordinate to X, then set the position to Y and render with these values." When you do glVertex(...); glTexCoord(...);, you're saying, "set the position to Y and render with the previously set values, then set the current texture coordinate to X."
It's a little late to be setting the texture coordinate after you've already told OpenGL to render a vertex.
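Annotating the corrected snippet from the question with that model in mind (wrapped in the glBegin/glEnd it presumably sits in):
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 1.0f);   // only updates the *current* texture coordinate state
glVertex2f(0.0f, 0.0f);     // emits a vertex, capturing the current texcoord (0, 1)
glTexCoord2f(1.0f, 1.0f);
glVertex2f(0.5f, 0.0f);     // captures (1, 1)
glTexCoord2f(1.0f, 0.0f);
glVertex2f(0.5f, 0.5f);     // captures (1, 0)
glTexCoord2f(0.0f, 0.0f);
glVertex2f(0.0f, 0.5f);     // captures (0, 0)
glEnd();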
OpenGL functions in a state-wise fashion. Many GL function calls serve to change the current state so that when you call some other functions, they can use the current state to do the proper operation.
In your situation, the glVertex2f() call uses the current texture-coordinate state to define which part of the texture gets mapped onto which vertex. In your first series of calls, the first call to glVertex2f() has no texture coordinate set by you, so it uses the initial current texture coordinate, which is (0.0f, 0.0f). The second call to glVertex2f() then uses the state set by your first call to glTexCoord2f(), the third call to glVertex2f() uses the state set by the second call to glTexCoord2f(), and so on.
In the future, make sure to set the proper GL state before you call the functions which use those states, and you should be good to go.
The order in which you call glVertex and glTexCoord definitely matters! Whenever you specify vertex attributes like glTexCoord, glColor, etc., they apply to all future vertices that you draw, until you change one of those attributes again. So in the previous example, your first vertex was being drawn with whatever tex coord happened to be current beforehand, the second vertex with tex coord (0.0, 1.0), etc.
Probably the best explanation available online: Texture mapping - Tutorial
And also, just to make sure, the texture coordinates (texCoord) are as follows, and the order in which they are specified matters!
(0,0) bottom left corner
(0,1) upper left corner
(1,0) bottom right corner
(1,1) upper right corner

OpenGL Rotation and Translation

I am going through a series of NeHe OpenGL tutorials. Tutorial #9 does some fancy stuff; I understood everything except for two things, which I think are the backbone of the whole tutorial.
In the DrawGlScene function, I didn't understand the following line.
glRotatef(tilt,1.0f,0.0f,0.0f); // Tilt The View (Using The Value In 'tilt')
I understand what that line does and it is also very clearly mentioned in the tutorial. But I don't understand why he wants to tilt the screen.
The other thing is that he first tilts the screen, then rotates it by the star's angle, and immediately after that he does the reverse of that. What is that technique? Why tilt at all? Just rotate the star while the star faces the user.
glRotatef(star[loop].angle,0.0f,1.0f,0.0f); // Rotate To The Current Stars Angle
glTranslatef(star[loop].dist,0.0f,0.0f); // Move Forward On The X Plane
glRotatef(-star[loop].angle,0.0f,1.0f,0.0f); // Cancel The Current Stars Angle
glRotatef(-tilt,1.0f,0.0f,0.0f); // Cancel The Screen Tilt
I would be really thankful if somebody could tell me the mechanism going on under the hood.
I don't understand why he wants to tilt the screen.
Tilting makes you see the stars from another angle and not just "right above" them.
The other thing is first he tilts the screen and then rotate it by star angle and immediately after that he does the the reverse of that. What is that technique?
That is because he wants to rotate the star around the selected axis (in this case the Y axis), but (!) he also wants the textured quad to face the viewer. Let us say he rotates it 90 degrees; if so, you would only see (like he states in the tutorial) a "thick" line.
Consider these comments:
// Rotate the current drawing by the specified angle on the Y axis
// in order to get it to rotate.
glRotatef(star[loop].angle, 0.0f, 1.0f, 0.0f);
// Rotating around the object's origin is not going to make
// any visible effects, especially since the star object itself is in 2D.
// In order to move around in your current projection, a glRotatef()
// call does rotate the star, but not in terms of moving it "around"
// on the screen.
// Therefore, use the star's distance to move it out from the center.
glTranslatef(star[loop].dist, 0.0f, 0.0f);
// We've moved the star out from the center, with the specified
// distance in star's distance. With the first glRotatef()
// call in mind, the 2D star is not 100 % facing
// the viewer. Therefore, face the star towards the screen using
// the negative angle value.
glRotatef(-star[loop].angle, 0.0f, 1.0f, 0.0f);
// Cancel the tilt on the X axis.
glRotatef(-tilt, 1.0f, 0.0f, 0.0f);