I noticed that the Gouraud Shading section here says: "T-Junctions with adjoining polygons can sometimes result in visual anomalies. In general, T-Junctions should be avoided".
It seems that the T-junction is about the three surfaces in the picture below sharing edges, so that point A may have a different normal vector for each surface it belongs to.
But what is the effect when a T-junction occurs, and how can I reproduce it in OpenGL? I tried setting a different normal for each vertex of each rectangle and put a light in the scene, but I didn't see anything strange at the junction point A.
Here is my code:
glColor3f(1.0f, 0.0f, 0.0f);  // red quad: the top rectangle, its bottom edge contains A
glBegin(GL_QUADS);
glNormal3f(0, 0, 1);
glVertex3f(-5.0f, 5.0f, 0.0f);
glNormal3f(0, 1, 1);
glVertex3f(5.0f, 5.0f, 0.0f);
glNormal3f(1, 1, 1);
glVertex3f(5.0f, 0.0f, 0.0f);
glNormal3f(0, -1, 1);
glVertex3f(-5.0f, 0.0f, 0.0f);
glEnd();
glColor3f(0.0f, 1.0f, 0.0f);  // green quad: bottom left, touches A = (0, 0, 0)
glBegin(GL_QUADS);
glNormal3f(1, 0, 1);
glVertex3f(-5.0f, 0.0f, 0.0f);
glNormal3f(1, 2, 1);
glVertex3f(0.0f, 0.0f, 0.0f);  // the junction point A
glNormal3f(0, 0, 1);
glVertex3f(0.0f, -5.0f, 0.0f);
glNormal3f(0, 1, 2);
glVertex3f(-5.0f, -5.0f, 0.0f);
glEnd();
glColor3f(0.0f, 0.0f, 1.0f);  // blue quad: bottom right, also touches A
glBegin(GL_QUADS);
glNormal3f(1, 1, 3);
glVertex3f(0.0f, 0.0f, 0.0f);  // the junction point A
glNormal3f(0, -2, 5);
glVertex3f(5.0f, 0.0f, 0.0f);
glNormal3f(-1, 1, 1);
glVertex3f(5.0f, -5.0f, 0.0f);
glNormal3f(1, -2, 0);
glVertex3f(0.0f, -5.0f, 0.0f);
glEnd();
The point light is at (0, 0, 10), as is the camera. The result below shows no visual anomaly as far as I can tell. Maybe the normals need to be special in some way?
Did I do anything wrong? Could anyone give me some hints on how to make this effect appear?
A T-junction is bad for Gouraud shading, and for geometry in general.
First remember that Gouraud shading is a lighting interpolation method from the fixed-pipeline era: lighting is evaluated at the vertices and the resulting colors are interpolated across the polygon, so the mesh tessellation (the number and connectivity of the vertices) directly affects the shading. A T-junction produces a sudden discontinuity in the final interpolated color (keep in mind that Gouraud shading has other problems too, such as under-sampling).
Gouraud shading uses the vertex normals only at the vertices, unlike Phong shading, which interpolates the normals themselves and evaluates the lighting per pixel. As a side note, don't confuse Phong shading with Phong lighting; they are different things.
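In rough, purely illustrative notation (w_i(p) being the interpolation weights of a polygon's vertices at pixel p, n_i the vertex normals, and L the lighting equation):
Gouraud: color(p) = sum_i w_i(p) * L(n_i)   // light the vertices, interpolate the colors
Phong:   color(p) = L(sum_i w_i(p) * n_i)   // interpolate the normals, light each pixel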
Note that the case you are presenting is a T-junction, but you won't notice any shading problem because the mesh is not tessellated enough and (it seems) you are not using any light. Try testing on a sphere with a T-junction.
Regarding geometry, a T-junction is considered a degenerate case, because at that edge/polygon the geometric mesh loses consistency: you no longer have two edges connected at their ends, and you lose the polygon loop property (read: directed edges). It's usually a tricky problem to solve; one solution is to triangulate the polygons so that the T-junction edge is properly connected, as in the sketch below.
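As a hedged sketch of that triangulation fix, reusing the quads from the question (the normals here are just placeholders), the top rectangle could be rebuilt as a triangle fan that actually contains a vertex at the junction point:
glBegin(GL_TRIANGLE_FAN);
glNormal3f(0, 0, 1); glVertex3f(0.0f, 0.0f, 0.0f);    // the former t-vertex, now shared by all faces
glNormal3f(-1, 0, 1); glVertex3f(-5.0f, 0.0f, 0.0f);
glNormal3f(-1, 1, 1); glVertex3f(-5.0f, 5.0f, 0.0f);
glNormal3f(1, 1, 1); glVertex3f(5.0f, 5.0f, 0.0f);
glNormal3f(1, 0, 1); glVertex3f(5.0f, 0.0f, 0.0f);
glEnd();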
http://en.wikipedia.org/wiki/Gouraud_shading
The more you deal with this situation, the clearer the problem at its core will become. With one solid example and some time spent looking at it you'll probably go "aha!" and it'll click.
In theory the problem is usually described as a situation where pixels in the immediate and neighboring area of a t-vert are shaded based off of separate and sometimes different inputs (the normal at the t-vert versus the normals of neighboring verts). You can exaggerate the problem as an illustration by setting the t-vert's normal to something very different than the neighboring verts' normals (ex. very different than their average).
In practice though, aside from corner cases you're usually dealing with smooth gradations of normals among vertices, so the problem is more subtle. Because of this I view the problem in a different way: as a sample-data propagation issue. The situation causes an interpolation across samples that doesn't propagate the sample data across the surface in a homogeneous way. In your example, the t-vert's light sample isn't propagated upward, only left/right/down. That's one reason t-vertices are problematic: they represent discontinuities in a mesh's network that lead to issues like this.
You can visualize it in your mind by picturing light values at each of the normal points on the surface and then thinking of what the resultant colors would be across the faces for given light locations. Using your example but with a smoother gradation of normals, for the top face you'd see one long linear interpolation of color. For the bottom two faces though, you'd see two linear interpolations of color driven by the t-vertex normal. Depending on the light angle, the t-vertex normal can pick up different amounts of light than the neighboring normals. This will drive apart the color interpolations above and below it, and you'll see a shading seam.
To illustrate with your example, I'd use one color only, adjust the normals so they form a more even distribution of relative angle (something like the set I'll throw in below), and then view it using different light locations (especially ones close to the t-vertex).
top left normal: [-1, 1, 1]
top right normal: [1, 1, 1]
middle left normal: [-1, 0, 1]
t-vert normal: [0, 0, 1]
middle right normal: [1, 0, 1]
bottom left normal: [-1, -1, 1]
bottom middle normal: [0, -1, 1]
bottom right normal: [1, -1, 1]
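A minimal sketch of that test, assuming an existing GL context with GL_LIGHTING, GL_LIGHT0 and GL_NORMALIZE enabled (the normals above are not unit length); the light position is just an example to move around:
GLfloat lightPos[] = { 0.0f, 0.0f, 3.0f, 1.0f };  // point light close to the t-vertex
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
glColor3f(0.8f, 0.8f, 0.8f);                      // one color only

// top quad: spans the full width, no vertex at the t-junction
glBegin(GL_QUADS);
glNormal3f(-1, 1, 1); glVertex3f(-5.0f, 5.0f, 0.0f);
glNormal3f(1, 1, 1); glVertex3f(5.0f, 5.0f, 0.0f);
glNormal3f(1, 0, 1); glVertex3f(5.0f, 0.0f, 0.0f);
glNormal3f(-1, 0, 1); glVertex3f(-5.0f, 0.0f, 0.0f);
glEnd();

// bottom-left quad: has a vertex at the t-junction (0, 0, 0)
glBegin(GL_QUADS);
glNormal3f(-1, 0, 1); glVertex3f(-5.0f, 0.0f, 0.0f);
glNormal3f(0, 0, 1); glVertex3f(0.0f, 0.0f, 0.0f);     // t-vertex
glNormal3f(0, -1, 1); glVertex3f(0.0f, -5.0f, 0.0f);
glNormal3f(-1, -1, 1); glVertex3f(-5.0f, -5.0f, 0.0f);
glEnd();

// bottom-right quad: also touches the t-junction
glBegin(GL_QUADS);
glNormal3f(0, 0, 1); glVertex3f(0.0f, 0.0f, 0.0f);     // t-vertex
glNormal3f(1, 0, 1); glVertex3f(5.0f, 0.0f, 0.0f);
glNormal3f(1, -1, 1); glVertex3f(5.0f, -5.0f, 0.0f);
glNormal3f(0, -1, 1); glVertex3f(0.0f, -5.0f, 0.0f);
glEnd();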
Because this is an issue driven by uneven propagation of sampled data (and propagation is what interpolation does), similar anomalies occur with other interpolation techniques too, such as Phong shading, by the way.
I am currently working on a little toy program with OpenGL which shows a scene in clip-space view, i.e. it draws a cube to visualize the canonical view volume, and inside the cube the projectively transformed model is drawn. Here is a code snippet for the model drawing:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScalef(1.0f, 1.0f, -1.0f);   // mirror along z
glMultMatrixd(projectionMat);  // the projection deliberately folded into the modelview
glMultMatrixd(modelviewMat);
glEnable(GL_LIGHTING);
draw_model();
glDisable(GL_LIGHTING);
So, naturally, the drawn model is "distorted" (which is the desired behaviour). However, the lighting is wrong, because the surface normals are also transformed by the projection matrix and thus are no longer orthogonal to their surfaces after the transform. What I am trying to accomplish is lighting that is "correct" in the sense that the surfaces of the distorted model have correct normals.
The question is: how can I do that? I was playing with the usual transposed-inverse-matrix rule for normals, but as far as I understand, that is what OpenGL does with its normals by default anyway. I think I would have to recalculate the surface normals AFTER the surfaces are transformed by the modelview matrix, but how do I do that? Or is there another way?
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScalef(1.0f, 1.0f, -1.0f);
glMultMatrixd(projectionMat);
The projection matrix belongs in glMatrixMode(GL_PROJECTION). Normals are transformed by the inverse transpose of the modelview matrix, so if there is a projection component in the modelview, it messes up your normal transformation.
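For reference, fixed-function OpenGL transforms each normal as n' = transpose(inverse(M)) * n, using the upper-left 3x3 of the current modelview matrix M. If M also contains the projective part, its inverse transpose gets applied to the normals too, which is exactly the skewed lighting you are seeing.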
The correct code would be:
glMatrixMode(GL_PROJECTION);
glLoadMatrixd(projectionMat);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScalef(1.0f, 1.0f, -1.0f);
glMultMatrixd(modelviewMat);
glEnable(GL_LIGHTING);
draw_model();
glDisable(GL_LIGHTING);
If you're using the fixed-function pipeline, you must put all of this in your projection matrix, including the scale, translation, and rotation that happen after the projection:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glScalef(1.0f, 1.0f, -1.0f);
glMultMatrixd(projectionMat);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMultMatrixd(modelviewMat);
glEnable(GL_LIGHTING);
draw_model();
glDisable(GL_LIGHTING);
This works because the positions (i.e. what you see) are transformed by both the projection and modelview matrices, while the fixed-function lighting is computed only in view space (i.e. after the modelview but before the projection).
In fact, this is exactly why fixed-function GL has a distinction between the two matrices.
I am trying to do a Photoshop-like hue shift on a texture.
My code is somewhat like this:
glColor4f(0.0f, 1.0f, 1.0f, 1.0f);
//bind texture, draw quad, etc.
Here is a picture describing what happens:
I can't post images yet, so here's a link.
I use glm::perspective(80.0f, 4.0f/3.0f, 1.0f, 120.0f); and multiply it by
glm::mat4 view = glm::lookAt(
    glm::vec3(0.0f, 0.0f, 60.5f),
    glm::vec3(0.0f, 0.0f, 0.0f),
    glm::vec3(0.0f, 1.0f, 0.0f)
);
My question touches on OpenGL and maths. It relates to drawing a GUI on my viewport. I do not know how to get the proper coordinates to draw, e.g., a square that covers 1/4 of the window. If I don't use a perspective projection and glm::lookAt(...) (i.e. identity matrices), I can draw my GUI by setting X,Y coordinates in [-1.0, 1.0]: when I put a vertex at (-1.0, -1.0), it lands at the bottom-left corner of the window.
How do I get the same effect when using perspective and lookAt?
Don't try to fiddle things into one particular projection. Just switch your projection to something that better suits your GUI drawing needs. OpenGL is a state machine, and it's perfectly normal to switch parameters multiple times while rendering a single image.
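As a rough sketch of that idea, assuming a legacy fixed-function setup like in your snippets (all values are just examples), a GUI pass after the 3D scene could look like this:
glMatrixMode(GL_PROJECTION);
glPushMatrix();                            // keep the 3D projection around
glLoadIdentity();
glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);  // plain [-1, 1] 2D space again
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glDisable(GL_DEPTH_TEST);                  // draw the GUI on top of the scene

// a square covering the bottom-left quarter of the window
glBegin(GL_QUADS);
glVertex2f(-1.0f, -1.0f);
glVertex2f(0.0f, -1.0f);
glVertex2f(0.0f, 0.0f);
glVertex2f(-1.0f, 0.0f);
glEnd();

glEnable(GL_DEPTH_TEST);
glPopMatrix();                             // restore the 3D modelview
glMatrixMode(GL_PROJECTION);
glPopMatrix();                             // restore the 3D projection
glMatrixMode(GL_MODELVIEW);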
How can I set the texture coordinate offset and multiplier for the gluCylinder() and gluDisk() etc. functions?
So if the texture would normally start at point 0, I would like it to start at point 0.6 or 3.2, for example. By multiplier I mean that the texture would get either bigger or smaller.
The solution can't be glScalef(), because 1) I'm using normals, and 2) I want to adjust the texture start position as well.
Try using the texture matrix stack:
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(0.6f, 3.2f, 0.0f);  // offset: the texture now starts at (0.6, 3.2)
glScalef(2.0f, 2.0f, 1.0f);      // multiplier applied to the texture coordinates
glMatrixMode(GL_MODELVIEW);
drawObject();
The solution has nothing to do with the GLU functions; it is indeed glScalef (plus glTranslatef for the offset adjustment), but applied to the texture matrix (assuming you don't use shaders). The texture matrix, selected by calling glMatrixMode with GL_TEXTURE, transforms the vertices' texture coordinates before they are interpolated and used to access the texture. It does not matter how these texture coordinates are computed; in this case GLU computes them on the CPU and calls glTexCoord2f.
So to make the texture start at (0.1, 0.2) (in texture space, of course) and appear twice as large, you just call:
glMatrixMode(GL_TEXTURE);
glTranslatef(0.1f, 0.2f, 0.0f);
glScalef(0.5f, 0.5f, 1.0f);
before calling gluCylinder. But be sure to revert these changes afterwards, probably by wrapping them in glPushMatrix/glPopMatrix, for example:
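A hedged example of the whole sequence (quadric is an assumed GLUquadric created with gluNewQuadric() and gluQuadricTexture(quadric, GL_TRUE); the cylinder parameters are arbitrary):
glMatrixMode(GL_TEXTURE);
glPushMatrix();                  // save the previous texture matrix
glTranslatef(0.1f, 0.2f, 0.0f);  // texture start point
glScalef(0.5f, 0.5f, 1.0f);      // scaling the coordinates down makes the texture appear larger
glMatrixMode(GL_MODELVIEW);

gluCylinder(quadric, 1.0, 1.0, 2.0, 32, 8);  // assumed quadric, example dimensions

glMatrixMode(GL_TEXTURE);
glPopMatrix();                   // revert the texture matrix
glMatrixMode(GL_MODELVIEW);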
But if you want to change the texture coordinates based on world-space coordinates, that may involve some more computation. And of course you can also use a vertex shader to have complete control over the texture coordinate generation.