Fast way to make vertices darker? - opengl

To make a lighting system for a voxel game, I need to specify a darkness value per vertex. I'm using GL_COLOR_MATERIAL and specifying a color per vertex, like this:
glEnable(GL_COLOR_MATERIAL);
glBegin(GL_QUADS);
glColor3f(0.6f, 0.6f, 0.6f);
glTexCoord2f(...);
glVertex3f(...);
glColor3f(0.3f, 0.3f, 0.3f);
glTexCoord2f(...);
glVertex3f(...);
glColor3f(0.7f, 0.7f, 0.7f);
glTexCoord2f(...);
glVertex3f(...);
glColor3f(0.9f, 0.9f, 0.9f);
glTexCoord2f(...);
glVertex3f(...);
glEnd();
This is working, but with many quads it is very slow, even though I'm also using display lists. Any good ideas on how to make vertices darker?

You're using immediate mode (glBegin, glEnd and everything in between). If performance is what you need, then I recommend you stop doing that.
What you're probably after is a generic vertex attribute. And lo and behold: modern OpenGL has exactly this. They even went so far as to do the right thing and do away with the predefined attributes (position, color, normal, texcoords, etc.), leaving only generic attributes in OpenGL 3 core and later.
The functions glVertexAttrib (although most of the time a uniform does the job better) and glVertexAttribPointer are your friends. Specify how the vertex attributes are processed using an appropriate vertex shader.
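For illustration, here is a minimal sketch of that approach; none of this is from the question: the Vertex layout, the attribute names ("position", "uv", "darkness"), and the already-compiled shader `program` are all assumptions.
#include <stddef.h>   // offsetof; assumes GL function loading (e.g. GLEW) is done

// One darkness float per vertex, interleaved with position and texcoords.
typedef struct { float pos[3]; float uv[2]; float darkness; } Vertex;

void setupMesh(GLuint program, const Vertex *vertices, GLsizei vertexCount)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex),
                 vertices, GL_STATIC_DRAW);

    GLint posLoc  = glGetAttribLocation(program, "position");
    GLint uvLoc   = glGetAttribLocation(program, "uv");
    GLint darkLoc = glGetAttribLocation(program, "darkness");

    glEnableVertexAttribArray(posLoc);
    glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (const void *)offsetof(Vertex, pos));
    glEnableVertexAttribArray(uvLoc);
    glVertexAttribPointer(uvLoc, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (const void *)offsetof(Vertex, uv));
    glEnableVertexAttribArray(darkLoc);
    glVertexAttribPointer(darkLoc, 1, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (const void *)offsetof(Vertex, darkness));
}
In the vertex shader you would forward darkness to a varying, and in the fragment shader multiply the sampled texel by it (e.g. gl_FragColor = texture2D(tex, vUV) * vDark). A single glDrawArrays call then replaces the whole glBegin/glEnd block.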

Related

In Gouraud shading, what is the T-junction issue and how to demonstrate it with OpenGL

I noticed here in the Gouraud Shading part, it said that "T-Junctions with adjoining polygons can sometimes result in visual anomalies. In general, T-Junctions should be avoided".
It seems the T-junction is about the three surfaces in the picture below sharing edges, so that point A may have a different normal vector depending on which surface it belongs to.
But what is the effect when a T-junction happens, and how can I reproduce it with OpenGL? I tried setting a different normal for each vertex of each rectangle and put a light in the scene; however, I didn't see anything strange at the junction point A.
Here is my code:
glColor3f(1.0f, 0.0f, 0.0f);      // top rectangle (red)
glBegin(GL_QUADS);
glNormal3f(0, 0, 1);
glVertex3f(-5.0f, 5.0f, 0.0f);
glNormal3f(0, 1, 1);
glVertex3f(5.0f, 5.0f, 0.0f);
glNormal3f(1, 1, 1);
glVertex3f(5.0f, 0.0f, 0.0f);
glNormal3f(0, -1, 1);
glVertex3f(-5.0f, 0.0f, 0.0f);
glEnd();
glColor3f(0.0f, 1.0f, 0.0f);      // bottom-left rectangle (green)
glBegin(GL_QUADS);
glNormal3f(1, 0, 1);
glVertex3f(-5.0f, 0.0f, 0.0f);
glNormal3f(1, 2, 1);
glVertex3f(0.0f, 0.0f, 0.0f);     // point A, the T-junction
glNormal3f(0, 0, 1);
glVertex3f(0.0f, -5.0f, 0.0f);
glNormal3f(0, 1, 2);
glVertex3f(-5.0f, -5.0f, 0.0f);
glEnd();
glColor3f(0.0f, 0.0f, 1.0f);      // bottom-right rectangle (blue)
glBegin(GL_QUADS);
glNormal3f(1, 1, 3);
glVertex3f(0.0f, 0.0f, 0.0f);     // point A again
glNormal3f(0, -2, 5);
glVertex3f(5.0f, 0.0f, 0.0f);
glNormal3f(-1, 1, 1);
glVertex3f(5.0f, -5.0f, 0.0f);
glNormal3f(1, -2, 0);
glVertex3f(0.0f, -5.0f, 0.0f);
glEnd();
The point light is at (0, 0, 10), and so is the camera. The result has no visual anomaly that I can see. Maybe the normals need to be special in some way?
Is there anything I did wrong? Could anyone give me some hints on how to make this happen?
A T-junction is bad for Gouraud shading, and for geometry in general.
First, remember that Gouraud shading is a method for light interpolation from the fixed-pipeline era, where lighting is evaluated at the vertices and interpolated across the face, so the mesh tessellation (the number and connectivity of the vertices) directly affects the shading. Having a T-junction gives a sudden discontinuity in how the final interpolated color looks (keep in mind that Gouraud shading has other problems as well, like under-sampling).
Gouraud shading uses the vertex normals directly, unlike Phong shading. As a note, don't confuse Phong shading with Phong lighting; they are different.
Note that the case you are presenting is a T-junction, but you won't notice any shading problem because the mesh is not tessellated enough and (it seems) you are not using any light. Try testing on a sphere with a T-junction.
Regarding geometry, a T-junction is considered a degenerate case, because at that edge/polygon the geometric mesh loses consistency: you no longer have two edges connected at their ends, and you lose the polygon loop property (read: directed edges). It's usually a tricky problem to solve; one solution is to triangulate the polygons so that the T-junction edge is properly connected, as sketched below.
http://en.wikipedia.org/wiki/Gouraud_shading
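To make the triangulation fix concrete, here is a minimal sketch (reusing the question's top rectangle, normals omitted) that removes the T-junction by fanning triangles through the junction point, so point A becomes a real vertex of every face that touches it:
glBegin(GL_TRIANGLES);
// fan the top rectangle around point A = (0, 0, 0)
glVertex3f(0.0f, 0.0f, 0.0f); glVertex3f(-5.0f, 0.0f, 0.0f); glVertex3f(-5.0f, 5.0f, 0.0f);
glVertex3f(0.0f, 0.0f, 0.0f); glVertex3f(-5.0f, 5.0f, 0.0f); glVertex3f( 5.0f, 5.0f, 0.0f);
glVertex3f(0.0f, 0.0f, 0.0f); glVertex3f( 5.0f, 5.0f, 0.0f); glVertex3f( 5.0f, 0.0f, 0.0f);
glEnd();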
The more you deal with this situation, the clearer the problem at its core will become. With one solid example and some time spent looking at it, you'll probably go "aha!" and it'll click.
In theory the problem is usually described as a situation where pixels in the immediate and neighboring area of a t-vert are shaded based on separate and sometimes different inputs (the normal at the t-vert versus the normals of neighboring verts). You can exaggerate the problem as an illustration by setting the t-vert's normal to something very different from the neighboring verts' normals (e.g. very different from their average).
In practice though, aside from corner cases you're usually dealing with smooth gradations of normals among vertices, so the problem is more subtle. Because of this, I view the problem in a different way: as a sample-data propagation issue. The situation causes an interpolation across samples that doesn't propagate the sample data across the surface in a homogeneous way. In your example, the t-vert's light sample input isn't being propagated upward, only left/right/down. That's one reason t-vertices are problematic: they represent discontinuities in a mesh's network that lead to issues like this.
You can visualize it in your mind by picturing light values at each of the normal points on the surface and then thinking of what the resultant colors would be across the faces for given light locations. Using your example but with a smoother gradation of normals, for the top face you'd see one long linear interpolation of color. For the bottom two faces though, you'd see two linear interpolations of color driven by the t-vertex normal. Depending on the light angle, the t-vertex normal can pick up different amounts of light than the neighboring normals. This will drive apart the color interpolations above and below it, and you'll see a shading seam.
To illustrate with your example, I'd use one color only, adjust the normals so they form a more even distribution of relative angle (something like the set I'll throw in below), and then view it using different light locations (especially ones close to the t-vertex).
top left normal: [-1, 1, 1]
top right normal: [1, 1, 1]
middle left normal: [-1, 0, 1]
t-vert normal: [0, 0, 1]
middle right normal: [1, 0, 1]
bottom left normal: [-1, -1, 1]
bottom middle normal: [0, -1, 1]
bottom right normal: [1, -1, 1]
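For instance, a minimal sketch of that setup: one flat gray color, the normals above, and the same quad layout as the question. Moving a point light around near the t-vertex should make a shading seam appear along the bottom edge of the top face:
glColor3f(0.8f, 0.8f, 0.8f);      // one color only, so only shading differs
glBegin(GL_QUADS);
// top face: its bottom edge interpolates straight from middle-left
// to middle-right, ignoring the t-vertex entirely
glNormal3f(-1.0f,  1.0f, 1.0f); glVertex3f(-5.0f,  5.0f, 0.0f);  // top left
glNormal3f( 1.0f,  1.0f, 1.0f); glVertex3f( 5.0f,  5.0f, 0.0f);  // top right
glNormal3f( 1.0f,  0.0f, 1.0f); glVertex3f( 5.0f,  0.0f, 0.0f);  // middle right
glNormal3f(-1.0f,  0.0f, 1.0f); glVertex3f(-5.0f,  0.0f, 0.0f);  // middle left
// bottom-left face: its top edge is driven by the t-vertex normal
glNormal3f(-1.0f,  0.0f, 1.0f); glVertex3f(-5.0f,  0.0f, 0.0f);  // middle left
glNormal3f( 0.0f,  0.0f, 1.0f); glVertex3f( 0.0f,  0.0f, 0.0f);  // t-vertex
glNormal3f( 0.0f, -1.0f, 1.0f); glVertex3f( 0.0f, -5.0f, 0.0f);  // bottom middle
glNormal3f(-1.0f, -1.0f, 1.0f); glVertex3f(-5.0f, -5.0f, 0.0f);  // bottom left
// bottom-right face
glNormal3f( 0.0f,  0.0f, 1.0f); glVertex3f( 0.0f,  0.0f, 0.0f);  // t-vertex
glNormal3f( 1.0f,  0.0f, 1.0f); glVertex3f( 5.0f,  0.0f, 0.0f);  // middle right
glNormal3f( 1.0f, -1.0f, 1.0f); glVertex3f( 5.0f, -5.0f, 0.0f);  // bottom right
glNormal3f( 0.0f, -1.0f, 1.0f); glVertex3f( 0.0f, -5.0f, 0.0f);  // bottom middle
glEnd();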
Because this is an issue driven by uneven propagation of sampled data (and propagation is what interpolation does), similar anomalies occur with other interpolation techniques too, like Phong shading, by the way.

OpenGL - PBuffer render to Texture

After my last post, when someone recommended that I use pbuffers, I dug a bit on Google and found some good examples of offscreen rendering using pbuffers. One example, available on NVIDIA's website, does simple offscreen rendering: it renders on the pbuffer context, reads the pixels into an array, and then calls the OpenGL functions to DrawPixels.
I changed this example in order to create a texture from the pixels read: render offscreen, read the pixels into the array, and then initialize a texture with this color-bit array. But this looks very redundant to me. We render the image, copy it from graphics card memory into our memory (the array), only to later copy it back to the graphics card in order to display it on screen, just in a different rendering context. The copies I am making just to display the rendered texture look kind of stupid, so I tried a different approach using glCopyTexImage2D(), which unfortunately doesn't work. I'll show the code and explanations:
mypbuffer.Initialize(256, 256, false, false);
- The false values are for sharing context and sharing object. They are false because this fantastic graphics card doesn't support it.
Then I perform the usual initializations, to enable Blending, and GL_TEXTURE_2D.
CreateTexture();
mypbuffer.Activate();
int viewport[4];
glGetIntegerv(GL_VIEWPORT,(int*)viewport);
glViewport(0,0,xSize,ySize);
DrawScene(hDC);
//save data to texture using glCopyTexImage2D
glBindTexture(GL_TEXTURE_2D,texture);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, xSize, ySize, 0);
glClearColor(.0f, 0.5f, 0.5f, 1.0f); // Set The Clear Color To Medium Blue
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(viewport[0],viewport[1],viewport[2],viewport[3]);
// glBindTexture(GL_TEXTURE_2D,texture);
first = false;
mypbuffer.Deactivate();
- The DrawScene function is very simple: it just renders a triangle and a rectangle, which is supposed to be rendered offscreen (I HOPE). CreateTexture() creates an empty texture. The function should work, as it was tested in the previous way I described, and it works.
After this, in the main loop, I just do the following:
glClear(GL_COLOR_BUFFER_BIT);
glBindTexture(GL_TEXTURE_2D,texture);
glRotatef(theta, 0.0f, 0.0f, 0.01f);
glBegin(GL_QUADS);
//Front Face
glTexCoord2f(0.0f, 0.0f);
glVertex3f(-0.5, -0.5f, 0.5f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f( 0.5f, -0.5f, 0.5f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f( 0.5f, 0.5f, 0.5f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(-0.5f, 0.5f, 0.5f);
glEnd();
SwapBuffers(hDC);
theta = 0.10f;
Sleep (1);
The final result is just a window with a blue background; nothing actually got rendered. Any idea why this is happening? My graphics card doesn't support the WGL_ARB_render_texture extension, but that shouldn't be a problem when calling glCopyTexImage2D(), right?
My card doesn't support FBOs either.
What you must do is, sort of, "connect" your two OpenGL contexts so that the textures of your pbuffer context also show up in the main render context. The term you need to look for is "display list sharing". On Windows you connect the contexts retroactively using wglShareLists; on X11 and Mac OS X you must supply the handle of the context to be shared at context creation.
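On Windows that looks roughly like this (a minimal sketch; mainDC, pbufferDC and the error handling are illustrative):
HGLRC mainRC    = wglCreateContext(mainDC);
HGLRC pbufferRC = wglCreateContext(pbufferDC);
// Share display lists AND texture objects from mainRC into pbufferRC.
// Call this before pbufferRC accumulates any objects of its own.
if (!wglShareLists(mainRC, pbufferRC)) {
    // sharing failed, e.g. the contexts are incompatible
}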
An entirely different possibility, which works just as well, is reusing the same context on the pbuffer. It's a little-known fact that you can use an OpenGL render context not only on the drawable it was first created with, but on any drawable with compatible settings. So if your pbuffer matches your main window's pixel format, you can detach the render context from the main window and attach it to the pbuffer. Of course you then need low-level access to the main window's device context/drawable, which is normally hidden behind a framework.
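A minimal sketch of that single-context variant (handle names are illustrative, and it assumes the pbuffer's pixel format is compatible with the window's):
// Render into the pbuffer using the window's own render context,
// so `texture` lives in the very context that later displays it.
wglMakeCurrent(pbufferDC, windowRC);
DrawScene(hDC);
glBindTexture(GL_TEXTURE_2D, texture);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, xSize, ySize, 0);
wglMakeCurrent(windowDC, windowRC);   // reattach the context to the window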
You should check whether your OpenGL implementation supports framebuffer objects: these objects can serve as render targets, and they can have textures attached as color buffers, indeed rendering directly into a texture.
That would be the way to go; otherwise your method is the alternative.
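For completeness (the asker's card lacks it), render-to-texture with an FBO is a minimal sketch like this, assuming GL_EXT_framebuffer_object is available and `texture` is already created:
GLuint fbo;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, texture, 0);
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) == GL_FRAMEBUFFER_COMPLETE_EXT) {
    DrawScene(hDC);   // renders straight into `texture`, no copies at all
}
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);   // back to the window framebuffer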

Texture mapping with openGL

I was texture mapping a primitive, a quad to be exact. I had a problem where the texture was somehow rotated 90 degrees in the anticlockwise direction. I thought the problem was in the loading code of the texture, but it turned out to actually be a problem in the draw function.
So this was the code which drew the picture erroneously:
glVertex2f(0.0f, 0.0f); glTexCoord2f(0.0f, 1.0f);
glVertex2f(0.5f, 0.0f); glTexCoord2f(1.0f, 1.0f);
glVertex2f(0.5f, 0.5f); glTexCoord2f(1.0f, 0.0f);
glVertex2f(0.0f, 0.5f); glTexCoord2f(0.0f, 0.0f);
and this one drew it just as I intended it to be drawn:
glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f(0.5f, 0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f(0.5f, 0.5f);
glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.5f);
What causes this kind of behaviour? I really didn't think this would have such effects to the drawing.
I really didn't think this would have such effects to the drawing.
Think about it. What does glTexCoord do? It specifies the texture coordinate, correct? But the texture coordinate of what?
Yes, you know it specifies the texture coordinate of the next vertex, but OpenGL doesn't know that. All glTexCoord does is set the values you pass it into a piece of memory.
glVertex does something more. It sets the vertex position, but it also tells OpenGL, "Take all of the vertex values I've set so far and render a vertex with it." That's why you can't call glVertex outside of glBegin/glEnd, even though you can do that with glTexCoord, glColor, etc.
So when you do glTexCoord(...); glVertex(...), you're saying "set the current texture coordinate to X, then set the position to Y and render with these values." When you do glVertex(...); glTexCoord(...);, you're saying, "set the position to Y and render with the previously set values, then set the current texture coordinate to X."
It's a little late to be setting the texture coordinate after you've already told OpenGL to render a vertex.
OpenGL functions in a state-wise fashion. Many GL function calls serve to change the current state so that when you call some other functions, they can use the current state to do the proper operation.
In your situation, each glVertex2f() call uses the current texture coordinate state to define which part of the texture gets mapped on which vertex. In your first series of calls, the first call to glVertex2f() has no previous texture coordinate set, so it probably defaults to (0.0f, 0.0f), although it could also be undefined. The second call to glVertex2f() then uses the state set by your first call to glTexCoord2f(), the third call to glVertex2f() uses the state set by the second call to glTexCoord2f(), and so on.
In the future, make sure to set the proper GL state before you call the functions which use those states, and you should be good to go.
The order in which you call glVertex and glTexCoord definitely matters! Whenever you specify vertex attributes like glTexCoord, glColor, etc., they apply to all future vertices that you draw, until you change one of those attributes again. So in the erroneous example, your first vertex was drawn with some unspecified previous texture coordinate, the second vertex with texture coordinate (0.0, 1.0), etc.
Probably the best explanation there is online: Texture mapping - Tutorial
And also, just to make sure, the texture coordinates are as follows (and the order in which they are specified matters!):
(0,0) bottom left corner
(0,1) upper left corner
(1,0) bottom right corner
(1,1) upper right corner

gluCylinder() and texture coordinates offset / multiplier?

How can I set the texture coordinate offset and multiplier for the gluCylinder() and gluDisk() etc. functions?
So if the texture would normally start at point 0, I would like it to start at point 0.6 or 3.2, etc. By multiplier I mean that the texture would get either bigger or smaller.
The solution can't be glScalef(), because 1) I'm using normals, and 2) I want to adjust the texture start position as well.
Try using the texture matrix stack:
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(0.6f, 3.2f, 0.0f);
glScalef(2.0f, 2.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
drawObject();
The solution has nothing to do with the GLU functions; it is indeed glScalef (and glTranslatef for the offset adjustment), but applied to the texture matrix (assuming you don't use shaders). The texture matrix, selected by calling glMatrixMode with GL_TEXTURE, transforms the vertices' texture coordinates before they are interpolated and used to access the texture, no matter how these texture coordinates are computed (in this case by GLU, which just computes them on the CPU and calls glTexCoord2f).
So to let the texture start at (0.1,0.2) (in texture space, of course) and make it 2 times as large, you just call:
glMatrixMode(GL_TEXTURE);
glTranslatef(0.1f, 0.2f, 0.0f);
glScalef(0.5f, 0.5f, 1.0f);
before calling gluCylinder. But be sure to revert these changes afterwards, probably by wrapping them between glPushMatrix/glPopMatrix, as in the sketch below.
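Putting it together, a minimal sketch (`quad` is a hypothetical GLUquadric created elsewhere with gluNewQuadric, with gluQuadricTexture enabled):
glMatrixMode(GL_TEXTURE);
glPushMatrix();
glTranslatef(0.1f, 0.2f, 0.0f);   // start the texture at (0.1, 0.2)
glScalef(0.5f, 0.5f, 1.0f);       // halve the coordinates: texture appears twice as large
glMatrixMode(GL_MODELVIEW);

gluCylinder(quad, 1.0, 1.0, 2.0, 32, 8);

glMatrixMode(GL_TEXTURE);
glPopMatrix();                    // revert so other objects are unaffected
glMatrixMode(GL_MODELVIEW);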
But if you want to change the texture coordinates based on the world space coordinates, this might involve some more computation. And of course you can also use a vertex shader to have complete control over the texture coordinate generation.

Lighting issues in OpenGL

I have a triangle mesh that has no texture, but a set color (sort of blue) and alpha (0.7f). This mesh is generated at run time and the normals are correct. I find that with lighting on, the color of my object changes as it moves around the level. Also, the lighting doesn't look right. When I draw this object, this is the code:
glEnable( GL_COLOR_MATERIAL );
float matColor[] = { cur->GetRed(), cur->GetGreen(), cur->GetBlue(), cur->GetAlpha() };
float white[] = { 0.3f, 0.3f, 0.3f, 1.0f };
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, matColor);
glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, white);
Another odd thing I noticed is that the lighting fails when I disable GL_FRONT_AND_BACK and use just GL_FRONT or GL_BACK.
Here is my lighting setup (done once at beginning of renderer):
float m_lightAmbient[]  = { 0.2f, 0.2f, 0.2f, 1.0f };
float m_lightSpecular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
float m_lightPosition[] = { 0.0f, 1200.0f, 0.0f, 1.0f };
glLightfv(GL_LIGHT0, GL_AMBIENT, m_lightAmbient);
glLightfv(GL_LIGHT0, GL_SPECULAR, m_lightSpecular);
glLightfv(GL_LIGHT0, GL_POSITION, m_lightPosition);
EDIT: I've done a lot to make the normals "more" correct (since I am generating the surface myself), but the object's color still changes depending on where it is. Why is this? Does OpenGL have some special environment blending I don't know about?
EDIT: Turns out the color changing was because a previous texture was on the texture stack, and even though it wasn't being drawn, glMaterialfv was blending with it.
If your lighting fails when GL_FRONT_AND_BACK is disabled it's possible that your normals are flipped.
Could you post the code that initializes OpenGL? You're saying that all other meshes are drawn perfectly? Are you rendering them simultaneously?
#response to stusmith:
Z-testing won't help you with transparent triangles; you'll need per-triangle alpha sorting too. If you have an object that could at any time have overlapping triangles facing the camera (a concave object), you must draw the farthest triangles first to ensure blending is performed correctly, since Z-testing doesn't take transparency into account.
Consider these two overlapping (and transparent) triangles and think about what happens when that little overlapped region is drawn, with or without Z-testing. You'll probably reach the conclusion that the drawing order does, in fact, matter. Transparency sucks :P
/\ /\
/ \ / \
/ \/ \
/ /\ \
/_____/__\_____\
I'm not convinced that this is your problem, but alpha sorting is something you need to take into account when dealing with partly transparent objects.
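If you do need the sorting, here is a minimal sketch of per-triangle depth sorting (the Triangle type, camPos, and all names are illustrative, not from the question):
#include <stdlib.h>   // qsort

typedef struct { float v[3][3]; } Triangle;

static float camPos[3];   // set to the camera position each frame

// squared distance from the camera to the triangle's centroid
static float triDepth(const Triangle *t) {
    float d2 = 0.0f;
    for (int i = 0; i < 3; ++i) {
        float c = (t->v[0][i] + t->v[1][i] + t->v[2][i]) / 3.0f - camPos[i];
        d2 += c * c;
    }
    return d2;
}

static int cmpFarthestFirst(const void *a, const void *b) {
    float da = triDepth((const Triangle *)a);
    float db = triDepth((const Triangle *)b);
    return (da < db) - (da > db);   // farther triangles sort first
}

// each frame, before drawing the transparent mesh:
// qsort(tris, triCount, sizeof(Triangle), cmpFarthestFirst);
Recomputing the depth inside the comparator keeps the sketch short; for large meshes you would precompute the depths once per frame instead.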
If your triangles are alpha-blended, won't you have to sort your faces by z-order from the camera? Otherwise you could be rendering a face at the back of the object on top of a face at the front.
#response to sebastion:
Multiple draw calls; each object gets a glDrawArrays. Some are textured, some colored, all with normals. The GL init code is:
glMatrixMode(GL_MODELVIEW);
// Vertices!
glEnableClientState(GL_VERTEX_ARRAY);
// Depth func
glEnable(GL_DEPTH_TEST);
glDepthFunc( GL_LESS );
// Enable alpha blending
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// Lighting
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_AMBIENT, m_lightAmbient);
glLightfv(GL_LIGHT0, GL_SPECULAR, m_lightSpecular);
glLightfv(GL_LIGHT0, GL_POSITION, m_lightPosition);
// Culling
glDisable( GL_CULL_FACE );
// Smooth Shading
glShadeModel(GL_SMOOTH);
m_glSetupDone = true;
After this I have some camera setup, but that's completely standard: projection mode, frustum, modelview, look-at, translate.
Are you sure your normals are normalized?
If not, and you are specifying normals via glNormal calls, you could try letting OpenGL do the normalization for you. Keep in mind that this should be avoided for performance reasons, but you can test it out:
glEnable(GL_NORMALIZE);
This way you are telling OpenGL to rescale all the normal vectors supplied via glNormal.
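And if you'd rather normalize once on the CPU instead of paying for GL_NORMALIZE on every vertex, a hypothetical helper (a minimal sketch, not from the question's code):
#include <math.h>

static void normalize3(float n[3]) {
    float len = sqrtf(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    if (len > 0.0f) {              // skip degenerate zero-length normals
        n[0] /= len;
        n[1] /= len;
        n[2] /= len;
    }
}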
I had a transparency issue with my terrain display: slopes would seem transparent when looked at from a certain angle. It only happened when lighting was enabled in the shader. It turned out that I had not turned on depth testing, and from certain angles the terrain was overwritten by other terrain and displayed as semi-transparent.
TL;DR: check whether you have depth testing enabled; having it off may give transparency-like effects when lighting is involved.
https://learnopengl.com/Advanced-OpenGL/Depth-testing