OpenGL Lighting Problem - C++

I've been working on a game engine for a month now, and I've finished the basic OpenGL stuff. However, the one thing that I can't get to work like I expect it to is the lighting.
(Note: This is the first time I seriously worked with OpenGL)
What I want is close-to-realistic lighting, where surfaces facing the light are lit more brightly than those facing away or farther from it, and so on. The basic light should have a position and a color. This is how I thought it could be implemented:
float lightv[4] = {0.6f, 0.6f, 0.6f, 1.0f};     // diffuse color
float positionv[4] = {0.0f, 10.0f, 0.0f, 1.0f}; // w = 1 -> positional light
int lightID = GL_LIGHT0;
int attenuationType = GL_LINEAR_ATTENUATION;
float attenuationValue = 1.0f;

glLightf(lightID, attenuationType, attenuationValue);
glLightfv(lightID, GL_DIFFUSE, lightv);
glLightfv(lightID, GL_POSITION, positionv);
glEnable(lightID);
Instead of doing what I expect, it gives me lighting as if there were a light where the camera is! Every surface gets the same lighting!
What am I doing wrong?
Thank you, I appreciate it.

The first thing to do is make sure you have glEnable(GL_LIGHTING) called at some point. Past that, check that your normals are correct: for lighting to work properly, you need a normal set for every vertex you draw. If you have already set your normals, make sure they are all of unit length; if they do not have a length of one, lighting can act oddly. If all of that is as it should be, keep in mind that when you set the position of a light, it is transformed by the current modelview matrix as if it were a vertex. If none of those things are relevant, I'll see if I can think of something further.
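As a minimal sketch of the normals point (immediate mode, values purely illustrative): give every vertex a unit-length normal, and glEnable(GL_NORMALIZE) asks GL to renormalize them if your transforms scale them.
glEnable(GL_LIGHTING);
glEnable(GL_NORMALIZE); // renormalizes normals scaled by the modelview matrix
glBegin(GL_TRIANGLES);
glNormal3f(0.0f, 1.0f, 0.0f); // unit-length normal for the following vertex
glVertex3f(-1.0f, 0.0f, -1.0f);
glNormal3f(0.0f, 1.0f, 0.0f);
glVertex3f(1.0f, 0.0f, -1.0f);
glNormal3f(0.0f, 1.0f, 0.0f);
glVertex3f(0.0f, 0.0f, 1.0f);
glEnd();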

Set your light position after you set up your GL_MODELVIEW transform, since it's affected by the current transform matrix. Otherwise you get the "headlamp" effect you have discovered.
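In code, the ordering looks like this (a sketch; the gluLookAt values are placeholders):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 5.0, 20.0,  // eye position (placeholder values)
          0.0, 0.0, 0.0,   // look-at target
          0.0, 1.0, 0.0);  // up vector
// The light position is now transformed by the camera's modelview matrix,
// so the light stays fixed in the world instead of following the camera.
float positionv[4] = {0.0f, 10.0f, 0.0f, 1.0f};
glLightfv(GL_LIGHT0, GL_POSITION, positionv);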

Related

OpenGL - understanding color changes due to lighting

I am writing some software in C to render a yellow and a brown cube. However, once I added lighting, all the colors changed to light blue. Could someone explain why the colors changed, and how I can prevent such an extreme change?
This is the code I used for the light:
GLfloat color1[] = {0.633, 0.237, 0.170}; // changed to blue
void initLight()
{
    GLfloat red[] = {1.0, 0.0, 0.0, 1.0};
    GLfloat white[] = {1.0, 1.0, 1.0, 1.0};
    GLfloat blueGreen[] = {0.0, 0.4, 1.0, 1.0};

    glLightModelfv(GL_LIGHT_MODEL_AMBIENT, white);
    glLightfv(GL_LIGHT0, GL_AMBIENT, white);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, blueGreen);
    glMaterialf(GL_FRONT, GL_SHININESS, 127.0);
    glEnable(GL_LIGHT0);
}
Based on the fact that you're using immediate mode, I assume you wrote something like this to set up the vertices?
glBegin(GL_TRIANGLES);
glColor3f(/*...*/); // the current color applies to the vertices that follow it
glVertex3f(/*...*/);
/*...*/
glEnd();
When you add lighting to the scene, the renderer no longer considers the color values you specified for the individual vertices, and instead substitutes the default white/grey material (which is what lets the blue-green light tint those faces). To fix that, you need to tell the renderer to treat the vertex colors as material colors. This code should be sufficient:
glEnable(GL_COLOR_MATERIAL);
This is, of course, also one reason why you really, really should not be using OpenGL's immediate mode or fixed-function pipeline rendering: it leads to surprises like this. I recommend the tutorial found here for learning modern OpenGL.
Edit: Fixed a Typo
DOUBLE EDIT COMBO:
Alright, so there's a few other things you'll need to take into account.
GLfloat lp0[] = {4.0,4.0,3.0,0.0};
Generally speaking, position vectors should have their w component (the last one) set to 1, not 0. With w = 0, OpenGL treats the light as a directional light infinitely far away in the direction <4,4,3>; with w = 1 it is a positional light at that point. You may want to change it to
GLfloat lp0[] = {4.0,4.0,3.0,1.0};
Beyond that, you'll want to experiment with the position, particularly with where you specify it relative to your matrix setup. Again, this is yet another reason not to use the FFP: it's difficult to tell where the light is being positioned relative to the objects. Putting it at <4,4,3> in world space only makes sense if you know for certain that its position is being transformed by the modelview matrix correctly, and at least in the code that I'm seeing, that doesn't appear to be the case.
Immediately after:
glEnable(GL_COLOR_MATERIAL);
You probably should also call:
glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);
Also, your ambient light is very bright. You may need to bring the intensity of your ambient light down somewhat to get the proper effect from the tinted diffuse light.
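Putting the pieces together, a corrected initLight() might look like this (a sketch based on the code above; the toned-down ambient values are illustrative):
void initLight()
{
    GLfloat dimWhite[] = {0.2f, 0.2f, 0.2f, 1.0f}; // much dimmer ambient
    GLfloat blueGreen[] = {0.0f, 0.4f, 1.0f, 1.0f};
    GLfloat lp0[] = {4.0f, 4.0f, 3.0f, 1.0f};      // w = 1: positional light

    glLightModelfv(GL_LIGHT_MODEL_AMBIENT, dimWhite);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, blueGreen);
    glLightfv(GL_LIGHT0, GL_POSITION, lp0); // transformed by the current modelview matrix
    glMaterialf(GL_FRONT, GL_SHININESS, 127.0f);

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glEnable(GL_COLOR_MATERIAL); // use glColor values as the material color
    glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);
}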

OpenGL Fog does not appear

I wanted to create a coordinate system with some lines in it, and wanted to display one window with depth-fog.
My "fog-code" looks like this:
glEnable(GL_FOG);
float fogColor[4] = {0.8, 0.8, 0.8, 1};
glFogi(GL_FOG_MODE, GL_LINEAR);
glFogfv(GL_FOG_COLOR, fogColor);
glFogf(GL_FOG_DENSITY, 0.8); // only used by the exponential modes
glHint(GL_FOG_HINT, GL_NICEST);
glFogf(GL_FOG_START, 0.1);   // eye-space distance where fog begins
glFogf(GL_FOG_END, 200);     // eye-space distance where fog is total
It is placed in my main function (I don't know yet if this could cause any problems, but just to be sure), right after the init() call and before my display function call.
Update:
The problem was actually really simple: I worked solely on the GL_MODELVIEW matrix, thinking there was no real difference from the GL_PROJECTION matrix. According to this article and the post from Reto Koradi, there is a pretty significant difference. I highly recommend reading the full article to better understand the system behind OpenGL (it definitely helped me a lot).
The corrected code (for my init()-call) would then be:
void init2()
{
    glClearColor(1.0, 1.0, 1.0, 0.0);         // set background color to white
    glMatrixMode(GL_PROJECTION);              // switch to projection mode
    glLoadIdentity();                         // initialize the projection matrix
    glOrtho(-300, 300, -300, 300, -800, 800); // map coordinates to the viewport
    gluLookAt(2,2,10, 0,0,-0.5, 0,1,0);
    glMatrixMode(GL_MODELVIEW);               // now switch to modelview mode
}
The fog equation is evaluated based on the value of c (quoting the OpenGL 2.1 spec):
Otherwise, if the fog source is FRAGMENT DEPTH, then c is the eye-coordinate distance from the eye, (0,0,0,1) in eye coordinates, to the fragment center.
FRAGMENT_DEPTH is the default, so this applies in your case. Eye coordinates are the coordinates after the model-view transformation has been applied, so c is the distance from the origin after applying the model-view transform. The spec also allows implementations to use the absolute value of the z-coordinate instead of the distance from the origin.
One small observation on your code: GL_FOG_DENSITY does not matter if the mode is GL_LINEAR. It is only used for the exponential modes.
For GL_LINEAR mode, the behavior is pretty much as you would expect. The original fragment color is linearly blended with the fog color within the range GL_FOG_START to GL_FOG_END. So everything smaller than GL_FOG_START has the original fragment color, everything after GL_FOG_END has the fog color, and the values in between are linear interpolations between the two, with gradually more fog color and less original fragment color.
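Spelled out, GL_LINEAR computes the blend factor
$$f = \frac{\text{GL\_FOG\_END} - c}{\text{GL\_FOG\_END} - \text{GL\_FOG\_START}}, \quad \text{clamped to } [0, 1],$$
and the displayed color is $f \cdot C_\text{fragment} + (1 - f) \cdot C_\text{fog}$, where $c$ is the eye-space distance quoted above.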
To get good results, you'll have to play with the GL_FOG_START and GL_FOG_END values. If you don't get as much fog as desired, you can start by reducing the value of GL_FOG_END.
I peeked at the linked code, and noticed one problem: You're specifying the projection matrix while you're in GL_MODELVIEW matrix mode. You need to be careful that you specify the matrices in the correct matrix mode, which is GL_PROJECTION for the projection matrix.
Mixing up the matrix modes does not have an adverse effect on the resulting vertex coordinates, since both the model-view and projection matrices are applied to the vertices. So for very simple use, you can sometimes get away with using the wrong mode. But once lighting comes into play, it is critical to use the correct matrix mode, since lighting calculations are done after the model-view transformation has been applied, but before the projection transformation.
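For reference, the conventional split looks like this (a sketch reusing the values from the init code above; the camera transform goes on the modelview stack so that lighting and fog see it):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-300, 300, -300, 300, -800, 800); // projection only

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(2, 2, 10,  0, 0, -0.5,  0, 1, 0); // camera belongs here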
And yes, as others already pointed out, a lot of this actually gets simpler if you write your own shaders. The fact that I quoted the OpenGL 2.1 spec is probably a hint that this functionality is old and obsolete.
Like too many things that OpenGL-1.1 did, fog is calculated on a per-vertex level. So if you have a long line with only two points, fog is calculated only for the end points and the color is then interpolated linearly in between. Depending on how your line is aligned and which shading mode you use, this may result in no apparent fogging.
Two solutions:
Subdivide the lines into a couple dozen line segments, so as to sample the fog at more than two points (a sketch follows after this list).
or
Use a fragment shader instead and calculate the fog term therein. This is what I suggest doing.
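A sketch of the first option (the segment count is arbitrary): emit many short segments instead of one long line, so fog is evaluated at intermediate vertices.
// Subdivide a line from a to b into n segments so per-vertex fog
// is sampled at intermediate points; n = 32 is an arbitrary choice.
void drawSubdividedLine(const float a[3], const float b[3], int n = 32)
{
    glBegin(GL_LINE_STRIP);
    for (int i = 0; i <= n; ++i) {
        float t = static_cast<float>(i) / n;
        glVertex3f(a[0] + t * (b[0] - a[0]),
                   a[1] + t * (b[1] - a[1]),
                   a[2] + t * (b[2] - a[2]));
    }
    glEnd();
}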

GLSL effect on low poly surface

I've got a vertex/fragment shader with a point light and attenuation. I need to apply this shader to a cube face and see a gradation of colors across it. If I use a high-poly mesh,
everything works quite well and the effect is nice; my goal is to have the same gradient on a low-poly mesh.
I tried gl_FragColor = vec4(n, 1) (where n is the normal), but I get a solid color per surface.
Can this be the reason why I don't see a gradation?
cheers
It is correct behaviour that you are observing. A cube is perfectly flat, so its normals are the same at every vertex of a face.
Note, however, that Phong lighting calculations should also use the fragment's position, which is interpolated between the 3 (or 4, when using quads) vertices of the given (sub)face. That position can be used to calculate the angle between the light direction and the eye vector at the given fragment.
I've experienced similar problems lately, and I figured out that your cube really needs to shine if you want to see something non-flat; and I mean literally. Set the shininess to a reasonably high value (250-500). You should see a focused, moving highlight on the face that is reflecting directly towards you. If not, your lighting shader is probably wrong.
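For reference, a minimal per-fragment Phong sketch (illustrative names and colors, not the asker's shader), held in a C++ string ready for glShaderSource(); because the light and eye vectors are recomputed from the interpolated fragment position, even a flat face gets a gradient:
// Minimal per-fragment Phong fragment shader; names are illustrative.
const char* fragmentSrc = R"(
    varying vec3 fragPos;     // eye-space position from the vertex shader
    varying vec3 fragNormal;  // eye-space normal
    uniform vec3 lightPos;    // eye-space light position
    uniform float shininess;  // try 250.0 - 500.0, as suggested above
    void main() {
        vec3 n = normalize(fragNormal);
        vec3 l = normalize(lightPos - fragPos); // varies per fragment
        vec3 v = normalize(-fragPos);           // the eye sits at the origin
        vec3 r = reflect(-l, n);
        float diff = max(dot(n, l), 0.0);
        float spec = pow(max(dot(r, v), 0.0), shininess);
        gl_FragColor = vec4(vec3(0.1) + diff * vec3(0.8) + spec, 1.0);
    }
)";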

Stop light from getting vertex transformations

I'm trying to build a scene where there's a perspective looking at a textured cube and a light.
What's supposed to happen is that the light should stay somewhere above the cube, while the cube rotates (against its center) below that light.
What actually happens is that the light also rotates when the cube rotates.
I've tried googling this and looking at similar SO questions but found nothing that helped. Following the OpenGL docs, I understand that the light position is affected by modelview matrix transformations, but I also understood that pushing/popping the matrix before/after those transformations would make it so that the light is no longer affected by them. However, I can't make it work: the light keeps rotating no matter whether I isolate the modelview transformations (for the cube) inside push/pop or load the identity matrix afterwards.
Here's the code (it's in Java using JOGL):
//this array goes into lightPositionBuffer
private float[] lightPosition = {0, 0, 0, 1.0f};

public void display(GLAutoDrawable gLDrawable) {
    //code for clearing screen, binding the textures, etc.
    gl.glEnableClientState(GLPointerFunc.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GLPointerFunc.GL_TEXTURE_COORD_ARRAY);
    gl.glFrontFace(GL2.GL_CCW);
    gl.glVertexPointer(3, GL.GL_FLOAT, 0, cubeVerticesBuffer);
    gl.glTexCoordPointer(2, GL.GL_FLOAT, 0, cubeTextureCoordsBuffer);

    //rotate the cube
    gl.glPushMatrix();
    gl.glTranslatef(cubeCenterXInitial, cubeCenterYInitial, cubeCenterZInitial);
    gl.glRotatef(rotationAmountY, 1, 0, 0);
    gl.glTranslatef(-cubeCenterXInitial, -cubeCenterYInitial, -cubeCenterZInitial);
    rotationAmountY += 1.5f;
    gl.glDrawElements(GL.GL_TRIANGLE_STRIP, cubeIndexes.length, GL.GL_UNSIGNED_BYTE, cubeIndexesBuffer);
    gl.glPopMatrix();

    gl.glDisableClientState(GL2.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL2.GL_TEXTURE_COORD_ARRAY);

    //Position The Light:
    gl.glPushMatrix();
    //lightPositionBuffer is a FloatBuffer containing the lightPosition float array declared above:
    gl.glLightfv(GL2.GL_LIGHT0, GL2.GL_POSITION, lightPositionBuffer);
    gl.glPopMatrix();

    //Enable Light 0
    gl.glEnable(GL2.GL_LIGHT0);
    gl.glEnable(GL2.GL_LIGHTING);
}
Here's what's happening: [screenshots of the lighting artifact omitted]
Ok I've fixed it. Here are the two important points:
As luke was mentioning in his answer, there's nothing wrong with my code as far as transformations are concerned. As it is, the transformations only affect the mesh; the light in fact stays still.
The reason why it looks like my cube has a light rotating around it (when in fact the light is still) is, in one word, NORMALS.
In more words: it's the lack of normal declarations for my cube's faces. Yep, who knew; I didn't. When you have some faces (like triangles and quads) and a light, not declaring normals will NOT simply leave your surfaces unaffected by the light. They are in fact affected, but it all looks weird (as you can see from my screenshots in the question text).
So basically what I did to fix it is look at Nehe's (I love those guys) lesson 07 :
http://nehe.gamedev.net/tutorial/texture_filters_lighting__keyboard_control/15002/
At the bottom of that page you have the code in a ton of languages and libraries; I personally used the JOGL version, but any of them will do fine.
Then I also looked at the OpenGL docs just to see how I can declare my vertices/indices/normals as arrays (rather than individually):
http://www.songho.ca/opengl/gl_vertexarray.html
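A minimal C++ sketch of that vertex-array idea (the data is illustrative, a single lit triangle rather than the actual cube):
// Vertex arrays with a matching normal array, so lighting has
// per-vertex normals to work with.
GLfloat vertices[] = { -1.0f, 0.0f, 0.0f,
                        1.0f, 0.0f, 0.0f,
                        0.0f, 1.0f, 0.0f };
GLfloat normals[] =  {  0.0f, 0.0f, 1.0f,  // one unit normal per vertex
                        0.0f, 0.0f, 1.0f,
                        0.0f, 0.0f, 1.0f };

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glNormalPointer(GL_FLOAT, 0, normals); // no size argument: normals are always 3 components

glDrawArrays(GL_TRIANGLES, 0, 3);

glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);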
Try adding a glLoadIdentity() right after your second glPushMatrix() call:
gl.glPushMatrix();
gl.glLoadIdentity(); // This
gl.glLightfv(GL2.GL_LIGHT0, GL2.GL_POSITION, lightPositionBuffer);
gl.glPopMatrix();
As it stands in your original post, that push/pop pair isn't doing anything; there aren't any matrix operations between them. glPushMatrix() does not reset the active matrix, it only preserves the current set of values so you can return to that state later.
I don't think your error is in the code you've posted. Can you provide a minimal working example that still exhibits your problem?

Per-model local rotation breaks OpenGL lighting

I'm having trouble with OpenGL lighting. My issue is this: when the object has 0 rotation, the lighting is fine; otherwise the lighting works, but rotates with the object instead of staying fixed with respect to the scene.
Sounds simple, right? The OpenGL FAQ has some simple advice on this: coordinates passed to glLightfv(GL_LIGHT0, GL_POSITION...) are multiplied by the current MODELVIEW matrix. So I must be calling this at the wrong place... except I'm not. I've copied the MODELVIEW matrix into a variable to debug, and it stays the same regardless of how my object is rotated. So it has to be something else, but I'm at a loss as to what.
I draw the model using glDrawArrays, and position my model within the world using glMultMatrixf on a matrix built from a rotation quaternion and a translation. All of this takes place within glPushMatrix/glPopMatrix, so it shouldn't have any side effects on the light.
A cut down version of my rendering process looks like this:
//Setup our camera
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
cameraMatrix = translate(Vector3D(Pos.mX,Pos.mY,Pos.mZ)) * camRot.QuatToMatrix();
glMultMatrixf((GLfloat*)&cameraMatrix);
//Position the light now
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
GLfloat lp[4] = {lightPos.mX, lightPos.mY, lightPos.mZ, 1.0f};
glLightfv(GL_LIGHT0, GL_POSITION,(GLfloat*) lp);
//Loop, doing this for each model: (mRot, mPos, and mi are model member variables)
matrix = translate(Vector3D(mPos.mX,mPos.mY,mPos.mZ)) * mRot.QuatToMatrix();
glPushMatrix();
glMultMatrixf((GLfloat*)&matrix);
glBindBuffer(GL_ARRAY_BUFFER, mi->mVertexBufHandle); //Bind the model VBO.
glDrawArrays(GL_TRIANGLES, 0, mi->verts); //Draw the object
glPopMatrix();
I thought the normals might be messed up, but when I render them out they look fine. Is there anything else that might affect OpenGL lighting? The FAQ mentions:
If your light source is part of a light fixture, you also may need to specify a modeling transform, so the light position is in the same location as the surrounding fixture geometry.
I took this to mean that you'd need to translate the light into the scene, kind of a no-brainer... but does it mean something else?
It might be minor, but in this line:
glLightfv(GL_LIGHT0, GL_POSITION,(GLfloat*) &lp);
remove the & (address operator). lp will already give you the array-address.
This was a while back, but I did eventually figure out the problem. The issue I thought I was having was that the light's position got translated wrongly. Picture this: the light was located at 0,0,0, but then I translated and rotated my mesh. If that had been the case, I'd have had to do as suggested in the other answers and make certain I was placing my glLightfv calls in the right place.
The actual problem turned out to be much simpler, yet much more insidious. It turns out I wasn't setting glNormalPointer correctly, so it was being fed garbage data. While debugging, I'd render the normals to check that they were correct, but when doing so I'd manually draw them based on the positions I'd calculated. A recommendation to future debuggers: when drawing your debug normal rays, make sure you feed the debug function the same data as OpenGL gets. In my case, this would mean pointing my normal-ray draw function's glVertexPointer to the same place as the model's glNormalPointer.
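A sketch of that debugging idea (names are illustrative; the arrays are assumed to be tightly packed x/y/z triples): draw each normal as a short unlit ray, reading from the very same arrays the pointers were fed.
// Visualize normals by reading the exact arrays handed to
// glVertexPointer/glNormalPointer, so the debug view can't lie.
void drawDebugNormals(const GLfloat* vertices, const GLfloat* normals,
                      int vertexCount, float rayLength = 0.25f)
{
    glDisable(GL_LIGHTING); // draw the rays unlit
    glBegin(GL_LINES);
    for (int i = 0; i < vertexCount; ++i) {
        const GLfloat* v = vertices + 3 * i;
        const GLfloat* n = normals + 3 * i;
        glVertex3f(v[0], v[1], v[2]);
        glVertex3f(v[0] + n[0] * rayLength,
                   v[1] + n[1] * rayLength,
                   v[2] + n[2] * rayLength);
    }
    glEnd();
    glEnable(GL_LIGHTING);
}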
Basically an OpenGL light behaves like a vertex. So in your code it's transformed by cameraMatrix, while your meshes are transformed by cameraMatrix * matrix. Now, it looks like both cameraMatrix and matrix contain mrot.QuatToMatrix(), that is: there is a single rotation matrix there, and the light gets rotated once, while the objects get rotated twice. It doesn't look right to me, unless your actual code is different; the mRot matrix you use for each mesh should be its own, e.g. mRot[meshIndex].