Per-model local rotation breaks OpenGL lighting - opengl

I'm having trouble with OpenGL lighting. My issue is this: when the object has zero rotation, the lighting is fine; otherwise the lighting still works, but it rotates with the object instead of staying fixed with respect to the scene.
Sounds simple, right? The OpenGL FAQ has some straightforward advice on this: coordinates passed to glLightfv(GL_LIGHT0, GL_POSITION, ...) are multiplied by the current MODELVIEW matrix. So I must be calling it in the wrong place... except I'm not. I've copied the MODELVIEW matrix into a variable to debug, and it stays the same regardless of how my object is rotated. So it has to be something else, but I'm at a loss as to what.
I draw the model using glDrawArrays, and position my model within the world using glMultMatrixf on a matrix built from a rotation quaternion and a translation. All of this takes place within glPushMatrix/glPopMatrix, so it shouldn't have any side effects on the light.
A cut down version of my rendering process looks like this:
//Setup our camera
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
cameraMatrix = translate(Vector3D(Pos.mX,Pos.mY,Pos.mZ)) * camRot.QuatToMatrix();
glMultMatrixf((GLfloat*)&cameraMatrix);
//Position the light now
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
GLfloat lp[4] = {lightPos.mX, lightPos.mY, lightPos.mZ, 1.0f};
glLightfv(GL_LIGHT0, GL_POSITION,(GLfloat*) lp);
//Loop, doing this for each model: (mRot, mPos, and mi are model member variables)
matrix = translate(Vector3D(mPos.mX,mPos.mY,mPos.mZ)) * mRot.QuatToMatrix();
glPushMatrix();
glMultMatrixf((GLfloat*)&matrix);
glBindBuffer(GL_ARRAY_BUFFER, mi->mVertexBufHandle); //Bind the model VBO.
glDrawArrays(GL_TRIANGLES, 0, mi->verts); //Draw the object
glPopMatrix();
I thought the normals might be messed up, but when I render them out they look fine. Is there anything else that might affect OpenGL lighting? The FAQ mentions:
"If your light source is part of a light fixture, you also may need to specify a modeling transform, so the light position is in the same location as the surrounding fixture geometry."
I took this to mean that you'd need to translate the light into the scene, kind of a no-brainer... but does it mean something else?
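As I understand it, that would just look something like the following (lampMatrix and drawLampGeometry are placeholders, not my actual code):
// A light "fixture": the bulb position is specified under the same
// modeling transform as the lamp geometry, so both stay together.
glPushMatrix();
glMultMatrixf((GLfloat*)&lampMatrix);             // places the fixture in the world
GLfloat bulbPos[4] = {0.0f, 0.0f, 0.0f, 1.0f};    // the bulb at the fixture's local origin
glLightfv(GL_LIGHT0, GL_POSITION, bulbPos);
// drawLampGeometry();
glPopMatrix();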

It might be minor, but in this line:
glLightfv(GL_LIGHT0, GL_POSITION,(GLfloat*) &lp);
remove the & (address-of operator). The array name lp already decays to a pointer to its first element.
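In other words, just that one call, with no cast and no address-of operator:
glLightfv(GL_LIGHT0, GL_POSITION, lp);   // lp already decays to a GLfloat* pointing at lp[0]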

This was a while back, but I did eventually figure out the problem. The issue I thought I was having was that the light's position was being transformed incorrectly. Picture this: the light sits at 0,0,0, but then I translate and rotate my mesh. If that had been the case, I'd have had to do as suggested in the other answers and make certain I was placing my glLightfv calls in the right place.
The actual problem turned out to be much simpler, yet much more insidious: I wasn't setting up glNormalPointer correctly, so it was being fed garbage data. While debugging, I'd render the normals to check that they were correct, but when doing so I'd draw them manually based on the positions I'd calculated. A recommendation to future debuggers: when drawing your debug normal rays, make sure you feed the debug function the same data OpenGL gets. In my case, that would mean pointing my normal-ray draw function's glVertexPointer at the same place as the model's glNormalPointer.
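For illustration, a sketch of the kind of setup that keeps the two in sync, assuming an interleaved VBO with three position floats followed by three normal floats per vertex (the layout, and reusing mi->mVertexBufHandle, are assumptions; the actual buffer format isn't shown in the question):
// Assumed interleaved layout per vertex: px, py, pz, nx, ny, nz
const GLsizei stride = 6 * sizeof(GLfloat);
glBindBuffer(GL_ARRAY_BUFFER, mi->mVertexBufHandle);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, stride, (const GLvoid*)0);
glNormalPointer(GL_FLOAT, stride, (const GLvoid*)(3 * sizeof(GLfloat))); // same buffer, offset past the position
glDrawArrays(GL_TRIANGLES, 0, mi->verts);
// A debug normal-ray pass should build its rays from this same interleaved data
// (e.g. the CPU-side copy used to fill the VBO), not from separately recomputed normals.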

Basically, an OpenGL light behaves like a vertex. So in your code it's transformed by cameraMatrix, while your meshes are transformed by cameraMatrix * matrix. Now, it looks like both cameraMatrix and matrix contain mRot.QuatToMatrix(), that is: there is a single rotation matrix there, and the light gets rotated once while the objects get rotated twice. It doesn't look right to me, unless your actual code is different; the mRot matrix you use for each mesh should be its own, e.g. mRot[meshIndex].

Related

Stop light from getting vertices transformations

I'm trying to build a scene where there's a perspective camera looking at a textured cube, and a light.
What's supposed to happen is that the light stays somewhere above the cube, while the cube rotates (about its center) below that light.
What actually happens is that the light also rotates when the cube rotates.
I've tried googling this and looking at similar SO questions, but I found nothing that helped. From the OpenGL docs I understand that the light position is affected by modelview matrix transformations, but I also understood that pushing/popping the matrix before/after those transformations would make it so the light is no longer affected by them. However, I can't make it work: the light keeps rotating whether I isolate the modelview transformations (for the cube) inside push/pop or load the identity matrix afterwards.
Here's the code (it's in Java using JOGL):
//this array goes into lightPositionBuffer
private float[] lightPosition = {0, 0, 0, 1.0f};
public void display(GLAutoDrawable gLDrawable) {
//code for clearing screen, binding the textures, etc.
gl.glEnableClientState(GLPointerFunc.GL_VERTEX_ARRAY);
gl.glEnableClientState(GLPointerFunc.GL_TEXTURE_COORD_ARRAY);
gl.glFrontFace(GL2.GL_CCW);
gl.glVertexPointer(3, GL.GL_FLOAT, 0, cubeVerticesBuffer);
gl.glTexCoordPointer(2, GL.GL_FLOAT, 0, cubeTextureCoordsBuffer);
//rotate the cube
gl.glPushMatrix();
gl.glTranslatef(cubeCenterXInitial,cubeCenterYInitial,cubeCenterZInitial);
gl.glRotatef(rotationAmountY, 1, 0, 0);
gl.glTranslatef(-cubeCenterXInitial,-cubeCenterYInitial,-cubeCenterZInitial);
rotationAmountY+=1.5f;
gl.glDrawElements(GL.GL_TRIANGLE_STRIP, cubeIndexes.length, GL.GL_UNSIGNED_BYTE, cubeIndexesBuffer);
gl.glPopMatrix();
gl.glDisableClientState(GL2.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL2.GL_TEXTURE_COORD_ARRAY);
//Position The Light:
gl.glPushMatrix();
//lightPositionBuffer is a FloatBuffer containing the lightPosition float array declared above:
gl.glLightfv(GL2.GL_LIGHT0, GL2.GL_POSITION, lightPositionBuffer);
gl.glPopMatrix();
//Enable Light 0
gl.glEnable(GL2.GL_LIGHT0);
gl.glEnable(GL2.GL_LIGHTING);
}
Here's what's happening (screenshots omitted): the lighting on the cube changes as the cube rotates, as if the light were rotating along with it.
Ok I've fixed it. Here's the two important points:
As luke mentioned in his answer, there's nothing wrong with my code as far as transformations are concerned. As written, the transformations only affect the mesh; the light does in fact stay still.
The reason why it looks like my cube has a light rotating around it (when in fact the light is still) is, in one word, NORMALS.
In more words: it was the lack of normal declarations for my cube's faces. Yep, who knew; I didn't. When you have some faces (like triangles and quads) and a light, not declaring normals will NOT simply leave your surfaces unaffected by the light. They are in fact affected, but the results are all wrong (as you can see from my screenshots in the question text).
So basically, what I did to fix it was look at NeHe's (I love those guys) lesson 07:
http://nehe.gamedev.net/tutorial/texture_filters_lighting__keyboard_control/15002/
At the bottom of the page there you have the code in a ton of languages and libraries; I personally used the JOGL version, but any of them will do fine.
Then I also looked at the OpenGL docs just to see how I could declare my vertices/indices/normals as arrays (rather than individually):
http://www.songho.ca/opengl/gl_vertexarray.html
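For reference, the core of the fix is just pairing a normal array with the vertex array. A minimal C-style sketch follows (the JOGL calls have the same names behind the gl. prefix; the cube-face values below are illustrative, not the question's actual data):
// One normal per vertex, matching the vertex array order (here: the cube's top face, +Y up).
static const GLfloat faceVerts[]   = { -1,1,-1,  1,1,-1,  1,1,1,  -1,1,1 };
static const GLfloat faceNormals[] = {  0,1,0,   0,1,0,   0,1,0,   0,1,0 };
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, faceVerts);
glNormalPointer(GL_FLOAT, 0, faceNormals);
glDrawArrays(GL_QUADS, 0, 4);   // lighting now has correct per-vertex normals to work with
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);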
Try adding a glLoadIdentity() right before your second glPushMatrix() call:
gl.glPushMatrix();
gl.glLoadIdentity(); // This
gl.glLightfv(GL2.GL_LIGHT0, GL2.GL_POSITION, lightPositionBuffer);
gl.glPopMatrix();
As it stands in your original post, that push/pop pair isn't doing anything: there are no matrix operations between the two calls. glPushMatrix() does not reset the active matrix; it only preserves the current values so you can return to that state later.
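A tiny illustration of that point (a standalone example, not the code from the question):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 2.0f, 0.0f);      // modelview is now a translation
glPushMatrix();                      // saves the translation; it does NOT reset the matrix
glRotatef(45.0f, 0.0f, 1.0f, 0.0f);  // modelview = translation * rotation
// ... draw something here ...
glPopMatrix();                       // back to just the translation
// A glLightfv(GL_LIGHT0, GL_POSITION, ...) issued here is still
// transformed by the translation, because push/pop never cleared it.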
I don't think your error is in the code you've posted. Can you provide a minimal working example that still exhibits your problem?

How to use gluPerspective only once?

glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
//set viewpoint
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(VIEW_ANGLE,Screen_Ratio,NEAR_CLIP,FAR_CLIP);
gluLookAt(0,5,5, 0,0,0, 0,1,0);
//transform model 1
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(Theta, 0,1,0);
//draw model 1
glBegin(GL_QUADS);
...
glEnd();
The code above works fine, but is there any way to remove the call to gluPerspective?
What I mean is, I would like to call it only once in initialization, instead of repeatedly during each rendering.
You call gluPerspective there because it belongs there. OpenGL is not a scene graph where you initialize things; it's a state-driven drawing API. The projection matrix is a piece of state, and every serious graphics application changes this state multiple times over the course of rendering a single frame.
OpenGL does not know about geometric objects, positions and cameras. It just pushes points, lines and triangles through a processing pipeline and draws the result to the screen. After something has been drawn, OpenGL has no recollection of it whatsoever.
"I mean calling it only once in initialization."
OpenGL is not initialized (apart from the creation of the rendering context, and even that is really part of the operating system's graphics stack, not OpenGL). Sure, you upload textures and buffer object data to it, but that can happen at any time.
Do not use gluLookAt on the projection matrix: it defines the camera/view and therefore belongs on the modelview matrix, usually as the left-most transformation (the first after glLoadIdentity), where it makes up the "view" part of "modelview". Although it also works your way, it's conceptually wrong. This would also solve your issue, since then you simply don't have to touch the projection matrix every frame.
But actually datenwolf's approach is more conceptually clean regarding OpenGL's state machine architecture.
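To make that concrete, here is a minimal sketch of the structure these answers describe, reusing the constants from the question (the function names and the setup/draw split are illustrative, not part of the original code):
// Called once at startup and again whenever the window is resized.
void setupProjection(void)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(VIEW_ANGLE, Screen_Ratio, NEAR_CLIP, FAR_CLIP);
    glMatrixMode(GL_MODELVIEW);   // leave the modelview matrix active
}
// Called every frame.
void drawFrame(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(0, 5, 5, 0, 0, 0, 0, 1, 0);  // the "view" part of modelview
    glRotatef(Theta, 0, 1, 0);             // the "model" part
    // draw model 1 ...
}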
If you don't call glLoadIdentity() (which resets the current matrix to the identity, i.e. undoes what gluPerspective() has done) every frame, and instead carefully push/pop the transform matrices, you can quite happily get away with calling gluPerspective() only at initialization. Usually it's far easier just to load the identity each time you start drawing and set things up again, e.g.:
// Initialisation
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(...);
glMatrixMode(GL_MODELVIEW);
Then later on:
// Drawing each frame
glClear(...);
glPushMatrix();
gluLookAt(...);
// draw stuff
glPopMatrix();

OpenGL Lighting Problem

I've been working on a game engine for a month now, and I've finished the basic OpenGL stuff. However, the one thing that I can't get to work like I expect it to is the lighting.
(Note: This is the first time I seriously worked with OpenGL)
What I want is a close-to-realistic lighting simulation, where the surfaces facing the light are lit up more than those farther away, and so on. The basic light should have a position and a color. This is how I thought it could be implemented:
float lightv[4]; // populated with (0.6, 0.6, 0.6, 1)
float positionv[4]; // populated with (0, 10, 0, 1)
int lightID = GL_LIGHT0;
int attenuationType = GL_LINEAR_ATTENUATION;
float attenuationValue = 1;
glLightf(lightID, attenuationType, attenuationValue);
glLightfv(lightID, GL_DIFFUSE, lightv);
glLightfv(lightID, GL_POSITION, positionv);
glEnable(lightID);
Instead of doing what I expect it to do, it gives me lighting as if there was a light where the camera is! Every surface has the same lighting!
What am I doing wrong?
Thank you, I appreciate it.
The first thing to do is make sure you have glEnable(GL_LIGHTING); called at some point. Past that, the next thing to check is that your normals are correct. For lighting to work properly you need a normal set for every vertex you draw. If you have already set your normals, make sure they are all of unit length; if they don't have a length of one, lighting can act oddly. If that is all as it should be, keep in mind that when you set the position of a light, it is modified by the current modelview matrix as if it were a vertex. If none of those things are relevant, I'll see if I can think of something further.
Set your light position after you set up your GL_MODELVIEW transform, since it's affected by the current transform matrix. Otherwise you get the "headlamp" effect you have discovered.
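Putting both answers together, a rough sketch of the usual per-frame ordering (the gluLookAt values are placeholders; lightv and positionv are the arrays from the question):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0, 5, 20, 0, 0, 0, 0, 1, 0);        // camera/view transform first
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_NORMALIZE);                       // guards against non-unit normals
glLightfv(GL_LIGHT0, GL_DIFFUSE, lightv);
glLightfv(GL_LIGHT0, GL_POSITION, positionv); // transformed by the view only, so it stays fixed in the world
// per-object transforms and drawing go here, wrapped in glPushMatrix()/glPopMatrix()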

Setting up a camera in OpenGL

I've been working on a game engine for a while. I started out with 2D graphics using just SDL, but I've slowly been moving towards 3D by using OpenGL. Most of the documentation I've seen about "how to get things done" uses GLUT, which I am not using.
The question is: how do I create a "camera" in OpenGL that I can move around a 3D environment and that properly displays 3D models as well as sprites (for example, a sprite with a fixed position and rotation)? What functions should I be concerned with in order to set up a camera in OpenGL, and in what order should they be called?
Here is some background information leading up to why I want an actual camera.
To draw a simple sprite, I create a GL texture from an SDL surface and draw it onto the screen at the coordinates (SpriteX-CameraX, SpriteY-CameraY). This works fine, but when moving towards actual 3D models it doesn't work quite right. The camera's location is a custom vector class (i.e. not using the standard libraries for it) with X, Y, Z integer components.
I have a 3D cube made up of triangles, and by itself I can draw it and rotate it, and I can actually move the cube around (although in an awkward way) by passing in the camera location when I draw the model and using the components of the location vector to calculate the model's position. Problems become evident with this approach when I go to rotate the model, though: the origin of the rotation isn't the model itself but seems to be the origin of the screen. Some googling tells me I need to save the location of the model, rotate it about the origin, then restore the model to its original location.
Instead of passing in the location of my camera and calculating where things should be drawn in the viewport by computing new vertices, I figured I would create an OpenGL "camera" to do this for me, so all I would need to do is pass the coordinates of my Camera object into the OpenGL camera and it would translate the view automatically. This task seems to be extremely easy if you use GLUT, but I'm not sure how to set up a camera using just OpenGL.
EDIT #1 (after some comments):
Following some suggestions, here is the update method that gets called throughout my program. It's been updated to create perspective and view matrices. All drawing happens before this is called, and a similar set of methods is executed when OpenGL executes (minus the buffer swap). The x, y, z coordinates come straight from an instance of Camera and its location vector. If the camera was at (256, 32, 0), then 256, 32 and 0 would be passed into the Update method. Currently z is set to 0, as there is no way to change that value at the moment. The 3D model being drawn is a set of vertices/triangles + normals at location X=320, Y=240, Z=-128. When the program runs, this is what is drawn in FILL mode, then in LINE mode, and again in FILL after I move the camera a little to the right (screenshots omitted). It looks like my normals may be the cause, but I think it has more to do with me missing something extremely important, or not completely understanding what the NEAR and FAR parameters of glFrustum actually do.
Before I implemented these changes, I was using glOrtho and the cube rendered correctly. Now if I switch back to glOrtho, only one face renders (the green one) and the rotation is quite weird - probably due to the translation. The cube has 6 different colors, one for each side: red, blue, green, cyan, white and purple.
int VideoWindow::Update(double x, double y, double z)
{
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
glFrustum(0.0f, GetWidth(), GetHeight(), 0.0f, 32.0f, 192.0f);
glMatrixMode( GL_MODELVIEW );
SDL_GL_SwapBuffers();
glLoadIdentity();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glRotatef(0, 1.0f, 0.0f, 0.0f);
glRotatef(0, 0.0f, 1.0f, 0.0f);
glRotatef(0, 0.0f, 0.0f, 1.0f);
glTranslated(-x, -y, 0);
return 0;
}
EDIT FINAL:
The problem turned out to be an issue with the Near and Far arguments of glFrustum and the Z value of glTranslated. While changing the values fixed it, I'll probably have to learn more about the relationship between the two functions.
You need a view matrix, and a projection matrix. You can do it one of two ways:
Load the matrix yourself, using glMatrixMode() and glLoadMatrixf(), after you use your own library to calculate the matrices.
Use combinations of glMatrixMode(GL_MODELVIEW) and glTranslate() / glRotate() to create your view matrix, and glMatrixMode(GL_PROJECTION) with glFrustum() to create your projection matrix. Remember - your view matrix is the negative translation of your camera's position (as it's where you should move the world relative to the camera origin), together with any rotations applied (pitch/yaw).
Hope this helps, if I had more time I'd write you a proper example!
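A rough sketch of the second option follows, with camX/camY/camZ, camPitch/camYaw, the window dimensions, the clipping values and the angle variable all standing in for the questioner's own variables (gluPerspective is used here instead of a raw glFrustum for readability):
// Projection: how camera space is projected onto the screen.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, (double)windowWidth / (double)windowHeight, 1.0, 1000.0);
// View: the inverse of the camera's transform, applied to the whole world.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(-camPitch, 1.0f, 0.0f, 0.0f);   // undo the camera's rotation...
glRotatef(-camYaw,   0.0f, 1.0f, 0.0f);
glTranslatef(-camX, -camY, -camZ);        // ...then undo its position
// Model: per-object transform, pushed/popped around each draw.
glPushMatrix();
glTranslatef(320.0f, 240.0f, -128.0f);    // the cube's location from the question
glRotatef(angle, 0.0f, 1.0f, 0.0f);       // now rotates about the cube's own origin
// drawCube();                            // hypothetical draw call
glPopMatrix();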
You have to do it using the matrix stack, just as for an object hierarchy, but the camera is inside that hierarchy, so you have to put its inverse transform on the stack before drawing the objects, since OpenGL only uses the combined model-to-camera matrix.
If you have not checked it already, looking at the following project may explain in detail what "tsalter" wrote in his post.
Camera from OGL SDK (CodeColony)
Also look at the Red Book for an explanation of viewing and of how the model-view and projection matrices help you create a camera. It starts with a good comparison between an actual camera and the corresponding parts of the OpenGL API. Chapter 3 - Viewing.

OpenGL Rotation

I'm trying to do a simple rotation in OpenGL but must be missing the point.
I'm not looking for a specific fix so much as a quick explanation or link that explains OpenGL rotation more generally.
At the moment I have code like this:
glPushMatrix();
glRotatef(90.0, 0.0, 1.0, 0.0);
glBegin(GL_TRIANGLES);
glVertex3f( 1.0, 1.0, 0.0 );
glVertex3f( 3.0, 2.0, 0.0 );
glVertex3f( 3.0, 1.0, 0.0 );
glEnd();
glPopMatrix();
But the result is not a triangle rotated 90 degrees.
Edit
Hmm, thanks to Mike Haboustak - it appeared my code was calling a SetCamera function that used glOrtho. I'm too new to OpenGL to have had any idea of what this meant, but disabling it and rotating about the Z axis produced the desired result.
Ensure that you're modifying the modelview matrix by putting the following before the glRotatef call:
glMatrixMode(GL_MODELVIEW);
Otherwise, you may be modifying either the projection or a texture matrix instead.
Do you get a 1-unit straight line? It seems that a 90-degree rotation around Y is going to have you looking at the triangle edge-on, with no depth.
You should try rotating around the Z axis instead and see if you get something that makes more sense.
OpenGL has two matrices related to the display of geometry, the ModelView and the Projection. Both are applied to coordinates before the data becomes visible on the screen. First the ModelView matrix is applied, transforming the data from model space into view space. Then the Projection matrix is applied, which transforms the data from view space for "projection" onto your 2D monitor.
ModelView is used to position multiple objects to their locations in the "world", Projection is used to position the objects onto the screen.
Your code seems fine, so I assume from reading the documentation you know what the nature of functions like glPushMatrix() is. If rotating around Z still doesn't make sense, verify that you're editing the ModelView matrix by calling glMatrixMode.
The "accepted answer" is not fully correct - rotating around the Z will not help you see this triangle unless you've done some strange things prior to this code. Removing a glOrtho(...) call might have corrected the problem in this case, but you still have a couple of other issues.
Two major problems with the code as written:
Have you positioned the camera previously? In OpenGL, the camera is located at the origin, looking down the Z axis, with positive Y as up. In this case, the triangle is being drawn in the same plane as your eye, but up and to the right. Unless you have a very strange projection matrix, you won't see it. gluLookAt() is the easiest command to do this, but any command that moves the current matrix (which should be MODELVIEW) can be made to work.
You are drawing the triangle in a left-handed, or clockwise, order, whereas the default for OpenGL is a right-handed, or counterclockwise, coordinate system. This means that, if you are culling backfaces (which you probably are not, but will likely move on to as you get more advanced), you would not see the triangle as expected. To see the problem, put your right hand in front of your face and, imagining it is in the X-Y plane, move your fingers in the order you draw the vertices: (1,1) to (3,2) to (3,1). When you do this, your thumb faces away from your face, meaning you are looking at the back side of the triangle. You need to get into the habit of drawing faces counterclockwise, since that is the common way it is done in OpenGL; both fixes are sketched below.
The best thing I can recommend is to use the NeHe tutorials - http://nehe.gamedev.net/. They begin by showing you how to set up OpenGL in several systems, move onto drawing triangles, and continue slowly and surely to more advanced topics. They are very easy to follow.
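To illustrate both points with the triangle from the question, here is a small sketch (the gluLookAt numbers are arbitrary, chosen only so the triangle around x=1..3, y=1..2 sits in front of the camera):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(2.0, 1.5, 10.0,   // eye: pulled back along +Z
          2.0, 1.5, 0.0,    // looking at the triangle's centre
          0.0, 1.0, 0.0);   // up vector
glBegin(GL_TRIANGLES);
glVertex3f(1.0f, 1.0f, 0.0f);
glVertex3f(3.0f, 1.0f, 0.0f);   // order changed so the winding is counterclockwise as seen from the eye
glVertex3f(3.0f, 2.0f, 0.0f);
glEnd();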
Regarding Projection matrix, you can find a good source to start with here:
http://msdn.microsoft.com/en-us/library/bb147302(VS.85).aspx
It explains a bit about how to construct one type of projection matrix. Orthographic projection is the most basic/primitive form of such a matrix: basically, what it does is take 2 of the 3 axis coordinates and project them to the screen (you can still flip axes and scale them, but there is no warp or perspective effect).
Matrix transformations are most likely one of the most important things when rendering in 3D, and they basically involve 3 stages:
Transform1 = Object coordinates system to World (for example - object rotation and scale)
Transform2 = World coordinates system to Camera (placing the object in the right place)
Transform3 = Camera coordinates system to Screen space (projecting to screen)
Usually the result of multiplying the 3 matrices is referred to as the WorldViewProjection matrix (if you ever bump into that term), since it transforms coordinates from model space through world space, then to camera space, and finally to the screen representation.
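As a sketch of how those three stages map onto fixed-function OpenGL (aspect and angle are placeholder variables):
// Conceptually: v_clip = Projection * View * Model * v_model
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, aspect, 0.1, 100.0);   // Transform3: camera space -> screen projection
glMatrixMode(GL_MODELVIEW);                 // View and Model are folded into one matrix
glLoadIdentity();
gluLookAt(0, 0, 10, 0, 0, 0, 0, 1, 0);      // Transform2: world -> camera (view)
glRotatef(angle, 0, 1, 0);                  // Transform1: object -> world (model: rotation...)
glScalef(2.0f, 2.0f, 2.0f);                 //             ...and scale
// draw the object here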
Have fun