Global Ambient Lighting? - C++

Let's say my display function draws polygons pixel by pixel, not using OpenGL functions but a drawpixel function.
I call
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, global_ambient);
glShadeModel(GL_SMOOTH);
glEnable(GL_LIGHTING);
where global_ambient is 0.0, 0.0, 0.0, 1.0 and I have no material parameters defined, that is, glMaterial is never called. Would global ambient lighting still apply, meaning I will not be able to see the polygon? Or would I need to define material parameters?

Let's say my display function draws polygons pixel by pixel, not using OpenGL functions but a drawpixel function.
If that's true, then the lighting state is completely irrelevant. Fixed-function OpenGL lighting is per-vertex. You're not sending vertices; you're sending pixel data.

Related

Drawing a primitive ( GL_QUADS ) on top of a 2D texture - no quad rendered, texture colour changed

I am trying to draw a 2D scene with a texture as background and then ( as the program flows and does computations ) draw different primitives on the "canvas". As a test case I wanted to draw a blue quad on the background image.
I have looked at several resources and SO questions to try to get the information I need to accomplish the task ( e.g. this tutorial for first primitive rendering, SOIL "example" for texture loading ).
My understanding was that the texture will be drawn on Z=0, and the quad as well. The quad would thus "cover" a portion of the texture - be drawn on it, which is what I want. Instead, the result of my display function is my initial texture in black/blue colour, and not my texture ( in original colour ) with a blue quad drawn on it. This is the display function code:
void display (void) {
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    // background render
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f); // window size is 1024x512
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);
    glBegin(GL_QUADS);
    glTexCoord2d(0.0,0.0); glVertex2d(0.0,0.0);
    glTexCoord2d(1.0,0.0); glVertex2d(1024.0,0.0);
    glTexCoord2d(1.0,1.0); glVertex2d(1024.0,512.0);
    glTexCoord2d(0.0,1.0); glVertex2d(0.0,512.0);
    glEnd(); // here I get the texture properly displayed in window
    glDisable(GL_TEXTURE_2D);
    // foreground render
    glLoadIdentity();
    gluPerspective(60, (GLfloat)winWidth / (GLfloat)winHeight, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glColor3f(0.0, 0.0, 1.0);
    glBegin(GL_QUADS);
    glVertex2d(400.0,100.0);
    glVertex2d(400.0,500.0);
    glVertex2d(700.0,100.0);
    glVertex2d(700.0,500.0);
    glEnd(); // now instead of a rendered blue quad I get my texture coloured in blue
    glutSwapBuffers();
}
I have already tried with many modifications, but since I am just beginning with OpenGL and don't yet understand a lot of it, my attempts failed. For example, I tried with pushing and popping matrices before and after drawing the quad, clearing the depth buffer, changing parameters in gluPerspective etc.
How do I have to modify my code so it will render the quad properly on top of the background texture image of my 2D scene? Being a beginner, I would greatly appreciate extra explanations of the modifications ( as well as of mistakes in the present code ) and of the principles in general.
EDIT - after answer by Reto Koradi:
I have tried to follow the instructions, and the modified code now looks like:
// foreground render
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f);
glColor3f(0.0, 0.0, 1.0);
glBegin (GL_QUADS); // same from here on
Now I can see the blue "quad", but it is not displayed properly; it looks something like this.
Beside that, the whole scene is flashing really quickly.
What do I have to change in my code so that the quad is displayed properly and the screen won't flash?
You are setting up a perspective transformation before rendering the blue quad:
glLoadIdentity();
gluPerspective (60, (GLfloat)winWidth / (GLfloat)winHeight, 1.0, 100.0);
The way gluPerspective() is defined, it sets up a transformation that looks from the origin down the negative z-axis, with the near and far values specifying the distance range that will be visible. With this transformation, z-values from -1.0 to -100.0 are visible, which does not include your quad at z = 0.0.
If you want to draw your quad in 2D coordinate space, the easiest solution is to not use gluPerspective() at all. Just use a glOrtho() type transformation like you did for your initial drawing.
If you want perspective, you will need a GL_MODELVIEW transformation as well. You can start with a translation in the negative z-direction, within a range of 1.0 to 100.0. You may have to adjust your coordinates for the different coordinate system as well, or use additional transformations that also translate in xy-direction, and possibly scale.
The code also has the coordinates in the wrong order for drawing the blue quad. You either have to change the draw call to GL_TRIANGLE_STRIP (recommended because it at least gets you one step closer to using features that are not deprecated), or swap the order of the last two vertices:
glBegin (GL_QUADS);
glVertex2d(400.0,100.0);
glVertex2d(400.0,500.0);
glVertex2d(700.0,500.0);
glVertex2d(700.0,100.0);
glEnd();

OpenGL Camera Movement - Shader vs. Primitive Rendering

In my OpenGL application, I am using gluLookAt() for transforming my camera. I then have two different render functions; one uses primitive rendering (glBegin()/glEnd()) to render a triangle.
glBegin(GL_TRIANGLES);
glVertex3f(0.25, -0.25, 0.5);
glVertex3f(-0.25, -0.25, 0.5);
glVertex3f(0.25, 0.25, 0.5);
glEnd();
The second rendering function uses a shader to display the triangle using the same coordinates and is called with the function glDrawArrays(GL_TRIANGLES, 0, 3). shader.vert is shown below:
#version 430 core
void main()
{
    const vec4 verts[3] = vec4[3](vec4(0.25, -0.25, 0.5, 1),
                                  vec4(-0.25, -0.25, 0.5, 1),
                                  vec4(0.25, 0.25, 0.5, 1));
    gl_Position = verts[gl_VertexID];
}
Now here is my problem; if I move the camera around using the primitive rendering for the triangle, I see the triangle from different angles like one would expect. When I use the shader rendering function, the triangle remains stationary. Clearly I am missing something about world coordinates and how they relate to objects rendered with shaders. Could someone point me in the right direction?
If you do not have an active shader program, you're using what is called the "fixed pipeline". The fixed pipeline performs rendering based on numerous attributes you set with OpenGL API calls. For example, you specify what transformations you want to apply. You specify material and light attributes that control the lighting of your geometry. Applying these attributes is then handled by OpenGL.
Once you use your own shader program, you're not using the fixed pipeline anymore. This means that most of what the fixed pipeline previously handled for you has to be implemented in your shader code. Applying transformations is part of this. To apply your transformation matrix, you have to pass it into the shader, and apply it in your shader code.
The matrix is typically declared as a uniform variable in your vertex shader:
uniform mat4 ModelViewProj;
and then applied to your vertices:
gl_Position = ModelViewProj * verts[gl_VertexID];
In your code, you will then use calls like glGetUniformLocation(), glUniformMatrix4fv(), etc., to set up the matrix. Explaining this in full detail is somewhat beyond this answer, but you should be able to find it in many OpenGL tutorials online.
As long as you're still using legacy functionality with the Compatibility Profile, there's actually a simpler way. You should be aware that this is deprecated, and not available in the OpenGL Core Profile. The Compatibility Profile makes certain fixed function attributes available to your shader code, including the transformation matrices. So you do not have to declare anything, and can simply write:
gl_Position = gl_ModelViewProjectionMatrix * verts[gl_VertexID];

can't properly disable gl color material

I have a scene with lighting that works fine. I want to add a sky box that will fade away, so I am using
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
along with glColor4f to do this, and it works fine. But for the fade to work I need to enable GL_COLOR_MATERIAL, which completely gets rid of my lighting effects. I tried sandwiching the sky box part
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
(..light details..)
glEnable(GL_COLOR_MATERIAL);
(drawSkyBox)
glDisable(GL_COLOR_MATERIAL);
(draw rest of scene)
but that just lets the fade work and still doesn't show my lighting. Oddly enough, if I do
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
(..light details..)
glEnable(GL_COLOR_MATERIAL);
glDisable(GL_COLOR_MATERIAL);
(drawSkyBox)
(draw rest of scene)
I lose the fade and my lighting effects. Am I using GL_COLOR_MATERIAL properly? If I enable and disable it right away, shouldn't that have no effect, leaving my lighting intact?
A sky box probably shouldn't be affected by lighting. There's nothing like a big shiny specularity on your sky and clouds to make it look very strange. So it makes sense to disable lighting while you render that. If lighting is off, the vertices just take whatever color you assign them (using glColor).
If lighting is on, the vertices take on a combination of the lights' colors and the material properties. You can set the ambient, diffuse, and specular colors separately for each vertex; they are multiplied with the corresponding colors from the lights, along with some other math that takes direction and fall-off into account. Often you are only changing the ambient and diffuse colors and want them to be the same, having already set the light's ambient and diffuse to white with the ambient significantly dimmer than the diffuse. In that case you can enable GL_COLOR_MATERIAL, and calling glColor() then has the same effect as calling glMaterial() with GL_AMBIENT_AND_DIFFUSE.

Do I need multiple vertex buffers for similar objects in openGL?

For example, given two cubes with similar vertices, e.g.,
float pVerts[] =
{
0.0, 0.0, 0.0,
1.0, 0.0, 0.0,
...
};
glGenBuffers(1, &mVertexBuffer);
glBindBuffer(...);
glBufferData(...);
Can I just cache this set of vertices out for later usage? Or, in other words, if I wanted a second cube (with the exact same vertex data), do I need to generate another vertex buffer?
And with shaders, does the same apply? Can I use the same program for drawing these cubes?
You can use the same vertex buffer to draw as many objects as you want (shaders or not). If you want to draw a second object, just change the model matrix and draw it again.
Same for shaders, you can use the same shader to draw as many objects as you want. Just bind the shader and then fire off as many draw calls as you need.

What could globally affect texture coordinate values in OpenGL?

I'm writing a plugin for an application called Autodesk MotionBuilder, which has an OpenGL renderer, and I'm trying to render textured geometry into the scene. I have a window with a 3D View embedded in it, and every time my window is rendered, this is (in a nutshell) what happens:
I tell the renderer that I'm about to draw into a region with a given size
I tell the renderer to draw the MotionBuilder scene in that region
I draw some additional stuff into and/or on top of the scene
The challenge here is that I'm inheriting some arbitrary OpenGL state from MotionBuilder's renderer, which varies depending on what it's drawing and what's present in the scene. I've been dealing with this fine so far, but there's one thing I can't figure out. The way that OpenGL interprets my UV coordinates seems to change based on whatever MotionBuilder is doing behind my back.
Here's my rendering code. If there's no textured geometry in the scene, meaning MotionBuilder hasn't yet fiddled with any texture-related attributes, it works as expected.
// Tell MotionBuilder's renderer to draw the scene
RenderScene();
// Clear whatever arbitrary state MotionBuilder left for us
InitializeAttributes(); // includes glPushAttrib(GL_ALL_ATTRIB_BITS)
InitializePerspective(); // projects into the scene / loads matrices
// Enable texturing, bind to our texture, and draw a triangle into the scene
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, mTexture);
glBegin(GL_TRIANGLES);
glColor4f(1.0, 1.0, 1.0, 0.5f);
glTexCoord2f(1.0, 0.0); glVertex3f(128.0, 0.0, 0.0);
glTexCoord2f(0.0, 1.0); glVertex3f( 0.0, 128.0, 0.0);
glTexCoord2f(0.0, 0.0); glVertex3f( 0.0, 0.0, 0.0);
glEnd();
// Clean up so we don't confound MotionBuilder's initial expectations
RestoreState(); // includes glPopAttrib()
Now, if I bring in some meshes with textures, something odd happens. My texture coordinates get scaled way up. Here's a before and after:
[before/after screenshots] (source: awforsythe.com)
As you can see from the close-up on the right, when MotionBuilder is asked to render a texture whose file it can't find, it instead loads this small question mark texture and tiles it across the geometry. My only hypothesis is that MotionBuilder is changing some global texture coordinate scalar so that, for example, glTexCoord2f(0.5, 1.0) will instead be interpreted as if it were (50.0, 100.0). Is there such a feature in OpenGL? Any idea what I need to modify in order to preserve my texture coordinates as I've entered them?
Since typing the above and after doing a bit of research, I have discovered that there's a GL_TEXTURE matrix that's used to this effect. Neat! And indeed, when I get the value of this matrix initially, it's the good ol' identity matrix:
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
When I check it again after MotionBuilder fudges up my texture coordinates:
16 0 0 0
0 16 0 0
0 0 1 0
0 0 0 1
How telling! But here's a slight problem: if I try to explicitly set the texture matrix before doing my own drawing, regardless of what MotionBuilder is doing, it seems like my texture coordinates have no effect and it simply samples the lower-left corner of the texture (0.0, 0.0) for every vertex.
Here's the attempted fix, placed after RenderScene in the code posted above:
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
I can verify that the value of GL_TEXTURE_MATRIX is now the identity matrix, but no matter what coordinates I specify in glTexCoord2f, it's always drawn as if the coordinates for each vertex were (0.0, 0.0):
[screenshot] (source: awforsythe.com)
Any idea what else could be affecting how OpenGL interprets my texture coordinates?
Aha! These calls:
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
...have to be made after GL_TEXTURE_2D is enabled.
...should be followed up by setting the matrix mode back to GL_MODELVIEW. It turns out, apparently, that some functions I was calling immediately after resetting the texture matrix (glViewport and/or gluPerspective?) affect the current matrix stack. So those calls were affecting the texture matrix, causing my texture coordinates to be transformed in unexpected ways.
I think I've got it now.