How does OpenGL lighting work? - c++

I want to know how OpenGL's glLightfv() works. If glOrtho is set to (10, -10, 10, -10, 10, -10), and a triangle's vertices all lie within 5 units (e.g. glVertex3f(0, 5, 0), glVertex3f(-5, -5, -5), and so on until a 3D triangle is created), where should I place the light? Most tutorials use (1, 1, 1, 0), but with that my model, which is a colorful 3D triangle, becomes completely white. Can anyone explain how OpenGL's lighting functions are used in a program, with shading and color? I hope I was clear. Thanks in advance!
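As a rough sketch, a typical fixed-function setup looks like the following; the values are illustrative, and the GL_COLOR_MATERIAL part addresses the all-white triangle, which usually happens because lighting ignores glColor unless color material is enabled:
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_DEPTH_TEST);
// With w = 0 the light is directional, shining from the direction (1, 1, 1);
// with w = 1 it would be positional, placed at that point (transformed by the
// modelview matrix current at the time of this call).
GLfloat lightPos[4] = { 1.0f, 1.0f, 1.0f, 0.0f };
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
// Once lighting is enabled, glColor*() is ignored and the default white
// material is used, which is why a colorful model turns all white. Enabling
// color material feeds the vertex colors into the lighting equation instead.
glEnable(GL_COLOR_MATERIAL);
glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);
// Each vertex also needs a normal (glNormal3f) for the lighting equation
// to produce shading rather than a flat color.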

Related

OpenGL Scaling/Translation coordinate system or order of statement error

I rewrote the OpenGL glTranslatef, glScalef, and glRotatef functions. I am using these functions to draw and transform a circle and compare to the built-in functions. All seems to be in working order, and my functions work exactly as the built-in ones do, with one small exception.
When I scale with the built-in function, I move the object back to the origin, like so:
glTranslatef(50, 50, 0);
glScalef(2.0, 1.0, 0.0);
glTranslatef(-50, -50, 0);
glDrawArrays(GL_LINE_LOOP, 0, numVertices);
But when using my own function, I find I have to switch the translation statements to achieve the same result as above:
MyTranslate(-50, -50, 0);
MyScale(2.0, 1.0, 0.0);
MyTranslate(50, 50, 0);
glDrawArrays(GL_LINE_LOOP, 0, numVertices);
That is, either the coordinate system gets messed up (unlikely) or the statements are being read in a different order (more likely).
The translate function isn't anything special:
GLfloat translate[16] = { 1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  x, y, z, 1 }; // column-major: translation in the last column
GLfloat modelview[16] = { 0 };
GLfloat result[16] = { 0 };
glGetFloatv(GL_MODELVIEW_MATRIX, modelview);
matrixMultiply(result, modelview, translate); // writes the 4x4 product into result
glLoadMatrixf(result);
I am confident that the scale and matrix multiplication functions are fine. Can anyone give some advice or explanation to the statement-switching phenomenon I am experiencing?
I apologize if the question was not clear or poorly constructed. However, I found a solution and would like to share it with the Stack Overflow community.
The problem was very simple: in OpenGL, the order of the built-in functions is "reversed", in the sense that the transformation closest to the draw call is applied to vertices first (each built-in call post-multiplies the current matrix). When writing and using my own functions, which simply manipulate the modelview matrix, I had to "reverse" the transformations (that is, put them in chronological order), so that the transformation appearing first in the code is applied first.
Example with OpenGL built-in functions:
glTranslatef(30, 30, 0); // Translate / perform other transformations
glTranslatef(50, 50, 0); // Translate back to original position
glRotatef(20, 0, 0, 1); // Rotate object
glScalef(1.5, 2.5, 1.0); // Scale object
glTranslatef(-50, -50, 0); // Translate object to the origin
glDrawArrays(GL_TRIANGLE_FAN, 0, numVertices); // Draw the object
The order is critical here. But when using my own functions, the order had to be reversed (put in "logical" order) to apply the changes to the modelview matrix correctly. Anyone with a basic understanding of transformation matrices knows that multiplying matrices in the wrong order produces an entirely different result than intended.
Example with self-written functions:
MyTranslate(-50, -50, 0); // Translate object to the origin
MyScale(1.5, 2.5, 1.0); // Scale object
MyRotate(20, 0, 0, 1); // Rotate object
MyTranslate(50, 50, 0); // Translate back to original position
MyTranslate(30, 30, 0); // Translate / perform other transformations
glDrawArrays(GL_TRIANGLE_FAN, 0, numVertices); // Draw the object
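For reference, whether the calls must be listed in the built-in order or in reverse comes down to the operand order inside the multiply. Here is a sketch (the matrixMultiply name and signature are assumptions modeled on the snippet above) of a column-major 4x4 multiply that post-multiplies the current matrix the way glMultMatrixf does; computing incoming * current instead would produce exactly the reversed behavior described above:
// result = current * incoming, using OpenGL's column-major layout
// (element index = col * 4 + row). Post-multiplying like the built-ins
// means user-written MyTranslate/MyScale/MyRotate functions can be called
// in the same order as glTranslatef and friends.
void matrixMultiply(GLfloat result[16], const GLfloat current[16], const GLfloat incoming[16])
{
    for (int col = 0; col < 4; ++col) {
        for (int row = 0; row < 4; ++row) {
            GLfloat sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += current[k * 4 + row] * incoming[col * 4 + k];
            result[col * 4 + row] = sum;
        }
    }
}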

Setting up an OpenGL perspective projection

I am having an issue setting up the viewing projection. I am drawing a cube with the vertices (0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1), and (1, 0, 1). This is how I am initializing the view:
void initGL(int x, int y, int w, int h)
{
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowPosition(x, y);
    glutInitWindowSize(w, h);
    glutCreateWindow("CSE328 Project 1");
    glutDisplayFunc(draw);
    glFrontFace(GL_CCW); // note: glFrontFace() takes GL_CW or GL_CCW; GL_FRONT_AND_BACK is invalid here
    glMatrixMode(GL_PROJECTION);
    glFrustum(-10.0, 10.0, -10.0, 10.0, 2.0, 40.0);
    glMatrixMode(GL_MODELVIEW);
    gluLookAt(10, 10, 10, 0.5, 0.5, 0, 0, 1.0, 0);
    glutMainLoop();
}
For some reason, the cube is filling the entire screen. I have tried changing the values of the frustum and lookAt methods, and either the cube is not visible at all, or it fills the entire viewport. In gluLookAt I assume the 'eye' is positioned at (10, 10, 10) and looking at the point (0.5, 0.5, 0), which is on the surface of the cube. I thought this would give enough distance for the whole cube to be visible. Am I thinking about this in the wrong way? I have also tried moving the cube in the z direction so that it lies from z = 10 to z = 11, between the near and far clipping planes, but it gives similar results.
The cube has an edge length of 1, while the viewing volume spans 20 units in the x and y dimensions, so the cube would occupy only a few pixels in the middle of the screen even with an orthographic projection, unless some other transformation is applied during drawing.
I suggest making the frustum smaller (e.g. ±2.0) and moving the camera closer, e.g. to (4.0, 4.0, 4.0).
Moving the eye position further from the cube by changing the first 3 parameters of gluLookAt() should make it smaller.
You could also replace your call to glFrustum() with a call to gluPerspective() which would make it easier to configure the perspective projection to your liking.
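As a sketch, the projection setup might look like this with gluPerspective, reusing the w and h window-size parameters from initGL above (the field of view and clip distances are illustrative choices, not values from the question):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0,                  // vertical field of view in degrees
               (double)w / (double)h, // aspect ratio of the window
               0.1,                   // near clipping plane
               100.0);                // far clipping plane
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(4.0, 4.0, 4.0,   // eye moved closer than (10, 10, 10)
          0.5, 0.5, 0.5,   // look at the cube's center rather than a face
          0.0, 1.0, 0.0);  // up vector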

Rotation in OpenGL

I have a plane and I want to rotate it around the y axis. The plane's coordinates are:
Vec4f(-1,-1, -5, 1),
Vec4f( 1,-1, -5, 1),
Vec4f( 1, 1, -5, 1),
Vec4f(-1, 1, -5, 1),
I just want the plane to rotate in place, not travel in a circle, so I translate it back to the origin and then do the rotation:
glTranslatef(0,0,-5);
glRotatef(50.0*t, 0, 1, 0);
draw(plane);
But the plane still makes a circle around the origin. What am I doing wrong?
Transformations apply in the opposite order from the one in which you write them: each call post-multiplies the current matrix, so with calls T1, R, T2 before a draw, vertices are transformed as v' = T1 * R * T2 * v, and the last call reaches the vertices first. You also might want to translate the plane back to where it came from. So change it like this:
float translation = -5.0f;                            // the plane sits at z = -5
if (translate_back) glTranslatef(0, 0, translation);  // move it back out to z = -5 (applied last)
glRotatef(50.0 * t, 0, 1, 0);                         // rotate around the y axis
glTranslatef(0, 0, -translation);                     // bring the plane to the origin (applied first)

Texturing Quadrics in JOGL

I can't manage to texture a GLU quadric (gluSphere):
What I get instead of the texture is its average color.
gl.glEnable(GL.GL_DEPTH_TEST);
gl.glEnable(GL.GL_BLEND);
gl.glEnable(GL.GL_TEXTURE_GEN_S);
gl.glEnable(GL.GL_TEXTURE_GEN_T);
sunTexture = TextureIO.newTexture(new File("sun.jpg"),false);
float[] rgba = {1f, 1f, 1f};
gl.glMaterialfv(GL.GL_FRONT, GL.GL_AMBIENT, rgba, 0);
gl.glMaterialfv(GL.GL_FRONT, GL.GL_SPECULAR, rgba, 0);
gl.glMaterialf(GL.GL_FRONT, GL.GL_SHININESS, 0.5f);
sunTexture.enable();
sunTexture.bind();
GLUquadric sun = glu.gluNewQuadric();
glu.gluQuadricTexture(sun, true);
glu.gluSphere(sun, 5, DETAIL, DETAIL);
sunTexture.disable();
Since GLU generates the texture coordinates itself and submits them via glTexCoord, I think there is no need to enable texture-coordinate generation (GL_TEXTURE_GEN_S/T). I suppose the GLU-generated texture coordinates get overwritten by the ones from texgen.
I also see that you pass an array of three floats to glMaterial, which expects RGBA (four floats). But since I work with C++, I may be wrong and this might work in JOGL.
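Putting both observations together, the fix might look like this in C-style OpenGL (the JOGL calls map one-to-one; treat this as a sketch rather than tested JOGL code):
// GLU quadrics supply their own texture coordinates once
// gluQuadricTexture(quadric, true) is set, so automatic texcoord
// generation must stay disabled or it overrides them.
glDisable(GL_TEXTURE_GEN_S);
glDisable(GL_TEXTURE_GEN_T);
// GL_AMBIENT and GL_SPECULAR read four floats (RGBA), so the material
// array needs an alpha component.
GLfloat rgba[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
glMaterialfv(GL_FRONT, GL_AMBIENT, rgba);
glMaterialfv(GL_FRONT, GL_SPECULAR, rgba);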
I found the problem: I had set
gl.glFrustum(-20, 20, -20, 20, 0.1, 400);
After changing it to
gl.glFrustum(-20, 20, -20, 20, 1, 400);
it appears OK.

Image blending problem when rendering to texture

This is related to my last question. To get this image:
http://img252.imageshack.us/img252/623/picture8z.png
I draw a white background (color = (1, 1, 1, 1)).
I render-to-texture the two upper-left squares with color = (1, 0, 0, .8) and blend function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), and then draw the texture with color = (1, 1, 1, 1) and blend function (GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
I draw the lower-right square with color = (1, 0, 0, .8) and blend function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
By my calculation, the render-to-texture squares should have color
.8 * (1, 0, 0, .8) + (1 - .8) * (0, 0, 0, 0) = (.8, 0, 0, .64)
and so after drawing that texture on the white background, they should have color
(.8, 0, 0, .64) + (1 - .8) * (1, 1, 1, 1) = (1, .2, .2, .84)
and the lower-right square should have color
.8 * (1, 0, 0, .8) + (1 - .8) * (1, 1, 1, 1) = (1, .2, .2, .84)
which should look the same! Is my reasoning wrong? Is my computation wrong?
In any case, my goal is to cache some of my scene. How do I render-to-texture and then draw that texture so that it is equivalent to just drawing the scene inline?
If you want to render blended content to a texture and composite that texture to the screen, the simplest way is to use premultiplied alpha everywhere. It’s relatively simple to show that this works for your case: the color of your semi-transparent squares in premultiplied form is (0.8, 0, 0, 0.8), and blending this over (0, 0, 0, 0) with (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) essentially passes your squares’ color through to the texture. Blending (0.8, 0, 0, 0.8) over opaque white with (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) gives you (1.0, 0.2, 0.2, 1.0). Note that the color channels are the same as your third calculation, but the alpha channel is still 1.0, which is what you would expect for an opaque object covered by a blended object.
Tom Forsyth has a good article about premultiplied alpha. The whole thing is worth reading, but see the “Compositing translucent layers” section for an explanation of why the math works out in the general case.
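A minimal sketch of the blend-state side of this, assuming the render target is already set up (the fbo handle and the draw calls are placeholders):
// Pass 1: render INTO the texture. Clear to transparent black and draw
// premultiplied content with (GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
glBindFramebuffer(GL_FRAMEBUFFER, fbo);  // fbo assumed created elsewhere
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(0.8f, 0.0f, 0.0f, 0.8f);       // (1, 0, 0, .8) premultiplied by its alpha
// ... draw the squares ...

// Pass 2: composite the texture over the white background with the SAME
// blend function, since the texture now holds premultiplied colors.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
// ... draw a quad textured with the render target ...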
Whoops, my computation was wrong! The second line should be
(.8, 0, 0, .64) + (1 - .64) * (1, 1, 1, 1) = (1.16, .36, .36, 1), clamped to (1, .36, .36, 1),
which indeed seems to match what I see (when I change the last square to color (1, .2, .2, .8), all three squares appear the same color).
Regarding your last question: Replacing parts of the scene by textures is not trivial. A good starting point is Stefan Jeschke's PhD thesis.