What is a normal in OpenGL?

I heard that I should use normals instead of colors, because colors are deprecated. (Is that true?) Normals have something to do with the reflection of light, but I can't find a clear and intuitive explanation. What is a normal?

A normal in general is a unit vector whose direction is perpendicular to a surface at a specific point. It therefore tells you in which direction a surface is facing. The main use case for normals is lighting calculations, where you have to determine the angle (or, in practice, its cosine) between the normal at a given surface point and the direction towards a light source or a camera.

glNormal minimal example
glNormal is a deprecated OpenGL 2 method, but it is simple to understand, so let's look into it. The modern shader alternative is discussed below.
This example illustrates some details of how glNormal works with diffuse lighting.
The comments of the display function explain what each triangle means.
#include <stdlib.h>
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>
/* Triangle on the x-y plane. */
static void draw_triangle() {
    glBegin(GL_TRIANGLES);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glEnd();
}
/* A triangle tilted 45 degrees manually. */
static void draw_triangle_45() {
    glBegin(GL_TRIANGLES);
    glVertex3f( 0.0f,  1.0f, -1.0f);
    glVertex3f(-1.0f, -1.0f,  0.0f);
    glVertex3f( 1.0f, -1.0f,  0.0f);
    glEnd();
}
static void display(void) {
    glColor3f(1.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glPushMatrix();
    /*
    Triangle perpendicular to the light.
    0,0,1 also happens to be the default normal,
    so we would get the same result without specifying one.
    */
    glNormal3f(0.0f, 0.0f, 1.0f);
    draw_triangle();
    /*
    This triangle is as bright as the previous one.
    That is not photorealistic: tilted 45 degrees away
    from the light, it should be less bright.
    */
    glTranslatef(2.0f, 0.0f, 0.0f);
    draw_triangle_45();
    /*
    Same as the previous triangle, but with the normal set
    to the photorealistic 45 degree direction, making it less bright.
    Note that the norm of this normal vector is not 1,
    but we are fine since we have called glEnable(GL_NORMALIZE).
    */
    glTranslatef(2.0f, 0.0f, 0.0f);
    glNormal3f(0.0f, 1.0f, 1.0f);
    draw_triangle_45();
    /*
    This triangle is rotated 45 degrees with glRotate.
    It should be as bright as the previous one,
    even though we set the normal to 0,0,1.
    So glRotate also affects the normal!
    */
    glTranslatef(2.0f, 0.0f, 0.0f);
    glNormal3f(0.0, 0.0, 1.0);
    glRotatef(45.0, -1.0, 0.0, 0.0);
    draw_triangle();
    glPopMatrix();
    glFlush();
}
static void init(void) {
    GLfloat light0_diffuse[] = {1.0, 1.0, 1.0, 1.0};
    /* Plane wave coming from +z infinity. */
    GLfloat light0_position[] = {0.0, 0.0, 1.0, 0.0};
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glShadeModel(GL_SMOOTH);
    glLightfv(GL_LIGHT0, GL_POSITION, light0_position);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, light0_diffuse);
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glColorMaterial(GL_FRONT, GL_DIFFUSE);
    glEnable(GL_COLOR_MATERIAL);
    glEnable(GL_NORMALIZE);
}
static void reshape(int w, int h) {
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-1.0, 7.0, -1.0, 1.0, -1.5, 1.5);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(800, 200);
    glutInitWindowPosition(100, 100);
    glutCreateWindow(argv[0]);
    init();
    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    glutMainLoop();
    return EXIT_SUCCESS;
}
Theory
In OpenGL 2 each vertex has its own associated normal vector.
The normal vector determines how bright the vertex is, which is then used to determine how bright the triangle is.
OpenGL 2 uses the Phong reflection model, in which light is separated into three components: ambient, diffuse and specular. Of those, the diffuse and specular components are affected by the normal:
if the diffuse light arrives perpendicular to the surface, it makes it brighter, no matter where the observer is (the diffuse term is just a dot product, sketched after this list)
if the specular light hits the surface and bounces off right into the eye of the observer, that point becomes brighter
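Concretely, the diffuse factor is the cosine of the angle between the surface normal and the direction towards the light, computed with a dot product. A minimal C sketch (the function name diffuse_factor is illustrative, not part of any API):
/* Lambert diffuse factor: cosine of the angle between the unit surface
   normal n and the unit direction l towards the light, clamped at 0 so
   that surfaces facing away from the light are not lit. */
static float diffuse_factor(const float n[3], const float l[3]) {
    float cos_angle = n[0] * l[0] + n[1] * l[1] + n[2] * l[2];
    return cos_angle > 0.0f ? cos_angle : 0.0f;
}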
glNormal sets the current normal vector, which is used for all following vertices.
The initial value of the normal, before we call glNormal, is 0,0,1.
Normal vectors must have norm 1, or else colors change! glScale also alters the length of normals! glEnable(GL_NORMALIZE) makes OpenGL automatically renormalize them for us, doing essentially what the sketch below does by hand.
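A hand-rolled equivalent of that renormalization, as a minimal C sketch (the helper name normalize3 is illustrative):
#include <math.h>

/* Scale a vector in place so that its Euclidean norm becomes 1. */
static void normalize3(float v[3]) {
    float norm = sqrtf(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    if (norm > 0.0f) {
        v[0] /= norm;
        v[1] /= norm;
        v[2] /= norm;
    }
}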
Why it is useful to have normals per vertex instead of per face
Consider two spheres with the same number of polygons: the one with normals on the vertices looks much smoother than the one with normals on the faces.
OpenGL 4 fragment shaders
In newer OpenGL APIs, you pass the normal data to the GPU as an arbitrary chunk of data: the GPU does not know that it represents normals.
Then you write a hand-written fragment shader, an arbitrary program that runs on the GPU, which reads the normal data you pass to it and implements whatever lighting algorithm you want. You can implement Phong efficiently if you feel like it, by manually calculating some dot products.
This gives you full flexibility to change the algorithm design, which is a major feature of modern GPUs. See: https://stackoverflow.com/a/36211337/895245
Examples of this can be found in any of the "modern" OpenGL 4 tutorials, e.g. https://github.com/opengl-tutorials/ogl/blob/a9fe43fedef827240ce17c1c0f07e83e2680909a/tutorial08_basic_shading/StandardShading.fragmentshader#L42
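For a flavor of what such a shader looks like, here is a minimal diffuse-only GLSL fragment shader sketch; the variable names (fragNormal, lightDir, objectColor) are illustrative, not taken from any particular tutorial:
#version 330 core
in vec3 fragNormal;       // interpolated per-vertex normal
uniform vec3 lightDir;    // unit direction towards the light
uniform vec3 objectColor;
out vec4 outColor;

void main() {
    // Lambert diffuse term: cosine of the angle between normal and
    // light direction, clamped so back-facing points stay dark.
    float diffuse = max(dot(normalize(fragNormal), lightDir), 0.0);
    outColor = vec4(objectColor * diffuse, 1.0);
}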
Bibliography
https://gamedev.stackexchange.com/questions/50653/opengl-why-do-i-have-to-set-a-normal-with-glnormal
https://www.opengl.org/sdk/docs/man2/xhtml/glNormal.xml
http://www.tomdalling.com/blog/modern-opengl/06-diffuse-point-lighting/
http://learnopengl.com/#!Advanced-Lighting/Normal-Mapping

Many things are now deprecated, including normals and colors. That just means that you have to implement them yourself. With normals you can shade your objects. It's up to you to do the calculations, but there are a lot of tutorials on e.g. Gouraud/Phong shading.
Edit: There are two types of normals: face normals and vertex normals. Face normals point away from the face; vertex normals point away from the vertex (typically averaged from the normals of the adjacent faces). With vertex normals you can achieve better quality, but there are also many uses for face normals, e.g. in collision detection and shadow volumes (a sketch of computing a face normal follows below).
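As an illustration, the face normal of a triangle is the normalized cross product of two of its edge vectors; a minimal C sketch (the function name face_normal is illustrative):
#include <math.h>

/* Face normal of triangle (a, b, c): normalized cross product of two
   edge vectors. The winding order determines which way it points. */
static void face_normal(const float a[3], const float b[3],
                        const float c[3], float out[3]) {
    float u[3] = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
    float v[3] = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
    out[0] = u[1] * v[2] - u[2] * v[1];
    out[1] = u[2] * v[0] - u[0] * v[2];
    out[2] = u[0] * v[1] - u[1] * v[0];
    float norm = sqrtf(out[0] * out[0] + out[1] * out[1] + out[2] * out[2]);
    if (norm > 0.0f) {
        out[0] /= norm;
        out[1] /= norm;
        out[2] /= norm;
    }
}
A vertex normal can then be obtained by averaging (and renormalizing) the face normals of the faces that share the vertex.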

Related

Adding a Light Source to 3D objects in OpenGL

I was wondering if anyone could help me figure out how to add a light source to my 3D objects. I have four objects that are rotating and I want the light source to be at a fixed position, and I want to be able to see lighting on the object.
I tried doing this (the lines marked with ******* are the additions):
//******* Initializing the light position
GLfloat pos[] = {-2, 4, 5, 1};

void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    //******* Adding the light to the display method
    glLoadIdentity();
    glLightfv(GL_LIGHT0, GL_POSITION, pos);
    // rectangle
    glPushMatrix();
    glTranslatef(0.0f, 2.5f, -8.0f);
    glRotatef(angleRectangle, 0.0f, 1.0f, 0.0f);
    drawRectangle();
    glPopMatrix();
    // small cylinder
    glPushMatrix();
    glTranslatef(0.0f, 2.0f, -8.0f);
    glRotatef(90, 1, 0, 0);
    glRotatef(anglePyramid, 0.0f, 0.0f, 1.0f);
    drawCylinder(0.2, 0.7);
    glPopMatrix();
    // big cylinder
    glPushMatrix();
    glTranslatef(0.0f, 1.5f, -8.0f);
    glRotatef(90, 1, 0, 0);
    glRotatef(anglePyramid, 0.0f, 0.0f, 1.0f);
    drawCylinder(0.7, 2.7);
    glPopMatrix();
    // pyramid
    glPushMatrix();
    glTranslatef(0.0f, -2.2f, -8.0f);
    glRotatef(180, 1, 0, 0);
    glRotatef(anglePyramid, 0.0f, 1.0f, 0.0f);
    drawPyramid();
    glPopMatrix();
    glutSwapBuffers();
    anglePyramid += k * 0.2f; //- is CW, + is CCW
    angleRectangle += -k * 0.2f;
}

//******* Then I added these to the main method
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
However when I do this and run the entire program, my objects turn gray, and at certain points in the rotation they turn white. This isn't what I want: I want to keep my colorful objects, but I want to be able to see the light source on them.
Any help would be greatly appreciated. Also let me know if you need to see more of my code to figure out the issue. Thanks.
When lighting (GL_LIGHTING) is enabled, the color is taken from the material parameters (glMaterial).
If you still want to use the current color, then you have to enable GL_COLOR_MATERIAL
and set the color material parameters (glColorMaterial):
glEnable(GL_LIGHTING);
glEnable(GL_COLOR_MATERIAL);
glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);
See also Basic OpenGL Lighting.
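For completeness, a minimal sketch of the glMaterial route instead (the color values are illustrative):
// Set a diffuse material color for front faces; subsequent geometry
// is lit with this color regardless of the current glColor.
GLfloat diffuse[] = { 0.8f, 0.2f, 0.2f, 1.0f };
glMaterialfv(GL_FRONT, GL_DIFFUSE, diffuse);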
But note that drawing with glBegin/glEnd sequences, the fixed-function pipeline matrix stack and the fixed-function per-vertex light model have been deprecated for decades.
Read about the Fixed Function Pipeline and see Vertex Specification and Shader for a state-of-the-art way of rendering.

gl_lines and gl_clear in QGLWidget

I have a problem with drawing on a QGLWidget. I have several quads on the widget which I can move around by pressing some keys. As long as I just draw quads everything works fine, but now I want to add some lines using:
glBegin(GL_LINE);
glColor3f(c[0], c[1], c[2]);
glVertex3f(v1.x, v1.y, v1.z);
glVertex3f(v2.x, v2.y, v2.z);
glEnd;
The drawing also works fine, but the clearing of the GL widget doesn't work anymore, meaning I see everything I have ever drawn on it.
I tried the same with GLUT using the same initializations and it worked, but since I switched to Qt it doesn't work anymore.
paintGL(), resizeGL() and initializeGL() are below.
void GLWidget::paintGL() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0f, 0.0f, 0.0f, 0.0f, -10.0f, -20.0f, 0.0f, 20.0f, -10.0f);
    glTranslatef(0.0f, -30.0f, -40.0f);
    glRotatef(-90.0f, 1.0f, 0.0f, 0.0f);
    glRotatef(s_deg, 0.0f, 0.0f, 1.0f);
    glRotatef(s_deg2, cos(DEGRAD(s_deg)), sin(DEGRAD(s_deg)), 0.0f);
    float colors[8][3] = {
        {0.5, 0.0, 0.0},
        {0.0, 0.5, 0.0},
        {0.0, 0.0, 0.5},
        {1.0, 0.5, 0.5},
        {0.5, 1.0, 0.5},
        {0.5, 0.5, 1.0},
        {0.9, 0.9, 0.9},
        {0.1, 0.1, 0.1},
    }; // red, green, blue, red shiny, green shiny, blue shiny, light grey, dark grey
    for (int i = 0; i < glBoxes.size(); i++) {
        glBoxes.at(i).setColor(colors[i]);
        glBoxes.at(i).drawCuboid();
        glBoxes.at(i).drawCuboidGrid();
    }
}

void GLWidget::initializeGL() {
    glDepthFunc(GL_LESS);
    glClearColor(0.2, 0.2, 0.2, 0.2);
    glClearDepth(1.0);
}

void GLWidget::resizeGL(int width, int height) {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glViewport(0.0f, 0.0f, (float)width, (float)height);
    glLoadIdentity();
    gluPerspective(45.0f, (float)width/(float)height, 0.5f, 100.0f);
    glMatrixMode(GL_MODELVIEW);
}
Any ideas?
The tokens accepted by glBegin are
GL_POINTS
GL_LINES
GL_TRIANGLES
GL_TRIANGLE_FAN
GL_TRIANGLE_STRIP
GL_QUADS
GL_QUAD_STRIP
and
GL_POLYGON
The token you used, GL_LINE (note the missing trailing S), is not valid for glBegin.
The statement glEnd; will evaluate the address of the function glEnd and silently discard the result. Could it be that you have a Pascal or Delphi background? In C-like languages you have to add a matched pair of parentheses to make it a function call; functions that take no parameters are called with an empty pair of parentheses, e.g. in your case glEnd();.
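Putting both fixes together, the snippet from the question becomes:
glBegin(GL_LINES);
glColor3f(c[0], c[1], c[2]);
glVertex3f(v1.x, v1.y, v1.z);
glVertex3f(v2.x, v2.y, v2.z);
glEnd();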
Not related to your problem: all of the code in resizeGL should go to the head of paintGL (use the widget's width() and height() getters), and what you have in initializeGL belongs in paintGL too.
The proper use of initializeGL is to do one-time initialization, like loading textures and shaders, or preparing FBOs.
resizeGL is meant to re-/initialize things that depend on the window's size and are quite time consuming to change, like renderbuffers and/or textures used as attachments in FBOs for window-sized post-processing or similar. Setting the projection matrix does not belong there, and neither does the viewport; those go into paintGL.
glDepthFunc, glClearColor and glClearDepth directly influence the drawing process and as such belong with the drawing code, as sketched below.
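A sketch of what the head of paintGL could look like after this restructuring, reusing the projection parameters from the question:
void GLWidget::paintGL() {
    // Per-frame state setup, moved here from initializeGL and resizeGL.
    glViewport(0, 0, width(), height());
    glClearColor(0.2, 0.2, 0.2, 0.2);
    glClearDepth(1.0);
    glDepthFunc(GL_LESS);
    glEnable(GL_DEPTH_TEST);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0f, (float)width() / (float)height(), 0.5f, 100.0f);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // ... camera transform and drawing as before ...
}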
Also you should not use the immediate mode (glBegin … glEnd) at all. It's been outdated ever since OpenGL-1.1 was released over 15 years ago. Use Vertex Arrays, with the possible addition of Buffer Objects.
glEnd; should be glEnd();. This may actually fix your problem.
GL_LINE isn't a valid token for glBegin. To draw lines, you need GL_LINES (that's subtle, and you're in good company: this is a common mistake).
GL_LINE is used to control how polygons are rendered; it is passed to the glPolygonMode function.
It must be GL_LINES instead of GL_LINE. Most of the symbols accepted by glBegin are plural (e.g. GL_QUADS, GL_TRIANGLES, ...).

Skybox rotates out of sync with world

I've got a small scene with a loaded mesh, a ground plane and a skybox. I am generating a cube, and using the vertex positions as the cubemap texture co-ordinates.
Horizontal rotation (about the y-axis) works perfectly and the world movement is aligned with the skybox. Vertical rotation (about the camera's x-axis) doesn't seem to match up with the movement of the other objects, except that, strangely, when the camera is looking at the center of a cube face everything seems aligned. In other words, the movement is non-linear, and I'll try my best to illustrate the effect with some images:
First, the horizontal movement which as far as I can tell is correct:
Facing forward:
Facing left at almost 45 degrees:
Facing left at 90 degrees:
And now the vertical movement which seems to have some discrepancy in movement:
Facing forward again:
Notice the position of the ground plane in relation to the skybox in this image. I rotated slightly left to make it more apparent that the Sun is being obscured when it shouldn't be.
Facing slightly down:
Finally, a view straight up to show that the view is correctly centered on the (skybox) cube face.
Facing straight up:
Here's my drawing code, with the ground plane and mesh drawing omitted for brevity. (Note that the cube in the center is a loaded mesh, and isn't generated by the same function as the skybox.)
void MeshWidget::draw() {
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glPushMatrix();
    glRotatef(-rot_[MOVE_CAMERA][1], 0.0f, 1.0f, 0.0f);
    glRotatef(-rot_[MOVE_CAMERA][0], 1.0f, 0.0f, 0.0f);
    glRotatef(-rot_[MOVE_CAMERA][2], 0.0f, 0.0f, 1.0f);
    glDisable(GL_DEPTH_TEST);
    glUseProgramObjectARB(shader_prog_ids_[2]);
    glBindBuffer(GL_ARRAY_BUFFER, SkyBoxVBOID);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(Vec3), BUFFER_OFFSET(0));
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, SkyIndexVBOID);
    glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
    glUseProgramObjectARB(shader_prog_ids_[0]);
    glEnable(GL_DEPTH_TEST);
    glPopMatrix();
    glTranslatef(0.0f, 0.0f, -4.0f + zoom_factor_);
    glRotatef(rot_[MOVE_CAMERA][0], 1.0f, 0.0f, 0.0f);
    glRotatef(rot_[MOVE_CAMERA][1], 0.0f, 1.0f, 0.0f);
    glRotatef(rot_[MOVE_CAMERA][2], 0.0f, 0.0f, 1.0f);
    glPushMatrix();
    // Transform light to be relative to world, not camera.
    glRotatef(rot_[MOVE_LIGHT][1], 0.0f, 1.0f, 0.0f);
    glRotatef(rot_[MOVE_LIGHT][0], 1.0f, 0.0f, 0.0f);
    glRotatef(rot_[MOVE_LIGHT][2], 0.0f, 0.0f, 1.0f);
    float lightpos[] = {10.0f, 0.0f, 0.0f, 1.0f};
    glLightfv(GL_LIGHT0, GL_POSITION, lightpos);
    glPopMatrix();
    if (show_ground_) {
        // Draw ground...
    }
    glPushMatrix();
    // Transform and draw mesh...
    glPopMatrix();
}
And finally, here's the GLSL code for the skybox, which generates the texture co-ordinates:
Vertex shader:
void main()
{
    vec4 vVertex = vec4(gl_ModelViewMatrix * gl_Vertex);
    gl_TexCoord[0].xyz = normalize(vVertex).xyz;
    gl_TexCoord[0].w = 1.0;
    gl_TexCoord[0].x = -gl_TexCoord[0].x;
    gl_Position = gl_Vertex;
}
Fragment shader:
uniform samplerCube cubeMap;

void main()
{
    gl_FragColor = texture(cubeMap, gl_TexCoord[0]);
}
I'd also like to know if using quaternions for all camera and object rotations would help.
If you need any more information (or images), please ask!
I think you should be generating your skybox texture lookup based on a worldspace vector (gl_Vertex?), not a view-space vector (vVertex).
I'm assuming your skybox coordinates are already defined in worldspace, since I don't see a model matrix transform before drawing it (only the camera rotations). In that case you should sample the skybox texture based on the worldspace position of a vertex; it doesn't need to be transformed by the camera. You're already transforming the vertex by the camera, so you shouldn't need to transform the lookup vector as well.
Try replacing normalize(vVertex) with normalize(gl_Vertex) and see if that improves things, as in the sketch below.
Also, I might get rid of the x = -x thing; I suspect that was put in to compensate for the fact that the texture was rotating in the wrong direction originally?
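Applying both suggestions, the vertex shader would look something like this (a sketch; the rest of the question's code is unchanged):
void main()
{
    // Sample the cubemap with the worldspace vertex direction,
    // not the view-space one, and drop the x = -x flip.
    gl_TexCoord[0].xyz = normalize(gl_Vertex).xyz;
    gl_TexCoord[0].w = 1.0;
    gl_Position = gl_Vertex;
}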
I'd also like to know if using quaternions for all camera and object rotations would help.
Help how? They don't offer any new functionality over matrices. I've heard arguments both ways as to whether matrices or quaternions have better performance, but I see no need to use them here.

OpenGL lighting problem when rotating the camera

I draw buildings in my game world and I shade them with the following code:
GLfloat light_ambient[] = {0.0f, 0.0f, 0.0f, 1.0f};
GLfloat light_position[] = {135.66f, 129.83f, 4.7f, 1.0f};
glShadeModel(GL_SMOOTH);
glEnable(GL_LIGHT0);
glEnable(GL_COLOR_MATERIAL);
glLightfv(GL_LIGHT0, GL_AMBIENT, light_ambient);
glLightfv(GL_LIGHT0, GL_POSITION, light_position);
glColorMaterial(GL_FRONT, GL_AMBIENT);
It works nicely.
But when I start flying around the world, the lighting reacts as if the world were an object being rotated: the lighting changes when my camera angle changes.
How do I undo that rotation, so that the lighting behaves as if I were not rotating the world? Then I could give my buildings static shading that changes only depending on where the sun is in the sky.
Edit: here is the rendering code:
int DrawGLScene()
{
    // stuff
    glLoadIdentity();
    glRotatef(XROT, 1.0f, 0.0f, 0.0f);
    glRotatef(YROT, 0.0f, 1.0f, 0.0f);
    glRotatef(ZROT, 0.0f, 0.0f, 1.0f);
    glTranslatef(-XPOS, -YPOS, -ZPOS);
    // draw world
}
http://www.opengl.org/resources/faq/technical/lights.htm
See #18.050
In short, you need to make sure you're defining your light position in the right reference frame and applying the appropriate transforms each frame to keep it where you want it.
Edit:
With the removal of the fixed function pipeline in OpenGL 3.1, the code and answer here are deprecated. The correct answer now is to pass your light position(s) into your vertex/fragment shaders and perform your shading using that position in world space. The calculation varies based on the type of lighting you're doing (PBR, Phong, deferred, etc.).
I believe your problem arises from the fact that OpenGL light positions are transformed by the modelview matrix in effect when you specify them, and are then stored internally in camera (eye) coordinates. This means that once you place a light, whenever you move your camera the light will move with it.
To fix this, simply re-specify your light position each frame, after the camera transform has been applied, as sketched below.
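A minimal sketch against the question's DrawGLScene (the light position values are the ones from the question):
int DrawGLScene()
{
    // stuff
    glLoadIdentity();
    glRotatef(XROT, 1.0f, 0.0f, 0.0f);
    glRotatef(YROT, 0.0f, 1.0f, 0.0f);
    glRotatef(ZROT, 0.0f, 0.0f, 1.0f);
    glTranslatef(-XPOS, -YPOS, -ZPOS);
    /* Re-specify the light after the camera transform: the position is
       multiplied by the current modelview matrix, so it ends up at a
       fixed place in the world instead of following the camera. */
    GLfloat light_position[] = {135.66f, 129.83f, 4.7f, 1.0f};
    glLightfv(GL_LIGHT0, GL_POSITION, light_position);
    // draw world
}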

Problem with light and depth in OpenGL

glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
ifstream myFile("Coordinates.txt");
if (!myFile.is_open())
{
    cout << "Unable to open file";
    exit(1); // terminate with error
}
// Light values and coordinates
float ambientLight[] = { 0.3f, 0.3f, 0.3f, 1.0f };
float diffuseLight[] = { 0.7f, 0.7f, 0.7f, 1.0f };
float specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
float lightPos[] = { 0.0f, -150.0f, -150.0f, 1.0f };
glEnable(GL_CULL_FACE); // Do not calculate inside of jet
glFrontFace(GL_CCW); // Counter-clockwise polygons face out
// Enable lighting
glEnable(GL_LIGHTING);
// Set up and enable light 0
glLightfv(GL_LIGHT0, GL_AMBIENT, ambientLight);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuseLight);
glLightfv(GL_LIGHT0, GL_SPECULAR, specular);
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
glEnable(GL_LIGHT0);
// Light values and coordinates
float specref[] = { 1.0f, 1.0f, 1.0f, 1.0f };
// Enable color tracking
glEnable(GL_COLOR_MATERIAL);
// Set material properties to follow glColor values
glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);
// All materials hereafter have full specular reflectivity
// with a high shine
glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, specref);
glMateriali(GL_FRONT_AND_BACK, GL_SHININESS, 128);
while (!myFile.eof())
{
    glPushMatrix();
    myFile >> plot[0];
    myFile >> plot[1];
    myFile >> plot[2];
    myFile >> plot[3]; // this data will not be used
    glColor3f(0.60f/1.5, 0.80f/1.5, 0.90f/1.5);
    glTranslatef((plot[0]-1.15)*26, (plot[2]-0.51)*45, (plot[1]-1)*30);
    glutSolidSphere(2, 12, 12);
    glLoadIdentity();
    glPopMatrix();
    axes += 0.00005f;
}
glRotatef(axes, 0.0f, 1.0f, 0.0f);
myFile.close();
glFlush();
glutSwapBuffers();
This is my first time playing with lighting.
My problem is that after I put in all the lighting code from a tutorial, the objects seem to exist only in one plane, the xy-plane, even though my data has coordinates in all of x, y and z, and the reflection seems a bit off.
Can anyone tell me why, and how to fix it?
Have a look-see here: Avoiding 16 Common OpenGL Pitfalls
You haven't given enough information. What values are in your file? Why are you loading plot[3] when it goes unused? Do you mean that the glutSolidSphere is rendering as a flat 2D object in the xy-plane?
I'd recommend you familiarise yourself with the core OpenGL functionality before using the built-in lighting; this problem probably has nothing to do with lighting. I also wouldn't recommend using GL's built-in lighting for anything other than testing and tiny projects anyway: it's not very flexible and has lots of limitations.