Bad shading result when applying material - opengl

I've got an OpenGL 3D scene with two simple objects (glutSolidCube and glutSolidTeapot). When I set up the lights with GL_COLOR_MATERIAL enabled, I get the following result:
Which is good. Then when I set up my own material like this:
//diffuse light color variables
GLfloat dlr = 0.4;
GLfloat dlg = 0.6;
GLfloat dlb = 0.9;
//ambient light color variables
GLfloat alr = 0.7;
GLfloat alg = 0.7;
GLfloat alb = 0.7;
//specular light color variables
GLfloat slr = 0.4;
GLfloat slg = 0.4;
GLfloat slb = 0.4;
//note: glMaterialfv reads four floats (RGBA), so each array needs an alpha value
GLfloat DiffuseLight[] = {dlr, dlg, dlb, 1.0}; //set DiffuseLight[] to the specified values
GLfloat AmbientLight[] = {alr, alg, alb, 1.0}; //set AmbientLight[] to the specified values
GLfloat SpecularLight[] = {slr, slg, slb, 1.0}; //set SpecularLight[] to the specified values
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, AmbientLight);
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, DiffuseLight);
glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, SpecularLight);
I get this very different result, in which you can see it's not being shaded properly; it looks like FLAT shading even though I defined it as SMOOTH (Gouraud).
Where can the problem be? Is it on the material definition?

You forgot to set specular shininess.
glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, 12.0f);
Set it to 10...25 and it'll look much better/shinier. It won't look as good as per-pixel lighting, though. The default value for shininess is zero, which will look exactly like what you see - i.e. ugly.

I get this very different result, in which you can see it's not being shaded properly; it looks like FLAT shading even though I defined it as SMOOTH (Gouraud).
Well, you got smooth shading. However, the OpenGL fixed-function pipeline evaluates illumination only at the vertices, then performs barycentric interpolation over the face. The result you got is exactly what is to be expected.
What you want is per-pixel/per-fragment illumination. The only way to get this is with a shader (well, it's also possible by tinkering with the so-called "texture combiner environment", but getting that to work properly is a lot of hard work; implementing a Phong illumination shader is a matter of minutes).
By changing your lighting and material settings you're just putting emphasis on the shortcomings of the Gouraud shading model.
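For illustration, a minimal per-fragment Phong sketch in the same legacy-style GLSL the fixed pipeline pairs with (assuming a single positional light in GL_LIGHT0 and the usual glMaterial state; a sketch, not a drop-in implementation). The vertex shader:
varying vec3 vPos;
varying vec3 vNormal;
void main(void)
{
    // pass eye-space position and normal on to the fragment stage
    vPos = vec3(gl_ModelViewMatrix * gl_Vertex);
    vNormal = gl_NormalMatrix * gl_Normal;
    gl_Position = ftransform();
}
And the fragment shader, which evaluates the illumination per pixel instead of per vertex:
varying vec3 vPos;
varying vec3 vNormal;
void main(void)
{
    vec3 N = normalize(vNormal);
    vec3 L = normalize(vec3(gl_LightSource[0].position) - vPos); // vector to the light
    vec3 V = normalize(-vPos);                                   // vector to the eye
    vec3 R = reflect(-L, N);                                     // mirrored light direction
    vec4 ambient  = gl_FrontMaterial.ambient  * gl_LightSource[0].ambient;
    vec4 diffuse  = gl_FrontMaterial.diffuse  * gl_LightSource[0].diffuse * max(dot(N, L), 0.0);
    vec4 specular = gl_FrontMaterial.specular * gl_LightSource[0].specular
                  * pow(max(dot(R, V), 0.0), gl_FrontMaterial.shininess);
    gl_FragColor = ambient + diffuse + specular;
}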

Related

How to stop positional light moving with camera

When I rotate and/or move the camera in my OpenGL project, it is almost as if there is a spotlight moving with it; however, I have set up my GL light to be positional and given it a static position.
void Lighting::Display()
{
glPushMatrix();
glTranslatef(0.f, yoffset, 0.f); // move to start
glTranslatef(0.f, ceilHeight * scale[0], 0.f);
DrawLight();
glDisable(GL_LIGHTING);
glPushAttrib(GL_ALL_ATTRIB_BITS);
// Match colour of sphere to specular colour of light
glColor4fv(specular);
glTranslatef(0.f, -10.0f * scale[1], 0.f);
glutSolidSphere(5.0f * scale[0], 25, 25);
glPopAttrib();
glPopMatrix();
// Re-enable lighting after light source has been drawn
glEnable(GL_LIGHTING);
// Set properties GL_LIGHT0
glLightfv(GL_LIGHT0, GL_AMBIENT, ambient);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse);
glLightfv(GL_LIGHT0, GL_SPECULAR, specular);
//glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION, 0.0001f);
GLfloat pos[4] = { 0.f, 950.f, 0.f, 1.f };
glLightfv(GL_LIGHT0, GL_POSITION, pos);
// enable GL_LIGHT0 with these defined properties
glEnable(GL_LIGHT0);
}
I expected to have a single source of light hanging in the centre of the scene, with light being emitted equally in all directions from its position. However, a trail of light seems to be emitted like a spotlight instead.
Here is an image showcasing the issue:
As you can see, there is an odd line of light being emitted.
When the light position is set by glLightfv(GL_LIGHT0, GL_POSITION, pos), then pos is multiplied by the current model view matrix.
The intensity of the ambient light (Ia), diffuse light (Id) and specular light (Is), of the Blinn–Phong reflection model is calculated as follows:
N ... normal vector
L ... light vector (from the vertex position to the light)
V ... view vector (from the vertex position to the eye)
sh ... shininess
H = normalize(V + L)
NdotH = max(dot(N, H), 0.0)
Ia = 1
Id = max(dot(N, L), 0.0)
Is = pow(NdotH, sh)
So the ambient light (GL_AMBIENT) is independent of any direction.
The diffuse light (GL_DIFFUSE) depends on the normal vector of the surface (N) and the direction of the incident light (L). It stays constant on the lit surface, independent of the point of view.
The specular light (GL_SPECULAR) depends on the surface's normal vector (N), the light direction (L), and the viewing direction (V). This causes the specular highlights to change when you move through the scene, because the viewing direction to the lit surfaces changes.
Further note that the light calculations in the deprecated OpenGL fixed-function light model are done per vertex (Gouraud shading). The calculated light is interpolated over the area between the corners of the primitives. A modern implementation would do the light calculation per fragment (Phong shading).
Gouraud shading causes the spotted look of the specular highlights at the ceiling and can produce an unexpected appearance. Tessellating the areas into smaller pieces may improve that, but the best solution would be to switch to modern OpenGL, write a shader, and implement a per-fragment light model suited to your needs.
You pop your matrix right before setting the light position. As a result, the position of your light is always exactly what you set it to. Everything else, I assume, is then multiplied by your view matrix. The view matrix essentially transforms the world into the view of the camera: a camera doesn't move, the world moves around the camera. Since the light position is not ALSO multiplied by this view matrix, you get a light that appears to maintain a constant position relative to the camera.
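A minimal sketch of the usual fix, assuming a gluLookAt-style camera (the eye/centre variables are placeholders): set GL_POSITION each frame after the view transform has been loaded onto the modelview matrix, so the light position is multiplied by the view matrix together with the rest of the world.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, 0.0, 1.0, 0.0); // camera/view transform first (placeholder values)
// the world-space position below now gets transformed into eye space too,
// so the light stays fixed in the world instead of following the camera
GLfloat pos[4] = { 0.f, 950.f, 0.f, 1.f };
glLightfv(GL_LIGHT0, GL_POSITION, pos);
// ... then draw the scene as usual ...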

OpenGL shader light position changed in shader

First of all, I'm sorry if the title is misleading but I'm not quite sure how to describe the issue, if it is an issue at all.
I'm very new to OpenGL, and I have just started to scratch the surface of GLSL following this tutorial.
The main part of the rendering function looks like this:
GLfloat ambientLight[] = {0.5f, 0.5f, 0.5f, 1.0f};
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientLight);
//Add directed light
GLfloat lightColor1[] = {0.5f, 0.5f, 0.5f, 1.0f}; //Color (0.5, 0.5, 0.5)
//Light position animated in a circle; w = 0.0 makes this a directional light
GLfloat lightPos1[] = { 40.0 * cos((float) elapsed_time / 500.0) , 40.0 * sin((float) elapsed_time / 500.0), -20.0f, 0.0f};
glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor1);
glLightfv(GL_LIGHT0, GL_POSITION, lightPos1);
glPushMatrix();
glTranslatef(0,0,-50);
glColor3f(1.0, 1.0, 1.0);
glRotatef( (float) elapsed_time / 100.0, 0.0,1.0,0.0 );
glUseProgram( shaderProg );
glutSolidTeapot( 10 );
glPopMatrix();
Where "shaderProg" is a shader program consisting of a vertex shader
varying vec3 normal;
void main(void)
{
normal = gl_Normal;
gl_Position = ftransform();
}
And a fragment shader
uniform vec3 lightDir;
varying vec3 normal;
void main() {
float intensity;
vec4 color;
intensity = dot(vec3(gl_LightSource[0].position), normalize(normal));
if (intensity > 0.95)
color = vec4(1.0,0.5,0.5,1.0);
else if (intensity > 0.5)
color = vec4(0.6,0.3,0.3,1.0);
else if (intensity > 0.25)
color = vec4(0.4,0.2,0.2,1.0);
else
color = vec4(0.2,0.1,0.1,1.0);
gl_FragColor = color;
}
I have two issues.
First is that according to the tutorial the uniform lightDir should be usable, yet I only get results with vec3(gl_LightSource[0].position). Is there any difference between the two?
The other problem is that the setup rotates the light around the teapot differently when using the shader program. Without the shader, the light orbits the teapot in the camera's XY plane. Yet if the shader is used, the light moves in the camera's XZ plane. Have I made a mistake? Or have I forgotten some translation in the shaders?
Thanks in advance : )
First is that according to the tutorial the uniform lightDir should be
usable, yet I only get results with vec3(gl_LightSource[0].position).
Is there any difference between the two?
That tutorial uses lightDir as a uniform variable. You have to set that yourself via some glUniform call. Whether it is the same or not will depend on what exactly you set as the light position here. The lightDir as it is used here is the vector from the surface point you want to shade to the light source. The tutorial uses a directional light, so the light direction is the same everywhere in the scene and does not really depend on the position of the vertex/fragment. You can do the same with fixed-function lighting by setting the w component of the light position to 0. If you don't do that, the results will be very different.
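For illustration, making the fixed-function light directional is just a matter of the w component (the array name here is hypothetical; the xyz values are the tutorial's stated direction, interpreted as pointing toward the light):
// w = 0.0 makes this a directional light; (x, y, z) is the direction toward the light
GLfloat dirLight[] = { -1.0f, 0.5f, 0.5f, 0.0f };
glLightfv(GL_LIGHT0, GL_POSITION, dirLight);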
A side note: the GLSL code in that tutorial unfortunately relies on lots of deprecated features. If you are learning GLSL, I would really recommend learning the modern GL core profile.
lightDir is not a pre-defined uniform. The typical definition for a light direction vector is just a normalized vector to the light position in your shader, which you can easily calculate yourself by normalizing the position vector:
vec3 lightDir = normalize(gl_LightSource[0].position.xyz);
You could also pass it into the shader as a uniform you define yourself. For this approach, you would define the uniform in your fragment shader:
uniform vec3 lightDir;
and then get the uniform location with the glGetUniformLocation() call, and set a value with the glUniform3f() call. So once after linking the shader, you have this:
GLint lightDirLoc = glGetUniformLocation(shaderProg, "lightDir");
and then every time you want to change the light direction to (vx, vy, vz):
glUniform3f(lightDirLoc, vx, vy, vz);
For the second part of your question: The reason you get different behavior for the light position with the fixed pipeline compared to what you get with your own shader is that the fixed pipeline applies the current modelview matrix to the specified light position, which is not done in your shader.
As a number of others already suggested: If you learn OpenGL now, I strongly recommend that you skip the legacy features, which includes the fixed function light source parameters. In this case, you can simply use uniform variables you define yourself, as I already illustrated as an option for the lightDir variable above.
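As a concrete sketch of that difference (an illustration under the legacy profile, not the only possible fix): glLightfv multiplies the given position by the modelview matrix current at call time, so gl_LightSource[0].position arrives in the shader in eye space, while the vertex shader in the question compares it against gl_Normal, which is still in object space. Bringing the normal into eye space makes the two consistent:
varying vec3 normal;
void main(void)
{
    // gl_NormalMatrix transforms the normal into eye space, the same
    // space in which GL stored gl_LightSource[0].position
    normal = gl_NormalMatrix * gl_Normal;
    gl_Position = ftransform();
}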

GLSL: How to lower 2D light center density?

I found a shader on the Internet which creates 2D lights.
What I'm curious about is: how can I make the centre of the light less dense, so that I can still see other objects through it while they are being illuminated?
Here is the shader:
uniform vec2 lightLocation;
uniform vec3 lightColor;
uniform float screenHeight;
void main() {
float distance = length(lightLocation - gl_FragCoord.xy);
float attenuation = 1.0 / distance;
vec4 color = vec4(attenuation, attenuation, attenuation, attenuation) * vec4(lightColor, 1);
gl_FragColor = color;
}
This is how the light is rendered:
glUseProgram(lightShaderProgram);
glUniform2f(glGetUniformLocation(lightShaderProgram, "lightLocation"), location.getX(), location.getY());
glUniform3f(glGetUniformLocation(lightShaderProgram, "lightColor"), red, green, blue);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glBegin(GL_QUADS); {
glVertex2f(0, 0);
glVertex2f(0, Engine.getDisplayHeight());
glVertex2f(Engine.getDisplayWidth(), Engine.getDisplayHeight());
glVertex2f(Engine.getDisplayWidth(), 0);
} glEnd();
This is an image of a light created with this shader and a red rectangle being illuminated by the light.
From what I was able to understand from my Google searches, I guess there should be other variables in the shader, but I couldn't figure out which ones I need and how to implement them. Any help?
You can tinker with float attenuation = 1.0 / distance;. If you want a more rapid drop in brightness with distance you can, for example, square the distance, or if you want to make it dimmer in general you can subtract some constant from the attenuation. For example, http://glsl.heroku.com/e#18242.1
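A few hedged variants of that one line (the constants are arbitrary starting points; the clamps keep the value from going negative or blowing up at the centre):
float attenuation = 1.0 / (distance * distance);     // quadratic: much faster falloff with distance
float attenuation = max(1.0 / distance - 0.05, 0.0); // globally dimmer: subtract a constant, clamped at zero
float attenuation = min(1.0 / distance, 0.8);        // capped: the centre never exceeds 0.8, so objects under it stay visible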

What exactly happens with alpha colors after blending?

I am working on a test project where one big 3D quad is intersected by several other 3D quads (rotated differently). All quads have transparency (ranging from fully opaque to fully transparent). The small quads never overlap; well, they might, but the camera placement and the Z-buffer make sure that only those parts that may overlap actually do.
To render this, I first render the big quad to a different rgba framebuffer for later lookup.
Now I start rendering to the screen. A skybox is rendered first, then the small quads are rendered with alpha blending enabled: GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA. After that I render the big quad again, this time also with alpha blending enabled. Of course, only the pixels in front of the quads will be rendered, because of the Z-buffer.
That's why, in the fragment shader of the small quads, I do a raytrace to find the intersection with the big quad. If the intersection point is BEHIND the small quad, I fetch the texel from the originally created framebuffer and blend this texel manually with the calculated small quad texel color.
But the result is not the same: the colors in front are rendered correctly (the GPU handles the blending there), while the colors behind them are "weird". They are lighter or darker, but I never get the same result. After consideration, I think this must be because I am not emitting the correct alpha value from my small-quads shader, so the blending performed by the GPU changes the colors even more.
So, how exactly is the alpha value calculated when blending on the GPU with the above-mentioned blending method? In the OpenGL manual I find: A * Srgba + (1 - A) * Drgba. When I emit that result, the blending is not the same. I am quite sure that this is because the result passes through the GPU blending again.
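(For reference: with plain glBlendFunc, as opposed to glBlendFuncSeparate, the same factors apply to all four channels, so the alpha stored in the framebuffer is blended as well:)
// what the blend stage computes for glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
out.rgb = src.a * src.rgb + (1.0 - src.a) * dst.rgb;
out.a   = src.a * src.a   + (1.0 - src.a) * dst.a;   // the stored alpha is not simply src.a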
The ray tracing is correct, I'm certain of that.
So, what should I do with my manual blending?
Or, should I use another method to get the right effect?
Not really necessary I believe, but here is some code (optimization is for later):
vec4 vRayOrg = vec4(0.0, 0.0, 0.0, 1.0);
vec4 vRayDir = vec4(normalize(vBBPosition.xyz), 0.0);
vec4 vPlaneOrg = mView * vec4(0.0, 0.0, 0.0, 1.0);
vec4 vPlaneNormal = mView * vec4(0.0, 1.0, 0.0, 0.0);
float div = dot(vRayDir.xyz, vPlaneNormal.xyz);
float t = dot(vPlaneOrg.xyz - vRayOrg.xyz, vPlaneNormal.xyz) / div;
vec4 pIntersection = vRayOrg + t * vRayDir;
vec3 normal = normalize(vec4(vCoord, 0.0, 0.0)).xyz;
vec3 vLight = normalize(vPosition.xyz / vPosition.w);
float distance = length(vCoord) - 1.0;
float distf = clamp(0.225 - distance, 0.0, 1.0);
float diffuse = clamp(0.25 + dot(-normal, vLight), 0.0, 1.0);
vec4 cOut = diffuse * 9.0 * distf * cGlow;
if (distance > 0.0 && t >= 0.0 && pIntersection.z <= vBBPosition.z)
{
vec2 vTexcoord = (vProjectedPosition.xy / vProjectedPosition.w) * 0.5 + 0.5;
vec4 cLookup = texture(tLookup, vTexcoord);
cOut = cLookup.a * cLookup + (1.0 - cLookup.a) * cOut;
}
vFragColor = cOut;

OpenGL lighting small objects?

I'm having a problem with lighting when dealing with really small particles. I'm doing particle-based fluid simulation and am right now rendering the fluid as really tiny polygonized spheres (by really tiny I mean about 0.03 units radius). The lighting in my scene isn't lighting them how I want and I can't get it to look right. I'm looking for something similar to the soft lighting on the particles in this image...
However my particles look like this...
See how my particles have bright white sections, whereas the green particles are just lit softly and don't have large white hotspots. I know the reason is either the settings for my light or simply the fact that the particles are so small that the light takes up a larger space (is that possible??). My settings for lighting are as follows...
GLfloat mat_ambient[] = {0.5, 0.5, 0.5, 1.0};
GLfloat mat_specular[] = {1.0, 1.0, 1.0, 1.0};
GLfloat mat_shininess[] = {10.0};
GLfloat light_position[] = {0.0, 0.1, p_z, 1.0};
GLfloat model_ambient[] = {0.5, 0.5, 0.5, 1.0};
glMaterialfv(GL_FRONT, GL_AMBIENT, mat_ambient);
glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
glMaterialfv(GL_FRONT, GL_SHININESS, mat_shininess);
glLightfv(GL_LIGHT0, GL_POSITION, light_position);
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, model_ambient);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_DEPTH_TEST);
glShadeModel(GL_SMOOTH);
glColorMaterial(GL_FRONT_AND_BACK,GL_AMBIENT_AND_DIFFUSE);
glEnable(GL_COLOR_MATERIAL);
Thanks for all the suggestions everyone, but unfortunately nothing worked. I sat down with my graphics professor and we determined that this problem was in fact related to the size of the particles and the fact that OpenGL treats directional lights as being infinitely far away from any vertex. The proper way to fix it was to modify the constant attenuation of the light source like this...
glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, 10.0);
Now my particles look like this...
which is exactly the lighting I was after!
The size of the particles isn't an issue - you're over-saturating your colours.
For each RGB component, you should have ambient + diffuse + specular <= 1.0
For a scene like this I'd expect ambient to be no more than 0.1 or so, diffuse of 0.6 or so, and specular making up the rest.
It looks like you need to turn down the specular component of your material, turn down the ambient a bit, and add some diffuse shading (GL_DIFFUSE). Consider also positioning the light behind the viewport/camera.
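As a hedged sketch of that budget (the numbers are just the rough proportions suggested above, not tuned values):
GLfloat mat_ambient[]  = {0.1, 0.1, 0.1, 1.0}; // small ambient contribution
GLfloat mat_diffuse[]  = {0.6, 0.6, 0.6, 1.0}; // diffuse does most of the shading
GLfloat mat_specular[] = {0.3, 0.3, 0.3, 1.0}; // specular makes up the rest (sum <= 1.0 per channel)
glMaterialfv(GL_FRONT, GL_AMBIENT,  mat_ambient);
glMaterialfv(GL_FRONT, GL_DIFFUSE,  mat_diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);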