glsl light doesn't seem to be working - c++

I'm working on some basic lighting in my application, and am unable to get a simple light to work (so far..).
Here's the vertex shader:
#version 150 core
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
uniform mat4 pvmMatrix;
uniform mat3 normalMatrix;
in vec3 in_Position;
in vec2 in_Texture;
in vec3 in_Normal;
out vec2 textureCoord;
out vec4 pass_Color;
uniform LightSources {
vec4 ambient = vec4(0.5, 0.5, 0.5, 1.0);
vec4 diffuse = vec4(0.5, 0.5, 0.5, 1.0);
vec4 specular = vec4(0.5, 0.5, 0.5, 1.0);
vec4 position = vec4(1.5, 7, 0.5, 1.0);
vec4 direction = vec4(0.0, -1.0, 0.0, 1.0);
} lightSources;
struct Material {
vec4 ambient;
vec4 diffuse;
vec4 specular;
float shininess;
};
Material mymaterial = Material(
vec4(1.0, 0.8, 0.8, 1.0),
vec4(1.0, 0.8, 0.8, 1.0),
vec4(1.0, 0.8, 0.8, 1.0),
0.995
);
void main() {
gl_Position = pvmMatrix * vec4(in_Position, 1.0);
textureCoord = in_Texture;
vec3 normalDirection = normalize(normalMatrix * in_Normal);
vec3 lightDirection = normalize(vec3(lightSources.direction));
vec3 diffuseReflection = vec3(lightSources.diffuse) * vec3(mymaterial.diffuse) * max(0.0, dot(normalDirection, lightDirection));
/*
float bug = 0.0;
bvec3 result = equal( diffuseReflection, vec3(0.0, 0.0, 0.0) );
if(result[0] && result[1] && result[2]) bug = 1.0;
diffuseReflection.x += bug;
*/
pass_Color = vec4(diffuseReflection, 1.0);
}
And here's the fragment shader:
#version 150 core
uniform sampler2D texture;
in vec4 pass_Color;
in vec2 textureCoord;
void main() {
vec4 out_Color = texture2D(texture, textureCoord);
gl_FragColor = pass_Color;
//gl_FragColor = out_Color;
}
I'm rendering a textured wolf to the screen as a test. If I change the fragment shader to use out_Color, I see the wolf rendered properly. If I use the pass_Color, I see nothing on the screen.
This is what the screen looks like when I use out_Color in the fragment shader:
I know the diffuseReflection vector is full of 0's, by uncommenting this code in the vertex shader:
...
/*
float bug = 0.0;
bvec3 result = equal( diffuseReflection, vec3(0.0, 0.0, 0.0) );
if(result[0] && result[1] && result[2]) bug = 1.0;
diffuseReflection.x += bug;
*/
...
This will make the x component of the diffuseReflection vector 1.0, which turns the wolf red.
Does anyone see anything obvious I'm doing wrong here?

As suggested in the comments, try debugging incrementally. I see a number of ways your shader could be wrong. Maybe your normalMatrix isn't being passed properly? Maybe your in_Normal isn't mapped to the appropriate input? Maybe when you cast lightSources.direction to a vec3, the compiler is doing something funky? Maybe your shader isn't even running at all, but you think it is? Maybe you have geometry or tessellation stages and the data isn't passing through them correctly?
No one really has a chance of answering this definitively from the code alone. To me it looks fine--but again, any of the factors above could be at fault, and probably more.
To debug this, you need to break it down. As suggested in the comments, try rendering the normals. Then, you should try rendering the light direction. Then render your n dot l term. Then multiply by your material parameters, then by your texture. Somewhere along the way you'll figure out the problem. As an additional tip, change your clear color to something other than black so that any black-rendered objects stand out.
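For example, the first step (rendering the normals) can be a two-line debug change in the vertex shader from your question. This is only a sketch, assuming in_Normal and normalMatrix are bound as shown above:
// Debug sketch: visualize the eye-space normal as a color by
// remapping it from [-1, 1] into the displayable [0, 1] range.
vec3 n = normalize(normalMatrix * in_Normal);
pass_Color = vec4(n * 0.5 + 0.5, 1.0);
If the wolf still renders black (or not at all), the problem is upstream of the lighting math; if it shows smooth colors, move on to the light direction and the n dot l term.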
It's worth noting that the above advice--break it down--is applicable to all things debugging, not just shaders. As I see it, you haven't done so here.

Related

How do I adjust my GLSL code to add additional light sources?

In the code below I am trying to implement a fragment shader program for Phong:
// Inputs from application.
// Generally, "in" like the position and normal vectors for things that change frequently,
// and "uniform" for things that change less often (think scene versus vertices).
in vec3 position_cam, normal_cam;
uniform mat4 view_mat;
// This light setup would usually be passed in from the application.
vec3 light_position_world = vec3 (10.0, 25.0, 10.0);
vec3 Ls = vec3 (1.0, 1.0, 1.0); // neutral, full specular color of light
vec3 Ld = vec3 (0.8, 0.8, 0.8); // neutral, lessened diffuse light color of light
vec3 La = vec3 (0.12, 0.12, 0.12); // ambient color of light - just a bit more than dk gray bg
// Surface reflectance properties for Phong model below.
vec3 Ks = vec3 (1.0, 1.0, 1.0); // fully reflect specular light
vec3 Kd = vec3 (0.32, 0.18, 0.5); // purple diffuse surface reflectance
vec3 Ka = vec3 (1.0, 1.0, 1.0); // fully reflect ambient light
float specular_exponent = 400.0; // specular 'power' -- controls "roll-off"
// Shader programs can also designate outputs.
out vec4 fragment_color; // color of surface to draw in this case
void main ()
{
fragment_color = vec4 (Kd, 1.0);
}
I have two questions:
How do I add 2 additional directional light sources to my code? Do I simply add more vec3 Ld variables to my light setup or is there something else I must do?
How do I set the Phong exponent high enough to produce sharp and bright highlights?
In GLSL you can use arrays and structures. Define an array of light sources (see Array constructors and Struct constructors):
const int no_of_lights = 2;
struct TLightSource
{
vec3 lightPos;
vec3 Ls;
vec3 Ld;
vec3 La;
float shininess;
};
TLightSource lightSources[no_of_lights] = TLightSource[no_of_lights](
TLightSource(vec3(10.0, 25.0, 10.0), vec3(1.0, 1.0, 1.0), vec3(0.8, 0.8, 0.8), vec3(0.12, 0.12, 0.12), 10.0),
TLightSource(vec3(-10.0, 25.0, 10.0), vec3(1.0, 0.0, 0.0), vec3(0.8, 0.0, 0.0), vec3(0.12, 0.0, 0.0), 10.0)
);
Use a for loop to iterate through the light sources and sum up the light color for the ambient, diffuse and specular terms (e.g. the Phong reflection model):
// Interpolated inputs from the vertex shader (the question's position_cam and normal_cam).
in vec3 normalInterp;
in vec3 vertPos;
out vec4 frag_color;
void main()
{
    vec3 normal = normalize(normalInterp);
    vec3 color = vec3(0.0);
    for (int i = 0; i < no_of_lights; i++)
    {
        // ambient contribution of this light
        color += Ka * lightSources[i].La;
        // diffuse (Lambertian) contribution
        vec3 lightDir = normalize(lightSources[i].lightPos - vertPos);
        float lambertian = max(dot(lightDir, normal), 0.0);
        color += lambertian * Kd * lightSources[i].Ld;
        // specular contribution, only for surfaces facing the light
        if (lambertian > 0.0)
        {
            vec3 viewDir = normalize(-vertPos);
            vec3 reflectDir = reflect(-lightDir, normal);
            float RdotV = max(dot(reflectDir, viewDir), 0.0);
            float specular = pow(RdotV, lightSources[i].shininess / 4.0);
            color += specular * Ks * lightSources[i].Ls;
        }
    }
    frag_color = vec4(color, 1.0);
}
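If the light parameters should come from the application rather than being hard-coded in the shader, the same struct can also be declared as a uniform array. A minimal sketch; each member would then be uploaded with the matching glUniform* call after querying its location, e.g. glGetUniformLocation(program, "lightSources[0].lightPos"):
const int no_of_lights = 2;
struct TLightSource
{
    vec3 lightPos;
    vec3 Ls;
    vec3 Ld;
    vec3 La;
    float shininess;
};
// Filled by the application instead of being initialized in the shader.
uniform TLightSource lightSources[no_of_lights];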

Glitchy Facial Morph Target Animation in OpenGL (C++)

I've been trying to implement Morph Target animation in OpenGL with Facial Blendshapes by following this tutorial. The vertex shader for the animation looks something like this:
#version 400 core
in vec3 vNeutral;
in vec3 vSmile_L;
in vec3 nNeutral;
in vec3 nSmile_L;
in vec3 vSmile_R;
in vec3 nSmile_R;
uniform float left;
uniform float right;
uniform float top;
uniform float bottom;
uniform float near;
uniform float far;
uniform vec3 cameraPosition;
uniform vec3 lookAtPosition;
uniform vec3 upVector;
uniform vec4 lightPosition;
out vec3 lPos;
out vec3 vPos;
out vec3 vNorm;
uniform vec3 pos;
uniform vec3 size;
uniform mat4 quaternion;
uniform float smile_w;
void main(){
//float smile_l_w = 0.9;
float neutral_w = 1 - 2 * smile_w;
clamp(neutral_w, 0.0, 1.0);
vec3 vPosition = neutral_w * vNeutral + smile_w * vSmile_L + smile_w * vSmile_R;
vec3 vNormal = neutral_w * nNeutral + smile_w * nSmile_L + smile_w * nSmile_R;
//vec3 vPosition = vNeutral + (vSmile_L - vNeutral) * smile_w;
//vec3 vNormal = nNeutral + (nSmile_L - nNeutral) * smile_w;
normalize(vPosition);
normalize(vNormal);
mat4 translate = mat4(1.0, 0.0, 0.0, 0.0,
0.0, 1.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
pos.x, pos.y, pos.z, 1.0);
mat4 scale = mat4(size.x, 0.0, 0.0, 0.0,
0.0, size.y, 0.0, 0.0,
0.0, 0.0, size.z, 0.0,
0.0, 0.0, 0.0, 1.0);
mat4 model = translate * scale * quaternion;
vec3 n = normalize(cameraPosition - lookAtPosition);
vec3 u = normalize(cross(upVector, n));
vec3 v = cross(n, u);
mat4 view=mat4(u.x,v.x,n.x,0,
u.y,v.y,n.y,0,
u.z,v.z,n.z,0,
dot(-u,cameraPosition),dot(-v,cameraPosition),dot(-n,cameraPosition),1);
mat4 modelView = view * model;
float p11=((2.0*near)/(right-left));
float p31=((right+left)/(right-left));
float p22=((2.0*near)/(top-bottom));
float p32=((top+bottom)/(top-bottom));
float p33=-((far+near)/(far-near));
float p43=-((2.0*far*near)/(far-near));
mat4 projection = mat4(p11, 0, 0, 0,
0, p22, 0, 0,
p31, p32, p33, -1,
0, 0, p43, 0);
//lighting calculation
vec4 vertexInEye = modelView * vec4(vPosition, 1.0);
vec4 lightInEye = view * lightPosition;
vec4 normalInEye = normalize(modelView * vec4(vNormal, 0.0));
lPos = lightInEye.xyz;
vPos = vertexInEye.xyz;
vNorm = normalInEye.xyz;
gl_Position = projection * modelView * vec4(vPosition, 1.0);
}
Although the algorithm for morph target animation works, I get missing faces on the final calculated blend shape. The animation looks somewhat like the following gif.
The blendshapes are exported from a markerless facial animation software known as FaceShift.
But also, the algorithm works perfectly on a normal cuboid with its twisted blend shape created in Blender:
Could it be something wrong with the blendshapes I am using for the facial animation? Or am I doing something wrong in the vertex shader?
Update:
So as suggested, I made the changes required to the vertex shader, and made a new animation, and still I am getting the same results.
Here's the updated vertex shader code:
#version 400 core
in vec3 vNeutral;
in vec3 vSmile_L;
in vec3 nNeutral;
in vec3 nSmile_L;
in vec3 vSmile_R;
in vec3 nSmile_R;
uniform float left;
uniform float right;
uniform float top;
uniform float bottom;
uniform float near;
uniform float far;
uniform vec3 cameraPosition;
uniform vec3 lookAtPosition;
uniform vec3 upVector;
uniform vec4 lightPosition;
out vec3 lPos;
out vec3 vPos;
out vec3 vNorm;
uniform vec3 pos;
uniform vec3 size;
uniform mat4 quaternion;
uniform float smile_w;
void main(){
float neutral_w = 1.0 - smile_w;
float neutral_f = clamp(neutral_w, 0.0, 1.0);
vec3 vPosition = neutral_f * vNeutral + smile_w/2 * vSmile_L + smile_w/2 * vSmile_R;
vec3 vNormal = neutral_f * nNeutral + smile_w/2 * nSmile_L + smile_w/2 * nSmile_R;
mat4 translate = mat4(1.0, 0.0, 0.0, 0.0,
0.0, 1.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
pos.x, pos.y, pos.z, 1.0);
mat4 scale = mat4(size.x, 0.0, 0.0, 0.0,
0.0, size.y, 0.0, 0.0,
0.0, 0.0, size.z, 0.0,
0.0, 0.0, 0.0, 1.0);
mat4 model = translate * scale * quaternion;
vec3 n = normalize(cameraPosition - lookAtPosition);
vec3 u = normalize(cross(upVector, n));
vec3 v = cross(n, u);
mat4 view=mat4(u.x,v.x,n.x,0,
u.y,v.y,n.y,0,
u.z,v.z,n.z,0,
dot(-u,cameraPosition),dot(-v,cameraPosition),dot(-n,cameraPosition),1);
mat4 modelView = view * model;
float p11=((2.0*near)/(right-left));
float p31=((right+left)/(right-left));
float p22=((2.0*near)/(top-bottom));
float p32=((top+bottom)/(top-bottom));
float p33=-((far+near)/(far-near));
float p43=-((2.0*far*near)/(far-near));
mat4 projection = mat4(p11, 0, 0, 0,
0, p22, 0, 0,
p31, p32, p33, -1,
0, 0, p43, 0);
//lighting calculation
vec4 vertexInEye = modelView * vec4(vPosition, 1.0);
vec4 lightInEye = view * lightPosition;
vec4 normalInEye = normalize(modelView * vec4(vNormal, 0.0));
lPos = lightInEye.xyz;
vPos = vertexInEye.xyz;
vNorm = normalInEye.xyz;
gl_Position = projection * modelView * vec4(vPosition, 1.0);
}
Also, my fragment shader looks something like this. (I just added new material settings as compared to earlier)
#version 400 core
uniform vec4 lightColor;
uniform vec4 diffuseColor;
in vec3 lPos;
in vec3 vPos;
in vec3 vNorm;
void main(){
//copper like material light settings
vec4 ambient = vec4(0.19125, 0.0735, 0.0225, 1.0);
vec4 diff = vec4(0.7038, 0.27048, 0.0828, 1.0);
vec4 spec = vec4(0.256777, 0.137622, 0.086014, 1.0);
vec3 L = normalize (lPos - vPos);
vec3 N = normalize (vNorm);
vec3 Emissive = normalize(-vPos);
vec3 R = reflect(-L, N);
float dotProd = max(dot(R, Emissive), 0.0);
vec4 specColor = lightColor*spec*pow(dotProd,0.1 * 128);
vec4 diffuse = lightColor * diff * (dot(N, L));
gl_FragColor = ambient + diffuse + specColor;
}
And finally the animation I got from updating the code:
As you can see, I am still getting some missing triangles/faces in the morph target animation. Any more suggestions/comments regarding the issue would be really helpful. Thanks again in advance. :)
Update:
So as suggested, I flipped the normals if dot(vSmile_R, nSmile_R) < 0 and I got the following image result.
Also, instead of getting the normals from the obj files, I tried calculating my own (face and vertex normals) and still I got the same result.
Not an answer attempt, I just need more formatting than available for comments.
I cannot tell which data was actually exported from FaceShift and how that was put into the custom ADTs of the app; my crystal ball is currently busy with predicting the FIFA World Cup results.
But generally, a linear morph is a very simple thing:
There is one vector "I" of data for the initial mesh and a vector "F" of equal size for the position data of the final mesh; their count and ordering must match for the tessellation to remain intact.
Given j ∈ [0, count), corresponding vectors initial_j = I[j], final_j = F[j] and a morph factor λ ∈ [0, 1], the j-th (zero-based) current vector current_j(λ) is given by
current_j(λ) = initial_j + λ · (final_j − initial_j) = (1 − λ) · initial_j + λ · final_j.
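In GLSL that blend is essentially a single mix() call per attribute; a minimal vertex-shader sketch with illustrative names:
#version 400 core
in vec3 initialPos;   // I[j], vertex of the initial mesh
in vec3 finalPos;     // F[j], matching vertex of the final mesh
uniform float lambda; // morph factor in [0, 1]
uniform mat4 mvp;
void main()
{
    // current(lambda) = (1 - lambda) * initial + lambda * final
    vec3 currentPos = mix(initialPos, finalPos, lambda);
    gl_Position = mvp * vec4(currentPos, 1.0);
}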
From this perspective, this
vec3 vPosition = neutral_w * vNeutral +
smile_w/2 * vSmile_L + smile_w/2 * vSmile_R;
looks dubious at best.
As I said, my crystal ball is currently defunct, but the naming would imply that, given the OpenGL standard reference frame,
vSmile_L = vSmile_R * (-1,1,1),
this "*" denoting component-wise multiplication, and that in turn would imply cancelling out the morph x-component by above addition.
But apparently, the face does not degenerate into a plane (a line from the projected pov), so the meaning of those attributes is unclear.
That's the reason why I want to look at the effective data, as stated in the comments.
Another thing, not related to the effect in question, but to the shading algorithm.
As stated in the answer to this
Can OpenGL shader compilers optimize expressions on uniforms?,
the shader optimizer could well optimize pure uniform expressions like the M/V/P calculations done with
uniform float left;
uniform float right;
uniform float top;
uniform float bottom;
uniform float near;
uniform float far;
uniform vec3 cameraPosition;
uniform vec3 lookAtPosition;
uniform vec3 upVector;
/* */
uniform vec3 pos;
uniform vec3 size;
uniform mat4 quaternion;
but I find it highly optimistic to rely on such assumed optimizations.
If it is not optimized accordingly, those calculations are done once per vertex per frame: for a human face with an LOD of 1000 vertices at 60 Hz, that is 60,000 times per second on the GPU, instead of once per frame on the CPU.
No modern CPU would break a sweat doing these calculations once per frame, so passing the usual trinity of M/V/P matrices as uniforms seems more appropriate than constructing those matrices in the shader.
For reusing the code from the shaders, glm provides a very GLSL-ish way to do GL-related maths in C++.
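A sketch of what the top of the vertex shader could then look like, with the matrices built once per frame on the CPU (the names are illustrative; the morph blend and lighting are left out for brevity):
#version 400 core
in vec3 position;        // already-blended vertex position
uniform mat4 model;      // translate * scale * quaternion, built on the CPU
uniform mat4 view;       // the look-at construction, done on the CPU
uniform mat4 projection; // the frustum construction, done on the CPU
void main()
{
    gl_Position = projection * view * model * vec4(position, 1.0);
}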
I had a very similar problem some time ago. As you eventually noticed, your problem most probably lies in the mesh itself. In my case, it was inconsistent mesh triangulation. Using the Triangulate Modifier in Blender solved the problem for me. Perhaps you should give it a try too.

OpenGL, diffuse shader

I'm trying to implement a very simple diffuse shader in GLSL/OpenGL.
Here's what I got:
Vertex shader:
#version 130
in vec3 vertPos3D;
in vec3 vertNormal3D;
uniform mat3 transpMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform vec3 lightPosition;
varying vec3 vertNormal;
varying vec3 lightVector;
void main()
{
vec4 res_pos = projectionMatrix * viewMatrix * vec4(vertPos3D.xyz, 1.0);
gl_Position = res_pos;
mat4 pm = projectionMatrix * viewMatrix;
vertNormal = (viewMatrix * vec4(vertNormal3D, 0)).xyz;
lightVector = (viewMatrix * vec4(lightPosition, 1.0)).xyz - (viewMatrix * vec4(vertPos3D.xyz, 1.0)).xyz;
}
Fragment Shader:
#version 130
out vec4 color;
varying vec3 lightVector;
varying vec3 vertNormal;
void main()
{
float dot_product = max(normalize(dot(lightVector, vertNormal)), 0.0);
color = dot_product * vec4( 1.0, 1.0, 1.0, 1.0 );
}
As soon as I multiply the final color by dot_product, nothing displays. If I remove dot_product, everything works (except the diffuse lighting, of course). I'm afraid it's something obvious I'm missing.
A problem:
normalize(dot(lightVector, vertNormal))
dot in GLSL 1.3 returns a float.
normalize accepts a vector, not a float.
documentation for dot
documentation for normalize
A Solution, at least to this problem:
In Fragment shader, replace
float dot_product = max(normalize(dot(lightVector, vertNormal)), 0.0);
with
float dot_product = clamp(dot(lightVector, vertNormal), 0., 1.);
It looks like you are using max and normalize to avoid negative numbers returned from dot. This is exactly what clamp is for. Here's the documentation for clamp
Use
float dot_product = max(dot(normalize(lightVector), normalize(vertNormal)), 0.0);
Dylan Holmes' answer is slightly incorrect:
The lightVector still needs to be normalized!
And clamping is unnecessary. max was correct. A dot product never returns a value higher than 1.0 if the input vectors are normalized.
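Putting both corrections together, the fragment shader from the question could look like this sketch:
#version 130
varying vec3 lightVector;
varying vec3 vertNormal;
out vec4 color;
void main()
{
    // Normalize both interpolated vectors before the dot product,
    // then clamp the negative half to zero.
    float dot_product = max(dot(normalize(lightVector), normalize(vertNormal)), 0.0);
    color = dot_product * vec4(1.0, 1.0, 1.0, 1.0);
}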

Fragment shader input interfering with texture access

I am using a vertex, geometry and fragment shader to render a scene with shadows:
Vertex Shader:
#version 400
layout(location=0) in vec3 position;
out vec4 vShadowCoord;
uniform mat4 modelViewProjectionMatrix;
uniform mat4 shadowMatrix;
void main(void)
{
vShadowCoord = shadowMatrix * vec4(position, 1.0);
gl_Position = modelViewProjectionMatrix * vec4(position, 1.0);
}
Geometry Shader:
#version 400
layout(triangles_adjacency) in;
layout(triangle_strip, max_vertices = 3) out;
in vec4 vShadowCoord[];
out vec4 gShadowCoord;
uniform vec3 lightPosition;
void main()
{
gShadowCoord = vShadowCoord[0];
gl_Position = gl_in[0].gl_Position;
EmitVertex();
gShadowCoord = vShadowCoord[2];
gl_Position = gl_in[2].gl_Position;
EmitVertex();
gShadowCoord = vShadowCoord[4];
gl_Position = gl_in[4].gl_Position;
EmitVertex();
EndPrimitive();
}
Fragment Shader:
#version 400
in vec4 shadowCoord;
out vec4 fColor;
uniform sampler2DShadow shadowMap;
void main(void)
{
float shadow = textureProj(shadowMap, shadowCoord);
fColor = (shadow > 0.0) ? vec4(1.0, 1.0, 1.0, 1.0) : vec4(0.1, 0.1, 0.1, 1.0);
}
This successfully renders my scene with shadows. The cubes in my scene are lit and in shadow where I would expect them to be. The problem occurs when I try to pass one of the two colors in from the geometry shader. When I do this, my conditional statement always evaluates to false.
Geometry shader:
#version 400
layout(triangles_adjacency) in;
layout(triangle_strip, max_vertices = 3) out;
in vec4 vShadowCoord[];
out vec4 gShadowCoord; // Added
out vec4 gColorLit; // Added
uniform vec3 lightPosition;
void main()
{
gShadowCoord = vShadowCoord[0];
gColorLit = vec4(1.0, 1.0, 1.0, 1.0); // Added
gl_Position = gl_in[0].gl_Position;
EmitVertex();
gShadowCoord = vShadowCoord[2];
gColorLit = vec4(1.0, 1.0, 1.0, 1.0); // Added
gl_Position = gl_in[2].gl_Position;
EmitVertex();
gShadowCoord = vShadowCoord[4];
gColorLit = vec4(1.0, 1.0, 1.0, 1.0); // Added
gl_Position = gl_in[4].gl_Position;
EmitVertex();
EndPrimitive();
}
Fragment Shader:
#version 400
in vec4 shadowCoord;
in vec4 gColorLit; // Added
out vec4 fColor;
uniform sampler2DShadow shadowMap;
void main(void)
{
float shadow = textureProj(shadowMap, shadowCoord);
// Changed
fColor = (shadow > 0.0) ? gColorLit : vec4(0.1, 0.1, 0.1, 1.0);
}
What could be causing this to happen?
This occurs on both Ubuntu 12.04 and Windows 7
Some of the code in my shaders might seem unnecessary, but that is because I have stripped as much as I could away while troubleshooting.
There's no reason for your second set of shaders to link successfully. Your geometry shader states that it's writing:
out vec4 gShadowCoord; // Added
But your fragment shader is expecting:
in vec4 shadowCoord;
You should have gotten a linker error.
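Renaming the fragment shader inputs so they match the geometry shader outputs should make it link again; a minimal sketch of the fixed fragment shader:
#version 400
in vec4 gShadowCoord; // names now match the geometry shader's outputs
in vec4 gColorLit;
out vec4 fColor;
uniform sampler2DShadow shadowMap;
void main(void)
{
    float shadow = textureProj(shadowMap, gShadowCoord);
    fColor = (shadow > 0.0) ? gColorLit : vec4(0.1, 0.1, 0.1, 1.0);
}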

Phong lighting model is not actually lighting anything

I currently have an assignment to implement the Phong Lighting Model in OpenGL/GLSL. The two shaders that I am currently working with are below. The problem is that in the fragment shader, if I do not add vColor to gl_FragColor then the entire shape is black. However, if I DO add vColor, then the entire shape is that color with no lighting at all. I have been trying to solve this for a couple of hours now with no luck. What is the reason for this? Is it a problem in my shaders, or perhaps a problem in the OpenGL code? I am using one material and one point light source, which I'll show after the shaders.
Edit: If I set gl_FragColor = vec4(N, 1.0) then the object looks like this:
vertex shader:
#version 150
in vec4 vPosition;
in vec3 vNormal;
uniform mat4 vMatrix;
uniform vec4 LightPosition;
out vec3 fNorm;
out vec3 fEye;
out vec3 fLight;
void main() {
fNorm = vNormal;
fEye = vPosition.xyz;
fLight = LightPosition.xyz;
if(LightPosition.w != 0.0) {
fLight = LightPosition.xyz - vPosition.xyz;
}
gl_Position = vMatrix * vPosition;
}
fragment shader:
#version 150
in vec3 fNorm;
in vec3 fLight;
in vec3 fEye;
uniform vec4 vColor;
uniform vec4 AmbientProduct, DiffuseProduct, SpecularProduct;
uniform mat4 vMatrix;
uniform vec4 LightPosition;
uniform float Shininess;
void main(){
vec3 N = normalize(fNorm);
vec3 E = normalize(fEye);
vec3 L = normalize(fLight);
vec3 H = normalize(L + E);
vec4 ambient = AmbientProduct;
float Kd = max(dot(L, N), 0.0);
vec4 diffuse = Kd * DiffuseProduct;
float Ks = pow(max(dot(N, H), 0.0), Shininess);
vec4 specular = Ks * SpecularProduct;
if(dot(L,N) < 0.0)
specular = vec4(0.0, 0.0, 0.0, 1.0);
gl_FragColor = vColor + ambient + diffuse + specular;
}
Setting materials and light:
void init() {
setMaterials(vec4(1.0, 0.0, 0.0, 0.0), //ambient
vec4(1.0, 0.8, 0.0, 1.0), //diffuse
vec4(1.0, 1.0, 1.0, 1.0), //specular
100.0); //shine
setLightSource(vec4(1.0, 0.0, 0.0, 1.0), //ambient
vec4(1.0, 0.0, 0.0, 1.0), //diffuse
vec4(1.0, 0.0, 0.0, 1.0), //specular
vec4(1.0, 2.0, 3.0, 1.0)); //position
setProducts();
....
}
/*
* Sets the material properties for Phong lighting model.
*/
void setMaterials(vec4 amb, vec4 dif, vec4 spec, GLfloat s) {
ambient = amb;
diffuse = dif;
specular = spec;
shine = s;
glUniform1f(vShininess, shine);
}
/*
* Set light source properties.
*/
void setLightSource(vec4 amb, vec4 dif, vec4 spec, vec4 pos) {
ambient0 = amb;
diffuse0 = dif;
specular0 = spec;
light0_pos = pos;
glUniform4fv(vLightPosition, 1, light0_pos);
}
/*
* Find the products of materials components and light components.
*/
void setProducts(){
vec4 ambientProduct = ambient * ambient0;
vec4 diffuseProduct = diffuse * diffuse0;
vec4 specularProduct = specular * specular0;
glUniform4fv(vAmbientProduct, 1, ambientProduct);
glUniform4fv(vDiffuseProduct, 1, diffuseProduct);
glUniform4fv(vSpecularProduct, 1, specularProduct);
}
Your final lighting composition doesn't look right:
gl_FragColor = vColor + ambient + diffuse + specular;
It should be something like
gl_FragColor = vColor * (ambient + diffuse) + specular;
i.e. the illumination modulated by the albedo of the object.
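Applied to the fragment shader above, a slightly more defensive version of that final line (the clamp and the explicit alpha are optional) might be:
// Modulate ambient and diffuse by the surface color, then add the highlight.
vec4 lit = vColor * (ambient + diffuse) + specular;
gl_FragColor = vec4(clamp(lit.rgb, 0.0, 1.0), 1.0);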